

How We Choose Polls to Plot: Part IV

[This is Part IV of the recent discussion between Mark Blumenthal and Charles Franklin called "How We Choose Polls to Plot." For previous posts in the discussion, see Parts I, II, and III.]

"What happens if you leave out 'x'?" is probably the single most asked question at Pollster.com. Everyone has a favorite pollster to hate, and wonders whether removing that one pollster would bring the results closer to the truth. It is a really good question because it goes to the heart of the robustness of our trend estimates and the role of one pollster (or a few) in shaping the conventional wisdom about what "the polls show." The former issue is statistical; the latter goes to how shared understandings are constructed. If our estimators are highly sensitive to any one pollster, then we have a statistical problem. If one pollster unduly influences shared perceptions, then we had better hope they are "right."

Today's question from Mark (and many readers) is what role the tracking polls play in our estimates. This is an issue Mark and I debated quite a lot during the winter when Gallup and Rasmussen began their daily tracking polls. Because they produce so many numbers, including all their data runs the risk that these two dominate our trend estimate to an unacceptable degree. But do they exert that much influence? That is the question.

And just to be contrarian, take note of the opposite problem: data are valuable. You should never want to ignore information. In that sense excluding data from prolific sources is a mistake unless the data are biased in some uncorrectable way.

The first decision we reached in January was that we would include only INDEPENDENT samples from tracking polls. This was an easy call. Rolling samples are great for daily updates, but in a three-day track Thursday's poll isn't independent of Wednesday's, because both contain Tuesday's and Wednesday's interviews. In that sense, there isn't as much new information as it seems. So we take only the independent results: Mon-Tues-Wed, Thurs-Fri-Sat, Sun-Mon-Tues, and so on for a three-day tracker. This means we are including only independent data collections, and it cuts down on the number of entries in our data that come from any single tracking poll.
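The selection rule is simple enough to sketch in a few lines. This is just an illustration of the windowing logic, not our production code; the dates are made up.

```python
# Sketch: keep only non-overlapping (independent) windows from a daily
# rolling tracker. A three-day tracker reported every day produces
# overlapping samples; we keep every third report.
from datetime import date, timedelta

def independent_windows(start, end, window_days=3):
    """Yield (first_day, last_day) for consecutive non-overlapping windows."""
    step = timedelta(days=window_days)
    day = start
    while day + step - timedelta(days=1) <= end:
        yield (day, day + timedelta(days=window_days - 1))
        day += step

# A three-day tracker running Aug 1-12 yields four independent samples
# rather than ten overlapping daily reports.
windows = list(independent_windows(date(2008, 8, 1), date(2008, 8, 12)))
```

Each retained window shares no interviews with its neighbors, which is exactly the independence the trend estimator assumes.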

Despite this, we get a lot of data in the national trend from two primary sources: Rasmussen accounts for 63 of 286 data points in our national trend data, and Gallup's tracker provides 41 more. (We keep Gallup's USA Today polls separate from the tracker.) A third source, The Economist/YouGov's internet poll, accounts for 24 data points. (Full disclosure: YouGov/Polimetrix supports our work here at Pollster.com.) The next most prolific pollster is Zogby, with only 12. So let's take a look at the influence of these three most prolific pollsters. Together they account for 128 of 286 data points, or 45% of our national data.

Let's begin by recognizing that every data point MUST have some influence on our trend estimator. If it didn't, the trend would not be responding to the data! So in that simple sense, the Rasmussen, Gallup, and YouGov data must play some role in determining the value of our trend estimate. But that isn't really the issue that concerns people. The question is whether these three pollsters DISTORT the trends we would otherwise estimate from all other sources. It would be fine for Rasmussen or Gallup or YouGov to have a huge influence on our estimate so long as their trends were exactly in line with everyone else's. The concern arises when one of these is both influential AND out of line with the rest of the world.

We need to look at three things: the overall trend with all pollsters included, the trend for a single pollster alone, and finally the trend we'd estimate if we excluded that pollster. If a pollster is different from the others, that's a concern; but if it doesn't substantially change the trend estimate, then we aren't that worried. If a pollster is different AND shifts the trend, then we have to worry.
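To make those three comparisons concrete, here is a minimal sketch using a Gaussian-kernel smoother as a stand-in for our actual local regression, with synthetic polls: a flat +4.0 margin for "other" pollsters and a hypothetical tracker carrying an assumed -1.5 house effect.

```python
# Three trend estimates: all polls, one pollster alone, that pollster excluded.
# The kernel smoother and the synthetic polls are illustrative assumptions.
import math

def trend(polls, t, bandwidth=10.0):
    """Kernel-weighted mean of poll margins around day t."""
    num = den = 0.0
    for day, margin, _src in polls:
        w = math.exp(-(((day - t) / bandwidth) ** 2))
        num += w * margin
        den += w
    return num / den

polls = [(day, 4.0, "other") for day in range(0, 200, 2)]          # flat +4.0
polls += [(day, 4.0 - 1.5, "tracker") for day in range(0, 200, 3)] # house effect

overall    = trend(polls, t=199)                                     # everything
only_track = trend([p for p in polls if p[2] == "tracker"], t=199)   # tracker alone
sans_track = trend([p for p in polls if p[2] != "tracker"], t=199)   # tracker dropped
```

Because the overall estimate is a weighted average of every poll, it must fall between the tracker-only and tracker-excluded estimates: dropping a below-trend tracker raises the current estimate, just as dropping Rasmussen moves our estimate from 3.4 to 4.5 below.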

So let's look at the data. The chart below plots the overall trend (the blue line), the trend for each of the three most prolific pollsters (solid red), and the trend estimate if we exclude that pollster (dashed red line). A fourth plot shows what happens if we exclude all three prolific pollsters and rely only on the 28 different pollsters who've done 12 or fewer polls each (dashed blue).


Over all our polls, we estimate an Obama advantage over McCain of 3.4 points (as of early morning on 8/14). If we exclude Gallup, the trend estimate is 3.2. If we exclude Rasmussen, the estimate is 4.5. If we exclude YouGov, the estimate is 3.3. And if we omit all three (and 45% of our data), the trend estimate is 5.1. So it DOES matter which of these we include, by as little as 0.1 points or as much as 1.7 points.

The most striking thing to me about these figures is that all three tracking polls trend a bit below the overall trend, which is why omitting them all produces the biggest change in the current trend estimate. Gallup is only a bit below trend; YouGov was a bit more below in May but less so recently. Rasmussen stands out as the most consistently below trend, converging with it only briefly in June.

At first glance, the worst thing about Rasmussen is that his trend seems much more sharply downward since late June than either other frequent pollster (both Gallup and YouGov see flat or rising Obama margins over that time). The dashed line without Rasmussen looks flat or possibly rising slightly, while including Rasmussen with all the others produces a modest downward slope recently. So is Rasmussen driving our current trend's tendency to move down? This is especially relevant given the upward moves by Gallup and YouGov.

The bottom right panel of the figure offers some reassurance. While Rasmussen does look different from Gallup or YouGov, when we take all three tracking polls out, the dashed blue line in the bottom right figure trends slightly down, approximately in parallel with the overall trend estimate using all the polls. To be sure, omitting the tracking polls does produce a higher current trend estimate: 5.1 vs 3.4 for all polls. Clearly the tracking polls are showing a lower margin and that is reflected here. But from my point of view, the happy news is that the trend with or without the three trackers moves in pretty much the same way over the year. Granted some minor differences, both curves move up and down at about the same time and the gap between the solid and dashed blue lines is roughly equal over time. This suggests that the effects of the three trackers may be to lower the estimated Obama margin over McCain, but they don't distort the dynamics of the race. When trends are up or trends are down, they are reflected in both the with and without tracker estimates.

It is reassuring that both the Gallup and YouGov trackers have very little influence on the overall trend estimate. Including or excluding either of these polls has very little effect on the trend estimate.

A final point is what this says about the validity of the polls. If Gallup and YouGov are flat or slightly up, and Rasmussen is sharply down, how are we to know which is "right"? The data here say a bit of both are right. Gallup and YouGov do a somewhat better job of tracking the overall trend than Rasmussen does. But the recent decline in Obama's support, even though modest, is not captured by Gallup or YouGov. Rasmussen clearly overstates the decline (compared to other polling), but the consensus of the 158 polls NOT from these three sources is that there has been a small downturn in Obama's lead since late June.

It is easy to exaggerate how large these differences are, especially in light of the intrinsically hard problem of knowing what "the truth" is at any moment. The chart below compares the trend estimates we would get from dropping each of the 31 different pollsters in our data. Two things stand out. Dropping any single pollster has very little effect on the trend estimate, with one exception. Omitting Rasmussen, who is both the most prolific pollster and the one with considerably more variation than others, does make a noticeable difference in the trend estimate. But the reassuring element of this graph is that even the line omitting Rasmussen still falls within the 95% confidence interval around our overall trend estimate. While there was a time in March when the "without Rasmussen" line moved just outside the 95% confidence interval, this is the exception rather than the rule. Most of the time, including now, the trend without Rasmussen is NOT significantly different from the trend over all pollsters (or the trend omitting any individual pollster).
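That leave-one-out check can be sketched as a toy calculation: drop each pollster in turn and ask whether the resulting estimate stays inside the 95% interval of the overall estimate. The pollster labels, house effects, and the plain standard-error interval below are all assumptions for illustration, not our actual estimator.

```python
# Leave-one-out sensitivity check against a 95% CI, on synthetic margins.
import math
import random

random.seed(42)
margins = {
    "A": [random.gauss(4.0, 2.0) for _ in range(60)],
    "B": [random.gauss(3.0, 2.0) for _ in range(60)],  # assumed low house effect
    "C": [random.gauss(4.5, 2.0) for _ in range(60)],
}

pooled = [m for polls in margins.values() for m in polls]
n = len(pooled)
mean_all = sum(pooled) / n
var = sum((m - mean_all) ** 2 for m in pooled) / (n - 1)
half_width = 1.96 * math.sqrt(var / n)
ci = (mean_all - half_width, mean_all + half_width)

inside = {}
for src in margins:
    rest = [m for s, polls in margins.items() if s != src for m in polls]
    est = sum(rest) / len(rest)
    inside[src] = ci[0] <= est <= ci[1]   # does dropping src leave the CI?
```

A pollster whose removal pushes the estimate outside the interval is the kind of influential outlier the chart is designed to flag.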


So what do we conclude from this exercise? I'd say that any individual pollster can have important effects on our trend estimate under the right circumstances. Concentrating a lot of unusual polls in a short time span can shift our estimates. But I am encouraged that while there are important differences among the Gallup, Rasmussen, and YouGov trends, none of them seems to outright dominate our trend estimates. Even Rasmussen's effects look less important when we see what all the non-tracking polls are showing. We might worry about what the right level of support is, but the shape of the trends looks pretty robust no matter who is included or excluded. While there are differences of as much as 1.7 points in the estimated margin, it is worth taking a deep breath and appreciating the margin of error in these and all other estimates of candidate support right now. The current confidence interval covers a range from +1.1 to +5.2. That 4.1-point range looks pretty large compared to a 1.7-point difference among estimators. Meanwhile, individual polls range over a MUCH wider spread, at least 10 points and often more. The trend estimate manages to narrow that range of uncertainty by more than 50%. A good achievement, but not one that is precise to tenths of a percentage point, nor one that is immune to some effects of individual pollsters.


Chris G:

Dr. Franklin: Of the three polls whose influence you analyzed, Rasmussen is the only one with the LV screen. In Part II of this debate you showed a figure separating the LV and RV time series, and a similar difference appeared there: the LV Obama margin went down in the past month while the RV margin remained flat.

How much of the RV vs. LV difference is due to Rasmussen dominating the subset of LV polls? Is the same downward trend found in other LV polls (or is there enough data to examine that)? If so, that would suggest a substantive change: McCain supporters becoming more likely to vote (or to pass the screen), or conversely, likely voters becoming more likely to support McCain.

It'd be interesting to keep track of that LV-RV difference.



Nevertheless, why no Rasmussen tracking poll today? Don't chart it, but at least be consistent and post it.



Great job Charles!

I do agree that while one pollster may have a method that differentiates it from the others, generally speaking, for every USA Today/Gallup poll with a clear outlier result among LVs showing a 5-point McCain margin, there's a Newsweek poll showing a 15-point Obama margin, and many in between.

The real issue comes from having a heavily sampled pollster that is a consistent outlier. As you showed, the oversampling of Rasmussen has been pulling Obama's advantage down consistently for months. I wouldn't dare suggest throwing them out entirely, but sampling Rasmussen and Gallup once every three days just seems like too much. Once every 10 days might give a truer result and protect your method from the undue skew of one or two pollsters. It seems that monthly polls are the standard for most of the others, and I might even suggest going all the way to 30 days just to weight the pollsters equally. Would I be incorrect to suggest that this would be a better scientific method?

I also very much appreciate the charts showing the difference instead of the actual percentages. This would also seem like a better scientific method as there can be a big difference between a likely voter result that pushes (generally low undecideds) and a poll of all respondents without pushing (generally high undecideds). I'm not sure how third-party candidates would be tracked under such a model, but I'm sure you guys could figure that out. I would definitely encourage you to make this change.



Wait...if dropping Rasmussen causes the trend to ride the edge of the old 95% confidence interval (CI), doesn't that mean that only about half of the old CI overlaps with the CI of the trend minus Rasmussen? So adding Rasmussen shifts the CI by about two entire standard deviations? And from the viewpoint of the polls excluding Rasmussen, there's only about a 50-50 chance that the true proportion falls within the CI of the entire trend with all polls (and vice versa). Would this suggest a big problem, or am I missing something here?

I am curious: what exactly would be the bar for needing to exclude a pollster? At a glance, I'm pretty sure the data already reject "removing Rasmussen is equivalent to removing random polls" with 90%+ confidence, but that says nothing about degree (in the sense that, after lots and lots of polls, you could have very high confidence that a pollster has a bias, but a practically insignificant one of 0.01 percentage points). But say you decide that a persistent influence of 1.0 percentage point on the trend line by a single pollster (compared to all polls) is too much. Then perhaps the more pertinent task would be testing "removing n polls by Rasmussen doesn't give a worse shift than removing n polls by a hypothetical pollster with mean -1.0 from the trend." I wonder what the p-value of that would be? (Though that doesn't sound like a nice calculation to do; it might be easier to simply measure straight bias instead of influence on the trend line...)
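For what it's worth, the first test this comment mentions can be sketched as a permutation exercise: compare the shift from dropping pollster X's n polls against the distribution of shifts from dropping n polls chosen at random. All the numbers below (poll counts, margins, the assumed -1.0 house effect) are synthetic stand-ins, not Pollster's data.

```python
# Permutation sketch: is dropping pollster X's polls a bigger shift than
# dropping the same number of random polls? Simple means stand in for
# the trend estimator.
import random

random.seed(0)
# 158 "other" polls around +4 and 63 polls from X with an assumed
# -1.0 house effect, mirroring the counts in the article.
others = [random.gauss(4.0, 2.0) for _ in range(158)]
x_polls = [random.gauss(3.0, 2.0) for _ in range(63)]
all_polls = others + x_polls

def mean(xs):
    return sum(xs) / len(xs)

observed_shift = mean(others) - mean(all_polls)   # effect of dropping X

# Null distribution: the shift from dropping 63 polls at random.
null_shifts = []
for _ in range(2000):
    kept = random.sample(all_polls, len(all_polls) - len(x_polls))
    null_shifts.append(mean(kept) - mean(all_polls))

p_value = sum(abs(s) >= abs(observed_shift) for s in null_shifts) / len(null_shifts)
```

A small p-value would say X's pull on the estimate exceeds what removing the same number of random polls produces; measuring the bias directly, as the comment suggests, is indeed the simpler calculation.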

