
Different Polls, Different Trends


As the discussion of Charles Franklin's column on house effects suggests, most people assume that the question of which polls are "right" these days will be resolved after Election Day. Then we can compare which polls came closest to the final results and infer that the most accurate polls in the final pre-election predictions were probably the most accurate during the campaign as well.

 

But it doesn't usually work out that way. In 2004, six of the seven polls noted in the accompanying chart showed Bush winning by one to three percentage points; the exception was Fox, which showed Kerry ahead by two. All the results were well within the polls' margins of error when compared with the actual election outcome.

 

[Figure: Final Poll Predictions, 2004 Election]

However, the interesting point is that during the month of September, these same polls showed dramatically different dynamics. As shown in the next graph, there were three basic stories. CBS, Gallup, Time and ABC all showed Bush gaining momentum in the weeks following the Republican National Convention, then falling toward the end of the month. And although these pollsters agreed on the general pattern, at the end of the month Gallup showed Bush with an 8-point lead, CBS and Time had him up by one point, and ABC by six.

 

The second story, reported by Fox, Zogby and TIPP, showed very little movement over the month of September, with the margin varying from a Kerry lead of one point to a Bush lead of three points.

 

Finally, Pew had its own dynamic, not found by any of the other polls, showing a significant surge for Bush after the convention, followed by a dramatic decline, then another significant surge.

 

[Figure: Bush Lead, September 2004]

One of the most interesting comparisons is between Gallup and Pew, which diverged by 13 points in mid-September, but closed to agreement by the end of the month.

 

In the end, all the pollsters could claim they were "right" on target, and NCPP dutifully noted the fine performance of the media polls. That performance, of course, was only in the final prediction. No effort was made to evaluate the polls during the campaign, though they clearly presented contradictory results. It appears we need a means of evaluating the polls during the election campaign itself.

 

It's true, of course, that we can't know which polls are most accurate during the campaign, but we can say that collectively they often tell quite divergent stories. And that hardly qualifies them for plaudits after Election Day.

Last week (Oct. 6), the Gallup and DailyKos/Research 2000 tracking polls showed Obama up by 9 and 11 points respectively, the same figures they show as of Oct. 13. The Diageo/Hotline, GWU/Battleground and Zogby tracking polls showed quite different results - with quite different trends.

 

On Oct. 7, Diageo/Hotline, GWU and Zogby showed Obama with an average lead of roughly two points, while DailyKos and Gallup showed an average lead of 10.5 points. All three of the former polls reported a growing lead for Obama over the subsequent week, while Gallup and DailyKos showed essentially no change.
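For anyone checking that arithmetic, a minimal Python sketch using the Oct. 7 figures from the table below:

    # Obama's lead (points) in each tracking poll on Oct. 7, from the table below.
    oct7 = {"Diageo/Hotline": 1, "GWU/Battleground": 4, "Zogby": 2,
            "Gallup": 11, "DailyKos/R2000": 10}

    low_group = ["Diageo/Hotline", "GWU/Battleground", "Zogby"]
    high_group = ["Gallup", "DailyKos/R2000"]

    low_avg = sum(oct7[p] for p in low_group) / len(low_group)
    high_avg = sum(oct7[p] for p in high_group) / len(high_group)

    print(f"Diageo/GWU/Zogby average: {low_avg:.1f} points")   # ~2.3
    print(f"Gallup/DailyKos average:  {high_avg:.1f} points")  # 10.5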

 

Obama's Lead Among Five Tracking Polls
(Obama's margin in points; a dash means no figure reported that day)

Date     Gallup   Gallup2   DailyKos   Diageo   GWU   Zogby
6-Oct       9         -        11         2       7      3
7-Oct      11         -        10         1       4      2
8-Oct      11         -        10         6       3      4
9-Oct      10         -        12         7       8      5
10-Oct      9         -        12        10       -      4
11-Oct      7         6        13         8       -      6
12-Oct     10        10        12         6       8      4
13-Oct      9        10        11         6      13      6

 

 

After the election, will we know which tracking polls were right? If history is a guide, all will come within their margins of error compared to the final election results. And we will all forget how confusing their divergent prognostications were during the campaign.

 

Perhaps we need another standard by which to judge the polls' performances during the election campaign.

 

Comments
bill kapra:

Hmm, how about looking at the trend of sigma for all the polls? We could then compare the average deviation from the norm over time to track a single poll's volatility.

Also, if you are right in arguing that they tend to converge, we should see a nice regression of sigma as the election draws near.

Finally, we might get a hint of the behavior of the norm relative to the final outcome from historical data.
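A minimal Python sketch of this sigma-trend idea, using a few dates of Obama-lead figures from the tracking table above (the Gallup2 column omitted for simplicity):

    # For each date, compute the cross-poll norm and spread (sigma), plus each
    # poll's deviation from the norm; tracking sigma over time would show
    # whether the polls converge as the election nears.
    from statistics import mean, stdev

    leads = {  # date -> {poll: Obama's lead in points}
        "6-Oct":  {"Gallup": 9,  "DailyKos": 11, "Diageo": 2, "GWU": 7,  "Zogby": 3},
        "7-Oct":  {"Gallup": 11, "DailyKos": 10, "Diageo": 1, "GWU": 4,  "Zogby": 2},
        "13-Oct": {"Gallup": 9,  "DailyKos": 11, "Diageo": 6, "GWU": 13, "Zogby": 6},
    }

    for date, by_poll in leads.items():
        values = list(by_poll.values())
        norm, sigma = mean(values), stdev(values)
        print(f"{date}: norm {norm:.1f}, sigma {sigma:.1f}")
        for poll, lead in by_poll.items():
            print(f"  {poll}: {lead - norm:+.1f} from the norm")

A simple regression of sigma on date over the full series would then show whether the spread actually narrows as Election Day approaches.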

____________________

The future is always exactly like the past, until it isn't. (God, never thought I'd be quoting George Will!)


____________________

RWCOLE:

Anything in the polls that answers the question of "Why is the McCain campaign in Pennsylvania when they are twelve points down with three weeks to go?"

____________________

gymble:

The only way of evaluating polls during the campaign that comes to mind is the following: treat the poll average, weighted by sample size, as the "real" value and calculate each poll's chi-squared. From that it should be possible to get a rough estimate of whether certain pollsters have systematic biases (lean D or lean R) and which ones just have big error bars. Obviously this isn't a perfect metric, but I can't immediately think of a better one.

States are probably polled too infrequently for this to work, but I think you could do it for the national polls. And a pollster's national performance could then be used to infer something about its state performance.
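A minimal sketch of that chi-squared scoring, assuming hypothetical pollster names and sample sizes, and a rough standard error for a two-candidate lead of 100/sqrt(n) points:

    # Treat the sample-size-weighted average of same-day polls as the "real"
    # value, then score each poll's squared deviation in units of its
    # sampling error. Names and sample sizes are placeholders.
    from math import sqrt

    polls = [  # (pollster, Obama's lead in points, sample size)
        ("Poll A", 9, 2700),
        ("Poll B", 11, 1100),
        ("Poll C", 2, 900),
    ]

    total_n = sum(n for _, _, n in polls)
    real_value = sum(lead * n for _, lead, n in polls) / total_n

    for name, lead, n in polls:
        se = 100.0 / sqrt(n)   # rough standard error of a lead, in points
        chi2 = ((lead - real_value) / se) ** 2
        print(f"{name}: chi-squared term {chi2:.2f}, residual {lead - real_value:+.1f}")

Summing these terms over the campaign, and watching the sign of the residuals, would separate a systematic lean from ordinary noise.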

____________________

Observer:

I remember reading a lot four years ago about how most of the undecideds always break against the incumbent. The theory was that after four years, if you weren't going to support Bush by the eve of the poll, you weren't going to do so. A lot of historical data was quoted in support.

But according to these polls it didn't happen. The undecideds seem to have split evenly (or they didn't vote). Or did something else happen?

These polls gave Bush about 48 and Kerry about 46, with about 6% undecided. It ended up 50.7 to 48.3, i.e. a gap of 2.4 points. But suppose the late deciders did break for Kerry. Maybe the late polls underestimated the Bush lead and the true figures were 50/44/6, i.e. they were all wrong?
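The implied break can be computed directly from those figures; a quick Python sketch:

    # If the final polls showed Bush 48, Kerry 46, undecided 6, and the result
    # was Bush 50.7, Kerry 48.3, how did the undecideds break?
    poll_bush, poll_kerry, undecided = 48.0, 46.0, 6.0
    actual_bush, actual_kerry = 50.7, 48.3

    bush_gain = actual_bush - poll_bush          # 2.7 points
    kerry_gain = actual_kerry - poll_kerry       # 2.3 points
    other = undecided - bush_gain - kerry_gain   # ~1.0 point (third party, etc.)

    print(f"Undecided break: Bush {bush_gain / undecided:.0%}, "
          f"Kerry {kerry_gain / undecided:.0%}, other {other / undecided:.0%}")
    # -> roughly 45% / 38% / 17%: close to an even split between the two,
    #    not a break against the incumbent.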

____________________

DTM:

The basic problem is that electoral polls taken substantially before the election are asking people to answer a hypothetical question--and people are generally not great at predicting their actions in hypothetical situations. So, I'm not sure it even makes sense to look for a standard by which to measure electoral polling accuracy substantially before the election.

____________________

George Not Bush:

Polls differ from elections in a number of ways:

There is always leakage from likely to actual voters, so pollsters have to make assumptions to compensate, and those assumptions are always off a bit.

In Ohio in 2004, inner-city residents had to stand in line in the rain to vote until as late as 2 a.m., while rural residents had no such delays.

Ultimately elections are won on turnout and countering the other side's vote suppression tactics.

Obama started mobilising youth turnout in Iowa. If he keeps that up on election day, he's in by a big number.

____________________

todji:

Major events in the campaign constitute the equivalent of the Big Bang in physics: all guesses as to the nature of what was going on beforehand are essentially meaningless and unknowable.

What use is grading the accuracy of polls that occurred before the financial crisis? This was a significant event that fundamentally changed the dynamics of the election, rendering many a previous respondent's answer to polling questions moot.

____________________

cinnamonape:

Given that we can never actually tell what the national results of voting on the dates the polls are taken would actually be...all we can do is contrast the polls to one another.

One possible area of interest would be to look at the raw data (rather than the modified "weighted" results) as a measure of polling methodology.

One example I'd especially be interested in is whether the reported political party preference corresponds across polls for both RV and "stated" LV samples. One could also extract the same data for new registrants and first-time voters who assert they are likely voters.

That would be a noisy measure, but one of great interest.

We could also test the reliability of samples by looking at the sex-ratio and age-category results in the raw data. It's true that pollsters can "correct" these by weighting demographics toward what they "believe" the electorate will look like, but it would still suggest sampling disparities if one poll had, for example, 30% under-30s in its raw data.
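A minimal sketch of such a raw-data check; the benchmark shares here are illustrative placeholders, not actual census or exit-poll values:

    # Compare a poll's unweighted demographic shares against an external
    # benchmark; large gaps hint at sampling skew even if weighting hides it.
    raw_sample = {"female": 0.57, "under_30": 0.30}   # hypothetical raw shares
    benchmark  = {"female": 0.52, "under_30": 0.17}   # placeholder electorate

    for group, share in raw_sample.items():
        gap = share - benchmark[group]
        flag = "  <-- large raw-sample skew" if abs(gap) > 0.05 else ""
        print(f"{group}: raw {share:.0%} vs benchmark {benchmark[group]:.0%} "
              f"(gap {gap:+.0%}){flag}")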

____________________

I simply love polls! However, it is curious to see how they fail. Anyway, the charts are clear!

____________________



