Pollster.com


What Happened in NH? AAPOR's Answer

Topics: AAPOR , New Hampshire

Most political junkies remember two things about last year's New Hampshire primary: first, Hillary Clinton's surprising three-point win, and second, that the pollsters were the "biggest losers," as the final round of pre-election polls had shown Barack Obama surging ahead. A dozen different surveys showed Obama leading by a range of 3 to 13 points, and by roughly six percentage points on our final trend estimate. Fewer remember that polling errors were even bigger in subsequent states, and fewer still will recall that the American Association for Public Opinion Research (AAPOR) announced the formation of an ad-hoc committee to study and report on the problems of the New Hampshire and other primary polls.

Well today, more than fourteen months after the 2008 New Hampshire primary, the AAPOR Ad Hoc committee has released its full report. While those hoping for an obvious smoking gun will be disappointed, the report represents a massive collection of information that does shed new light on what happened in New Hampshire. The evidence is spotty and frequently hedged -- "definitive tests" were "impossible" -- but AAPOR's investigators identify four factors as contributing to polls having "mistakenly predicted an Obama victory." From the AAPOR committee press release:

  • Given the compressed caucus and primary calendar, polls conducted before the New Hampshire primary may have ended too early to capture late shifts in the electorate's preferences there.
  • Most commercial polling firms conducted interviews on the first or second call, but respondents who required more effort to contact were more likely to support Senator Clinton. Instead of continuing to call their initial samples to reach these hard‐to‐contact people, pollsters typically added new households to the sample, skewing the results toward the opinions of those who were easy to reach on the phone, and who more typically supported Senator Obama.
  • Non‐response patterns, identified by comparing characteristics of the pre‐election samples with the exit poll samples, suggest that some groups who supported Senator Clinton--such as union members and those with less education--were under‐ represented in pre‐election polls, possibly because they were more difficult to reach.
  • Variations in likely voter models could explain some of the estimation problems in individual polls. Application of the Gallup likely voter model, for example, produced a larger error than was present in the unadjusted data. The influx of first-time voters may have had adverse effects on likely voter models.
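The second finding above -- that replacing hard-to-contact households with fresh, easy-to-reach ones skews results toward easy-to-reach respondents -- can be illustrated with a toy simulation. This is not AAPOR's analysis; the percentages below are invented purely to show the mechanism.

```python
import random

random.seed(42)

# Hypothetical electorate: 60% of voters answer on the first or second
# call ("easy"), 40% require more callbacks ("hard"). Per the committee's
# finding, suppose the hard-to-reach group leans more toward Clinton.
N = 100_000
population = []
for _ in range(N):
    easy = random.random() < 0.6
    obama_prob = 0.50 if easy else 0.40   # invented numbers for illustration
    vote = "Obama" if random.random() < obama_prob else "Clinton"
    population.append((easy, vote))

def obama_share(sample):
    """Fraction of a sample supporting Obama."""
    return sum(v == "Obama" for _, v in sample) / len(sample)

# True preference across the whole electorate.
true_share = obama_share(population)

# A poll that abandons hard-to-reach numbers and substitutes fresh
# households effectively samples only the easy-to-reach stratum.
easy_only = [p for p in population if p[0]]
poll_share = obama_share(easy_only)

print(f"True Obama share:        {true_share:.3f}")
print(f"Easy-to-reach estimate:  {poll_share:.3f}")
```

Under these made-up assumptions the "first-call" poll overstates Obama's share by several points, mirroring the direction of the New Hampshire error.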

In other words, what happened in New Hampshire wasn't one thing; it was likely a lot of small things, all introducing errors in the same direction. Various methodological challenges or shortcomings that might ordinarily produce offsetting variation in polls instead combined to throw them all off in the same direction. Polling's "perfect storm" did not materialize this past fall, but that label seems more apt for the New Hampshire polling debacle.

The report also produces evidence that rules out a number of prominent theories, among them the so-called "Bradley Effect." The authors claim they saw "no evidence that white respondents over-represented their support for Obama," and thus, no evidence of "latent racism" benefiting Clinton. Fair enough, but they do report evidence of a "social desirability effect" that led respondents to report "significantly greater" support for Obama "when the interviewer is black than when he or she is white" (although Obama still led, by smaller margins, when interviewers were white -- see pp. 55-59 of the pdf report).

As should be obvious, this very quick and cursory review just scratches the surface of the information in the 123-page report. There is a story here about the sheer breadth of the information provided. For example, today's release also includes immediate availability through the Roper Archives of full respondent-level data provided by CBS News, Gallup/USA Today, Opinion Dynamics/Fox News, the Public Policy Institute of California (PPIC), SurveyUSA, and the University of New Hampshire/CNN/WMUR for polls conducted in New Hampshire, South Carolina, California and Wisconsin.  [Update: I'm told that a small glitch in the documentation is holding up release of some or all of the Roper data until, hopefully, later today].

But aside from the admirable disclosure by the organizations listed above, there is also a story here about an outrageous lack of disclosure and foot-dragging, including three organizations that "never responded" to AAPOR's requests for information over the last fourteen months: Strategic Vision (for polls conducted in New Hampshire and Wisconsin), Clemson University and Ebony/Jet (for polls conducted in South Carolina).

Stay tuned. I will have more to say later today and in the days that follow on this new report. Meanwhile, please share your thoughts on the report in the comments below.

For further reading, see my first review of the theories for the New Hampshire polling flap, our bibliography of reaction around the web and the rest of our coverage from 2008.

Update: ABC's Gary Langer shares his first impressions including one thought I neglected to include:  "The volunteer AAPOR committee members who produced [the report], led by Prof. Michael Traugott of the University of Michigan, deserve our great thanks."

Interests disclosed:  As a member of AAPOR's Executive Committee from May 2006 through May 2008, I voted to create the Ad Hoc committee.  I did not serve on the committee, but our Pollster.com colleague Charles Franklin did participate.

 

Comments
Mike in Fort Worth:

"Variations in likely voter models could explain some of the estimation problems in individual polls. Application of the Gallup likely voter model, for example, produced a larger error than was present in the unadjusted data. The influx of first-time voters may have had adverse effects on likely voter models"

Assuming Obama did better among first-time voters, wouldn't this type of error have boosted Clinton's poll numbers? Unless I'm missing something, this type of error was in a different direction from the others.

____________________

RS:

The "harder to reach, more likely to support Clinton" is reminiscent of the Clinton Caucus Complaints (CCC!) If pollsters typically call in the evenings, similar to when the caucuses (caucii?!) are held, both will tend to see fewer folks who work at nights/double shifts etc. Not saying that's why Clinton flopped in the caucuses - she simply didn't put up a fight (except perhaps in NV). But the similarity is striking...

There were four polls that ended the night before the NH primary - and all 4 showed substantial Obama leads. So I am not sure how critical the first bullet - "polls may have ended too early" - is.

@Mike in FW:
One possibility is that first-time voters were more enthusiastic, and hence more likely to answer the survey - boosting Obama's numbers.

____________________

pollfan:

The CBS results reported in Table 8 of the AAPOR report seem very strange. CBS overestimated the percentage of voters over 60 by about 3%, overestimated HS only voters by 10%, underestimated voters over $100K by 12%, underestimated first time voters by 5%, and overestimated registered Democrats by 9%. All of these effects should have overestimated Hillary Clinton's share of the vote, not underestimated it.

The union household argument seems a bit thin - we only have data for CBS and FOX, plus the differences were only about 5%, compared to the much larger pro-Hillary bias in the CBS figures referenced above. So in general I don't see any major non-response bias here.

I've played around with the FOX/Opinion Dynamics poll from Roper. Obama ended up almost exactly as in the final result - 37% - ignoring the undecideds. Edwards, however, was polling at 21%, but finished at 17%. It seems logical that voters abandoned Edwards after it was clear he would come in 3rd - and from the demographics it's pretty clear his voters went to Hillary. I think Edwards had more of an effect than people realize - and that's a legitimate reason why voters would change their minds late in the game.

____________________

nielej:

Is Andy Kohut going to apologize for his misguided and inflammatory editorial?

____________________

Mark Blumenthal:

Sorry for my absence yesterday -- I got sidetracked by a bunch of Pollster.com administrative stuff. A lot of good questions raised here. I'm going to be writing much more about the AAPOR report, but I wanted to quickly respond here.

@Mike in Fort Worth:

You're right. In NH, according to the exit poll, Obama did better with first-time voters, so an underrepresentation of first-time voters would have boosted Clinton. That last bulleted paragraph from the executive summary is a little confusing because it juxtaposes different (and contradictory) findings from the report. They highlighted the issue of first-time voters, not because it explains the larger NH error, but because it might help explain understatements of Obama's vote elsewhere.

@RS:

I'm going to have more to say about the Monday night polls, but one thing to keep in mind: Those four surveys showing Obama ahead were based on combined results from Sunday and Monday (and in one case from Saturday through Monday). The released numbers showed Obama gaining on two and declining on two. Zogby showed Obama's margin increasing, but later claimed to show the race closing on Monday night (something I blogged about at the time because Zogby's statements and his data did not add up). Worse, none of the four pollsters shared their raw data with the AAPOR committee.

@pollfan:

Keep in mind that the CBS survey was very different from the others in one important respect: It was a "panel back" that reinterviewed respondents from a previous survey CBS conducted in November. That said, I'm not sure if the second wave of non-response and re-weighting explains the inconsistencies you noticed, although at minimum, it meant that the final CBS survey featured a much smaller sample (n=323) than the other polls conducted just before the NH primary. I also wrote more about this poll in January 2008.

@nielej:

I'm likely to do a longer post about what this report says about the "Bradley Effect" (or lack thereof). For now it might be worth reading Kohut's op-ed again. He did not argue that white voters lied to pollsters, although that has become the commonly assumed meaning of the "Effect." Rather, Kohut argued that "poorer, less well-educated white people" -- who were more likely to favor Clinton -- were more likely to have hung up on pollsters and, as such, may have been under-represented in New Hampshire polling. The AAPOR report finds some evidence to support Kohut's hypothesis (summarized in the bullet quoted above) and says so explicitly (see pp. 33-34).

____________________


