Pollster.com

New Hampshire 2008

 

Re [2]: AAPOR's Report - 2008 vs 1948


More reader reaction to my post last week on the disappointing cooperation with AAPOR's Ad Hoc Committee report on the New Hampshire primary polling mishap. Or more accurately, this email from long-time survey researcher Jan Werner responds to a comment I posted from former CBS News polling director Kathy Frankovic:

Kathy Frankovic wrote that:

"In 1948, there was an accepted academic standard for survey research – probability sampling – one that was not used by the public pollsters. That – in addition to the lack of polling close to the election – was an obvious conclusion the SSRC researchers could make to explain what went wrong."

Kathy's statement is literally true, but it is also highly misleading because it fails to note that in its report, the SSRC panel led by Frederick Mosteller explicitly ruled out sampling methodology as the primary reason for the failure of the polls to predict the 1948 presidential election. The report did mention the lack of polling close to the election as one among several contributing factors, but that rationale subsequently gained prominence in large part because, as the explanation least damaging to their businesses, it could easily be endorsed by the pollsters themselves.

The Mosteller panel did admonish the pollsters for using quota rather than probability sampling, but it concluded that the incorrect predictions most likely derived from a combination of many different inadequacies in the conduct of the 1948 polls. It also noted that a major problem for the polling industry was the unrealistic expectations of accuracy that it had fostered among the public. These conclusions seem equally compelling today.

Neither the 2009 AAPOR report nor the 1949 SSRC report could satisfactorily answer the question of what went wrong, but both provide superb resources for students of public polling. AAPOR would do the profession and the public a great favor if it could help make "The Pre-Election Polls of 1948" available again, either in print or online.


Re: AAPOR's Report - 2008 vs 1948


Via email, Kathy Frankovic, former Director of Surveys at CBS News, sends this comment about my post yesterday on the disappointing pollster cooperation with AAPOR's ad hoc committee report on the New Hampshire primary polling mishap:

There is a big difference between 1948 and where we are today in the field of survey research. In 1948, there was an accepted academic standard for survey research – probability sampling – one that was not used by the public pollsters. That – in addition to the lack of polling close to the election – was an obvious conclusion the SSRC researchers could make to explain what went wrong. There is no such obvious methodological improvement available as an explanation for the 2008 problems in NH. It’s not cell phones, it’s not respondent selection, and it’s not ballot order. Timing (and the possibilities of last minute changes) may once again be everything. And while more organizations should have disclosed more, I think that it’s unlikely that more data would have told us anything more definitive than we learned from the report as written.

For what it's worth, the report itself specifically referenced the contrast with 1948:

The work of the committee, and hence this report, has been delayed by a slow response from many of the pollsters who collected data from the four states in which the committee focused its efforts – New Hampshire, South Carolina, Wisconsin, and California. This is quite a different situation than after the 1948 general election, when there were fewer firms engaged in public polling, the threat to the future of the industry seemed to be greater, and the polling firms were fully cooperative. In 2008, many of the firms that polled in New Hampshire had studies in the field for primaries that occurred right after that. Today, there are well-publicized standards for disclosure of information about how polls are conducted. AAPOR, an organization of individuals engaged in public opinion research; the National Council on Public Polls (NCPP), an organization of organizations that conduct public opinion research; and the Council of American Survey Research Organizations (CASRO), also an organization of organizations, have all promulgated standards of disclosure. Despite the norms, at the time this report was finalized, one-fifth of the firms from which information was requested had not provided it. For each of these four firms, we were able to retrieve some of the requested information through Internet searches, but this was incomplete at best. If additional information is received after the report’s release, the database at the Roper Center will be updated.

So if and when the pollsters who did not share raw, respondent-level data provide it to AAPOR, it will be posted to the Roper Center's listing (which is open to anyone, not limited to member institutions). I am told that Roper will also soon add pdf reproductions of all the responses received from pollsters, not just those that shared respondent-level data.


AAPOR's Report: Why 2008 Was Not 1948


As someone who writes about polling methodology, I consider last week's report from the American Association for Public Opinion Research (AAPOR) on the mishaps in the New Hampshire and other primary election polling last year manna from heaven. Republican pollster David Hill was right to call it "the best systematic analysis of what works and what doesn't for pollsters" in decades. The new findings and data on so many aspects of polling arcana, from "call backs" to automated-IVR polls, are invaluable, especially given that the AAPOR researchers never obtained all of the public polling data from New Hampshire and the three other states they focused on.

But that lack of information was also important. Valuable as it is, the report was also hindered by a troubling lack of disclosure and cooperation from many of the organizations that played a part in what even prominent pollsters described as an unprecedented "fiasco" and "one of the most significant miscues in modern polling history."

Last week, the Wall Street Journal's Carl Bialik summed up the problem:

Just seven of 21 polling firms contacted over a year ago by the American Association for Public Opinion Research for the New Hampshire postmortem provided information that went beyond minimal disclosure -- such as data about the interviewers and about each respondent.

Last year, two days after the New Hampshire primary, I wrote a column reminding my colleagues of the investigation that followed the 1948 polling debacle that created the infamous "Dewey Defeats Truman" headline (emphasis added):

[A] week after the [1948] election, with the cooperation of virtually every prominent public pollster, the independent Social Science Research Council (SSRC) convened a panel of academics to assess the pollsters' methods. After "an intensive review carried through within the span of five weeks," their Committee on the Analysis of Pre-election Polls and Forecasts issued a report that would ultimately reshape public opinion polling as we know it.

[...]

[SSRC Committee] members moved quickly, as their report explains, out of a sense that "extended controversy regarding the pre-election polls ... might have extensive repercussions upon all types of opinion and attitude studies."

The American Association for Public Opinion Research "commended" the SSRC effort and urged its member organizations to cooperate. "The major polling organizations," most of which were commercial market researchers competing against each other for business, "promptly agreed to cooperate fully, opened their files and made their staffs available for interrogation and discussion."

But that was 1948. Things were different last year.

On January 15, 2008, AAPOR announced it would form an ad-hoc committee to evaluate the primary pre-election polling in New Hampshire. Two weeks later, it announced the names of the eleven committee members. They convened soon thereafter and decided to broaden the investigation to include the primary pre-election polls conducted in South Carolina, California and Wisconsin (p. 16 of the report explains why). On March 4, 2008, AAPOR President Nancy Mathiowetz sent a six-page request to the 21 organizations that had released public polls in the four states, including 11 that had polled in New Hampshire.

The request (reproduced on pp. 83-88 of the report) had two categories: "(1) information that is part of the AAPOR Standards for Minimal Disclosure and (2) information or data that goes beyond the minimal disclosure requirement." The first category included items typically disclosed (such as survey dates, sample sizes and the margin of error), some not always available (including exact wording of questions asked and weighting procedures) and some details that most pollsters rarely release (such as response rates). The second category of information beyond minimal disclosure amounted to the 2008 equivalent of the "opening of files" from 1948. Specifically, they asked for "individual level data for all individuals contacted and interviewed, records about the disposition of all numbers dialed, and information about the characteristics of interviewers."

The Committee had originally hoped to complete its report in time for AAPOR's annual meeting in May 2008, but by then, as committee chair Michael Traugott reported at the time, only five firms had responded to the request (the first to respond, Mathiowetz tells me, was SurveyUSA, which provided complete electronic data files for the two states it polled on April 8, 2008). In fairness, many of the pollsters had their hands full with surveys in the ongoing primary battle between Barack Obama and Hillary Clinton. Nevertheless, when I interviewed Traugott in May, he still hoped to complete the report in time for the conventions in August, but as cooperation lagged, the schedule slipped once again.

By late November 2008, with the elections completed, some firms had still not responded with answers to even the "minimal disclosure" questions asked back in March. At that point, Mathiowetz tells me, she filed a formal complaint with AAPOR's standards committee, alleging violations of AAPOR's code of ethics. Since the standards evaluation committee has not yet completed its work, and since that committee is bound to keep the specifics of such complaints confidential, Mathiowetz could not provide further details. However, she did say that some pollsters supplied information subsequent to her complaint that the Ad Hoc Committee included in last week's report.

So now that the report is out, let's use the information it provided to sort the pollsters into three categories:

The best: Seven organizations -- CBS News/New York Times, the Field Poll, Gallup/USA Today, Opinion Dynamics/Fox News, Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR -- provided complete "micro-data" on every interview conducted. These organizations lived up to the spirit of the 1948 report, opening up their (electronic) files and, as far as I can tell, answering every question the AAPOR committee asked. They deserve our praise and thanks.

The worst: Three organizations -- Clemson University, Ron Lester & Associates/Ebony/Jet and StrategicVision -- never responded.

The rest in the middle: Eleven organizations -- American Research Group (ARG), Datamar, LA Times/CNN/Politico, Marist College, Mason-Dixon/McClatchy/MSNBC, Public Policy Polling (PPP), Rasmussen Reports, Research 2000/Concord Monitor, RKM/Franklin Pierce/WBZ, Suffolk University/WHDH and Zogby/Reuters/C-Span -- fell somewhere in the middle, providing answers to the "minimal disclosure" questions but no more.   

The best deserve our praise, while those that evaded all disclosure deserve our scorn. But what can we say about the pollsters in the middle?

First, remember that their responses met only the "minimal disclosure" requirements of AAPOR's code of ethics. They provided the "essential information" that pollsters should include, according to AAPOR's ethical code, "in any report of research results" or at least "make available when that report is released." In other words, the middle group provided information that pollsters should always put into the public domain along with their results, not months later or only upon request following an unprecedented polling failure.

Second, consider the way that minimal cooperation hindered the committee's efforts to explain what happened in New Hampshire, especially on the question of whether a late shift to Senator Clinton in New Hampshire explained some of the polling error there. That theory is popular among pollsters (yours truly is no exception), partly because of the evidence -- most polls finished interviewing on the Sunday before the primary and thus missed reactions to Clinton's widely viewed "emotional" statement the next day -- and partly because the theory is easier for pollsters to accept, as it lets other aspects of methodology off the hook. The problem wasn't the methodology, the theory goes, just a "snapshot" taken too soon.

While the committee found evidence that several other factors influenced the polling errors in New Hampshire, they concluded that late decisions "may have contributed significantly." They based this conclusion mostly on evidence from two panel-back surveys -- conducted by CBS News and Gallup -- that measured vote preferences for the same respondents at two distinct times. The Gallup follow-up survey was especially helpful, since it recontacted respondents from their final poll for a second interview conducted after the primary.

Although the evidence suggested that a late shift contributed to the problem, the committee hedged on this point because, as they put it, "we lack the data for proper evaluation." Did more data exist that could shed light on this issue? Absolutely.

First, four pollsters continued to interview on Monday. ARG, Rasmussen Reports, Suffolk University and Zogby collectively interviewed approximately 1,500 New Hampshire voters on Monday, but the publicly released numbers combined those interviews with others conducted on Saturday and Sunday. The shifts these pollsters reported in their final releases were inconsistent, but none of the four ever released tabulations that broke out results by day of the week, and all four refused to provide respondent level data to the AAPOR committee.

That omission is more than just a missed opportunity. It also leaves open the possibility that at least one pollster -- Zogby -- was less than honest about what his data said about the trend in the closing hours of the New Hampshire campaign. See my post from January 2008 for the complete details, but the last few days of Zogby's tracking numbers simply do not correspond with his characterization of that data the day after the primary. Full cooperation with the AAPOR committee would have resolved the mystery. Zogby's failure to cooperate should leave us asking more troubling questions.

But it was not just the "outlaw pollsters," to quote David Hill, that failed to share important data with the AAPOR committee. Consider the Marist Poll, produced by the polling institute at New York's Marist College. Marist is not a typical pollster. Its directors, Lee Miringoff and Barbara Carvalho, are long-time AAPOR members. More important, Miringoff is a former president of the National Council on Public Polls (NCPP) and both Miringoff and Carvalho currently serve on its board of trustees. NCPP is a group of media pollsters that has its own, slightly less stringent disclosure guidelines that nonetheless encourage members to "release raw datasets (ASCII, SPSS, CSV format) for any publicly released survey results."

The day after the New Hampshire primary, Marist reported its theories about what went wrong and promised "to re‐contact in the next few days the voters we spoke with over the weekend to glean whatever additional insights we can." Seven weeks later, Miringoff participated in a forum on "What Happened in New Hampshire" sponsored by AAPOR's New York chapter and shared some preliminary findings from the re-contact study. "Our data," he said, "suggest there was some kind of late shift to Hillary Clinton among women."   

Given the importance of that finding, the academic affiliation of the Marist Poll, Miringoff's role as a leader in NCPP and that organization's stated commitment to disclosure, you might think that Marist would be first in line to share its raw data with the respected scholars on the AAPOR committee.

You might think that, but you would be wrong.

As of this writing, the Marist Institute has yet to share raw respondent level data for either their final New Hampshire poll or the follow-up study. In fact, the Marist Institute has not yet provided any of the results of the recontact study to Professor Traugott or the AAPOR committee -- not a memo, not a filled-in questionnaire, not a Powerpoint presentation...nothing.

I was surprised by their failure to share raw data, so I emailed Miringoff for comment. His answer:

First, we did provide information on disclosure as required by AAPOR and I spoke, along with Frank Newport, on the NH primary results at a meeting of NYAAPOR. It was a great turnout and provided an opportunity to discuss the data and issues.

Unfortunately, the "information on disclosure" they provided was, again by AAPOR standards, the minimum that any researcher ought to include in any publicly released report. To be fair, Marist had already included much of that "minimal disclosure" information in their original release. According to Nancy Mathiowetz, however, Marist did not respond to her requests -- filling in information missing from the public report such as the order of questions, a description of their weighting procedure and response rate data -- until November 17, 2008. And that transmission said nothing at all about the follow-up study.

Miringoff continued:

Second, we did conduct a post-primary follow-up survey to our original pre-primary poll. We think both these datasets should be analyzed in tandem. We are preparing them to be included at the Roper Center along with all of our pre-primary and pre-election polling from 2008 for anyone to review.

What's the hurry?

I am not sure what is more depressing: That a group of "outlaw pollsters" can flout the standards of the profession with little or no fear of recrimination, or that a former president of the NCPP can so blithely dismiss repeated requests from AAPOR's president with little more than a "what me worry" shrug. Does it really require 14 months (and counting) to prepare these data for sharing?

Just after the primary, I let myself hope that the pollsters of 2008 might follow the example of the giants of 1948, put aside the competitive pressures and open their files to scholars. Fortunately, the survey researchers at CBS News, the Field Poll, Gallup, Opinion Dynamics, PPIC, SurveyUSA and the University of New Hampshire (and their respective media partners) did just that. For that we should be grateful. But the fact that only 7 of 21 organizations chose to go beyond minimal disclosure in this case is profoundly disappointing.

The AAPOR Report is a gift for what it tells us about the state of modern pre-election polling in more ways than one. The question now is whether polling consumers can find a way to do something about the sad state of disclosure this report reveals.

Correcting the Correction: I had it right the first time. The CBS News/New York Times partnership conducted their first New Hampshire survey in November 2007, but CBS News was solely responsible for the panel-back study. The original version of this post incorrectly identified the CBS News New Hampshire polling as a CBS/New York Times survey. While those organizations are partners for many projects, the New York Times was not involved in the New Hampshire surveys.


What Happened in NH? AAPOR's Answer


Most political junkies remember two things about last year's New Hampshire primary: first, Hillary Clinton's surprising three-point win; second, that the pollsters were the "biggest losers," as the final round of pre-election polls had shown Barack Obama surging ahead. A dozen different surveys showed Obama leading by a range of 3 to 13 points, and by roughly six percentage points on our final trend estimate. Fewer remember that polling errors were even bigger in subsequent states, and fewer still will recall that the American Association for Public Opinion Research (AAPOR) announced formation of an ad-hoc committee to study and report on the problems of the New Hampshire and other primary polls.

Well today, more than fourteen months after the 2008 New Hampshire primary, the AAPOR Ad Hoc committee has released its full report. While those hoping for an obvious smoking gun will be disappointed, the report represents a massive collection of information that does shed new light on what happened in New Hampshire. The evidence is spotty and frequently hedged -- "definitive tests" were "impossible" -- but AAPOR's investigators identify four factors as contributing to polls having "mistakenly predicted an Obama victory." From the AAPOR committee press release:

  • Given the compressed caucus and primary calendar, polls conducted before the New Hampshire primary may have ended too early to capture late shifts in the electorate's preferences there.
  • Most commercial polling firms conducted interviews on the first or second call, but respondents who required more effort to contact were more likely to support Senator Clinton. Instead of continuing to call their initial samples to reach these hard‐to‐contact people, pollsters typically added new households to the sample, skewing the results toward the opinions of those who were easy to reach on the phone, and who more typically supported Senator Obama.
  • Non‐response patterns, identified by comparing characteristics of the pre‐election samples with the exit poll samples, suggest that some groups who supported Senator Clinton--such as union members and those with less education--were under‐ represented in pre‐election polls, possibly because they were more difficult to reach.
  • Variations in likely voter models could explain some of the estimation problems in individual polls. Application of the Gallup likely voter model, for example, produced a larger error than was present in the unadjusted data. The influx of first-time voters may have had adverse effects on likely voter models.

In other words, what happened in New Hampshire wasn't one thing, it was likely a lot of small things, all introducing errors in the same direction. Various methodological challenges or shortcomings that might ordinarily produce offsetting variation in polls instead combined to throw them all off in the same direction. Polling's "perfect storm" did not materialize this past fall, but that label seems more apt for the New Hampshire polling debacle.
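A minimal simulation sketch of that intuition (all effect sizes here are invented for illustration, not taken from the report):

```python
import random

def simulate_poll_error(biases, noise_sd=1.0, n_polls=1000):
    """Average total polling error, in points, from several error sources."""
    total = 0.0
    for _ in range(n_polls):
        total += sum(random.gauss(b, noise_sd) for b in biases)
    return total / n_polls

# Offsetting sources: the biases cancel on average, polls stay roughly unbiased.
print(simulate_poll_error([+1.5, -1.5, +1.0, -1.0]))   # ~0 points

# The same magnitudes all favoring one candidate: the errors pile up.
print(simulate_poll_error([+1.5, +1.5, +1.0, +1.0]))   # ~5 points
```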

The report also produces evidence that rules out a number of prominent theories, among them the so-called "Bradley Effect." The authors claim they saw "no evidence that white respondents over-represented their support for Obama," and thus, no evidence of "latent racism" benefiting Clinton. Fair enough, but they do report evidence of a "social desirability effect" that led respondents to report "significantly greater" support for Obama "when the interviewer is black than when he or she is white" (although Obama still led by smaller margins when interviewers were white -- see pp. 55-59 of the pdf report).

As should be obvious, this very quick and cursory review just scratches the surface of the information in the 123-page report. There is a story here about the sheer breadth of the information provided. For example, today's release also includes immediate availability through the Roper Archives of full respondent level data provided by CBS News, Gallup/USA Today, Opinion Dynamics/Fox News, the Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR for polls conducted in New Hampshire, South Carolina, California and Wisconsin. [Update: I'm told that a small glitch in the documentation is holding up release of some or all of the Roper data until, hopefully, later today].

But aside from the admirable disclosure by the organizations listed above, there is also a story here about an outrageous lack of disclosure and foot-dragging, including three organizations that "never responded" to AAPOR's requests for information over the last fourteen months: Strategic Vision (for polls conducted in New Hampshire and Wisconsin), Clemson University and Ebony/Jet (for polls conducted in South Carolina).

Stay tuned. I will have more to say later today and in the days that follow on this new report. Meanwhile, please share your thoughts on the report in the comments below.

For further reading, see my first review of the theories for the New Hampshire polling flap, our bibliography of reaction around the web and the rest of our coverage from 2008.

Update: ABC's Gary Langer shares his first impressions, including one thought I neglected to include: "The volunteer AAPOR committee members who produced [the report], led by Prof. Michael Traugott of the University of Michigan, deserve our great thanks."

Interests disclosed: As a member of AAPOR's Executive Committee from May 2006 through May 2008, I voted to create the Ad Hoc committee. I did not serve on the committee but our Pollster.com colleague Charles Franklin did participate.


Undersized Undecideds


Two days ago, Nick Panagakis reopened our debate about the "true" size of the undecided voters in his post on pollster.com, entitled Supersized Undecideds. Oddly, his post tends to support my argument, rather than contradict it.

 

First I should note that Nick has misstated my position somewhat, which was explained here and here. In brief, my argument is that pollsters should measure the undecided vote by including in their vote choice question a tag line, "or haven't you made up your mind yet?" I also argue that pollsters should not insist on asking who voters would choose "if the election were held today," but rather whom they would support on Election Day. I contend that this way of asking voters their candidate preferences produces a more realistic and accurate picture of the electorate than the way pollsters currently report the results of their hypothetical, forced-choice vote question.

 

Nick disagrees, because he thinks that this approach would exaggerate the number of undecided voters. He makes the novel argument that any indecision measured as I suggest would be "calendar-induced" indecision but not "candidate-induced" indecision. I don't know of any evidence for the validity of this distinction, but it's crucial to his argument.

 

To illustrate this point, he presents recent data from the ABC/Washington Post tracking polls, which suggest that currently only 9 percent of voters say they could change their mind before election day, including 3 percent who say it's a "good" chance they could do so, and 6 percent who say it's "pretty unlikely" they would do so. The latter term Nick interprets in his own mental framework as "no chance in h*ll."

 

Then, as though it's an obvious problem, Nick says, "Imagine if polls up until last week were showing undecideds 10 to 20 points higher - or still showing 9 points greater this week." Yes, let's imagine the 9 percentage point increase in the undecided voter group over what is reported these days.

 

It's important to note that most polls have been showing just a couple of percentage points of undecided voters, including ABC and the Post. These news organizations did not highlight the 9 percent undecided in their news stories, but instead focused on Obama's lead over McCain by 52 percent to 45 percent - leaving 3 percent unaccounted for (1 percent "other" and 2 percent "undecided"). If you want to know how many voters might "change their minds," you have to look hard for the data. Of course, ABC and the Post are no different from most other polling organizations that regularly suppress the undecided vote.

 

So, if the polls were to show "9 points greater undecided this week," as Nick feared, that would still be only 10 to 11 percent. That hardly seems excessive, given that the 2004 exit poll found 9 percent of voters saying they had made up their minds in the three days just prior to the election. And just today, the AP reported that about 14 percent of voters were "persuadable" -- a news story that, unlike most poll stories, emphasized rather than suppressed the size of the undecided vote.

 

Just before the New Hampshire Democratic Primary, the UNH Survey Center found 21 percent of voters who said they had not made up their minds (when asked directly, without the hypothetical, forced-choice version that is standard), and the exit poll showed that 17 percent of voters said they had made up their minds on election day.

 

These numbers suggest that measuring and reporting the size of the undecided voters is an important part of describing the state of the electorate. Not to do so is one of the continuing failures of most media polls.


NYAAPOR's NH Post-Mortem


Last Thursday, the New York chapter of the American Association for Public Opinion Research (AAPOR) held a post-mortem on "What Happened" to the polls in New Hampshire. The meeting included presentations by pollsters from Gallup, the Marist Institute and CBS News on the polls conducted by their own organizations. Gary Langer, the ABC News polling director, blogged a complete report on the discussion that is well worth reading in full. Some highlights follow.

Gallup's Frank Newport attributed "half of the misstatement" of their poll to their likely voter model:

Gallup, whose final poll had Obama ahead by 13 points, had a closer 5-point Obama lead among people who described themselves as registered voters. That means its likely voter modeling, used to produce a more accurate estimate of who’ll actually vote, instead introduced error.

Gallup’s editor-in-chief, Frank Newport, said the modeling included factors such as enthusiasm and attention to the race, both of which may have increased for Obama and slacked off for Hillary Clinton after Obama’s Jan. 3 victory in Iowa. Unlikely voters – those excluded from the model – were much better for Clinton. “Obviously that was a cause for the incorrect likely voter numbers that Gallup put out,” he said.

The conference also featured the first discussion of post-election follow-up surveys conducted by both Gallup and the Marist Institute:

Newport and Miringoff based their conclusions partly on post-election polls in which they called back respondents to their pre-election polls in an effort to see where those polls went wrong. Analysis of those data is not complete, though Newport said Gallup hopes to post some conclusions on its website next week.

Both said their callback polls reached about two-thirds of the original poll respondents; they hadn’t yet weighted these samples to adjust for the noncoverage, a step that could improve their analysis.

See the full article for some additional details on the call back surveys and a discussion of their potential pitfalls.


New Hampshire Polling Snafu: Bibliography


With his column in The Hill this week, Democratic pollster Mark Mellman becomes the latest pollster to weigh in on the various theories behind the polling kerfuffle in New Hampshire two weeks ago. Since I neglected to link to some of these analyses and mentioned others only in passing, I thought it would be worthwhile to post a collection of links to everything I have collected on the subject. Going forward, this entry will serve as a "frequently asked questions" (FAQ) page for the New Hampshire polling controversy.

For now, here are the links. If you know of an analysis worth including that I have overlooked (or if one of these links is broken), please email us (questions at pollster dot com).

My Blog Posts

New Hampshire: So What Happened?
A Lesson from 1948
What About Monday Night?
More Clues: The CBS Panel Survey
The New Hampshire Recount

Charles Franklin

Polling Errors in New Hampshire

See also all posts on Pollster.com tagged "New Hampshire 2008"


Analysis by Pollsters and Other Notables

Marc Ambinder, The Atlantic
What Happened To The Polls? The Zbornak Effect?

Jon Cohen, polling director, The Washington Post
About Those Democratic Pre-election Polls
What if the Polls Were Right?

Robert Erikson and Chris Wlezien
Likely Voter Screens and the Clinton Surprise in New Hampshire

Kathy Frankovic, director of surveys, CBS News
NH Polls: What Went Wrong
Gender and Race in the Democratic Primary

John Judis, senior editor, The New Republic
Poll Potheads
Response to Kohut

Mickey Kaus, Slate
Hillary Stuns -- Four Theories

Andrew Kohut, president, Pew Research Center
Getting it Wrong
Response to Judis

Gary Langer, polling director, ABC News
New Hampshire's Polling Fiasco
The New Hampshire Polls: What We Know
Why Pollsters Got it Wrong (Video)

Joe Lenski, Edison Media Research
More on the New Hampshire Turnout, And Its Implications

Nancy Mathiowetz, president, American Association for Public Opinion Research (AAPOR)
Pre-Election Polling in New Hampshire: What Went Wrong?

Allan McCutcheon
Who Were New Hampshire's Likely Democratic Primary Voters

Mark Mellman, president, The Mellman Group (D)
N.H.: Many Theories, Little Data

John Nichols, The Nation
Did "The Bradley Effect" Beat Obama in New Hampshire?

Frank Newport, editor-in-chief, Gallup
Putting the New Hampshire Polls Under the Microscope
More on New Hampshire

Scott Rasmussen, president, Rasmussen Reports
What Happened to the Polls in New Hampshire

John Sides
Should We Blame Secretly Prejudiced New Hampshire Voters for Obama's Loss?
The New Hampshire Polls, One More Time

John Zogby, president, Zogby International
Polling the New Hampshire Primaries: What Happened?


Prior Research on Bradley/Wilder and Interviewer Effects

Finkel, Guterbock and Borg, Race-of-Interviewer Effects in a Preelection Poll: Virginia 1989 (via AAPOR)

Hugick, Polling in Biracial Elections

Hugick and Zeglarski, Polls During the Past Decade in Biracial Election Contests

Keeter and Samaranayake (Pew Research Center), Can You Trust What Polls Say about Obama's Electoral Prospects?

Pew Research Center, Race and Reluctant Respondents

Traugott and Price, A Review: Exit Polls in the 1989 Virginia Gubernatorial Race: Where Did They Go Wrong? (via AAPOR)

Streb et al., Social Desirability Effects and Support for a Female American President (via AAPOR)


AAPOR Ad-Hoc Committee

AAPOR FAQ on New Hampshire Polling: What Went Wrong

AAPOR Announces Ad-Hoc Committee to Evaluate New Hampshire Polls


Stolen Vote?

Mark Blumenthal - The New Hampshire Recount

Jennifer Agiesta and Jon Cohen, Washington Post - The Method or the Map

DailyKos Diarist DHinMI - Enough with the "Diebold Hacked the NH Primary" Lunacy

Brad Friedman, BradBlog
NH Primary: Pre-Election Polls Wildly Different Than Announced Results for Clinton/Obama
New Hampshire's Chain of Custody

Farhad Manjoo, Salon - Was the New Hampshire Vote Stolen?

Josh Marshall - Enough

New Hampshire Secretary of State - Recount Results


Verse

Sharon Brogan - I Have This to Say About That

Don't those pollsters know
that married women
lie in the presence
of their husbands?


The New Hampshire Recount


Arguably, the election results that will get the least attention today involve the hand recount underway in New Hampshire at the request of Democratic candidate Dennis Kucinich. The results of the recount so far, as posted by the New Hampshire Secretary of State, show some minor discrepancies but nothing that would explain pre-election surveys over the final weekend of the campaign showing Barack Obama running ahead of Hillary Clinton.

In most cases, the minor glitches appear to involve uncounted write-in votes or minor clerical errors. As the Union Leader reported yesterday:

The widest variations so far were in Manchester's Ward 5. Vote counters there mistakenly transposed write-in votes for vice president as votes for presidential candidate. As a result, all major candidates lost votes. Kucinich lost three in the ward and has a total of 20 votes there. Hillary Clinton lost 64 with a new total of 619; John Edwards lost 38 and has 217 votes; Barack Obama lost 39 and has 365, and Bill Richardson lost seven, leaving him 39.

For those interested, Salon's Farhad Manjoo has a nice review of the various fraud theories and the evidence (or lack thereof) behind them. One possibly overlooked point is that New Hampshire uses no touchscreen voting machines. Every ballot cast there was cast on paper, although as Manjoo reports, four out of five of the ballots were counted with optical scan equipment: "The machines that read the ballots and the computers that count the ballots and report the results are made by a company notorious for shoddy practices: Diebold."

Those who have raised questions about the count have pointed to vote returns showing Barack Obama doing better in the minority of mostly rural precincts that counted the votes by hand, while Clinton did better where votes were counted by Diebold machines. The most likely explanation, as Manjoo puts it: "Those places simply vote differently." See his article for the details, or the analysis of past vote results by the Washington Post's Jennifer Agiesta and Jon Cohen.

What about exit poll results cited by Chris Matthews showing Obama ahead? The problem is that the numbers that Matthews saw were likely based on a "composite" estimate that melds exit poll tallies and pre-election polls. It would not be surprising if those results showed an advantage for Obama (I blogged about that issue on Election Day well before any results were available).

I had no access to the "end of day" exit poll tallies available to the network decision desks, but Manjoo went directly to the source:

Daniel Merkle, who heads ABC News' "decision desk" -- which was getting the exact same exit polling data that folks at NBC were getting -- told me that the numbers he was receiving during Election Day did not show a certain Obama win. Merkle said the data indicated "a very close race on the Democratic side," and "that's what it ended up being."

"It was within a couple points," Merkle said. "When we're seeing an exit poll within a couple points, that's a close race." The exit poll numbers, he added, were a "surprise" compared to pre-election polls. "The exit poll was not showing an 8- to 10-point Obama lead. It was showing a close race."

Manjoo's piece is well worth reading in full, but he closes with a point made so well that I want to quote it in full:

Last night I had a long discussion with Brad Friedman, who runs the election-reform news Web site Brad Blog. Over and over, he said, "My biggest concern here is that 80 percent of the vote is uncounted by any human being." His request is simple and straightforward: "Why not count the damn votes?"

He's right. Why not count the votes?

And thanks to Kucinich, that's what will likely happen now. It will probably take some time; weeks, if not months. But soon, we'll know what happened.

But as many voting-reform experts have argued, manually counting the votes should be a routine in any race. There are logistical reasons why it would be impractical to hand count every vote in every election. But if we're going to use machines -- optical-scan machines that use paper ballots, that is; touch-screen machines everywhere ought to be burned -- we should, at least, conduct a randomized, accountant-approved audit of ballots.

In other words, after every election, officials should randomly count some number of ballots to double-check the machines' results. It is amazing that this is not a standard procedure across the country; it is a disgrace that election officials aren't rushing to implement such procedures now.

I couldn't agree more. Exit polls are extremely useful to those of us who want to understand who voted and the meaning of election outcomes, but they are a terrible way to verify the vote count. Random, hand-count audits coupled with optical scan voting would help raise everyone's confidence in the integrity of our elections. Without regular, independent, random audits, these perennial conspiracy theories will continue.
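For what it's worth, the random audit idea is simple enough to sketch in a few lines of code; the precinct names, vote totals and tolerance below are all hypothetical:

```python
import random

# Invented precinct totals; a real audit would use the certified machine counts.
machine_totals = {"Ward 1": 1204, "Ward 2": 987, "Ward 3": 1560, "Ward 4": 843}

def audit(machine_totals, hand_count, sample_size=2, tolerance=0.005):
    """Hand-count a random sample of precincts and flag any discrepancy
    larger than `tolerance` (as a fraction of the machine total)."""
    flagged = []
    for precinct in random.sample(list(machine_totals), sample_size):
        machine = machine_totals[precinct]
        hand = hand_count(precinct)          # the expensive manual step
        if abs(hand - machine) > tolerance * machine:
            flagged.append((precinct, machine, hand))
    return flagged

# Demo with a stand-in hand count that simply agrees with the machines:
print(audit(machine_totals, lambda p: machine_totals[p]))  # []
```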


More NH Clues: The CBS Panel Survey


One of the theories about what went wrong for the polls in New Hampshire is that the apparent post-Iowa "bounce" for Barack Obama never really occurred. Perhaps the surge for Obama was just an artifact of some sort of sampling or other methodological distortion that created the false impression that some New Hampshire voters were moving to Obama (or away from Clinton) in the wake of the Iowa Caucuses. While it certainly does not resolve the New Hampshire mystery, there is one piece of forensic evidence on this point that most of us have overlooked: the "panel back" survey of New Hampshire Democrats conducted by CBS News.

Unlike the other pollsters, who contacted fresh samples of New Hampshire households over the final weekend, CBS did something different. They re-contacted 417 likely New Hampshire Democratic primary voters they had previously interviewed in November, and were able to re-interview 323. This design, which pollsters typically call a "panel back," allows for an examination of individual change. In other words, instead of comparing the aggregate results of two totally separate samplings, the CBS pollsters were able to look for changed opinion among individual respondents.

The CBS panel-back study, completed on the Saturday and Sunday before the New Hampshire primary, found a large individual-level shift from Clinton to Obama (and Edwards), but virtually no shift away from Obama:

26% of likely Democratic primary voters have changed their preference since November. [...]

The New York Senator lost almost one in five of her November voters to Obama, and 10% of her voters have gone to Edwards. Obama, meanwhile, has kept 95% of the individual voters he had in November.

Those shifting preferences helped move the race, as measured by the two CBS surveys, from a 20-point Clinton lead over Obama in November (39% to 19% among those later re-interviewed) to a seven-point Obama advantage (35% to 28%) over the final weekend.
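A quick sketch of the arithmetic, using the CBS figures quoted above (it traces only the flows the excerpt reports, so it cannot reproduce the full weekend result on its own):

```python
# November shares among the re-interviewed panelists, per the CBS release.
november = {"Clinton": 39.0, "Obama": 19.0}

clinton_to_obama = 0.20 * november["Clinton"]    # "almost one in five"
clinton_to_edwards = 0.10 * november["Clinton"]  # "10% of her voters"
obama_retained = 0.95 * november["Obama"]        # "kept 95%"

clinton_now = november["Clinton"] - clinton_to_obama - clinton_to_edwards
obama_now = obama_retained + clinton_to_obama

print(f"Clinton {clinton_now:.1f}%, Obama {obama_now:.1f}%")
# Clinton ~27.3%, Obama ~25.9%: the rest of Obama's 35% necessarily came
# from Edwards defectors, minor candidates and previously undecided voters.
```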

But that's not all. The design of this survey also provides the only data I am aware of to test another theory: Were Clinton supporters, having grown "dispirited" and "disillusioned by her decline in Iowa," simply "undercounted" by the pollsters, as political scientists Bob Erikson and Chris Wlezien theorize here on Pollster.com?

One check would be the response rate, as reported by CBS polling director Kathy Frankovic in her latest column:

The January response rate for the November Obama and Clinton voters was nearly the same, 74 percent for November Obama supporters and 68 percent for November Clinton voters.

But what about a likely voter screen? CBS polling analyst Anthony Salvanto emailed me to add that 71% of Obama's November supporters responded to the second survey and said they were still "definitely" or "probably" planning to vote in the Democratic primary, as compared to 64% of Clinton's November supporters.

Were these differences statistically significant? Ah, there's the rub. The sample sizes involved are small and lack the statistical power to determine whether the differences in response and intent to vote were real. Extrapolating from the November data suggests that CBS had to re-contact roughly 154 Clinton supporters and 92 Obama supporters from November. The margin of sampling error around each subgroup is in the +/- 8-10% range. So neither difference above is statistically significant (for the statistically fluent: I get p-values in the .20 to .30 range, though your mileage may vary).
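Here is one way to run that check, as a sketch: a simple two-proportion z-test on the response rates, using the rough subgroup sizes above (other reasonable tests, and the intent-to-vote comparison, will give somewhat different p-values):

```python
from math import erf, sqrt

def two_prop_z_test(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Response rates: 74% of ~92 November Obama supporters vs. 68% of ~154
# November Clinton supporters (subgroup sizes are rough extrapolations).
z, p = two_prop_z_test(0.74, 92, 0.68, 154)
print(f"z = {z:.2f}, two-sided p = {p:.2f}")   # p ~ 0.3: not significant
```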

Of course, significant or not, CBS did use the response variation in weighting their January data, although they saw "little" change in the Clinton-Obama margin as a result:

Before publication of the results, we adjusted (“post-stratified”) the results to account for that small difference in response by previous candidate preference (which is normally done in panel surveys). Correcting for that small difference in response changed little.

My own back-of-the-envelope estimate is that their non-response adjustment added maybe a point to Clinton's support and took a point away from Obama. Either way, their weighted result still showed Obama leading by seven percentage points (35% to 28%), and their individual level data showed a significant shift away from Clinton. Thus, individual-level shifts in opinion, rather than an enthusiasm gap, explained virtually all of the Obama weekend "surge" on this survey.
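Here is a sketch of the kind of post-stratification arithmetic I have in mind; this is my reading of the adjustment CBS describes, not their actual procedure, and the "Other" cell is an assumption:

```python
# November preference shares among panelists and their January response
# rates. The "Other" row is an assumption; CBS did not publish those cells.
november_share = {"Clinton": 0.39, "Obama": 0.19, "Other": 0.42}
response_rate = {"Clinton": 0.68, "Obama": 0.74, "Other": 0.71}

# Overall response rate across the cells.
overall = sum(november_share[g] * response_rate[g] for g in november_share)

# A post-stratification weight restores each group to its November share:
# weight = (target share) / (realized share) = overall_rate / group_rate.
weights = {g: overall / response_rate[g] for g in november_share}

for g, w in weights.items():
    print(f"{g}: weight {w:.3f}")
# Clinton panelists weighted up (~1.035), Obama down (~0.951) -- on net
# worth about a point, consistent with the back-of-the-envelope estimate.
```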

Kathy Frankovic's column also uses the same response rate data to question at least one interpretation of the so-called Bradley-Wilder theory:

The theory is that the respondent (white or black) might not want an interviewer to think they aren’t voting for a black candidate. They might think the interviewer will take offense, or believe the respondent to be racist.

Taken to its extreme, this theory predicts that respondents who think they have socially unacceptable opinions -- or situationally unpopular opinions -- simply won’t answer a questionnaire.

As Frankovic points out, "the theory would predict that those not voting for Barack Obama would be less likely to complete an interview." However, as the data above indicate, Obama supporters were just as likely to complete an interview as Clinton supporters, if not more so.

PS: If you like the notion of "panel-back" surveys, you will have more to chew over soon, as the Gallup Organization is apparently calling back respondents to its final New Hampshire survey. Susan Page, whose beat includes the Gallup polls sponsored by USA Today, made the following comment on MSNBC's Tim Russert Show this past Sunday (my transcript):

Page: I think it's going to be some time before we know [what happened in NH]. We're going back in the field to reinterview the people we interviewed in our poll that had [Obama] up thirteen points to ask who changed their mind, who didn't go to vote who said they were going to vote, maybe who did go to vote who told us they weren't going to vote who made it through our likely voter screen.

Russert: Are you going to publish that?

Page: Oh absolutely. We want to know why the poll was off, and we don't want to repeat the error.

I am assuming that other New Hampshire pollsters may have similar re-contact studies in the works.


New Hampshire: What About Monday Night?


Back to the final polls in New Hampshire. One of the statements I have heard from some pundits over the last few days is that pollsters stopped calling on Sunday. While that was true for most of the organizations that conducted surveys over the final weekend, there were four pollsters that continued calling through Monday. Unfortunately, the results they reported do not show a consistent pattern, although the real story may be a bit more complicated.

All four were doing "rolling average" tracking, so their final release used data collected over the preceding two or three days. If voter preferences changed radically on Monday, those changes would only affect one third to one half of their data. However, a comparison of their last two releases should give a good indication of whether the Monday night interviews showed Obama's lead expanding or declining, especially since all four showed Obama gaining over the weekend.
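A toy example makes the dilution concrete (the daily margins are invented, and equal daily sample sizes are assumed):

```python
# Invented Obama-minus-Clinton daily margins, in points.
daily_margin = {"Sat": 10.0, "Sun": 10.0, "Mon": -2.0}

def rolling_release(days, margins):
    """Equal-weight rolling average (equal daily sample sizes assumed)."""
    return sum(margins[d] for d in days) / len(days)

print(rolling_release(["Sat", "Sun", "Mon"], daily_margin))  # 6.0
# Even a Monday collapse from Obama +10 to Clinton +2 would still be
# published as "Obama +6" in a three-day rolling average.
```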

As the table below shows, the results are mixed. Two pollsters -- American Research Group (ARG) and Rasmussen Reports -- showed a slightly narrower Obama lead in their final release, but two pollsters -- Suffolk University and Zogby -- showed the Obama lead growing. None of the final shifts were big enough to be statistically significant, so if we take these results at face value, we are left with a picture of essentially random variation.

[Table image: the last two releases from ARG, Rasmussen Reports, Suffolk University and Zogby]

But can we take all of these results at face value?

Consider that across its six releases, Zogby's three-day rolling average tracking showed Obama's support steadily gaining. Their first track (finished just before the Iowa Caucus results were known) showed Clinton leading by six points (32% to 26%). Successive releases had that lead narrowing to four points, then to one point, then showed Obama suddenly leading by 10 points and then by 13 points (42% to 29%).

[Chart image: Zogby New Hampshire tracking results by release]

The final Zogby release leads with the following sentence:

The big momentum behind Democrat Barack Obama, a senator from Illinois who is seeking his party’s presidential nomination, continued up to the last hours before voters head to the polls to cast ballots in the New Hampshire primary election, a new Reuters/C-SPAN/Zogby daily tracking poll shows.

Yet the next day, in a release titled, "What Happened," John Zogby reports the following:

My polling showed Clinton doing well on the late Sunday night and all day Monday – she was in a 2-point race in that portion of the polling. But since our methods call for a three-day rolling average, we had to legitimately factor the huge Obama numbers on Friday and Saturday – thus his 12 point average lead. Unfortunately, one day or a day–and–a–half does not make a trend and we ran out of time.

So on Tuesday Zogby tells us of Obama momentum that "continued up to the last hours." On Wednesday he says the momentum ran out on Sunday afternoon. Some would see a contradiction there. Rather than focusing on the verbiage, let's focus on the numbers. Perhaps my mathematically inclined readers can come up with a realistic set of hypothetical single-day results (and half-day results for early and late Sunday) that can reconcile Mr. Zogby's data reported on Monday and Tuesday with his comments on Wednesday. I cannot.
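For those who want to try, here is a crude brute-force version of that exercise; it assumes equal daily sample sizes and whole-number margins, and it ignores the Sunday half-day split, so it is only a sketch:

```python
from itertools import product

# Zogby's last two tracks: Obama +10 (Fri-Sun window), then +13 (Sat-Mon).
# Search for daily Obama-minus-Clinton margins consistent with both tracks
# (allowing a generous 1-point rounding slack) while keeping Sunday and
# Monday within the claimed "2-point race."
solutions = []
for fri, sat, sun, mon in product(range(-5, 31), repeat=4):
    if abs(sun) > 2 or abs(mon) > 2:
        continue
    if abs((fri + sat + sun) / 3 - 10) > 1.0:
        continue
    if abs((sat + sun + mon) / 3 - 13) > 1.0:
        continue
    solutions.append((fri, sat, sun, mon))

print(solutions or "no consistent daily margins in this range")
# Nothing in a plausible range satisfies all the constraints: Saturday
# alone would have needed to be Obama +32 or better.
```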

I know, from personal experience, that Mr. Zogby gets very angry about suggestions that the remarkable last minute surge he was willing to report on the eve of the 2004 New Hampshire primary ("For Kerry the dam burst after 5PM on Monday") represents a last minute "correction" intended to bring his results into line with those of other pollsters.

While I assume he will be similarly unhappy about this piece, he has an easy remedy: Release the one-day results (and part-day Sunday results) used to calculate his rolling-averages. Each daily sample should exceed 250 interviews (more than some complete surveys we have reported in recent months). If nothing else, the day-by-day results will further our understanding of what happened in New Hampshire. And if the single-day results are consistent with both the previously reported data and Mr. Zogby's post election claims, I will happily apologize for any implication to the contrary.

On a related note, the Suffolk University survey, the one that also showed Obama's lead continuing to expand in interviews conducted on Sunday and Monday night, provided full cross-tabulations for all of their data releases (these are still online -- see the links in the left column of the Suffolk web site). While the Suffolk University pollsters do not break out single-day results, they do provide the demographic and regional composition of each day's sample. Their sample composition in terms of age, gender and region showed only trivial variation over the last 3 to 4 releases -- certainly nothing that would explain away the continuing improvement in the Obama margin in their final tabulations. For what it's worth, the Suffolk poll featured the largest "undecided" percentage and the smallest sample sizes of the four pollsters that continued to call on Monday.

Finally, the Rasmussen Reports result is also intriguing, because their final release added 571 interviews conducted on Monday to the 1,203 conducted on Saturday and Sunday. As such, we can do a rough extrapolation, which shows Obama leading by only a point (35% to 34%, but there is much room for rounding error here) in the Rasmussen interviews conducted Monday night. Rasmussen hinted at this result in his own post-mortem but did not release single night numbers. For the Rasmussen data, at least, the numbers add up.
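The extrapolation itself is simple algebra. A sketch, using hypothetical toplines chosen to reproduce the 35% to 34% Monday figure (substitute Rasmussen's actual release numbers to redo it properly):

```python
n_weekend, n_monday = 1203, 571
n_total = n_weekend + n_monday   # 1,774 interviews in the final release

def monday_only(combined_pct, weekend_pct):
    """Solve combined = (n_w*weekend + n_m*monday) / n_total for monday."""
    return (n_total * combined_pct - n_weekend * weekend_pct) / n_monday

# Hypothetical toplines (final combined release vs. weekend-only release):
for cand, combined, weekend in [("Obama", 37.0, 38.0), ("Clinton", 30.0, 28.0)]:
    print(cand, round(monday_only(combined, weekend), 1))
# Obama ~34.9, Clinton ~34.2 -- a one-point race, though rounding error
# in the published percentages leaves a lot of slop in such estimates.
```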

PS: Robert Wright, in an election day email that Mickey Kaus blogged earlier today, noticed a similar pattern in the final CNN/WMUR/UNH release, which added 258 interviews of likely Democratic primary voters conducted Sunday night to the 341 gathered on Saturday and early Sunday that they had released previously. The Obama margin narrowed by a single percentage point -- neither a statistically significant difference nor enough to enable a meaningful extrapolation, though it is consistent with the direction of the final Rasmussen and ARG releases.


Norpoth: New Hampshire's Crystal Ball in 2008


(Today's Guest Pollster contribution comes from Professor Helmut Norpoth of Stony Brook University).

New Hampshire voters may mystify pollsters and pundits, but they have acquired an uncanny sense for picking candidates who go on to the White House. Whatever accounts for Hillary Clinton's surprising showing in her party's primary in New Hampshire, that victory makes her the best bet for Democrats to win the general election in November; likewise, John McCain's victory in the Republican primary in New Hampshire makes him the best hope for the GOP to retain the White House in November. These predictions are derived from a forecast model I developed that uses primary performance as the sole short-term predictor of the vote in the general election (the "Primary Model"). I have applied the model, with slight modifications, in the last three presidential elections, in which it correctly predicted the winners of the popular vote several months before Election Day. (See my 2004 paper in PS: Political Science & Politics.) According to the forecast, a race between the two New Hampshire winners would be a nail-biter, with Clinton edging McCain by a margin of just a single percentage point of the two-party vote.

The use of primary elections to predict the outcome of the vote in the general election has some compelling advantages. One, it puts the estimation of a forecast model on a firm footing by letting us use elections all the way back to 1912, when presidential primaries were inaugurated. Two, it makes it possible to include both incumbent and opposition candidates in the model; granted, the incumbent candidate's performance may prove more powerful, but the effect of the out-party's primary showing is not negligible. And finally, the use of primaries as a predictor permits an unconditional forecast of the November vote at a very early moment. No ifs and buts. If one is willing to go with the outcome of the New Hampshire Primary, one can do it right now. The only uncertainty that remains is which of the match-ups will result from the nomination process. Chances are we may not have to wait until the national conventions.

To measure primary performance in a standard format that allows for comparison across elections with varying numbers of candidates, I use an equivalent of the two-party vote in general elections. A candidate's primary showing is expressed as his or her vote relative to that of the winner (or, in the case of the winner, relative to the second strongest candidate). For incumbent-party candidates, the measure is adjusted, depending on whether they are sitting presidents or not. Moreover, the New Hampshire Primary is used only since 1952, when the state switched to a presidential-preference type of primary; prior to 1952, the model relies on the vote in all primaries.

Even though primary performance is the key, giving the model its name, the Primary Model also enlists a cyclical pattern of the presidential vote: the tenure of a party in the White House typically lasts two to three terms. A compelling explanation for that dynamic is the term limit in presidential elections. Except for FDR, American presidents have eschewed running for more than two terms, and they have since been barred from doing so. The rule guarantees that incumbent presidents are missing from those contests in some periodic fashion, as is the case in 2008. In many such instances the absence of a sitting president with a high degree of popularity may improve the chances of the opposition party of capturing the White House. Given his high approval rating, Bill Clinton's ineligibility in 2000 probably hurt Democratic prospects that year, although the absence of a much less popular George W. Bush in 2008 may be a blessing for the GOP. In any event, elections without a sitting president in the race tend to favor the opposition party more than elections with an incumbent running for another term. The Primary Model handles this dynamic by way of an autoregressive process (the presidential vote in the two previous general elections). In addition, given the use of elections as far back as 1912, the model applies an adjustment for pre-1932 long-term partisanship.

From 1912 to 2004, the out-of-sample forecasts of the Primary Model pick the winner of the popular vote in 23 of the 24 elections, with 1960 being the only exception (and yes, that record counts Gore's popular-vote win in 2000 as a hit). The prediction equation for the presidential vote in 2008 (expressed as the Democratic share of the major-party vote) is:

50.7 - .361 (RPRIM - 55.6) + .124 (DPRIM - 47.1) + .368 (VOTE04) - .383 (VOTE00)
= 49.4 - .361 (RPRIM - 55.6) + .124 (DPRIM - 47.1)

where RPRIM and DPRIM represent the primary support of the Republican (incumbent-party) and Democratic (opposition-party) nominees for President, capped within a 30-70 percent range, and VOTE04 and VOTE00 the Democratic vote shares in 2004 (48.8%) and 2000 (50.3%). The Republican term enters with a negative sign because the Democratic vote is used as the dependent variable. The formula produces the following forecasts of match-ups between the leading contenders in either party (the vote for each match-up being the Democratic percentage of the two-party vote):

[Image: 011508gpc.png - forecasts for match-ups between the leading contenders]

The PRIMARY MODEL predicts that in a race of New Hampshire Primary winners, Democrat Hillary Clinton would narrowly defeat Republican John McCain in the November general election (50.5 to 49.5 percent of the two-party vote). The predicted margin of victory, however, is so small that the confidence attached to this forecast is less than 60 percent, given the size of the forecast standard error (2.5). In match-ups between the Republican primary winner and Democratic primary losers, McCain would end up in a virtual tie with Barack Obama (49.9 to 50.1 percent) while defeating John Edwards (52.1 to 47.9 percent) by a margin close to one unit of the forecast standard error (2.6). At the same time, in match-ups between the Democratic primary winner and Republican primary losers, Clinton would dispatch Mitt Romney, Mike Huckabee, and Rudolph Giuliani by margins way beyond that error range. Finally, in match-ups between primary losers, both Obama and Edwards would beat any of the Republicans, and quite handily so in most cases.

That is no sign of partisan bias. Rather, it has to do with the Model assigning more weight to the primary performance of incumbent-party candidates than to the performance of out-party candidates. Nominating a primary loser, or even a candidate with a lackluster primary showing, costs the incumbent party more dearly than it does the out-party. Candidates not listed in the forecast table would do no better than the weakest one in their respective parties.
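For readers who want to check the arithmetic, here is a minimal sketch in Python of how the published equation turns primary results into a forecast. The coefficients are those shown above; the primary-performance inputs are my own rough two-candidate calculations from the New Hampshire returns rather than the model's exact adjusted measures, so the output lands near, but not exactly on, the published 50.5.

```python
# Sketch of the Primary Model arithmetic described above. The coefficients
# are the published ones; the primary-performance inputs are back-of-the-
# envelope values from the 2008 New Hampshire returns, so treat the output
# as approximate, not as the model's own number.

def primary_measure(own_vote, rival_vote):
    """Two-candidate equivalent of primary support, capped at 30-70."""
    share = 100.0 * own_vote / (own_vote + rival_vote)
    return min(max(share, 30.0), 70.0)

def forecast_dem_share(rprim, dprim):
    """Predicted Democratic share of the major-party vote in 2008."""
    # 49.4 folds in the autoregressive terms: .368*48.8 - .383*50.3 + 50.7
    return 49.4 - 0.361 * (rprim - 55.6) + 0.124 * (dprim - 47.1)

# New Hampshire 2008: McCain 37% vs. Romney 32%; Clinton 39.1% vs. Obama 36.4%
rprim = primary_measure(37.0, 32.0)   # ~53.6 for McCain
dprim = primary_measure(39.1, 36.4)   # ~51.8 for Clinton

print(round(forecast_dem_share(rprim, dprim), 1))  # ~50.7, near the published 50.5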


AAPOR Announces Evaluation of NH Polls


My colleagues at AAPOR have just put out this release:

In the wake of the New Hampshire pre-election polls, the American Association for Public Opinion Research (AAPOR) today announced the formation of an ad hoc committee to evaluate pre-election primary poll methodology and the sponsorship of a public forum on the issue.

"Pre-election polls have a long-running record of being remarkably accurate," said AAPOR President Nancy Mathiowetz. "Sixty years ago the public opinion profession faced a crisis related to the poll predictions of the Truman-Dewey race. The way survey researchers reacted then – with a quick, public effort to identify the causes – played a key role in restoring public confidence and improving research methodology."

The work of the ad hoc committee will be twofold: (1) To review and assist in the dissemination of the evaluations currently being conducted by the individual polling organizations who were engaged in polling prior to the New Hampshire primary; and (2) to request and archive the data related to the New Hampshire primary for future scholarly research.

As has become obvious over the last few weeks, we have no shortage of theories of what happened in New Hampshire. What we lack, and what AAPOR is stepping forward to provide, is an effort to collect, archive and evaluate the relevant data and make it available through a public forum. Hopefully, this effort -- like the one in 1948 -- will provide that function.

Interests disclosed: I serve on AAPOR's Executive Council.


Likely Voter Screens and the Clinton Surprise in New Hampshire


(Editor's note: Today's Guest Pollster contribution comes from Professors Robert S. Erikson of Columbia University and Christopher Wlezien of Temple University).

Does the world need one more explanation for the historic failure of the polls to predict Hillary Clinton's victory in the New Hampshire primary? We offer another possible account. Ours does not require unusual last-minute voter shifts in preference, voters lying to pollsters, or any disconnect between the campaign story line in the media and voter decision-making.

We suggest as the possible culprit the way pollsters employ their likely voter screens. Pollsters may have been tricked not by voters shifting their candidate preferences but by a rapid last-minute shift in enthusiasm among Clinton supporters. It may be that significant numbers of Clinton supporters were disinclined to vote at the time the pollsters were doing their final interviews but then regained their interest just in time to vote. In short, the surge to Clinton could have been due simply to uncounted Clinton supporters -- dismissed by the pollsters as unlikely voters -- regaining their interest in voting.

According to most accounts, the late Clinton gains stemmed from sympathy for Hillary after her rough treatment in the media, Hillary's response to the questioning of her likeability in the final debate, and her tears on election eve. But how did this response come about? Was it due to truly undecided voters with their blank slates turning overwhelmingly to Hillary? Exit polls show no evidence of this. And it is unlikely that voters tuning in late would have seen the flow of the news as moving in Hillary's direction. It is the idea that late deciders could have broken her way that is so jarring to media watchers.

If late-deciders did not split for Hillary, maybe it was Obama supporters changing their minds? But it is even more implausible that voters who followed the campaign and settled on Obama as their choice would follow the late news and see a reason to vote for Hillary. Once people "make up their minds" in a campaign they rarely change and then only for seemingly good reasons. Did Obama supporters have reason to shift? Would the internal dialog of massive numbers of voters be: "I support Obama because he is such an exciting candidate...No wait, Hillary just shed a tear so I'll vote for her instead"?

Rather than voters deciding late for Hillary or shifting late to Hillary, we posit that her proportion of eligible voters in the New Hampshire primary was fairly steady in the final weeks. What changed was the enthusiasm of her supporters. It may be that Hillary supporters followed the news and became disillusioned by her decline in Iowa, her loss of momentum, and the general negative arc of her campaign. They were watching and they were responding to the media's storyline. Their response was not to shift to another candidate but to become dispirited. If interviewed by pollsters, their lessening enthusiasm placed them disproportionately in the "unlikely voter" column. Then, after the pollsters stopped calling, Hillary's supporters gained the enthusiasm necessary to motivate them to vote. This may be because Hillary showed her more human side late in the campaign, or because her campaign was on the brink, or for other less obvious reasons. The point is that the preferences of these voters were undercounted by pollsters. No unusual number of previously undecided voters or former Obama supporters is necessary to account for her late surge at the polls.**

Is our story true? We know that shifts in net enthusiasm from one candidate's supporters to the other's are more volatile than shifts in net preference. We also know that pollsters can be very sensitive to these shifts in enthusiasm when identifying likely voters. (See our paper from 2004 on "Likely Voters and the Assessment of Campaign Dynamics" in the Public Opinion Quarterly). Was it simply a very late shift in enthusiasm that caused the New Hampshire polls to go wrong?

Pollsters hold in their data banks the evidence that would tell whether our conjecture is right or wrong. Our suspicion is that voter preferences among potential Democratic primary voters were more stable over the campaign's final weeks than generally realized. The shifting dynamic evident in the polls, we suggest, was exaggerated by daily shifts in enthusiasm that changed the composition of who got counted as a "likely voter." If likely voters first shifted against Hillary and then toward her, the shifting membership of the "unlikely voters" may have "surged" back and forth in the opposite way. It would be interesting to see if this was the case.
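To make the mechanism concrete, here is a toy simulation -- an editorial illustration, not Erikson and Wlezien's analysis -- in which the two-candidate preferences of eventual voters never change, yet a likely voter screen shows an Obama lead whenever Clinton supporters are temporarily dispirited. Every number in it is invented.

```python
# A toy simulation of the enthusiasm-shift conjecture: fixed preferences,
# shifting enthusiasm, and a likely voter screen that only counts
# respondents who pass. All parameters are hypothetical.
import random

random.seed(1)

def simulate_poll(clinton_enthusiasm, obama_enthusiasm, n=20_000):
    """Interview n eventual voters; count only those passing the screen."""
    counts = {"Clinton": 0, "Obama": 0}
    for _ in range(n):
        # Fixed preferences among eventual voters: 52% Clinton vs. 48% Obama
        # of the two-candidate vote (roughly the actual 39.1 to 36.4 split).
        choice = "Clinton" if random.random() < 0.52 else "Obama"
        p_likely = clinton_enthusiasm if choice == "Clinton" else obama_enthusiasm
        if random.random() < p_likely:          # the likely voter screen
            counts[choice] += 1
    total = counts["Clinton"] + counts["Obama"]
    return {k: round(100.0 * v / total, 1) for k, v in counts.items()}

# Final weekend: dispirited Clinton supporters fail the screen more often,
# so the "likely voter" sample shows Obama well ahead.
print(simulate_poll(clinton_enthusiasm=0.55, obama_enthusiasm=0.80))
# Election day: enthusiasm recovers, and Clinton's narrow edge reappears.
print(simulate_poll(clinton_enthusiasm=0.80, obama_enthusiasm=0.80))
```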

**Of course the pattern also could be explained by changes in enthusiasm among Obama supporters that mirrored what we have posited for Clinton supporters, flowing after the big victory in Iowa and then ebbing after the pollsters left the field.

Typos corrected.


But What About an "Unbounce?"


One of the theories raised to explain the problematic New Hampshire primary polls is a late shift, perhaps on the heels of the Hillary Clinton "tears" story, that polls missed either because they stopped interviewing or completed the bulk of their calls on Sunday or earlier. Two prominent pollsters say such a shift is unlikely given one result in the exit polls.

For example, Andrew Kohut, writing in his must-read op-ed in yesterday's New York Times, concludes:

Yes, according to exit polls the 17 percent of voters who said they made their decision on Election Day chose Mrs. Clinton a little more than those who decided in the past two or three weeks. But the margin was very small — 39 percent of the late deciders went for Mrs. Clinton and 36 percent went for Mr. Obama. This gap is obviously too narrow to explain the wide lead for Mr. Obama that kept showing up in pre-election polls.

Gary Langer, director of Polling for ABC News, agrees in a lengthy review posted this morning:

[T]he exit poll asked voters the time of their decision. Seventeen percent said they decided on Election Day; they voted for Clinton over Obama by a 3-point margin, 39 to 36 percent – hardly a significant swing from the overall result (Clinton +2). Those who said they decided in the previous three days, 21 percent, favored Obama over Clinton by 3 points, 37-34 percent – further deflating the late-decider argument. Those who decided previously, 61 percent of voters, favored Clinton over Obama by 41-37 percent.

For reference, here is the exit poll data that Kohut and Langer cite:

01-11 2008 time of decision.png

But wait. Putting aside the issue of whether respondents can accurately recall when they reached a decision, why does the absence of a big Hillary bump among the late deciders rule out the possibility of some late shift away from Obama? Keeping in mind where Clinton stood in polls before Iowa, we would not be looking for a Clinton bump as much as an Obama "unbump" (to paraphrase the comment left yesterday by my friend and reader Mark Lindeman).

Let me explain. Start by looking at our chart of the polls conducted in New Hampshire during 2007. Our trend estimate shows Clinton winning between 35% and 40% of voters between June and early November. As always, some individual polls were a little higher, some lower. Her support declined slightly in December, to an average of about 32%, with the usual variation slightly higher and lower. In December, the undecided category averaged 11%, and support for Richardson, Biden, Dodd and Kucinich was roughly twice what they received on Election Day. Give Clinton a proportional share of the undecideds and of those who moved away from the single-digit candidates -- the arithmetic is sketched below -- and she was headed to roughly 37% of the vote. Thus, putting aside whatever happened in the weekend after Iowa, Clinton would not have needed any massive last-minute gains to get to the 39% she received in the final count.

[Image: NHTopzDems400m0111.png - trend chart of 2007 New Hampshire Democratic primary polls]
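For the arithmetic-minded, the allocation above can be written out as a short calculation. A sketch, with Clinton's December standing and the undecided share as stated; the single-digit candidates' December total is my own assumption, pegged to the note above that they polled roughly twice their eventual vote.

```python
# The proportional-allocation arithmetic from the paragraph above.
clinton_dec = 32.0    # Clinton's December polling average
undecided = 11.0      # December undecided average
minor_dec, minor_eday = 12.0, 6.0   # assumed: Richardson, Biden, Dodd, Kucinich

# Votes freed up: the undecideds plus support leaving the single-digit candidates.
pool = undecided + (minor_dec - minor_eday)

# Give Clinton a share of that pool proportional to her overall standing.
clinton_projected = clinton_dec + pool * (clinton_dec / 100.0)
print(round(clinton_projected, 1))  # ~37.4 -- the "roughly 37%" above
```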

Now on the other hand, if Obama had surged after Iowa to a double digit win as the polls seemed to predict, we certainly would have expected to see late deciders favoring him heavily. But if that weekend bump collapsed (or if it never existed in the first place)? In that case, we would not expect to see much of a difference between early and late deciders. Clinton needed only a modest increase in support to reach 39% of the vote.

Consider a historical example. In 1984, Gary Hart surged to victory along a trajectory similar to the one the polls seemed to forecast for Obama last week. According to the ABC News/Washington Post polls compiled by Samuel Popkin in his book, The Reasoning Voter, Hart trailed Mondale 13% to 37% just before the Iowa Caucuses, with John Glenn running second at 20%. The Iowa Caucuses were held on a Monday in 1984, a full week before the New Hampshire primary. In interviews conducted Wednesday through Friday after Iowa, Mondale's numbers held constant (38%) while Hart moved up (to 24%). Over the course of the week, Mondale steadily lost support while Hart continued to rise, moving ahead on the Monday before the primary by an eight-point margin (35% to 27% -- although ABC reported a three-day rolling average at the time showing the candidates tied). The next day, Hart defeated Mondale, 37% to 28%.

The ABC exit poll also asked voters when they made up their mind, and the pattern is what we would expect. Among those who made up their minds over the final weekend or the two days just before the primary, Hart led Mondale by a four-to-one margin (56% to 14%).

[Image: 01-11 1984 time of decision.png - 1984 ABC exit poll, vote by time of decision]

So let's consider a hypothetical question: What would have happened if late deciders had broken for Barack Obama by the same margins as they preferred Hart to Mondale in 1984? If I go to my spreadsheet, play "what if" and imagine that late deciders -- those who made up their minds over the last three days -- had preferred Obama 53% to 20% (while all other preferences held constant), Obama would have defeated Hillary Clinton by 10 points (43% to 33%).
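Here is the same "what if" as a few lines of code rather than a spreadsheet, using the time-of-decision group sizes quoted earlier in this post (rounded so the shares sum to one) and substituting the hypothetical 53-20 late break:

```python
# Reproducing the "what if" above: hold the preferences of earlier deciders
# constant, but let the final-three-days deciders break 53-20 for Obama,
# roughly as late deciders broke for Hart in 1984.
groups = [
    # (share of electorate, Clinton %, Obama %)
    (0.62, 41, 37),   # decided earlier than the final three days
    (0.38, 20, 53),   # final three days, counterfactually breaking to Obama
]

clinton = sum(share * c for share, c, o in groups)
obama = sum(share * o for share, c, o in groups)
print(round(clinton), round(obama))  # 33 vs. 43: the 10-point Obama win above
```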

Of course, none of this explains exactly what happened last week, and the time-of-decision exit poll data is just one small piece of the puzzle. Opinion polls may have accurately measured a post-Iowa bounce for Barack Obama that "unbounced" over the last 24 hours, or the apparent surge may have been an artifact of some survey error (or perhaps some combination of both). But either way, the lack of a difference between late and early deciders does not tell us much. It certainly does not preclude -- by itself -- the possibility of a shift of voters to Obama on Saturday and Sunday that shifted back to Clinton on Monday.


NH: A Lesson From 1948


My second** NationalJournal.com column (which, like all these contributions, will be free to non-subscribers for the next week): The lesson from the polling debacle of 1948 that pollsters should apply in the aftermath of this week's polling problems in New Hampshire.

**PS: Second? It's been a busy week. Details on the new column here.



New Hampshire: So What Happened?


There is obviously one and only one topic on the minds of those who follow polls today. What happened in New Hampshire? Why did every poll fail to predict Hillary Clinton's victory?

Let's begin by acknowledging the obvious. There is a problem here. Even if the discrepancy between the last polls and the results turns out to be the result of a big last-minute shift to Hillary Clinton that the polls somehow missed (and that certainly sounds like a strong possibility), just about every consumer of the polling data got the impression that a Barack Obama victory was inevitable. One way or another, that's a problem.

For the best summary of the error itself, I highly recommend the graphics and summary Charles Franklin posted earlier today. Here's a highlight of how the result compared to our trend estimates:

What we see for the Democrats is quite stunning. The polls actually spread very evenly around the actual Obama vote. Whatever went wrong, it was NOT an overestimate of Obama's support. The standard trend estimate for Obama was 36.7%, the sensitive estimate was 39.0% and the last five poll average was 38.4%, all reasonably close to his actual 36.4%.

It is the Clinton vote that was massively underestimated . . . Clinton's trend estimate was 30.4%, with the sensitive estimate even worse at 29.9% and the 5 poll average at 31.0%, compared to her actual vote of 39.1%.

So what went wrong? We certainly have no shortage of theories. See Ambinder, Halperin, Kaus, and, for the conspiratorially minded, Friedman. The pollsters that have weighed in so far (that I've seen at least) are ABC's Gary Langer (also on video), Gallup's Frank Newport, Scott Rasmussen and John Zogby. Also, Nancy Mathiowetz, president of the American Association for Public Opinion Research (AAPOR) has blogged her thoughts on Huffington Post.

Figuring out what happened and sorting through the possibilities is obviously a much bigger task than one blog post the morning after the election. But let me quickly review some of the more plausible or widely repeated theories and review what hard evidence we have, for the moment, regarding each.

1) A last minute shift? - Perhaps the polls had things about "right" as of the rolling snapshot taken from Saturday to Monday, but missed a final swing to Hillary Clinton that occurred over the last 24 hours and even as voters made their final decisions in the voting booth. After all, we knew that a big chunk of the Democratic electorate remained uncertain and conflicted, with strong positive impressions of all three Democratic front-runners. The final CNN/WMUR/UNH poll showed 21% of the Democrats "still trying to decide" which candidate they would support, and the exit poll showed 17% reported deciding on Election Day with another 21% deciding within the last three days. Polls showed Clinton polling in the mid to upper 30s during the late fall and early winter before a decline in December. Perhaps some supporters simply came home in the final hours of the campaign.

I did a quick comparison late last night of the crosstabs from the exit polls and the final CNN/WMUR/UNH survey. Clinton's gains looked greatest among women and college-educated voters. That pattern, if it also holds for other polls (a big if), seems suggestive of a late shift tied to the intense focus on Clinton's passionate and emotional remarks, especially over the last 24 hours of the campaign.

2) Too Many Independents? - One popular theory is that polls over-sampled independent voters who ultimately opted for a Republican ballot to vote for John McCain. I have not yet seen any hard turnout data on independents from the New Hampshire Secretary of State, but the exit poll does not offer promising evidence for this theory. As I blogged yesterday, final Democratic polls put the percentage of registered independents (technically "undeclared" voters) at between 26% and 44% (on the four polls that released the results of a party registration question). The exit poll reported the registered independent number as 42%, with another 6% reporting they were new registrants. So if anything, polls may have had the independent share among Democrats too low.

On Republican samples, pre-election pollsters reported the registered independent numbers ranging between 21% and 34%. The exit poll put it at 34%, with 5% previously unregistered. So here too, the percentage of independents may have been too low.

Apply those percentages to the actual turnout, do a little math (sketched in code below), and you get an estimate of how the undeclared voters split: roughly 60% took a Democratic ballot and 40% a Republican. That is precisely the split that CNN/WMUR/UNH found in their last poll, although other polls suggested a somewhat narrower division.

Keep in mind that the overall turnout was 526,671 (or 53.3% of eligible adults). Eight years ago (the last time both parties had contested primaries), it was 396,385 (or 44.4% of eligible adults at the time). That helps explain why we may have seen an increase in independents in both parties.

Of course, we are missing a lot of data here: Nothing yet on undeclared voter participation from the Secretary of State, and roughly half the pollsters never released a result for party registration.
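For those who want to see the "little math" from point 2 spelled out, here it is in code. The Democratic turnout figure is the one quoted in the next item; the Republican figure is simply the remainder of the total, and the separately reported new registrants are ignored for simplicity.

```python
# Apply the exit poll's registered-independent percentages to each party's
# actual turnout to estimate how undeclared voters split their ballots.
dem_turnout, total_turnout = 287_849, 526_671
rep_turnout = total_turnout - dem_turnout

dem_undeclared = dem_turnout * 0.42   # exit poll: 42% of Democratic voters
rep_undeclared = rep_turnout * 0.34   # exit poll: 34% of Republican voters

share = dem_undeclared / (dem_undeclared + rep_undeclared)
print(round(100 * share))  # ~60% of undeclared voters took a Democratic ballot
```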

3) Wrong Likely Voters? OK, so maybe they had the independent share right, but perhaps pollsters still sampled the wrong "likely voters" by some other measure. The turnout above means that pollsters had to try to select (or model) a likely electorate amounting to roughly half of the adults in New Hampshire they reached with a random-digit-dial sample.

Getting the right mix is always challenging, possibly more so because the Democratic turnout was so much higher than in previous elections. That's an argument blogged today by Allan McCutcheon of Edison Research:

In 2004, a (then) record of 219,787 voters turned out to vote--the previous record for the Democratic primary was in 1992, when 167,819 voters participated. This year, a record-shattering 287,849 voters participated in the New Hampshire Democratic primary--including nearly two thirds (66.3%) of the state's registered Democrats (up from 43.3% in 2004). Simply stated, the 2008 New Hampshire Democratic primary had a voter turnout rate that resembled a November presidential election, not a usual party primary, and the likely voter models for the polling organizations were focused on a primary--this time, that simply did not work.

One way to assess whether polls sampled the wrong kinds of voters would be to look carefully at their demographics (gender, age, education, region) and see how they compared to the exit poll and vote return data. Unfortunately, as is so often the case, only a handful of New Hampshire pollsters reported demographic composition.

4) The Bradley/Wilder effect? The term, as Wikipedia tells us, derives from the 1982 gubernatorial campaign of Tom Bradley, then the longtime African-American mayor of Los Angeles. Bradley led in pre-election polls but lost narrowly. A similar effect, in which polls understated the support for the opponents of African-American candidates, seemed to hold in various instances during the 1980s. Consider this summary of polls compiled by the Pew Research Center for a 1998 report, which they updated in February 2007:

[Image: nhmark0109.png - Pew Research Center summary of polls in contests with African-American candidates]

Note that, in almost every instance, the polls were generally about right in the percentage estimate for the African-American candidate but tended to underestimate the percentage won by their white opponents. The theory is that some respondents are reluctant to share an opinion that might create "social discomfort" between the respondent and the interviewer, such as telling a stranger on the telephone that you intend to oppose an African-American candidate.

Of course, the Pew Center also looked at six races for Senate and Governor in 2006 that featured an African-American candidate and did not see a similar effect. Also keep in mind that all of the reports mentioned above that show the effect were from general election contests, not primaries.

What other evidence might suggest the Bradley/Wilder effect operating in New Hampshire in 2008? We might want to consider whether the race of the interviewer or the use of an automated (interviewer-free) methodology would have an effect, although these kinds of analyses are difficult because other variables can confound the results. For what it's worth, the final Rasmussen automated survey had Obama leading by seven points (37% to 30%), roughly the same margin as the other pollsters. We might also look at whether pushing undecided voters harder helped Clinton more than other candidates.

Update: My colleagues at AAPOR have made three relevant articles from Public Opinion Quarterly available to non-subscribers on the AAPOR web site.

5) Non-response bias? We would be crazy to rule it out, since even the best surveys are getting response rates in the low twenty percent range. If Clinton supporters were less willing to be interviewed last weekend than Obama supporters, it might contribute to the error. Unfortunately, it is next to impossible to investigate, since we have little or no data on the non-respondents. However, if pollsters were willing to be completely transparent, we might compare the results among those with relatively high response rates to those with lower rates. We might also check to see if response rates declined significantly over the final weekend.

6) Ballot Placement? Gary Langer's review points to a theory offered by Stanford University Prof. Jon Krosnick that Clinton's placement near the top of the New Hampshire ballot boosted her vote total. Krosnick believes that ballot order netted Clinton "at least 3 percent more votes than Obama."

7) Weekend Interviewing? I blogged my concerns on Sunday. Hard data on whether this might be a factor are difficult to come by, but it is certainly an issue worth pursuing.

8) Fraud? As Marc Ambinder puts it, some are ready to believe "[t]here was a conspiracy, somehow, because pre-election polls are just so much more valid than actual vote counts." Put me down as dubious, but Brad Friedman's Brad Blog has the relevant Diebold connections for those who are interested.

Again, no one should interpret any of the above as the last word on what happened in New Hampshire. Most of these theories deserve more scrutiny and I agree with Gary Langer that "it is incumbent on us - and particularly on the producers of the New Hampshire pre-election polls - to look at the data, and to look closely, and to do it without prejudging." This is just a quick review, offering what information is most easily accessible. I am certain I will have more to say about this in coming days.


Polling Errors in New Hampshire


[Image: 1NHPollErrorDem19.png - poll errors, New Hampshire Democratic primary]

Hillary Clinton's stunning win over Barack Obama in New Hampshire is not only sure to be a legendary comeback but equally sure to become a standard example of polls picking the wrong winner. By a lot.

There is a ton of commentary already out on this, and much more to come. Here I simply want to illustrate the nature of the poll errors. The charts show the scope of the problem and help clarify the issues. I'll be back later with some analysis of these errors, but for now let's just see the data.

In the chart, the "cross-hairs" mark the outcome of the race, 39.1% Clinton, 36.4% Obama. This is the "target" the pollsters were shooting for.

The "rings" mark 5%, 10% and 15% errors. Normal sampling error would put a scatter of points inside the "5-ring", if everything else were perfect.

In fact, most polling shoots low and to the left, though often within or near the 5-ring. The reason is undecided voters in the survey. Unless the survey organization "allocates" these voters by estimating a vote for them, some 3-10% of respondents in a typical election survey are left out of the final vote estimate. Some measures of survey accuracy divide the undecided, either evenly or proportionately across candidates; there are good reasons to do that, but that discussion belongs in another post. Since what the pollsters publish are (almost always) the unallocated numbers, it seems fair to plot here the percentages the pollsters published, not versions with the undecided reallocated.
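To be concrete about what the rings mean: each poll can be scored by the distance between its (Clinton, Obama) estimate and the actual result. A minimal sketch, with two invented polls for illustration:

```python
# Score a poll by its Euclidean distance from the actual outcome; the
# distance determines which "ring" it falls in. The two polls are invented.
import math

ACTUAL = (39.1, 36.4)  # (Clinton, Obama)

def ring(poll):
    """Distance from the actual result, in percentage points."""
    return math.hypot(poll[0] - ACTUAL[0], poll[1] - ACTUAL[1])

for name, poll in [("hypothetical poll A", (30.0, 37.0)),
                   ("hypothetical poll B", (36.0, 35.0))]:
    print(name, round(ring(poll), 1))  # A: ~9.1, outside the 5-ring; B: ~3.4
```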

What we see for the Democrats is quite stunning. The polls actually spread very evenly around the actual Obama vote. Whatever went wrong, it was NOT an overestimate of Obama's support. The standard trend estimate for Obama was 36.7%, the sensitive estimate was 39.0% and the last five poll average was 38.4%, all reasonably close to his actual 36.4%.

It is the Clinton vote that was massively underestimated. Every New Hampshire poll was outside the 5-Ring. Clinton's trend estimate was 30.4%, with the sensitive estimate even worse at 29.9% and the 5 poll average at 31.0% compared to her actual vote of 39.1%.

So the clear puzzle that needs to be addressed is whether Clinton won on turnout (or Obama's turnout was low), or whether last-minute decisions broke overwhelmingly for Clinton. Or whether the pollsters' likely voter screens mis-estimated the makeup of the electorate. Or whether the weekend hype led to a feeding frenzy of media coverage that was very favorable to Obama and very negative toward Clinton, which depressed her support in the polls but oddly did not lower her actual vote.

On the Republican side we see a more typical pattern, and with better overall results. About half of the post-Iowa polls were within the 5-ring for the Republicans, and most of the rest within the 10-ring.

[Image: 2NHPollErrorRep19.png - poll errors, New Hampshire Republican primary]

As expected, errors tend to be low and left, but the overall accuracy is not bad. This fact adds to the puzzle in an important way:

If the polls were systematically flawed methodologically, then we'd expect similar errors with both parties. Almost all the pollsters did simultaneous Democratic and Republican polls, with the same interviewers using the same questions with the only difference being screening for which primary a voter would participate in. So if the turnout model was bad for the Democrats, why wasn't it also bad for the Republicans? If the demographics were "off" for the Dems, why not for the Reps?

This is the best reason to think that the failure of polling in New Hampshire was tied to swiftly changing politics rather than to failures of methodology. However, we can't know until much more analysis is done, and more data about the polls themselves become available.

A good starting point would be for each New Hampshire pollster to release their demographic and crosstab data. This would allow sample compositions to be compared, along with voter preferences within demographic groups. Another valuable bit of information would be voter preference by day of interview.

In 1948 the polling industry suffered its worst failure when confidently predicting Truman's defeat. In the wake of that polling disaster, the profession responded positively by appointing a review committee which produced a book-length report on what went wrong, how it could have been avoided and what "best practices" should be adopted. The polling profession was much the better for that examination and report.

The New Hampshire results are not on the same level of embarrassment as 1948, but they do represent a moment when the profession could respond positively by releasing the kind of data that will allow an open assessment of methods. Such an assessment may reveal that in fact the polls were pretty good, but the politics just changed dramatically on election day. Or the facts could show that pollsters need to improve some of their practices and methods. Pollsters have legitimate proprietary interests to protect, but big mistakes like New Hampshire mean there are times when some openness can buy back lost credibility.

Cross-posted at Political Arithmetik.


NH Results Thread


6:08 p.m. Eastern time - The Atlantic's Marc Ambinder is first out of the box with hints from the New Hampshire exit poll:

EXIT POLLS: GOP: 3 in 10 independents are GOP voters...many late deciders... McCain more electable than Romney...33% say economy is biggest issue followed by Iraq (22%) .... Democrats: 46% made up minds within the last week.. 4 in 10 are independents.... HRC's favorability: 73%; Obama's: 84%; ... 36% say economy is top issue....

MSNBC is also reporting that nearly half of the Democrats voting in New Hampshire are independents (43%) and that 38% of the Republicans are independents. Those numbers are within range but on the high side of what pre-election polls were reporting this week for both parties, although the Democratic-Republican split is more favorable to the Democrats than the CNN/WMUR/UNH and Fox News polls reported. See the post I updated just a few minutes ago for more details.

I am headed home...but the comments section is open, please post your questions. Apologies if you get the dreaded "too many comments" error -- you did nothing wrong. We will squash that bug soon.

7:25 - Via Ben Smith comes this report from ABC News posted at 6:00:

ABC News' Gary Langer Reports: Based on preliminary exit poll results from the New Hampshire primaries, Independents are turning out in substantial but customary numbers.

Preliminary exit poll results indicate that just over four in 10 voters in the New Hampshire Democratic primary are independents, compared with 48 percent in 2004 and a record 50 percent in 1992.

To be clear, the numbers from 2004 and 1992 that Langer cites are of party identification. The numbers cited by MSNBC above may be as well. The numbers I reported earlier from pre-election surveys are mostly party registration. And there is a difference, but I have no idea what that difference might imply.

7:50 - Exit poll tabulations will be posted on MSNBC at these links (Democrats, Republicans) when the polls close at 8 p.m. Eastern time. CNN will presumably post tabulations as well. See my post from earlier this morning for more information on what to make of these numbers when they appear.

8:04 - CNN has tabulations up for Democrats and Republicans.

8:11 - In the comments section below, Mark Lindeman is posting extrapolations from the current candidate estimates used to weight the tabulations posted on CNN and MSNBC. Look there for further updates.

8:28 - The tabulations on the Democratic side indicate that Obama's advantage was narrower among those who decided in the last few days than among those who decided over the last month (i.e. no big break to Obama in the closing days). Obama's margin over Clinton (using the current estimates, which will likely change over the next few hours): among those who decided their vote today, 40% to 37%; in the last three days, 41% to 35%; last week, 47% to 25%; last month, 48% to 32%. Among those who decided "before that," Clinton leads 47% to 32%. And more than a third of voters (37%) say they made up their minds today or in the last three days.

10:33 - NBC projects Clinton the winner, prompting the following exchange between the MSBNC anchors:

Chris Matthews: [Clinton] has beaten the odds, she has beaten the pollsters, the pundits. Every one of us included who has been trying to follow this campaign and understand it. I think something happened. It must have happened fairly recently, or else the pollsters should find another means of employment.

Keith Olbermann: Well the entire industry was apparently mistaken, it had nothing to do...

Matthews: But every poll. At least with the other side [the Republicans] there was some disagreement, in the Democratic primary, these polls were relentlessly pro-Obama.

12:35 - As I started to write up these final paragraphs, Chris Matthews popped up to say the following on MSNBC: "I'd like to see an inquest of all these polls and the methodology because we always have learned, eventually, what went wrong with polling."

Well, what follows is considerably less than an inquest, but I have been comparing the exit poll tabulations with the last set of cross-tabulations from CNN/WMUR/UNH. Looking at just one poll may turn out to be misleading, so hopefully we can do similar comparisons on a larger group of polls, but based on this initial look, here is what I see:

If there was a problem with this one poll it was not about the composition of the electorate. Were there too few women? Too many independents? Too many young voters? On these three variables, if it erred, the UNH poll erred slightly in Clinton's favor. It had slightly more women, more older voters and more registered independents in the Democratic electorate than the exit poll. The UNH poll did sample slightly more voters with college degrees (61%) than the exit poll (53%), but that difference does not explain Obama's lead. Weight back 61% college educated to 53%, and Obama's lead on the poll shrinks only a little (from 9 to 6 points).
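To illustrate that reweighting calculation: the sketch below uses hypothetical subgroup margins, back-solved to be consistent with the published toplines (UNH did not publish these splits in this form), then shifts the education mix from the poll's 61% college-educated to the exit poll's 53%.

```python
# Reweighting a topline by education mix. The subgroup margins are
# hypothetical values chosen to reproduce the published numbers: a 9-point
# Obama lead at 61% college-educated, shrinking to ~6 points at 53%.
def topline_margin(college_share, college_margin, noncollege_margin):
    """Obama-minus-Clinton margin after weighting the education mix."""
    return (college_share * college_margin
            + (1 - college_share) * noncollege_margin)

college_margin, noncollege_margin = 23.6, -13.9   # hypothetical subgroup margins

print(round(topline_margin(0.61, college_margin, noncollege_margin)))  # ~9
print(round(topline_margin(0.53, college_margin, noncollege_margin)))  # ~6
```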

On the other hand, the discrepancy between the last UNH poll and the result seems concentrated in a few key subgroups. I will post the exact numbers tomorrow once we get the final exit poll tabulations, but virtually all of the difference seems to come from women and college-educated voters. For the moment, when comparing the UNH poll to the exit poll, I see a net 17-point gain for Clinton among women compared to a 5-point gain among men, and a 13-point net gain among college-educated voters compared to a one-point net loss among those with no college degree.

My new colleague* Ron Brownstein has chronicled the critical importance of college-educated women as swing voters in the Democratic nomination race. More than any other group, they moved to Clinton in the fall after her strong performances in early debates. Yet she appeared to be doing far less well among these voters in Iowa. If the polls missed a last-minute shift to Clinton in New Hampshire, considering the heavily gender-focused coverage of the last 48 hours of the campaign, the most logical place to look is among college-educated women.

Combine that with the exit poll results showing 37% of the Democrats "finally deciding" for whom they would vote in the last three days of the campaign, and we have a pretty good first clue of what happened with the polls in New Hampshire this week.

*There is a hint there for regular readers -- more on that tomorrow (er..later today).

12:50 - ABC polling director Gary Langer has some very worthy first impressions:

There will be a serious, critical look at the final pre-election polls in the Democratic presidential primary in New Hampshire; that is essential. It is simply unprecedented for so many polls to have been so wrong. We need to know why.

But we need to know it through careful, empirically based analysis. There will be a lot of claims about what happened - about respondents who reputedly lied, about alleged difficulties polling in biracial contests. That may be so. It also may be a smokescreen - a convenient foil for pollsters who'd rather fault their respondents than own up to other possibilities - such as their own failings in sampling and likely voter modeling. [....]

The data may tell us; it may not. What's beyond question is that it is incumbent on us - and particularly on the producers of the New Hampshire pre-election polls - to look at the data, and to look closely, and to do it without prejudging.

Definitely worth reading in full.


NH Election Day Thoughts


I have a few thoughts about New Hampshire in my mental "in-box" that I want to try to blog this afternoon...

1) Break to Obama? - Reader "FlyOnTheWall," in a comment posted to my exit poll item this morning, noticed something important in the final round of polls on the Democratic race in New Hampshire:

I was struck by something this morning, looking at the final two days of tracking polls, and was hoping that you could illuminate the issue.

There have been 16 numbers released over the past two days, all conducted entirely since Iowa. Of these polls, 14 have pegged Hillary's support in a very narrow range of 28-31 percent. (The other two are from Suffolk, which has been a consistent outlier throughout primary season, but even Suffolk is only at 34.)

The Obama polls, by contrast, are all over the map. They put him anywhere from 32 to 42 percent, and are fairly evenly distributed over that range. In other words, everyone seems to agree on Hillary's level of support - but what determines the margin is the level of support for Obama.

What gives here?

Several readers have posted responses worth reviewing. Here is my quick take. "Fly" is right about the pattern in the data, as the following table shows. There is less variation in the Clinton percentage than for the other candidates, particularly Obama. But notice that Obama's support generally goes up as the percentage of undecided voters goes down.

[Image: 01-08 final dems.png - final Democratic tracking polls in New Hampshire]

That pattern suggests that, as of the final snapshot, a lot of voters were leaning to Obama but not quite yet decided. That is consistent with what we sometimes call the "incumbent rule." Obviously, Senator Clinton is not an incumbent, but much as they often do with incumbent candidates, voters may have largely made their decision about Clinton yet still be in the process of deciding whether to support her most prominent opponent. The final CNN/WMUR/UNH poll still has 21% of the likely Democratic voters saying they have "considered some candidates but are still trying to decide," including 18% of those who say they will "definitely vote" in the primary. Thus, some voters will carry their uncertainty all the way to their polling place, not making their final decision until they cast a ballot. This pattern usually suggests a "break" to the challenger (or in this case Obama), but as I learned the hard way in 2004, not always. We will know later tonight.

PS: Mickey Kaus and his readers noticed signs of the "incumbent rule" pattern in Iowa.

2) What is the Independent Mix? On First Read this morning, our friend Chuck Todd passed along the following:

Team Romney believes many of the tracking polls could be over-sampling independents. And if the indies move en masse to Obama, it could make for a more conservative GOP electorate, benefiting both Romney and (to a lesser extent) Huckabee.

This observation raises a good question: What is the percentage of independents in the various tracking surveys? Unfortunately, not all pollsters have included the relevant data in their releases, but the following table shows what I was able to gather from the final surveys:

[Image: 01-11 nh ind.png - independent share reported by final New Hampshire surveys]

A few notes. First, not all surveys ask about party the same way. Most of the numbers cited above appear to be based on party registration ("how are you registered?") rather than self-reported party identification (what do you "consider yourself"?). The CBS/New York Times survey, for one, reported party ID results. Second, I certainly may have overlooked party numbers, so please email me or leave a comment if you can fill in the numbers missing above.

Note, the numbers above are not the "independent split" calculation that Noam Scheiber blogs about here, although obviously, those numbers are related. The CNN/WMUR/UNH survey reported a 60% to 40% split to the Democrats on their last survey, Fox reported a 55% to 45% split in the same direction.

Two more points about independents (or more correctly, those whose party registration is "undeclared"). Some data in the CNN/WMUR/UNH report makes it clear that the bigger the turnout, the greater the undeclared contribution in both party primaries. Not surprisingly, as the table below shows, the "definite" voters are less likely to be undeclared than those who say they "may vote" or that they plan to vote but not if an "emergency" comes up.

[Image: 01-07 ind by turnout.png - undeclared share by stated likelihood of voting]

One indirect measure to look for in the exit poll tabulations tonight is the percentage independent in each primary, and how it compares to what the pre-election polls were reporting.

Note: I want to apologize to those who continue to receive the "too many comments" error message when attempting to post a comment here. Suffice it to say, you have not posted too many comments to Pollster.com. We have been trying to squash this bug for months now (without success) and, unfortunately, the very heavy traffic we are experiencing today is aggravating the bug.


Looking for New Hampshire Exit Polls?


Looking for leaked exit poll results from New Hampshire? Sorry to disappoint, but whatever their merits, we are unlikely to see any such leaked results until moments before the polls close.

In past years, the network consortium that conducts the exit polls distributed mid-day estimates and tabulations to hundreds of journalists that would inevitably leak. In 2006, however, the networks adopted a new policy that restricted access to a small number of analysts in a "quarantine room" for most of the day and did not release the results to the networks and subscriber news organizations until just before the polls closed (information that did ultimately leak to blogs). As far as I know, that process will remain in place today.

Here are a few tips for making sense of the exit poll data that you do see tonight:

1) An exit poll is just a survey. Like other surveys, it is subject to random sampling error and, as those who follow exit polls now understand, occasional problems with non-response bias. In New Hampshire (in 1992) and Arizona (in 1996)* primary election exit polls overstated support for Patrick Buchanan, probably because his more enthusiastic supporters were more willing to be interviewed (and for those tempted to hit the comment button, yes, I know that some believe those past errors suggest massive vote fraud -- I have written about that subject at great length).

2) The networks rarely "call" an election on exit poll results alone. The decision desk analysts require a very high degree of statistical confidence (at least 99.5%) before they will consider calling a winner (the ordinary "margin of error" on pre-election polls typically uses a 95% confidence level). They will also wait for actual results if the exit poll is very different from pre-election poll trends. So a single-digit margin on an exit poll is almost never sufficient to say that a particular candidate will win.

3) Watch out for "The Prior." At least two networks are likely to post exit poll tabulations shortly after the polls close that will update as the election night wears on (try these links for MSNBC and CNN). Those data are weighted to whatever estimate of the outcome the analysts have greatest confidence in at any moment. By the end of the night, the tabulations will be weighted to the official count. Typically, the exit poll tabulations are weighted to something called the "Composite Estimate," a combination of the exit poll data alone and a "Prior Estimate" that is based largely on pre-election poll results. So if you look to extrapolate from the initial tabulations posted on MSNBC or CNN (as we did here on Election Night 2006), just keep in mind that the estimate of each candidate's standing in the initial reports will likely mix exit poll and pre-election poll estimates (not unlike the kind we report here).
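As a stylized illustration of that blending idea in point 3 -- the networks' actual weighting procedure is more elaborate, and every number here is invented:

```python
# A stylized "Composite Estimate": early tabulations blend the exit poll
# with a pre-election "Prior," and the blend shifts toward the exit poll
# (and eventually the count) as confidence grows over the night.
def composite(exit_poll, prior, exit_weight):
    """Blend exit poll and prior estimates for one candidate."""
    return exit_weight * exit_poll + (1 - exit_weight) * prior

exit_est, prior_est = 39.0, 32.0   # hypothetical Clinton estimates
for w in (0.5, 0.8, 1.0):          # confidence in the exit poll grows
    print(w, round(composite(exit_est, prior_est, w), 1))
```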

Finally, if you would like more information on how exit polls are conducted, you may want to revisit a Mystery Pollster classic: Exit Polls - What You Should Know. Happy New Hampshire Primary Day!

*Clarification: The original version of this post implied that the 1996 overstatement occurred in New Hampshire.

Note: An apology to those who continue to receive the "too many comments" error message when attempting to post a comment here. Suffice it to say, you have not posted too many comments to Pollster.com. We have been trying to squash this bug for months now (without success) and, unfortunately, the very heavy traffic we are experiencing today aggravates the bug.


New Hampshire Endgame


[Image: 1NHEndgame17.png - New Hampshire endgame trend charts]

The New Hampshire endgame polling presents an interesting contrast. The Republican race shows virtually no hint of an "Iowa Bounce." The Democratic race, on the other hand, is showing a huge bounce for Obama and a drop for Clinton. Edwards is largely unaffected.

The charts also show the better performance of the sensitive red-line estimator when things are as dynamic as they have been since Thursday. The red estimator catches the upturn in Obama support pretty well, while the blue estimator tries hard to keep up, but its "slow to change" nature means it entirely misses the timing of the upswing.
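For readers curious what "sensitive" versus "standard" means mechanically, the sketch below runs the same local regression smoother at two window widths on made-up poll data; the narrow window chases late movement that the wide window smooths away. This is only a conceptual illustration, not the actual estimator used for our charts.

```python
# Two lowess fits of the same noisy poll series: a wide smoothing window
# (stable, slow to change) and a narrow one (quick to catch a late bounce).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

days = np.arange(0, 30)
obama = 30 + 0.1 * days + np.where(days > 25, 6.0, 0.0)   # invented late bounce
obama = obama + np.random.default_rng(0).normal(0, 1.5, days.size)  # poll noise

standard = lowess(obama, days, frac=0.7)    # "standard": slow to change
sensitive = lowess(obama, days, frac=0.3)   # "sensitive": catches the upturn

print("last standard estimate:", round(standard[-1, 1], 1))
print("last sensitive estimate:", round(sensitive[-1, 1], 1))
```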

If anyone was still asking whether there has been an Obama bounce, surely they are no longer asking.

The Republican side is a bit more sedate, probably because the leader there, John McCain, was hardly a top finisher in Iowa. The upward trend for McCain, and the downward one for Romney, predated the Iowa caucuses; for the most part, the trends we saw earlier have simply continued. Huckabee appears to be the candidate without a bounce, in fact.

These are dynamics we've seen before when Iowa has had an impact. The short interval between Iowa and New Hampshire has been much debated. One side says it doesn't allow enough time for an Iowa bounce to be fully felt. I'm of the opposite opinion. The short interval maximizes the effect of Iowa by not allowing time for losers in Iowa to retool their approach or for "added scrutiny" of the Iowa winner to slow their climb. The issue for Clinton and Romney is how to halt what are beginning to look like disastrous slides. With more time between events they would be better able to recover. If New Hampshire is a second loss for both, then both campaigns will have to find ways to recover by South Carolina.

Cross-posted at Political Arithmetik.


NH: The View from Monday Afternoon


It's been quite a day for new poll releases. We now have new results from 11 different organizations based on interviews conducted in New Hampshire through Sunday, with two more that wrapped up calling on Saturday. As such, I have updated the table from yesterday's post on the size of the Obama bump.

[Image: 01-07 NH bump.png - the Obama bump in polls interviewing through Sunday]

Across all of the polls that interviewed through Sunday night, Obama leads by an average of eight percentage points (37% to 29%), and by slightly less (36% to 29%) if we also include the two polls that completed interviewing on Saturday. These averages are a near-perfect match for our standard and sensitive estimates, respectively. In terms of the Obama "bump," his support has gained an average of 8 percentage points; his net gain (the change in the Obama-Clinton margin, combining Obama's gain and Clinton's decline) averages 13 points.

Obviously, Obama's margin has expanded from what we reported yesterday. Add to that the obvious increase since last week and the large number of voters still uncertain about their choice (20% "still trying to make up their minds" on the CNN/WMUR/UNH survey), and we should assume that Obama's advantage will likely continue to grow over the next 24 hours.

Charles Franklin should have more soon, including a closer graphic look at the final trends.


NH: View from Sunday Morning


So as of this morning we have seven new polls conducted all or in part after the results of the Iowa Caucuses were known. The spin debate du jour is whether Barack Obama received a "bump," and if so, how much. It is, unfortunately, hard to answer that question given the uncertainty of weekend interviewing and the hard decisions that New Hampshire Democrats are now working to make. Let's look at what we know.

Obama is certainly rising; the only question is by how much. As the table below indicates, all seven polls show some increase in Obama's share of the vote. It ranges between 2 and 10 percentage points, with an average gain of six points and a median of 5, although the most respected New Hampshire pollster on the list -- CNN/WMUR/University of New Hampshire -- shows a slightly smaller 3-point gain.

[Image: 01-06 NH summary(400).png - summary of seven post-Iowa New Hampshire polls]

With two notable exceptions (ARG and RasmussenReports), the surveys show a very close race. The average of the seven gives Obama a three-point edge, while the median shows Obama up by two. Our regression trends (which should update on our New Hampshire page shortly) show Clinton with a one-point advantage (33% to 32%) on the standard estimate, but a point down (32% to 33%) on the more sensitive estimate. All of these differences fall well within the margin of real-world and "sampling" error.

However, Obama appears to be gaining. So how big will the bump be? Firm conclusions are premature for two important reasons.

The first involves the issue of weekend interviewing, or more specifically, surveys based on interviews completed entirely on Friday night and Saturday. Most campaign pollsters are reluctant to put too much faith in interviews conducted at those times, when younger and more mobile voters are less likely to be home. In my 20+ years of looking at surveys conducted for campaigns, I can remember only one we did based solely on Friday and Saturday interviewing. In that case even after we weighted by every demographic variable available to make it comparable to others conducted just days before, we produced a weighted sample that appeared much more engaged in politics and better informed about issues and candidates (and thus, more likely to be "certain" about their initial vote preferences).

On the other hand, I cannot claim much experience with weekend interviewing -- my sample size is just one survey. Media pollsters are obviously more willing to conduct such surveys, particularly over the last weekend before an election. So I am willing to suspend disbelief, although I will have a lot more faith in the releases based on interviews conducted through Sunday night.

An aside: When pollsters like me worry about "weekends," we mean Friday night and Saturday, not Sunday. Actually, late Sunday afternoons and evenings are among the best times to catch people at home, especially in the winter. And I see much less to fear from a survey that begins calling on Friday and finishes on Sunday, so long as all of the "no answer" numbers from Friday and Saturday get dialed again on Sunday night.

The second and more important reason to be cautious about this Sunday morning snapshot is that New Hampshire voters are still in the midst of a difficult decision. The CNN/WMUR/UNH survey tells us that only 52% of Democrats are "definitely decided" about who they will support, while 26% are "leaning toward someone" and 23% are "still trying to decide." Obama has an advantage over Clinton among the definitely decided (41% to 35%; n=183), while Clinton has a slight edge -- for the moment at least -- among those leaning or uncertain (31% to 23%; n=173).

But as you step away from the trial heat results and look at other internal measures, we see why the choice is difficult. Voters like all three of the leading candidates. For example, among the 82 respondents that are "still trying to decide:"

  • 92% rate Obama favorably, only 3% unfavorably

  • 81% rate John Edwards favorably, only 5% unfavorably

  • 75% rate Clinton favorably, only 5% unfavorably

Those same uncertain voters (n=82) also choose:

  • Obama over Clinton as "most inspiring" (68% to 8%)

  • Clinton over Obama as the candidate with "the right experience" (53% to 4%)

  • Obama over Clinton -- though narrowly and with more uncertainty -- as "most likely to bring needed change" (34% to 22%). Obama has a bigger advantage on this measure (40% to 27%) among those who are "leaning" to their choice.

One more thing. I cannot point to an academic study to prove this, but most campaign pollsters will tell you that when a candidate is gaining, vote preference is usually the last thing to change. The movement usually shows up first on internal measures. So on that score, consider that the UNH survey, which shows the smallest "bump," also shows a huge shift on perceptions of electability. Ten days ago, likely Democratic primary voters in New Hampshire considered Clinton the "candidate with the best chance of defeating the Republican" by a two-to-one margin (45% to 22%). Obama has closed that margin on the most recent survey to a single percentage point (Clinton 36%, Obama 35%).


 
