The Post and the Virginia Polls

In a column this past Sunday, Washington Post polling director Jon Cohen explains why the Post has not reported on recent surveys "purporting to show the status of" the upcoming Democratic primary contest for governor in Virginia. The Post's bottom line:

None of the recent polls in the Virginia governor's race meet our current criteria for reporting polls: Two primary ones were by Interactive Voice Response, commonly known as "robopolls," and the third was a partial release from one of the candidates eager to change the campaign story line.

Cohen's piece starts a conversation worth having about the difficulty of polling in low turnout primaries, about the coverage of "horse race" results and where journalists should draw the line in reporting on polls conducted by campaigns or of otherwise unknown or questionable quality. For today, I am going to shamelessly gloss over those bigger issues (and shamelessly promote that I'll take up some of them in my about to resume NationalJournal.com column next week) and consider instead the narrower issue of the Post's policy against reporting the results of automated polls (also known as interactive voice response, or IVR).

Cohen makes two arguments for not reporting automated surveys:

1) Automated polls take "less care" determining likely voters:

Given the great complexity in determining "likely voters" in the upcoming electoral clash, extra care should be taken to gauge whether people will show up to vote. Unfortunately, polls that use recorded voice prompts typically take less care than polls conducted by live interviewers.

2) Automated polls are impractical for surveys asking more than a half-dozen substantive questions:

People are generally less tolerant of long interviews with computerized voices. One recent Virginia robopoll asked six questions about the governor's race; the other asked four. ... Lost in the brevity is much, if any, substance. Neither of the two in Virginia asked about the top issues in the race, what candidate attributes matter most or anything about the economy. Without this essential context, these thin polls offer little more than an uncertain horse race number. In understanding public opinion, "why" voters feel certain ways is crucially important.

Expanding on the second point, Cohen also notes that the requisite brevity of automated polls leads campaign pollsters to rarely use them. He quotes Joel Benenson and Bill McInturff and cites the poll released by Virginia candidate Brian Moran (conducted by Greenberg Quinlan Rosner).

Let's take these in reverse order. First, Cohen is right that the automated methodology is inappropriate for longer, in-depth surveys and that a single automated pre-election poll can typically "offer little more than an uncertain horse race number." So we should stick to live-interviewer surveys if we want to understand the broader currents of public opinion surrounding an election (the goal of the work done by the Post/ABC poll) or if we want to plot campaign strategy and test campaign messages (the goal of campaign pollsters). That inherent brevity is also the primary reason campaign pollsters still rely on traditional, live-interviewer methods for their own work.

Similarly, the need for a very short questionnaire on automated polls prevents the use of a classic Gallup-style likely voter model (which requires asking seven or more questions about vote likelihood, past voting and attention paid to the campaign). However, I do not agree that the absence of a Gallup-style index means that automated polls take inherently "less care" with likely voter selection than other state-level pre-election surveys. Many pollsters, including most of those who work for political candidates, rely on other techniques (such as screening questions, geographic modeling and stratification, and vote history garnered from registered voter lists) to sample and select the likely electorate.
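To make the alternative concrete, here is a minimal sketch of how a short screen plus voter-file history might stand in for a long Gallup-style index. Every question, weight and cutoff below is invented for illustration; real pollsters calibrate these against past elections.

```python
# Hypothetical likely-voter scoring from a short screen plus vote history.
# Weights and the 0.6 cutoff are illustrative only, not any firm's actual model.
def likely_voter_score(self_reported_likelihood, voted_last_general,
                       voted_last_primary, follows_campaign):
    """Return a 0-1 score combining a 1-10 self-report with voter-file flags."""
    score = 0.0
    score += 0.40 * (self_reported_likelihood / 10)  # "How likely are you to vote?"
    score += 0.25 * voted_last_general               # from the registered-voter file
    score += 0.25 * voted_last_primary
    score += 0.10 * follows_campaign                 # attention-to-campaign screen
    return score

voters = [
    {"id": 1, "likelihood": 10, "general": 1, "primary": 1, "follows": 1},
    {"id": 2, "likelihood": 5,  "general": 0, "primary": 0, "follows": 0},
]
likely = [v["id"] for v in voters
          if likely_voter_score(v["likelihood"], v["general"],
                                v["primary"], v["follows"]) >= 0.6]
print(likely)  # only voter 1 clears the cutoff
```

The point is only that screens and list-based history can define a likely electorate in a few items, which is why a short automated questionnaire does not automatically mean a careless one.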

Do we really think the polls produced by SurveyUSA and PPP in Virginia take "less care" in selecting likely voters than the Mason-Dixon Florida primary poll reported yesterday by the Post's Chris Cillizza or the Quinnipiac New Jersey primary poll reported in Sunday's Post?

And while I will grant that final pre-election poll accuracy is a potentially flawed measure of overall survey quality, it is the best yardstick we have for assessing the accuracy of likely voter selection methods. After all, the Gallup-style likely voter models were developed by looking back at how poll estimates compared to election outcomes and tweaking the indexes until they produced the most accurate retrospective results. With each new election, pollsters look back at how their models performed, adjusting them as necessary to improve future performance. Thus, if a pollster is careless in selecting likely voters, that carelessness ought to produce less accurate estimates on the final poll.
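The yardstick itself is simple arithmetic: compare each final poll's candidate margin to the actual margin on election day. The sketch below uses invented numbers; the margin-on-margin comparison is one of the standard Mosteller-style accuracy measures used in the analyses discussed here.

```python
# Final-poll accuracy yardstick: how far off was the poll's margin?
def margin_error(poll_a, poll_b, actual_a, actual_b):
    """Absolute difference between the poll's candidate margin and the
    actual election margin, in percentage points."""
    return abs((poll_a - poll_b) - (actual_a - actual_b))

# Hypothetical final polls: (label, candidate A %, candidate B %)
polls = [("Live-interviewer poll", 48, 44), ("Automated poll", 47, 45)]
actual = (50, 46)  # invented election result

for name, a, b in polls:
    print(f"{name}: margin error = {margin_error(a, b, *actual)} points")
```

Averaging that error across many races, and comparing the averages for automated versus live-interviewer polls, is essentially what the studies cited below do.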

On that score, automated "robo" polls have performed well. As PPP's Tom Jensen noted earlier this week, analyses conducted by the National Council on Public Polls (in 2004), AAPOR's Ad Hoc Committee on Presidential Primary Polling (2008), and the Wall Street Journal's Carl Bialik all found that automated polls performed about as well as live-interviewer surveys in terms of their final poll accuracy. To that list I can add two papers presented at last week's AAPOR conference (one by Harvard's Chase Harrison and one by Fairleigh Dickinson University's Krista Jenkins and Peter Woolley) and papers presented at prior conferences on polls conducted from 2002 to 2006 (by Joel Bloom and Charles Franklin and yours truly). All of these assessed polls conducted in the final weeks or months of the campaign and found no significant difference between automated and live-interviewer polls in terms of their accuracy. So whatever care automated surveys take in selecting likely voters, the horse race estimates they produce have been no worse.

One reason why is that respondents may provide more accurate reports of their vote intention to a computer than to a live interviewer. We know that live interviewers can introduce an element of "social discomfort" that leads to underreporting of socially frowned-upon behavior (smoking, drinking, unsafe sex, etc.). Is it such a stretch to add non-voting to that list?

So let me suggest that this argument is really about the value of polls that measure the "horse race" preference -- and little more -- a few weeks or months before an election. Is that something worth reporting? Jon Cohen and ABC News polling director Gary Langer, the two principals of the ABC/Washington Post polling partnership, have been consistently outspoken in saying "no," urging us all to "throttle back on the horse race."

I have no doubt about the sincerity of their commitment to that goal, or the obstacles they face putting it into practice, but I wonder whether urging abstinence is a workable solution. Political journalists and their political-junkie readers are intensely and instinctively interested in the basic assessments that "horse race" numbers provide. Poll references have a way of showing up in stories about the Virginia governor's race, even in a newspaper that is supposedly not reporting on Virginia primary polls. Just yesterday, for example, the Post's print edition debate story reported that the Virginia candidates "sought to stamp a final impression in a race where polls show the majority of voters remain undecided," and Chris Cillizza told us in his online blog that "polling suggests [Terry McAuliffe] leads both [Brian] Moran and state Sen. Creigh Deeds."

So the "polls" show something newsworthy enough to report, but the reporters are not allowed to name or cite the polls they looked at to reach that conclusion. Does that make any sense?



It may be an issue of not wanting to endorse a methodology that, while seemingly reliable for horse-race estimates, could be more dubious when measuring attitudes in greater depth, with longer questions that are more reliably administered by a live interviewer. Do you instruct your reporters to use only the horse-race numbers, or do you outright oppose using them?

