IVR & Internet: How Reliable?

Topics: Internet Polls , IVR , IVR Polls , Slate Scorecard

If one story is more important than all others this year--to those of us who obsess over political polls--it is the proliferation of surveys using non-traditional methodologies, such as surveys conducted over the Internet and automated polls that use a recorded voice rather than a live interviewer. Today's release of the latest round of Zogby Internet polls will no doubt raise questions about these methods yet again. Yet for all the questions being asked about their reliability, discussions grounded in hard evidence are rare to non-existent. Over the next month, we are hoping to change that here on Pollster.com.

Just yesterday in his "Out There" column (subscription only), Roll Call's Louis Jacobson wrote a lengthy examination of the rapid rise of these new polling techniques and their impact on political campaigns. Without "taking sides" in the "heated debate" over their merits, Jacobson provides an impressive array of examples to document this thesis:

[I]t's hard to ignore the developing consensus among political professionals, especially outside the Beltway, that nontraditional polls have gone mainstream this year like never before. In recent months, newspapers and local broadcast outlets have been running poll results by these firms like crazy, typically without defining what makes their methodology different - something that sticks in the craw of traditionalists. And in some cases, these new-generation polls have begun to influence how campaigns are waged.

He's not kidding. Of the 1,031 poll results logged into the Pollster.com database so far in the 2006 cycle from statewide races for Senate and Governor, more than half (55%) were conducted by the automated pollsters Rasmussen Reports and SurveyUSA or over the Internet by Zogby International. And that does not count the surveys conducted once a month by SurveyUSA in all 50 states (450 so far this year alone). Nor does it count the automated surveys recently conducted in 30 congressional districts by Constituent Dynamics and RT Strategies.

Jacobson is also right to highlight the way these new polls "have made an especially big splash in smaller-population states and media markets, where traditional polls - which are more expensive - are considered uneconomical." He provides specific examples from states like Alaska, Kansas and Nevada. Here is another: Our latest update of the Slate Election Scorecard (which includes the automated polls but not those conducted over the Internet) focuses on the Washington Senate race, where the last 5 polls released as of yesterday's deadline had all been conducted by Rasmussen and SurveyUSA.

Yet the striking theme in coverage of this emerging trend is the way both technologies are lumped together and dismissed as unreliable and untrustworthy by establishment insiders in both politics and survey research.

Jacobson's piece quotes a "political journalist in Sacramento, Calif.," who calls these new surveys "wholly unreliable" (though he does include quotes from a handful of campaign strategists who find the new polls "helpful, within limits").

Consider also the Capital Comment feature in this month's Washingtonian, which summarizes the wisdom of "some of the city's best political minds" (unnamed) on the reliability of these new polls. Singled out for scorn were the Zogby Internet polls - "no hard evidence that the method is valid enough to be interesting" - and the automated pollsters, particularly Rasmussen:

[Rasmussen's] demographic weighting procedure is curious, and we're still not sure how he prevents the young, the confused, or the elderly from taking a survey randomly designated for someone else. Most distressing to virtually every honest person in politics: His polls are covered by the media and touted by campaigns that know better.

The Washingtonian feature was kinder to the other major automated pollster:

SurveyUSA's poll seems to be on the leading edge of autodial innovation. Its numbers generally comport with other surveys and, most important, with actual votes.

[The Washingtonian piece also had praise for the work of traditional pollsters Mason-Dixon and Selzer and Co., and complaints about the Quinnipiac College polls.]

Or consider the New York Times' new "Polling Standards," noted earlier this month in a Public Editor column by Jack Rosenthal (and discussed by MP here), and now available online. The Times says both methodologies fall short of their standards. While I share their caution regarding opt-in Internet panels, their treatment of Interactive Voice Response -- the more formal name for automated telephone polls -- is amazingly brusque:

Interactive voice response (IVR) polls (also known as "robo-polls") employ an automated, recorded voice to call respondents who are asked to answer questions by punching telephone keys. Anyone who can answer the phone and hit the buttons can be counted in the survey - regardless of age. Results of this type of poll are not reliable.

Skepticism about IVR polling based on theoretical concerns is certainly widespread in the survey research establishment, but one can search long and hard for hard evidence of the unreliability of IVR, or even Internet polling, without success. Precious little exists, and the few reviews available (such as the work of my friend, Prof. Joel Bloom, or the 2004 Slate review by David Kenner and William Saletan) indicate that the numbers produced by the IVR pollsters comport as well with actual election results as, or better than, those from their traditional competitors.

The issues involving these new technologies are obviously critical to those who follow political polling and require far more discussion than is possible in one blog post. So over the next six weeks, we are making it our goal here at Pollster to focus on the following questions: How reliable are these new technologies? How have their results compared to election outcomes in recent elections? How do their current results differ from those of more traditional methodologies?

On Pollster, we are deliberately collecting and reporting polls of every methodology -- traditional, IVR and Internet -- for the express purpose of helping poll consumers make better sense of them. We certainly plan to devote a big chunk of our blog commentary to these new technologies between now and Election Day. And while the tools are not yet in place, we are also hoping to give readers the ability to do their own comparisons through our charts.

More to say on all the above soon, but in the meantime, readers may want to review my article published late last year in Public Opinion Quarterly (html or pdf), which looked at the theoretical issues raised by the new methods.

Interests disclosed: The primary sponsor of Pollster.com is the research firm Polimetrix, Inc. which conducts online panel surveys.


Comments
DemFromCT:

Interests disclosed: I was quoted in the "Public Opinion Quarterly" article and therefore it is one of my favorite polling articles. ;-)

Seriously, I wonder if YouGov and internet polling are the future.

The quote from 2004 indicates, however informally, that there have been questions, predictions and expectations for these non-traditional polls for some time. Anything done to review their reliability would be a great help to us political junkies.

____________________

Gary Kilbride:

Thanks for the link to the excellent Public Opinion Quarterly article, which I hadn't seen.

Is there a link listing the methodologies of the various pollsters?

Also, there is a minor error in the listing of the September 28 SurveyUSA Minnesota Senate poll. Kennedy is not the incumbent; that is an open race after Democrat Mark Dayton decided not to seek re-election.

____________________

Dwight McCabe:

You highlight a critical issue, as always.

When I look at the published results from many polls for any particular race, the Zogby Internet poll results are usually an outlier from the other data. I'm not sure why this would be, since you'd expect them to have good ways to check the profile of those taking their survey. Still, I completely ignore their results right now.

IVR results seem more in line with the traditional phone poll data, although I have not done a thorough comparison of the methods. If anything, the partisan leaning of the pollster seems to have more of an impact on bias, but not always.

It would be very interesting to read how the Internet and IVR polling firms validate their respondents, to avoid the problems you cite with reaching teenagers, older voters with dementia, non-registered voters and so on.

____________________

DemFromCT:

Dwight McCabe, keep in mind that cell phone vs landline phone usage and increasing non-response/refusal are reasons why traditional polls have their own issues.

It's not as if the gold standard phone survey is without flaws of its own. So comparisons of outcomes to the actual election results need to be done for both types of surveys, not just the new kids. Perhaps we'll learn the old methodology has hidden issues.

____________________


Here are some problems with that Bloom POQ piece that concludes: "This meta-analysis of survey data shows that the [2004] presidential election polls were far more reliable than the 2002 Senate polls." And "Overall, these results show a vast improvement over the 2002 analysis where 25% were outside [sic] there reported margins of error."

1. I see an obvious inconsistency in this comparison of poll results with election outcomes. 2004 polls were conducted in October *only* while the 2002 polls were done in September *and* October. The 2004 analysis expected polls to be predictive of outcomes up to 4 weeks before election day while the 2002 analysis expected prediction as long as *8* weeks before election day.

2. The fact is that *neither* set, from 2002 or 2004, can be expected to be predictive. There is an obvious intervening variable that intersects those timelines: the CAMPAIGNS. Enough said.

3. Also..."Pollsters were the favorite media punching bag after the November 2002 elections. Not true. The Wall Street Journal had a column and a story and then there was a tag-along Huffington piece voicing the same criticism. That was it. However, those pieces focused on *only six* polls- and then went on to claim that declining response rates and other factors make *all* polls unreliable.

I posted a short piece about this on Polling Report. Link below.
http://www.pollingreport.com/ncpp1.htm
There is a link to the NCPP site for their full 2002 analysis. I compiled the data. Interviewing time-frames were much shorter. Average error on the estimates was 2.4.
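For anyone unfamiliar with that metric: "average error on the estimates" is typically computed as the mean absolute gap between each poll's candidate percentages and the final vote, averaged over the leading candidates. A minimal sketch in Python, with invented figures purely for illustration:

    # Each entry: (poll Dem %, poll Rep %, actual Dem %, actual Rep %)
    # The numbers below are invented for illustration only.
    polls = [
        (48, 44, 50, 46),
        (51, 42, 49, 45),
        (45, 47, 44, 50),
    ]

    # Candidate error for one poll: average of the two absolute misses.
    errors = [(abs(pd - ad) + abs(pr - ar)) / 2 for pd, pr, ad, ar in polls]

    print(f"average error on the estimates: {sum(errors) / len(errors):.1f} points")

On that definition, a 2.4 average means each candidate's number was off by roughly two and a half points, on average.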

Those are my thoughts.

Nick Panagakis

____________________

DemFromCT:

Thanks, Nick. Zogby in particular had a bad year in 2002 in terms of picking winners (picking winners in a close race is tough to do, but Zogby got it wrong in 5 of 17 cases while Mason-Dixon got it wrong in 1 of 23).

But the polls picked the wrong winner in 21 of 159 cases (13% of the polls picked the wrong guy), and that needs focus as much as the average error on the estimates.

____________________

Nick Panagakis:


Yes, 21 of the 159 polls did pick the wrong winner.

What is not evident is that 8 of them were within their margin of error. So that leaves 13 polls with the wrong winner that exceeded their margin of error. Still not good.

At the 95% level we would expect 8 polls of the total 159 to exceed the margin of error due to chance (5% of 159).

However, 27 polls or 17% of the total 159 exceeded their margin of error. (14 did so but got the right winner anyway.) So that is the more relevant figure.
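Nick's arithmetic is easy to check in a few lines of Python. A minimal sketch, assuming each poll independently misses its stated margin of error 5% of the time (a simplification, since multiple polls of the same race are not truly independent):

    from math import comb

    TOTAL_POLLS = 159   # 2002 polls in the NCPP analysis
    OUTSIDE_MOE = 27    # polls whose error exceeded their stated MoE
    ALPHA = 0.05        # a 95% interval should miss about 5% of the time

    # Misses expected by chance alone: 5% of 159, roughly 8.
    expected = ALPHA * TOTAL_POLLS

    # One-sided binomial tail: the chance of 27 or more misses out of 159
    # if each poll misses independently with probability 0.05.
    p_tail = sum(
        comb(TOTAL_POLLS, k) * ALPHA**k * (1 - ALPHA)**(TOTAL_POLLS - k)
        for k in range(OUTSIDE_MOE, TOTAL_POLLS + 1)
    )

    print(f"expected misses by chance: {expected:.1f}")
    print(f"observed misses: {OUTSIDE_MOE} ({OUTSIDE_MOE / TOTAL_POLLS:.0%})")
    print(f"P(>= {OUTSIDE_MOE} misses at a 5% rate): {p_tail:.1e}")

The tail probability comes out vanishingly small, which is the point: 27 misses is far more than the 8 or so that sampling error alone would explain.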

There is more to this. For one thing, these analyses do not allow for trends in a pollster's previous polls - was voting preference stable, or was it changing?

I still think the Bloom piece was unnecessarily harsh on the 2002 polls.

Nick

____________________

Nick Panagakis:


That "post" button is too close to this wndow.

I was going to conclude that the Bloom piece was unnecessarily harsh on the 2002 polls. And that there is no perfect of doing these analyses. While there are a large number of polls, we are limited to a few offices, Senate, Governor and every four years presidential that are readily available.

Nick

____________________

CalD:

My sense in comparing IVR and Internet polling to traditional phone polls, where available, is that for now at least, it matters very much who does the polling. Zogby Interactive and SurveyUSA polls, for example, while not completely useless, seem to have an effective margin of error roughly double their stated theoretical MoE. The Zogby Interactive poll had a pretty terrible record in past elections and seems to be shaping up for a repeat this year. SurveyUSA seems to owe their track record mainly to the fact that they're dirt cheap -- i.e., they get hired to poll a lot of races that aren't particularly close, where no one would waste money on a conventional poll.

Rasmussen and Harris Interactive, on the other hand, seem to track the averages of conventional telephone polls more reliably from what I have seen. So I think both technologies have been proven in concept and can be made to work. However, there are still quite a few technical details and best practices that need to be worked out in both cases.
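A point of reference for CalD's "effective margin of error" framing: the stated MoE is just the simple random sampling formula, and since it scales as 1/sqrt(n), a doubled effective MoE is equivalent to the sample carrying only a quarter of its nominal information. A minimal sketch in Python (the sample size is illustrative, not taken from any particular poll):

    from math import sqrt

    def stated_moe(n, p=0.5, z=1.96):
        # 95% margin of error for a simple random sample of size n,
        # evaluated at the worst case p = 0.5.
        return z * sqrt(p * (1 - p) / n)

    n = 600  # an illustrative statewide sample size
    moe = stated_moe(n)
    print(f"stated MoE at n={n}: +/- {moe:.1%}")           # about +/- 4.0%
    print(f"doubled 'effective' MoE: +/- {2 * moe:.1%}")   # about +/- 8.0%
    print(f"sample size that would imply: {n // 4}")       # MoE scales as 1/sqrt(n)

Read that way, "roughly double" is a harsh verdict: it treats a 600-interview sample as if it carried the information of about 150 interviews.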

____________________


