A Victory for IVR Polling?


A friend sent me a couple of links earlier pointing to pundits and pollsters who are taking last night's results as evidence for the merits of IVR polling. First off, as Mark noted earlier, it is a bit too early to be making such comparisons. With regard to the claims being made about IVR polling in particular, I would add the following points:

First, there is no way to control for the other reasons these polls might have produced different results, including different approaches to screening for likely voters and to handling undecided respondents. On the latter point, it is worth noting that the pollsters using live interviewing in New Jersey were showing more than twice the percentage of undecideds as those using IVR.

This leads to a second, related point: comparing these pollsters on the final result presupposes that each pollster entered into this fictitious competition was actually trying to call the final result in the first place. If that were the goal, it seems each polling firm would have allocated all of its undecided respondents to one camp or the other.
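
To make the allocation point concrete, here is a minimal sketch of one common rule, allocating undecideds in proportion to decided support. The numbers are hypothetical, purely for illustration, and no particular pollster's method is implied.

    def allocate_undecided(cand_a, cand_b, undecided):
        """Split the undecided share between two candidates in proportion
        to their decided support (one common allocation rule)."""
        decided = cand_a + cand_b
        return (cand_a + undecided * cand_a / decided,
                cand_b + undecided * cand_b / decided)

    # Hypothetical: 42% vs. 38% with 20% undecided allocates to 52.5% vs. 47.5%.
    print(allocate_undecided(42, 38, 20))

Arguably, a firm that leaves a large undecided pool unallocated is not really publishing a prediction of the final margin at all.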

Third, one of the concerns with IVR polling is that citizens with only a cell phone cannot be reached by these pollsters, and those citizens now comprise at least one-fifth of the population. Yet, while the cell-only problem may generally be an issue for IVR technology (and for live-interview pollsters who aren't calling cell phones), it is less of a problem for polling on elections, and particularly in low-turnout elections. This is because people who do not have landlines are less likely to be voters (and particularly less likely to vote in low-turnout elections). Ultimately, an off-year, low-turnout election may actually be less of a challenge for IVR-based polls because the non-coverage bias should be smaller in these contests. Where these polls may run into greater challenges is when they attempt to make inferences about the American public rather than registered (or likely) voters.
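
As a rough illustration of why low turnout mutes the problem, the usual non-coverage approximation is the excluded share of the target population times the difference between the excluded and covered groups. A minimal sketch, with made-up shares and support figures purely for illustration:

    def noncoverage_error(excluded_share, support_excluded, support_covered):
        """How far (in points) a landline-frame estimate misses the full-population value:
        (share of the target population the frame misses) x (excluded minus covered support)."""
        return 100 * excluded_share * (support_excluded - support_covered)

    # Hypothetical: cell-only adults differ from landline adults by 10 points on some measure.
    print(noncoverage_error(0.20, 0.55, 0.45))  # ~2.0 points if cell-only adults are 20% of all adults
    print(noncoverage_error(0.08, 0.55, 0.45))  # ~0.8 points if they are only 8% of a low-turnout electorate

The same arithmetic is why inferences about all adults, rather than likely voters, are where the coverage gap bites hardest.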

 

Comments
poughies:

Brian-

Good points, but it seems to me the question isn't whether IVR is superior to live-interview polls. Rather, the question is whether the Wash Post and NBC News (among others) should continue to reject IVR polls outright. The results from Tuesday night (and pretty much every election since 2004) show IVR to be as accurate a predictor of the final outcomes as live-interview polls. I think the point of relaying poll information to the public in the final days is to give as accurate a prediction as possible of who will win on election night.

If you used IVR in the final days this time around, you'd be at least as likely to give an accurate picture.

To me, the evidence is mounting that those in the mainstream media looking to give an accurate picture of the horserace should accept IVR polls as legitimate and report their results.

-HJE

____________________

Mike Mokrzycki:

Re cell-only, while it's true that nationally at least a fifth of the population only has a wireless phone, that incidence varies pretty widely by state, and apparently it's quite a bit lower in NJ and VA.

Earlier this year the CDC derived modeled estimates of state-level wireless-only incidence by rubbing telephone status data from its own National Health Interview Survey up against Current Population Survey (Census Bureau) data. Per http://www.cdc.gov/nchs/data/nhsr/nhsr014.htm#fig1, the modeled estimate of wireless-only in NJ as of 2007 was 8.0% of households and 6.1% of adults; in VA, 10.8% of households and 10.0% of adults. Factoring in confidence intervals, those estimates could be a couple points higher or lower, and the passage of two more years probably adds several points, but still it's likely the cell-only situation was even less of a problem for the IVR pollsters in NJ and VA than it could be in most other states.

(Disclosure: The lead author of the CDC study, Stephen Blumberg, is like me a member of the Executive Council of the American Association for Public Opinion Research. I haven't worked with him on his cell-only research but I have reported on it -- including on the state-level modeled estimates -- as a journalist.)

____________________

RussTC3:

No way. PPP and Rasmussen may have gotten close to the final results, but their internals were absolutely atrocious.

For instance, PPP had President Obama's approval rating at 47% in New Jersey. Yeah, ok.

According to the exits, President Obama enjoyed a very solid 57% approval among the voters who turned out for the NJ Governor's race.

The same applies to every other IVR-based poll. They got the President's approval rating badly wrong.

Seems to me like they just did a good job weighting to the election result and totally messed up the actual surveying.

____________________

Polaris:

RussTC3,

Not so fast. You are comparing apples and oranges. In the election that was held on Tuesday, the only secret piece of information was the ballot itself. All the other exit poll information you are referring to comes from live interviews of selected respondents at selected precincts, which are then normalized against the actual voting data.

The point is that when you criticize IVR polls for having "lousy internals," you need to do so in light of the information that is secret, i.e. the vote itself. Otherwise the same impetus that can cause people to lie to a live interviewer in a poll will also apply to an exit poll. [And remember how bad exit polls can be because of interviewer bias and other reasons... remember 2004?]

The bottom line is that IVR polls showed a statistically different result in NJ than live-interviewer polls (and on the ME-1 question as well), AND the IVR results were more in line with what happened on election day... which is really the point of polling, now isn't it?

-Polaris

____________________

Aaron_in_TX:

The cell-phone issue is a problem. I'm in my 20s and most of my friends my age use only cell phones. We are also the people least likely to vote in these elections, but in the future, pollsters are going to have to correct this problem. Most of my generation considers a landline useless and a waste of money.

____________________

Aaron_in_TX:

There could also be a selection bias in IVR. I know people who automatically hang up on any robo-call.

I swear, conservatives are obsessed with bias, but always seem oblivious to their own biases.

____________________

poughies:

Are we trying to figure out why people vote the way they do, or are we trying to figure out which way they will vote?

So the internals were screwed up... That's nice for political scientists.

I'm trying to find some evidence, any evidence, that IVR polls do a worse job than live-interviewer polls at predicting the election day result. I cannot find any.

Truthfully, I'd use a horse who "felt the wind" to give its results if it gave me the most accurate results.

____________________

Mike Mokrzycki:

poughies, I'd answer your question with an emphatic "Both." The internals, and the underlying dynamics they illuminate, are important not just for political scientists but for campaigns and any citizen who follows them.

I'll fudge these numbers to make my point more clearly: It's one story if Corzine lost while Obama had 40% job approval in NJ ("Obama seemed to be a drag on Corzine"), and another with Obama at 60% ("Corzine was beyond Obama's help").

An awful lot of spin rides on those internals ...

Also, while I don't entirely discount results-based analysis -- which has made me less skeptical of IVR over time -- I wouldn't bet the bank on it. To use the most famous example, Literary Digest polling had a good track record, until it blew up.

____________________

Mark Blumenthal:

Thanks to all for the excellent comments. Adding one thought:

@Polaris: RTC3's comparison is less apples-to-oranges than you might realize. A live "interviewer" does approach exiting voters to solicit their participation in the exit poll, but he/she then hands the voter/respondent a paper questionnaire and a clipboard so they can fill it out privately.

The questionnaire form simulates the "secret ballot" the same way an IVR interview does. Neither the interviewer nor the form asks the respondents to identify themselves, and the respondents drop the completed forms into a large cardboard box that resembles a ballot box.

Yes, we are comparing two sets of survey results, but if they are different for methodological reasons, "live interviewing" is not likely one of them.

____________________

Polaris:

Mark,

How then do you explain the huge errors that were made in the 2004 exit polls? While the methodology has been improved, it was my understanding that one of the big reasons for the error in the first place was interviewer bias (stemming from the fact that a lot of the interviewers tended to be poli sci grad students with the usual political biases).

-Polaris

____________________

poughies:

Mike,

First, I forgot Literary Digest polled before '36... thanks for the heads-up on that one.

Second, I think I was a little crude in what I wanted to say. Of course we want to know both. And the research shows IVR cannot go more than a few questions because people simply lose patience. That is why, if we want to know the "internals," we have live interviewers.

Part of the advantage of IVR polls is that they can reach a lot of people and are cheap. Campaigns should be able to spend the dough to be able to get live interviewers to figure out the internals.

And for someone like you or me, we also want to get to know why voters are doing what they are doing. Why is it that the side against Gay marriage has done better 80% of the time than the final polls predicted (even after allocating undecideds)? Why does the anti-same-sex marriage side average 3.55% better than the final polls predicted they would?

But at the end of the day for the mass public, the real thing they want to know is who wins and who loses. Even if you cannot answer my question above on Gay marriage (I cannot with any certainty), you would still have to believe that the yes side in Maine was going to come out smelling pretty good. It's the reason I called for the yes side to win 52-48 (not to toot my horn, but to illustrate the point).

Put another way, the charts on this site don't show how single white women are approving of Obama. They show his overall approval.

As for IVR turning into the Literary Digest, I think the track record is long enough, with over 200 (probably more) elections, to show that simply isn't the case.

____________________

Mike Mokrzycki:

To clarify, I didn't intend for my Literary Digest comment to be taken as presaging IVR's doom. The LD sampling frame flaw was huge (basically they got lucky before 1936) whereas there's no real difference among most IVR and live-interviewer phone polls' sampling methodologies (aside from RDD vs RBS, and various approaches to LV screening). Personally I think cell phones loom as the biggest challenge to IVR -- and other landline-only surveys -- though even there it's tough to predict exactly how things will play out.

I know there's intense interest in who will win or lose. (That was apparent in the live blog here the other night.) I'm just saying that's not the only important thing to come out of surveys.

Re polling on gay marriage ballot initiatives, it seems like exactly the kind of context for social desirability to come into play -- some respondents are reluctant to seem "un-PC" and tell live interviewers what they really think. It's only one data point, but the PPP poll in Maine (which had Yes 51, No 47 while live-interviewer surveys had No tied or ahead) supports this hypothesis.

On a somewhat related note, in NJ the final YouGov/Polimetrix poll didn't do as well as the IVR polls in horse race predictive accuracy. A web survey presumably would have the same "secret ballot" advantage as IVR, but with opt-in online there are sampling frame issues (though the method's proponents don't consider those any worse than shortcomings in probability sampling, as Mark has written about recently). Ultimately all these comparisons suffer from the lack of an ability to control for factors other than survey administration mode, as Brian pointed out in the original post.

____________________

poughies:

So much in that post, I don't know how I can hit all of it.

Fine, on the Lit Digest. I knew what you meant... I was just covering the bases.

SurveyUSA's record on same-sex marriage initiatives is pretty good... they did very well in Oklahoma and Kentucky in 04... and they certainly did better than the live interviewers in California on Prop 8... They blew it big time in Kansas in 05... and were not good at all in Colorado in 06 (though I think that was due to a lack of clarity in the question). Evidence suggests you are correct about a social desirability effect on same-sex marriage (see Berinsky 04 and Goldman 08)... and about automated polls potentially controlling for it (Villarroel et al. 06)...

But are you saying that on questions with social desirability problems surrounding them, we should be using automated surveys (I don't think that is what you are suggesting)....

So, IVR polls can be more accurate in that situation?

As for YouGov/Polimetrix, I am actually with you. I think non-random samples have major problems... For instance, in the dataset they sent out as part of their 07-08 election package (CCES), I found a major, major error. The wave for September 08 (as in, after the primary was over) (and I believe January 08, before the primary ended, as well) had Romney leading McCain by something like 25 points among Republican primary voters when they were asked "who did you vote for or who do you plan on voting for".

The problem was reported to Lynn Vavreck, but I'm still unsure if they ever figured out what was going on...

I think these opt-in web surveys can be used in the poli sci literature when trying to figure out certain things about the electorate, but I wouldn't trust them for any sort of horserace question (regardless of how they do on the presidential horserace nationally).

My critique doesn't apply to non-opt-in surveys (see Knowledge Networks).

____________________

Mike Mokrzycki:

Not saying automated surveys should or shouldn't be used in a given situation, just that their mode of administration MAY be a way to mitigate social desirability effects. So would Knowledge Networks online polls (probability-based) but last I checked their panel isn't large enough for them to draw state samples from it, except possibly in the most populous states.

For those uncomfortable with entirely automated interviews, mixed-mode could be an approach to polling on sensitive questions like gay marriage -- start with a live interviewer, among other things to better encourage cooperation, then switch to IVR for substantive questions. But I know research has found some respondents drop off in the transition, even if they tell the live human they'd do the IVR.

____________________

poughies:

Some of the smaller states would be difficult to poll, no doubt. A question you would have to ask yourself is whether you prefer a 200-person KN panel (which theoretically has reached out to many people over a long period of time to provide a sample representative of the population) or a telephone poll with 500 respondents taken over a 2-3 day period with few callbacks.

The margin of error difference between the two polls is only about 2.5 points... and the errors in the sample that could come about from the kind of telephone poll I described could close any margin of error difference.
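
For what it's worth, the back-of-the-envelope arithmetic behind that "about 2.5" works out if you use the standard 95% margin of error at p = 0.5 and ignore design effects; a minimal sketch:

    import math

    def moe(n, p=0.5, z=1.96):
        """95% margin of error, in percentage points, for a simple random sample."""
        return 100 * z * math.sqrt(p * (1 - p) / n)

    print(f"n=200: +/-{moe(200):.1f} pts")               # ~6.9
    print(f"n=500: +/-{moe(500):.1f} pts")               # ~4.4
    print(f"difference: {moe(200) - moe(500):.1f} pts")  # ~2.5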

As for the transition between different polling types, the most recent study I found tends to agree with the general idea of a drop-off, or at least a non-improvement, in response rates on questions.

It's a perplexing puzzle to be sure.

I think at the end of the day no "right" answer is out there. We all have to make decisions on what we want in polling, and what methods we believe allow us to get what we want.

Sometimes some things will work, and sometimes other things will work.

I believe in IVR polling for the horserace, and I think it should be used by the mainstream media.

You (at least at this point) disagree with that assessment, and I know you are not the only one.

About the only thing I can say that perhaps both of us can agree on is that the demographics are probably against both of us. Increasingly, students of survey methodology are looking beyond probability-based samples to conduct research and analyze public attitudes. More than that, the cons of non-random samples are not discussed as much as I think they should be.

What are you going to do?

____________________


