
On Polling Political Junkies

Topics: Automated polls, CBS, IVR Polls, Likely Voters, Nate Silver, Rasmussen, Sarah Dutton

This morning, Nate Silver flagged a pretty glaring difference between two similarly worded and structured questions asked on surveys conducted by Rasmussen Reports and CBS News at roughly the same time. In so doing, he's highlighted a critical question at the heart of an important debate about not just Rasmussen's automated polls, but about all surveys that compromise aspects of their methods: Are the respondents to these surveys skewed to the most attentive, interested Americans? Do Rasmussen's samples skew, to use Nate's phrase, to "political junkies?"

Here is his chart:

[Chart from Silver's post: "very closely" following the Kagan nomination, Rasmussen Reports vs. CBS News]

CBS News found just 11% of adults who say they are "very closely" following "news about the appointment of U.S. Solicitor General Elena Kagan to the U.S. Supreme Court" in a survey fielded May 20-24. Rasmussen found 37% who say they are "very closely" following "news stories about President Obama's nominee for the Supreme Court" in an automated survey fielded from May 24-25. The answer categories were identical.
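As a quick sanity check that a gap this large cannot be ordinary sampling error, here is a rough sketch; the sample sizes are my own assumptions (roughly 1,000 interviews each), not the polls' reported n's.

    from math import sqrt

    # Rough check that 37% vs. 11% dwarfs sampling error.
    # The n's below are assumptions for illustration, not the polls' actual sample sizes.
    p_ras, n_ras = 0.37, 1000   # Rasmussen, "very closely," likely voters
    p_cbs, n_cbs = 0.11, 1000   # CBS News, "very closely," all adults

    se_diff = sqrt(p_ras * (1 - p_ras) / n_ras + p_cbs * (1 - p_cbs) / n_cbs)
    z = (p_ras - p_cbs) / se_diff
    print(f"gap = {p_ras - p_cbs:.0%}, standard error = {se_diff:.1%}, z = {z:.0f}")
    # gap = 26%, standard error = 1.8%, z = 14 -- far too large to be chance alone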

In addition to the minor wording differences, the big potential confounding factor is that Rasmussen screened for "likely voters," while CBS interviewed all adults. Nate does some hypothetical extrapolating and speculates that the likely voter model alone cannot account for all of the difference.
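To make the flavor of that extrapolation concrete, here is a minimal sketch with illustrative numbers of my own (not Nate's): even under the most generous assumption that every adult who follows the story "very closely" passes a likely voter screen, the screen alone cannot lift CBS's 11% anywhere near 37%.

    # Illustrative extrapolation -- the likely voter share is an assumption, not Rasmussen's figure.
    cbs_very_closely_adults = 0.11   # CBS: "very closely" among all adults
    likely_voter_share = 0.60        # assumed fraction of adults who pass a likely voter screen

    # Most favorable case for the screen: every "very closely" follower is a likely voter.
    upper_bound_lv = cbs_very_closely_adults / likely_voter_share
    print(f"Upper bound among likely voters: {upper_bound_lv:.0%}")
    # -> about 18%, still roughly half of Rasmussen's 37%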

Whether you find that speculation convincing or not, the theory that more politically interested people "self select" into automated surveys is both logical and important. GWU Political Scientist John Sides put it succinctly in a blog post last year about an automated poll by PPP:

A reasonable question, then, is whether this small self-selected sample is -- even with sample weighting -- skewed towards the kind of politically engaged citizens who are more likely to think and act as partisan[s] or ideologues.

It is difficult to answer that question definitively, especially about Rasmussen's surveys, in a way that is based on hard empirical evidence and not just informed speculation. The reason is that the difference in mode (automated or live interviewer) is typically confounded by equally significant differences in question wording (examples here and here) or use of likely voter filtering by Rasmussen but not other polls. The Kagan example is helpful because the question wording is much closer, but the likely voter confound remains.

I have long argued that Rasmussen could help resolve some of this uncertainty by being more transparent about their likely voter samples, which dominate their releases to a far greater degree than almost any other media pollster. What questions do they use to select likely voters? What percentage of adults does Rasmussen's likely voter universe represent? What is the demographic composition of their likely voter sample, by age, gender, race, income? That sort of information is withheld even from Rasmussen's subscribers.

They could also start reporting more results among both likely voters and all adults. They call everyone, and they would incur virtually zero marginal expense in keeping all respondents on the phone for a few additional questions.

Back to Silver's post. He includes some extended discussion of the differences in methodology that might explain why political junkies would be more prone to self-select into Rasmussen's surveys than into those done by CBS. I have to smile a little because I outlined the same issues in a presentation at the Netroots Nation conference last August on a panel that happened to include Silver. I've embedded the video of that presentation below. It's well worth watching if you want more details on the stark methodological differences between Rasmussen and CBS News (my presentation begins at about the 52-minute mark; I review much of the same material in the first part of my Can I Trust This Poll series).

Finally, I want to follow up on two of Nate's comments. Here's the first:

I've never received a call from Rasmussen, but from anecdotal accounts, they do indeed identify themselves as being from Rasmussen and I'm sure that the respondent catches on very quickly to the fact that it's a political poll. I'd assume that someone who is in fact interested in politics are significantly more likely to complete the poll than someone who isn't.

I emailed Rasmussen's communications director and she confirmed that they do indeed identify Rasmussen Reports as the pollster at the beginning of their surveys.

But here's the catch: So does CBS. I also emailed CBS News polling director Sarah Dutton and she confirms that their scripted introduction introduces their surveys as being conducted by CBS News or (on joint projects) by CBS and the New York Times. According to Dutton, they "also tell respondents they can see the poll's results on the CBS Evening News with Katie Couric, or read them in the NY Times."

Second, he argues that CBS will "call throughout the course of the day, not just between 5 and 9, which happens to be the peak time for news programming." I checked with Dutton, and that's technically true: CBS does call throughout the day, up until 10 p.m. in the time zone of the respondent. However, they schedule most of their interviewers to work in the evenings. As such, most of their calling occurs during evening hours because, as Dutton puts it, "that's when people are at home."

More important, CBS makes at least 4 dials to each selected phone number over a period of 3 to 5 days, and they make sure to call back during both evening and daytime hours. The idea is to improve the odds of catching the full random sample at home. That said, if a pollster does not do callbacks -- and Rasmussen does not -- it's probably better to restrict their calling to the early evening because, again, that's when people are at home.
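A minimal sketch of the arithmetic behind callbacks (the per-dial contact rate below is an assumed figure, not CBS's): if each dial has some chance p of catching the selected respondent at home, four dials spread over several days reach far more of the drawn sample than a single attempt.

    # Chance of reaching a respondent at least once in k independent dials.
    # The per-dial probability is an assumption for illustration.
    def contact_probability(p, attempts):
        return 1 - (1 - p) ** attempts

    p = 0.35  # assumed chance of catching someone at home on any one dial
    for k in (1, 4):
        print(f"{k} dial(s): {contact_probability(p, k):.0%} of sampled numbers reached")
    # 1 dial: 35%; 4 dials: 82% -- a single-call design misses far more of the drawn sample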

But I don't want to get bogged down in the minutiae. The question of whether automated surveys have a bias toward interested and informed respondents is big and important, especially when we move beyond horse race polling to surveys on more general topics. I'm sure Nate Silver will have more to say on it. So will we.

 

Comments

Don't forget that automated polling has an advantage over live interviewers: there is less likelihood of interviewer bias. When there is a live human voice on the other end of the phone, respondents are more likely to respond with what they believe the interviewer wants to hear. Also, slight intonations by the interviewer can give cues to the respondent about how to answer. Many times these cues are subconscious.

So with automated polls, we could argue that the polls reflect a more honest response.

But the most important bias facing pollsters today is coverage. Neither pollster would reach me, since I don't have a landline phone, and I am not alone. The CDC has estimated that 37% of households either have no landline phone or effectively don't have one because they screen all the calls that come in on it.

____________________


Ptolemy:

About two weeks earlier, Gallup asked people to rate the Kagan nomination; only 24% had no opinion. This seems more consistent with Rasmussen's results than with CBS, and it's not likely that people will lose interest over time.

http://www.gallup.com/poll/127913/Initial-Reaction-Positive-Toward-Kagan-Nomination.aspx

____________________

mthornburg:

@ Jerry B.

Not to mention the significant social desirability bias that occurs when speaking to another human being. Especially on political questions that force a respondent to give an opinion, a respondent talking to another person may feel pressured to answer in a socially desirable way even if that is not his or her actual opinion.

Added to this is the fact that, due to social norms, people are much less likely to report their income to a live human being. I know that in exit polls, when they are done with a self-completed questionnaire, nonresponse on the income question is about 7-8%. When talking to a live interviewer, nonresponse jumps to 18%.

I am not presently aware of any research that examines automated phone polls versus phone polls with live interviewers, but it would be a worthwhile line of research.

____________________

GARY WAGNER:

So, since Rasmussen identifies themselves at the beginning of their surveys, the twitchy liberals obsessed with bashing Rasmussen every second of their waking moments have convinced the liberal sheep not to answer their questions.

The Rasmussen bashing has gotten worse than ridiculous. The constant barrage is ruining this site. I know this is a very left-wing liberal site, but I still came here to try to get some composite information. This obsession with Rasmussen and Fox News is ruining it. It's becoming nothing more than just another Kos or Huffington Post - but one that concentrates on polls.

____________________

John:

@Jerry and mthornburg

If automated phone polls provide more 'honest' answers, then at least for this question about Kagan's nomination, shouldn't more people be willing to say they are not paying attention? Yes, social pressure can distort live-interviewer polls, but it is also easier to tune out or just plain lie to an automated poll. I would call it a wash, although any research is always interesting. Agreed about the cell phones; it would be excellent if, at some point, this site could add a non-cell-phone poll filter to its already great charts, although I have no idea how hard that would be.

@Ptolemy:

The Gallup poll asks about Obama's nomination of Kagan, which is not quite the same thing as the Rasmussen/CBS question. Not to mention that as soon as Obama's name is mentioned in the question, a sizable percentage of both Democrats and Republicans will immediately have an opinion whether they know much about the issue or not.

Gary Wagner
"I know this is a very left-wing liberal site"

Do you mean the site is, or the comments? I have always found that, whatever the political persuasion of the site's authors, they have tried to stay neutral and done a very good job of it. The comments can be very partisan in both directions, but it's a free world with, thankfully, free speech, and as long as the comments are civil....

____________________

mthornburg:

@ Gary Wagner:
All research benefits from transparency. There is certainly nothing hysterical or wrong about asking a polling firm with significant house effects and a different methodology to disclose its procedures.

@John:
What you are referring to is "satisficing": a live interviewer can keep a respondent from skipping or shortchanging questions that take some thought (because "they want to get this damn survey over with") and can draw out a more considered response. Typically, satisficing will lower response to "hard" questions on self-completed questionnaires, while social desirability with a live interviewer will lower response to questions like respondent income or extract a dishonest response on questions with a strong social norm, such as those dealing with race.

I'm only familiar with research comparing self-completed questionnaires to live interviewers so I can't say with certainty how the mode effects for different types of phone surveys work. It would be an interesting topic for research.

____________________

hobetoo:

@Mark: interesting tape from Netroots. Thanks for linking to it. My only comment is about your response to a question that asked about validation of likely voter responses. I think you said that what people say about whether they plan to vote is not a very reliable predictor. And you mentioned the ANES vote validation studies in this connection.

I think a more accurate summary of the research based on the ANES vote validation studies would be this:

(a) on average in pre-election polls, people are likely to overstate the likelihood of their actually voting.

(b) people who say before the election that they do not plan to vote rarely do in fact vote. So such individuals are almost certainly UNlikely voters (probably especially in polls taken after the close of voter registration rolls).

(c) the probability that a person who says that they do plan to vote will actually vote is predictable from some of their other answers (e.g., SES, past voting record, strength of partisan ID, and concern about the electoral outcome).

These conclusions can be drawn from published analyses of the ANES vote validation studies. But to my knowledge they have not been used by survey organizations systematically to "estimate" the likelihood that the survey respondents will vote in the coming election. The pollsters do, of course, use some information obtained from the respondents to determine the likelihood that the respondent will vote. But (unfortunately) this information is (too) often used as a screen for excluding respondents from the survey, rather than in a model to predict whether the person will vote.

I would add that other than the ANES vote validation studies and a few other studies of the validity of survey responses (involving a comparison of responses with some kind of records -- e.g., some work by the Chicago people many years ago that tested whether different methods of survey administration affected the reporting of socially desirable or socially stigmatizing behavior), there aren't a whole lot of true validation studies of survey responses.

____________________

hobetoo:

I need to edit what I wrote above:

I think a more accurate summary of the research based on the ANES vote validation studies would be this:

(a) on average in post-election surveys, people are likely to overstate the likelihood of their having actually voted (perhaps on the order of 10% to 25% overstatement on average in major elections).

(b) people who say in a pre-election survey that they do not plan to vote rarely do, in fact, vote (based on a check of the records in the voting precincts). Such individuals are almost certainly unlikely voters (probably, I would speculate, especially in polls taken after the close of the voter registration rolls). Thus their "likely voting behavior" is highly predictable from their responses.

(c) the probability that respondents who say in a pre-election survey that they do plan to vote will actually vote is fairly predictable from some of their other answers (viz., SES, past voting history, strength of partisan ID, and concern about the current electoral outcome).

These conclusions can be drawn from published analyses of the ANES vote validation studies. But to my knowledge they have not been used by survey organizations systematically to "estimate" the likelihood that the survey respondents will vote in the coming election. The pollsters do, of course, use some information obtained from the respondents to determine the likelihood that the respondent will vote. But (unfortunately) this information is (too) often used as a screen for excluding respondents from the survey, rather than to predict whether the pre-election respondents will vote.

I would add that other than the ANES vote validation studies and a few other studies of the validity of survey responses (involving a comparison of responses with some kind of records -- e.g., some work by the NORC researchers many years ago that tested whether different methods of survey administration affected the accuracy of reporting of socially desirable or socially stigmatizing behavior), there aren't a whole lot of true validation studies of survey responses.

____________________


