

So What's a Likely Voter? Answers from Rasmussen and PPP

Topics: Automated polls , IVR , Likely Voters , PPP , Rasmussen

I spent the morning at a Midterm Election Preview panel discussion sponsored by our competitor colleagues at the CQ Roll Call Group that featured pollsters Peter Brown of the Quinnipiac University Polling Institute, Tom Jensen of Public Policy Polling and Scott Rasmussen of Rasmussen Reports. During the question-and-answer period I asked a question about my favorite hobby-horse: what a "likely voter" is and how pollsters select them.

I directed the question (which begins at about the 1:00 mark) at Rasmussen and Jensen largely because their national surveys on presidential job approval and other issues are among the few that currently report results for likely voters or "voters" and because their reports provide little definition of those terms. The persistent and noticeable "house effect" in the Rasmussen results has led some to conclude that they are "polling a different country than other polling outfits."

I promise a longer post tomorrow summarizing my take on why Rasmussen is different, but since I'm running out of blogging time today, here are the verbatim answers from earlier today followed by a few comments. First, Scott Rasmussen of Rasmussen Reports:

First of all, we actually do have something in our daily presidential tracking poll that says that it's likely voters not adults, and we do have a link to a page that explains something about the differences, maybe not as concisely or as articulately as I will say here...

There's a challenge to defining a likely voter. The process for us is a little different in the week before an election than it is two months before an election, or a year before an election. And to give a little history, normally if you would go do a sample of all adults, you go and interview whoever picks up the phone and you model your population sample to the population at large. When you begin to sample for likely voters you do it by asking a series of screening questions.

At this point in time, we use a fairly loose screening process, in the sense that we don't ask details about how certain you are to vote in a particular election next November. In fact, even the term "likely voters" is probably not the best term. I used to use the phrase "high propensity voters," because it suggested that these were the people most likely to show up in a typical mid-term election. We're not claiming this is a particular model of who will show up in 2010. When we used the phrase "high propensity voters" -- I got a bunch of journalists who wrote back saying, "what does that mean?" I tried to explain it and they said, "oh you mean likely voters." So I finally just gave up.

Now for us [what] happens is that from this point in time, from now until Labor Day right before the election we will continue to use this model. These are people who are generally likely to show up in a mid-term election. When we get closer to the election, we add additional screens based on their interest in the election and their certainty of voting in this particular race and so the number does get more precise.

What does it mean in practical terms? Rasmussen Reports and Gallup are the only two polls out there with a daily tracking poll of the President's job approval. If you go back from January 20th on, most of the time you will see that Gallup's reported number is about three or four or five points higher than ours, because these are surveys and there is statistical noise. Sometimes the gap is bigger, sometimes it's smaller. In fact there are some days when our number is a little bit higher than Gallup's. But typically, the gap between the adults and the likely voter sample is in the four or five point range.

The reason: Likely voters are less likely to include young adults, people who [as] Tom mentioned were very supportive of the President. They are less likely to include minority voters who are, again, very strongly supportive of this President. And so the gap is consistent.

Now I would explain that, at this point in time, it's a little like the difference between measuring something in inches or in meters, inches or in centimeters: the trends are the same in both cases, the implications are the same in both instances. And, by the way, the ultimate answers are that Republicans strongly disapprove of this President, Democrats strongly approve of this President, and independent voters have grown a little bit disenchanted, but they're not anywhere near the level of discontent that Republicans show. And that's true whether you measure it with likely voters or adults.

Next, Tom Jensen of PPP:

Well, I'll give a very concise answer. For our national polls, we're just pulling a list from Aristotle Incorporated of registered voters, period. We don't do any sort of likely voter sampling on our national polls. On our state level polls for 2010 races, we're polling lists of people who voted in the 2004, 2006 or 2008 general elections. If we were a live interviewer pollster that would be too liberal a sampling criteria, but we do automated polling and people who don't tend to vote in an election aren't going to answer an automated poll, so they just hang up. So we figure the 2008 wave voters we should be calling because some of them will come out in 2010, and those who will not, just hang up.

A few quick notes. First, very little of Rasmussen's explanation of his voter screen appears on the Rasmussen Reports methodology page (the one that's linked to from their daily presidential tracking poll). Second, I'm still not quite clear on the question or questions that they currently use to screen for likely voters, although he implies that they ask a question about how often respondents typically vote. I understand that media pollsters often treat these screen questions like a proprietary "secret sauce," although the partisan pollsters that rely on screen questions, including Democracy Corps, Resurgent Republic and Public Opinion Strategies, typically include them in their filled-in questionnaires. Rasmussen Reports could help consumers of its data better understand "what country they are polling" if they did the same.
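For readers who have never seen one, a likely-voter screen is, mechanically, just a filter applied to respondents based on their answers to one or more questions. A minimal sketch of the idea -- the question wording, answer codes, and cutoff below are purely hypothetical, not Rasmussen's (or anyone's) actual screen:

```python
# Hypothetical likely-voter screen. The two screen questions, their answer
# codes, and the cutoff are invented for illustration; real pollsters'
# screens are often proprietary.

def is_likely_voter(respondent):
    """Return True if the respondent passes the (hypothetical) screen."""
    # Q1: "How often do you vote in elections?"  (4 = always ... 1 = never)
    # Q2: "How certain are you to vote this November?" (4 = certain ... 1 = not)
    score = respondent["vote_frequency"] + respondent["certainty"]
    return score >= 6  # arbitrary cutoff chosen for this example

sample = [
    {"id": 1, "vote_frequency": 4, "certainty": 4},  # habitual, certain
    {"id": 2, "vote_frequency": 2, "certainty": 3},  # occasional voter
    {"id": 3, "vote_frequency": 4, "certainty": 2},  # habitual, unsure
]

# Only respondents clearing the cutoff appear in "likely voter" results.
likely_voters = [r for r in sample if is_likely_voter(r)]
print([r["id"] for r in likely_voters])  # prints [1, 3]
```

Disclosing the screen questions amounts to disclosing this filter, which is exactly why publishing them in filled-in questionnaires lets readers judge "what country" a pollster is polling.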

Finally, about Jensen's comment that "people who don't tend to vote in an election aren't going to answer an automated poll, so they just hang up": He assumes that to be true -- and it's a perfectly reasonable assumption -- but I am not sure anyone has produced hard evidence yet that non-voters "just hang up." If they do, however, it calls into question the wisdom of assuming that an initial sample of adults called with an automated poll is really a sample of all adults (a question I've wondered about for years, even for pre-election surveys conducted with live interviewers).



Mark, you are right on the "mark" to call out Jensen's unverified assertion about self-selection of respondents in off-year polls.

I think this is an important question that needs more attention. First, how does self-selection of the (politically, electorally) motivated respondents shift with the electoral context? Second, does the partisan/ideological "enthusiasm gap" skew polling results (whether or not a likely voter model is applied), perhaps more so during minor or off-year elections than in major ones?

Related to this, isn't one of AAPOR's recommendations that pollsters report response rates and cooperation rates for polls? I'm not sure we ever see those from most (non-academic) pollsters. If those are available, then what is the relationship between RR or cooperation rate and the "accuracy" of polling results? (I've never heard a major electronic or print media report of a poll say anything about the RR, only about the so-called margin of error.)

I think some of the research by Pew has shown that elaborate calling protocols involving multiple attempts to reach people in the sample basically aren't worth the cost. Do pollsters who just make one call, ask a screening question to find an eligible respondent, and then pop a few questions do about as well (cost aside) as those that make up to 12 (or more) calling attempts, including efforts to persuade people to participate, in providing good polling results or, in particular, in finding "likely voters"?


Mark Blumenthal:


I agree that this question is important, and I too wonder whether the occasional "enthusiasm gap" might create non-response problems for non-election issue polling. Two specific answers:

First, while AAPOR does more than recommend disclosure of response rates -- the AAPOR code mandates it -- very few public pollsters disclose response rates. See my column on this subject for more.

Second, Pew's two response rate studies (the most recent in 2003) compared their standard methodology (involving multiple attempts to each sampled number over a five-day field period) to a much more "rigorous" method (including advance letters offering monetary incentives and many more attempts over a five-month field period) that produced a much higher response rate. They found few differences in results across a long list of measures of political values and attitudes.

We should remember, however, that "standard" Pew Research methodology is a lot more rigorous than Rasmussen's. I wrote about that in more detail in this post in August.



Mark: Thank you for your detailed response. I see I have some reading to do based on your links.

