
Follow-up: Does Mode Make a Difference?

Topics: IVR Polls, Likely Voters, Measurement

About a month ago, I wrote a post about the fairly obvious and consistent differences among pollsters on the Barack Obama job approval question -- what we usually refer to as "house effects." At issue is that two of the national pollsters that have produced consistently lower scores for Obama use an automated, recorded voice to ask questions rather than live interviewers. My argument was that we should not overlook the other factors that might also explain the house effects in evidence on our job approval chart.

One admittedly far-fetched hypothesis I floated to explain the consistently lower approval scores produced by Public Policy Polling (PPP), one of the automated pollsters, is that they ask a slightly different question: Most of the others ask respondents if they "approve or disapprove of the way Barack Obama is handling his job as president." PPP asks if they "approve or disapprove of Barack Obama's job performance" (emphasis mine). I wondered if "some respondents might hear 'job performance' as a question about Obama's performance on the issue of jobs," and suggested that they conduct an experiment to check.

Well, it turns out that the folks at PPP took my advice. They randomly split their most recent North Carolina survey (pdf) in two. The full survey interviewed 686 registered voters, so each half sample had roughly 340 interviews. One random half-sample heard their usual question (rate "Barack Obama's job performance"). The other half heard the more standard question (rate "the way Barack Obama is handling his job as president"). According to PPP's Tom Jensen, the two versions "actually came out completely identical- 51 [percent approve] / 41 [percent disapprove] on each."
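As a rough gauge of how large a wording effect half-samples of this size could even detect, here is a back-of-the-envelope sketch in Python. This is my own illustration, not anything PPP published; it simply uses the 686-interview total and the 51 percent approval figure from the survey.

```python
import math

# Back-of-the-envelope check on a split-sample experiment
# (illustration only; PPP did not publish this calculation).
n_half = 686 // 2          # ~343 respondents heard each question wording
p_approve = 0.51           # approval observed on both versions

# Standard error of the difference between two independent proportions
se_diff = math.sqrt(2 * p_approve * (1 - p_approve) / n_half)
moe_95 = 1.96 * se_diff    # 95% margin of error on the difference

print(f"SE of the difference: {se_diff:.3f}")
print(f"95% margin of error on the difference: +/- {moe_95 * 100:.1f} points")
# Roughly +/- 7.5 points, so only a fairly large wording effect would have
# been detectable with half-samples of this size; identical results are
# consistent with little or no wording effect.
```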

So much for my theory. That said, the bottom line from last month's post remains the same:

While tempting, we cannot easily attribute to [the automated methodology] all of the apparent difference in Obama's job rating as measured by Rasmussen and PPP on the one hand, and the rest of the pollsters on the other. There are simply too many variables to single out just one as critical.

To review, let's quickly list a few (I discussed most in the original post).

1) Population. Rasmussen interviews "likely voters;" PPP interviews registered voters. Most of the other national media polls interview and report on all adults, although a handful (most notably Fox/Opinion Dynamics, Quinnipiac, Diageo/Hotline, Cook/RT Strategies and Resurgent Republic) report results from registered voters.

Alert reader Tlaloc suggested that while our charts allow easy filtering by mode (live interviewer, automated, etc.), it would be even more useful to filter by population. We will add that feature to our to-do list. Meanwhile, Charles Franklin prepared the chart below, which shows three solid (loess regression) trend lines for Obama's approval percentage. Black shows the polls of all adults, blue shows the polls of registered voters (including PPP, whose individual releases are designated with blue triangles) and red shows the Rasmussen Reports results.

[Chart: ObamaAppovalByPop.png -- Obama approval trend lines by survey population]

As the chart shows, the three categories produce consistently different estimates of Obama approval, with Rasmussen lowest, adult surveys highest and registered voter surveys somewhere in the middle. Moreover, the three PPP surveys are closer to the Rasmussen result than the other registered voter surveys (and we omitted the small handful of other pollsters besides Rasmussen that report Obama approval among "likely voters").
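For readers curious how trend lines like Franklin's are put together, here is a minimal sketch of one way to fit a separate loess curve to each population group. The data file and column names are my own invention for illustration; this is not the code behind the chart above.

```python
# Illustrative sketch: group approval results by survey population and
# fit a loess (lowess) curve to each group over time.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical file with columns: date, approve, population
polls = pd.read_csv("obama_approval_polls.csv")
polls["date"] = pd.to_datetime(polls["date"])

colors = {"adults": "black", "registered voters": "blue", "likely voters": "red"}
for group, color in colors.items():
    subset = polls[polls["population"] == group]
    ordinals = subset["date"].map(pd.Timestamp.toordinal)
    smoothed = lowess(subset["approve"], ordinals, frac=0.3)
    dates = [pd.Timestamp.fromordinal(int(x)) for x in smoothed[:, 0]]
    plt.plot(dates, smoothed[:, 1], color=color, label=group)
    plt.scatter(subset["date"], subset["approve"], color=color, s=10, alpha=0.4)

plt.ylabel("Obama approval (%)")
plt.legend()
plt.show()
```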

2) Question format. If you scan the "undecided" column of our table of recent Obama job approval results (and really that should be "not sure" -- another item for our to-do list), you will see quite a lot of variation. Although Rasmussen rarely reports a specific result, they usually have only a percentage point or so that is neither approve nor disapprove. The unsure percentages for CNN/ORC, ABC/Washington Post, AP/GfK and Ipsos/McClatchy tend to be in the low single digits. PPP has produced an unsure response of 6-8 percent. Meanwhile, pollsters like Pew Research Center, CBS News and Fox/Opinion Dynamics typically produce unsure responses over 10 percentage points.

The reason for the variation is usually some combination of question format features: the number of answer choices offered, whether the pollster offers an explicit "unsure" category and whether they push respondents who are initially reluctant to answer the question. The point is not that any particular method is right or wrong, but that these differences matter.

3) Sample frame. PPP is unlike virtually all of the other national pollsters in that they sample from a list of registered voters culled from voter rolls. Phone numbers are usually obtained by attempting to match names and addresses to listed telephone directories. As such, a significant number of selected voters are not covered -- PPP does not say in their public releases how many are missed. That difference in coverage may also contribute to the apparent house effect.

4) Live interviewer vs. automated telephone. If we could easily control for the first three factors, we might be able to reach some conclusion about whether the lack of a live interviewer produces an effect of its own. In other words, holding all other factors equal, do some respondents provide a different answer to the job approval question when it is asked by an automated method rather than a live interviewer? Unfortunately, we have national results on Obama job approval from just three pollsters that use the automated phone mode (Rasmussen, PPP and SurveyUSA -- and just one poll from the latter).
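To make "holding all other factors equal" concrete, here is a hypothetical sketch of how one might adjust for the measured differences with a simple regression of each poll's approval number on indicators for mode, population and question format. The data file and every column name are invented for illustration, and nothing here stands in for the analysis any of these pollsters actually run.

```python
# Hypothetical sketch: estimate the automated-vs-live gap while adjusting
# for other measured differences between polls.
import pandas as pd
import statsmodels.formula.api as smf

# Invented columns: approve (reported %), mode ("live"/"automated"),
# population ("adults"/"registered"/"likely"), pushes_undecided (bool),
# days_since_inauguration (int)
polls = pd.read_csv("obama_approval_polls.csv")

model = smf.ols(
    "approve ~ C(mode) + C(population) + C(pushes_undecided)"
    " + days_since_inauguration",
    data=polls,
).fit()
print(model.summary())
# The coefficient on C(mode) would estimate the automated-vs-live difference
# after adjusting for the other measured factors -- though with only three
# automated pollsters in the data, it would be hard to separate a true mode
# effect from those pollsters' other house characteristics.
```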

The above is not an exhaustive list of the possible reasons for pollster house effects. So again, it is next to impossible to reach any firm conclusions about the automated mode alone. Also, as I concluded last month (and it bears repeating):

Just because a pollster produces a large house effect in the way they measure something, especially in something relatively abstract like job approval, it does not follow automatically that their result is either "wrong" or "biased" (a conclusion some readers have reached and communicated to me via email), only different. Observing a consistent difference between pollsters is easy. Explaining that difference is, unfortunately, often quite hard.

 

Comments
sfcpoll:

This is a critically important discussion for polling today, thank you for continuing to post on this and thanks to PPP for running the experiment.

A wildly interesting study would be to run parallel live-interviewer and IVR surveys with the same questions, sampling frame, and population reported, maybe on a lengthy questionnaire, to be able to compare the viability of IVR surveys beyond a horse-race question.

____________________


