Pollster.com

August 2, 2009 - August 8, 2009

 

Geeks Rule 'Outliers'

Topics: Outliers Feature

Ed Kilgore notices the sudden disappearance of a much touted pro-life shift; Steve Benen has more.

Paul Krugman smacks Rasmussen for "what looks like slanted polling;" Rasmussen responds.

Scott Rasmussen summarizes his health care polling in a Wall Street Journal op-ed; also lands a big new investor.

Sarah Dutton reviews the stark partisan divide on health care goals.

Drew Altman sums up the "what's in it for me" question on health care reform.

Steve Benen speculates about the age divide on health care reform.

Ezra Klein thinks the health care polling dynamic is nothing new.

Michael Barone says Obama's health care persuasion is working in the wrong direction.

Daniel Stone wonders if any politicians are more popular now than six months ago.

Steve Chaggaris ponders Obama's recent job approval and "what it all means."

Chris Bowers doubts Obama's declining approval ratings change a zero-sum, two-party game.

Tom Jensen asks if Obama is in a worse position than election day; also catches a gruesome ABC News typo.   

Neil Newhouse sees a "perfect storm" brewing for Obama.

Alex Bratty says the Obama honeymoon is over (Part II here).

The Eagleton Poll hires Iowa's David Redlawsk.

A Public Opinion Quarterly article features a memorable headline.

Statisticians rule the universe (or at least they will soon; via Jackman).


US: National Survey (Kos 8/3-6)


Daily Kos (D) / Research 2000
8/3-6/09; 2,400 adults, 2% margin of error
Mode: Live telephone interviews
(Kos release)

National

Favorable / Unfavorable

Obama: 60 / 37 (chart)
Pelosi: 35 / 57
The Democratic Party: 44 / 49
The Republican Party: 18 / 73

State of the Country

41% Right Direction, 54% Wrong Track (chart)


'Can I Trust This Poll?' - Part I

Topics: Can I Trust This Poll , IVR , Michael Traugott , Nate Silver , Netroots Nation , Rasmussen , Sampling

What makes a public opinion poll "scientific?" If you had asked that question of a random sample of pollsters when I started my first job at a polling firm twenty-three years ago, you would have heard far more agreement than today. Now, many more pollsters are asking fundamental questions about the "best practices" of our profession, and their growing uncertainty makes it ever harder to answer the question I hear most often from readers of Pollster.com: "Can I trust this poll?"

Let's take a step back and consider the elements that most pollsters deem essential to obtaining a high quality, representative survey. The fundamental principle behind the term "scientific" is the random probability sample. The idea is to draw a sample in a way that every member of the population of interest has an equal probability of being selected (or at least, to be a little technical, a probability that is both known and greater than zero). As long as the process of selection and response is truly random and unbiased, a sample of a thousand or a few hundred will be representative within a predictable range of variation, popularly known as the "margin of error."
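
To make that "predictable range of variation" concrete, here is a minimal sketch of the textbook margin-of-error calculation for a simple random sample. The 95% confidence level and the worst-case assumption of p = 0.5 are conventions I am assuming for illustration, not the procedure of any particular pollster:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of roughly 1,000 gives the familiar +/- 3 points...
print(round(100 * margin_of_error(1006), 1))  # ~3.1
# ...and even a few hundred interviews stay within a predictable range.
print(round(100 * margin_of_error(400), 1))   # ~4.9
```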

Pollsters disagreed with each other, even twenty or thirty years ago, about the practical steps necessary to conduct a random sample of Americans. However, at the dawn of my career, at a time when at least 93% of American households had landline telephone service, pollsters were much closer to consensus on the steps necessary to draw a random, representative sample by telephone. Those included:

  • A true random sample of known working telephone numbers produced by a method known as "random digit dial" (RDD) that randomly generates the final digits in order to reach both listed and unlisted phones (see the sketch following this list).  
  • Coverage of the population in excess of 90%, possible by telephone only with RDD sampling (in the pre-cell-phone era) but almost never (decades ago) through official lists of registered voters.
  • Persistence in efforts to reach selected households. Pollsters would call at least 3 or 4 different times on at least 3 or 4 successive evenings in order to get those who might be out of the house on the first or second call.   
  • A "reasonable" response rate (although, then as now, pollsters differed over what constitutes "reasonable").
  • Random selection of an individual within each selected household, or at least a method closer to random than just interviewing the first person to answer the phone, something that usually skews the sample toward older women.
  • The use of live interviewers -- preferred for a variety of reasons, but among the most important was the presumed need for a human touch to gain respondent cooperation.
  • Weighting (or statistically adjusting) to correct any small bias in the demographic representation (gender, age, race, etc.) as compared to estimates produced by the U.S. Census, but never weighting by theoretically changeable attitudes like party identification (also illustrated in the sketch below).
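
To illustrate the first and last items on that list, here is a rough sketch of RDD number generation and a simple one-variable demographic weight. The telephone exchanges and gender shares are invented for the example; a real sample would be drawn from a frame of known working blocks, and real weighting involves several variables at once:

```python
import random

# Hypothetical exchanges (area code + prefix). A production RDD frame
# would be built from a database of known working telephone blocks.
SEED_EXCHANGES = ["202555", "313555", "415555"]

def rdd_sample(n, exchanges=SEED_EXCHANGES):
    """Draw n phone numbers by appending random final digits to working
    exchanges, reaching listed and unlisted households alike."""
    return ["{}{:04d}".format(random.choice(exchanges),
                              random.randrange(10000))
            for _ in range(n)]

def demographic_weights(sample_share, census_share):
    """Weight each demographic cell by population share / sample share,
    e.g. to repair the usual small skew toward older women."""
    return {cell: census_share[cell] / sample_share[cell]
            for cell in census_share}

print(rdd_sample(3))
# Invented example: men are 48% of adults but only 42% of completes.
print(demographic_weights({"men": 0.42, "women": 0.58},
                          {"men": 0.48, "women": 0.52}))
```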

I am probably guilty of oversimplifying. Pollsters have always disagreed about the specifics of some of these practices, and they have always adopted different standards. Still, from my perspective, these characteristics are the hallmarks of quality research for many of my colleagues -- especially those I see every year at the conferences of the American Association for Public Opinion Research (AAPOR; for more detail see their FAQ on random sampling).

The application of these principles has shifted slightly in recent years, even among traditionalists, in two ways: First, pollsters are no longer convinced that a low response rate means a skewed sample. As described in my column last week, pollsters have learned that some efforts to boost response rates can actually make results less accurate. Second, to combat the rapid declines in coverage posed by "cell phone only" households, many national media pollsters now also interview Americans via mobile phone, using supplemental samples of cell phone numbers to boost sample coverage back above 90%. But by and large, traditional pollsters still use the same standards to define "scientific" surveys as they did 20 or 30 years ago.

A new breed of pollsters has come to the fore, however, that routinely breaks some or all of these rules. None exemplifies the trend better than Scott Rasmussen and the surveys he publishes at RasmussenReports.com. Now I want to be clear: I single out Rasmussen Reports here not to condemn their methods but to make a point about the current state of "best practices" of the polling profession, especially as perceived by those who follow and depend on survey data.

When it comes to sampling and calling procedures, Rasmussen is consistent with the framework I describe in only one respect: They use a form of random-digit-dial sampling to select telephone numbers (although Rasmussen's methodology page says only that "calls are placed to randomly-selected phone numbers through a process that ensures appropriate geographic representation"). But in other ways, Rasmussen's methods differ: They use an automated, recorded voice methodology rather than live interviewers. They conduct most surveys in a single evening and never dial a selected number more than once. They routinely weight samples by party identification. They cannot interview respondents on their mobile phones (something not allowed via automated methods) and thus achieve a coverage rate well below 90%.

If you had described Rasmussen's methods to me at the dawn of my career, I probably would have dismissed them the way my friend Michael Traugott, a University of Michigan professor and former AAPOR president, did nine years ago. "Until there is more information about their methods and a longer track record to evaluate their results," he wrote, "we shouldn't confuse the work they do with scientific surveys, and it shouldn't be called polling."

But that was then. This year Traugott chaired an AAPOR committee that looked into the pre-election polling problems in New Hampshire and other presidential primary states in 2008. Their report concluded that use of "interactive voice response (IVR) techniques made no difference to the accuracy of estimates" in the primary polls. In other words, automated surveys, including Rasmussen's, were "about equally accurate" in the states they examined.

Consider also the analysis of Nate Silver. On his website Fivethirtyeight.com last year, he approached the issue of survey quality from the perspective of the accuracy of the results rather than their underlying methodology. He gathered past polling data from 171 contests for President, Governor and Senate fielded since 2000 and calculated accuracy scores for each pollster. His study rated Rasmussen as the third most accurate of 32 pollsters, just behind SurveyUSA, another automated pollster. When he compared eight daily tracking polls last fall, Rasmussen ranked first in Silver's accuracy ratings. He concluded that Rasmussen, "with its large sample size and high pollster rating -- would probably be the one I'd want with me on a desert island."

The point here is not to praise or condemn any particular approach to polling but to highlight the serious issues now confronting the profession. Put simply, at a time when pollsters are finding it harder to reach and interview a representative sample, the consumers of polling data do not perceive "quality" the same way that pollsters do. Moreover, the success of automated surveys in estimating election "horse race" results, and the ongoing transition in communications technology and the way Americans use it, have left many pollsters struggling to agree on best practices and questioning some of the orthodoxies of the profession.

The question for the rest of us, in this period of transition, remains the same: How do we know which polls to trust? I have two suggestions and will take those up in subsequent posts. 

Update: Continue reading Part II and Part III.

[Note: I will be participating in a panel at next week's Netroots Nation conference on "How to Get the Most Out of Polling." This post, and hopefully two to follow, are a preview of some of the thoughts I am hoping to share].


NJ: Christie 50 Corzine 37 (Rasmussen 8/4)



Rasmussen
8/4/09; 500 likely voters, 4.5% margin of error
Mode: IVR
(Rasmussen release)

New Jersey

Favorable / Unfavorable
Chris Christie (R): 49 / 42
Jon Corzine (D): 37 / 62 (chart)

Job Approval / Disapproval
Pres. Obama: 56 / 43 (chart)
Gov. Corzine: 37 / 63 (chart)

2009 Governor
Christie 50%, Corzine 37% (chart)


NJ: Christie 48 Corzine 40 (Kos 8/3-5)


Daily Kos (D) / Research 2000
8/3-5/09; 600 likely voters, 4% margin of error
Mode: Live telephone interviews
(Kos release)

New Jersey

Favorable / Unfavorable
Jon Corzine (D): 35 / 56 (chart)
Chris Christie (R): 44 / 29
Barack Obama: 62 / 31 (chart)

2009 Governor
Christie 48%, Corzine 40% (chart)


US: News Interest (Pew 7/27-8/2)


Pew Research Center
7/27-8/2/09; 1,013 adults
Mode: Live telephone interviews
(Pew Release)

National

Most Closely Followed Story

36% Debate over health care reform
15% Condition of the U.S. economy
15% Controversy surrounding Michael Jackson's death

How much have you heard about some people who claim that Barack Obama was not born in the U.S. and therefore not eligible to be president?

31% A lot, 49% A little, 19% Nothing at all

Do you think news organizations have given too much, too little, or the right amount of attention to the people who claim that Barack Obama was not born in the U.S.?

41% Too much, 28% Too little, 24% Right amount


VA: McDonnell 51 Deeds 43 (Kos 8/3-5)


Daily Kos (D) / Research 2000
8/3-5/09; 600 likely voters, 4% margin of error
Mode: Live telephone interviews
(Kos release)

Virginia

Favorable / Unfavorable

Creigh Deeds (D): 46 / 40
Bob McDonnell (R): 57 / 38
Barack Obama: 51 / 44 (chart)

2009 Governor
McDonnell 51%, Deeds 43% (chart)


IL: 2010 Sen Primary (GQR 7/28-8/2)


Greenberg Quinlan Rosner (D)
7/28-8/2/09; 387 likely Democratic voters, 5% margin of error
Mode: Live telephone interviews
(Politico post)

Illinois

2010 Senate: Democratic Primary
Alexi Giannoulias 45%, Chris Kennedy 17%, Cheryle Jackson 13%
Giannoulias 51%, Jackson 21%


US: National Survey (Ipsos 7/30-8/3)


Ipsos
7/30-8/3/09; 1,006 adults, 3.1% margin of error
Mode: Live telephone interviews
(Ipsos release)

National

State of the Country
46% Right Direction, 48% Wrong Track (chart)

Obama Job Approval
58% Approve, 37% Disapprove (chart)
Dems: 82 / 15 (chart)
Reps: 24 / 70 (chart)
Inds: 57 / 34 (chart)

Party ID
35% Democrat, 22% Republican, 44% independent (chart)


US: National Survey (Quinnipiac 7/27-8/3)


Quinnipiac
7/27-8/3/09; 2,409 registered voters, 2% margin of error
Mode: Live telephone interviews
(Quinnipiac release)

National

Obama Job Approval

50% Approve 42% Disapprove (chart)
Reps: 16 / 77 (chart)
Dems: 85 / 9 (chart)
Inds: 45 / 45 (chart)
Economy: 45 / 49 (chart)
Foreign Policy: 52 / 38 (chart)
Health Care: 39 / 52 (chart)

State of the Country
35% Satisfied, 64% Dissatisfied (chart)

Economy:
0% Excellent, 6% Good, 49% Not So Good, 44% Poor (chart)
28% Getting Better, 29% Getting Worse (chart)


US: National Survey (CNN 7/31-8/3)


CNN / Opinion Research Corporation
7/31-8/3/09; 1,136 adults, 3% margin of error
Mode: Live telephone interviews
(CNN article)

National

Obama Job Approval

56% Approve, 40% Disapprove (chart)

Favorable / Unfavorable

Obama: 64 / 34 (chart)
Joe Biden: 47 / 33
Hillary Clinton: 61 / 35


On Rasmussen's Index and Intensity of Opinion

Topics: Approval Ratings , Barack Obama , Glen Bolger , Rasmussen

Last week, TPM's Eric Kleefeld examined the "Presidential Approval Index" reported daily by Rasmussen Reports. Rasmussen calculates the index by subtracting the percentage that "strongly disapprove" of the job Barack Obama is doing as president from the percentage that "strongly approve." Because the index has been negative since early July, it should come as little surprise that conservative web sites have frequently reproduced the chart of the index featured in Rasmussen's daily analysis.

Kleefeld asked me and a few others about the Rasmussen index:

Mark Blumenthal of Pollster.com said he didn't know of anyone who had previously given this as a prominent "index." "If Obama now has more strong detractors than strong supporters, that is politically meaningful (though contrary to the results of the recent ABC/Washington Post polls, to pick one example)," said Blumenthal. "But to report only those who strongly approve or strongly disapprove of Obama while neglecting mention of the aggregate numbers strikes me as more political spin than analysis."

I stand by the gist of that comment but, to borrow an Obama-inspired euphemism, I'll concede that I might have "better calibrated" my use of the word "report." Yes, Rasmussen reports the total percentages of those that approve and disapprove on their website. We dutifully enter those percentages into our chart every day, and anyone who looks for the percentages for total approval and disapproval can find them on RasmussenReports.com.

My point was more about the emphasis that Rasmussen Reports gives the "index." Their daily analysis, for example, typically reports the latest value of the index in the first paragraph but buries reference to the total approval number at the end, often following results from other questions. During July, Rasmussen's Twitter updates reported on the latest index numbers at least 20 times (including the percentages that "strongly approve" and "strongly disapprove"), but never once mentioned the total approval percentages.

Since Kleefeld's piece appeared, they have started citing the total approval number in their Twitter feed and now include a total approval chart in the daily analysis. So let's at least give Rasmussen credit for responding to criticism.

And let's move on to the substance. Rasmussen told Kleefeld that he began breaking out strong approval and disapproval numbers partly because of his theory that intensity of opinion would matter more in a mid-term election cycle like 2009-2010:

"I know the intensity by the time we get to 2012 won't matter as much as the overall number. What I don't know, and what we're unsure of, is what it does in 2010," said Rasmussen. "Clearly, if the President's numbers are down from where they are now, whether you mean overall or the index, it's going to be more difficult for Democrats to do well in the midterms. And I don't know, but I suspect, that if the intensity gap is strong it will hurt them. It definitely hurt the Republicans in 2006."

For now, Rasmussen said the usefulness of the strong approval-disapproval index could become more apparent over the coming recess. Members of Congress will go home and hear a lot from constituents who are heavily in favor of Obama's proposals, or heavily against them. "They're probably not gonna hear from people in August who are kind of lukewarm," he said. "Now I'm not saying whether that's a healthy dynamic, but I'm saying people who are more passionate get heard more."

That hypothesis is reasonable. The ongoing debate about those who choose to attend town-hall meetings is whether they reflect a real wave of intense anger at the Obama administration, and Rasmussen is not the only pollster to see Obama's strong disapproval rating rise. Just yesterday, Republican pollster Glen Bolger blogged about the signs of a "tide of intensity" he sees "moving solidly against Barack Obama" in the recent survey that he and Democrat Stan Greenberg conducted for NPR.

To put this issue into some perspective, I gathered all of the recent surveys that probed the intensity of feeling regarding President Obama's job rating. The table below includes results from five pollsters based on surveys fielded in the second half of July (I averaged the results for Rasmussen over this period and included both of the Economist/YouGov internet panel surveys).


[Table: Strong approval and disapproval of Obama by pollster, second half of July 2009]

The table shows quite a lot of variation in the "strong" categories. Strong approval varies from a low of 25% to a high of 40%, while strong disapproval varies from 28% to 39%. If we calculate Rasmussen's index, it ranges from a low of -9 (Rasmussen) to a high of +10 (ABC/Washington Post). Why the variation? Differences in the population interviewed appear to explain some of it -- with samples of "likely voters" yielding bigger "strong disapproval" percentages -- but differences in survey mode (whether respondents were interviewed by telephone with live interviewers, by telephone with an automated method, or over the internet) and question format may have been important too. The point is that the "true numbers" regarding intense approval and disapproval are lost in a fog of methodology.

But regardless of which poll you believe, all have shown a similar trend in strong approve and disapprove numbers since the spring. Four of the five pollsters also conducted surveys in March or April, and all of these showed single-digit declines in Obama's strong approval rating and similar single-digit increases in strong disapproval.

[Table: Change in strong approval and disapproval of Obama since March/April 2009, by pollster]

If I average results for these four pollsters in both time periods (giving each pollster equal weight), we go from an index of +12 in March/April (37% strong approve, 25% strong disapprove) to zero now (31% strong approve, 31% strong disapprove).
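
For anyone who wants to check the arithmetic, a minimal sketch; the inputs are the equal-weight averages just described:

```python
def approval_index(strong_approve, strong_disapprove):
    """Rasmussen-style index: percent strongly approving minus
    percent strongly disapproving, in points."""
    return strong_approve - strong_disapprove

print(approval_index(37, 25))  # March/April average: +12
print(approval_index(31, 31))  # late July average: 0
```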

So what do these numbers tell us about the state of public opinion regarding the Obama administration? Intensity of approval is moving in the same basic direction as overall approval. Obama's ratings are trending downward either way. As Bolger points out, "this is NOT a case where voters have been moving only in the middle -- from somewhat approve to somewhat disapprove."

Also, and perhaps more important, as of late July there were about as many Americans who strongly approved of Obama's performance as strongly disapproved. In other words, there are about as many Americans thrilled with his performance as angry about it, although the balance may tip slightly toward the negative among habitual voters. Whether you see that result as implying a glass half empty or half full for Obama probably depends on whether you are one of the thrilled or one of the angry.   

PS: All of the above concerns Obama's overall approval. Intensity of opinion on health care reform is probably different. Note for example that while today's CNN survey shows more in favor (50%) than opposed (45%) to "Barack Obama's plan to reform health care" (the most positive result by far of the last few weeks), the numbers look different when CNN presses for intensity of opinion: Twenty-three percent (23%) are strongly in favor, but far more (33%) are strongly opposed.


US: National Survey (Zogby 7/31-8/3)



Zogby
7/31-8/1/09; 1,005 likely voters, 3.2% margin of error
Mode: Live telephone interviews
(Zogby release)

National

Obama Job Approval
53% Approve, 38% Disapprove (chart)

Obama Favorable Rating
62% Favorable, 37% Unfavorable (chart)

State of the Country
47% Right Direction, 46% Wrong Track (chart)


US: National Survey (DemCorps 7/22-26)


Democracy Corps (D)
7/22-26/09; 1,000 likely voters, 3% margin of error
Mode: Live telephone interviews
(Democracy Corps release)

National survey

Favorable / Unfavorable

Obama: 56 / 33 (chart)
The Democratic Party: 43 / 40
The Republican Party: 30 / 48

From what you have seen and heard so far about Barack Obama's policies and goals for the country, would you say that you support or oppose his policies and goals?

57% Support, 38% Oppose

Party ID
40% Democrat, 32% Republican, 28% independent (chart)


VA: Job Approval, Obama Birthplace (PPP 7/31-8/3)


Public Policy Polling (D)
7/31-8/3/09; 579 likely voters, 4.1% margin of error
Mode: IVR
(PPP release)

Virginia

Job Approval / Disapproval
Pres. Obama: 42 / 51 (chart)
Gov. Kaine: 42 / 40 (chart)
Sen. Warner: 56 / 32 (chart)

Do you think Barack Obama was born in the United States?

53% Yes, 24% No


US: Abortion Attitudes (Gallup 7/17-19)


Gallup
7/17-19/09; 1,006 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup story)

National

With respect to the abortion issue, would you consider yourself to be pro-choice or pro-life?

47% Pro-Life
46% Pro-Choice

Do you think abortions should be legal under any circumstances, legal only under certain circumstances, or illegal in all circumstances?

21% Legal under any
57% Legal under certain
18% Illegal in all


US: Health Care (CNN 7/31-8/3)


CNN / Opinion Research Corporation
7/31-8/3/09; 1,136 adults, 3% margin of error
Mode: Live telephone interviews
(CNN article)

National

From everything you have heard or read so far, do you favor or oppose Barack Obama's plan to reform health care?

50% Favor, 45% Oppose (chart)

Would you rather have [treatment] decisions made by people who work for insurance companies, or people who work for the government?

40% Insurance Companies, 40% Government

Half sample: Do you think it is or is not necessary to make major structural changes in the nation's health care system in order to make sure that all Americans have health insurance?

77% Necessary, 21% Not necessary

Half sample: Do you think it is or is not necessary to make major structural changes in the nation's health care system in order to reduce health care costs?

74% Necessary, 23% Not necessary


US: Health Care (Quinnipiac 7/27-8/3)


Quinnipiac
7/27-8/3/09; 2,409 registered voters, 2% margin of error
Mode: Live telephone interviews
(Quinnipiac release)

National

Do you approve or disapprove of the way Barack Obama is handling health care?

39% Approve, 52% Disapprove (chart)

Do you think President Obama's health care plan would improve the quality of health care in the nation, hurt the quality of health care in the nation, or not make a difference?

39% Improve
41% Hurt
14% No difference

Do you think President Obama's health care plan will improve the quality of health care you receive, hurt the quality of health care you receive, or not make a difference?

21% Improve
36% Hurt
39% No difference

Do you think President Obama's health care plan would help the economy, hurt the economy, or not make a difference?

34% Help
44% Hurt
16% No difference

Do you think President Obama's health care plan would increase your health care costs, decrease your health care costs, or not make a difference?

42% Increase
18% Decrease
33% No difference


Pile on Polling 'Outliers'

Topics: Outliers Feature

Walter Shapiro ponders "pile on polling" (via Political Wire)

Tom Edsall sees Republicans returning to a "white voter strategy."

Gallup finds increased consumer confidence (in 2009 vs 2008) in all states.

Greg Sargent reports on an internal DCCC health care message testing survey.

Michael Shear reports on Obama's internal polling on health care.

Theresa Poulson reports by video on the grassroots health care message war (featuring comments from a mysterious pollster).

Gerald Seib assesses America's schizophrenic mind set on health care.

Ruy Teixeira rounds up polling on health care reform (via Cohn).

Ezra Klein considers the problem with seniors and health care reform.

Bill McInturff and Alex Bratty argue that Americans don't like what they see of health care reform.

Mark Mellman shares data from a survey on reform of long term care.

David Paul Kuhn sees equivalence between birthers and 9/11 conspiracists; Nate Silver sees differences.

Dave Weigel finds more "birther" believers in the South (via Kos).

David Hill asks if region bashing is the last remaining "politically correct" prejudice.

Chris Bowers believes in the generic ballot.

Chris Weigant updates his Obama poll watch.

John Sides and Andrew Gelman ponder how to measure Michelle Malkin's conservatism.

Tom Jensen points to some "hard data" on an enthusiasm gap in NJ & VA.

The Field poll assesses (pdf) long term shifts in California's demographics.

Patrick Murray ponders a big divergence between likely and registered voters in NJ.

Glen Bolger sees intensity beginning to work against Obama.

Mike Mokrzycki discovers a common problem for newspapers and Twitter.

STATS notes that a third of Americans like to take naps.


More On Response Rates

Topics: Ann Selzer , Evans Witt , Huffpollstrology , Response Rates , Robert Groves

Following up on yesterday's column and my additional comments on the Huffington Post's requests for response rate data from pollsters last fall, I want to provide a little more of a users' guide to response rates with a focus on how hard it can be to (a) calculate a response rate and (b) make valid comparisons across pollsters.

Generally speaking, the response rate alone does not tell us very much and, as such, is a poor indicator of the overall quality of the survey. That's one point that Evans Witt, the president of the National Council on Public Polls (NCPP), made in a comment worth reading on my post yesterday:   

NCPP does not believe any single number is the perfect guideline to judge a poll. That is why NCPP calls for the release of a substantial amount of information when a poll is the subject of public debate. With all the required information in hand, the informed consumer can judge a given poll and evaluate it against other surveys.

My column referenced my 2008 interview on the subject with Robert Groves, then a University of Michigan professor, now director of the U.S. Census. For those not familiar with his career, Groves is one of the most widely respected authorities on survey methodology and non-response bias in surveys. I asked him whether we should consider it a problem that political surveys have response rates at or below 20%. His answer:

The key to answering that question is to determine whether the non-respondents are different from the respondents. What we do know from about ten years of research around the world is that [the response] rate, that 20% you cited, isn't itself informative to that answer. We don't know what a 20% response rate means with regard to the difference between respondents and non-respondents.

We do know, secondly, that in a single survey, some estimates are subject to large non-response biases -- that is, the respondents are really quite different from the non-respondents -- and others in the same survey are subject to no bias. So if you just know the response rate, you can't answer the question.

As always, knowing something about the non-respondents is hard, since we don't interview them. Groves goes on to talk about the importance of including "auxiliary variables" on the sample as a way to "get a purchase on an answer" of how respondents differ from non-respondents. For more detail, listen to the full interview or see his (free) article in the 2006 Public Opinion Quarterly special edition on non-response bias.

Next, on the difficulty of calculating a response rate, here's the short version: In the late 1990s, the American Association for Public Opinion Research (AAPOR) made an effort to standardize the computation of response rates. The document they published -- Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys -- is now 50 pages long. It includes six different ways to calculate a response rate, four ways to calculate a cooperation rate and three ways to calculate a refusal rate.

Now here's the long version (and if it gets too hopelessly wonky, just skip to the paragraph that starts "Let's go back to the Huffpollstrology feature"):

Why so many formulas? The underlying idea is not complicated. Suppose, hypothetically, you have a perfect list of some population you want to sample from and you draw a simple random sample of 1,000 names from that list and then attempt to interview everyone on the sampled list. The response rate would be the number of respondents to the survey divided by the 1,000 names sampled.

In the real world of telephone surveys, however, this calculation gets a lot more complicated. First, should the numerator include partial interviews -- those that begin the survey but do not complete it? And where should the pollster draw the line between those that refuse to participate but answer a question or two before hanging up, and those that are otherwise cooperative but have to get off the phone before the interview is complete? The answer depends, in part, on what the pollster does with data gathered from partial interviews. If they throw out all partial interviews, the answer is simple: Exclude them from the numerator (which makes the response rate lower). If they use partial data in their results, the decision of how to calculate the response rate gets a lot more complicated (see pp. 12-13 of the AAPOR Standard Definitions).

The denominator of the formula is even more complicated, especially for telephone surveys that use randomly generated telephone numbers, the so-called random digit dial (RDD) methodology. In any RDD sample, some non-trivial percentage of the randomly generated numbers are non-working but will -- due to the vagaries of telephony -- either produce a busy signal or ring endlessly the way a working phone would if no one answered. Some unknown percentage of the "no answers" are business numbers that can be excluded from the response rate calculation because they are not eligible for the survey.

There is one very accurate way to determine which numbers are working or otherwise eligible for the survey: Dial each number over and over for a period of months, not days or weeks. Eventually, you end up identifying 99% or more of the working numbers. But virtually all political opinion surveys call for just a few days, so for most of the polls we care about, the pollster is left with some sampled "no answer" numbers whose status is uncertain. The AAPOR Standards document resolves this issue by allowing for three different calculations: A response rate that includes all of the mystery "no-answer" numbers (and is thus lower than the true number), a response rate that includes none of the mystery numbers (and is thus higher than the true number), and a response rate that involves an estimate of the percentage of eligible numbers from among the "no answers" (and -- surprise, surprise -- pollsters differ on the best way to estimate this percentage).

Put these variables together and you get six different official AAPOR response rates, labeled as RR1 through RR6 in the table below:


[Table: AAPOR response rate formulas RR1 through RR6]

The main point, if you're having trouble following all the detail, is that AAPOR's Response Rate #1 (RR1) is the most conservative way to calculate the rate (i.e. it produces the lowest response rate, all else being equal) and RR6 is the most "liberal" (i.e. produces the highest rate). The two rates that involve estimates of the eligible "no answers," RR3 and RR4, usually produce rates somewhere in the middle.
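
For those who want the formulas in runnable form, here is a hedged sketch of three of the six rates as I read the Standard Definitions document -- the disposition counts below are invented, and anyone computing a rate for publication should work from AAPOR's document itself:

```python
def rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: completes only, with every
    unknown-eligibility case in the denominator (most conservative)."""
    return I / (I + P + R + NC + O + UH + UO)

def rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: unknowns discounted by e, the estimated
    share of unknown cases that are actually eligible."""
    return I / (I + P + R + NC + O + e * (UH + UO))

def rr6(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 6: completes plus partials, with unknowns
    dropped entirely (most "liberal")."""
    return (I + P) / (I + P + R + NC + O)

# Invented dispositions: 800 completes, 50 partials, 2,400 refusals,
# 1,200 non-contacts, 150 other, 1,400 unknown-eligibility numbers.
args = dict(I=800, P=50, R=2400, NC=1200, O=150, UH=1000, UO=400)
print(round(rr1(**args), 3))         # ~0.133, the lowest
print(round(rr3(e=0.4, **args), 3))  # ~0.155, in between
print(round(rr6(**args), 3))         # ~0.185, the highest
```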

Now, let's consider another reason why the response rate alone is a poor indicator of survey quality. The overall response rate is a combination of two things: How many sampled units the pollster is able to contact (the contact rate) and how many of those live human beings are willing to be interviewed once contacted (the cooperation rate, or its converse, the refusal rate). Pollsters know how to boost the response rate. That's easy: just dial over and over again for a period of weeks. But how useful is a political campaign survey with a field period of a month or more? So focusing on the response rate can be deceiving.
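
That decomposition is roughly multiplicative among eligible numbers. A tiny sketch, with invented rates chosen to land near the 20% figure quoted earlier:

```python
# Among eligible numbers, roughly:
#   response rate ~= contact rate x cooperation rate
contact_rate = 0.60      # share of eligible households ever reached
cooperation_rate = 0.33  # share of reached households completing an interview
print(round(contact_rate * cooperation_rate, 2))  # ~0.2, a 20% response rate
```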

Also consider some of the other methodological differences that might cause a response rate to go higher or lower, as noted by the ABC News summary of its methodology and standards:

It cannot be assumed that a higher response rate in and of itself ensures greater data integrity. By including business-listed numbers, for instance, ABC News increases coverage, yet decreases contact rates (and therefore overall response rates). Adding cell-only phones also increases coverage but lessens response rates. On the other hand, surveys that, for instance, do no within-household selection, or use listed-only samples, will increase their cooperation or contact rates (and therefore response rates), but at the expense of random selection or population coverage. (For a summary see Langer, 2003, Public Perspective, May/June: 16-8.)

Let's go back to the Huffpollstrology feature. In response to the Huffington Post requests, some pollsters specified the AAPOR formula they used; others did not. But even if all used the same response rate formula, we would still get variation in the rates depending on how each pollster drew their sample, how they selected individuals within each household, how long they stayed in the field, and how they interpreted AAPOR's guidelines for coding the calls they made. Consider also that pollsters that use an automated method have much less ability to "resolve" the status of the calls they make. They can determine whether a human being answers the phone, but know little about why some choose to hang up.

So trying to make comparisons across pollsters is frustrating at best. When response rates are generally in the same range, they are also a lousy way of trying to tell "good" pollsters from "bad."

Finally, consider the comment I received via email from Ann Selzer, the Iowa-based pollster best known for conducting the Des Moines Register's Iowa Poll:

If low response rates were a big problem, no pollster could consistently match election outcomes. In the end, we have a good test of what matters more and what matters less. How one defines likely voters is much more important than the current (albeit seemingly low) response rates.

What pollsters do to minimize the potential for response bias, and how that intersects with how they select likely voters, is something I'm going to take up in subsequent posts.


US: Gates Arrest (CNN 7/31-8/3)


CNN / Opinion Research Corporation
7/31-8/3/09; 1,136 adults, 3% margin of error
226 African-Americans, 6.5% margin of error
773 non-Hispanic whites, 3.5% margin of error
Mode: Live telephone interviews
(CNN release)

National

Do you approve or disapprove of how Obama has handled race relations since he became President?

61% Approve, 34% Disapprove
Black respondents: 92 / 6
White respondents: 56 / 40

Now I have some questions about an incident in Massachusetts in which a professor named Henry Louis Gates, who is black, was arrested for disorderly conduct at his home by a police officer named James Crowley, who is white. Based on what you have heard or read about this incident, do you think the police officer, James Crowley, acted stupidly or don't you think so?

33% Acted stupidly, 54% Did not act stupidly
Black respondents: 59 / 29
White respondents: 29 / 58

And based on what you have heard or read about this incident, do you think the professor, Henry Louis Gates, acted stupidly or don't you think so?

53% Acted stupidly, 30% Did not act stupidly
Black respondents: 44 / 43
White respondents: 58 / 27


NJ: Christie 42 Corzine 35 (GSG 7/29-30)


Global Strategy Group (D) / NJ Democratic Assembly Campaign Committee
7/29-30/09; 604 likely voters, 4% margin of error
Mode: Live telephone interviews
(GSG toplines, analysis)

New Jersey

Favorable / Unfavorable
Chris Christie (R): 47 / 29
Jon Corzine (D): 42 / 51 (chart)

2009 Governor
Christie 42%, Corzine 35%, Daggett (i) 6% (chart)


VA: McDonnell 51 Deeds 37 (PPP 7/31-8/3)


Public Policy Polling (D)
7/31-8/3/09; 579 registered voters, 4.1% margin of error [Clarification: PPP interviewed registered voters in households of those who voted in general elections between 2005 and 2007]
Mode: IVR
(PPP release)

Virginia

Favorable / Unfavorable
Creigh Deeds (D): 43 / 32
Bob McDonnell (R): 54 / 26

2009 Governor
McDonnell 51%, Deeds 37% (chart)

Party ID
32% Democrat, 35% Republican, 33% Other


US: Clinton 51 Palin 39 (Rasmussen 7/30-31)


Rasmussen
7/30-31/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)

National

2012 President
Hillary Clinton 51%, Sarah Palin 39%


NJ: Christie 50 Corzine 36 (Monmouth 7/29-8/2)


Monmouth University / Gannett
7/29-8/2/09; 723 registered voters, 3.7% margin of error
484 likely voters, 4.5% margin of error
Mode: Live telephone interviews
(Monmouth University release)

New Jersey

Favorable / Unfavorable

Jon Corzine (D) (chart)
Registered Voters: 39 / 46
Likely Voters: 37 / 53

Chris Christie (R)
Registered Voters: 42 / 30
Likely Voters: 49 / 33

Chris Daggett (i)
Registered Voters: 9 / 9
Likely Voters: 11 / 9

Gov. Corzine Job Approval (chart)
Registered Voters: 38% Approve, 54% Disapprove
Likely Voters: 35% Approve, 58% Disapprove

2009 Governor (chart)
Registered voters: Christie 43%, Corzine 39%, Daggett 4%
Likely voters: Christie 50%, Corzine 36%, Daggett 4%


VA poll backs Kos result on Obama birth

Topics: birth , birth certificate , Daily Kos , Obama , poll , Public Policy polling , Virginia

Tom Jensen of Public Policy Polling reported on Twitter today that a new poll his firm conducted finds that only 32% of Virginia Republicans think Obama was born in the US, while 41% think he was not and 27% are not sure. These numbers are even worse than the national results from the Daily Kos/Research 2000 poll released on Friday, which found that 28% of Republicans think Obama is not a citizen and 30% are not sure.

Here is a bar chart that combines results from the two polls to compare Virginia Republicans with Republicans and the public nationally (click it for a larger version):

[Chart: Belief that Obama was not born in the U.S. -- Virginia Republicans vs. Republicans and all adults nationally]

Pretty depressing stuff.

(Cross-posted at brendan-nyhan.com)


New Charts: Romney and Huckabee Favorable Ratings


A few weeks ago, we began charting Sarah Palin's Favorable Rating. Fair's fair, so today we've added two new charts: national favorable ratings for Mitt Romney and Mike Huckabee. Please note that these are ratings from samples of adults (or of all registered or likely voters), not among Republicans or likely Republican primary voters.

As always, be sure to check out all our state and national trends, and if you have any suggestions for new national or state charts you'd like to see added next, let us know by e-mailing us or leaving a comment below.


US: Health Care (Rasmussen 8/1-2)


Rasmussen
8/1-2/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)

National

How do you rate the healthcare you receive?

74% Excellent / Good
24% Fair / Poor

How do you rate the U.S. health care system?

48% Excellent / Good
49% Fair / Poor

Are you willing to pay higher taxes so all Americans can be provided with health insurance?

28% Yes
60% No


States: Party ID (Gallup Jan.-June '09)


Gallup
January-June 2009; 160,236 adults, 3-7% margin of error (in individual states)
Mode: Live telephone interviews
(Gallup article)

U.S. States

Gallup:

Only four states show a sizeable Republican advantage in party identification, the same number as in 2008. That compares to 29 states plus the District of Columbia with sizeable Democratic advantages, also unchanged from last year.


'Huffpollstrology' and Response Rates

Topics: Arianna Huffington , Huffpollstrology , National Journal column , Response Rates

My National Journal column for this week, on a largely overlooked Huffington Post feature ("Huffpollstrology") that asked national media pollsters for their response and refusal rates last fall, is now posted online.

The subject of response rates is a tough one to try to approach with an 800-900 word column. Inevitably, something important is left out. With more space and time I would have worked in mention of the two experimental projects on response rates conducted by the Pew Research Center in 1997 and 2003. These involved parallel surveys with identical questionnaires, but one used Pew's standard procedures and the second used a much more rigorous methodology to obtain the highest response rate possible. In both years, the difference in results across a wide variety of demographic traits and political attitudes was negligible. In addition to the Pew write-ups, both experiments led to articles in Public Opinion Quarterly (in 2000 and 2006).

For those looking to read further on non-response bias, Public Opinion Quarterly also devoted a special issue to the subject that remains free to all online. I summarized it here.

I also gathered all of the responses from pollsters that Huffington Post ran last fall. I will post those either later today or tomorrow, along with some explanation of the guidelines on how to calculate and report response rates published by the American Association for Public Opinion Research (AAPOR).

Finally, I sent Arianna Huffington some questions for the column that she chose to answer via email. I am reproducing them here in full:

Q: Generally, what did you learn from queries to pollsters about their response rates made by the Huffpollstrology project? Were you surprised at the level of cooperation you received (or the lack thereof) or by the information provided?

Plummeting response rates have been the dirty little secret of the polling industry for years now. They've often dropped below 20 percent, making the core polling principle of "equal probability of selection" something of a joke. But polling companies often refuse to release these numbers - and when they do release them, they often bury them at the end of a poll in tiny print. So when we launched HuffPollstrology, we decided that we would also put response rates front and center. We wanted to delve into the gray area of how polls are conducted.

Q: Do you think the response rate information that pollsters provided was helpful to Huffington Post readers? Why/why not?

Absolutely. The media continue to let polls dominate their political coverage - and yet are reluctant to let the public know how much skepticism it should bring to its consumption of polling results. Not just because of results-skewing response rates but also variables like undecided voters and margins of error. So it was important to remind readers that poll results need to be taken with a grain of salt, not treated like they were just brought down from the mountaintop by Moses.

Q: Do you plan to repeat this project again in the future and if so, what if anything might you do differently?

HuffPollstrology was our way of putting the media's obsession with polls into what we consider the proper context -- that is, alongside astrology and betting lines. Asking for and highlighting response rates was only one aspect of the project. Moving forward, given the media's addiction to polls and polling, we will continue digging deeper into response rates and other polling methodology - and, sometime before the 2010 election, we'll decide whether to bring HuffPollstrology back.



 
