September 6, 2009 - September 12, 2009


US: Health Care (Rasmussen 9/10-11)

9/10-11/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)


Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the congressional Democrats?
47% Favor, 49% Oppose (chart)

If the health care reform plan passes, will the quality of health care get better, worse, or stay about the same?
28% Better, 46% Worse, 20% Same

If the health care reform plan passes, will the cost of health care go up, go down, or stay about the same?
47% Up, 23% Down, 22% Same

Chart Intervention 'Outliers'

Topics: Outliers Feature

Alan Binder does a dial-test group of Obama's speech for the DNC.

Democracy Corps reports on their speech dial test too.

Resurgent Republic summarizes their focus groups, urges Republicans to hit Democrats in Congress more than Obama.

Gary Langer nitpicks the health care data in Obama's speech.

Nate Silver sees potential for Obama improvement among Democrats.

Mark Hemmingway ponders public ignorance on the public option.

Chris Cillizza speculates about a re-engergized Democratic base.

Larry Sabato and Isaac Wood foresee a GOP revival in 2010.

Mark Mellman faults the economy for the Obama slide.

David Hill assesses the Texas GOP primary for governor.

Nicole McClesky finds growing skepticism of government solutions.

Sarah Dutton reviews CBS polling from 1993, sees little impact from Clinton health care speech.

The Human Rights Campaign surveys its members.

BusinessForward surveys business leaders on health care.

Tom Webster predicts Sprint's Anymobile plan will change the game for telephone pollsters.

And chart makers should not miss this classic "chart intervention" (via Lundry).

CT: 2010 Sen (Rasmussen 9/10)

9/10/09; 500 likely voters, 4% margin of error
Mode: IVR
(Rasmussen release


Job Approval / Disapproval
Pres. Obama: 59 / 39 (chart)
Gov. Rell: 55 / 43 (chart)

Favorable / Unfavorable
Chris Dodd (D): 40 / 59 (chart)
Rob Simmons (R): 53 / 32
Tom Foley (R): 33 / 35
Sam Caligiuri (R): 32 / 32
Peter Schiff (R): 27 / 35

2010 Senate (trends)
Simmons 49%, Dodd 39% (chart)
Foley 43%, Dodd 40%
Dodd 43%, Caligiuri 40% (chart)
Dodd 42%, Schiff 40%

CA: Approval Ratings (PPIC 8/26-9/2)

Public Policy Institute of California
8/26-9/2/09; 2,006 adults, 2% margin of error
Mode: Live telephone interviews
(PPIC release)


Job Approval / Disapproval
Gov. Schwarzenegger: 30 / 61 (chart)
Pres. Obama: 63 / 32 (chart)
Sen. Feinstein: 54 / 32 (chart)
Sen. Boxer: 53 / 32 (chart)
Speaker Pelosi: 49 / 40

SC-02: Rep. Wilson (PPP 9/10-11)

Public Policy Polling (D)
9/10-11/09; 747 likely voters, 3.6% margin of error
Mode: IVR
(PPP release)

South Carolina 2nd Congressional District

Rep. Joe Wilson Job Approval
41% Approve, 47% Disapprove

2010 House
Rob Miller (D) 44%, Wilson 43%

Do you approve or disapprove of Joe Wilson's actions at Barack Obama's speech to Congress on Wednesday night?
29% Approve, 62% Disapprove

Does Joe Wilson calling the president a liar make you more or less likely to vote to re-elect

35% More likely, 49% Less likely, 16% No difference

Do you think that Barack Obama was lying when he said his health care plan would not cover illegal immigrants?
42% Yes, 46% No

Crowdsourcing and the next Netflix Prize

Topics: crowdsourcing , Netflix prize , open source

An interesting development on the polling front: Tom Jensen of Public Policy Polling is open-sourcing his polls. Yesterday he asked for suggestions on which state to poll next and posted a draft questionnaire for Joe Wilson's district for comment.

This approach, which I think is brilliant, raises a more general question: where's the innovation in content creation among political organizations? Beyond MoveOn.org, very few organizations in politics take advantage of the creativity and intelligence of their supporters.

Along similar lines, why doesn't a political organization like one of the major parties offer up some money in a competition to, say, predict who will respond favorably to solicitations for money, votes, etc. using anonymized data? The Netflix Prize, which will be awarded on the 21st, drew a vast amount of effort from the machine learning community, and there's now a company that will provide infrastructure for similar contests. Who's going to be the innovator?

(Cross-posted to brendan-nyhan.com)

NC: 2010 Sen (PPP 9/2-8)

Public Policy Polling (D)
9/2-8/09; 600 likely voters, 4% maargin of error
Mode: IVR
(PPP release)

North Carolina

Job Approval / Disapproval
Sen. Burr: 38 / 32 (chart)

2010 Senate (trends)
45% Burr, 38% Generic Democrat
42% Burr, 30% Cal Cunningham
41% Burr, 34% Bob Etheridge
43% Burr, 29% Kevin Foy
43% Burr, 27% Kenneth Lewis
42% Burr, 31% Elaine Marshall
42% Burr, 31% Dennis Wicker

US: National Survey (Kos 9/7-10)

Daily Kos (D) / Research 2000
9/7-10/09; 2,400 adults, 2% margin of error
Mode: Live telephone interviews
(Kos release)


Favorable / Unfavorable
Barack Obama: 56 / 39 (chart)
Nancy Pelosi: 33 / 59
Harry Reid: 30 / 59
Mitch McConnell: 18 / 64
John Boehner: 14 / 62
The Democratic Party: 40 / 51
The Republican Party: 22 / 68

State of the Country
39% Right Direction, 55% Wrong track (chart)

CO: 2010 Sen (Rasmussen 9/9)

9/9/09; 500 likely voters, 4.5% margin of error
Mode: IVR
(Rasmussen release)


Favorable / Unfavorable
Sen. Michael Bennet (D): 41 / 34 (chart)
Ken Buck (R): 35 / 20
Ryan Frazier (R): 26 / 24

2010 Senate (trends)
43% Bennet, 37% Buck
40% Frazier, 39% Bennet

US: Health Care (CBS 9/10)

CBS News
9/10/09; 648 adults first interviewed 8/27-31, 4% margin of error
Mode: Live telephone interviews
(CBS story)


Obama Job Approval on Health Care
52% Approve, 38% Disapprove
(8/27-31: 40% Approve, 47% Disapprove)

Has President Obama clearly explained his plans for health care reform?
42% Yes, 43% No
(8/27-31: 33% Yes, 61% No)

Would Congress' current health reforms help or hurt you personally
22% Help, 27% Hurt, 42% No effect
(8/27-31: 19% Help, 30% Hurt, 45% No effect)

US: Health Care (Rasmussen 9/9-10)

9/9-10/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)


Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the congressional Democrats?
46% Favor, 51% Oppose (chart)

If the health care reform plan passes, will the quality of health care get better, worse, or stay about the same?
31% Better, 46% Worse, 16% Same

If the health care reform plan passes, will the cost of health care go up, go down, or stay about the same?
47% Up, 23% Down, 21% Same

US: Terrorism (CBS 8/27-31)

CBS News
8/27-31/09; 1,097 adults, 3% margin of error
Mode: Live telephone interviews
(CBS: story, results)


Do you think the policies of the Obama Administration have made the United States safer from terrorism, less safe from terrorism, or have the policies of the Obama Administration not affected the U.S.' safety from terrorism?
25% Safer
23% Less safe
42% No effect

How likely do you think it is that there will be another terrorist attack in the United States within the next few months
32% Very/Somewhat likely, 62% Not very/Not at all likely

In general, do you think the United States is adequately prepared to deal with another terrorist attack, or not?
50% Prepared, 44% Unprepared

US: Terrorism (CNN 8/28-31)

CNN / Opinion Research Corporation
8/28-31/09; 1,010 adults, 3% margin of error
Mode: Live telephone interviews
(CNN release)


How likely is it that there will be further acts of terrorism in the United States over the next several weeks?
34% Very/Somewhat, 64% Not too/Not at all

How much confidence do you have in the Obama administration to protect U.S. citizens from future acts of terrorism?
27% A great deal
36% A moderate amount
19% Not much
17% None at all

US: Terrorism (Gallup 8/31-9/2)

8/31-9/2/09; 1,026 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup release)


Looking ahead for the next few years, which political party do you think will do a better job of protecting the country from international terrorism and military threats?
49% Republican Party, 42% Democratic Party

Which political party do you think will do a better job of keeping the country prosperous?
50% Democratic, 39% Republican

Shapiro: Will Obama's Speech Increase Public Support for Health Care Reform?

Topics: Barack Obama , Brandon Rotttinghaus , Health Care Reform

Robert Y. Shapiro is a professor of political science at Columbia University who specializes in public opinion, policymaking, political leadership, and mass media. He is a member of the board of directors of the Roper Center for Public Opinion Research.

The polling and pundit world is now looking to see if President Obama's speech will rally public support for his health care reform plan. In addition to looking at the stream of polls that will now follow, I direct your attention, hot off the presses, to the latest issue of the journal Political Communication. A timely article by Brandon Rottinghaus provides a broader political science view on presidential efforts to influence public opinion. What we know from George Edwards' book, On Deaf Ears: The Limits of the Bully Pulpit (Yale, 2003), is that it is difficult for presidents to succeed at influencing public opinion. However, Rottinghaus's article provides evidence for why Obama correctly chose to take his best shot in a nationally televised speech.

The article uses "a comprehensive data set spanning 1953 to 2001," to examine "several strategic communications tactics through which the presidents might influence temporary opinion movements." Specifically, it finds that "presidential use of nationally televised addresses is the most consistently effective strategy to enhance presidential leadership, but the effect is lessened for later serving presidents." In contrast, other strategies such as those involving domestic travel do not have positive effects and "televised interactions"--press conferences and the like - tend to have negative effects. While some may not be surprised with these findings, it is good to have empirical evidence to wrestle with.

But getting to the point, how will this now play out for Obama? My sense is that Obama's speech will come out on or above average in impact, though there is a question of what its half-life will be. What I see as most important, however, is not the new polls that we will soon see (if they are not out already). Putting Rottinghaus' article aside, what will count most is not what the public thinks at this moment, but rather the extent to which Democratic leaders unite around Obama's plan (which may well be close to Baucus'?); it is this elite consensus that will enable any positive effect of the speech to last or even widen. This assumes that the consensus will be more salient and striking than any continued Republican opposition.

Echoing the famous political scientist, V.O. Key, what matters more than the immediate polls is political leadership more broadly. The speech itself is the start of what could be a stronger consensual message than we have seen to date from Democratic and potentially other political leaders. The relevant public opinion research comes from Richard Brody's book on presidential leadership, (Assessing the President: The Media, Elite Opinion, and Public Support. Stanford, 1991), John Zaller's seminal book on public opinion (The Nature and Origins of Mass Opinion, Cambridge, 1992), and what Ben Page and I examined (The Rational Public. Chicago, 1992).

Larry Jacobs and I (Politicians Don't Pander, Chicago, 2000) looked at the President Clinton's 1993-94 health care reform effort from this perspective. What happened there was the Democratic leaders never supported any Clinton plan, and this, along with the strong Republican leadership opposition caused the public to become apprehensive and turn against health care reform. This happened much earlier in the legislative process than what occurring now, as the Clinton plan got to Congress later in Clinton's first term. In contrast, we are at that same juncture now --- however, earlier in Obama's first term but later in the legislative process, as there are now actual bills that have made it through congressional committees. Clinton never made it that far. The Democrats now have a better chance than Clinton did, since at this moment they are poised to unite around a president's plan. But if they don't do that quickly, then it's 1994 all over again. If by all appearances they come together, they can prevent public support from tapering off and very likely increase it.

In the end, Obama may have timed his entry into the fight just right--it's earlier than when Clinton entered the actual legislative fray in 1994--and this may have been the only way he could have gotten a major health care reform bill through. Given the financial crisis, the stimulus bill, and the two wars, he may well have been stopped in his tracks earlier on--without the health care reform bills making it through multiple committees as they have. He needed to enter the fight when he could rally congressional support in both houses, with drafted legislation in hand and already substantially debated. Of course we will never know since as we can't replay history. For now, the main point is don't just watch the polls-watch the leaders. The public will not just be responding to Obama but to the extent to which he has liberal, blue dog, and any (albeit unlikely) Republican leadership support.

CO: 2010 Gov (Rasmussen 9/9)

9/9/09; 500 likely voters, 4% margin of error
Mode: IVR
(Rasmussen release)


Job Approval / Disapproval
Gov. Ritter: 49 / 49 (chart)

Favorable / Unfavorable
Scott McInnis (R): 42 / 22
Bill Ritter (D): 47 / 42
Josh Penry (R): 28 / 25

2010 Governor (trends)
McInnis 44%, Ritter 39%
Ritter 41%, Penry 40%

Riehle: Just Don't Do It

Topics: CNN , Instant Reaction Polls , Speech Reaction

Today's Guest Pollster article comes from Thomas Riehle, a Partner of RT Strategies.

Technological capabilities can become temptations to conduct research studies that add nothing to our knowledge of public opinion, just because we can. Get thee behind me, Satan!

For example, it would be no problem, technologically, to display squiggly lines with the moment-by-moment reactions of a panel of viewers to the blathering of the talking heads on news show panels. The Onion demonstrates what a mess that would be, in a parody entitled "New Live Poll Allows Pundits to Pander to Viewers in Real Time."

What would happen if we let the talking heads see whether viewers at home agreed or disagreed with what they were saying, "using the Insta-Poll Tracker on our web site"? The talking heads would become self-conscious about the direction of their own squiggly line and start tailoring their statements...word by word...to make the squiggly line go up.

Insta-polls like September 9th's CNN/Opinion Research Corporation poll of adults who watched President Barack Obama's address to Congress may have a similar effect on poll respondents. Mark Blumenthal correctly points out the age-old problem of such polls--the partisan make-up. Last night, the audience for this address was heavily weighted with Obama supporters rallying to watch their leader, supplemented with a few civic-minded Americans who would watch any Presidential address, regardless of their own partisanship. Of the 427 adults in this study, all of them interviewed September 5-8 in advance of the speech, and all of whom indicated both an intention to watch the speech and a willingness to be re-interviewed after the speech, 18% were Republicans, 45% Democrats. These kinds of post-speech poll samples always skew heavily in favor of the speaker. Pollster.com's report on this poll last night squeezes out what knowledge can be gleaned by comparing the "bump" among this group of speech watchers to the bump registered among similarly situated groups of speech watchers in the past.

The problem with this kind of insta-poll may be exacerbated when the study is designed, as this one was, to compare the pre-speech responses of speech watchers to opinions after the speech. In the pre-speech survey, I would guess that respondents would strive to express their opinions as forthrightly as possible, as most survey respondents do. In the follow-up poll after the speech, however, I am afraid respondents would be like the Onion's self-conscious pundits. They'd be aware that they are about to become as much a part of the story as South Carolina Republican Rep. Joe Wilson who heckled the President. They'd tailor their answers to make their leader look good. Drawing much of a conclusion from their answers would not be any fairer than judging the entire Republican caucus by the boorishness of a few Members.

NJ: Christie 41 Corzine 38 (DemCorps (9/8-9)

Democracy Corps (D)
9/8-9/09; 615 likely voters, 4% margin of error
Mode: Live telephone interviews
(Democracy Corps release)

New Jersey

Favorable / Unfavorable
Jon Corzine (D): 36 / 48 (chart)
Chris Christie (R): 33 / 33
Barack Obama: 55 / 32 (chart)
Chris Daggett: 5 / 11

2009 Governor
41% Christie, 38% Corzine, 10% Daggett (chart)

Model-Based Inference

Topics: Internet Polls , Model-based inference , Opt-in internet polls

In a recent column citing a study by Krosnick, et. al. that "Finds Trouble for Opt-in Internet Surveys" (the same study that Doug Rivers responded to here on Tuesday), ABC News polling director Gary Langer re-issued an earlier challenge to "hit me" with a "reasonable theoretical justification" for opt-in Internet polling: "I welcome any coherent theoretical defense of the use of convenience samples in estimating population values; it's a debate we need to have."

Today, Stanford University political science professor Simon Jackman took a shot at an answer:

Try this: model-based inference is an idea that has been around for a long time, and contrasts quite markedly with design-based inference for data generated by surveys. There is plenty written on this, but I'd suggest starting with a reasonably accessible book on sampling, like Sharon Lohr's Sampling: Design and Analysis. Model-based inference for survey data is discussed in various places, typically in a "starred section" in each chapter (e.g., here's how we can do design of and inference for cluster sampling from the model-based perspective, etc). The references provided by Lohr include important works by Basu and Royall etc. See also the delightful book called Combined Survey Sampling Inference by Ken Brewer -- if you can get your hands on it. Doug Rivers pointed me to this book a year or two ago and it is a treat (as these things go).

As I've said before, as soon as non-response enters the picture we're relying on models (e.g., what variables to use when weighting for non-response) and the "purity" of randomization in the sampling design is starting to fall by the wayside.

Jackman goes on to note that "we've been making use of model-based ideas for decades (e.g., weighting to correct for non-response)." I'll second that. So why is it that pre-election telephone surveys that cut all sorts of methodological corners appear to predict election outcomes as well as those that apply the accepted best practices of what Jackman calls "design-based inference?" It surely has something to do with the "modeling" they apply via survey weights. As users of the data, we need to know more about how those models work and, as per the underlying premise of the Krosnick study, more about the accuracy of the data they produce.

'The Perfect Balanced Sample'

Topics: Health Care Reform , Measurement

I posted this video clip from the British series, "Yes, Prime Minister," a few months ago. However, when a long time reader sent it along this morning, I thought it might bear watching again, especially in light of the arguments in recent weeks over health care reform polling and how easy it can be to "lead" respondents in one direction or another with questions that provide new information. The video exaggerates, obviously, but as this Peter Suderman article conveys, not as much as some of us would like to believe.

And as long as we're on the subject, per Suderman -- yes, until last night, Americans technically had no "Obama plan" to favor or oppose (some will argue that is still true). However, I don't agree with Suderman that simple questions along those lines (whose results we chart) amount to "bad polling." The Pew Research Center tells us that, as of last week, two thirds of Americans are following "the debate over health reform" very closely (40%) or fairly closely (26%). As such, most know that Congress is debating proposals that Obama asked them to pass, even if he left the details to others, and we know that Americans have opinions on this subject. So a simple question about whether Americans generally support these ideas or consider them good or bad is a perfectly reasonable way to ask about them especially if, as in the NBC/Wall Street Journal or Gallup examples, the pollsters offer respondents the option of saying they "do not have an opinion."

NJ: Christie 46 Corzine 38 (Rasmussen 9/9)

9/9/09; 500 likely voters, 4.5% margin of error
Mode: IVR
(Rasmussen release)

New Jersey

Job Approval / Disapproval
Pres. Obama: 53 / 45 (chart)
Gov. Corzine: 40 / 67 57 (chart)

Favorable / Unfavorable
Chris Christie (R): 42 / 52
Jon Corzine (D): 45 / 54 (chart)
Chris Daggett (i): 29 / 26

2009 Governor
Christie 46%, Corzine 38%, Daggett 6% (chart)

IL: Approval, Health Care (Trib/WGN 8/27-31)

Chicago Tribune / WGN / Market Shares Corp.
8/27-31/09; 700 registered voters, 4% margin of error
Mode: Live telephone interviews
(Tribune: story, approval graphic, health care graphic, Quinn story)


Obama Job Approval
59% Approve, 33% Disapprove
Health Care: 42 / 43

Gov. Quinn Job Approval
39% Approve, 26% Disapprove

How much have Obama's economic policies helped employment and jobs?
45% A lot/some, 49% Little/none

What effect will the health care reform plan have on you and your family's health care?
16% Change for the better
35% Change for the worse
40% Stay about the same

Who do you side with on the health care reform debate
48% Obama and the Democrats
28% Republicans in Congress

US: Health Care (Rasmussen 9/8-9)

9/8-9/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)


Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the congressional Democrats?
44% Favor, 53% Oppose (chart)

If the health care reform plan passes, will the quality of health care get better, worse, or stay about the same?
29% Better, 48% Worse, 15% Stay about the same

If the health care reform plan passes, will the cost of health care go up, go down, or stay about the same?
46% Cost of health care will go up
22% Cost will go down
23% Cost will stay the same

Instant Poll Roundup

Earlier today I posted a primer on the instant polls typically released after a major presidential address like the one from Barack Obama tonight. If you have not read it yet, I'd recommend you start there. If you're in a hurry, the short version is that instant response polls measure only speech-watchers, the audience is usually skewed toward the President's fans and historically, presidential addresses rarely move their approval numbers. If nothing else, do not compare the results from these instant measures to any number you see on previously released poll of all adults or likely voters.

That said, I want to provide some links and initial results here.

The first on my radar screen comes from CNN of a survey of people who watched the speech (update: full results now posted). Candy Crowley says, not surprisingly, that the sample "skews heavily Democratic, we think that the Democratic sample in this flash poll is 8 to 10 points higher than in the general population."

  • 72% say yes, Obama clearly stated his health care goals, 26% say no.
  • 56% had a very positive reaction, 21% somewhat positive, 21% negative
  • Support for Obama's health care plans jumped 14 points among speech viewers: from 53% in favor to 67%

More from the just posted full results:

  • "18% of the respondents who participated in tonight's survey identified themselves as Republicans, 45% identified themselves as Democrats, and 37% identified themselves as Independents.
  • The percentage who "think the policies proposed by Barack Obama will move the country in the right direction" rose from 60% pre-speech to 70% post speech.

By comparison, CNN's most recent poll reported that 52% say Obama's policies will move the country in the right direction, which tends to confirm Crowley's point about the Democratic skew of the audience.

To put these results into some perspective, consider two tables I created for a post just before President Bush's 2006 state of the union speech. Two of the questions noted above, those rating the speech as positive or negative and assessing whether the president's policies move the country in the right or wrong direction, were asked by the then CNN/USA Today/Gallup poll on instant response polls since the mid-1990s. Here are the results through 2005 for both questions:

2009-09-09_SOTU_reactions.jpg 20-09-09_right_dir.jpg  

In 2006, 75% had a positive reaction to the SOTU on the Gallup/CNN/USA Today poll, including 48% who said "very positive." The percentage who said Bush's policies will move the country in the right direction increased from 52% before the speech to 68% after.   

So what does this all mean? In terms of these "instant reactions," this speech falls within the range of previous addresses. Depending on the measure it's better than some, worse than others. But keep in mind that none of these positive reactions translated into meaningful changes in presidential approval. Will the speech lead to lasting change in perceptions of health reform? To know that, we will need more surveys of the general population and, mostly, more time.

Final update: I see nothing from CBS, so I am assuming this is all we have for tonight.

Belated update (9/10):  Not sure how I missed this, but the Democracy Corps project of Democratic pollster Stan Greenberg conducted a dial-test focus group last night.

Health Reform Opinion: In One Word

Topics: Barack Obama , Health Care Reform , Measurement , Open-end

In my 'outliers' update yesterday, I pointed to the very useful graphic published by the Washington Post over the weekend that included tag clouds of two "open ended" questions asked on national survey roughly two weeks ago. Jennifer Agiesta described the results in more detail in the Post's "Behind the Numbers" blog.

Both the Washington Post and the Pew Research Center have been experimenting with open-ended questions that ask respondents to answer "in a single word." In pollster lingo, an "open-ened" question suggests no answers and allows respondents to answer in their own words; a closed-ended questions prompt respondents with standard answer categories. Open-ends (as we call them) are not new, but media pollsters tend to avoid them because of the time and effort required to record verbatim answers, to probe for more details and code the answers. The one word open-end is considerably quicker to ask and record and, better yet, is easily suited to tag cloud graphics where the type size is proportional to frequency of response of each word.

On their most recent survey, the Post included two such questions that asked respondents to describe their feelings about President about and, separately, about "the proposed changes to the health care system being developed by (Congress) and (the Obama administration)" (they randomly rotated whether Congress or Obama came first). I thought the responses to the second question (reproduced below) were interesting, but the fact that the tag cloud displayed just the 22 words mentioned most often may have obscured something important.


I thought it might be useful to compare a cloud with the positive words to a cloud with the negative. I emailed Jennifer Agiesta, and she kindly provided the full list of words (unweighted) and the codes (positive, negative, neutral or none) applied to each one. Here are the graphics I created, starting with the responses that were 43% of the weighted adult sample:

Wordle - negative.png

And here is the cloud of the positive responses that were 31% of the adult sample (and I've made a feeble attempt to make the negative cloud roughly 1.4 times bigger than the positive to try to keep them roughly to scale -- unfortunately, Wordle, the service that creates these beautiful graphics allows little control over white space, so hopefully my sizing comes close to proportional).

Wordle - Positive-3.png

The difference between the two is striking, especially if you click to view each full size. As noted by Jennfer Agiesta, the negative responses include a "broader array of words" that imply far more passion and intensity. People use many different words to say what they dislike and those words carry a lot of emotion: "scary," "terrible," "disaster/disastrous," "socialism."

The positive responses, on the other hand, tend to be less varied and convey more of a sense of ambivalence: Two words, "necessary" and "good," account for nearly half of the responses. Those two, along with "hopeful" and "okay" account for roughly 60% of the positive comments (unweighted). We see words like "great," "excited," "excellent" -- the positive analogues to the emotional words used by opponents -- far less often.

The Post question asks for "feelings" not facts, but my sense is that opponents of the Obama proposals have a greater conviction that they know what they don't like about the plan, while supporters are hopeful but more often unsure about the details. And, of course, the Post found that one adult in four (26%) is either unsure or neutral. Obama and the Democrats are hoping to reach those uncertain and hopeful Americans with his speech tonight.

That said, I wish that some of my pollster colleagues had asked Americans to describe in a single word or a sentence what they think the health reform proposals will do. For all the sturm and drang of the debate over the public option (and about poll questions that ask about the public option), I am surprised that no one has bothered to ask a question sequence like the following before asking the more standard questions in which respondents react to brief descriptions of the proposals:

  • In the health care reform debate, have you heard anything about a proposed "public option" (yes/no)?
  • In your own words, please tell me what you have heard about the proposed public option? (followed by a probe: can you be more specific?)
  • Based on what you've heard, do you favor or oppose the public option or don't you know enough to say?

More probing along these lines would at least give us a sense of how many people think they are familiar with the proposal and some sense of what they know. It would also allow us to compare support for the public option among those who seem to know what the public option to those who do not. Instead, we are left to debate the many nuances of question language and wonder whether we are measuring current attitudes or reactions conjured up on the spot.

Pollsters, to help us understand public opinion and health reform, more open-ends please!

Instant Reaction Polls: A Pre-Speech Primer

Topics: Barack Obama , Health Care Reform , Instant Reaction Polls

The odds are very good that shortly after President Obama completes his health care address this evening, at least two television networks will release results from "instant reaction" surveys. Others will likely stage focus groups in which selected participants react to what they see and hear. The odds are almost as good that pundits and partisans will grossly misread or spin whatever those surveys and focus groups produce. Here is a primer for those hoping to make sense out of whatever new poll data we see over the next 24 hours.

1) Instant response polls measure only speech-watchers. While the methodologies vary, the most important thing to remember is that these surveys aim to sample only those who watch the speech and, as such, are not intended to represent the views of all Americans. The pollsters will hopefully provide some before-and-after comparisons of the speech audience -- showing how viewers felt about health care reform before and after the speech -- but those comparisons will involve only the sample of speech viewers. Thus, no one should take any of the numbers they see tonight and compare them to full-sample results from previous surveys of all adults or all "likely voters."

2) The audience is usually skewed toward the President's fans. Remember, not all Americans watch presidential addresses. Between 52 and 63 million Americans watched the debates last fall and roughly 53 million watched President Bush's address on the economic crisis last September.** Those are huge audiences, but plenty of Americans still tune out.

Since admirers of the president are usually overrepresented among those who tune in, the overall instant reaction numbers can be deceiving. Like debates, presidential speeches usually reinforce the opinions people held going in, so it will be important to look at cross-tabulations by party or by pre-speech attitudes about health care reform (for more details, see my posts from just before and after the 2006 State of the Union address).

3) Instant impressions can be fleeting. Generally speaking, reactions on instant response polls tend to be far more positive immediately after the event than on surveys taken days or weeks later. The sampling problem may explain much of this phenomenon, or it may be, as ABC's Gary Langer wrote earlier this year, that viewers "get caught up in the moment, [so] a single speech in and of itself is very highly unlikely to change any fundamental attitudes." The point is: be careful; instant reactions may fade.

4) Some pollsters have reservations about instant reaction polls. To understand their skepticism, let's review the methods of the polls that are likely to get the most attention:

  • CNN and its pollster, the Opinion Research Corporation, typically recontact by telephone respondents interviewed a few days earlier who said they planned to watch the speech. They call as many as they can, as quickly as they can, to get an immediate reaction.
  • Gallup uses essentially the same method, not surprising since until 2007 Gallup produced surveys for the partnership of CNN and USA Today. The main difference has nothing to do with methodology: in recent years, Gallup has typically held its results for the following day rather than rushing them on the air immediately.
  • CBS also conducts panel-back surveys, although theirs are conducted online using the nationally representative Knowledge Networks internet panel. Since all respondents can log in and complete the survey immediately, CBS can gather its data very quickly.

The important point: All three surveys use what pollsters call a "panel-back" design, a classic and widely used survey research technique. The strength of this approach is that it allows relatively quick, efficient recontact of likely speech watchers contacted earlier through more rigorous methods. It also allows for before-and-after comparisons with results gathered from the previous survey. The downside of any such panel-back survey is that (a) some respondents polled the first time do not respond the second time, which may create a bias that simple weighting does not resolve, and (b) the experience of having been interviewed may indirectly change attitudes (respondents may pay more attention to the news after the first interview).

I should stipulate that I have no advance word on which networks will be conducting surveys tonight, but CNN and CBS News conducted instant response polls following all of last fall's presidential debates, and Gallup joined CNN and CBS in conducting a one-night survey following Obama's address to Congress earlier this year.

SurveyUSA, a company that polls with an automated, recorded-voice methodology, sometimes conducts post-speech polls using fresh samples in media markets in the Pacific time zone (where the speech ends before 7:00 p.m.). Their automated method allows a large number of simultaneous dials of a fresh sample.

But again, not all pollsters are enamored with these methods. Back in 2004, Republican pollster David Hill wrote a scathing assessment that described the traditional panel-back design as "worthless":

Considerable scholarly research demonstrates that simply being interviewed renders an otherwise normal voter abnormal. After being polled, voters are much more likely to seek out political information through the media, discuss politics with others and eventually to vote. The known effects are so great that in the earliest days of polling, voters would be screened at the outset of an interview to ascertain if they had ever been interviewed before.

ABC News stopped conducting instant reaction polls a few years ago, although as Gary Langer explains, their reservations were about more than the challenge of panel-back sampling:

Good sampling's a bear in this kind of thing, but there are two equally basic problems: Speech watchers tend to be favorably inclined to the speechifier in the first place (those who can't stand him are unlikely to watch); and speeches are crowd-pleasing (even platitudinous) by design (e.g., let's cure cancer).

5) Focus groups have value, but they are not surveys and should be treated with far more caution. If history repeats itself, we are likely to see Fox News, CNN and MSNBC conduct some sort of focus group. The Democratic-affiliated Democracy Corps frequently conducts its own focus groups. Regardless of the sponsor, the focus group usually involves 20-40 adults gathered in a central location, selected by some hopefully-representative-but-not-random method. A moderator talks to the group before and after the speech. Sometimes the members of the group provide feedback throughout the speech using a dial, with the aggregate scores of the dials appearing as rising and falling lines on the television screen.

Ideally, focus groups provide "qualitative" insights that are tough to glean from standardized "quantitative" survey questions. The problem is, they are not random samples, and the discussion is sensitive to the "group dynamic" set by the moderator or the most verbose participants. I have long been skeptical that the reality-TV feel of turning participants into pundits in a television studio produces much of value. Also, while the squiggly line of the dial group chart may help campaign consultants and television producers spot the most memorable sound bites, the value for the rest of us is pretty negligible.

6) Will the speech make a lasting change in attitudes on health reform? That's the really important question, but to answer it we will need the next round of more rigorous national surveys over the coming weeks. Your favorite blogs are probably already humming with predictions and speculation about what might or might not change. For a useful reality check, let me recommend three: John Sides passes along the work of political scientist George Edwards, who reviewed polling around presidential speeches and found that "statistically significant changes in approval rarely follow a televised presidential address." Sides also charts all of the Clinton job approval polls from late 1993 and finds "no increase in Clinton's approval immediately after [his 1993 health care] speech." Gary Langer recalls that the opinions Bill Clinton swayed in an ABC poll conducted immediately after his 1993 address had largely dissipated two weeks later. Finally, it's worth reviewing Charles Franklin's two-year-old post that illustrates the "mostly nil" effects of State of the Union speeches on presidential approval.

So my best safe-but-boring advice is to wait and see. For all the similarity to 1993, this speech provides its own new and hard-to-predict "model." The speech is coming much later in the debate, and for all the evident passion of opponents, many Americans remain both interested and confused. As this week's Pew Research Center News Interest Index poll shows, virtually all Americans (93%) consider health care reform important, 73% say it affects them personally, 40% to 49% say they have been following the issue "very closely," but a huge number (67%) also say the subject is "hard to understand." That sounds like a recipe for a large audience that is potentially more attentive and persuadable than for most previous presidential addresses.

But we shall see.

Update: Brendan Nyhan agrees with Sides and Langer that the speech is unlikely to move public opinion: "The reason is simple -- the president's message is typically offset by that of the opposition. In the aggregate, the effects tend to cancel out and the numbers don't move."

**Clarification: The original version of this post also cited the number of television households (113 million), which is a bit of a non-sequitur since the other statistics are counts of individual viewers. 

US: Most Important Issue (Gallup 8/31-9/2)

8/31-9/2/09; 1,026 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup release)


What do you think is the most important problem facing this country today?
29% The economy in general
26% Health care
15% Unemployment
10% Dissatisfaction with government
9% Federal budget deficit
8% Iraq war

NC: Approval Ratings (Civitas 9/2-3) -updated

Civitas Institute (R) / Insider Advantage
9/2-3/09; 665 registered voters, 3.9% margin of error
Mode: IVR
(Civitas: Perdue, Health care, Obama)

North Carolina

Job Approval
Gov. Perdue: 29 / 50 (chart)
Pres. Obama: 44 / 46 (chart)

Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the Congressional Democrats?
48% Favor, 47% Oppose

US: Health Care (Harris 8/10-18)

Harris Interactive
8/10-18/09; 2,984 adults
Mode: Internet
(Harris release)


Even if you don't know the details of his plan, how do you feel about President Obama's proposals for health care reform?
49% Support, 40% Oppose (chart)

Based on what you've read, seen or heard, how would you rate the health care plans proposed by each of the following?
President Obama: 54% Good, 46% Bad
Democrats in Congress: 46 / 54
Republicans in Congress: 31 / 69

Do you believe that President Obama's health care proposals would do the following...

Create a government-run health care system?
58% Would, 19% Would not

Let anyone who wants to keep the health insurance they have now?
47% Would allow, 28% Would not allow

US: National Survey (AP-GfK 9/3-8)

9/3-8/09; 1,001 adults, 3.1% margin of error
Mode: Live telephone interviews
(AP: story, toplines)


State of the Country
37% Right direction, 57% Wrong track (chart)

Obama Job Approval
50% Approve, 49% Disapprove (chart)
Economy: 44 / 52 (chart)
Health Care: 42 / 52 (chart)

Congress Job Approval
28% Approve, 69% Disapprove (chart)

In general, do you support, oppose or neither support nor oppose the health care reform plans being discussed in Congress?
34% Support, 49% Oppose (chart)

What do you think the President and Congress should do when they come back to
Washington this fall? Do you think they should...

39% Keep working to pass a health care plan by the end of the year
42% Scrap the current negotiations and start over from scratch
18% Leave the health care system as it is now

Party ID
30% Democrat, 20% Republican, 30% independent (chart)

NC: Approval Ratings (PPP 9/2-8) -updated

Public Policy Polling (D)
9/2-8/09; 600 likely voters, 4% margin of error
Mode: IVR
(PPP: Perdue, Obama)

North Carolina

Job Approval
Gov. Perdue: 26 / 54 (chart)
Perdue on Education: 26 / 49
Perdue on Health Care: 20 / 50
Pres. Obama: 45 / 51 (chart)

MA: Senate Special Election (Rasmussen 9/8) -Updated

9/8/09; 800 likely voters, 3.5% margin of error
611 likely Democratic primary voters, 4% margin of error
Mode: IVR
(Rasmussen: Democratic primary, interim appointment)


Job Approval / Disapproval
Pres. Obama: 58 / 41
Gov. Patrick: 41 / 57

2010 Senate Special Election: Democratic Primary
38% Martha Coakley
11% Stephen Lynch
10% Ed Markey
7% Michael Capuano
3% John Tierney

When there is an open Senate seat, should the Governor appoint a replacement or should there be a special election to determine the new Senator?
17% Governor should appoint the replacement
77% There should be a special election

Under Massachusetts law a special election is held to fill an open senate seat. While waiting for a special election should the governor appoint an interim senator?
44% Yes, 43% No

U.S. & Europe: Transatlantic relationships (TNS 6/9-7/1)

German Marshall Fund / Compagnia di San Paolo / TNS
6/9-7/1/09; 1,000 adults/country, 3% margin of error
Mode: Live telephone interviews and face-to-face interviews
(Transatlantic Trends: release, toplines)

Transatlantic trends:

European support for U.S. President Barack Obama's handling of foreign policy is quadruple the approval given to his predecessor, George W. Bush, according to a new survey released today by the German Marshall Fund of the United States (GMF). But people in Central and Eastern Europe and Turkey were markedly less enthusiastic about Obama and the United States than were their West European counterparts. And Obama's personal popularity has not bridged serious transatlantic differences over Afghanistan, Iran, and climate change.

Transatlantic Trends 2009 (www.transatlantictrends.org) shows that three-in-four (77%) respondents in the European Union and Turkey support President Obama's handling of international affairs compared to just one-in-five (19%) who approved of President Bush's foreign policy in 2008.

"We see a remarkable shift in transatlantic opinion from the previous administration," said Craig Kennedy, GMF president. "With American leadership enjoying unprecedented modern popularity, partners on both sides of the Atlantic have an immense opportunity to cooperate on a range of economic and security issues."

US: Supreme Court (Gallup 8/31-9/2)

8/31-9/2/09; 1,026 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup release)


Do you approve or disapprove of the way the Supreme Court is handling its job?
61% Approve, 28% Disapprove

In general, do you think the current Supreme Court is too liberal, too conservative, or just about right?
28% Too liberal
19% Too conservative
50% About right

Tuesday Feels Like Monday 'Outliers'

Topics: Outliers Feature

The Washington Post shares word clouds showing one-word reactions to Obama and health care reform; Jennifer Agiesta blogs the details.

Gallup releases new "for/against" results on health reform; Greg Sargent notices a big undecided, Glen Thrush highlights two key constituencies not "fired up" (via Smith).   

Gary Langer remembers '94 and says a good speech alone "won't do it."

Peter Suderman reviews the challenges of health care polls (though I wish he would have used "message testing" instead of "push poll").

Steve Singiser sees no anti-Dem wave in recent special elections.

Nate Silver notes lower support for unions during recessions.

Tom Jensen finds little long term damage to George Allen in Virginia.

Andrew Gelman reacts to David Shor's 2008 election forecasting.

The Marist Poll tells us what they're all about, via video.

Doug Rivers' defense of opt-in Internet surveys draws quick reactions from Andrew Gelman and Mike Mokrzycki (more here).

Research Rants critiques the screening out of "straightliners" on internet polls.

US: News Interest (Pew 9/3-6)

Pew Research Center
9/3-6/09; 1,005 adults, 3.5% margin of error
Mode: Live telephone interviews
(Pew release)


Most closely followed story
29% Debate over health care reform
16% Reports about the condition of the U.S. economy
13% The discovery of 29-year-old Jaycee Dugard, who had been kidnapped and held captive since she was 11
12% Reports about swine flu and the availability of the vaccine
10% Southern California wildfires
6% The U.S. military effort in Afghanistan

Are you hearing mostly good news about the economy these days, mostly bad news about the economy or a mix of both good and bad news?
5% Good news, 27% Bad news, 68% Mixed

In the past few weeks, have you seen or heard any ads on the subject of health care reform?
2% Yes, mostly positive
28% Yes, mostly negative
21% Yes, mixed positive and negative
36% No, have not seen

On Wednesday, President Obama will give a prime time speech to a joint session of Congress on health care - do you plan to watch the speech or not?
56% Yes, plan to watch
42% No, do not plan to watch

US: National Survey (RSLC 8/30-9/1)

Republican State Leadership Committee (R) / Public Opinion Strategies (R)
8/31-9/1/09; 800 registered voters, 3.5% margin of error
Mode: Live telephone interviews
(POS: memo, toplines)


State of the Country
38% Right direction, 56% Wrong track (chart)

Obama Job Approval
51% Approve, 46% Disapprove (chart)

So far, do you think the federal government's stimulus package has made the economy better, made the economy worse, or has it had no impact on the economy so far?
36% Better, 22% Worse, 39% No impact

Do you think the economic stimulus plan has made your financial situation better than if the stimulus plan had not passed, has it not had an effect, or has the economic stimulus plan made your financial situation worse than if the stimulus plan had not been passed?
23% Better, 24% Worse, 52% No effect

From what you have heard about Barack Obama's health care plan, do you think his plan is a good idea or a bad idea?
35% Good idea, 46% Bad idea (chart)

Party ID
36% Democrat, 28% Republican, 35% independent (chart)

Column: Health Coverage That's Good For The Goose?

Topics: health care , Health Care Reform , National Journal column

My column for the week reviews data showing positive reactions to the proposed insurance exchanges at the heart of all of the health reform bills now making their way through Congress and floats an idea for the President: "Challenge Congress to pass a reform bill that requires all members to obtain their health insurance the same way as those without employer-provided health insurance -- through the newly created health care exchanges, rather than the Federal Employee Health Benefit Plan." Click through for the details.

One thing I overlooked when drafting the column last week was that candidate Obama was already making the connection during the campaign between the proposed exchanges and the plan currently available to federal employees, including members of Congress. Here is Obama discussing his health care plan in the debates with John McCain in the fall:

Debate 2: If you don't have health insurance, you're going to be able to buy the same kind of insurance that Sen. McCain and I enjoy as federal employees. Because there's a huge pool, we can drop the costs. And nobody will be excluded for pre-existing conditions, which is a huge problem.

Debate 3: If you don't have health insurance, then what we're going to do is to provide you the option of buying into the same kind of federal pool that both Sen. McCain and I enjoy as federal employees, which will give you high-quality care, choice of doctors, at lower costs, because so many people are part of this insured group.

For what it's worth, in all three debates with McCain, Obama never used the words "public option" or made any mention of a "government run" insurance plan.

Finally, a word of thanks to Jonathan Cohn, author of "The Treatment," a first-rate blog about health reform at The New Republic. Cohn kindly helped connect me to the handful of health policy analysts who passed along the words of caution included in the column about my admittedly half-baked notion of challenging Congress to insure themselves through the proposed exchanges.

US: Energy Policy (Rasmussen 9/2-3)

9/2-3/09; 1,000 likely voters, 3% margin of error
Mode: Live telephone interviews
(Rasmussen release)


Which is more important, finding new sources of energy or reducing the amount of energy Americans now consume?
60% Finding new sources of energy
32% Reducing the amount of energy Americans now consume

How serious a problem is Global Warming?
64% Very / Somewhat, 34% Not very / Not at all

Is Global Warming caused primarily by human activity or by long term planetary trends?
42% Human activity
47% Long term planetary trends

Is there a conflict between economic growth and environmental protection?
40% Yes, 37% No

How would you rate the way that Barack Obama will handle energy issues such as offshore drilling and research for alternative energy sources as President?
43% Excellent / Good, 55% Fair / Poor

US: Health Care (Gallup 8/31-9/2)

8/31-9/2/09; 1,026 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup release)


Would you advise your member of Congress to vote for or against a healthcare reform bill when they return to Washington in September, or do you not have an opinion?
37% Vote For, 39% Vote Against

How much will your representative's position on healthcare reform affect your vote in the next Congressional elections? Will it be a major factor, a minor factor, or not a factor in your vote?
64% Major Factor, 21% Minor Factor

Doug Rivers: Second Thoughts About Internet Surveys

Topics: Douglas Rivers , Gary Langer , Internet Polls , Jon Krosnick , Probability samples , Sampling , Weighting

Douglas Rivers is president and CEO of YouGov/Polimetrix and a professor of political science and senior fellow at Stanford University's Hoover Institution. Full disclosure: YouGov/Polimetrix is the owner and principal sponsor of Pollster.com.

I woke up on Tuesday morning to find several emails pointing me to Gary Langer's blog posting, which quoted extensively from a supposedly new paper by Jon Krosnick. These data and results appeared previously in a paper, "Web Survey Methodologies: A Comparison of Survey Accuracy," that Krosnick coauthored with me and presented at AAPOR in 2005. The "new" paper has added some standard error calculations, some late-arriving data, and a new set of weights, but the biggest changes in this version are a different list of authors and conclusions.

The 2005 study compared estimates from identical questionnaires fielded to a random digit dial (RDD) sample by telephone, an Internet-based probability sample, and a set of opt-in panels. Of these, the Internet probability sample had the smallest average absolute error, followed closely by the RDD telephone survey; the opt-in Internet panels were around 2% worse. In his presentation of our paper at AAPOR in 2005, Krosnick described the results of all the surveys, both probability and non-probability, as being "broadly similar." My own interpretation of the 2004 data, similar to James Murphy's comment on AAPORnet, was that although the opt-in samples were worse than the two probability samples, the differences were small enough--and the cost advantage large enough--to merit further investigation. Even if it were impossible to eliminate the extra 2% of error from opt-in samples, they could still be a better choice for many purposes than an RDD sample that cost several times as much.

Krosnick now concludes that "Non-probability sample surveys done via the Internet were always less accurate, on average, than probability sample surveys" and, tendentiously, criticizes "some firms that sell such data" who "sometimes say they have developed effective, proprietary methods" to correct selection bias in opt-in panels.

In fact, the data provide little support for Krosnick's argument. The samples from the opt-in panels were, as we noted in 2005, unrepresentative on basic demographics such as race and education because the vendors failed to balance their samples on these variables, while the two probability samples were balanced on race, education, and other demographics. This is not a result of probability sampling, but of non-probabilistic response adjustments. It is too late to re-collect the data, but the solution (invite more minorities and lower educated respondents) doesn't involve rocket science.

Instead, Krosnick tries to fix the problem by weighting, and concludes that weighting doesn't work. A more careful analysis indicates, however, that despite the large sample imbalances in the opt-in samples, weighting appears to remove most or all selection bias in these samples. Because the samples were poorly selected, heavy weighting is needed and this results in estimates with large variances, but no apparent bias. In fact, if we combine the opt-in samples, we can obtain an estimate with equal accuracy to the two probability samples.

First, consider the RDD telephone sample. The data were collected by SRBI, which used advance letters, up to 12 call attempts, $10 incentives for non-respondents, and a field period of almost five months. Nonetheless, the unweighted sample was significantly different from the population on ten of the 19 benchmarks. RDD samples, like this one, consistently underrepresent male, minority, young, and low-education respondents. These biases are reasonably well understood and, for the most part, can be removed by weighting the sample to match Census demographics.

Next, consider the Probability Sample Internet Survey, conducted by Knowledge Networks (KN). The unweighted sample does not exhibit the skews typical of RDD. How is this possible, since the KN panel is also recruited using RDD? Buried in a footnote is an explanation of how KN managed to hit the primary demographic targets more closely than SRBI (which had a much better response rate). The answer is that "The probability of selection was also adjusted to eliminate discrepancies between the full panel and the population in terms of sex, race, age, education, and Census region (as gauged by comparison with the Current Population Survey). Therefore, no additional weighting was needed to correct for unequal probabilities of selection during the recruitment phase of building the panel." That is, the selection probabilities that are supposedly so important to probability sampling were not used because they would have generated an unrepresentative sample!

The opt-in panels, for the most part, were not balanced on race and education. Only one of the opt-in samples, Non-Probability Sample Internet Survey #6 actually used a race quota. Another, the odd Non-Probability Internet Sample #7, claims to have sent invitations proportionally by race and ended up with 46% of the sample white, despite a 51% response rate. (This survey will be excluded from subsequent comparisons.) Non-probability Sample Internet Survey #1 involved large over-samples of African Americans and Hispanics. I could find no explanation of how Krosnick dealt with the oversamples in the 2009 paper, but it should either match exactly (if the conventional stratified estimator is used) or be far off (if the data are not weighted). In fact, the proportion of whites and Hispanics is off by 1% to 2%.

The selection of a subsample of panelists for a study is critical to the accuracy of opt-in samples. Regardless of how the panel was recruited, the combination of nonresponse or self-selection at the initial stage, along with subsequent panel attrition, will tend to make the panel unrepresentative. In 2004, we instructed the panel vendors to use their normal procedures to produce a sample representative of U.S. adults. The practice then (and perhaps now for some vendors) was to use a limited set of quotas. If you didn't ask most opt-in panels to use race or education quotas, they wouldn't use them.

Even without correcting these obvious imbalances, the opt-in samples provided what most people would consider usable estimates for most of the measures. For example, the percentage married (unweighted) was between 53.7% and 61.5% (vs. a benchmark of 56.5%). The percentage who worked last week (unweighted) was between 53.6% and 63.1% (vs. a benchmark of 60.8%). The percentage with 3 bedrooms (unweighted) was between 41.2% and 46.1% (vs. a benchmark of 43.4%). The percentage with two vehicles (unweighted) was between 40.1% and 46.9% (vs. a benchmark of 41.5%). Home ownership (unweighted) was between 64.8% and 72.8% (vs. a benchmark of 72.5%). Has one drink on average (unweighted) was between 33.8% and 40.2% (vs. a benchmark of 37.7%). The KN sample and phone samples were better, but the difference was much less than I expected. (Before doing this study, I thought the opt-in samples would all look like Non-probability Sample Internet Survey #7.)

The 2009 paper attempts to correct these imbalances by weighting, but the weighted results do not show what Krosnick claims. He uses raking (also called "rim weighting") to compute a set of weights that range from .03 to 70, which he then trims at 5. The fact that the raking model wants to weight a cell at 70 is a sign that something has gone wrong and can't be cured by arbitrarily trimming the weight. If there really are cells underrepresented by a factor of 70, then trimming causes severe bias for variables correlated with the weight and not trimming causes the estimates to have large variances. In either case, the effect is to increase the mean absolute error of estimates.
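For readers unfamiliar with raking, the procedure can be sketched in a few lines of Python. Everything below -- the respondents, the margins, and the iteration count -- is a hypothetical illustration, not the data or code from the Krosnick paper:

```python
# A minimal sketch of raking ("rim weighting"): cycle through the
# weighting variables, scaling each respondent's weight so the weighted
# sample matches the population margin for that variable, and repeat
# until the weights settle down.

def rake(rows, margins, iterations=50):
    """rows: list of dicts of respondent attributes.
    margins: {variable: {category: target proportion}}."""
    weights = [1.0] * len(rows)
    for _ in range(iterations):
        for var, targets in margins.items():
            # Current weighted total in each category of this variable
            totals = {c: 0.0 for c in targets}
            for w, r in zip(weights, rows):
                totals[r[var]] += w
            grand = sum(totals.values())
            # Scale factor that moves each category onto its target
            factors = {c: targets[c] * grand / totals[c] for c in targets}
            weights = [w * factors[r[var]] for w, r in zip(weights, rows)]
    return weights

# Hypothetical, deliberately skewed sample: 70% female, 80% white
rows = ([{"sex": "F", "race": "white"}] * 7 +
        [{"sex": "M", "race": "white"}] * 1 +
        [{"sex": "M", "race": "nonwhite"}] * 2)
margins = {"sex": {"F": 0.5, "M": 0.5},
           "race": {"white": 0.75, "nonwhite": 0.25}}
weights = rake(rows, margins)
```

The worse the initial imbalance, the more extreme the resulting weights become -- which is exactly the pathology that the .03-to-70 weight range quoted above reflects, and why trimming those weights trades variance for bias.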

The fact that the trimmed and untrimmed weights have about the same average absolute error does not mean that weighting is unable to remove self-selection bias from the sample. The mean absolute error is a measure of accuracy. It is driven by two factors: bias (the difference between the expected value of the estimate and what it is trying to estimate) and variance (the variation in an estimate around its expected value from sample to sample). The usual complaint about self-selected samples is that you can never know whether they will be biased or the size of the bias. Inaccuracy due to sampling variation can be reduced by just taking a larger sample. Bias, on the other hand, doesn't decrease when the sample size is increased.
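That distinction is easy to see in a toy simulation (the population, bias value, and sample sizes below are illustrative inventions, not figures from the study):

```python
import random

# A biased estimator: the sample mean of n uniform draws (true mean 0.5)
# shifted by a fixed selection bias. Growing n shrinks the sampling
# variance, but the bias survives untouched.
random.seed(1)

TRUE_MEAN = 0.5
BIAS = 0.05  # hypothetical fixed selection bias

def biased_estimate(n):
    return sum(random.random() for _ in range(n)) / n + BIAS

avg_error = {}
for n in (100, 10_000):
    draws = [biased_estimate(n) for _ in range(200)]
    # Averaged over 200 replications, the error converges to the bias
    # (about 0.05) at either sample size: more data does not remove it.
    avg_error[n] = sum(draws) / len(draws) - TRUE_MEAN
```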

Obviously, unweighted estimates from these opt-in samples will be biased because the vendors ignored race and education when selecting respondents. This wouldn't have been difficult to fix, but it wasn't done. Apparently very large weights are needed to correct demographic imbalances in these samples, but the large weights give estimates with large variances and, hence, a high level of inaccuracy. If one tries to control the variance, as Krosnick does, by trimming the weights, then the variance is reduced at the expense of increased bias. The result, again, is inaccuracy. We are asking the weighting to do too much.

A simple calculation shows that all of Krosnick's results are consistent with the weighting removing all of the bias from the opt-in samples. One way to combat increased variability is to combine the six opt-in samples. Without returning to the original data, a simple expedient is to just average the estimates. Since the samples are independent and of the same size, the average of 6 means or proportions should have a variance about 1/6 as large as the single sample variances. The variance is approximately equal to the square of the mean absolute error which, after weighting, was about 5 for the opt-in samples, implying a variance of about 25. If there is no bias after weighting, then the variance of the average of the estimates should be 25/6 or approximately 4, implying a mean absolute error of about 2%.
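The arithmetic in that paragraph is compact enough to check directly; this sketch simply restates the figures quoted above (a mean absolute error of about 5 for a single weighted opt-in sample, six independent samples):

```python
import math

# Treat the mean absolute error (MAE) as roughly the square root of the
# variance, per the approximation in the text.
single_sample_mae = 5.0
variance = single_sample_mae ** 2        # about 25

# Averaging six independent, equal-sized samples divides the variance by 6.
averaged_variance = variance / 6         # about 4
averaged_mae = math.sqrt(averaged_variance)  # about 2 percent
```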

How does this prediction pan out? If we average each of the weighted estimates and compute the error for each item using the difference between the average estimate and the benchmark, the mean absolute error for the opt-in samples is 1.4% -- almost identical to the mean absolute error for each of the weighted probability samples. That is, the amount of error reduction that comes from averaging the estimates is about what would be predicted if all the bias could be removed by weighting. Thus, the combination of these six opt-in samples gives an estimate with about the same accuracy as a fairly expensive probability sample (which also required weighting, though not as much).

There is no reason, however, why you should need six opt-in samples to achieve the same accuracy as a single probability sample of the same size. If the samples were selected appropriately, then we could avoid the need for massive weighting. It is still an open question what variables should be used to select samples from opt-in panels or what the method of selection should be. In the past few years, we have accumulated quite a bit of data on the effectiveness of these methods, so there is no need to focus on a set of poorly selected samples from 2004.

Probability sampling is a great invention, but rhetoric has overtaken reality here. Both of the probability samples in this study had large amounts of nonresponse, so that the real selection probability--i.e., the probability of being selected by the surveyor and the respondent choosing to participate--is not known. Usually a fairly simple nonresponse model is adequate, but the accuracy of the estimates depends on the validity of the model, as it does for non-probability samples. Nonresponse is a form of self-selection. All of us who work with non-probability samples should spend our efforts trying to improve the modeling and methods for dealing with the problem, instead of pretending it doesn't exist.