Pollster.com

August 9, 2009 - August 15, 2009

 

See You In A Week 'Outliers'

Topics: Outliers Feature

Bill Clinton predicts increasing Obama approval should health care reform pass.

Gary Andres says the health care debate is hurting Obama with independents.

Glen Bolger and Jim Hobart see more evidence of Democrats losing the health care message war.

David Paul Kuhn reviews the health care politics and sees a hard sell for Obama.

John Sides scores the truthfulness of the Democrats and Republicans on health care.

William Schneider says a repeat of 1994 is unlikely.

Charlie Cook warns of danger for Democrats in Virginia.

Tom Jensen thinks Obama would still win Virginia.

Nate Silver cautions against relying on overly aggressive early LV modeling of races for 2010.

Greg Dworkin (DemfromCt) sums up our polling panel at Netroots Nation.

Matt Galvin recounts the role of research in the redesign of NPR.org (via Mokrzycki).

and...

First, some belated housekeeping: I neglected to officially welcome Brendan Nyhan and Robert Moran as new regular contributors to Pollster.com. Brendan, now a Robert Wood Johnson Scholar in Health Policy Research at the University of Michigan, was a co-editor of Spinsanity and co-author of the New York Times bestseller All the President's Spin. He will cross-post occasional polling items from his blog www.brendan-nyhan.com.

Robert Moran is Executive Vice President at StrategyOne, where he leads the Corporate and Public Affairs research practice, is the Director of Product Marketing, and manages the primary research team in the Washington office. Before that he worked for more than ten years at the Republican polling firms Fabrizio, McLaughlin & Associates and Public Opinion Strategies.

Finally, as of tonight I am officially off for a much-needed week of vacation. My National Journal column will appear on Monday and, hopefully, you will see a little more of our other contributors over the next week. Also, Emily will be posting poll updates and adding new data to our charts as usual. See you in a week!


PA: 2010 Sen (Kos 8/10-12)


Daily Kos (D) / Research 2000
8/10-12/09; 600 likely voters, 4% margin of error
400 likely Democratic primary voters, 5% margin of error
Mode: Live telephone interviews
(Kos release)

Pennsylvania

Favorable / Unfavorable

Sen. Arlen Specter (D): 52 / 40 (chart)
Joe Sestak (D): 37 / 19
Pat Toomey (R): 37 / 34
Gov. Ed Rendell (D): 48 / 41 (chart)
Sen. Bob Casey (D): 56 / 29 (chart)
Barack Obama: 55 / 40 (chart)

2010 Senate: Democratic Primary
Specter 48%, Sestak 33% (chart)

2010 Senate: General Election

Specter 45%, Toomey 40% (chart)
Sestak 42%, Toomey 41% (chart)


US: Health Care (Marist 8/3-6)


Marist
8/3-6/09; 854 registered voters, 3.5% margin of error
Mode: Live telephone interviews
(Marist release)

National

Obama Job Approval
Health Care: 43% Approve, 45% Disapprove (chart)

Do you think the current health care system in this country needs major changes, minor changes, or no changes at all?
67% Major Changes, 28% Minor Changes

If health care reform is passed by Congress, do you expect health care in this country to get better, to get worse, or remain the same as it is now?
38% Get better, 38% Get worse, 17% Remain the same

If health care reform is passed by Congress, do you expect health care for you and your family to get better, to get worse, or remain the same as it is now?
27% Get better, 34% Get worse, 35% Remain the same

(Update: Marist also tested opinions of Michael Vick's return to football. Those results are available here)


US: National Survey (Kos 8/10-13)


Daily Kos (D) / Research 2000
8/10-13/09; 2,400 adults, 2% margin of error
Mode: Live telephone interviews
(Kos release)

National

Favorable / Unfavorable
Barack Obama: 60 / 36 (chart)
Nancy Pelosi: 36 / 56
Democratic Party: 45 / 48
Republican Party: 17 / 74

State of the Country
42% Right direction, 53% Wrong track (chart)


NC: 2010 Sen (PPP 8/4-10)


Public Policy Polling (D)
8/4-10/09; 749 likely voters; 3.6% margin of error
Mode: IVR
(PPP release)

North Carolina

Job Approval / Disapproval
Sen. Burr: 38 / 32 (chart)

2010 Senate (all trends)
Burr 42%, Generic Democrat 35%
Burr 43%, Cal Cunningham 28%
Burr 43%, Kevin Foy 27%
Burr 43%, Kenneth Lewis 27%
Burr 43%, Elaine Marshall 31%


A Tale Of Two Reform Packages


Picture the scene: a fairly popular President, having amassed a significant amount of political capital, decides it's time to cash in and spend some of it on a tough reform effort for a failing, inadequate system. Many Americans agree that the status quo isn't acceptable long-term but hesitate to sign on to changes that they deem too risky. Members of Congress go out to their districts and are confronted at town hall meetings with frustrated, vocal constituents worried about the risks of the plan. The President's popularity outpaces his policies and in particular, this major reform package. Even with control of both houses of Congress, the package can't survive. The reform fails.

If you feel like you've seen this story before, you're not wrong. The trajectory of the 2009 health care debate seems eerily similar to that of the 2005 battle over Social Security reform. Comparing the polling from then with the data of today reveals the parallels and shows why the health care debate feels all too familiar.

Similarity #1: Presidential Popularity

First, take a look at a throwback post from 2006 at MysteryPollster.com that tracks Bush's job approval from January 2005 forward. Bush began 2005 with job approval over 50% - slightly below where Obama started at the beginning of July (Gallup's 7/05-07/2009 poll had Obama at 56%). The trends are not dissimilar: Charles Franklin's plot of Bush's job numbers from January 2005 forward shows a shrinking of support that looks an awful lot like the Obama job approval chart on the front page. This isn't a particularly surprising finding, but it provides context for the other, more striking comparisons.

Similarity #2: The Agreement that the Status Quo is Unacceptable

In both the Social Security debate and the health care debate, Americans agree: the system needs a major overhaul. While so many other issues fail to get Americans to agree on the crucial "we need to do something" sentiment, both Social Security and health care had that extra boost from a public that agreed: maintaining the current system is not workable long term. In February 2005, Gallup found 73% of Americans said Social Security was "in crisis" or "has major problems" (18% said Social Security was "in crisis").

Compare that to the health care debate of today. Gallup has found that 20% of Americans believe health care is "in crisis" and at least a majority believe it has major problems (unfortunately, Gallup doesn't tell us how large a majority). To flesh that out a bit, Gallup asked the question in November 2008 and found 73% of respondents said that health care was either "in crisis" or had "major problems". Does that number sound familiar?

Similarity #3: Issue Handling

By March 2005, Bush's numbers on handling of Social Security were brutal, with an ABC/WaPo poll showing only 35% approving and 56% disapproving. CNN/Gallup had even worse news, with only 1 out of 3 approving. Compared with the 49% approval he enjoyed shortly after taking office, once the issue became a hot topic, Bush's numbers tanked.

Similarly, Obama's numbers on health care have plummeted since the debate began. In April, during Obama's honeymoon, Pew showed Obama with a 51-26 advantage on health care job approval. By August, he had a 42-43 disadvantage - quite the fall from the earlier numbers. The idea that "the president is more popular than his policies" held true then as it does now. (Just take a look at Mara Liasson's February 2005 NPR story, titled "Bush More Popular Than His Social Security Plan".)

In both cases, the President began his administration with the public's trust and support to fix the "crisis" at hand. In both cases, once the debate flared, his numbers dropped significantly. But it is worthwhile to point out that the comparison is not perfect - the Obama honeymoon numbers were immediately followed by the debate, while Bush had a full four years before tackling Social Security.

At any rate, this is just the basic side-by-side look at the reasons why this health care debate may seem like a bit of a "glitch in the Matrix", giving those who watch politics a sense of deja vu.

Because sometimes the more things change, the more they stay the same.

(This item has been cross posted at The Next Right)


States: Ideology (Gallup Jan.-June '09)


Gallup
January-June 2009; 160,236 adults, 3-7% margin of error (in individual states)
Mode: Live telephone interviews
(Gallup article)

U.S. States

Gallup:

The strength of "conservative" over "liberal" in the realm of political labels is vividly apparent in Gallup's state-level data, where a significantly higher percentage of Americans in most states -- even some solidly Democratic ones -- call themselves conservative rather than liberal...

The overall percentages of self-declared conservatives in each state range from a high of 49% in Alabama to a low of 23% in the nation's capital. The "liberal" label is embraced most widely in D.C., by 37%, followed by 29% in Massachusetts. At 14%, it is used least commonly in Louisiana.


NJ: 2009 Gov (DemCorps 8/11-12)


Greenberg Quinlan Rosner (D) / Democracy Corps (D)
8/11-12/09; 620 likely voters, 4.1% margin of error
Mode: Live telephone interviews
(GQR: summary, memo, results)

New Jersey

Favorable / Unfavorable
Jon Corzine: 32 / 47 (chart)
Chris Christie: 32 / 31
Chris Daggett: 4 / 7
Barack Obama: 54 / 30 (chart)

2009 Governor
Christie 40%, Corzine 35%, Daggett 10% (chart)
Christie 43%, Corzine 37%


US: National Survey (Fox 8/11-12)


Fox News / Opinion Dynamics
8/11-12/09; 900 registered voters, 3% margin of error
353 Democrats, 5% margin of error
312 Republicans, 6% margin of error
199 independents, 7% margin of error
Mode: Live telephone interviews
(Fox story, toplines)

National

Obama Job Approval
53% Approve, 40% Disapprove (chart)
Dems: 87 / 8 (chart)
Reps: 16 / 76 (chart)
inds: 49 / 44 (chart)

Congressional Job Approval
30% Approve, 59% Disapprove (chart)

State of the Country
38% Satisfied, 61% Dissatisfied (chart)

Based on what you know about the health care reform legislation being considered right now, do you favor or oppose the plan?

34% Favor, 49% Oppose (chart)

Do you think most Americans would be better off or worse off under the health care reforms being considered or would the reforms not make much of a difference to most Americans across the country?

34% Better off, 36% Worse off, 20% No difference

Do you think you and your family would be better off or worse off under the health care reforms being considered or would the reforms not make much of a difference to your family?

20% Better off, 35% Worse off, 37% No difference

Party ID
39% Democrat, 35% Republican, 22% independent (chart)


CA: 2010 Sen, Gov (Kos 8/9-12)


Daily Kos (D) / Research 2000
8/9-12/09; 600 likely voters, 4% margin of error
400 Democratic primary voters, 5% margin of error
400 Republican primary voters, 5% margin of error
Mode: Live telephone interviews
(Kos release)

California

Favorable / Unfavorable
Jerry Brown (D): 48 / 37
Gavin Newsom (D): 40 / 42
Meg Whitman (R): 41 / 30
Tom Campbell (R): 38 / 29
Steve Poizner (R): 35 / 27
Barbara Boxer (D): 49 / 43 (chart)
Carly Fiorina (R): 22 / 29
Chuck DeVore (R): 21 / 27
Arnold Schwarzenegger (R): 40 / 54 (chart)
Dianne Feinstein (D): 51 / 42 (chart)
Barack Obama: 63 / 30 (chart)

2010 Governor: Democratic Primary (chart)
Brown 29%, Newsom 20%

2010 Governor: Republican Primary (all match-ups)
Whitman 24%, Campbell 19%, Poizner 9%
Whitman 27%, Campbell 21%

2010 Governor: General Election (all match-ups)
Brown 42%, Whitman 36%
Brown 43%, Campbell 35%
Brown 43%, Poizner 34%
Whitman 37%, Newsom 36%
Newsom 36%, Campbell 35%
Newsom 36%, Poizner 35%

2010 Senate: Republican Primary (chart)
Fiorina 29%, DeVore 17%

2010 Senate: General Election (all match-ups)
Boxer 52%, Fiorina 31%
Boxer 53%, DeVore 29%


US: National Survey (Economist 8/9-11)


The Economist / YouGov
8/9-11/09; 1,000 adults, 4.9% margin of error
Mode: Internet
(Economist post)

National

Favorable / Unfavorable
Obama: 53 / 41 (chart)

Obama Job Approval
49% Approve, 43% Disapprove (chart)
Dems: 85 / 10 (chart)
Reps: 10 / 85 (chart)
inds: 42 / 48 (chart)
Economy: 47 / 44 (chart)
Health Care: 44 / 47 (chart)

Congressional Job Approval
19% Approve, 56% Disapprove (chart)

2010 Congress: National Ballot
36% Republican, 46% Democrat (chart)

State of the Country
37% Right Direction, 48% Wrong track (chart)
Economy: 26% Getting better, 34% Getting worse, 33% About the same (chart)


US: News Interest (Pew 8/7-10)


Pew Research Center
8/7-10/09; 1,004 adults, 3.5% margin of error
Mode: Live telephone interviews
(Pew release)

National

Most Closely Followed News Story

36% Debate in Washington over health care reform
21% Reports on the condition of the U.S. economy
14% Bill Clinton securing the release of two American journalists held by North Korea

Are you hearing mostly good news about the economy these days, mostly bad news about the economy or a mix of both good and bad news?

11% Mostly good news
29% Mostly bad news
59% A mix of good and bad news

From what you've seen and heard, do you think the way people are protesting at town hall meetings over health care reform is appropriate or inappropriate?

61% Appropriate
34% Inappropriate


US: Health Care (Gallup 8/11)


USA Today / Gallup
8/11/09; 1,000 adults, 4% margin of error
Mode: Live telephone interviews
(USA Today story, Gallup story)

National

Have town hall meeting protests against the proposed bills made you...

34% More sympathetic to the protesters' views
21% Less sympathetic
36% No difference

Are the following actions at town hall-style forums on health care an example of democracy in action, or of abuse of democracy?

Angry attacks against a bill: 51% Democracy, 41% Abuse
Booing members of Congress: 44% Democracy, 47% Abuse
Shouting down supporters of a bill: 33% Democracy, 59% Abuse


PA: Toomey 48 Specter 36 (Rasmussen 8/11)


Rasmussen
8/11/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)

Pennsylvania

Job Approval / Disapproval

Pres. Obama: 51 / 47 (chart)
Gov. Rendell: 39 / 60 (chart)

Favorable / Unfavorable

Arlen Specter: 43 / 54 (chart)
Pat Toomey: 54 / 26
Joe Sestak: 40 / 36

2010 Senate: General Election

Toomey 48%, Specter 36% (chart)
Toomey 43%, Sestak 35% (chart)

Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the congressional Democrats?

42% Favor, 53% Oppose

(Primary election data for this poll is available here)


'Can I Trust This Poll?' - Part III

Topics: AAPOR, Can I Trust This Poll, Disclosure, Divergent Polls, Huffington Post, Huffpollstrology, NCPP, New Hampshire

In Part II of this series on how to answer the question, "can I trust this poll," I argued that we need better ways to assess "likely voter" samples: What kinds of voters do pollsters select and how do they choose or model the likely voter population? Regular readers will recall how hard it can be to convince pollsters to disclose methodological details. In this final installment, I want to review the past efforts and propose an idea to promote more complete disclosure in the future.

First, let's review the efforts to gather details of pollster methods carried out over the last two years by this site, the American Association for Public Opinion Research (AAPOR) and the Huffington Post.

  • Pollster.com - In September 2007, I made a series of requests of pollsters that had released surveys of likely caucus goers in Iowa. I asked for information about their likely voter selection methods and for estimates of the percentage of adults represented by their surveys. A month later, seven pollsters -- including all but one of the active AAPOR members -- had responded fully to my requests, five provided partial responses and five answered none of my questions. I had originally planned to make similar requests regarding polls for the New Hampshire and South Carolina primaries, but the responses trickled in so slowly and required so much individual follow-up that I limited the project to Iowa (I reported on the substance of their responses here).
  • AAPOR - In the wake of the New Hampshire primary polling snafu, AAPOR appointed an Ad Hoc Committee to investigate the performance of primary polls in New Hampshire and, ultimately, in three other states: South Carolina, California and Wisconsin. They made an extensive request of pollsters, asking not only for things the AAPOR code requires pollsters to disclose but also for more complete information, including individual-level data for all respondents. Despite allowing pollsters over a year to respond, only 7 of 21 provided information beyond minimal disclosure, and despite the implicit threat of AAPOR censure, three organizations failed to respond with even the minimal information mandated by AAPOR's ethical code (see the complete report).
  • HuffingtonPost - Starting in August 2008, as part of their "Huffpollstrology" feature, the Huffington Post asked a dozen different public pollsters to provide response and refusal rates for their national polls. Six replied with response and refusal rates, two responded with limited calling statistics that did not allow for response rate calculations and four refused to respond (more on Huffpollstrology's findings here).

The disclosure requirements in the ethical codes of survey organizations like AAPOR and the National Council on Public Polls (NCPP) gained critical mass in the late 1960s. George Gallup, the founder of the Gallup Organization, was a leader in this effort, according to Albert Golin's chapter in a published history of AAPOR (The Meeting Place). In 1967, Gallup proposed creating what would ultimately become NCPP:

The disclosure standards [Gallup] was proposing were meant to govern "polling organizations whose findings regularly appear in print and on the air....also [those] that make private or public surveys for candidates and whose findings are released to the public." It was clear from his prospectus that the prestige of membership (with all that it implied for credentialing) was thought to be sufficient to recruit public polling agencies, while the threat of punitive sanctions (ranging from a reprimand to expulsion) would reinforce their adherence to disclosure standards [p. 185].

Golin adds that Gallup's efforts were aimed at a small number of "black hat" pollsters in hopes of "draw[ing] them into a group that could then exert peer influence over their activities." Ultimately, this vision evolved into AAPOR's Standards for Minimal Disclosure and NCPP's Principles of Disclosure.

Unfortunately, as the experiences of the last year attest, larger forces have eroded the ability of groups like AAPOR and NCPP to exert "peer pressure" on the field. A new breed of pollsters has emerged that cares little about the "prestige of membership" in these groups. Last year, nearly half the surveys we reported at Pollster.com had no sponsor other than the businesses that conducted them. These companies either disseminate polling results for their market value, make their money by selling subscription access to their data, or both. They know that the demand for new horse race results will drive traffic to their websites and expose their brand on cable television news networks. As such, they see little benefit to a seal of approval from NCPP or AAPOR and no need for exposure in more traditional, mainstream media outlets to disseminate their results.

The recent comments of Tom Jensen, the communications director at Public Policy Polling (PPP), are instructive:

Perhaps 10 or 20 years ago it would have been a real problem for PPP if our numbers didn't get run in the Washington Post but the fact of the matter is people who want to know what the polls are saying are finding out just fine. Every time we've put out a Virginia primary poll we've had three or four days worth of explosion in traffic to both our blog and our main website.

So when pressured by AAPOR, many of these companies feel no need to comply (although I should note for the record that PPP responded to my Iowa queries last year and to the AAPOR Ad Hoc Committee's request for minimal disclosure, but no more). The process of "punitive sanctions" moves too slowly and draws too little attention to motivate compliance among non-AAPOR members. Although the AAPOR Ad Hoc Committee made its requests in March 2008, its Standards Committee is still processing the "standards case" against those who refused to comply. In February, AAPOR issued a formal censure, its first in more than ten years, of a Johns Hopkins researcher for his failure to disclose methodological details. If you can find a single reference to it in the Memeorandum news compilation for the two days following the AAPOR announcement, your eyes are better than mine.

Meanwhile, the peer pressure that Gallup envisioned continues to work on responsible AAPOR and NCPP members, leaving them feeling unfairly singled out and exposed to attack by partisans and competitors. I got an earful of this sentiment a few weeks ago from Keating Holland, the polling director at CNN, as we were both participating in a panel discussion hosted by the DC AAPOR chapter. "Disclosure sounds like a great idea in the confines of a group full of AAPOR people," he said, "but it has real world consequences, extreme real world consequences . . . as a general principle, disclosure is a stick you are handing to your enemies and allowing them to beat you over the head with it."

So what do we do? I have an idea, and it's about scoring the quality of pollster disclosure.

To explain what I mean, let's start with the disclosure information that both AAPOR and NCPP consider mandatory -- the information that their codes say should be disclosed in all public reports. While the two standards are not identical, they largely agree on these elements (only AAPOR considers the release of response rates mandatory, while NCPP says pollsters should provide response rate information on request):

  • Who sponsored/conducted the survey?
  • Dates of interviewing
  • Sampling method (e.g. RDD, List, Internet)
  • Population (e.g. adults, registered voters, likely voters)
  • Sample size
  • Size and description of the subsample, if the survey report relies primarily on less than the total sample
  • Margin of sampling error
  • Survey mode (e.g. live interviewer, automated, internet, cell phone)
  • Complete wording and ordering of questions mentioned in or upon which the release is based
  • Percentage results of all questions reported
  • [AAPOR only] The AAPOR response rate or a sample disposition report

NCPP goes further and spells out a second level of disclosure -- information pertaining to publicly released results that its members should provide on written request:

  • Estimated coverage of target population
  • Respondent selection procedure (for example, within household), if any
  • Maximum number of attempts to reach respondent
  • Exact wording of introduction (any words preceding the first question)
  • Complete wording of questions (per Level I disclosure) in any foreign languages in which the survey was conducted
  • Weighted and unweighted size of any subgroup cited in the report
  • Minimum number of completed questions to qualify as a completed interview
  • Whether interviewers were paid or unpaid (if live interviewer survey mode)
  • Details of any incentives or compensation provided for respondent participation
  • Description of weighting procedures (if any) used to generalize data to the full population
  • Sample dispositions adequate to compute contact, cooperation and response rates

They also have a third level of disclosure that "strongly encourages" members to "release raw datasets" for publicly released results and "post complete wording, ordering and percentage results of all publicly released survey questions to a publicly available web site for a minimum of two weeks."

The relatively limited nature of the mandatory disclosure items made sense given the print and broadcast media into which public polls were disseminated when these standards were created. But now, as Pollster reader Jan Werner points out via email, things are different:

When I argued in past decades for fuller disclosure, the response was always that broadcast time or print space were limited resources and too valuable to waste on details that were only of interest to a few specialists. The Internet has now removed whatever validity that excuse may once have had, but we still don't get much real information about polls conducted by the news media, including response rates.

So here is my idea: We make a list of all the elements above, adding the likely voter information I described in Part II. We gather and record whatever methodological information pollsters choose to publicly release into our database for every public poll that Pollster.com collects. We then use the disclosed data to score the quality of disclosure of every public survey release. Aggregation of these scores would allow us to rate the quality of disclosure for each organization and publish the scores alongside polling results.
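To make the scoring step concrete, here is a minimal sketch in Python of how such a score might be computed, assuming a simple weighted checklist. The element names and weights are illustrative placeholders, not a proposed standard; as I note below, a real index would need careful design and input from pollsters.

    # A minimal sketch of disclosure scoring under a weighted-checklist
    # assumption. Element names and weights are illustrative only.
    ELEMENTS = {
        "sponsor": 1.0, "field_dates": 1.0, "sampling_method": 1.0,
        "population": 1.0, "sample_size": 1.0, "margin_of_error": 1.0,
        "mode": 1.0, "question_wording": 2.0, "response_rate": 2.0,
        "likely_voter_method": 2.0,  # the Part II items, collapsed to one
    }

    def disclosure_score(disclosed):
        """Score one release: share of weighted elements disclosed, 0-100."""
        earned = sum(w for name, w in ELEMENTS.items() if name in disclosed)
        return round(100 * earned / sum(ELEMENTS.values()), 1)

    # A release reporting the basics but no response rate or LV details:
    release = {"sponsor", "field_dates", "sampling_method", "population",
               "sample_size", "margin_of_error", "mode", "question_wording"}
    print(disclosure_score(release))  # 69.2

    # Rating an organization would mean averaging its releases' scores.
    def pollster_rating(scores):
        return round(sum(scores) / len(scores), 1)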

Now imagine what could happen if we made the disclosure scores freely available to other web sites, especially the popular poll aggregators like RealClearPolitics, FiveThirtyEight and the Polling Report. What if all of these sites routinely reported disclosure quality scores with polling results the way they do the margin of error? If that happened, it could create a set of incentives for pollsters to improve the quality of their disclosure in a way that enhances their reputations rather than making them feel as if they are handing a club to their enemies.

Imagine what might happen if we could create a database, available free to anyone for non-commercial purposes (via a Creative Commons license), of not just poll results, sample sizes and survey dates, but also a truly rich set of methodological data appended to each survey. We might help create the tools that would allow pollsters to refine their best practices and the next wave of ordinary number crunchers to find ways to decide which polls are worthy of our trust.

The upside is that this system would not require badgering of pollsters or a reliance on a slow and limited process of "punitive sanctions." It would also not place undue emphasis on any one element of disclosure (as the "Huffpollstrology" feature does with response rates). We would record whatever is in the public domain, and if pollsters want to improve their scores, they can choose what new information to release. If a particular element is especially burdensome, they can skip it.

The principal downside is that turning this idea into a reality requires considerable work and far more resources than I have at my disposal. We would need to expand both our database and our capacity to gather and enter data. In other words, we would need to secure funding, most likely from a foundation, to make this idea a reality.

The scoring procedure would have to be thought out very carefully, since different types of polls may require different kinds of disclosure. We would need to structure and weight the index so that different categories of poll get scored fairly. I am certain that to succeed, any such project would need considerable input from pollsters and research academics. The index and scoring would also need to be utterly transparent. We would want to set up a page or data feed so that anyone on the Internet could see the disclosed information for any poll, to evaluate how any survey was scored.

For the moment, at least, this is more an idea than a plan, and it may be little more than fanciful "pie in the sky" that gets no further than this blog posting. Nevertheless, in my five years of participating in this amazing revolution of news and information on the internet that we used to call "the blogosphere," I have come to a certain faith that ideas become a reality when we put them out in the public domain and offer them up for comment, criticism and revision.

So, dear readers, what do you think? Want to help make it a reality?

[Note: I will be participating in a panel tomorrow on "How to Get the Most Out of Polling" at this week's Netroots Nation conference. This series of posts previews the thoughts I am hoping to summarize tomorrow].


PA: Specter 47 Sestak 34 (Rasmussen 8/11)


Rasmussen
8/11/09; 423 likely Democratic primary voters, 5% margin of error
Mode: IVR
(Rasmussen release)

Pennsylvania

2010 Senate: Democratic Primary

Arlen Specter 47%, Joe Sestak 34% (chart)

Favorable / Unfavorable (among Democrats)
Specter: 71 / 25
Sestak: 54 / 23


AR: Approval, Favs (TBQ 7/13-15)


Talk Business Quarterly / The Political Firm (R) / The Markham Group (D)
7/13-15/09; 600 likely voters, 4% margin of error
Mode: Live telephone interviews
(TBQ release)

Arkansas

Job Approval / Disapproval

Pres. Obama: 42 / 54
Gov. Mike Beebe: 78 / 15
Sen. Blanche Lincoln: 49 / 40

Favorable / Unfavorable

Obama: 44 / 51
Beebe: 77 / 17
Lincoln: 49 / 40


US: Palin Approval (CNN 7/31-8/3)


CNN / Opinion Research Corporation
7/31-8/3/09; 1,136 adults, 3% margin of error
Mode: Live telephone interviews
(CNN post)

National

Favorable / Unfavorable
Sarah Palin: 39 / 48 (chart)
John McCain: 51 / 40


NC: Perdue Approval (PPP 8/4-10)


Public Policy Polling (D)
8/4-10/09; 749 likely voters, 3.6% margin of error
Mode: IVR
(PPP release)

North Carolina

Job Approval / Disapproval

Bev Perdue: 27 / 52 (chart)


US: National Survey (Marist 8/3-6)


Marist
8/3-6/09; 845 registered voters, 3.5% margin of error
Mode: Live telephone interviews
(Marist toplines)

National

Obama Job Approval
55% Approve, 35% Disapprove (chart)
Dems: 90 / 6 (chart)
Reps: 20 / 71 (chart)
inds: 47 / 37 (chart)
Economy: 52 / 41 (chart)

State of the Country
50% Right Direction, 42% Wrong Track (chart)


NH: 2012 Pres Primary (Populus 8/10-11)


Populus Research / Now Hampshire (R)
8/10-11/09; 403 likely Republican primary voters, 5% margin of error
Mode: IVR
(Now Hampshire Article)

New Hampshire

2012 President: Republican Primary

Romney 50%
Palin 17%
Huckabee 17%
Gingrich 13%
Pawlenty 3%


US: Obama Approval (Gallup 8/6-9)


Gallup
8/6-9/09; 1,010 adults; 4% margin of error
Mode: Live telephone interviews
(Gallup story)

National

Obama Job Approval
Foreign Affairs: 53 / 40 (chart)
The Economy: 48 / 49 (chart)
Health Care Policy: 43 / 49 (chart)


VA: McDonnell 49 Deeds 41 (Rasmussen 8/10)


Rasmussen
8/10/09; 500 likely voters, 4.5% margin of error
Mode: IVR
(Rasmussen release)

Virginia

Job Approval / Disapproval
Pres. Obama: 48 / 51 (chart)
Gov. Kaine: 56 / 43 (chart)

Favorable / Unfavorable
Bob McDonnell (R): 53 / 30
Creigh Deeds (D): 48 / 39
Sen. Warner: 63 / 31 (chart)
Kaine: 54 / 42 (chart)

2009 Governor
49% McDonnell, 41% Deeds (chart)


Woodstock Flashback 'Outliers'

Topics: Outliers Feature

Rasmussen finds consumer confidence hitting a new high for 2009; ABC News measures a 3-month high.

Charles Blow explains that those who most want health care reform are the most apathetic.

Ed Kilgore asks if town hall meetings should matter.

Ezra Klein ponders the new Rasmussen health care results.

John Sides takes issue with David Kurtz on whether Obama has a health care mandate.

Tom Jensen thinks Obama may have a Colorado problem.

Steve Benen takes issue with CNN's presentation of its party favorable ratings.

Kos notes a decline in the GOP favorable rating among Latinos.  

Media Matters gathers polling showing support for a public option (via McJoan).

Tom Schaller sees a newly emerging Democratic majority in state legislatures.

The New York Times graphical wizards chart data from the American Time Use Survey (via FlowingData).

Research Rants wants online surveys to agree that blank=zero.

Former NPR polling guru Marcus Rosenbaum recounts his Woodstock experience.


'Can I Trust This Poll?' - Part II

Topics: Can I Trust This Poll, Charts, Disclosure, Divergent Polls, Sampling

"Can I trust this poll?" In Part I of this series I tried to present the growing clash between traditional polling methods and a new breed that breaks many of the old rules and makes answering this question difficult. In this post, I want to review the philosophies at work behind efforts to evaluate polls and offer a few suggestions about what we can do to assess whether poll samples are truly representative.

Those who assess polls and pollsters generally fall into two categories: those who check the methodology and those who check the results. Let's consider both.

Check the Methods - Most pollsters have been trained to assess polls by looking at the underlying methods, not the results they produce. The idea is that you do all you can to contact and interview a truly random sample, ask standardized, balanced, clearly-worded questions and then trust the results. Four years ago, my Hotline colleagues asked pollsters how they determine whether they have a good sample. The answer from Gary Langer, director of polling at ABC News, best captures this philosophy:

A good sample is determined not by what comes out of a survey but what goes into it: Rigorous methodology including carefully designed probability sampling, field work and tabulation procedures. If you've started worrying about a "good sample" at the end of the process, it's probably too late for you to have one.   

A big practical challenge in applying this philosophy is that the definition of "rigorous methodology" can get very subjective. While many pollsters agree on general principles (described in more detail in Part I), we lack consensus on a specific set of best practices. Pollsters disagree, for example, about the process used to choose a respondent in sampled households. They disagree about how many times to dial before giving up on a phone number or about the ideal length of time a poll should be in the field. They disagree about when it's appropriate to sample from a list, about which weighting procedures are most appropriate, about whether automated interviewing methods are acceptable and more.

This lack of consensus has many sources: The need to adapt methods to unique situations, differing assessments of the tradeoffs between different potential sources of error and the usual tensions between the goals of cost and quality. Yet whatever the reason, these varying subjective judgments make it all but impossible to score polls using a set of objective criteria. All too often, methodological quality is in the eye of the beholder.

A bigger problem is that the underlying assumption -- that these rigorous, random-digit methods produce truly random probability sampling -- is weakening. The unweighted samples obtained by national pollsters now routinely under-represent younger and non-white people while over-representing white and college-educated Americans. Of course, virtually all pollsters weight their completed samples demographically to correct these skews. Also, many pollsters are now using supplemental samples to interview Americans on their cell phones in order to improve coverage of the younger "cell phone only" population.
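As a toy illustration of what that demographic weighting does (this is cell weighting on a single variable; real polls typically weight or rake on several variables at once, and every share below is invented):

    # Cell weighting on one variable: a respondent's weight is the
    # population share of their cell divided by its share of the sample.
    population = {"18-29": 0.22, "30-49": 0.35, "50-64": 0.26, "65+": 0.17}
    sample     = {"18-29": 0.10, "30-49": 0.30, "50-64": 0.32, "65+": 0.28}

    weights = {cell: population[cell] / sample[cell] for cell in population}
    for cell, w in weights.items():
        print(f"{cell}: weight {w:.2f}")
    # Under-represented 18-29s get weight 2.20; over-represented 65+ get 0.61.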

Most of the time, this approach appears to work. Pre-election polls continued to perform well during the 2008 general election, matching or exceeding their performance in 2004 and prior years. But how long will it be before the assumptions of what SurveyUSA's Jay Leve calls "barge in polling" give way to a world in which most Americans treat a ringing phone from an unknown number the way they treat spam email? And when it does, how will we evaluate the newer forms of research?

Check the Results - When non-pollsters think about how to evaluate polls, their intuitive approach is different. They typically ask, well, how does the pollster compare in terms of accuracy? The popularity of Nate Silver and the pollster ratings he posted last year at FiveThirtyEight.com speaks to the desire of non-pollsters to reduce accuracy to a simple score.

Similarly, pollsters also understand the importance of the perceived accuracy of their pre-election poll estimates. "The performance of election polls," wrote Scott Keeter and his Pew Research Center colleagues earlier this year, "is no mere trophy for the polling community, for the credibility of the entire survey research profession depends to a great degree on how election polls match the objective standard of election outcomes."

So what's the problem in using accuracy scores to evaluate individual pollsters? Consider some important challenges. First, pollsters do not agree on the best way to score accuracy, with the core disagreement centering on how to treat the undecided percentage that appears nowhere on the ballot. And for good reason. Differences in scoring can produce very different pollster accuracy rankings.
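A stylized example of that disagreement, with invented figures: two defensible scoring rules applied to the same poll imply different amounts of error.

    # Two ways to score a pre-election poll against the result, differing
    # in how they treat the undecided percentage. Figures are invented.
    poll   = {"dem": 48, "rep": 44}   # 8% undecided
    result = {"dem": 53, "rep": 46}

    # Rule 1: error on the raw margin, undecideds left in place.
    margin_error = abs((poll["dem"] - poll["rep"]) -
                       (result["dem"] - result["rep"]))
    print(margin_error)  # |4 - 7| = 3 points

    # Rule 2: error on the two-party share, undecideds dropped.
    poll_2p   = 100 * poll["dem"] / (poll["dem"] + poll["rep"])        # 52.2
    result_2p = 100 * result["dem"] / (result["dem"] + result["rep"])  # 53.5
    print(round(abs(poll_2p - result_2p), 1))  # 1.4 points

The same poll looks roughly twice as "wrong" under the first rule as under the second, which is how scoring choices can reorder pollster rankings.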

Second, the usual random variation in individual poll results due to simple sampling error gives especially prolific pollsters -- those active in many contests -- an advantage in the aggregate scores over those that poll in relatively few contests. Comparisons for individual pollsters get dicey when the number of polls used to compute the score gets low.

Third, and probably most important, scoring accuracy this way tells us about only one particular measure (the vote preference question) on one type of survey (pre-election) at one point in the campaign (usually the final week). Consider the chart below (via our colleague Charles Franklin). It plots the Obama-minus-McCain margin on roughly 350 surveys that tracked the national popular vote between June and November 2008. An assessment of pollster error would consider only the final 20 or so surveys -- the points plotted in red.

[Chart: Obama-minus-McCain margin in national polls, June-November 2008; the final 20 surveys shown in red]

Notice how the spread of results (and the frequency of outliers) is much greater from June to October than in the final week (the standard deviation of the residuals, a measurement of the spread of points around the trend line, falls from 2.79 for the grey points from June to October to 1.77 for the last 20 polls in red). Our colleague David Moore has speculated about some of the reasons for what he dubs "the convergence mystery" (here and here; I added my own thoughts here with a related post here). But whatever you might conclude about the reasons for this phenomenon, something about either voter attitudes or pollster methods was clearly different in the final week before the 2008 election. Assuming, as many pollsters do, that this phenomenon was not unique to 2008, how useful are the points in red from any prior election in helping us assess the "accuracy" of the grey points for the next one?
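For readers who want the residual calculation spelled out, here is a small sketch; the margins are invented stand-ins for the roughly 350 national polls, and a cubic polynomial stands in for Franklin's local regression trend.

    # Fit a trend to poll margins over time, then measure the spread of
    # the points around it (the standard deviation of the residuals).
    import numpy as np

    days   = np.array([0, 10, 20, 35, 50, 70, 90, 110, 130, 150])
    margin = np.array([4.0, 6.5, 2.0, 5.5, 1.0, 7.0, 5.0, 8.5, 6.0, 7.5])

    trend = np.poly1d(np.polyfit(days, margin, deg=3))  # smoothed trend
    residuals = margin - trend(days)
    print(round(residuals.std(ddof=1), 2))  # spread around the trend line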

So what do we do? How can we evaluate new polling results when we see them?

The key issue here is, in a way, about faith. Not religious faith per se, but faith in random sampling. If we have a true random probability sample, we can have a high degree of faith that the poll is representative of the larger population. That fundamental philosophy guides most pollsters. The problem for telephone polling today is that many of the assumptions of true probability sampling are breaking down. That change does not mean that polls are suddenly non-representative, but it does make for a much greater potential than 10 or 20 years ago for skewed, flukey samples.

What we need is some way to assess whether poll samples are truly representative of a larger population that does not rely entirely on faith that "rigorous" methods are in place to make it so. I will grant that this is a very big challenge, one for which I do not have easy answers, especially for the random digit dial (RDD) samples of adults typically used for national polls. Since most pollsters already weight adult samples by demographics, their weighted demographic distributions are already representative. But what about other variables like political knowledge, interest or ideology? Again, I lack easy answers, though perhaps as the quality of voter lists improves in the future, we may get better "auxiliary data" to help identify and correct non-response bias. But for now, our options for validating samples are very limited.

When it comes to "likely voter" samples, however, pollsters can do far better at informing us about who these polls represent. As we have reported here and especially on my old Mystery Pollster blog over the years, there are almost as many definitions of likely voters as there are pollsters. Some use screen questions to identify the likely electorate, some use multiple questions to build indexes that either select likely voters or weight respondents based on their probability of voting. The questions used for this purpose can be about intent to vote, past voting, political interest or knowledge of voting procedures. Some select likely voters using registered voter lists and actual turnout records for the individuals selected from those lists. So simply knowing that the pollster has interviewed 600 or 1,000 "likely voters" is not very informative.
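As a schematic of the index cut-off approach mentioned above (the questions, scoring and turnout target are all invented for illustration):

    # An index cut-off likely voter model, schematically: score each
    # respondent on turnout-related questions, then keep the top slice
    # sized to an assumed turnout rate.
    QUESTIONS = ["intends_to_vote", "voted_last_time",
                 "knows_polling_place", "high_interest"]

    def turnout_score(r):
        """One point per 'likely voter' answer (0-4 scale)."""
        return sum(r[q] for q in QUESTIONS)

    respondents = [
        {"intends_to_vote": 1, "voted_last_time": 1, "knows_polling_place": 1, "high_interest": 1},
        {"intends_to_vote": 1, "voted_last_time": 1, "knows_polling_place": 0, "high_interest": 1},
        {"intends_to_vote": 1, "voted_last_time": 0, "knows_polling_place": 1, "high_interest": 0},
        {"intends_to_vote": 0, "voted_last_time": 0, "knows_polling_place": 0, "high_interest": 0},
    ]

    EXPECTED_TURNOUT = 0.55  # assumed share of adults who will vote
    ranked = sorted(respondents, key=turnout_score, reverse=True)
    likely_voters = ranked[:round(len(ranked) * EXPECTED_TURNOUT)]
    print(f"{len(likely_voters)} of {len(respondents)} kept as likely voters")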

The importance of likely voters around elections is obvious, but it is less widely appreciated that many public polls of "likely voters" routinely report on a wide variety of policy issues even in non-election years. These include the polls from Rasmussen Reports, NPR, George Washington University/Battleground and Democracy Corps. What is a "likely voter" in an odd-numbered year? Those who voted or tend to vote in higher-turnout presidential elections? Those who intend to vote in non-presidential elections? Something else?

One thing I have learned from five years of blogging on this topic is that some pollsters consider their likely voter methods proprietary and fiercely resist disclosure of the details. Some will disagree, but I think there are some characteristics that can be disclosed, much like food ingredients, without giving away the pollster's "secret sauce." These could include the following:

  • In general terms, how are likely voters chosen - by screening? Index cut-off models? Weights? Voter file/vote history selection?
  • What percentage of the adult population does the likely voter sample represent?
  • If questions were used to screen respondents or build an index, what is the text of the questions asked?
  • If voter lists were used, what sort of vote history (in general terms if necessary) defined the likely voters?
  • Perhaps most important, what is the demographic and attitudinal (party, ideology) profile -- weighted and unweighted -- of the likely voter universe?
  • Access to cross-tabulations, especially by party identification.

Regular readers will know that better disclosure of these details is a topic I return to often, but will also remember that obtaining consistent disclosure of such details can be difficult to impossible, depending on the pollster.

How can we help motivate pollsters to disclose more about their methods? I have an idea that I will explain in the third and final installment of this series.

Update: continue reading Part III.

[Note: I will be participating in a panel on Thursday at this week's Netroots Nation conference on "How to Get the Most Out of Polling." This series of posts previews the thoughts I am hoping to summarize on Thursday].


US: Health Care (Gallup 8/6-9)


Gallup
8/6-9/09; 1,010 adults, 4% margin of error
Mode: Live telephone interviews
(Gallup story)

National

Would you advise your member of Congress to vote for or against a healthcare reform bill when they return to Washington in September?

35% Vote for, 36% Vote against
Republicans: 10 / 66
Democrats: 59 / 10


Reifman: The "Public Option"


Prof. Alan Reifman teaches social science research methodology at Texas Tech University, and has begun compiling the results of public opinion polls on the specifics of health care reform at his new blog, Health Care Polls.

Perhaps the most contentious issue among congressional negotiators and interest groups in Washington, DC (and elsewhere) is the so-called public option. The idea is that the government would create a new health-insurance program (modeled to one degree or another on Medicare, the government insurance program for seniors) that people could join. Proponents argue that, by having it compete with private insurers, the public option would help control costs. Opponents, on the other hand, see the public option as yet another government intrusion into an area they feel should be left to the private market.

Where does the public seem to stand? Not surprisingly, the public option has been widely polled, and we shall focus exclusively on it today. As seen in the diagram below, levels of support for the public option vary widely according to different polls, despite the relative consistency of question wording (all the survey items refer in some fashion to the public option being a government health-insurance program that would compete with private insurance companies). The predominant trend, I would say, is that a majority of respondents supports a public option, with five of the eight polls showing between 52 and 66 percent in favor.


Still, though, two other polls show support in the mid-40s and one poll (Rasmussen) has support way down at 35%. What to make of this? Let's start with Rasmussen. Whereas Rasmussen's presidential-election polling has tended to be highly accurate (relative to the actual results), other types of polls from this outfit appear to have had a Republican slant. Here are some examples:

*Whereas most polls tended to have George W. Bush's job-approval ratings during the waning months of his administration in the low-30s or even the 20s, Rasmussen consistently had them around 35%.

*Whereas virtually every pollster other than Rasmussen has shown a majority of voters to prefer the Democrats (at this early point) in next year's U.S. House elections, Rasmussen has been showing the Republicans in the lead (albeit with large percentages undecided).

Polling analysts refer to systematic differences in the results (on the same basic issue) between different survey firms (or survey "houses") as house effects. These may stem from different firms' practices regarding question-wording, sample weighting, etc. On health care reform and other issues, it looks to me as though Rasmussen has a substantial house effect.
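One crude way to see a house effect in numbers like these (all support figures below are invented): compare each firm's average reading to the average across all firms asking about the same issue in the same period. Serious estimates also adjust for timing and question wording.

    # Rough house-effect check: each firm's mean support for the public
    # option minus the cross-firm mean. Figures are invented.
    polls = {
        "Firm A": [62, 65, 66],
        "Firm B": [55, 52, 57],
        "Firm C": [45, 44],
        "Firm D": [35, 37],
    }

    n = sum(len(v) for v in polls.values())
    overall = sum(sum(v) for v in polls.values()) / n
    for firm, values in polls.items():
        effect = sum(values) / len(values) - overall
        print(f"{firm}: {effect:+.1f} points vs. the cross-firm average")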

There's one other aspect of the public-option polling I'd like to point out. As can be seen in the diagram above, I have highlighted in red the words "option" and "offering" in the wording of some of the survey items. It appears that wordings stressing the voluntariness of the public option (i.e., that it is an "option," or something "offered" to the consumer) tend to elicit higher support than wordings that don't highlight voluntariness as much. This is just a hunch. If anyone has other explanations for the large variation in support between the polls, please share them in the comments section.

(Cross-posted to Health Care Polls)


KS: 2010 Sen Primary (SurveyUSA 8/7-9)


SurveyUSA
8/7-9/09; 471 likely Republican voters, 4.6% margin of error
Mode: IVR
(SurveyUSA release)

Kansas

2010 Senate: Republican Primary

Jerry Moran 38%, Todd Tiahrt 32%


NC: Obama Approval, Birth (PPP 8/4-10)


Public Policy Polling (D)
8/4-10/09; 749 likely voters, 3.6% margin of error
Mode: IVR
(PPP release)

North Carolina

Job Approval / Disapproval

Pres. Obama: 46 / 47 (chart)

Do you support or oppose President Obama's health care plan, or do you not have an opinion?

39% Support, 50% Oppose

Do you think Barack Obama was born in the United States?

54% Yes, 26% No


US: Health Care (Rasmussen 8/9-10)


Rasmussen
8/9-10/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)

National

Generally speaking, do you strongly favor, somewhat favor, somewhat oppose or strongly oppose the health care reform plan proposed by President Obama and the congressional Democrats?

42% Favor, 53% Oppose (chart)

If the health care reform plan passes, will the quality of health care get better, worse, or stay about the same?

26% Better, 51% Worse, 17% Same

If the health care reform plan passes, will the cost of health care go up, go down, or stay about the same?

51% Up, 19% Down, 21% Same


US: Gov't Favs (Pew 7/22-26)


Pew Research Center
7/22-26/09; 1,506 adults, 3% margin of error
Mode: Live telephone interviews
(Pew release)

National

Favorable / Unfavorable
Federal Government: 42 / 50
State Government: 50 / 44
Local Government: 60 / 32


More GOP birthers in heavily black states?

Topics: birth certificate, birther, black population, Obama, state polls

Two new polls are out measuring the state-level prevalence of the misperception that President Obama is not a citizen of this country.

Tom Jensen of Public Policy Polling has released a preview of a poll showing that 47% of North Carolina Republicans think President Obama is not a citizen -- an even more disturbing finding than his previous poll, which found that 41% of Virginia Republicans believed in the myth. By contrast, a Deseret News/KSL-TV poll found that only 13% of Utah Republicans -- and 9% of Utahns generally -- said that they believe Obama is not a citizen (via David Weigel).

These results are consistent with the national figures from a Daily Kos/Research 2000 poll, which found that the myth was endorsed by 28% of Republicans (and 11% of Americans) overall and that it was more prevalent in the South.

What explains the state-level differences in birther misperceptions that we observe? The Washington Independent's David Weigel suggests the difference may be linked to a lack of racial polarization in Utah:

So why does rock-solid Republican Utah have fewer "birthers" than the deep South, or even fewer than blue Virginia and North Carolina? A lack of racial polarization has something to do with it. Utah, like the rest of the great plains and western states, got bluer in 2008 despite overall McCain victories and despite having a very, very white population. In Utah, Obama got 327,670 votes in 2008, up from the 241,199 votes that Sen. John Kerry (D-Mass.) got in 2004. For the first time since 1964, the Democratic candidate for president actually carried Salt Lake County. This happened with 31 percent of Utah whites backing Obama. Not even close to a winning margin; but in Louisiana, for example, Obama only won 14 percent of the white vote.

The reason, of course, for the lack of racial polarization in Utah is that it is overwhelmingly white. By contrast, states with large black populations (particularly those in the South) are often much more polarized along racial lines. Following up on my analyses of state-level Obama support by black population, I therefore plotted state-level GOP birther misperceptions against the state-level black population (with the aggregate US total added for context). While it is obviously far too early to draw any firm conclusions, the result is highly suggestive:

[Chart: GOP birther misperceptions plotted against state black population]

Again, the plot is only for illustrative purposes -- it is far too soon to tell if the relationship will hold with data from more states. But the fit to the state data is almost perfectly linear thus far (R² = .99).
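For readers curious how an R² like that is computed: fit a line to the points and measure the share of variance the line explains. The sketch below pairs the GOP percentages quoted above with approximate black population shares; treat the inputs, especially the population figures, as illustrative rather than the exact data behind the plot.

    # R-squared of a linear fit, computed by hand. The birther percentages
    # are those quoted above (UT, VA, NC, national); the black population
    # shares are rough approximations, so the result (~0.98) differs
    # slightly from the .99 reported.
    import numpy as np

    black_pop = np.array([1.3, 19.9, 21.6, 12.9])  # % black (approx.)
    birther   = np.array([13, 41, 47, 28])         # % GOP endorsing the myth

    slope, intercept = np.polyfit(black_pop, birther, 1)
    predicted = slope * black_pop + intercept
    ss_res = ((birther - predicted) ** 2).sum()
    ss_tot = ((birther - birther.mean()) ** 2).sum()
    print(round(1 - ss_res / ss_tot, 2))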

(Cross-posted to brendan-nyhan.com)


NV: 2010 Senate (V&A 7/29-30)


Vitale & Associates (R)*
7/29-30/09; 510 likely voters, 4.4% margin of error
Mode: Live telephone interviews
(Las Vegas Review-Journal Article)

Nevada

2010 Senate
Sue Lowden (R) 48%, Harry Reid (D) 42%

Favorable / Unfavorable
Sen. Ensign: 40 / 46

*(Editor's note: this poll was commissioned by supporters of Sue Lowden)


NJ: Christie 51 Corzine 42 (Quinnipiac 8/5-9)


Quinnipiac
8/5-9/09; 1,167 likely voters, 2.9% margin of error
Mode: Live telephone interviews
(Quinnipiac: Gov, Pres)

New Jersey

Favorable / Unfavorable
Chris Christie (R): 42 / 26
Jon Corzine (D): 37 / 54 (chart)
Chris Daggett (i): 4 / 3

Job Approval / Disapproval
Gov. Corzine: 36 / 58 (chart)
Sen. Lautenberg: 45 / 38 (chart)
Sen. Menendez: 39 / 38 (chart)
Pres. Obama: 56 / 39 (chart)

2009 Governor
Christie 51%, Corzine 42% (chart)
Christie 46%, Corzine 40%, Daggett 7%


US: Political Parties (CNN 7/31-8/3)


CNN / Opinion Research Corporation
7/31-8/3/09; 1,136 adults, 3% margin of error
Mode: Live telephone interviews
(CNN release)

Do you think the country would be better off if the Republicans controlled Congress, or if the Democrats controlled Congress?

34% Republicans, 44% Democrats

Favorable / Unfavorable

The Republican Party: 41 / 50
The Democratic Party: 52 / 39


UT: Huntsman Approval (DJA 8/3-5)


Deseret News / KSL-TV / Dan Jones & Associates
8/3-5/09; 402 adults, 4.9% margin of error
Mode: Live telephone interviews
(Release Huntsman, 2010 Gov)

Utah

Job Approval / Disapproval
Gov. Jon Huntsman: 86 / 8
Lieut. Gov. Gary Herbert: 50 / 9

Herbert Has Announced He Will Run In GOV '10; Assuming Herbert Gets The GOP Nod, Would You Vote For Him?

39% Definitely / Probably
18% Probably not / Definitely not


New Blogger, New Data


As a new blogger on Pollster, I'm looking forward to sharing insights on the craft and new data.

And, on the topic of new data, I have some national polling data from June 12-15 to share regarding focus group participation.

In an n=1,000 national telephone survey, StrategyOne asked: "Have you ever been recruited for and participated in a focus group discussion?"

We really had no idea what percentage of American adults have participated, but now we know.

14%.

Going through the internals, only two things stand out: (1) much heavier participation among $100,000+ households (25%) and (2) much heavier participation among college graduates (25%).

This suggests to me that, if anything, the industry focuses on those with discretionary income more than on the base of the SES pyramid.

I was somewhat surprised to see that there wasn't much difference in participation by gender. Given that women make the vast majority of household purchasing decisions, I expected to see this data skew female.

Among the 14% who have participated in a focus group, 86% responded that yes, they would recommend participation to a friend or family member. Although there is room for improvement here, I'm happily surprised that 86% would recommend it. Moderator techniques have vastly improved, but I do worry that the participant experience could be more pleasant (directions to the facility, check-in, the pre-group sandwich, etc.).

RPM


US: Health Care (Rasmussen 8/7-8)


Rasmussen
8/7-8/09; 1,000 likely voters, 3% margin of error
Mode: IVR
(Rasmussen release)

National

When it comes to health care decisions, who do you fear the most: the federal government or private insurance companies?

51% Federal government
41% Private insurance companies


9/11 and birther misperceptions compared

Topics: 9/11, birth certificate, Bush, conspiracy, misperception, myth, Obama

Since the release of a Daily Kos/Research 2000 poll showing that 28% of Republicans believe President Obama was not born in this country, Chris Matthews, Ann Coulter, Bernie Goldberg, David Paul Kuhn at Real Clear Politics, and other media figures have drawn an equivalence between the Kos poll and a 2007 Rasmussen poll which found that 35% of Democrats believe George W. Bush knew about 9/11 in advance.

The problem, as Media Matters points out, is that the wording of the Rasmussen poll ("Did Bush know about the 9/11 attacks in advance?") almost surely conflates people who believe Bush intentionally allowed an attack to occur with those who think the administration was negligent in its attention to the potential threat from Al Qaeda. Even National Review Online's Jonah Goldberg conceded this point in a column published soon after the poll was released.

However, another, lesser-known poll used less ambiguous wording and found similar results. A July 2006 Scripps Howard/Ohio University poll asked the following question:

There are also accusations being made following the 9/11 terrorist attack. One of these is: People in the federal government either assisted in the 9/11 attacks or took no action to stop the attacks because they wanted the United States to go to war in the Middle East.

When asked how likely this was, 16% of Americans said it was very likely and 20% said it was somewhat likely that people in the Bush administration "assisted in the 9/11 attacks or took no action to stop the attacks because they wanted the United States to go to war in the Middle East."

The partisan breakdown was not provided in the Scripps news report on the poll, but using the weighted data provided by Scripps (see update below), we can directly compare the proportion of incorrect or don't know responses to the 9/11 conspiracy and Obama birth certificate questions:

[Chart: incorrect or don't-know responses by party, 9/11 conspiracy vs. Obama birth certificate]

There is an undeniable symmetry to the misperceptions, which skew in the expected partisan directions in both cases. The total proportion of incorrect or don't know responses among Republicans on Obama's citizenship (58%) is comparable to the proportion of such responses among Democrats on a 9/11 conspiracy (51%).

The pattern of responses by party is similar if we only include those respondents who directly endorsed the misperception in question (i.e. "very likely" to be a 9/11 conspiracy, Obama not a citizen):

[Chart: direct endorsement of each misperception, by party]

Even under this more stringent standard, 23% of Democrats and 28% of Republicans indicated direct support for the misperception of interest.

In short, using a more appropriate comparison poll, the primary conclusion stands -- both parties' bases are disturbingly receptive to wild conspiracy theories.

Update 8/10 1:39 PM: I've updated the response totals and graphics based on data provided to me by Scripps that is weighted by race, age, and gender to match Census figures. Applying these survey weights results in slightly higher estimated levels of misperceptions on the 9/11 conspiracy question than I previously reported. This accounts for the discrepancy between the publicly available Scripps data and their published results that I mentioned in the initial version of this post.

Update 8/15 10:39 PM: I just discovered that the first chart had not been updated to reflect the correct weighted response totals. Apologies -- it has been corrected above.

(Cross-posted at brendan-nyhan.com)


SUSA's Leve: Polling (As We Know It) 'Doomed' by 2012*

Topics: IVR Polls, Jay Leve, National Journal column, SurveyUSA

My National Journal column for the week, on the surprisingly dire view of the future of polling from SurveyUSA's Jay Leve, is now online.

At a panel at last week's Joint Statistical Meetings in Washington DC, Leve delivered a presentation with this surprising conclusion: "If you look at where we are here in 2009" for phone polling, he said, "it's over... this is the end. Something else has got to come along." Intrigued? Hope so. Click through for the details.

*Correction: The original headline and subheading on both the National Journal column and this entry incorrectly stated that Leve forecasts "doom" for all of polling and the polling profession. Leve sees doom for a particular kind of polling, what he calls "barge-in telephone polling" -- in essence, this means telephone surveys as we now know them, both live operator and automated. However, as I hope the last paragraph of the column makes clear, he is optimistic about the future of polling: "And for those who might ask, he adds that he 'doesn't look to the future with despair but with wonder' at the opportunities for the polling profession."


 
