February 1, 2009 - February 7, 2009


Romantic Outlier "Outliers"

Topics: Outliers Feature

Gallup retests Limbaugh; Max Blumenthal (no relation) finds more from the Greenberg poll I cited last week.

Rasmussen reports 55% of Republicans want their party to "become more like Sarah Palin" (via TPM).

Gary Langer considers the potential GOP outreach to African Americans from Michael Steele.

Tom Jensen reviews how different favorable rating questions yield different results.

David Hill saw a backlash against negative ads in 2008.

Mark Mellman ponders how long voters will wait before blaming Obama for economic woes.

David Winston considers the CBS survey and the stimulus package.

John Sides links to a Gerber, Huber and Washington [pdf] experiment that links registering as a partisan to a strengthening of partisan attitudes.

Carl Bialik reviews the methods used to rate Super Bowl ads.

Andrew Gelman reminds us of a classic misinterpretation of "not statistically significant."

Webcomic xkcd introduces the romantic outlier (via Gelman):


Is Support for the Stimulus Plan Falling?

Topics: Measurement , Stimulus

Is support for the economic stimulus legislation falling? Three polls provide data on point this week: Gallup says support is flat, while CBS News and Rasmussen Reports say support is declining.


While it is easy to plot the trends in a chart (as above), caution is in order. We have trends from only three pollsters and some have made small changes in question wording. As noted last week, small differences in wording and question format appear to make big differences in the level of support measured.

To make that point more clearly, I have put the full text of the questions used in the chart above into the following table (along with more complete results and links to the source pages).

Question Text (differences italicized) Favor Oppose No Opinion
Gallup (2/4) - As you may know, Congress is considering a new economic stimulus package of at least 800 billion dollars. Do you favor or oppose Congress passing this legislation? 52% 38% 10%
Gallup (1/27) - [Same text as above] 52% 37% 11%
Gallup (1/6-7) - Do you favor or oppose Congress passing a new 775 billion dollar economic stimulus program as soon as possible after Barack Obama takes office? 53% 36% 11%
CBS News (2/2-4) - Would you approve or disapprove of the federal government passing an economic stimulus bill costing more than 800 billion dollars in order to try to help the economy? 51% 39% 10%
CBS News (1/11-15) - Would you approve or disapprove of the federal government passing a 775 billion dollar economic stimulus package in order to try to help the economy? 63% 24% 13%
Rasmussen (2/2-3) - Would you favor or oppose the economic recovery package proposed by Barack Obama and the Congressional Democrats? 37% 43% 20%
Rasmussen (1/27-28) - [Same text as above] 42% 39% 19%
Rasmussen (1/19-20) - Would you favor or oppose the economic recovery package proposed by Barack Obama? 45% 34% 21%

First, notice the big difference between the Rasmussen question and those used by CBS and Gallup. In addition to Rasmussen's consistently higher "don't know" response (discussed here last week, presumably the result of a prompt for "don't know"), CBS and Gallup include an approximate price tag for the stimulus plan in their question. Rasmussen includes no dollar amount. Meanwhile, Rasmussen explicitly associates the stimulus plan with "Barack Obama and the Congressional Democrats," while the other two identify no specific sponsor.

As Nate Silver observed, Rasmussen consistently shows less support for the stimulus plan than other pollsters. My bet is that question text and format explain most of the difference, although variation in sampling (Rasmussen screens for "likely voters" while Gallup and CBS sample all adults) and mode (Rasmussen uses an automated methodology while Gallup and CBS use live interviewers) may also be factors.

Second, notice that both CBS and Gallup changed the dollar amounts, Gallup on their second of three surveys and CBS this week. Perhaps more important: CBS also made a subtle change in their verbiage. The old CBS question references a "775 billion dollar economic stimulus package." The new question calls it an "economic stimulus bill costing more than 800 billion dollars" (emphasis added). They needed to change the amount, but why change the sentence structure? And more important, does adding "costing" make some respondents realize that the proposal is not a government giveaway but rather something they might have to pay for someday? Without a split-form experiment, it is hard to know for certain.

What we do know is that pollsters are getting different results, something that often happens when many respondents lack strongly held views. We also know that most Americans are not closely following the stimulus debate.** Fewer than a third tell CBS (28%) and Gallup (31%) that they are following news about the stimulus debate "very closely," while almost a quarter (23% on CBS and 24% on Gallup) say they are following the issue "not too closely" or "not at all."

We have some evidence of a modest decline in support for the stimulus, although given all the potential noise around slight changes in question wording, we probably need a few more polls to know for certain.
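As a rough check (my illustration, not part of the original post), the CBS drop from 63% to 51% approval can be compared against ordinary sampling noise with a standard two-proportion z-test. The February CBS sample was 864 adults; the January sample size is not listed above, so n = 900 is an assumption for illustration. The drop comfortably exceeds what sampling error alone would produce, which is exactly why the wording change, rather than noise, is the chief suspect.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the change from p1 to p2
    larger than ordinary sampling variation would produce?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# CBS approval of the stimulus: 63% in mid-January, 51% in early February.
# February n = 864 (reported); January n = 900 is assumed for illustration.
z = two_prop_z(0.63, 900, 0.51, 864)
print(round(z, 2))  # |z| > 1.96 means the change exceeds 95% sampling error
```

Even under these assumed sample sizes the statistic is far beyond the conventional 1.96 cutoff, so the real uncertainty here is about question wording and house effects, not random noise.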

Update: Thanks to the reader who caught something I missed. Rasmussen also changed the wording of their stimulus question. In their first test in early January, the question identified only "Barack Obama" as the sponsor. Beginning with their 1/27-28 survey, that changed to "Barack Obama and the Congressional Democrats" (emphasis added). I have corrected the table above to reflect the changed wording.

That change is important: While "congressional Democrats" are earning slightly better ratings than their Republican counterparts, their numbers are nowhere near as positive as Obama's. On the CBS survey, for example, 62% approve of Obama's performance as president, but only 48% rate the "congressional Democrats" favorably. Nancy Pelosi's favorable rating, using the tougher CBS format (which encourages respondents to report when they are unfamiliar), has dropped to just 10% favorable, 30% unfavorable.

So we have yet more reason for skepticism about the apparent decline in support for the stimulus package.

**We have limited data on whether the most attentive Americans differ in their overall support. In a release earlier this week based on a different question, Gallup reported that those "most closely following news about the plan differ little from the overall national average in terms of their attitudes about the plan." However, their table showed that the attentive Americans were more likely to want to reject the stimulus package altogether (27%) than those following it "somewhat closely" (11% reject) or not closely (15% reject). Similarly, without reporting any specific numbers, CBS News tells us that "those following very closely are more likely to oppose the bill than those following just somewhat closely."

VA 2009 Governor (Rasmussen-2/4)

Rasmussen Reports
2/4/09; 500 likely voters, 4.5% margin of error
Mode: IVR


Favorable / Unfavorable
Obama (D): 64 / 35
Gov. Kaine (D): 61 / 37
McDonnell (R): 50 / 18
Deeds (D): 30 / 29
McAuliffe (D): 32 / 39
Moran (D): 32 / 33

'09 Gubernatorial General Election
McDonnell 39, Deeds 30
McDonnell 39, Moran 36
McDonnell 42, McAuliffe 35
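A side note on the margins of error quoted in these poll listings: they follow the standard 95 percent worst-case formula for a proportion from a simple random sample. A minimal sketch (my illustration, not from the listing):

```python
import math

def margin_of_error(n, z=1.96):
    """Maximum 95% sampling error for a proportion,
    using the worst case p = 0.5 (so p*(1-p) = 0.25)."""
    return z * math.sqrt(0.25 / n)

# n = 500 likely voters, as in the Rasmussen Virginia survey above:
print(round(100 * margin_of_error(500), 1))
```

For n = 500 this gives about 4.4 points, close to the 4.5% reported above; pollsters often round up or inflate the figure slightly to allow for design effects. The error applies to each percentage separately, so the gap between two candidates is noisier still.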


US: National Survey (CBS-2/2-4)

CBS News
2/2-4/09; 864 adults, 3% margin of error
Mode: Live Telephone Interviews


Obama Job Approval
62% Approve, 15% Disapprove

Congressional Job Approval
26% Approve, 62% Disapprove

Direction of Country
23% Right Direction, 68% Wrong Track

State of the Economy
5% Good, 94% Bad
5% Getting better, 51% Getting worse, 42% Staying about the same

In your opinion which will do more to get the U.S. out of the current recession: increasing government spending, or reducing taxes?

    16% Increasing government spending
    62% Reducing taxes

Would you approve or disapprove of the federal government passing an economic stimulus bill costing more than 800 billion dollars in order to try to help the economy?

    51% Approve
    39% Disapprove

(story, results)

US: National Survey (USAToday-2/4)

USA TODAY / Gallup
2/4/09; 1,012 adults, 3% margin of error
Mode: Live Telephone Interviews


As you may know, Congress is considering a new economic stimulus package of at least $800 billion. Do you favor or oppose Congress passing this legislation? *

    52% Favor
    38% Oppose

Would you favor or oppose Congress passing a smaller economic stimulus package that would cut the size by up to $200 billion?

    48% Favor
    41% Oppose

(Asked of a half sample) Barack Obama promised in his campaign to change the way Washington works. Based on what you have heard or read about his administration, do you think he has made progress or not made progress in doing this so far?

    50% Yes, made progress
    41% No, has not

(data, analysis, stimulus analysis)

* A recent Rasmussen Reports national survey conducted 2/3-4/09 found 37% favored and 43% opposed "the economic recovery package proposed by Barack Obama and the Congressional Democrats."

Omero: Women's Issues In The Post-Ledbetter Era

Topics: Gender

This week I went to the fem2.0 conference here in DC. It was a great place to hear traditional feminist groups interact with bloggers and younger activists about the future of the women's movement.  In particular, there was widespread excitement over President Obama's recent signing of the Lilly Ledbetter Fair Pay Act. And at the end of the day, there was a spirited discussion about reaching out to a broader audience by demonstrating the relevance of "women's issues" to women (and men) across partisan, ideological, and demographic lines.


Fair pay, access to child care, and flexible work are all popular, across gender


Well before Lilly Ledbetter became a progressive icon, there was already public debate over stronger equal pay laws.  I personally have tested equal pay on my candidate surveys for nearly a decade. It has consistently been a strong topic, and a top message among both men and women, for male candidates and female candidates, and in both Democratic- and Republican-leaning districts. 


National pollsters have tested equal pay as well, and found similar results.  Back in 2000, Gallup found 79% of adults supported "increased enforcement of equal pay laws relating to women in the workplace."  In fact, that question even included a price tag ($27 million), yet still enjoyed wide support. 


Other national polling suggests even more issues of interest.  A 2008 National Women's Law Center poll conducted by Hart Research showed clear majorities of both men and women agreeing "we need to do more to help families balance work and family."  Specifically, majorities of both genders support government funding to expand access to quality affordable child care and early education, and also expanding the Family and Medical Leave Act to make all workers eligible.  Like with the price tag above, when a question includes the phrase "increasing government funding" and still garners majority support from both men and women, then you know you've found a popular issue. 


And even as the economy struggles, the issue of work-family balance remains salient.  A November 2008 Rockefeller Family Fund poll, conducted by Lake Research Partners, showed as many working parents (across gender) worry about work and family responsibilities as worry about the economy.


"Fairness" resonates more than "feminist"


Returning to one of the debates at the fem2.0 conference, how do we reach out to women beyond a traditional definition of the women's movement?  The need to do so is striking.  Identification as a "feminist" continues to decline.  In the early 1990s, Gallup found about a third of adults considered themselves feminists.  Gallup showed that number continuing to slip through 2001, when a quarter identified with the phrase.  A November 2008 poll for The Daily Beast, conducted by Penn, Schoen & Berland, revealed only 14% of adults consider themselves feminists. 


However, despite attitudes toward the word "feminist," support for gender equality and a desire for a strong women's movement are widespread.  The same Daily Beast poll showed both men and women feel women are not treated equally in the workplace.  Two-thirds (68%) of women and half of men in the NWLC survey said there was still a need for a women's movement.   And this 2005 CBS Poll revealed that more women than ever before say the women's movement has made their life better.  That poll also shows identification as a feminist more than doubled when the following definition was given: "someone who believes in social, political, and economic equality of the sexes."


Successful language is optimistic, inclusive


These polls suggest reaching out beyond the women's movement means adopting optimistic language about how far women have come, while acknowledging that more work needs to be done.  This includes a focus on equality and fairness, particularly in the workplace.  More specifically, reminding voters that better access to child care and flexible work schedules are not simply fair, but help people (not just women) juggle workplace and family demands.


Of course, women are concerned about other issues besides child care and flexible work time.  And the women's movement is both diverse, and committed to many other causes.  But we have an opportunity to build on the success of the Fair Pay Act, a popular new President, and a new generation of younger women's activists, to make real progress on these already well-received issues.  Or, as one fem2.0 participant tweeted in response to a presentation from a BlogHer co-founder, "gosh, if everyone saw women's issues as things that can unite us, instead of divide us..."

"Reaction" Polling

In his post on "Hired Gun" Polling, Mark Blumenthal suggests the need for pollsters to "distinguish between questions that measure pre-existing opinions and those that measure reactions."


He makes an important point. Much of what pollsters offer to the world as "public opinion" is in reality hypothetical, based on giving respondents information that many in the general public may not have and then immediately asking respondents for their reaction to that information.


Such results can be illuminating, but pollsters recognize that feeding respondents information means the sample no longer represents the American public and what Mark calls its "pre-existing" opinion. Unfortunately, many pollsters fail to acknowledge the hypothetical nature of such results, and instead treat them as though they represent the current state of the public's views.


The problem with this kind of approach is illustrated in the case that Mark discussed in his post, dealing with the "card check" bill, the proposed Employee Free Choice Act (EFCA) concerning the authorization of unions in the workplace.


The vast majority of Americans, one can reasonably assume, have little to no knowledge of the provisions of the bill. Thus, to measure "public opinion" on the issue, pollsters feel they need to tell respondents what the bill is all about. A Republican pollster explained the bill one way, a Democratic pollster another way, and - to no one's surprise - they ended up with a "public opinion" that reflected their respective party's position on the issue.


While one may argue the relative merits of the questions used by the two pollsters, the main point is that any attempt to inform the public about a major policy proposal is intrinsically biased. Pollsters have to decide what is important among all the various elements of the proposal, and they can often come up with quite different conclusions. This problem applies to public policy pollsters as well, who - we can reasonably assume - have no partisan agenda, but who nevertheless can produce what appear to be partisan results.


Such problems have multiplied with the recent public policy polling on the bailout proposals for Wall Street and for the auto industry, and on the stimulus plan being considered by Congress. Most pollsters assume the public has little specific knowledge of such proposals, and thus pollsters provide respondents specific information to measure the public's (hypothetical) reaction to the proposal.


When CNN described the proposal to bail out the auto industry by characterizing it as "loans in order to prevent them from going into bankruptcy," in exchange for which the companies would produce plans "that show how they would become viable businesses in the long run," it found a 26-point margin in favor (63 percent to 37 percent). But when an ABC/Washington Post poll only a few days earlier had mentioned the word "bailout" in its question and did not refer to plans leading to the companies becoming viable, the poll showed a 13-point majority against the proposal (55 percent to 42 percent).


Again, one can debate the relative merits of the two questions, but the tendency of pollsters is to say that each set of results provides different insights into the dynamics of the public's views on this matter. In short, each provides a picture of potential public reaction to the proposal, if the proposal is framed to the general public in the way each polling organization presented the issue to its respondents.


That distinction is generally lost in the news reports. Each polling organization instead announces its results as though they reflect the current views of the public, over which the polling organization had no influence. But the reality is that the polling organization inevitably shapes its results by the very way it presents the issue to respondents.


As Mark argues, such "reaction" polling has a useful role to play in the public discourse on public opinion. However, it's also important that pollsters make clear that their results do not reflect "pre-existing" opinion (opinion before the polls were conducted - though one might instead use the word "extant" opinion), but rather hypothetical opinion under restricted conditions.


It used to be that newspapers made a formal distinction between "hard news" articles and "analysis" articles - clearly labeling the latter as such. That procedure doesn't seem to be followed these days, but it may be a useful analogous model for pollsters. Perhaps, in a similar way, pollsters can devise a method to formally separate their reports of potential "reaction" public opinion from existing public opinion.


I can envision, for example, one article in a newspaper that describes how many (or how few) people are actually aware of an issue and how many express ambivalence about the matter, while another article could explicitly describe how the public might react if the issue were universally framed in one way or another. Pollsters have made such distinctions sporadically, which is why we know that the public is more likely to support "rescuing" the auto industry than "bailing it out."


Still, a formal and widely accepted method of distinguishing "reaction" questions from those that measure existing opinion needs to be found, if pollsters are to avoid the confusion that occurs when highly reputable polls produce wildly contradictory results.


US: National Survey (USAToday-1/30-2/1)

USA Today / Gallup
1/30 - 2/1/09; 1,027 adults, 3% margin of error
Mode: Live Telephone Interviews


Obama Job Approval
64% Approve, 25% Disapprove

Thinking now about some of the specific actions Barack Obama has taken since he has been in office, would you say you approve or disapprove of each of the following.

    Naming special U.S. envoys to deal with the situations in Afghanistan and Pakistan and the Middle East
    76% Approve, 15% Disapprove

    Restricting former lobbyists from working in the Obama administration, and restricting administration officials from working as lobbyists after leaving the government
    76% Approve, 17% Disapprove

    Ordering the Guantanamo Bay prison for terrorist suspects be closed within a year
    44% Approve, 50% Disapprove

    Allowing U.S. funding for overseas family planning organizations that provide abortions
    35% Approve, 58% Disapprove

Do you think Congress should pass Obama's economic stimulus plan basically as Barack Obama has proposed it, pass it but only after making major changes, or reject this plan?

    38% Pass as proposed
    37% Pass with major changes
    17% Reject plan

(story, analysis, release 1, release 2)

AAPOR Censures Lancet Iraq Casualty Survey

Topics: AAPOR , Disclosure , Gilbert Burnham , Lancet Survey

My colleagues at the American Association for Public Opinion Research (AAPOR) announced yesterday that an eight-month investigation found that Dr. Gilbert Burnham violated AAPOR's Code of Professional Ethics and Practices.

At issue is the controversial study (pdf) of civilian deaths in Iraq conducted by Burnham, a faculty member at the Johns Hopkins Bloomberg School of Public Health, and published in the journal Lancet in 2006. The study was the subject of considerable criticism because it produced a significantly higher estimate of Iraqi deaths than those of the Iraq Body Count project, the United Nations and the Iraqi Ministry of Health (for more details see the reporting by my National Journal colleagues, Slate's Fred Kaplan and the review on Wikipedia).

The AAPOR censure does not involve Burnham's methodology and renders no opinion on the substantive conclusions of the Lancet study. Instead, it focuses entirely on disclosure, or rather on Burnham's failure to disclose "essential facts about his research." From the AAPOR release:

AAPOR holds that researchers must disclose, or make available for public disclosure, the wording of questions and other basic methodological details when survey findings are made public. This disclosure is important so that claims made on the basis of survey research findings can be independently evaluated. Section III of the AAPOR Code states: "Good professional practice imposes the obligation upon all public opinion researchers to include, in any report of research results, or to make available when that report is released, certain essential information about how the research was conducted."

Mary E. Losch, chair of AAPOR's Standards Committee, noted that AAPOR's investigation of Burnham began in March 2008, after receiving a complaint from a member. According to Losch, "AAPOR formally requested on more than one occasion from Dr. Burnham some basic information about his survey including, for example, the wording of the questions he used, instructions and explanations that were provided to respondents, and a summary of the outcomes for all households selected as potential participants in the survey. Dr. Burnham provided only partial information and explicitly refused to provide complete information about the basic elements of his research."

AAPOR's President, Richard A. Kulka, added "When researchers draw important conclusions and make public statements and arguments based on survey research data, then subsequently refuse to answer even basic questions about how their research was conducted, this violates the fundamental standards of science, seriously undermines open public debate on critical issues, and undermines the credibility of all survey and public opinion research. These concerns have been at the foundation of AAPOR's standards and professional code throughout our history, and when these principles have clearly been violated, making the public aware of these violations is an integral part of our mission and values as a professional organization."

The release also notes that Burnham is not a member of the organization. AAPOR has opted in the past to censure non-members over non-disclosure, including pollster Frank Luntz in 1997.   

Update: The Associated Press could not reach Burnham for comment but reports a reaction from the Johns Hopkins Bloomberg School of Public Health:

Tim Parsons, a spokesman for the school said: "We are disappointed AAPOR has chosen to find Dr. Burnham in violation of the organization's ethics code. However, neither Dr. Burnham nor the Johns Hopkins Bloomberg School of Public Health are members of AAPOR."

ABC's Gary Langer adds more background and has an email response from Tony Kirby, the Lancet's press officer: "The Lancet is making no comment."

Update 2: Science blogger Tim Lambert asks a reasonable question: What information, specifically, did Burnham fail to provide to AAPOR?

[AAPOR's statement] seems to be more than a little misleading. Burnham has released the data from the study. This report goes into a fair bit of detail on how the survey was conducted. And here is the survey instrument which includes the "wording of the questions he used".

Lambert emailed AAPOR for comment this morning (and I did the same after reading his blog item). AAPOR's standards chair, Mary Losch, responded with the following message:


I have read your entry and would note that the links you provided did not supply the questionnaire items but rather a simple template (as noted in the heading). The Johns Hopkins report provides only superficial information about methods and significantly more detail would be needed to determine the scientific integrity of those methods -- hence our formal request to Dr. Burnham. The Hopkins website refers to data release but, in fact, no data were provided in response to our formal requests. Included in our request were full sampling information, full protocols regarding household selection, and full case dispositions -- Dr. Burnham explicitly refused to provide that information for review.

We do not provide public reports of the investigations but if there are other specific questions that I could answer, I would be happy to try to do so.

I also asked Losch why AAPOR considers the "template" of questions posted online to be something less than "the wording of the questions used." She replied that they "requested the survey instrument, (including consent information) and it was not provided. The template did not appear to be much beyond an outline and certainly was not the instrument in its entirety."

Interests disclosed: I am an AAPOR member and served on AAPOR's Executive Council for two years, concluding in May of 2008, but had no involvement in the Standards Committee's investigation of the Lancet study.

NJ: 2009 Governor (Quinnipiac-1/29-2/2)

Quinnipiac University
1/29 - 2/2/09; 1,173 registered voters, 2.9% margin of error
385 registered Republicans (5%)
Mode: Live Telephone Interviews

New Jersey

Gov. Corzine Job Approval
41% Approve, 50% Disapprove

Sen. Lautenberg Job Approval
45% Approve, 38% Disapprove

Sen. Menendez Job Approval
42% Approve, 30% Disapprove

Favorable / Unfavorable
Gov. Jon Corzine (D): 41 / 49
Christopher Christie (R): 31 / 7
Steven Lonegan (R): 15 / 6

'09 Gubernatorial Republican Primary
Christie 44%, Lonegan 17%, Levine 5%, Merkt 2%

'09 Gubernatorial General Election
Christie 44%, Corzine 38%
Corzine 42%, Lonegan 36%

Looking ahead to the 2009 election for Governor, do you feel that Jon Corzine deserves to be reelected, or do you feel that he does not deserve to be reelected?

    33% Yes/Deserves
    54% No/Does not


OH: 2010 Senate (Quinnipiac-1/29-2/2)

Quinnipiac University
1/29 - 2/2/09; 1,127 registered voters, 2.9% margin of error
492 registered Democrats (4.4%), 374 registered Republicans (5.1%)
Mode: Live Telephone Interviews


Sen. Voinovich Job Approval
55% Approve, 29% Disapprove

Sen. Brown Job Approval
51% Approve, 22% Disapprove

Favorable / Unfavorable
Jennifer Brunner (D): 34 / 10
Lee Fisher (D): 33 / 10
Rob Portman (R): 21 / 6
Tim Ryan (D): 12 / 3
Mary Taylor (R): 17 / 5

'10 Senate Republican Primary
Portman 44%, Taylor 11%

'10 Senate Democratic Primary
Fisher 18%, Brunner 16%, Ryan 14%

'10 Senate General Election
Fisher 42%, Portman 27%
Fisher 41%, Taylor 27%

Brunner 38%, Portman 28%
Brunner 28%, Taylor 26%


"Hired Gun" Polling and Card Check (EFCA)

Topics: Card Check , EFCA , Employee Free Choice Act , Guy Molyneux , John McLaughlin , Marc Ambinder

A week or so ago, my colleague Marc Ambinder (anchor of the new Atlantic Politics Channel), did a series of blog posts on some privately commissioned polling on the subject of the so-called "card check" bill, or more formally, the proposed Employee Free Choice Act (EFCA). It is a great example of two big lessons we ought to remember when considering this sort of "hired gun" polling data.

Ambinder started with a post that contrasts questions from two pollsters working for opposite sides of the Card Check debate. First up was a question asked by Peter D. Hart Research Associates on behalf of the AFL-CIO:

[Do you favor or oppose legislation that] Allows employees to have a union once a majority of employees in a workplace sign authorization cards indicating they want to form a union. 75% favor.

Next, Ambinder presented a different question asked by pollster John McLaughlin on behalf of the anti-EFCA organization Coalition for a Democratic Workplace (CDW):

There is a bill in Congress called the Employee Free Choice Act which would effectively replace a federally supervised secret ballot election with a process that requires a majority of workers to simply sign a card to authorize organizing a union and the workers' signatures would be made public to their employer, the union organizers and their co-workers. Do you support or oppose Congress passing this legislation? 15% favor, 74% oppose.

Ambinder followed up with a three-part exchange between anti-EFCA consultant Mike Murphy and pro-EFCA pollster Guy Molyneux. The short version: Our poll was "more accurate," no, your poll was "outrageously biased," no wait, let's let Mikey try it**: let's test Ambinder's language.

Who was right? Which question is best (or more "accurate" or less "biased")? My answer: Neither. Or both.

The big challenge with this sort of issue, as Ambinder puts it in his first post, is that most Americans "don't know what EFCA is, or what 'card check' would mean." So any question that begins by describing the provisions in the bill does not test pre-existing opinions for the vast majority of Americans. Instead, such questions test the way Americans react to new information. In that sense, they provide a gauge of "public opinion" only in a very hypothetical way. They tell us what public opinion might be if all that Americans knew was "[fill in the blank]."

These questions can be useful because attitudes about public issues can change as they get a high profile debate in Congress. Assuming that EFCA comes to a vote in the coming months, more Americans will learn about it and form new impressions. It can be helpful to try to preview how they might react and how different "framings" of the debate can shape reactions. Pollsters like McLaughlin and Molyneux get hired by clients who want to do just that, frame the debate and help sway public opinion to their point of view (and full disclosure to those just tuning in: I earned my living for many years as just such a pollster).

The warning label that ought to go on results is that they can be, as Molyneux argues, "very sensitive to question wording" without a single "'correct' way to ask the question." The inevitable arguments about which question is most "right" usually mirror the larger substantive debate. So, if you are a partisan on EFCA and you have little trouble choosing the "correct" version of the questions above, you should also know this: Your ability to see "the truth" on this issue does not lead to the conclusion that "public opinion" is on your side. It might be someday, but only if your "framing" wins the day and shapes the way most Americans (and not just policy-makers and very well informed Americans) learn about the issue.

In this case, the side-by-side comparison tells us -- indirectly -- that most Americans are unfamiliar with EFCA and that very few have real, pre-existing opinions about it. We also learn that both sides have arguments that are potentially very persuasive. So these questions are useful, even if they test something hypothetical.

Two big lessons: First, we need to be especially cautious about interpreting interest-group-sponsored results when we have only one side's poll and cannot do a side-by-side examination of surveys sponsored by opposing interests.

Second and even more important, we need to better distinguish between questions that measure pre-existing opinions and those that measure reactions.

I have more to say about that second lesson, and hope to come back to the topic later this week.

**The explanatory link for those of you too young to get the reference.

VA: 2009 Governor (PPP-1/30-2/1)

Public Policy Polling (D)
1/30-2/1/09; 998 likely Democratic primary voters, margin of error ±3.19%
Mode: IVR


Favorable / Unfavorable
Creigh Deeds (D): 23 / 11
Terry McAuliffe (D): 30 / 23
Brian Moran (D): 34 / 10

'09 Gubernatorial Democratic Primary
McAuliffe 18, Moran 18, Deeds 11


We Have A Lot to Learn About the Cell Phone Only Population

Frequent Pollster.com readers will know that the Cell Phone Only (CPO) population is a subject I blogged on frequently during the campaign. One of the reasons for this interest is that there is still a lot we don't know about CPOs (I like a good mystery). In this post, I want to present a little analysis that demonstrates just how much we still have to learn about CPOs.

CPOs wouldn't be nearly as fascinating if their distinctiveness could be easily explained by their age or other basic demographic or political factors. One view is that CPOs are just younger, more urban, and more mobile, and that once you account for these factors, CPOs behave pretty much like their landline counterparts. Last year, however, a Pew report suggested the differences might not be that simple, finding that weighting alone might not be enough to account for CPOs.

To explain why this is the case, I used some of the data Pew relied on to publish that report (in this case, Pew's June 2008 Voter Attitudes Survey). In this survey, Pew interviewed respondents both on landline phones and on cell phones (see more on the methodology here). What I wanted to know was whether CPOs are really that different from landline users once you account for demographic and political factors.

To answer this question, I estimated a multivariate statistical model that would allow me to take into account any demographic or political information that the Pew survey collected that I could imagine would explain the difference between CPOs and the rest of the public. I controlled for age, gender, education, ethnicity, race, income, home ownership, marital status, and whether the respondent lived in an urban area. In addition to these demographic factors, I also accounted for the partisan affiliation of the respondent. If these factors explained the difference between CPOs and landline respondents, then we wouldn't expect to find any measurable difference between these groups once we've controlled for them.
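To make the logic of "controlling for" these factors concrete, here is a minimal sketch in Python. This is not the author's actual model or Pew's data: the respondents, covariates, and coefficients are all synthetic and invented for illustration. It fits a simple logistic regression by gradient ascent and then compares the predicted vote-preference probabilities for two hypothetical respondents who are identical on every covariate except CPO status, which is exactly the comparison the analysis above describes.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic respondent: features = [intercept, age (scaled 0-1), urban, democrat, cpo]
def make_respondent():
    age = random.uniform(18, 80)
    urban = 1.0 if random.random() < 0.4 else 0.0
    dem = 1.0 if random.random() < 0.45 else 0.0
    # CPO status correlates with youth and urban residence (assumed, for illustration)
    p_cpo = sigmoid(-2.0 + 3.0 * (80 - age) / 62 + 0.5 * urban)
    cpo = 1.0 if random.random() < p_cpo else 0.0
    x = [1.0, (age - 18) / 62, urban, dem, cpo]
    # "True" data-generating model: CPO has an effect beyond the demographics
    p_obama = sigmoid(-0.5 - 0.8 * x[1] + 0.3 * urban + 2.0 * dem + 0.7 * cpo)
    y = 1.0 if random.random() < p_obama else 0.0
    return x, y

data = [make_respondent() for _ in range(2000)]

# Fit logistic regression by plain gradient ascent on the log-likelihood
beta = [0.0] * 5
lr = 0.5
for _ in range(500):
    grad = [0.0] * 5
    for x, y in data:
        err = y - sigmoid(sum(b * xi for b, xi in zip(beta, x)))
        for j in range(5):
            grad[j] += err * x[j]
    for j in range(5):
        beta[j] += lr * grad[j] / len(data)

# Two hypothetical respondents, identical except for CPO status
base = [1.0, (30 - 18) / 62, 1.0, 1.0]  # a 30-year-old urban Democrat
p_landline = sigmoid(sum(b * xi for b, xi in zip(beta, base + [0.0])))
p_cpo = sigmoid(sum(b * xi for b, xi in zip(beta, base + [1.0])))
print(f"P(prefer Obama | landline) = {p_landline:.2f}")
print(f"P(prefer Obama | CPO)      = {p_cpo:.2f}")
```

If the fitted CPO coefficient remains positive after all the demographic and political controls are in the model, the gap between the two predicted probabilities is the "unexplained" CPO distinctiveness the post is describing. (The actual analysis used a multivariate model on real Pew survey data and a three-way outcome: Obama, McCain, or neither.)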

If you enjoy reading output from statistical models, the full model output is available via the "View image" link. However, the key information is presented in the chart below. The chart shows the probability of a landline or CPO respondent registering a vote preference for Obama, McCain, or neither candidate after controlling for the demographic and political factors.


This chart indicates that even after controlling for all of the demographic and political factors listed above, CPOs still had distinctive vote preferences relative to those with landlines in their homes. The model estimates that a landline respondent had a 49 percent probability of preferring Obama in June, compared with 65 percent for a CPO respondent. These differences can't be explained as a result of CPOs being younger, or because they were single, or because they lived in urban areas, or even because they were more likely to be Democrats. Essentially, if you had two people who were the same on all of the factors mentioned above, the one without a landline would still be more likely to support Obama than the one with a landline.

If it isn't age, income, education, or even mobility, then what makes CPOs distinctive from those with landlines? Is it something more inherent about embracing a CPO lifestyle? Perhaps it is an outlook on life that makes CPOs more willing to cast off traditions and venture into something new? Perhaps a higher tolerance for taking risks and embracing change? Or something else entirely? We still don't have a good handle on what makes CPOs so distinctive, which is why this makes for such a great mystery.

(Note: I'm using this Pew data to examine another reason why CPOs may cause pollsters so much trouble--because it may be more difficult to pin down whether they are actually going to vote. I'll present some analysis on that topic in my next post.)