May 17, 2009 - May 23, 2009


Polling Idol "Outliers"

Topics: Outliers Feature

Ron Brownstein and colleagues chart the Southernization of the GOP.

Mark Mellman takes issue with the Gallup abortion poll.

Dalia Sussman says abortion polling is tricky.

Hans Noel sees value in measuring pro-life identity.

Jay Cost assesses bouncy party identification.

Publius responds to Nate Silver's post on the generational patterns in party ID.

Gary Langer smacks STATS.

David Hill says it's Obama's economy after September.

Alex Lundry explores the Pew Research Values study.

Jason Dempsey reports on how soldiers vote.

The National Journal insiders react to Nancy Pelosi's handling of the waterboarding controversy.

Jennifer Agiesta reviews the Best Places to Work index.

Jake Brewer explains the significance of the Data.gov launch; Micah Sifry adds more.

Blogus Operandi (aka Dana Stanley) will be video-blogging the CASRO conference.

The Pew Research Center profiles American Idol followers.

SurveyUSA could have called it for Kris Allen.

VA: 2009 Gov (PPP-5/19-21)

Public Policy Polling (D)
5/19-21/09; 617 likely Democratic primary voters, 3.9% margin of error
Mode: IVR


2009 Governor - Democratic Primary
McAuliffe 29, Deeds 20, Moran 20 (chart)
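As an aside, the sampling margins of error quoted in these listings follow directly from the sample sizes. A minimal sketch, assuming a simple random sample and a 95% confidence level (real polls also adjust for weighting and design effects, so reported figures can differ slightly):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 617 likely primary voters works out to roughly the 3.9% PPP reports
print(round(100 * margin_of_error(617), 1))
```

The same arithmetic reproduces the 4% figure for the 600-voter Research 2000 sample below.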


VA: 2009 Gov (DailyKos-5/18-20)

DailyKos.com (D) / Research 2000
5/18-20/09; 600 likely voters, 4% margin of error
400 likely Democratic primary voters, 5% margin of error
Mode: Live Telephone Interviews


Favorable / Unfavorable
Pres. Barack Obama (D): 56 / 40 (chart)
Bob McDonnell (R): 53 / 33
Creigh Deeds (D): 35 / 36
Terry McAuliffe (D): 37 / 40
Brian Moran (D): 35 / 36

2009 Governor - Democratic Primary
McAuliffe 36, Moran 22, Deeds 13 (chart)

2009 Governor - General Election
McDonnell 42, Moran 35 (chart)
McDonnell 44, McAuliffe 34 (chart)
McDonnell 45, Deeds 32 (chart)


US: 2012 President (PPP-5/14-18)

Public Policy Polling (D)
5/14-18/09; 1,000 registered voters, 3.1% margin of error
Mode: IVR


Obama Job Approval
55% Approve, 38% Disapprove (chart)
Dems: 84 / 11 (chart)
inds: 55 / 36 (chart)
Reps: 19 / 71 (chart)

2012 President - General Election
Obama 53, Gingrich 36
Obama 52, Huckabee 39
Obama 56, Palin 37
Obama 53, Romney 35

Party ID
40% Democrat, 34% Republican, 26% independent (chart)


US: Obama Approval (Harris-5/11-18)

Harris Poll
5/11-18/09; 2,681 adults
Mode: Internet


Obama Job Approval
59% Excellent/Good, 41% Only fair/Poor (chart)
Dems: 88 / 12 (chart)
inds: 57 / 43 (chart)
Reps: 27 / 73 (chart)

State of the Country
44% Right Direction, 56% Wrong Track (chart)


US: Pelosi vs CIA (Gallup-5/19)

Gallup Poll
5/19/09; 997 adults, 3% margin of error
Mode: Live Telephone Interviews


Do you approve or disapprove of how each of the following has handled the matter of interrogation techniques used against terrorism suspects?

    ... Barack Obama
    59% Approve, 29% Disapprove
    Those Following Closely: 53% Approve, 44% Disapprove

    ... The CIA
    52% Approve, 31% Disapprove
    Those Following Closely: 63% Approve, 33% Disapprove

    ... Nancy Pelosi
    31% Approve, 47% Disapprove
    Those Following Closely: 30% Approve, 63% Disapprove


Pew Research Political Values Study

Topics: Pew Research Center

This morning the Pew Research Center released an update of its long-running survey measuring trends in political values and attitudes (overview, questionnaire, complete PDF report). The Pew Center releases so many lengthy reports so frequently that we can easily overlook their more significant efforts, but this report is especially noteworthy because it combines (1) a time series spanning more than 20 years, (2) a large number of questions asked consistently about a very comprehensive list of values and attitudes, (3) huge sample sizes (7,127 interviews this year for the estimates of party ID, 3,013 interviews for the values study) and (4) the very high quality of sampling, data collection and analysis that Pew is known for. So for those who follow public opinion, today's report is truly a must-read.

"The lead," as Pew Research president Andrew Kohut explained in a briefing yesterday, "is that centrism has emerged as a dominant factor in public opinion." The percentage of adults who describe their party identification as independent -- 36% so far in 2009 -- equals its highest level in 70 years. Meanwhile, Republican numbers "have dropped precipitously" since 2004, while the Democratic numbers, though markedly improved during the Bush years, have "fallen off a little bit" since November 2008. At the same time, basic measures on other political attitudes remain stable. From the report:

The latest values survey, conducted March 31-April 21 among 3,013 adults reached on landlines and cell phones, finds that there has been no consistent movement away from conservatism, nor a shift toward liberalism -- despite the decline in Republican identification. In fact, fewer Americans say the government has the fundamental responsibility to provide a safety net than did two years ago, and the share supporting increased help for the needy, even if the debt increases, has declined.

Some of the most powerful data in the report concerns the intersection of long term trends and the attitudes of younger Americans.

Republicans are aging. The average age of Republican identifiers has increased by nearly four years since 1990, while the age composition of Democrats has held steady.


Social conservatism is in decline, especially among the young. The survey shows a continuing long-term decline on five questions used to track "social conservatism" (questions about school boards firing homosexual teachers, banning books with "dangerous ideas" from libraries, returning women to "traditional roles in society," holding "old fashioned values about family and marriage," and the existence of "clear guidelines about what's good or evil"). The remarkable chart below shows both a decline within specific age cohorts over time and progressively lower levels of social conservatism among each younger cohort.


The same generational trends are evident on racial attitudes. Consider one item on interracial dating:


These are just the highlights. The report is long (over one hundred pages) and meaty, but well worth reading in full.

Update:  More commentary on the study from Tom Edsall, Jill Lawrence, Marc Ambinder, Jennifer Agiesta, Chris Bowers and Joe Gandelman.

Omero: Obama & Dems Close National Security Gap

Topics: Cheney , Democracy Corps , Democrats , National Security , Obama , Republicans

Bad news continues for Republicans.  Not only is the national party identification gap widening, as I posted a few weeks ago, but support for progressive views on social issues is also increasing.  Now a recent Democracy Corps survey piles on.  For the first time in Democracy Corps' research, voters are now evenly divided on which party is doing the better job on national security (41% Democrats, 43% Republicans).  In 2003, for example, more than twice as many voters felt Republicans did a better job (54%) than said Democrats were doing the better job (25%).


Further, Democrats were at or better than parity on many other foreign policy issues, such as "improving global respect for America" (+36 Dem advantage), "foreign policy" (+17), "the situation in Iraq" (+10), "immigration" (+2), and "the war on terrorism" (+0). 


Obama's own job approval ratings on national security are even stronger.  Nearly two-thirds (64%) approve of the job Obama is doing on national security (31% disapprove).  These numbers are actually stronger than Obama's overall approval rating (58% approve, 33% disapprove).


Further, despite former Vice-President Cheney's claims, a majority (55%) feels Obama's policies have increased our national security (37% undermine). By contrast, a majority (51%) feels President Bush's policies undermined our national security (44% increased). 


What these data (along with other recent polling) show is how pervasive are recent Democratic gains.  We've moved far beyond "it's the economy, stupid" to the once-unthinkable--movement on gay marriage, immigration, and national security.  Across issues, across demographics, Democrats have consolidated their support.


We'll see if anything changes after today's national security speeches by both the President and Cheney.  But it seems highly unlikely it could reverse this trend. 

The Post and the Virginia Polls

In a column this past Sunday, Washington Post polling director Jon Cohen explains why the Post has not reported on recent surveys "purporting to show the status of" the upcoming Democratic primary contest for governor in Virginia. Their bottom line:

None of the recent polls in the Virginia governor's race meet our current criteria for reporting polls: Two primary ones were by Interactive Voice Response, commonly known as "robopolls," and the third was a partial release from one of the candidates eager to change the campaign story line.

Cohen's piece starts a conversation worth having about the difficulty of polling in low-turnout primaries, about the coverage of "horse race" results and about where journalists should draw the line in reporting on polls conducted by campaigns or of otherwise unknown or questionable quality. For today, I am going to shamelessly gloss over those bigger issues (and shamelessly promote that I'll take up some of them in my about-to-resume NationalJournal.com column next week) and consider instead the narrower issue of the Post's policy against reporting the results of automated polls (also known as interactive voice response, or IVR).

Cohen makes two arguments for not reporting automated surveys:

1) Automated polls take "less care" determining likely voters:

Given the great complexity in determining "likely voters" in the upcoming electoral clash, extra care should be taken to gauge whether people will show up to vote. Unfortunately, polls that use recorded voice prompts typically take less care than polls conducted by live interviewers.

2) Automated polls are impractical for surveys asking more than a half-dozen substantive questions:

People are generally less tolerant of long interviews with computerized voices. One recent Virginia robopoll asked six questions about the governor's race; the other asked four.... Lost in the brevity is much, if any, substance. Neither of the two in Virginia asked about the top issues in the race, what candidate attributes matter most or anything about the economy. Without this essential context, these thin polls offer little more than an uncertain horse race number. In understanding public opinion, "why" voters feel certain ways is crucially important.

Expanding on the second point, Cohen also points out that the requisite brevity of automated polls also leads campaign pollsters to rarely use automated polls. He quotes Joel Benenson and Bill McInturff and cites the poll released by Virginia candidate Brian Moran (conducted by Greenberg, Quinlan Rosner).

Let's take these in reverse order. First, he is right that the automated methodology is inappropriate for longer, in-depth surveys and that a single, automated pre-election poll can typically "offer little more than an uncertain horse race number." So we would want to stick to live interviewer surveys if we want to understand the broader currents of public opinion surrounding an election (the goal of the work done by the Post/ABC poll) or if we want to plot campaign strategy or test campaign messages (the goal of campaign pollsters). The inherent brevity of automated polls is the primary reason that campaign pollsters still rely on traditional, live-interviewer methods for their work.

Similarly, the need for a very short questionnaire on automated polls prevents the use of a classic Gallup-style likely voter model (which requires asking seven or more questions about vote likelihood, past voting and attention paid to the campaign). However, I do not agree that the absence of a Gallup style index means that automated polls take inherently "less care" with likely voter selection than other state-level pre-election surveys. Many pollsters, including most of those that work for political candidates, rely on other techniques (such as screening questions, geographic modeling and stratification and the use of vote history garnered from registered voter lists) to sample and select the likely electorate.
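To make the contrast concrete, here is a rough sketch of how a Gallup-style cutoff model operates. The item names, one-point scoring and turnout-based cutoff are hypothetical simplifications for illustration only; Gallup's actual index items and cutoff procedure differ in detail:

```python
def likely_voter_score(answers):
    """Score a respondent on a set of (hypothetical) vote-likelihood items, one point each."""
    items = ["thought_about_election", "knows_polling_place",
             "voted_in_last_election", "always_votes",
             "following_campaign", "plans_to_vote", "high_vote_intent"]
    return sum(1 for item in items if answers.get(item))

def select_likely_voters(respondents, expected_turnout):
    """Keep the top-scoring share of the sample matching the expected turnout rate."""
    ranked = sorted(respondents, key=likely_voter_score, reverse=True)
    cutoff = int(len(ranked) * expected_turnout)
    return ranked[:cutoff]
```

The screening-and-modeling approaches mentioned above skip this index entirely, qualifying respondents instead through a short screen or a registered-voter list with vote history.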

Do we really think the polls produced by SurveyUSA and PPP in Virginia take "less care" in selecting likely voters than the Mason-Dixon Florida primary poll reported yesterday by the Post's Chris Cillizza or the Quinnipiac New Jersey primary poll reported in Sunday's Post?

And while I will grant that final pre-election poll accuracy is a potentially flawed measure of overall survey quality, it is the best yardstick we have to assess the accuracy of likely voter selection methods. After all, the Gallup-style likely voter models were developed by looking back at how poll estimates compared to election outcomes and tweaking the indexes until they produced the most accurate retrospective results. With each new election, pollsters look back at how their models performed, adjusting them as necessary to improve their future performance. Thus, if a pollster is careless in selecting likely voters, that carelessness ought to produce less accurate estimates on the final poll.

On that score, automated "robo" polls have performed well. As PPP's Tom Jensen noted earlier this week, analyses conducted by the National Council on Public Polls (in 2004), AAPOR's Ad Hoc Committee on Presidential Primary Polling (2008), and the Wall Street Journal's Carl Bialik all found that automated polls performed about as well as live interviewer surveys in terms of their final poll accuracy. To that list I can add two papers presented at last week's AAPOR conference (one by Harvard's Chase Harrison, another by Fairleigh Dickinson University's Krista Jenkins and Peter Woolley) and papers at prior conferences on polls conducted from 2002 to 2006 (by Joel Bloom and Charles Franklin and yours truly). All of these assessed polls conducted in the final weeks or months of the campaign and saw no significant difference between automated and live interviewer polls in terms of their accuracy. So whatever care automated surveys take in selecting likely voters, the horse race estimates they produce have been no worse.
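Accuracy comparisons of this kind typically score each pollster's final survey against the certified result. One common yardstick, sketched here with hypothetical numbers, is the absolute error on the candidate margin:

```python
def margin_error(poll_a, poll_b, actual_a, actual_b):
    """Absolute error on the two-candidate margin, in percentage points."""
    return abs((poll_a - poll_b) - (actual_a - actual_b))

# A hypothetical final poll showing 48-44 in a race decided 51-46
# missed the margin by 1 point:
print(margin_error(48, 44, 51, 46))  # 1
```

Averaging this statistic across a cycle's final polls is what allows automated and live-interviewer surveys to be compared on a common footing.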

One reason may be that respondents provide more accurate reports of their vote intentions to a computer than to a live interviewer. We know that live interviewers can introduce an element of "social discomfort" that leads to underreporting of socially frowned-upon behavior (smoking, drinking, unsafe sex, etc.). Is it such a stretch to add non-voting to that list?

So let me suggest that this argument is really about the value of polls that measure the "horse race" preference -- and little more -- a few weeks or months before an election. Is that something worth reporting? Jon Cohen and ABC News polling director Gary Langer, the two principals of the ABC/Washington Post polling partnership, have been consistently outspoken in saying "no," urging us all to "throttle back on the horse race."

I have no doubt about the sincerity of their commitment to that goal or the obstacles they face putting it into practice, but I wonder if urging abstinence is a workable solution. Political journalists and their political junkie readers are intensely and instinctively interested in the basic assessments that "horse race" numbers provide. Poll references have a way of showing up in stories about the Virginia governor's race, even in a newspaper that is supposedly not reporting on Virginia primary polls. Just yesterday, for example, the Post's print edition debate story reported that the Virginia candidates "sought to stamp a final impression in a race where polls show the majority of voters remain undecided" and Chris Cillizza told us in his online blog that "polling suggests [Terry McAuliffe] leads both [Brian] Moran and state Sen. Creigh Deeds."

So the "polls" show something newsworthy enough to report, but the reporters are not allowed to name or cite the polls they looked at to reach that conclusion. Does that make any sense?

VA: 2009 Gov (SurveyUSA-5/17-19)

SurveyUSA
5/17-19/09; 1,692 registered voters, 2.4% margin of error
502 likely Democratic primary voters, 4.5% margin of error
Mode: IVR


2009 Governor - Democratic Primary
McAuliffe 37, Deeds 26, Moran 22 (chart)

2009 Governor - General Election
McDonnell 46, Deeds 40 (chart)
McDonnell 46, McAuliffe 40 (chart)
McDonnell 47, Moran 37 (chart)


US: National Survey (Moore-5/5-7)

Moore Information (R)
5/5-7/09; 800 registered voters
Mode: Live telephone Interviews


Obama Job Approval
60% Approve, 30% Disapprove (chart)

2010 National Congressional Ballot
Democrat 42, Republican 35 (chart)

Party ID
Democrat 44, Republican 36, independent 20 (chart)

FL: 2010 Sen, Gov (MasonDixon-5/14-18)

Mason-Dixon / Ron Sachs Communications
5/14-18/09; 625 registered voters, 4% margin of error
300 likely Republican primary voters, 300 likely Democratic primary voters (6%)
Mode: Live Telephone Interviews


2010 Senate

Democratic Primary
Meek 26%, Gelber 16%

Republican Primary
Crist 53, Rubio 18

General Election
Crist* 55, Meek 24
Crist 57, Gelber 22

2010 Governor

Republican Primary
McCollum 39, Bronson 12
Bush 64, McCollum 13, Bronson 2

General Election
McCollum 40, Sink 34
Sink 37, Bronson 29
Bush 50, Sink 34

(story, results)

* Typo fixed, :-)

NJ: 2009 Gov (Monmouth-5/13-19)

Monmouth University / Gannett New Jersey
5/13-18/09; 706 registered Republicans, 3.7% margin of error
Mode: Live Telephone Interviews

New Jersey

2009 Governor - Republican Primary
Christie 50%, Lonegan 32%, Merkt 2% (chart)


NJ: 2009 Gov (Quinnipiac-5/12-18)

Quinnipiac University
5/12-18/09; 2,532 registered voters, 2% margin of error
543 likely Republican voters (4.2%), 524 likely Democratic voters (4.3%)
Mode: Live Telephone Interviews

New Jersey

Job Approval / Disapproval
Pres. Obama: 67 / 27 (chart)
Gov. Corzine 38 / 53 (chart)
Sen. Lautenberg: 46 / 37 (chart)
Sen. Menendez: 41 / 32 (chart)

Favorable / Unfavorable
Gov. Corzine: 37 / 51 (chart)
Christie: 33 / 11
Lonegan: 19 / 11

2009 Governor:

Republican Primary
Christie 56, Lonegan 33, Merkt 2 (chart)

Democratic Primary
Gov. Corzine 65, Bacon 5, Boss 5, Bergmanson 4 (trends)

General Election
Christie 45, Gov. Corzine 38 (chart)
Lonegan 42, Gov. Corzine 40 (chart)


MN: 08 Senate Seat (Rasmussen-5/18)

Rasmussen Reports
5/18/09; 500 likely voters, 4.5% margin of error
Mode: Live Telephone Interviews


Obama Job Approval
66% Approve, 33% Disapprove

Gov. Pawlenty Job Approval
53% Approve, 27% Disapprove

Favorable / Unfavorable
Norm Coleman: 44 / 55
Al Franken: 44 / 55

Should Norm Coleman concede now and let Al Franken be seated in the United States Senate?

    54% Yes
    41% No

If the State Supreme Court rules in favor of Al Franken, should Governor Pawlenty sign an election certificate naming Al Franken as the winner?

    67% Yes
    35% No


While I Was Busy at AAPOR (and Since) "Outliers"

Topics: Outliers Feature

Gallup tabulates declines in GOP identification spanning "nearly every major subgroup," prompting reactions from Chris Cillizza, Steve Benen, Jon Singer, Matt Yglesias and much of the blogosphere.

Jon Cohen explains why the Washington Post is not reporting Virginia primary polls.

Lee Sigelman is encouraged.

Tom Jensen is amused.

Ron Brownstein ponders Obama's opportunity to build "a lasting and potentially crushing advantage" with millennials.

Chris Good considers the potential appeal of economic conservatism to millennials.

Gary Langer reviews perceptions of tougher emissions standards.

Glen Bolger sees opportunity for the GOP in Pelosi's problems.

Chris Bowers averages favorable ratings for national Republicans.

Tom Schaller uses the Cook/Polidata "PVI" ratings of congressional districts to preview 2010.

Andrew Gelman plots higher ratings for small state governors.

The National Journal finds unhappiness with Dick Cheney among Republican political insiders.

Steve Singiser becomes a new DailyKos contributing editor, considers the long term trends in urban/rural voting patterns.

John Sides wonders whether observed differences in a willingness to "blame Jews" for the financial crisis would hold up with additional control variables.

Linton Weeks assesses the "outrageous proliferation of surveys in contemporary America" (via Lundry).

US: National Survey (DemCorps-5/10-12)

Democracy Corps (D) /
Greenberg Quinlan Rosner Research (D)
5/10-12/09; 1,000 2008 voters, 3% margin of error
Mode: Live Telephone Interviews


Favorable Ratings
Obama: 60 / 28 (chart)
The Republican Party: 28 / 45
The Democratic Party: 46 / 37

State of the Country
44% Right Direction, 47% Wrong Track (chart)

Obama Job Approval
59% Approve, 33% Disapprove (chart)
Dems: 89 / 5 (chart)
inds: 58 / 34 (chart)
Reps: 23 / 66 (chart)

(Half-Sample) Economy: 61% Approve, 37% Disapprove (chart)

2010 National Congressional Ballot
46% Democrat, 43% Republican (chart)

Party ID
40% Democrat, 32% Republican, 27% independent (chart)


NV: 2010 Senate (MasonDixon-5/12-14)

Las Vegas Review-Journal / Mason-Dixon
5/12-14/09; 625 active voters, 4% margin of error
Mode: Live Telephone Interviews


Favorable / Unfavorable
Pres. Obama (D): 55 / 30
Sen. Reid (D): 38 / 50
Sen. Ensign (R): 53 / 18

If the 2010 election for Nevada's U.S. Senate seat were held today, would you vote to re-elect incumbent Harry Reid, would you consider voting for a challenger, or would you definitely vote to replace Reid?

    35% Re-elect
    17% Consider
    45% Replace

Do you feel President Obama's economic stimulus plan is working, or not?

    37% Yes
    42% No


US: National Survey (CNN-5/14-15)

CNN
5/14-15/09; 1,010 adults, 3% margin of error
Mode: Live Telephone Interviews


Obama Job Approval
62% Approve, 35% Disapprove (chart)

Speaker Pelosi Job Approval
39% Approve, 48% Disapprove

The 1973 Roe versus Wade decision established a woman's constitutional right to an abortion, at least in the first three months of pregnancy. Would you like to see the Supreme Court completely overturn its Roe versus Wade decision, or not?

    30% Yes, overturn
    68% No, not overturn

NY: 2010 Gov (Rasmussen-5/14)

Rasmussen Reports
5/14/09; 500 likely voters, 4.5% margin of error
Mode: IVR

New York State

Obama Job Approval
65% Approve, 32% Disapprove (chart)

Paterson Job Approval
31% Approve, 67% Disapprove (chart)

Favorable / Unfavorable
Gov. Paterson (D): 33 / 65 (chart)
Cuomo (D): 65 / 29
Pataki (R): 51 / 45
Giuliani (R): 57 / 38

2010 Governor
Giuliani 58, Gov. Paterson 30 (chart)
Pataki 47, Gov. Paterson 33
Cuomo 55, Giuliani 37 (chart)
Cuomo 57, Pataki 29


US: Abortion, et al (FOX-5/12-13)

FOX News / Opinion Dynamics
5/12-13/09; 900 registered voters, 3% margin of error
Mode: Live Telephone Interviews


On the issue of abortion, would you say you are more pro-life or more pro-choice?

    49% Pro-life
    43% Pro-choice

Party ID
42% Democrat, 30% Republican, 24% independent (chart)


US: Bush Interrogation (Resurgent-5/11-14)

Resurgent Republic (R) /
Ayres, McHenry (R)
5/11-14/09; 1,000 registered voters, 3.1% margin of error
Mode: Live Telephone Interviews


Party ID
36% Democrat, 30% Republican, 30% independent (chart)

Based on what you have read or heard, would you say harsh interrogation of detainees was justified or not justified?

    53% Justified
    34% Not Justified

    82% Justified, 10% Not Justified

    53% Justified, 31% Not Justified

    27% Justified, 57% Not Justified

Now I would like to read you a list of interrogation techniques that were considered by the Bush Administration and CIA when questioning detainees. For each of the following, would you please tell me if you consider each technique to be justified or not justified for the United States to use when trying to get information that could prevent future terrorist attacks.


Waterboarding, where a detainee is strapped to a flat board, his face covered with a hood and cloth, and water is poured on the cloth in a way that simulates the feeling of being drowned.

    46% Justified
    50% Not Justified

    75% Justified, 20% Not Justified

    49% Justified, 46% Not Justified

    18% Justified, 80% Not Justified

(Memorandum, Presentation, Toplines, Press Release)

About That Gallup Abortion Poll

While I was busy at the AAPOR conference, Gallup released a new survey in which more Americans describe themselves as "pro-life" (51%) than "pro-choice" (42%) for the first time since they started asking the question in 1995. The Gallup release noted that an April survey conducted by the Pew Research Center found a similar "conservative turn" on abortion. Not surprisingly, the release generated considerable commentary around the web -- here is a brief round-up.

On Saturday, our own Charles Franklin took a look at the partisan balance of respondents to the Gallup survey and found that, yes, as some of our commenters had suspected, it included more self-identified Republicans and fewer Democrats than other recent surveys of adults. However, even an "idiosyncratic sampling fluctuation" favoring Republicans on the Gallup poll does not entirely explain away the abortion shift since, as the Gallup release points out, the movement toward the "pro-life" label appears to come entirely from Republicans or independents who lean Republican.

So what to make of the new results? Fortunately, I can point to two excellent summaries by John Sides and Gary Langer that urge caution before making too much of the apparent "conservative turn."

The post by GWU political scientist John Sides is by far the most comprehensive and well worth reading in full.  It summarizes results from other media polls and the National Election Studies, the General Social Survey and other commentary on the web (with complete links to all). His bottom line:

Simply put, the Pew and Gallup findings obscure far more than they reveal. They purport to show shifts in opinion that are not evident in other data. There is no consistent evidence for a "conservative turn," as Pew puts it.

Moreover, both Pew and Gallup employ vague questions that do not easily map onto actual policy debates. Once more precise data are employed, it becomes clear that opinion strongly depends on the circumstances under which the abortion would occur. While people who favor a legal abortion under any of the circumstances mentioned outnumber those who unequivocally oppose abortion by a factor of about 3, most people are in the middle. In the GSS data, 58% favor a legal abortion under some circumstances, but not others.

The more concise post by ABC News polling director Gary Langer makes a similar point about conflicting results:

While Gallup gets a 51-42 percent "pro-life" vs. "pro-choice" division, a CNN poll that asked the same question last month got a 45-49 percent split - slightly more for "pro-choice." Moreover, CNN had it 50-45 percent - more for "pro-life" - back in May 2007. Thus this is a measure on which sentiment moves around a bit, and one on which something like Gallup's current result has been seen before, by CNN two years ago.

This question, in any case, is essentially message testing, not policy testing. The reality is that most people are both "pro-life" and "pro-choice" (both highly charged terms) at once. Public opinion on abortion is complicated, even conflicted, and heavily dependent on circumstances. Most people think it's between a woman and her doctor, but most also object to it on moral grounds; many accept it when it's needed, but not as a casual matter. This has been so for many years.

Langer also cautions strongly against leaping to conclusions about the new results from Gallup:

So what's the best approach to understanding current attitudes on abortion? The first is to steer away from firm conclusions until attitudes resolve themselves in clear and consistent measurement. The second, as ever, is to look not just at the results, but at the questions being asked - and to recognize that public opinion on such a difficult issue is far more complex than a single number can resolve.

Again, both posts are worth reading in full. For even more on this subject, see the initial commentary and subsequent roundup from Ed Kilgore and the abortion trend charts created last week (prior to release of the new Gallup poll) by Nate Silver.

AAPOR 2009 Interview Wrapup

Topics: AAPOR , AAPOR2009

I'm back in DC and rested, or at least, better rested than I was this time yesterday. As a wrap up to my coverage of this year's conference of the American Association for Public Opinion Research (AAPOR), here are links to all of the video interviews (with some final thoughts below):

  • Michael Link, AAPOR's conference chair, provides an introduction.
  • Masahiko Aida on a likely voter validation study conducted by Greenberg, Quinlan Rosner & Democracy Corps.
  • Reg Baker on AAPOR's online panel survey task force.
  • Chris Borick on whether the "incumbent rule" made a comeback in 2008.
  • Elizabeth Dean on RTI International's efforts to recruit and interview respondents in Second Life.
  • Don Dillman on Address Based Sampling (ABS) and its applications for Internet surveys.
  • Paul Donato on the role of traditional survey research in a world of changing electronic measurement.
  • Tom Guterbock on his research on improving political message testing.
  • Lou Harris recounts experiences working for candidate John F. Kennedy in 1960.
  • Sunshine Hillygus on her study of how primary supporters of Hillary Clinton (and those of other unsuccessful presidential candidates) ended up voting in the general election.
  • Scott Keeter on the perils of pre-election polling in 2008.
  • Jon Krosnick on the challenges in assessing the quality of new survey methods.
  • Jennie Lai of the Nielsen Company on their research into the use of mobile devices for time use surveys.
  • Christopher Wlezien on the work he and Bob Erikson have done on the comparative accuracy of polls and political prediction markets.

A few things to remember about AAPOR's annual conference: First, my interviews barely scratched the surface of the breadth and depth of subjects covered and findings presented. I tend to focus more closely on topics related to pre-election polling, but the conference covered a much wider array of methodological issues. For example, there were by my count eight panels on survey non-response, six on web surveys, and five each on address-based sampling and cell phone interviewing. These tend to be highly technical and not easily conveyed via a quick video interview.

Remember also that a lot of the findings presented at the conference -- including some that made their way into these interviews -- are very preliminary. The quality of the "papers" presented at the AAPOR conference varies widely (and I put that word in quotation marks because most are just PowerPoint presentations). Only a handful will eventually make their way into academic journals, and those that do are still at the beginning of a long peer review process that may ultimately lead their authors to different conclusions. Some of that review comes in the form of tough questions at the conference that my interviews cannot capture.

Finally, some words of thanks: First, thanks to all of those I interviewed for making themselves available. Second, a big thanks to Michael Link, the conference chair, and Monica Evans-Lombe and her colleagues at AMP (AAPOR's management company), who helped provide important logistical support. Finally, another big thank you to Lisa Mathias at the Winston Group for creating the animation that appears at the beginning of each video (and her colleague and pollster regular Kristen Soltis for recommending her). Thank you to all!

AAPOR09: Sunshine Hillygus

Topics: AAPOR , AAPOR2009 , Sunshine Hillygus

On Saturday, I also interviewed Sunshine Hillygus, Director of the Program on Survey Research at Harvard University, on her study of how primary supporters of Hillary Clinton (and those of other unsuccessful presidential candidates) ended up voting in the general election, as presented at the annual conference of the American Association for Public Opinion Research (AAPOR).

The following links take you to all the videos, the conference program (pdf and online searchable) and occasional Twitter updates from me and others at the conference.

AAPOR09: Jennie Lai

Topics: AAPOR , AAPOR2009 , Jennie Lai , Nielsen Company

On Saturday, I also interviewed Jennie Lai, research methodologist for The Nielsen Company, on her research into the use of mobile devices for time use surveys as presented at the annual conference of the American Association for Public Opinion Research (AAPOR).

The following links take you to all the videos, the conference program (pdf and online searchable) and occasional Twitter updates from me and others at the conference.

AAPOR09: Don Dillman

Topics: AAPOR , AAPOR2009 , Don Dillman , Internet Polls

On Saturday, I interviewed Washington State University Professor Don Dillman, author of Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method, on his work on the use of Address Based Sampling (ABS) and applications for internet surveys, as presented at the annual conference of the American Association for Public Opinion Research (AAPOR).

The following links take you to all the videos, the conference program (pdf and online searchable) and occasional Twitter updates from me and others at the conference.