Pollster.com

April 5, 2009 - April 11, 2009

 

VA: 2009 Gov (DailyKos-4/6-8)


DailyKos.com (D) / Research 2000
4/6-8/09; 600 likely voters, 4% margin of error
Mode: Live Telephone Interviews

Virginia

2009 Governor - Democratic Primary
Moran 24, McAuliffe 19, Deeds 16

2009 Governor - General Election
McDonnell 37, Moran 36
McDonnell 40, McAuliffe 33
McDonnell 38, Deeds 31

(source)
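As a side note on the toplines above: the quoted 4 percent margin of error follows directly from the 600-person sample. Here is a minimal sketch of the standard 95 percent formula for a proportion -- my own illustration, not part of the release, and published figures are often rounded or adjusted upward for design effects:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion;
    p = 0.5 gives the maximum (most conservative) value."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(600):.1%}")    # ~4.0% for 600 likely voters
print(f"{margin_of_error(1000):.1%}")   # ~3.1% for a 1,000-person sample
```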


Polarization Meme "Outliers"

Topics: Outliers Feature

Karl Rove cites numbers from the Pew Research Center to argue that Barack Obama "has done more to [quickly] polarize America" than any president "in the past 40 years."

Michael Gerson uses the same numbers to call Obama "the most polarizing new president in recent times."

Michael Dimock, Pew's associate director, responds via Greg Sargent:

"It's unfair to say that Obama has caused this divisiveness or to say that he is a polarizing president," Dimock said. He claimed that this phenomenon is driven by long-term trends, uncommon Dem enthusiasm, and the Republican tendency to be more hostile to opposing presidents than Democrats.

CBS's Sarah Dutton and Gallup's Jeff Jones share historical "polarization" data collected by their organizations.

Andrew Sullivan and Charlie Cook are watching the independents (and remember, we now have separate charts that break out Obama's approval rating among Democrats, Republicans and independents).

More from Nate Silver, Chris Cillizza, Amy Walter, Jay Cost, Peter Wehner, Eric Kleefeld, Chuck Todd et al., DemfromCT, Steve Benen, Glen Bolger and Ed Kilgore.

And in other news...

Mark Mellman adds his perspective on the AAPOR report on what went wrong in New Hampshire.

Nate Silver and Michael Goldfarb debate the future of public opinion on gay marriage; Andrew Gelman has more.

Steve Benen, Chris Good, Nate Silver and John Judis react to a Rasmussen Reports result on Americans' views on capitalism.

Jim Warren surveys political scientists about the state of political polling.

Jennifer Agiesta takes the Post's Doug Feaver to task for calling blog comments "a pretty good political survey."

Gary Langer contrasts polls and online ballots on the subject of marijuana legalization.

Anna Greenberg releases a multi-mode survey of young adults on the economy (commissioned by Qvisory, via First Read).

Patrick Ruffini questions whether NY-20 is a Republican stronghold.

Eric Kleefeld reports county-by-county breakdowns of the NY-20 recount.

M. Zuhdi Jasser takes exception to Washington Post/ABC News results on American perceptions of Muslims.

Carl Bialik considers the shortcomings of measuring an online activity with an online panel survey (more here).

John Sides shares a tantalizing graph on the 2008 campaign from the not yet released Annenberg tracking data.

Anand Rajaraman says "more data usually beats better algorithms" (via Lundry).

Ohio lets voters draw their own congressional districts (via John Sides).


And...Happy Passover everyone (and Easter too)!!

(via Lee Sigelman)



Re: AAPOR's Report - 2008 vs 1948


Via email, Kathy Frankovic, former Director of Surveys at CBS News, sends this comment about my post yesterday on the disappointing pollster cooperation with AAPOR's ad hoc committee report on the New Hampshire primary polling mishap:

There is a big difference between 1948 and where we are today in the field of survey research. In 1948, there was an accepted academic standard for survey research – probability sampling – one that was not used by the public pollsters. That – in addition to the lack of polling close to the election – was an obvious conclusion the SSRC researchers could make to explain what went wrong. There is no such obvious methodological improvement available as an explanation for the 2008 problems in NH. It’s not cell phones, it’s not respondent selection, and it’s not ballot order. Timing (and the possibilities of last minute changes) may once again be everything. And while more organizations should have disclosed more, I think that it’s unlikely that more data would have told us anything more definitive than we learned from the report as written.

For what it's worth, the report itself specifically referenced the contrast with 1948:

The work of the committee, and hence this report, has been delayed by a slow response from many of the pollsters who collected data from the four states in which the committee focused its efforts – New Hampshire, South Carolina, Wisconsin, and California. This is quite a different situation than after the 1948 general election, when there were fewer firms engaged in public polling, the threat to the future of the industry seemed to be greater, and the polling firms were fully cooperative. In 2008, many of the firms that polled in New Hampshire had studies in the field for primaries that occurred right after that. Today, there are well-publicized standards for disclosure of information about how polls are conducted. AAPOR, an organization of individuals engaged in public opinion research; the National Council on Public Polls (NCPP), an organization of organizations that conduct public opinion research; and the Council of American Survey Research Organizations (CASRO), also an organization of organizations, have all promulgated standards of disclosure. Despite the norms, at the time this report was finalized, one-fifth of the firms from which information was requested had not provided it. For each of these four firms, we were able to retrieve some of the requested information through Internet searches, but this was incomplete at best. If additional information is received after the report’s release, the database at the Roper Center will be updated.

So if and when the pollsters who did not share raw, respondent-level data provide it to AAPOR, it will be posted to the Roper Center's listing (which is open to anyone, not limited to member institutions). I am told that Roper will also soon add PDF reproductions of all the responses received from pollsters, not just those from pollsters that shared respondent-level data.


AAPOR's Report: Why 2008 Was Not 1948

Topics: AAPOR , Disclosure , Lee Miringoff , Marist Poll , Michael Traugott , Nancy Mathiowetz , NCPP , New Hampshire , Zogby

As someone who writes about polling methodology, I consider last week's report from the American Association for Public Opinion Research (AAPOR) on the mishaps in New Hampshire and other primary election polling last year manna from heaven. Republican pollster David Hill was right to call it "the best systematic analysis of what works and what doesn't for pollsters" in decades. The new findings and data on so many aspects of polling arcana, from "call backs" to automated-IVR polls, are invaluable, especially given that the AAPOR researchers lacked access to all of the public polling data from New Hampshire or the three other states they focused on.

But that lack of information was also important. Valuable as it is, the report was also hindered by a troubling lack of disclosure and cooperation from many of the organizations that played a part in what even prominent pollsters described as an unprecedented "fiasco" and "one of the most significant miscues in modern polling history."

Last week, the Wall Street Journal's Carl Bialik summed up the problem:

Just seven of 21 polling firms contacted over a year ago by the American Association for Public Opinion Research for the New Hampshire postmortem provided information that went beyond minimal disclosure -- such as data about the interviewers and about each respondent.

Last year, two days after the New Hampshire primary, I wrote a column reminding my colleagues of the investigation that followed the 1948 polling debacle that created the infamous "Dewey Defeats Truman" headline (emphasis added):

[A] week after the [1948] election, with the cooperation of virtually every prominent public pollster, the independent Social Science Research Council (SSRC) convened a panel of academics to assess the pollsters' methods. After "an intensive review carried through within the span of five weeks," their Committee on the Analysis of Pre-election Polls and Forecasts issued a report that would ultimately reshape public opinion polling as we know it.

[...]

[SSRC Committee] members moved quickly, as their report explains, out of a sense that "extended controversy regarding the pre-election polls ... might have extensive repercussions upon all types of opinion and attitude studies."

The American Association for Public Opinion Research "commended" the SSRC effort and urged its member organizations to cooperate. "The major polling organizations," most of which were commercial market researchers competing against each other for business, "promptly agreed to cooperate fully, opened their files and made their staffs available for interrogation and discussion."

But that was 1948. Things were different last year.

On January 15, 2008, AAPOR announced it would form an ad-hoc committee to evaluate the primary pre-election polling in New Hampshire. Two weeks later, it announced the names of the eleven committee members. They convened soon thereafter and decided to broaden the investigation to include the primary pre-election polls conducted in South Carolina, California and Wisconsin (p. 16 of the report explains why). On March 4, 2008, AAPOR President Nancy Mathiowetz sent a six-page request to the 21 organizations that had released public polls in the four states, including 11 that had polled in New Hampshire.

The request (reproduced on pp. 83-88 of the report) had two categories: "(1) information that is part of the AAPOR Standards for Minimal Disclosure and (2) information or data that goes beyond the minimal disclosure requirement." The first category included items typically disclosed (such as survey dates, sample sizes and the margin of error), some not always available (including exact wording of questions asked and weighting procedures) and some details that most pollsters rarely release (such as response rates). The second category of information beyond minimal disclosure amounted to the 2008 equivalent of the "opening of files" from 1948. Specifically, they asked for "individual level data for all individuals contacted and interviewed, records about the disposition of all numbers dialed, and information about the characteristics of interviewers."

The Committee had originally hoped to complete its report in time for AAPOR's annual meeting in May 2008, but by then, as committee chair Michael Traugott reported at the time, only five firms had responded to the request (the first to respond, Mathiowetz tells me, was SurveyUSA, which provided complete electronic data files for the two states it polled on April 8, 2008). In fairness, many of the pollsters had their hands full with surveys in the ongoing primary battle between Barack Obama and Hillary Clinton. Nevertheless, when I interviewed Traugott in May, he still hoped to complete the report in time for the conventions in August, but as cooperation lagged, the schedule slipped once again.

By late November 2008, with the elections completed, some firms had still not responded with answers to even the "minimal disclosure" questions asked back in March. At that point, Mathiowetz tells me, she filed a formal complaint with AAPOR's standards committee, alleging violations of AAPOR's code of ethics. Since the standards evaluation committee has not yet completed its work, and since that committee is bound to keep the specifics of such complaints confidential, Mathiowetz could not provide further details. However, she did say that some pollsters supplied information subsequent to her complaint that the Ad Hoc Committee included in last week's report.

So now that the report is out, let's use the information it provided to sort the pollsters into three categories:

The best: Seven organizations -- CBS News/New York Times, the Field Poll, Gallup/USA Today, Opinion Dynamics/Fox News, the Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR -- provided complete "micro-data" on every interview conducted. These organizations lived up to the spirit of the 1948 report, opening up their (electronic) files and, as far as I can tell, answering every question the AAPOR committee asked. They deserve our praise and thanks.

The worst: Three organizations -- Clemson University, Ron Lester & Associates/Ebony/Jet and StrategicVision -- never responded.

The rest in the middle: Eleven organizations -- American Research Group (ARG), Datamar, LA Times/CNN/Politico, Marist College, Mason-Dixon/McClatchy/MSNBC, Public Policy Polling (PPP), Rasmussen Reports, Research 2000/Concord Monitor, RKM/Franklin Pierce/WBZ, Suffolk University/WHDH and Zogby/Reuters/C-Span -- fell somewhere in the middle, providing answers to the "minimal disclosure" questions but no more.   

The best deserve our praise, while those that evaded all disclosure deserve our scorn. But what can we say about the pollsters in the middle?

First, remember that their responses met only the "minimal disclosure" requirements of AAPOR's code of ethics. They provided the "essential information" that pollsters should include, according to AAPOR's ethical code, "in any report of research results" or at least "make available when that report is released." In other words, the middle group provided information that pollsters should always put into the public domain along with their results, not months later or only upon request following an unprecedented polling failure.

Second, consider the way that minimal cooperation hindered the committee's efforts to explain what happened in New Hampshire, especially on the question of whether a late shift to Senator Clinton explained some of the polling error there. That theory is popular among pollsters (yours truly is no exception), partly because of the evidence -- most polls finished interviewing on the Sunday before the primary and thus missed reactions to Clinton's widely viewed "emotional" statement the next day -- and partly because the theory is easier for pollsters to accept, as it lets other aspects of methodology off the hook. The problem wasn't the methodology, the theory goes, just a "snapshot" taken too soon.

While the committee found evidence that several other factors influenced the polling errors in New Hampshire, they concluded that late decisions "may have contributed significantly." They based this conclusion mostly on evidence from two panel-back surveys -- conducted by CBS News and Gallup -- that measured vote preferences for the same respondents at two distinct times. The Gallup follow-up survey was especially helpful, since it recontacted respondents from their final poll for a second interview conducted after the primary.
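To make concrete what a panel-back design buys the analyst, here is a minimal sketch of the kind of transition table that respondent-level panel data supports. The records and column names are hypothetical, not the actual CBS News or Gallup files:

```python
import pandas as pd

# Hypothetical wave-1 (pre-primary) and wave-2 (post-primary) preferences
# for the same respondents; stand-ins, not the actual CBS or Gallup data.
panel = pd.DataFrame({
    "pre":  ["Obama", "Obama", "Clinton", "Undecided", "Clinton", "Obama"],
    "post": ["Obama", "Clinton", "Clinton", "Clinton", "Clinton", "Obama"],
})

# Row-normalized transition table: what share of each pre-primary group
# ended up where -- direct evidence for (or against) a late shift.
print(pd.crosstab(panel["pre"], panel["post"], normalize="index"))
```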

Although the evidence suggested that a late shift contributed to the problem, the committee hedged on this point because, as they put it, "we lack the data for proper evaluation." Did more data exist that could shed light on this issue? Absolutely.

First, four pollsters continued to interview on Monday. ARG, Rasmussen Reports, Suffolk University and Zogby collectively interviewed approximately 1,500 New Hampshire voters on Monday, but the publicly released numbers combined those interviews with others conducted on Saturday and Sunday. The shifts these pollsters reported in their final releases were inconsistent, but none of the four ever released tabulations that broke out results by day of the week, and all four refused to provide respondent-level data to the AAPOR committee.

That omission is more than just a missed opportunity. It also leaves open the possibility that at least one pollster -- Zogby -- was less than honest about what his data said about the trend in the closing hours of the New Hampshire campaign. See my post from January 2008 for the complete details, but the last few days of Zogby's tracking numbers simply do not correspond with his characterization of that data the day after the primary. Full cooperation with the AAPOR committee would have resolved the mystery. Zogby's failure to cooperate should leave us asking more troubling questions.

But it was not just the "outlaw pollsters," to quote David Hill, that failed to share important data with the AAPOR committee. Consider the Marist Poll, produced by the polling institute at New York's Marist College. Marist is not a typical pollster. Its directors, Lee Miringoff and Barbara Carvalho, are long-time AAPOR members. More important, Miringoff is a former president of the National Council on Public Polls (NCPP), and both Miringoff and Carvalho currently serve on its board of trustees. NCPP is a group of media pollsters that has its own, slightly less stringent disclosure guidelines that nonetheless encourage members to "release raw datasets (ASCII, SPSS, CSV format) for any publicly released survey results."

The day after the New Hampshire primary, Marist reported its theories about what went wrong and promised "to re-contact in the next few days the voters we spoke with over the weekend to glean whatever additional insights we can." Seven weeks later, Miringoff participated in a forum on "What Happened in New Hampshire" sponsored by AAPOR's New York chapter and shared some preliminary findings from the re-contact study. "Our data," he said, "suggest there was some kind of late shift to Hillary Clinton among women."

Given the importance of that finding, the academic affiliation of the Marist Poll, Miringoff's role as a leader in NCPP and that organization's stated commitment to disclosure, you might think that Marist would be first in line to share its raw data with the respected scholars on the AAPOR committee.

You might think that, but you would be wrong.

As of this writing, the Marist Institute has yet to share raw, respondent-level data for either its final New Hampshire poll or the follow-up study. In fact, the Marist Institute has not yet provided any results of the recontact study to Professor Traugott or the AAPOR committee -- not a memo, not a filled-in questionnaire, not a PowerPoint presentation...nothing.

I was surprised by their failure to share raw data, so I emailed Miringoff for comment. His answer:

First, we did provide information on disclosure as required by AAPOR and I spoke, along with Frank Newport, on the NH primary results at a meeting of NYAAPOR. It was a great turnout and provided an opportunity to discuss the data and issues.

Unfortunately, the "information on disclosure" they provided was, again by AAPOR standards, the minimum that any researcher ought to include in any publicly released report. To be fair, Marist had already included much of that "minimal disclosure" information in their original release. According to Nancy Mathiowetz, however, Marist did not respond to her requests -- filling in information missing from the public report such as the order of questions, a description of their weighting procedure and response rate data -- until November 17, 2008. And that transmission said nothing at all about the follow-up study.

Miringoff continued:

Second, we did conduct a post-primary follow-up survey to our original pre-primary poll. We think both these datasets should be analyzed in tandem. We are preparing them to be included at the Roper Center along with all of our pre-primary and pre-election polling from 2008 for anyone to review.

What's the hurry?

I am not sure what is more depressing: that a group of "outlaw pollsters" can flout the standards of the profession with little or no fear of recrimination, or that a former president of the NCPP can so blithely dismiss repeated requests from AAPOR's president with little more than a "what, me worry?" shrug. Does it really require 14 months (and counting) to prepare these data for sharing?

Just after the primary, I let myself hope that the pollsters of 2008 might follow the example of the giants of 1948, put aside the competitive pressures and open their files to scholars. Fortunately, the survey researchers at CBS News, the Field Poll, Gallup, Opinion Dynamics, PPIC, SurveyUSA and the University of New Hampshire (and their respective media partners) did just that. For that we should be grateful. But the fact that only 7 of 21 organizations chose to go beyond minimal disclosure in this case is profoundly disappointing.

The AAPOR Report is a gift for what it tells us about the state of modern pre-election polling in more ways than one. The question now is whether polling consumers can find a way to do something about the sad state of disclosure this report reveals.

Correcting the Correction: I had it right the first time. The CBS News/New York Times partnership conducted their first New Hampshire survey in November 2007, but CBS News was solely responsible for the panel-back study. The original version of this post incorrectly identified the CBS News New Hampshire polling as a CBS/New York Times survey. While those organizations are partners for many projects, the New York Times was not involved in the New Hampshire surveys.


US: National Survey (Marist-4/1-3)


Marist Poll
4/1-3/09; 928 registered voters, 3.5% margin of error
Mode: Live Telephone Interviews

National

Obama Job Approval
56% Approve, 30% Disapprove (chart)
Dems: 88 / 4 (chart)
Inds: 53 / 28 (chart)
Reps: 25 / 59 (chart)
Economy: 54% Approve, 37% Disapprove (chart)

(Obama Job, Economy, Foreign Policy)


US: National Survey (Pew-3/31-4/6)


Pew Research Center
3/31 - 4/6/09; 1,506 adults, 3% margin of error
Mode: Live Telephone Interviews

National

Obama Job Approval
61% Approve, 26% Disapprove (chart)
Dems: 91 / 3 (chart)
Inds: 56 / 27 (chart)
Reps: 29 / 60 (chart)

State of the Country
23% Satisfied, 70% Dissatisfied (chart)

Are you generally optimistic or pessimistic that Barack Obama's policies will improve economic conditions in the country?

    66% Optimistic
    26% Pessimistic

This year, have Republicans and Democrats in Washington been working together more to solve problems OR have they been bickering and opposing one another more than usual?

    25% Working together more
    53% Bickering more than usual
    8% Same as in the past

(source)


US: Iraq (CNN-4/3-5)


CNN / ORC
4/3-5/09; 1,023 adults, 3% margin of error
Mode: Live Telephone Interviews

National

Do you favor or oppose the U.S. war in Iraq?

    35% Favor
    63% Oppose

Barack Obama has announced that he will remove most U.S. troops from Iraq by August of next year but keep 35 thousand to 50 thousand troops in that country longer than that. Do you favor or oppose this plan?

    69% Favor
    30% Oppose


NJ: 2009 Gov (PublicMind-3/30-4/5)


Fairleigh Dickinson / PublicMind
3/30 - 4/5/09; 809 registered voters, 3.5% margin of error
Mode: Live Telephone Interviews

New Jersey

Gov. Jon Corzine (D) Job Approval
40% Approve, 49% Disapprove (chart)

2009 Governor - Republican Primary
Christie 43, Lonegan 21, Merkt 2, Levine 2 (chart)

2009 Governor - Democratic Primary
Corzine 48, Booker 33
Corzine 45, Codey 37

2009 Governor - General Election
Christie 42, Corzine 33 (chart)
Levine 34, Corzine 33
Corzine 37, Lonegan 36 (chart)
Corzine 37, Merkt 33

(source)


The NY Times and "Push Polling"

Topics: Doug Schoen , Michael Bloomberg , Push Polls

I've lost count of how many times we've seen it happen. A campaign testing negative messages on an internal poll gets accused of conducting a "push poll." More often than not, the opposing campaign, bristling with outrage, reaches out to a reporter who plays along. Soon we have a story in which the opponent's spokesperson characterizes negative message testing as "one of the most discredited and dishonorable forms of negative campaigning." Never mind that virtually all campaigns test negative messages. Never mind that the differences between message testing and so-called "push polls" are easily discovered via Google (say here, here or here). It happens over and over.

What's unusual about today's example is that it appears on the front page of the New York Times. It was based on calls received by "several people" in New York City who picked up the phone, were told that a survey was being conducted, but "were soon asked a series of questions featuring negative information about [Representative Anthony] Weiner," one of the potential opponents of Mayor Michael Bloomberg.

The story confirms that the poll was conducted by Bloomberg. It also provides the following description of the negative information included in the poll:

The issues highlighted in the telephone calls closely echoed negative stories that have appeared in city newspapers about Mr. Weiner, and that the congressman's aides have accused opposition researchers from the Bloomberg campaign of planting.

[...]

The questions, [a respondent] recalled, accused Mr. Weiner of taking contributions from lobbyists and of securing federal funding for an organization whose members had contributed to his campaign.

Shocking. Negative campaigning may be breaking out. In New York City.

Do those facts add up to a push poll? No. Politico's Ben Smith saves me the trouble of block-quoting myself:

The story suggests the poll is a "push poll," and doesn't raise the alternative -- that it was the kind of message-testing that, those who followed the presidential campaign will recall, was often confused with push polling.

The difference is explained at length here -- "push poll" is a pollster term of art, with a technical definition -- but the main distinction is that a push poll is designed to push a sharp, memorable, negative message, typically right before an election; a message testing poll is designed to test an arsenal of attacks to see which works best, for later mail, television, or other public attacks, not to persuade the call recipient of anything.

A so-called "push poll" is not a poll at all. It's a telemarketing call made under the guise of a survey. So if the pollster makes only a few hundred calls, asks 15 to 20 minutes worth of questions, includes demographic items, those are usually clues that the intent was to conduct legitimate research for the campaign. Of course, as I've written many times, the pollster isn't off the hook just because they are testing messages. They still have an ethical obligation to tell the truth and not abuse the respondent. Unfortunately, the Times makes no effort to examine the veracity of the attacks.

If, on the other hand, the Times had evidence that the calls were very short, included just a few questions and then the negative attacks, and were being made to many thousands of New Yorkers, there might be more to this story.

Such an effort would be important given the sort of calling that Bloomberg's pollsters, Doug Schoen and Mike Berland, conducted for the Mayor's first campaign in 2001, as described in Schoen's autobiography, The Power of the Vote (pp. 281-282):

Up until this point, our work had been highly sophisticated, but not unprecedented. However, our next step proved even more radical [...] The effort that was taking shape under Mike Berland and our associate, Bradley Honan, was something new. We were now proposing to move beyond sampling and extrapolating to something new -- a census of the city, a database with information on every voter, not a sample.

"This election is going to involve over a million voters," I said. "We don't have to sample, we can do it all." By purchasing consumer information and using phone banks, we could build a profile of every swing voter that combined demographic, voting history and consumer data. We could tag every voter as a member of one of the groups and develop a dialog with them that emphasized the issues they cared about most. Every piece of mail, every phone call, would be targeted precisely on them.

Bloomberg's answer was simple and direct. "Let's do it," he said.

The theory was simple and elegant but assembling the actual database was an enormously complex endeavor: after merging voting demographic and attitudinal information, it was not uncommon to have as many as two hundred and fifty variables per voter.

So they used "phone banks" eight years ago an attempt to collect "attitudinal information" on every voter in New York City. Did these voters know the information was being collected so that advertising could be "targeted precisely on them," or did they think they were taking part in a confidential survey? If the 2001 calls were billed to respondents as the latter, then it would have amounted to what market researchers call "sugging" - or "selling under the guise of research." Not push polling, to be sure, but not entirely ethical either.

Now it would be a huge leap to assume that Bloomberg's campaign is once again conducting a "census" of voters, and an even bigger leap to assume that negative message testing is a part of it. However, Smith reports that Schoen is once again polling for Bloomberg. If every voter in New York City is hearing negative arguments about Bloomberg's opponents under the guise of a poll, then the "push poll" label might fit. Also, as we have learned several times in recent years, mass political messaging campaigns are now commonly conducted under the false guise of less expensive, interactive-voice-response (IVR) surveys. Those efforts, while not a "poll" by any conventional standard, do collect the "data" provided by those who stay on the line to answer questions. So questions about the kinds of questions that Bloomberg's poll asked, and the length and mode of the survey, are certainly worth asking.

However, all we know for certain in this case is that Bloomberg's pollster asked questions in a survey about negative charges that have appeared in newspaper stories. Those facts don't add up to a "push poll."


US: National Survey (CBSTimes-4/1-5)


CBS News / New York Times
4/1-5/09; 998 adults, 3% margin of error
Mode: Live Telephone Interviews

National

Obama Job Approval
66% Approve, 24% Disapprove (chart)
Dems: 89% / 7% (chart)
Inds: 63% / 24% (chart)
Reps: 31% / 54% (chart)
Economy: 56% / 34% (chart)

Congressional Job Approval
26% Approve, 64% Disapprove (chart)

State of the Country
39% Right Direction, 53% Wrong Track (chart)
Economy: 20% Getting Better, 34% Worse, 43% Same (chart)

If you had to say, which do you think is a more serious problem right now -- keeping health care costs down for average Americans, OR providing health insurance for Americans who do not have any insurance?

    40% Keeping costs down
    54% Providing for the uninsured

(CBS Obama story/results, Tax story, Health Care story/results; Times story, results)


Turnout by Age, 2000-2008


[Figure: Voter turnout by age, 2000-2008]
Michael McDonald has taken a great first look at the Current Population Survey voter turnout data that were just released at the end of last week. Be sure to read his discussion of the new CPS data.

Here I just wanted to update the turnout by age chart that I did last fall. What is striking is that voters under 30 increased their turnout rate over that of 2004, which in turn was up from 2000 by a lot.  But more surprisingly, voters over 35 decreased their participation rate from what it was in 2004.  These are not huge differences, but they are quite consistent for the over-35 group.

In contrast to the 2008 pattern of increase among the young and small declines among the older citizens, the 2004 pattern showed an upward shift for all age groups, though more so for the young.

There will be time to attack these data with more sophisticated tools, but one has to speculate that Obama's appeal was less stimulating to older Democratic constituencies than to the young, and that Republican-leaning constituencies were perhaps slightly more willing to stay home than in 2004. Let me label this clearly as speculation -- I've not run analysis to address this yet. The change in turnout in the CPS overall is very slightly down (by 2 tenths of a point) from 2004 to 2008, but we know actual turnout was up a bit, by 1.6 points by McDonald's Voting Eligible Population estimate.

So the CPS estimate may be a little at odds with the vote data, but the relative turnout among age groups tells an interesting tale.  Even with increases among the young, the classic pattern of increases in turnout with age still holds, and the recent upturn of voting by those under 30 has served to reduce that gap, but has not come close to putting an end to the age-turnout relationship.
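For anyone who wants to reproduce this kind of tabulation when the microdata are in hand, here is a minimal sketch. The records and column names are made-up stand-ins for the actual CPS supplement variables, and real estimates would also apply the supplement weights:

```python
import pandas as pd

# Made-up records standing in for the CPS November supplement microdata.
cps = pd.DataFrame({
    "age":   [22, 45, 67, 29, 52, 73, 19, 38],
    "voted": [1, 1, 1, 0, 1, 1, 0, 0],  # 1 = reported voting
})

bins = [18, 30, 45, 60, 120]
labels = ["18-29", "30-44", "45-59", "60+"]
cps["age_group"] = pd.cut(cps["age"], bins=bins, labels=labels, right=False)

# Share reporting a vote within each age group -- the quantity charted above.
print(cps.groupby("age_group", observed=True)["voted"].mean())
```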


McDonald's Sneak Peek at CPS 2008 Turnout Data

Topics: Current Population Survey (CPS) , Michael McDonald , Turnout

How did voter turnout in 2008 compare to prior years, particularly among key subgroups such as African Americans and younger voters? Some of the best data on these questions comes from the massive Current Population Survey (CPS) conducted by the Census Bureau. Every month, the Census Bureau conducts a random-sample, in-person survey of roughly 50,000 Americans. In November of every election year, they include supplemental questions on voting and registration.

The Census Bureau has not yet published their official report on 2008 voter turnout, but they released the raw data late last week and our friend Michael McDonald, the George Mason University professor who has long specialized in the study of voter turnout, did some preliminary number crunching over the weekend and has a sneak preview on his web site.

On the most anticipated questions, McDonald's analysis "confirms that African-American and youth voter turnout increased between 2004 and 2008." Specifically, the CPS shows a 4.9 percentage-point increase in turnout among African Americans and a 2.1-point increase among citizens aged 18 to 29. Turnout among African-Americans (65.2%), which has long lagged 5 to 10 percentage points behind turnout among whites, was slightly less than a percentage point lower than turnout among whites in 2008 (66.1%).

The CPS data also essentially confirms McDonald's earlier estimate of early voting -- 29.7% of Americans told the CPS they voted early in 2008.

However, the new CPS data also produces a puzzling finding that may overwhelm the subgroup analysis once the CPS releases its official report: The overall rates of turnout and registration "declined slightly between 2004 and 2008" (from 63.8% to 63.6%). That finding is counterintuitive, at best, since the tallies from the Secretaries of State show that 9.0 million more people voted in 2008 than in 2004, while the voting age population increased by 10.1 million over the same period. Thus, McDonald concludes that "it appears almost certain that the turnout rate did not decline between 2004 and 2008 despite the statistics from the CPS" and provides this important caution:

[T]he Census Bureau's CPS voting and registration supplement is an important report that always receives major media coverage. This CPS turnout rate decline will therefore likely be uncritically reported by the media when the 2008 voting and registration report is released later in 2009. I do not believe that a turnout rate decline occurred between 2004 and 2008. While this may cast doubt on the accuracy of the 2008 CPS Voting and Registration Supplement, I would add that I believe that the data are still useful to illuminate patterns in voting and registration. Indeed, voter turnout statistics by race and age, discussed below, are consistent with patterns speculated to be present in the 2008 election.

McDonald has all the details on his website -- as always, his analysis is worth reading in full.
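A quick back-of-the-envelope calculation shows why the raw tallies point toward an increase rather than a decline. The figures below are rough approximations of the published vote totals and McDonald's VEP estimates, for illustration only:

```python
# Illustrative approximations, not McDonald's exact figures.
votes_2004, votes_2008 = 122.3e6, 131.3e6   # roughly 9.0 million more ballots
vep_2004, vep_2008 = 203.5e6, 213.3e6       # the eligible population also grew

rate_2004 = votes_2004 / vep_2004
rate_2008 = votes_2008 / vep_2008
print(f"2004: {rate_2004:.1%}   2008: {rate_2008:.1%}   "
      f"change: {(rate_2008 - rate_2004) * 100:+.1f} points")
```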

PS: Charles Franklin promises an update soon on one of my favorite charts from 2008, his look at turnout by age using the CPS data. Update: Franklin's post is up.


US: Islam Relations (ABCPost-3/26-29)


ABC News / Washington Post
3/26-29/09; 1,000 adults, 3% margin of error
Mode: Live Telephone Interviews

National

Would you say you have a generally favorable or unfavorable opinion of Islam?

    41% Favorable
    48% Unfavorable

How important do you think it is for Obama to try to improve U.S. relations with Muslim nations -- very important, somewhat important, not so important or not important at all?

    81% Important
    18% Not important

(ABC story, results; Post story, results)


US: N. Korea et al (Gallup-4/1-2)


Gallup Poll
4/1-2/09; 988 adults, 3% margin of error
Mode: Live Telephone Interviews

National

International terrorism concerns Americans most -- 88% say they are concerned, including 59% who are very concerned.

Americans are next-most likely to be "very concerned" about Iran's nuclear capabilities (54%) and North Korea's nuclear capabilities (52%). In terms of overall concern, the rankings of both are nearly as high as those of Afghanistan and Iraq.

Americans are slightly more likely to be "very concerned" about the conflict in Afghanistan (51%) than about the conflict in Iraq (48%). These ongoing conflicts rank second and third, respectively, in terms of overall concern.

The emerging issue of drug violence in Mexico is rated almost on par with concerns about Afghanistan, Iraq, Iran, and North Korea -- 79% of Americans are concerned, including 51% who are very concerned.

The military threat posed by China concerns Americans more than the military threat posed by Russia -- 39% are very concerned about China, while 25% are very concerned about Russia. While these threats are less worrisome to Americans than the others discussed thus far, more than 6 in 10 Americans do express some concern about them.

The conflict between Israel and the Palestinians concerns 72% of Americans -- ranking it toward the bottom of the overall list. Only 35% are very concerned.

(source)


NY: 2010 Sen, Gov (Quinnipiac-4/1-5)


Quinnipiac University
4/1-5/09; 1,528 registered voters, 2.5% margin of error
664 registered Democrats, 3.8% margin of error
Mode: Live Telephone Interviews

New York State

Job Approval / Disapproval
Pres. Barack Obama (D): 71 / 21 (chart)
Gov. David Paterson (D): 28 / 60 (chart)
Sen. Chuck Schumer (D): 61 / 25 (chart)
Sen. Kirsten Gillibrand (D): 33 / 13

Favorable / Unfavorable
Andrew Cuomo (D): 63 / 17
Gov. David Paterson (D): 27 / 55 (chart)
Rudy Giuliani (R): 55 / 35
Sen. Kirsten Gillibrand (D): 24 / 11 (chart)
Carolyn McCarthy (D): 23 / 8
Peter King (R): 22 / 11

2010 Governor - Democratic Primary
Cuomo 61, Paterson 18 (chart)

2010 Governor - General Election
Giuliani 53, Paterson 32 (chart)
Cuomo 53, Giuliani 36 (chart)

2010 Senate - Democratic Primary
McCarthy 33, Gillibrand 29

2010 Senate - General Election
Gillibrand 40, King 28 (chart)

(source)


Overdue "Outliers"


Carl Bialik is absolutely right (more on this topic here tomorrow).

The Pew Research Center calls the partisan gap in Obama's ratings the biggest ever.

Tom Jensen notices net negatives for Terry McAuliffe among VA independents.

Nate Silver models support for gay marriage in Iowa and beyond.

Steve Benen finds humor in Fox News polls.

Chris Good speculates about economic patience and Obama's ratings.

National Journal's DC Insider survey shows polarization over Obama's GM policy.

ResearchRants critiques an RNC fundraising "survey."

John Sides adds an extensive blogroll of political science blogs to the Monkey Cage.

And this settles it, Domenico Montanaro is the new Nate Silver.


Groves Update

Topics: Census , Robert Groves

Some updates and a bit more background on the nomination of Robert Groves to run the Census Bureau, announced on Thursday.

First, in addition to the Associated Press article I linked to on Thursday, the Groves nomination was also the subject of stories on Friday by The Washington Post, New York Times and the Detroit Free Press. The Post's Ed O'Keefe also blogged excerpts of the press releases issued by various lawmakers and interest groups in reaction to the Groves nomination.

Second, in a development that should surprise no one, the world of survey and statistical science is quickly rallying behind the Groves nomination. Seven statistical and social science organizations have signed a letter urging confirmation. These include:

  • Consortium of Social Science Associations
  • Council of Professional Associations on Federal Statistics
  • American Sociological Association
  • American Statistical Association
  • Population Association of America
  • Association of Population Centers
  • AAPOR: the American Association for Public Opinion Research

Third, note well the point that the FreeP's Todd Spangler homed in on: the argument over the use of statistical sampling in connection with the upcoming 2010 decennial census is now essentially moot:

Even before the announcement was made, Groves, 60, a sociology professor and director of the Survey Research Institute at U-M, was being criticized by congressional Republicans. They remembered him advocating statistical sampling in the decennial count of the U.S. population when he was an associate census director.

It wasn't used because the Commerce secretary at the time rejected it, and by the end of the '90s, the U.S. Supreme Court ruled that sampling could not be used in setting congressional representation.

[...]

There are no plans to use sampling in next year's count. The bureau's mission plan doesn't call for it, and it's too late to start now. Census officials have questioned in the past whether they have an effective way of employing sampling to correct the tally of historically undercounted populations, like young people, transients, African Americans and Hispanics.

Plus, Commerce Secretary Gary Locke said in confirmation hearings recently that sampling wouldn't be used in next year's census, and, just as in the 1990s, it's his call.

Finally, some links to further reading: Before the Groves nomination, Andrew Reamer of the Brookings Institution and my colleague Eliza Newlin Carney of National Journal wrote about the issues facing the Census Bureau. And for those who want to get deep into the weeds of the science of the ongoing debate, consider the reports issued by the National Research Council (NRC) of the National Academy of Sciences (NAS), particularly their 1999 report, Measuring a Changing Nation: Modern Methods for the 2000 Census, and last year's Coverage Measurement in the 2010 Census (both have free online versions).

While the discussion in the NRC/NAS reports is highly technical, a quick skim of the executive summaries reveals how the 1999 Supreme Court decision barring the use of statistical sampling for counts used in congressional reapportionment has affected the scientific consensus. Just before the decision, the NRC's panel endorsed the use of statistical sampling to follow up with those who initially fail to return their census form. Last year, the NRC report focused more on developing statistical databases and models to improve the way the census works.


IA: 2010 Gov, Sen (Selzer-3/30-4/1)


Des Moines Register / Selzer & Co.
3/30 - 4/1/09; 802 adults, 3.5% margin of error
Mode: Live Telephone Interviews

Iowa

Job Approval
Obama: 64% Approve, 29% Disapprove
Gov. Culver (D): 55% Approve, 36% Disapprove
Sen. Grassley (R): 66% Approve, 20% Disapprove
Sen. Harkin (D): 59% Approve, 28% Disapprove

Would you vote to re-elect the following officials, consider an alternative, or definitely vote for someone else?

    Culver: 35% Vote for, 46% Definitely/Consider alternative
    Grassley: 47% Vote for, 37% Definitely/Consider alternative

(source)


US: North Korea (Rasmussen-4/3-4)


Rasmussen Reports
4/3-4/09; 1,000 likely voters, 3% margin of error
Mode: IVR

National

How concerned are you about the possible threat of North Korea using nuclear weapons against the United States?

    39% Very concerned
    34% Somewhat concerned
    20% Not very concerned
    5% Not at all concerned

If North Korea launches a long-range missile, should the United States take military action to eliminate North Korea's ability to launch missiles?

    57% Yes
    15% No

(source)


 
