February 17, 2008 - February 23, 2008
Why does Ohio's Democratic primary electorate seem so different from Wisconsin's? Barack Obama led narrowly in most of the surveys conducted earlier this month in Wisconsin before Tuesday's primary election, but has trailed in Ohio by margins ranging from 7 to 21 points in recent weeks. What explains the difference?
Most observers consider Ohio a better state for Clinton than Texas, largely because of the large share of downscale white voters in Ohio who have been a crucial base of support for Clinton throughout the primaries. That characteristic is one reason why the results of the Wisconsin exit polls among less-educated white voters caught my eye on Tuesday night. To recap the analysis posted by ABC News:
Less-educated whites have been a core group for Clinton; in previous primaries combined she's won those who lack a college degree by 30 points, while Obama's won college-educated whites.
In Wisconsin, however, Obama won less-educated whites, 52-47 percent, while crushing Clinton among the better-educated. That is Obama's best showing among less-educated whites in any primary to date.
I am an Ohio native, so I know that Ohio and Wisconsin are demographically and culturally more similar than they are different. I put the following table together with statistics gathered from the U.S. Census and the Almanac of American Politics. Ohio has a bigger "urban" population (77.3% vs. 68.3%) and a larger African American population (12% vs. 6%), while Wisconsin has a slightly higher median income ($46.1K) than Ohio ($43.3K). Otherwise, the demographics of their adult populations are remarkably similar.
Of course, our real interest is the smaller population of primary voters, which varies from election to election. As a percentage of eligible adults, Wisconsin's Democratic primary turnout this year (27%) was larger than four years ago (21%) and one of the largest so far this year. Ohio's Democratic primary had lower turnout in 2004 (14% of adults), partly because the nomination contest was essentially over by the time Ohio voted.
Wisconsin's primary is also more "open" than Ohio's. Wisconsin has same day registration, and both the Democratic and Republican candidates are listed on the same ballot, so Wisconsin voters can choose a primary in the secrecy of the voting booth. Ohio lacks formal party registration, so its primary is "semi-open." Registered voters who have never voted in a primary before can participate in the Democratic primary simply by showing up at their polling place and "publicly" requesting a Democratic ballot (those who have previously voted in Republican primaries have to "complete a statement" at the polling place that confirms their change in affiliation).
The following table compares key results from the Ohio exit poll four years ago with the Wisconsin exit polls from both 2004 and 2008. Two differences stand out: African-Americans were a greater share of the Ohio electorate four years ago (18%) than in Wisconsin this week (8%), a difference that should work in Obama's favor. On the other hand, Ohio had more Democratic identifiers (72%) than Wisconsin either this week (62%) or four years ago (62%), a difference that benefits Clinton in Ohio (remember, party identification is not party registration -- the exit poll question asks: "how do you usually think of yourself?"). Otherwise, the states look similar. Less-educated white voters are roughly half of each electorate as measured by the exit polls in each state.
So why is Clinton doing so much better in Ohio polls than she did in Wisconsin on Tuesday? The answer, for the moment, appears to stem mostly from her continuing strength among Ohio's downscale white Democrats. In Wisconsin, as noted above, Obama ran slightly ahead of Clinton among less-educated white voters. However, in both the Quinnipiac poll conducted two weeks ago and the ABC/Washington Post poll done earlier this week, Clinton continues to hold an enormous lead among less-educated white voters. Obama's better overall performance on the more recent ABC/Post survey stems mostly from a stronger showing among African-Americans and college educated whites (and perhaps from the typically smaller undecided percentage reported by the ABC/Post survey).
These results show the potential for Obama if he can even partially replicate the inroads made into Clinton's base in Wisconsin. Some of Obama's Wisconsin success owes to the larger proportion of non-Democrats there. However, a roughly 10-point difference in the independent percentage alone cannot explain the more than 20 point difference in preference for Obama. Consider this: If Ohio's demographic composition from 2004 holds, if Obama and Clinton win their usual margins among black and Latino voters, and if Obama defeats Clinton by 10 points among college educated whites, he can win Ohio by cutting Clinton's lead among less-educated whites to 20 points.
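That hypothetical can be checked with simple weighted-margin arithmetic. The sketch below uses assumed group shares (roughly the 2004 Ohio exit poll mix described above) and assumed within-group margins; all of the specific numbers are illustrative, not survey results.

```python
# Hypothetical back-of-envelope: weighted Obama-minus-Clinton margin,
# in points, using assumed electorate shares and within-group margins.
groups = {
    # name: (share of electorate, Obama margin in points)
    "black":              (0.18,  70),   # assumed Obama landslide
    "latino":             (0.03, -30),   # assumed Clinton advantage
    "college whites":     (0.30,  10),   # Obama +10, per the scenario
    "non-college whites": (0.49, -20),   # Clinton's lead cut to 20
}

overall = sum(share * margin for share, margin in groups.values())
print(round(overall, 1))  # -> 4.9, a narrow Obama win under these assumptions
```

The point of the exercise is only that, with an electorate shaped like Ohio's in 2004, trimming Clinton's non-college-white margin to 20 points is enough to flip the overall result even while she carries that group decisively.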
Given the exit poll numbers from Wisconsin, that result does not seem far-fetched. What do you think? Is that result a real possibility? And putting the hypotheticals aside, why does Clinton run stronger in Ohio than in Wisconsin among downscale whites? Can she maintain that advantage through March 4?
(Thanks to Alan Abramowitz, Jon Cohen of the Washington Post and Doug Schwartz of Quinnipiac University for providing some of the data above).
ABC News/Washington Post
(ABC story, results; Post story, results)
Clinton 48, Obama 47
Clinton 50, Obama 43
My latest column for NationalJournal.com, which examines criticism of poll driven "horse race" coverage, is now online.
Fox News/Opinion Dynamics
Clinton 44, Obama 44... McCain 51, Huckabee 34, Paul 7
McCain 47, Clinton 44... Obama 47, McCain 43
Diageo/The Hotline (release)
Clinton 45, Obama 43... McCain 53, Huckabee 25, Paul 7
Obama 48, McCain 40... McCain 48, Clinton 40
Jay Leve and his crew at SurveyUSA have been busy this week. Following up on our discussion of their pollster report cards, SurveyUSA has a new and improved scorecard chart for individual state primaries (example for Florida Republicans with explanation here, example for Wisconsin Democrats here). The new state-level report card format includes eight different measures of error and a number of additional variables intended to help us "better understand the correlation between the methodological choices an election pollster makes and the results an election pollster produces." Those variables include:
- Length of field period
- Proximity of poll release to election
- The number of undecided voters
- The number of respondents interviewed
- The sample source (if available)
- The interviewing technique (if available)
- The method of respondent selection (if available)
They also updated their "high-level" report cards (summarizing one measure of error for all final polls in 2008) to include polls from Wisconsin.
Finally, in another interesting innovation, they are also soliciting reader input on the McCain story:
What questions should SurveyUSA ask Americans in its polling today about today’s New York Times story, today’s Washington Post story, and John McCain’s response to it? We welcome your suggestions at firstname.lastname@example.org.
American Research Group
Approve 19, Disapprove 77
Carrying on with our "live blogging" tradition, I'll post what seems relevant here on what we can learn from the non-leaked exit poll information tonight. Use the following links for actual exit poll tabulations after the polls close at 8:00 p.m. Central time (9:00 p.m. Eastern):
Updates will follow in reverse chronological order -- all times Eastern:
7:51 a.m. Wednesday - ABC updated their analysis: Obama's margin among non-college whites was 52% to 47%.
10:55 - Following up on our discussion last week on the preferences of non-college white voters comes this from the now-updated ABC News exit poll analysis:
Another core group for Clinton has been less-educated whites; in previous primaries combined she's won those who lack a college degree by 30 points, while Obama's won college-educated whites.
In Wisconsin, however, less-educated whites split about evenly in preliminary data, while Obama continued to sail ahead among the better-educated.
There have been only two previous primaries in which Clinton didn't clearly win less-educated whites, Utah and Illinois.
Note: ABC's Gary Langer also looked at the Democratic vote by socio-economic status in a blog post earlier today using a combined exit poll sample from previous primary states:
Overall, combining all primaries to date, voters who hold a college degree have voted for Barack Obama over Hillary Clinton by an 8-point margin, 51-43 percent, while those who haven’t been graduated from college have favored Clinton by 10 points.
9:19 - The networks just posted an update. The extrapolated overall estimate now shows Obama with 56%, Clinton 43% among Democrats (n=1431); McCain 52%, Huckabee 36%, Paul 6% among Republicans (n=840).
9:00 - As the polls close in Wisconsin, MSNBC has the crosstabulations up online. Our friend Mark Lindeman reports an extrapolated overall exit poll estimate is 55% for Obama, 43% for Clinton among Democrats (based on n=878) and 52% McCain, 35% Huckabee, 6% Paul (n=454). Click here for the usual caveats on how these numbers are derived and how they improve over the course of the evening.
8:43 - The Page has a summary of non-leaked issue results from the early waves of exit poll interviews, including a link to an early analysis from ABC News. TPM also passes along early analysis from AP and what was apparently an inadvertent sneak preview from CBS of the early exit poll estimate everyone is most curious about (though keep in mind that all of these numbers are based on data collected as of the late afternoon, which has been less than reliable on previous primary nights and in previous years).
A week ago, I linked to two new pollster "report cards" prepared by SurveyUSA (one for all pollsters, one for the 14 most active this year), based on average accuracy scores for all pollsters that have released presidential primary surveys this year. I included a few paragraphs to try to add some perspective, both on these specific report cards and the subject of measuring pollster accuracy in general. I did not intend to be dismissive of SurveyUSA's work nor their generally excellent performance both this year and in prior years, although I can understand why some may have read it that way. Regardless, SurveyUSA's Jay Leve has posted a lengthy response worthy of further comment.
First, and most important, Leve's post highlights an error that I need to correct. I wrote:
SurveyUSA bases their ranking on one particular measure of polling error, which compares the margin between the percentages received by the first and second place finishers on election day to the margins as reported for the same two candidates on the final poll. There are other measures of poll error (SurveyUSA has posted a paper they authored that reviews eight such measures). Those critical of SurveyUSA will note that they typically report very small percentages for the "undecided" category, so they tend to do better on their measure of choice (Mosteller 5) which does not reallocate undecided voters [emphasis added].
The words in italics are not correct, at least according to the data that SurveyUSA includes on an interactive spreadsheet posted on their web site that summarizes head-to-head accuracy comparisons against other pollsters over the last five years. That spreadsheet shows that, if anything, the opposite is true: SurveyUSA tends to do a little worse relative to other pollsters on the Mosteller 5 measure than it does on other measures. I have corrected the original post, and I apologize for the error.
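For readers unfamiliar with the measure, Mosteller 5 as described in the quoted passage can be sketched in a few lines: it is simply the absolute difference between the polled margin and the actual margin for the top two finishers, with no reallocation of undecided voters. The numbers in the example are hypothetical.

```python
def mosteller5(poll_first, poll_second, result_first, result_second):
    """Mosteller 5 error: absolute difference between the final poll's
    margin and the election-day margin for the top two finishers.
    Undecided voters are not reallocated."""
    polled_margin = poll_first - poll_second
    actual_margin = result_first - result_second
    return abs(polled_margin - actual_margin)

# Hypothetical example: final poll showed 48-43, the result was 53-41.
print(mosteller5(48, 43, 53, 41))  # polled margin 5 vs. actual margin 12 -> error of 7
```

Because only the margin enters the calculation, a pollster's handling of the undecided category affects this score only indirectly, which is the nub of the dispute discussed above.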
SurveyUSA is understandably sensitive to slights from the "traditional 'headset operator' telephone pollsters," who according to Leve, "have worked for 16 years to mock and marginalize the innovative work done by SurveyUSA." While there is some truth to that characterization, I hope readers will appreciate that I have not been among the "mockers." In fact, I took to the pages of Public Opinion Quarterly, the most respected journal of survey methodology, to advise that while "healthy skepticism is appropriate . . . a reflexive rejection of IVR as 'theoretically unsound' seems unwarranted." In the same article I quoted from a paper by an academic methodologist (Joel Bloom, now of SUNY-Albany), showing that SurveyUSA had "'performed at roughly the same level as other nonpartisan polling organizations in 2002,' though it did 'somewhat better' on 'most measures.'"
While it was unfair of me to imply that SurveyUSA "cherry-picked" (as Leve put it) a favorable measure for their 2008 report card, the issue of how the various measures of polling error handle the "undecided" category is important and may have implications for where some pollsters rank. That issue is the underlying theme of the paper on such measures that SurveyUSA linked to in their scorecard post. For the record, that paper makes the case that three other Mosteller measures (Mosteller 3, 4 and 6, but not Mosteller 5) should theoretically benefit a pollster with low undecided voters, and concludes by arguing for a new measure that "rewards the pollster whose estimate is not just the most precise, but whose numbers leave him/her the least amount of wiggle room." For their 2008 report card, however, SurveyUSA picked a measure that is typically tougher on them than the others available, and they deserve credit for that decision.
Aside from the issue of how to measure error, however, there are some additional issues still worth discussing. For example, Leve does step up and suggest at least one way to determine statistical significance from their error comparisons, but it is limited. I had raised the issue of how to identify "statistically meaningful" differences on a pollster scorecard because, to be perfectly honest, we have been discussing how to best create our own scorecard and provide appropriate guidance.
In his response, Leve points to their "Interactive Election Scorecard," a spreadsheet which (among other things) computes the odds of SurveyUSA besting their competitors over the five years of comparisons included therein. Unfortunately, the spreadsheet is not set up to allow for similar comparisons among other pollsters or (as far as I can tell) for comparisons filtered for individual election years. The 2008 report card tells us, for example, that Mason-Dixon has an average error score of 8.26 on 19 polls while ARG has a score of 8.50 on 20 polls. It tells us that SurveyUSA had an average error of 4.50 on 22 polls, while Gallup had an average error of 4.60 on 2 polls. Are those differences statistically meaningful? The point of these examples, by the way, is not to trash the SurveyUSA report card but to underscore that these are tricky questions.
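One conventional way to attack the "statistically meaningful?" question, if per-poll error scores were available rather than just averages, would be a two-sample comparison such as Welch's t. The sketch below uses made-up per-poll error scores for two hypothetical pollsters; the report cards publish only the averages, so this is illustrative, not a test of any real pollster pair.

```python
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples of error scores
    (does not assume equal variances)."""
    var_a, var_b = st.variance(a), st.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

# Hypothetical per-poll error scores (points) for two pollsters.
pollster_a = [3.0, 5.5, 4.0, 6.5, 3.5]   # mean 4.5
pollster_b = [8.0, 9.5, 7.0, 10.0, 8.0]  # mean 8.5

print(round(welch_t(pollster_a, pollster_b), 1))  # -> -4.7
```

A large negative t here would suggest pollster A's errors really are smaller than pollster B's, but with a handful of polls per pollster (two, in Gallup's case above) the comparison has very little power, which is exactly why the report-card differences are hard to interpret.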
The issue of timing -- which Leve promises to address in the future -- remains important. In my post, I wrote that the SurveyUSA report card is based:
[O]n the last poll conducted by each organization. Typically, surveys get more accurate as we get closer to election day, and the polls conducted a week or more before the election tend to be at a disadvantage when compared against those from organizations like SurveyUSA that typically continue to call right up until the night before the election. You can decide whether that issue is a "bug" in the report card or a critical "feature" in SurveyUSA's approach to pre-election polling.
I realize, in retrospect, that my argument and language were a little too glib. First, while polls generally tend to get more accurate as election day approaches, I do not know for certain that SurveyUSA has a meaningful advantage on these accuracy scores because they do more late polling. I can certainly think of specific races in which they have had such an advantage, but those are anecdotes. We have a still unresolved empirical question here as to how much of SurveyUSA's relative accuracy accrues from polling a bit later in the process than many of their competitors.
Let's assume for the sake of argument that SurveyUSA tends to score higher on accuracy measures because they field more polls later. One conclusion would be that their methodology -- which involves very short questionnaires and the ability to make a lot of calls for less money within a short period of time -- allows their clients to do more polling later in the campaign. The net result is a more accurate depiction of the horse race in the final hours of the campaign. In other words, under this hypothetical, the difference amounts to a "feature" not a "bug."
At the same time, again to the extent that differences in "accuracy" depend on timing, it may not be fair to describe all of the pollsters that tend to stop earlier as relatively "inaccurate." In some cases, their surveys may have been equally accurate at the time, but received lower accuracy scores because of shifts in vote preference that occurred in the final week of the campaign. Keep in mind that different surveys are done for different purposes, and those purposes sometimes come with methodological trade-offs. If a media organization wants to measure opinions on a wide variety of attitudes beyond the basic horse race question (especially if those measurements involve open-ended questions), then an automated methodology makes less sense. Moreover, media organizations that sponsor more in-depth surveys typically want to gather their data sooner, to drive stories over the final week of the campaign, rather than waiting until election eve to release the data.
We need to understand that different polls are done for different purposes and a one-size-fits-all measure of accuracy may not make sense for all polls. Either way, this is certainly a topic wide open for further commentary, debate and, ideally, more empirical evidence.
Press Register/University of S. Alabama
McCain 58, Clinton 29... McCain 59, Obama 31
Public Policy Polling (D)
McCain 48, Clinton 43... McCain 47, Obama 42
CNN/Opinion Research Corporation
Clinton 50, Obama 48... McCain 55, Huckabee 32, Paul 11