1. Looking at the macro data, I don't see how one can conclude with any definitiveness that there was no Bradley Effect. You don't cite any real data, just the fact that the percentages were close. Might there not have been a Bradley Effect in some places and a reverse-Bradley in others? Or might any BE have been offset by other polling biases in the opposite direction, such as the (possibly debunked) cell-phone error issue? It is my subjective opinion that the BE is not a big deal and did not SEEM to have a measurable impact on this race, but the "objective data" neither proves nor disproves the hypothesis.
2. The cell-phone effect turned out OK because: (1) the cell-phone-only vote was captured by a traditional demographic group (youth) that it mirrored, (2) this group is still a small part of the electorate, and (3) this group did not vote differently based on the technology. But this seems likely to be a growing and very real concern moving forward. Might cell-phone-only people vote differently from their traditional demographic contingent? That is, might cell-phone-only white thirty-somethings vote distinctly from landline-only white thirty-somethings? At some point, weighting is not going to do it.
3. Might something akin to the "Bradley Effect" have impacted other polls in which personal opinions differed from publicly acceptable or morally preferable positions? I'm thinking of Ted Stevens & Prop 8. People might not feel comfortable admitting they support a criminal, but when they get in the booth they vote for the guy they know; or they publicly support gay marriage so they don't hurt their friends' feelings, but deep down are opposed.
4. Have these potential errors in accurately capturing the Latino community been seen in previous polls in these areas? How much of this error might be due to pollsters who don't normally poll in, say, Nevada and New Mexico coming in from out of state? Can we split out polls by "locals" to see if they did a better job because they had a better idea of how to approach this community?
Posted on November 12, 2008 11:02 AM
At some point I would like to see a discussion (or be pointed to a discussion if it has already been covered) of the statistical effect of poll averaging on sampling error. If we assume for the sake of argument that 10 polls are carried out in an identical fashion (same population, same sampling techniques, same sample size), isn't it possible to compute an "aggregate" margin of error? How would that compare to the error computed from a single sample 10 times as large? More to the point, could we theoretically eliminate sampling error as a factor by averaging a large number of identical polls?
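A quick sketch of the arithmetic (a minimal example, assuming simple random samples and a proportion near 50%):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000   # size of each individual poll
k = 10     # number of identical polls being averaged

single = margin_of_error(n)        # one poll of 1,000
pooled = margin_of_error(k * n)    # one poll of 10,000
# Averaging k independent, identical polls shrinks the sampling error
# by a factor of sqrt(k) -- exactly the same as pooling them into one
# big sample of size k*n.
averaged = single / math.sqrt(k)

print(f"single poll:      +/- {single:.2%}")
print(f"average of {k}:     +/- {averaged:.2%}")
print(f"one pooled poll:  +/- {pooled:.2%}")
```

So in theory, yes: the aggregate margin of error matches that of a single sample 10 times as large. The catch is that real polls are never identical — house effects and differing likely-voter models mean averaging reduces sampling error but cannot touch any shared systematic bias.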
Posted on November 12, 2008 12:46 PM
There is one other western state not mentioned in which the polls did an abominable job: Alaska. As Nate points out on 538.com there was an approximate 12% shift toward the Repubs in 3 races: President, US Senate and US Congress.
What the hell happened there? Turnout numbers are approaching 2004 levels but will not reach them. I can understand polls being a couple of points off, but not 12, given Palin on the ticket and presidential and congressional races on the ballot.
Here's a link to Alaskan Shannyn Moore's column on the latest there - which, parenthetically, is not very recent. Nothing up on the GEM website for Alaska since the day after the election.
Posted on November 12, 2008 1:54 PM
While it is true that the polls -- at the top line -- were pretty close to the final result, a look at the internals of the polls suggests that was often the (lucky) result of errors that canceled each other out:
1. Polls actually did underestimate McCain's strength with white voters, while under- or accurately estimating his strength with non-white voters, including black voters. Consistent with what some call "Bradley," Obama got exactly what the polls said he would get among those groups, while McCain got his share plus the undecideds.
2. How could the polls underestimate his strength with all subgroups and yet accurately or over-estimate his overall strength? The polls also, pretty uniformly, overestimated the size of the white vote and underestimated the size of the non-white vote, particularly the black vote. Most polls estimated 77% of the voter population would be white; in fact, it was only 73%. More detail in my post at http://weeklystandard.com/Content/Public/Articles/000/000/015/801dvflv.asp
3. Similarly, the results suggested a huge difference between the cell-phone-only population and those who can be reached by landline. In the exit poll, Obama had a 60-38 lead among cell-phone-only voters; he had only a 50-48 lead among people with landlines (only or also). Again, a combination of factors, including over-estimating Obama's strength in particular groups and weighting by age and race, may have hidden some of this discrepancy.
The results suggest the challenge pollsters will have coming out of this election:
1. Figuring out better ways of identifying likely voters, particularly after the impact of the Obama organization. Will newly registered young people and African Americans continue to vote in the numbers seen this year?
2. Thinking more clearly about how to handle and apportion "undecided" voters. It is unfortunate that the term Bradley Effect carries the baggage of race. One need not think that undecided voters are lying (for any reason) to think that a voter who says he is undecided in a race between an "Obama" and a "McCain-Palin" is unlikely to go with the "Obama."
3. Figuring out how to reach the cellphone-only population more effectively.
Posted on November 12, 2008 3:03 PM
I did a survey of how pollsters performed in the 10 most "important" states, which I deemed to be CO, FL, IN, MO, NC, NM, NV, OH, PA, and VA. Here are the results for pollsters who surveyed 7 or more of these states in the last 2 weeks before the election. The numbers are the absolute values by which they were off in each of the above states, in that order, followed by the average of how far they were off:
#1 AP/Gfk: 1.2, 0.5, NA, NA, 1.6, NA, 0.4, 3.0, 1.7, 1.5, ave: 1.41
#2 Zogby: NA, 1.3, 6.2, 0.2, 0.8, NA, 1.6, 2.0, 0.5, 1.5, ave: 1.76
#3 PPP: 2.2, 0.5, 0.1, 0.2, 0.6, 2.5, 8.4, 2.0, 2.3, 0.5, ave: 1.93
#4 ARG: 0.8, 1.5, 0.9, 0.2, 0.6, NA, 7.4, NA, 4.3, 1.5, ave: 2.15
#5 SurveyUSA: NA, 0.5, 3.1, 0.2, 1.4, 7.5, NA, 2.0, 1.3, 1.5, ave: 2.19
#6 CNN/Time: 0.2, 1.5, NA, 1.8, 5.6, NA, 5.4, 0.0, 1.7, 3.5, ave: 2.46
#7 Rasmussen: 3.8, 3.5, 3.9, 0.2, 1.4, 4.5, NA, 4.0, 4.3, 1.5, ave: 3.01
#8 Mason Dixon: 2.8, 0.5, NA, 0.8, 3.4, NA, NA, 6.0, 6.3, 2.5, ave: 3.19
#9 YouGov: 7.2, 0.5, 8.9, 1.8, 3.6, 4.5, 7.4, 2.0, 3.3, NA, ave: 4.36
Among the major sites like this one that turned all the polls into composites, here are the results:
#1 FiveThirtyEight.com: 1.2, 0.8, 2.4, 0.0, 0.6, 4.8, 7.5, 0.6, 2.2, 0.1, ave: 1.86
#2 Pollster.com: 0.2, 0.8, 2.1, 1.3, 0.0, 5.6, 5.3, 0.9, 3.1, 0.1, ave: 1.94
#3 Real Clear Politics: 2.3, 0.7, 2.3, 0.5, 0.8, 7.2, 5.6, 1.5, 3.0, 1.1, ave: 2.38
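For anyone who wants to check my arithmetic, the averages are just the mean of the absolute misses over the states each pollster actually surveyed. A quick sketch (the AP/Gfk row is copied from above; None marks a state they didn't poll):

```python
def avg_abs_error(errors):
    """Average absolute miss, ignoring states the pollster did not survey (None)."""
    polled = [e for e in errors if e is not None]
    return sum(polled) / len(polled)

# AP/Gfk's misses in CO, FL, IN, MO, NC, NM, NV, OH, PA, VA
ap_gfk = [1.2, 0.5, None, None, 1.6, None, 0.4, 3.0, 1.7, 1.5]
print(round(avg_abs_error(ap_gfk), 2))
```

Note that averaging only over polled states can flatter a pollster who skipped the hard-to-poll states (NM, NV) where everyone missed badly.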
Posted on November 12, 2008 3:14 PM
Obama's margin of victory has now crept up to 6.6%; it may end up at 6.8% or 6.9% when all the votes are counted. (The bulk of the uncounted ballots seem to be in CA, WA, and OR, three states where Obama ran up huge margins.)
Gary Langer makes the excellent point that focusing on just the margin of victory is a dubious measure of a pollster's accuracy. A better measure would be to compare the predicted vote percentage for each candidate with the actual results.
Also, some pollsters' numbers exhibited large swings before settling on their final pre-election figures. Given that few people made up their minds at the last minute (according to the exit polls), and that these people split essentially evenly between the candidates, such large last-minute swings seem to be complete fiction. Furthermore, there were no significant events in the final days of the campaign that would cause such swings.
Therefore, pollsters whose numbers gyrated in the last couple of weeks before the election should be treated with skepticism, even if they happened to land on the right final number.
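Langer's measure is easy to compute: score each candidate's predicted share against the actual result, rather than just the margin. A minimal sketch (the poll numbers here are hypothetical, not any specific pollster's):

```python
def two_way_error(pred_dem, pred_rep, act_dem, act_rep):
    """Sum of absolute misses on each candidate's vote share. This penalizes a
    pollster who got the margin right only because both shares were off in the
    same direction (e.g., too many undecideds left unallocated)."""
    return abs(pred_dem - act_dem) + abs(pred_rep - act_rep)

# Hypothetical: a 48-42 poll against a 52.9-45.6 result has roughly the right
# margin (6 vs. 7.3 points) but missed the two shares by 8.5 points combined.
print(two_way_error(48, 42, 52.9, 45.6))
```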
Posted on November 12, 2008 4:04 PM
A few people have brought up the apparent inaccuracy of polling in the state of Alaska. I would point out that a significant portion of the early vote in that state has not yet been counted. I've heard estimates of anywhere between 40,000 and 90,000 ballots that are not yet included in the official results. Before we draw any conclusions, we should wait to see what the actual final counts are. I believe the counting is supposed to be finished by the end of this week.
The conventional wisdom is that the Democrats had a very large advantage in the early vote in that state, similar to the rest of the country, so the margins will likely tighten in all races.
Posted on November 12, 2008 4:05 PM
zvelf: good analysis, but you left out the possibility that the error was not just error, but one-way *bias*.
Unlike the National polls, there are clear winners and losers in the state polls.
For state polls, we typically see a larger error in the non-contested states, and less error in the close states. So (with only a little of the cart-before-the-horse), I have rated the pollsters based on performance in the 21 states where the margin was less than 15%, and in the 7 states where the margin was less than 5%.
Here's how they did, with + being bias for Obama and - being bias for McCain:
21 states (14 polled): Bias = +0.3%, SD = 3.4%
7 states (6 polled): Bias = -0.7%, SD = 1.0%
Bias by state
21 states (14 polled): Bias = -0.7%, SD = 4.1%
7 states (7 polled): Bias = +0.0%, SD = 1.0%
Bias by state
Extra credit: Hit all 7 key states within 2.0%!!
21 states (13 polled): Bias = +0.6%, SD = 3.8%
7 states (7 polled): Bias = +0.6%, SD = 2.1%
Bias by state
Extra credit: nailed West Virginia's -13.59% (they predicted -13%).
CLOSE BUT NO CIGAR
21 states (9 polled): Bias = -0.9%, SD = 2.6%
7 states (6 polled): Bias = -1.4%, SD = 2.8%
Bias by state
Extra credit: only pollster to come close on Nevada's +12.7% (they predicted +11%). Throw out the bogus number in Indiana and their biases and SD's drop significantly.
In 2004, Rasmussen and Mason-Dixon were two of the best, but they flopped this time. I don't think they accounted for the youth (or non-land-line) vote significantly enough:
21 states (20 polled): Bias = -1.6%, SD = 3.2%
7 states (7 polled): Bias = -2.0%, SD = 1.9%
Bias by state
Red Pen: Got the winner wrong on 3 of 7 key states. Off by >2.5% in 8 of the 20 states for McCain, only 2 for Obama.
21 states (17 polled): Bias = -1.0%, SD = 3.9%
7 states (6 polled): Bias = -2.1%, SD = 2.3%
Bias by state
Red Pen: All 6 key state polls biased toward McCain. Ohio bias of -6.2%.
21 states (14 polled): Bias = -0.1%, SD = 4.2%
7 states (6 polled): Bias = +0.6%, SD = 4.2%
Extra Credit: only got winner wrong in IN.
WI Pred: +5%. WI Actual: +13.1%.
States with prediction error of 4% or more:
21 states: 6 out of 14
7 states: 3 out of 6
21 states (21 polled): Bias = +0.1%, SD = 4.4%
7 states (7 polled): Bias = -1.4%, SD = 4.7%
Extra Credit: only got winner wrong in IN.
IN Pred: -8%. IN Actual: +0.9%.
CO Pred: +15%. CO Actual: +6.7%.
States with prediction error of 3% or more:
21 states: 11 out of 21
7 states: 3 out of 7
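For reference, the Bias and SD figures above are just the mean and standard deviation of the signed misses across the states in each bucket. A minimal sketch (the error list here is hypothetical, not any actual pollster's numbers):

```python
import statistics

def bias_and_sd(signed_errors):
    """Bias is the mean signed miss (+ = skewed toward Obama, - = toward McCain);
    SD measures how scattered the misses are around that bias."""
    bias = statistics.mean(signed_errors)
    sd = statistics.stdev(signed_errors)  # sample standard deviation
    return bias, sd

# Hypothetical signed misses for one pollster across seven key states
errors = [-1.0, 0.5, -2.0, 1.5, -0.5, 0.0, -1.5]
bias, sd = bias_and_sd(errors)
print(f"Bias = {bias:+.1f}%, SD = {sd:.1f}%")
```

Separating the two matters: a pollster with small SD but consistent bias has a fixable house effect, while one with small bias but huge SD is just getting lucky on average.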
Posted on November 13, 2008 9:39 PM