Mark Blumenthal | October 14, 2008
Topics: Bradley/Wilder, Kathy Frankovic, Pew Research Center, Scott Keeter
The last few days have certainly been a boon for anyone interested in the "Bradley Effect" (also known, variously, as the Wilder or Bradley-Wilder effect). Long analytical pieces appeared on Sunday in the New York Times, the Washington Post and on CBS Sunday Morning. CBS News survey director Kathy Frankovic devoted her column to it this week. Nate Silver added his thoughts. And Lance Tarrance, the Republican pollster who worked for Tom Bradley's opponent, published a skeptical piece on the subject at RealClearPolitics (skeptical puts it mildly -- he calls the notion of an "effect" skewing the results this time "a pernicious canard" that "is unworthy of 21st century political narratives").
For the unlikely few who may be tuning in to this subject for the first time, the Bradley Effect describes a pattern witnessed in the 1980s and early 1990s in a series of contests between black and white candidates, including the historic gubernatorial candidacies of Tom Bradley in California in 1982 and Doug Wilder in Virginia in 1989. Polls in those races typically got support for the black candidate about right, but usually understated support for the white candidate. Over the last ten years, however, that effect largely disappeared (for more details, see the analysis by the Pew Research Center, the paper by Harvard post-doctoral fellow Daniel Hopkins and my own posts here and here).
If you read through any one of the pieces on the subject this week you will pick up two themes: First, pollsters and survey researchers continue to engage in vigorous debate over how real the effect was twenty to thirty years ago. Second, they often disagree about exactly what the effect was or how it was measured. It can be a confusing topic for those of us who conduct surveys, so I can imagine how bewildering it is for ordinary news consumers trying to make sense of this season's political polls.
Pollsters that I respect fall into two camps on this subject. Kathy Frankovic sums up an argument that has the merit of being based on the available evidence of polls conducted over the last ten years:
Despite all the claims that Americans have moved beyond race, we still want to talk about race!
Why else was "race" practically the first explanation offered in New Hampshire this year when pre-primary polls failed to predict the outcome? Was it really the resurrection of the so-called "Bradley Effect" from all the way back in 1982?
Why else did people "forget" (or disregard) that in 2006 pre-election polls in two state-wide races involving black candidates showed NO indication of this "effect"? [...]
We've gone through this topic many times, yet we still seem to worry about people lying to pollsters, or that black interviewers will get different answers from respondents than white interviewers do. Some of us even believe that every person who has yet to declare a preference publicly must, somehow, be motivated by race.
Yet several Democratic campaign pollsters I talk to and respect still have strong memories of the illusory leads their clients had in some of those high profile races in the late 1980s and early 1990s. Their attitudes are a lot like those of this non-pollster friend of TPMCafe contributor Todd Gitlin:
Whether there is or is not a "Bradley effect," it is very helpful to believe there is one, and very dangerous to believe there is not. And since we don't know for sure, the only prudent thing to do is to assume that there is one. Any other assumption is reckless. Whether it is due to a "Bradley effect," an October surprise, or some other variant, the good guys always need 60% to eke out a 1% victory. If Dems get overconfident, then McCain will have us just where he wants us. Instead of measuring the drapes, we need to be fighting to stave off defeat.
Reading all of the articles from the past week -- and the debate that continues in our comments section -- I realize I could spend every moment of the next three weeks writing and speculating about the "effect" without getting any closer to the truth of what we might see on November 4. What would be helpful, and what I hope pollsters will pursue, are new measurements to guide us in how to interpret or allocate the remaining undecided voters and to tell us (something admittedly harder to measure) whether those refusing to participate in surveys may be skewing results in a particular direction.
If you are a pollster, I highly recommend this plan of action that Scott Keeter, director of survey research at the Pew Research Center, shared with me last June:
We plan to do three things to monitor this. One is to continue to try to measure the impact of racial attitudes on voter judgments, as we did in our March 2008 national poll. We don't have a way to bring this to bear directly upon our vote estimates, but at least it can tell us how important racial attitudes are relative to other considerations. Second, we will look at race of interviewer effects. In my polling in Virginia in 1989, race of interviewer effects were quite strong, and my polls (like almost everyone else's) understated support for the white Republican candidate against Doug Wilder. Third, we will look at respondents who are hard to reach and especially at refusal conversions to see if there is any pattern there. As you may recall, in our 1997 non-response experiment we did find some weak evidence that the most resistant respondents were less favorable to African Americans. We were unable to replicate this in our 2003 nonresponse study, but we think it's worth looking at during this election season.
Finally, with respect to the allocation of undecideds, we will very much follow the course we did in 2004 and that Andy described to you in [your 2006] interview. We'll probably discard a portion of them, and then allocate the rest based on the evidence we have about them in the survey: their demographics, their attitudes and values, and perhaps a little bit on the basis of how the leaners overall are leaning. Racial attitudes are likely to be a part of this evidence-based process, especially if we continue to find them to be correlated with the vote among the decided.
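For readers curious what that allocation step looks like in practice, here is a rough sketch in Python. To be clear, this is only an illustration of the general approach Keeter describes (discard a share of undecideds, then split the remainder according to evidence about how they lean) -- the discard share, the lean proportion, and the example percentages below are placeholder assumptions, not Pew's actual parameters.

```python
def allocate_undecideds(candidate_a, candidate_b, undecided,
                        discard_share=0.25, lean_a=0.5):
    """Discard a share of the undecided respondents, then allocate the
    remainder between the two candidates according to an assumed lean
    (in practice, derived from evidence such as demographics, attitudes
    and how leaners overall are breaking).

    All parameter defaults here are illustrative placeholders.
    """
    remaining = undecided * (1 - discard_share)
    return (candidate_a + remaining * lean_a,
            candidate_b + remaining * (1 - lean_a))

# Hypothetical example: a 47%-45% race with 8% undecided.
# Discard a quarter of the undecideds, split the rest evenly.
a, b = allocate_undecideds(47.0, 45.0, 8.0)
# a -> 50.0, b -> 48.0
```

The interesting judgment calls, of course, live outside the arithmetic: how large a share of undecideds to discard, and what evidence (including racial attitudes, per Keeter) informs the lean.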