Mark Blumenthal | November 15, 2006
Topics: 2006, The 2006 Race
I had an unhappy experience yesterday morning while still down for the count with a persistent fever (it has finally broken, and thanks to all for the kind get-well wishes). As I lay shivering, achy and generally miserable, my wife kindly ventured outside to find me some distraction in the form of our dead-tree copy of the morning's Washington Post. It took me only a minute or two to discover that Jon Cohen, the new polling director at the Post, had penned a column that mounted a veiled but clear attack on this site and others like it:
One vogue approach to the glut of polls this year was to surrender judgment, assume all polls were equal and average their findings. Political junkies bookmarked Web sites that aggregated polls and posted five- and 10-poll averages.
But, perhaps unsurprisingly, averages work only "on average." For example, the posted averages on the Maryland governor's and Senate races showed them as closely competitive; they were not. Polls from The Post and Gallup showed those races as solidly Democratic in June, September and October, just as they were on Election Day.
These polls were not magically predictive; rather, they captured the main themes of the election that were set months before Nov. 7. Describing those Maryland contests as tight races in a deep-blue state, in what national pre-election polls rightly showed to be a Democratic year, misled election-watchers and voters, although cable news networks welcomed the fodder.
More fundamentally, averaging polls encourages the already excessive attention paid to horse-race numbers. Preelection polls are not meant to be crystal balls. Putting a number on the status of the race is a necessary part of preelection polls, but much is lost if it's the only one.
We need standards, not averages. There's certainly a place for averages. My investment portfolio, for example, would be in better shape today if I had invested in broad indexes of securities instead of fancying myself a stock-picker. At the same time, I'd be in a much tighter financial position if I took investment advice from spam e-mails as seriously as that from accredited financial experts.
This last point exaggerates the disparities among pollsters. But there are differences among pollsters, and they matter.
Pollsters sometimes disagree about how to conduct surveys, but the high-quality polling we should pay attention to is based on an established method undergirded by statistical theory.
The gold standard in news polling remains interviewers making telephone calls to people randomly selected from a sample of a definable, reachable population. To be sure, the luster on the method is not as shiny as it once was, but I'd always choose tarnished precious metals over fool's gold.
I want to say upfront that I find the charge that our approach was "to surrender judgment," "assume all polls were equal" and blindly peddle "fool's gold" to be both inaccurate and deeply offensive. While it is tempting to go all "blogger" and fire off an angry response in kind, I am going to try to assume that Mr. Cohen -- whom I do not know personally -- wrote his column with the best of intentions. At the same time, it is important to spell out why I fundamentally disagree with his broader conclusions about the value of examining and averaging different kinds of polls.
[Unfortunately, having lost a few days to the flu, I need to pay a few bills and attend to a few other details here at Pollster. I should be back to complete this post later this afternoon. Meanwhile, please feel free to post your own thoughts in the comments section].
Update (11/16): Since I dawdled, the second half of this post appears as a second entry.