Mark Blumenthal | December 29, 2007
Topics: 2008, Iowa, The 2008 Race
Given the way we are scrutinizing the final Iowa Caucus polls, this seems like a good time to take a look at the final pre-caucus polls from 2004 and 2000. One of the questions I get most frequently is which pollsters were "most accurate" in previous years, and as the old data will show, that is a far more difficult question to answer than most people assume.
Consider the final polls for the Democratic Caucuses in 2004. Only five organizations released public polls conducted in the final week before the Caucuses, which were held on Monday, January 19 that year. Since both John Kerry and John Edwards experienced late surges in support, polls conducted before that would show considerably more "error," since they obviously missed the late surge. Also, those who continued to call through Sunday night might have some advantage in catching the late breaking trend (or, as more cynical pollsters will point out, those releasing late polls also had the benefit of seeing the results of the other earlier surveys).
The table below shows the results of the final week's polls, plus the results of the network "entrance poll," which asked participants their first preference as they entered their caucus location.
[Table: final-week 2004 Democratic Caucus polls and entrance poll results; click to pop up full-size version]
The first issue, unique to the Democratic contest, is that even a perfectly accurate survey can tell you only about the initial preference of the caucus-goers. The Iowa Democratic Party does not report the head-count of initial preferences. Instead, the official "results" they report on caucus night are based on the number of state convention delegates won by each candidate. The delegate selections are based on a second round of voting after supporters of "non-viable" candidates (those who receive less than 15% on the first round) realign to their second choice.
So as should be obvious, the second round voting means that initial vote preference -- even as measured by an entrance poll -- does not directly measure the final results. So there is an important element of inaccuracy built into any Democratic preference poll. In 2004, both Kerry and Edwards did better in the reported results than the entrance poll. Most observers attribute much of the six-point gain for Edwards to a deal struck on caucus morning between the Kucinich and Edwards campaigns that sent most Kucinich supporters into the Edwards camp on the second round. Exit pollster Joe Lenski reports that most Kucinich supporters chose Edwards as their second choice in the entrance poll.
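The realignment rule described above can be sketched in a few lines of code. This is a deliberately simplified model with hypothetical head counts, not the 2004 entrance-poll data, and it assumes (as the Kucinich-Edwards deal roughly did) that all of a non-viable candidate's supporters move as a bloc to a single second choice, where real caucus-goers realign individually:

```python
# Simplified sketch of the Democratic caucus viability rule: supporters of
# candidates below the 15% threshold on the first round realign to their
# second choice before results are tallied. Numbers are hypothetical.

VIABILITY = 0.15  # the 15% first-round threshold described in the post

def realign(first_choices, second_choice_of):
    """first_choices: dict of candidate -> first-round head count.
    second_choice_of: dict mapping each non-viable candidate to the single
    candidate his supporters move to (a bloc-transfer simplification)."""
    total = sum(first_choices.values())
    # Candidates at or above 15% survive the first round
    viable = {c: n for c, n in first_choices.items() if n / total >= VIABILITY}
    # Supporters of non-viable candidates move to their second choice
    for c, n in first_choices.items():
        if c not in viable:
            target = second_choice_of[c]
            viable[target] = viable.get(target, 0) + n
    return viable

counts = {"Kerry": 38, "Edwards": 30, "Dean": 20, "Kucinich": 12}
print(realign(counts, {"Kucinich": "Edwards"}))
# Kucinich (12%) falls below 15%, so his supporters move to Edwards
```

Note how the second-round tally can differ meaningfully from the first-round head count, which is exactly why a poll of initial preferences cannot match the reported results even if it is perfectly accurate.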
Putting aside the viability issue, which poll was "most accurate?" The answer depends on the yardstick applied, and choosing one is a tough call in a multi-candidate primary or caucus. The Des Moines Register poll has received much credit for being the only one to correctly "predict" the order of the top candidates, but notice that Edwards "led" Dean in their larger "likely caucus goer" sample by a statistically insignificant three percentage points (23% to 20%). In their narrower "definite voter" subgroup, the Edwards-Dean order was reversed (Dean had 21%, Edwards 19%). So getting the order right may have been partly a matter of good fortune.
I will spare readers the minutiae of the various error scores, but if we measure accuracy in terms of how well the polls predicted Kerry's percentage, the Des Moines Register's narrower "definite voter" subgroup does slightly better. The same Register sample also does best in terms of the average of the errors for all the candidates. Ironically, the smaller Register "definite voter" sample, the one that had Dean nominally (though not significantly) "ahead" of Edwards, was the most accurate on these criteria, even though the larger Register sample has been credited with "predicting" the order of finish.
If, on the other hand, we focus on the Kerry-Edwards margin, the final Zogby poll comes slightly closer to the actual result. In any case, the differences between the pollsters are small enough on all of these criteria that random chance was certainly a factor in determining which did best. And notice that everyone was way off on the final margin between Edwards and Dean, whether we compare to the entrance poll head count (Edwards +6), or the post-realignment actual results (+14).
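The two yardsticks above can be made concrete with a short sketch. The candidate names are real, but the poll and result figures below are made up purely to illustrate the point, which is that the average per-candidate error and the error on the top-two margin can rank the same pair of polls differently:

```python
# Two accuracy yardsticks discussed in the post, applied to hypothetical
# (not actual 2004) numbers: mean absolute error across all candidates,
# and error on the margin between the top two finishers.

def mean_abs_error(poll, result):
    """Average absolute gap between polled and actual shares, in points."""
    return sum(abs(poll[c] - result[c]) for c in result) / len(result)

def margin_error(poll, result, a, b):
    """How far the poll's a-minus-b margin missed the actual margin."""
    return abs((poll[a] - poll[b]) - (result[a] - result[b]))

actual = {"Kerry": 38, "Edwards": 32, "Dean": 18}
poll_x = {"Kerry": 35, "Edwards": 27, "Dean": 24}
poll_y = {"Kerry": 33, "Edwards": 30, "Dean": 20}

# poll_y wins on average per-candidate error...
print(mean_abs_error(poll_x, actual), mean_abs_error(poll_y, actual))
# ...but poll_x comes closer on the Kerry-Edwards margin
print(margin_error(poll_x, actual, "Kerry", "Edwards"),
      margin_error(poll_y, actual, "Kerry", "Edwards"))
```

With these invented figures, each poll is "most accurate" under one yardstick and not the other, which is precisely the scoring ambiguity the post describes.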
What about the 2000 Caucuses? The number of final week polls was again fairly limited. On the Republican side -- where the actual results are a simple head-count -- the LA Times and the Des Moines Register came closest to George Bush's ultimate share of the vote, and the Times had the narrowest (and thus most accurate) Bush-Forbes margin. But all of the polls underestimated the support received by Forbes and Keyes.
On the Democratic side, the Des Moines Register had the Gore-Bradley margin exactly right, but a University of Iowa poll, which overstated Gore's margin, had Gore's percentage of the vote exactly right.
So what's the point of these comparisons? Trying to score such a small number of polls solely on the basis of accuracy is a confusing, contentious and ultimately futile exercise. The 15% viability threshold on the Democratic side undermines the accuracy of all polls. The timing of the final poll is critical and the differences among the various pollsters in the final week have been relatively small. Moreover, different measures of accuracy can lead to different conclusions. So if you take away nothing else, remember that none of the Iowa polls have been a perfect crystal ball. All have missed significant aspects of the final results in 2004 and 2000.
Sources: I obtained the results above from the subscriber-only archives of The Hotline and the Polling Report. The SurveyUSA results are still available online, and the InsiderAdvantage survey from 2004 was released at the time via PRNewswire.