Mark Blumenthal | February 10, 2008
Topics: 2008 , Pollster , Pollster.com , SurveyUSA
SurveyUSA, the well-known provider of automated polls, has posted a pollster "report card" based on the final polls reported by public pollsters during the 2008 primary season to date. Actually, they have put up two report cards: one for all pollsters that have released at least one survey, and another for just the 14 most active pollsters.
Several readers have emailed asking us to do an analysis or to verify their statistics. Professor Franklin has been doing post-primary "polling error" analyses that graph the performance of all polls, and should have something on the Super Tuesday polls soon. A pollster report card of our own is also on a Pollster.com to-do list that is, alas, long and growing longer each day. For now, let me just point out two important characteristics of the SurveyUSA report card:
First, their statistics are based on the last poll conducted by each organization. Typically, surveys get more accurate as we get closer to election day, so polls conducted a week or more before the election tend to be at a disadvantage when compared against those from organizations like SurveyUSA that routinely continue to call right up until the night before the election. You can decide whether that issue is a "bug" in the report card or a critical "feature" in SurveyUSA's approach to pre-election polling.
Second, SurveyUSA bases their ranking on one particular measure of polling error, which compares the margin between the percentages received by the first and second place finishers on election day to the margins as reported for the same two candidates on the final poll. There are other measures of poll error (SurveyUSA has posted a paper they authored that reviews eight such measures). Those critical of SurveyUSA will note that they typically report very small percentages for the "undecided" category,
so they tend to do better on their measure of choice (Mosteller 5), which does not reallocate undecided voters. Again, your call as to whether that is a bug in the report card or a fair way to highlight one of the positive attributes of SurveyUSA's methodology. [CORRECTION: I was incorrect to imply that SurveyUSA has an advantage on the Mosteller 5 measure. If anything, that measure appears to be relatively tougher on them than other pollsters. See the response from SurveyUSA's Jay Leve here and my comments here].
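To make the measure concrete, here is a minimal sketch of the Mosteller 5 calculation as described above: the absolute difference between the poll's margin and the actual margin for the two candidates who finished first and second. The function name and the candidate percentages are hypothetical, chosen only to illustrate the arithmetic (note how a poll with a large undecided pool can still score a perfect zero if it nails the margin):

```python
def mosteller_5(poll_top2, result_top2):
    """Absolute difference between the poll's margin and the actual
    margin for the first- and second-place finishers, in points.
    poll_top2 and result_top2 are (leader_pct, runner_up_pct) pairs."""
    poll_margin = poll_top2[0] - poll_top2[1]
    actual_margin = result_top2[0] - result_top2[1]
    return abs(poll_margin - actual_margin)

# Hypothetical example: final poll shows 48% to 41% (11% undecided),
# actual result is 52% to 45%. Both margins are 7 points, so the
# Mosteller 5 error is zero even though no undecideds were reallocated.
print(mosteller_5((48, 41), (52, 45)))
```

Because only the gap between the top two candidates enters the calculation, a large undecided share does not by itself penalize a pollster on this measure.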
Others -- most notably, Chicago Tribune pollster and frequent Pollster.com commenter Nick Panagakis -- are critical of the Mosteller 5 measure for focusing on the margins rather than individual percentages. His beef is that pollsters report a "margin of sampling error" based on individual percentages, which will be smaller than the average errors on the margin between two percentages. So, Panagakis argues, the measure used by SurveyUSA makes the magnitude of the errors seem unacceptably large.
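Panagakis's arithmetic point can be sketched as well. For two candidate shares drawn from the same sample, the variance of the difference picks up extra terms, so the margin of error on the margin is larger than the familiar margin of error on a single percentage. The function names and the n = 600 example are my own illustration, not from any actual poll:

```python
import math

def moe_share(p, n, z=1.96):
    """Approximate 95% margin of error for one candidate's share p."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_margin(p, q, n, z=1.96):
    """Approximate 95% margin of error for the margin (p - q) between
    two candidates in the same multinomial sample:
    Var(p - q) = [p(1-p) + q(1-q) + 2pq] / n."""
    return z * math.sqrt((p * (1 - p) + q * (1 - q) + 2 * p * q) / n)

# Hypothetical poll of n=600 with shares of 48% and 45%: the margin of
# error on either share is about +/-4 points, but on the 3-point margin
# between them it is nearly double that, around +/-8 points.
n = 600
print(round(moe_share(0.48, n), 3))
print(round(moe_margin(0.48, 0.45, n), 3))
```

That roughly two-to-one ratio is why errors stated on the margin, as in Mosteller 5, look so much larger than the "margin of sampling error" pollsters quote for individual percentages.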
Finally, the real challenge for any "pollster report card" is providing guidance on when the differences among the pollsters are statistically meaningful and when they are based mostly on random chance. Put another way, how much bigger should the average error be (and on how many polls should it be based) before we conclude that the difference between two pollsters is truly significant? Unfortunately, I do not have good guidance on that question. Perhaps our more statistically astute readers can chime in with their thoughts. [Update: Again, a response from SurveyUSA's Jay Leve here and my comments here]