

SurveyUSA's Pollster Report Card

Topics: 2008, Pollster, Pollster.com, SurveyUSA

SurveyUSA, the well-known provider of automated polls, has posted a pollster "report card" based on the final polls reported by public pollsters during the 2008 primary season to date. Actually, they have put up two report cards: one for all pollsters that have released at least one survey, and another for just the 14 most active pollsters.

Several readers have emailed asking us to do an analysis or to verify their statistics. Professor Franklin has been doing post-primary "polling error" analyses that graph the performance of all polls, and should have something on the Super Tuesday polls soon. A pollster report card of our own is also on a Pollster.com to-do list that is, alas, long and growing longer each day. For now, let me just point out two important characteristics of the SurveyUSA report card:

First, their statistics are based on the last poll conducted by each organization. Typically, surveys get more accurate as we get closer to election day, and the polls conducted a week or more before the election tend to be at a disadvantage when compared against those from organizations like SurveyUSA that typically continue to call right up until the night before the election. You can decide whether that issue is a "bug" in the report card or a critical "feature" in SurveyUSA's approach to pre-election polling.

Second, SurveyUSA bases their ranking on one particular measure of polling error, which compares the margin between the percentages received by the first and second place finishers on election day to the margin as reported for the same two candidates on the final poll. There are other measures of poll error (SurveyUSA has posted a paper they authored that reviews eight such measures). Those critical of SurveyUSA will note that they typically report very small percentages for the "undecided" category, so they tend to do better on their measure of choice (Mosteller 5), which does not reallocate undecided voters. Again, your call as to whether that is a bug in the report card or a fair way to highlight one of the positive attributes of SurveyUSA's methodology. [CORRECTION: I was incorrect to imply that SurveyUSA has an advantage on the Mosteller 5 measure. If anything, that measure appears to be relatively tougher on them than other pollsters. See the response from SurveyUSA's Jay Leve here and my comments here].
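For readers unfamiliar with the measure, here is a minimal sketch of Mosteller Measure 5 as described above — the absolute error on the margin between the top two finishers, with no reallocation of undecideds. This is an illustration, not SurveyUSA's actual code, and all numbers are hypothetical:

```python
def mosteller5(poll_top2, result_top2):
    """Mosteller Measure 5: absolute error on the margin between the
    first- and second-place finishers, in percentage points."""
    poll_margin = poll_top2[0] - poll_top2[1]
    actual_margin = result_top2[0] - result_top2[1]
    return abs(poll_margin - actual_margin)

# Hypothetical example: the final poll had the leader up 48-44 (a
# 4-point margin); the actual result was 51-45 (a 6-point margin).
error = mosteller5((48, 44), (51, 45))
print(error)  # 2 points of error on the margin
```

Note that undecideds simply drop out of this calculation, which is why a pollster's handling of the undecided category interacts with the measure at all.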

Others -- most notably, Chicago Tribune pollster and frequent Pollster.com commenter Nick Panagakis -- are critical of the Mosteller 5 measure for focusing on the margins rather than individual percentages. His beef is that pollsters report a "margin of sampling error" based on individual percentages, which will be smaller than the average errors on the margin between two percentages. So, Panagakis argues, the measure used by SurveyUSA makes the magnitude of the errors seem unacceptably large.

Finally, the real challenge for any "pollster report card" is providing guidance on when the differences among the pollsters are statistically meaningful and when they are based mostly on random chance. Put another way, how much bigger should the average error be (and on how many polls should it be based) before we conclude that the difference between two pollsters is truly significant? Unfortunately, I do not have good guidance on that question. Perhaps our more statistically astute readers can chime in with their thoughts.
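One conventional (if imperfect) way to approach that question is to treat each pollster's per-race errors as a sample and test the difference in mean error, for example with Welch's two-sample t-test, which does not assume equal variances. The error values below are entirely hypothetical:

```python
import math

def welch_t(errors_a, errors_b):
    """Welch's t statistic and approximate degrees of freedom for the
    difference in mean polling error between two pollsters."""
    na, nb = len(errors_a), len(errors_b)
    ma = sum(errors_a) / na
    mb = sum(errors_b) / nb
    va = sum((x - ma) ** 2 for x in errors_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in errors_b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    t = (ma - mb) / se
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical: pollster A averages ~3 points of error over 8 races,
# pollster B ~5 points over 8 races.
a = [2.0, 3.5, 2.5, 4.0, 3.0, 2.0, 3.5, 3.5]
b = [5.0, 4.5, 6.0, 3.5, 5.5, 6.5, 4.0, 5.0]
t, df = welch_t(a, b)
print(round(t, 2), round(df, 1))  # -4.51 13.0
```

With only a handful of final polls per pollster per cycle, such tests will rarely separate pollsters of similar quality, which is the nub of the problem.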

[Update: Again, a response from SurveyUSA's Jay Leve here and my comments here]



SurveyUSA responds to this post here:


Jay H Leve



To Mark Blumenthal,

Subject: Survey USA Strike back

Survey USA is correcting the record: today they released a long page on their website to rebut some of the points that Mark made a few days ago about their report card and their surveys' accuracy. It's a nice read, convincing, and hard-knuckled toward Pollster. Mark, I would appreciate it if you took a look at it and let us know what you think.

As a blogger, I've learned that Survey USA is very sensitive about their reputation, and rightly so. However, sometimes they seem to bully people who dare to criticize them.

It's fair to say Survey USA has done a very good job this cycle, and in past years too, as their nice graphic tries to prove today. I welcome their forthrightness, and they deserve to be applauded for being transparent with the public. Few pollsters release their crosstabs free to the public, but Survey USA always tries to be open with us political junkies, who rely on their internals to see where the races truly stand and sometimes throw some flak at them.


Daniel T:

What are the odds that SurveyUSA's winning percentage on Mosteller 5 happened by chance alone?

SurveyUSA uses a trinomial distribution (win, loss, tie) to determine its winning percentage. How this works is not clear from their website, since no details of how a tie is handled are given. As a consequence, I threw out ties and went with a binomial system of wins and losses. The Mosteller 5 data reports 712 wins and 473 losses, which represents a winning percentage of .514 (712/1385). Using the binomial distribution method found here: http://faculty.vassar.edu/lowry/binomialX.html the expected mean is 692 with a standard deviation of 18.6. Thus, the results of SurveyUSA are about one SD away from the mean. Precisely, the odds that SurveyUSA arrived at this result by chance is 15%. Given the fact that we normally want P

Note, however, that there is a much bigger problem with evaluating SurveyUSA data. The winning percentages listed by them suffer from selection bias; the draws are non-random. This is so because, as they admit, they don't survey the same places/events as everyone else, nor do they survey all possible elections. For example, Mosteller 1 produces 920 wins out of 1136 for a win percentage of .809. That is a result 20 SD away from the mean, with a probability of .000001 that it happened by chance. The difficulty is that we don't know if this is a result of better polling or a result of better selection of polls (they only poll where they do well). Clearly, by the Mosteller 1 method SurveyUSA is doing something right, but it simply could be cherry-picking polls. (This would also be true for Mosteller 5, but it's just getting lost in the noise.)

In the end, because of selection bias, we really can't determine the odds of SurveyUSA's results being better than chance, because the draws are not random. And even if we assume randomness, we are left with decidedly mixed results that vary based upon the measure used.


Daniel T:

Not sure why this sentence got cut, but it should read:

Given the fact that we normally want P less than 5%, the results of the Mosteller 5 series are NOT statistically significant.


Mark Lindeman:

Daniel, my first response is lost in pollster.com hell, so I am trying again from a different browser.

(1) 712 + 473 = 1185, not 1385, so by your method the winning share is 0.601 (even higher than SUSA's), and strongly statistically significant.

(2) It's hard for me to see how SUSA could be "only poll[ing] where they do well" when they poll in more places than most of their competitors. They did do well to skip New Hampshire this year, but still, it's an impressive record. Gallup does a bit better head-to-head this year, but that is based on only three data points. I wouldn't rely on any one pollster, but SUSA is definitely in my mix.
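The corrected count in point (1) can be checked with a normal approximation to the binomial, assuming, as Daniel T did, that ties are dropped and the null hypothesis is a fair 50/50 coin:

```python
import math

# 712 wins out of 712 + 473 = 1185 head-to-head comparisons,
# against a null of 50/50 coin flips (ties dropped).
wins, losses = 712, 473
n = wins + losses
p0 = 0.5

mean = n * p0                       # 592.5 expected wins by chance
sd = math.sqrt(n * p0 * (1 - p0))   # ~17.2
z = (wins - mean) / sd              # ~6.9 standard deviations

# Two-sided p-value from the normal tail:
p_value = math.erfc(abs(z) / math.sqrt(2))
print(round(wins / n, 3), round(z, 1), p_value)
```

With the corrected denominator, the result sits nearly 7 standard deviations from chance, which is what makes Lindeman's "strongly statistically significant" verdict hold — the original 15% figure traced entirely to the 712/1385 arithmetic slip.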

