Yost & Borick: The Silver Standard

Topics: AAPOR Transparency Initiative, Berwood Yost, Chris Borick, Disclosure, Franklin and Marshall College, Muhlenberg College, Nate Silver, Poll Accuracy

This guest pollster contribution comes from Berwood Yost, director of the Floyd Institute for Public Policy at Franklin and Marshall College, and Christopher Borick, director of the Muhlenberg College Polling Institute.

Nate Silver's compilation of performance data for election polling in the United States and his ratings of polling organizations should be applauded for increasing the public's ability to judge the accuracy of the ever-increasing number of pre-election polls. Helping the public determine the relative effectiveness of polls in predicting election outcomes can be compared to Consumer Reports equipping individuals with information about which products meet minimum standards for quality. As with the work of Consumer Reports, Mr. Silver is explicit about his methodology and provides substantial justification for the assumptions he adopts in his calculations. But, as is the case in the construction of any measure, there are some reasonable questions that can be raised about what was included in those calculations. One such question has to do with the "affiliation bonus."

Silver's decision to include an "affiliation bonus" for pollsters that are either members of the NCPP or have joined AAPOR's Transparency Initiative has significant consequences for his final ratings. Table 1 provides two pollster-introduced error (PIE) estimates for a sub-group of academic polling organizations: one using the calculation for all telephone pollsters and the other using the calculation for pollsters who receive the "affiliation bonus." We chose this group because all of these organizations, regardless of their affiliation with the NCPP or the AAPOR Transparency Initiative, consistently release full descriptions of their methodology and provide detailed breakdowns of their results. The scores highlighted in yellow are those reported for each pollster on Silver's site. As Table 1 shows, the rankings differ substantially depending on whether a firm receives the "affiliation bonus."

[Editor's note: Chris Borick informs us that Muhlenberg College has signed on to the AAPOR Transparency Initiative, but did so after June 1, so it was not classified as a participant in Silver's ratings. Berwood Yost tells us that Franklin and Marshall intends to sign on, but has not yet done so.]

[Table 1: Pollster-introduced error (PIE) scores for academic polling organizations, with and without the affiliation bonus - 2010-06-14-borick-Yost-538scores.png]

As part of his rating method, Mr. Silver chooses to discount the "raw scores" for polls despite noting that those scores are the most "direct measure of a pollster's performance." His primary justification for discounting the "raw scores" is that his project is "not to evaluate how accurate a pollster has been in the past--but rather, to anticipate how accurate it will be going forward" (taken from Silver's methodological discussion). Those who read his rankings should take care to understand the distinction that Silver is making between past performance and expected future performance. We are not sure why scores based on past performance are inferior to PIE, and he does not make a sufficiently strong case for the very heavy discount that he applies to those scores in his calculations. It would be valuable to see more evidence about what makes PIE a better indicator of polling performance.

The "affiliation bonus" may indeed be correlated with the performance of polls, but is it actually the affiliations that lead to better performance, or is some other unmeasured variable at work? Silver's calculations show that the "affiliation bonus" explains only three percent of the variance in his regression equation and has a p-value greater than .05. One may ask whether that is sufficient evidence to provide such a strong advantage to some pollsters.
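To make the variance-explained and p-value question concrete, here is a minimal sketch of the kind of check being discussed: regress a poll-error measure on a set of covariates with and without a binary affiliation indicator, then compare the incremental R-squared and inspect the indicator's p-value. The data, covariates, and effect sizes below are simulated for illustration only; this is not Silver's actual model or data.

```python
# Illustrative only: simulated data, not Silver's ratings model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(538)
n = 200

# Hypothetical standardized covariates a rating model might include.
sample_size_factor = rng.normal(0.0, 1.0, n)   # e.g., log sample size
days_to_election = rng.normal(0.0, 1.0, n)     # e.g., lead time before the election
affiliated = rng.integers(0, 2, n)             # 1 = NCPP / Transparency Initiative member

# Simulated poll error with only a small affiliation effect built in.
error = (3.0 + 0.5 * sample_size_factor + 0.4 * days_to_election
         - 0.25 * affiliated + rng.normal(0.0, 1.0, n))

X_base = sm.add_constant(np.column_stack([sample_size_factor, days_to_election]))
X_full = sm.add_constant(np.column_stack([sample_size_factor, days_to_election, affiliated]))

base = sm.OLS(error, X_base).fit()
full = sm.OLS(error, X_full).fit()

print(f"R^2 without affiliation indicator: {base.rsquared:.3f}")
print(f"R^2 with affiliation indicator:    {full.rsquared:.3f}")
print(f"Incremental variance explained:    {full.rsquared - base.rsquared:.3f}")
print(f"p-value on affiliation coefficient: {full.pvalues[3]:.3f}")
```

If the incremental variance explained is small and the coefficient's p-value exceeds the conventional .05 threshold, as the authors note is the case in Silver's regression, it is reasonable to ask whether the indicator deserves the weight it receives in the final ratings.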

In closing, we would once again like to applaud Mr. Silver for taking on the important task of applying solid methods to the evaluation of pollster accuracy. The public needs such efforts in order to sift more effectively through the avalanche of polls that greets them every election season. Our intention is simply to note that the scores produced by Silver should be evaluated in terms of both their strengths and limitations.

 

Comments
AySz88:

To be fair, his p-value is greater than 0.05, but only barely (0.054). He also says he hasn't decided whether to give the bonus to pollsters affiliating after June 1, and I'd assume that's because he sees the same problem with correlation/causation there. I guess we'll find out whether affiliation leads to better performance in a few years, as pollsters are signing on now....

____________________


