
Numbers Guy: Rating the Pollsters

Topics: 2006, The 2006 Race

We interrupt the previous post still in progress to bring you a feature Pollster readers will definitely want to read in full.  Carl Bialik, the "Numbers Guy" from Wall Street Journal Interactive, compared the performance of five polling operations that were particularly active in statewide elections: Rasmussen Reports, SurveyUSA, Mason Dixon and Zogby International (counted twice, once for its telephone surveys and once for its Internet panel surveys).

The most important lesson in Bialik's piece is his appropriate reluctance to "crown a winner."  As he puts it, "the science of evaluating polls remains very much a work in progress."  That's one reason why we have not rushed to do our own evaluation of how the polls did in 2006.  Bialik provides a concise but remarkably accessible review of the history of efforts to measure polling error (including a quote from Professor Franklin) and a clear explanation of his own calculations.

Again, the column -- which is free to all -- is worth reading in full, but I have to share what is, for us, the "money graph":

There were some interesting trends: Phone polls tended to be better than online surveys, and companies that used recorded voices rather than live humans in their surveys were standouts. Nearly everyone had some big misses, though, such as predicting that races would be too close to call when in fact they were won by healthy margins. Also, I found that being loyal to a particular polling outfit may not be wise. Taking an average of the five most recent polls for a given state, regardless of the author -- a measure compiled by Pollster.com -- yielded a higher accuracy rate than most individual pollsters.

Thanks, Carl.  We needed that today.  Now do keep in mind the one obvious limitation of Bialik's approach: he looked only at polls by four organizations, including just one online pollster (Zogby) and just two that used live interviewers (Mason Dixon and Zogby).  There were obviously many more "conventional pollsters," although few conducted anywhere near as many surveys as the four he looked at.

Another worthy excerpt involves Bialik's conclusions about the Zogby Interactive online surveys, especially since nearly all of those surveys were conducted by Zogby on behalf of the Wall Street Journal Interactive -- Bialik's employer.  

But the performance of Zogby Interactive, the unit that conducts surveys online, demonstrates the dubious value of judging polls only by whether they pick winners correctly. As Zogby noted in a press release, its online polls identified 18 of 19 Senate winners correctly. But its predictions missed by an average of 8.6 percentage points in those polls -- at least twice the average miss of four other polling operations I examined. Zogby predicted a nine-point win for Democrat Herb Kohl in Wisconsin; he won by 37 points. Democrat Maria Cantwell was expected to win by four points in Washington; she won by 17.

Again... go read it all.

 

Comments

Pollster, of course, was not the only one doing polling averages. To be fair, Real Clear Politics also did 5-poll averaging, and was often cited by right-leaning sites like Fox as "the RCP average". See example.

When reading or hearing which site was cited, sometimes it was a matter of which polls were wanted (the VA Sen average from pollster.com, the generic R vs D average from RCP, etc.) and sometimes which site you looked at last (I cited whoever I read). But sometimes (like the Beltway Boys on Fox) it was which site fit your political bias (pollster being neutral, RCP being R-neutral at least when it comes to polls but R when it comes to content).

I, of course, am devoted to this site, but I wish Bialik would have mentioned RCP as it is widely used and it's been around a while.

____________________

Chris G:

Carl mentions there's dispute over how to measure a poll's accuracy, but I think there's a pretty straightforward way to do this if one considers the bottom line: how well did the poll predict behavior? So, ignoring undecideds for a moment, you can

(1) include turnout by expressing all %s as shares of the registered population, and include % turnout as a variable to be treated just like the other %s

(2) sum up the absolute value of all differences between real %s and poll %s, and divide by 2.

This result should give you the % of voters whose behavior was wrongly predicted. It works no matter how many candidates are in the race: each wrongly predicted voter moves from the predicted category into one and only one other category, so every misprediction is counted exactly twice in the sum -- hence the division by 2.

As for undecideds, I don't think there's anything you can do except project all %s into that group, unless a pollster actually makes an explicit prediction of how they'll break.
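
In code, the measure Chris G describes might look something like this -- a minimal sketch in Python, with the function and variable names mine rather than his, and assuming every category (each candidate plus non-voters) is expressed as a % of the registered population:

def misclassified_share(poll, result):
    # `poll` and `result` map each category (candidates plus "did not vote")
    # to a % of the registered population; each should sum to 100.
    categories = set(poll) | set(result)
    total_diff = sum(abs(poll.get(c, 0.0) - result.get(c, 0.0)) for c in categories)
    # Each wrongly predicted voter shows up in exactly two differences,
    # so divide the total by 2.
    return total_diff / 2

# Worked example from Chris G's follow-up comment below:
poll = {"A": 25, "B": 25, "C": 25, "did not vote": 25}
result = {"A": 15, "B": 20, "C": 35, "did not vote": 30}
print(misclassified_share(poll, result))  # 15.0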

____________________

Chris G:

oh wait, one correction -- you include the % that did *not* turn out as a variable like the other %s

____________________

Rich:

Chris G - Not being a statistician, I understood almost none of what you just wrote, although it seemed interesting and vaguely important. Any chance that you could put that into layman's terms a bit?

____________________

Chris G:

Rich - sorry, just a nerdy post during a break... I'll try. There are two issues in comparing a poll with returns that I was trying to address: first, most of the polls were based on likely voters rather than the registered population. So part of a poll's accuracy should be judged by how good it was at determining the % of registered voters that actually turned out.

Second, if there are more than two candidates in a race, how do you quantify how close the poll was? Some have simply taken the difference between the winner's predicted and actual %. But this misses a lot: in the extreme case, say the poll predicted the winner's % spot on, but was way off on the 2nd and 3rd place candidates. By that method, the poll would be judged as perfect.

The method I'm suggesting attempts to deal with both issues by focusing on the % of voter outcomes that were wrongly predicted -- take the sum of the differences in %s across all candidates and turnout, and divide by two. So say, for simplicity, that a poll indicates that 25% of registered voters wouldn't vote, and that 25% each would vote for candidates A, B and C. Now, say that on Election Day, 30% do not vote, and candidates A, B and C get 15%, 20%, and 35% respectively (again, %s calculated over the entire registered population, not just those who vote). By my method, the total wrongly predicted was (5+10+5+10)/2 = 15%. In other words, the behavior of 15% of the registered population was wrongly predicted.

Or take a simpler example: say I predict that in a group of 4 people, 2 will eat an apple, 1 will eat an orange, and 1 will not eat at all. But in reality, all 4 ate an apple. How many was I off by? (2+1+1)/2 = 2 people, the intuitive answer. Say instead that 1 eats an apple, 1 eats an orange and the remaining two eat nothing. Then I was off by (1+0+1)/2 = 1 person, which is also intuitive -- there was one person who did not eat rather than eating the 2nd apple I predicted.
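
The same arithmetic, as a quick Python check of the head counts above (again only a sketch; the function and names are mine, not Chris G's):

def wrongly_predicted(predicted, actual):
    # Sum of absolute differences across categories, divided by two.
    cats = set(predicted) | set(actual)
    return sum(abs(predicted.get(c, 0) - actual.get(c, 0)) for c in cats) / 2

predicted = {"apple": 2, "orange": 1, "nothing": 1}
print(wrongly_predicted(predicted, {"apple": 4, "orange": 0, "nothing": 0}))  # 2.0
print(wrongly_predicted(predicted, {"apple": 1, "orange": 1, "nothing": 2}))  # 1.0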

Not sure if this clarifies... a comment is probably not the best forum to get into this.

____________________



