Pollster.com

Articles and Analysis

 

Visualizing Poll Accuracy In Massachusetts

Topics: 2010 , Accuracy , Martha Coakley , Massachusetts , Poll Accuracy , Scott Brown

Regular readers may recall my personal pet peeve about rushing to quick conclusions about the "most accurate pollster" in any given election. One of my objections -- all votes are typically not counted immediately -- is slightly less of a worry when applied to yesterday's Massachusetts race, as election officials there have produced an "unofficial" count based on 100% of the state's precincts. Still, the final certified count can differ from the unofficial tally, occasionally by as much as a percentage point, so please take what follows as preliminary.

Another objection of mine, however, is even more valid when looking at a single state than it was back in February, when we had several contests to consider:

[T]he whole notion of crowning a "big winner" based on a handful of polls in a handful of states is foolish. The final polls yesterday had random sampling error of at least +/- 3 percentage points. If a poll produces a forecast outside its margin of error, that's important. But if several polls capture the actual result within their standard error, chance alone is as likely as anything else to determine which one "nails it" and which miss by a point or two.
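The "+/- 3 percentage points" figure above comes from the standard formula for the 95% sampling error of a proportion. A minimal sketch (the sample sizes below are illustrative, not from any specific poll cited here):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# At 50% support, roughly 1,000 respondents yields about +/- 3.1 points,
# and roughly 500 respondents yields about +/- 4.4 points.
moe_1000 = margin_of_error(0.50, 1000)
moe_500 = margin_of_error(0.50, 500)
```

Note that this covers random sampling error only; errors from question wording, likely-voter screens, or late shifts in opinion are on top of it.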

Let's use today's results to illustrate the problem. The following chart shows Brown's percentage of the vote as reported by the public polls conducted during the last 7 days of the campaign, with an error bar based on each poll's reported margin of error. The horizontal bar represents Scott Brown's actual percentage of the vote.

[Chart: Brown's percentage of the vote by poll, with error bars (2010-01-20-Brown-Error)]

What stands out is that most of the polls produced an estimate of Brown's percentage of the vote within their own margin of error of the actual result. The exceptions are the polls on the left, which were conducted almost a week before the election. If you followed our chart or read Charles Franklin's post on Monday, you know that Brown's support rocketed up over the course of January, so we should expect the earlier polls to show Brown's support lower than it turned out to be.

Another perennial issue with measuring the accuracy of pre-election trial heat questions is how to handle the undecided percentage. I did not allocate undecideds, and some polls had a bigger undecided percentage than others. Also, in this case, some pollsters included independent Joe Kennedy as an option and others did not. Kennedy ultimately received only 1% of the vote, but the Blue Mass Group/Research 2000 poll that missed the Brown percentage by 11 points had the biggest combined percentage for Kennedy (5%) and undecided (5%).
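To make the undecided problem concrete, here is a sketch of one common alternative I did not use here: allocating the undecided share proportionally to each candidate's decided support. The poll numbers below are hypothetical, for illustration only:

```python
def allocate_undecided(results):
    """Proportionally allocate the undecided share among named candidates.

    `results` maps candidate names to vote shares (0-1); any remainder
    up to 1.0 is treated as undecided and split in proportion to each
    candidate's decided support.
    """
    decided = sum(results.values())
    undecided = 1.0 - decided
    return {name: share + undecided * share / decided
            for name, share in results.items()}

# Hypothetical poll with 6% undecided:
poll = {"Brown": 0.48, "Coakley": 0.41, "Kennedy": 0.05}
allocated = allocate_undecided(poll)  # shares now sum to 1.0
```

Whether to allocate at all, and how, changes the measured "error" for every poll, which is one reason accuracy rankings based on a single race should be taken with a grain of salt.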

So by and large, it is fair to say that all of the surveys conducted after Wednesday produced estimates of Brown's vote that were as accurate as a survey can be given the potential for random error.

You might reach a different conclusion, however, when you look at how they did forecasting Martha Coakley's percentage.

[Chart: Coakley's percentage of the vote by poll, with error bars (2010-01-20-Coakley-Error.png)]

Here, four surveys, all conducted after Wednesday, significantly understated Coakley's percentage of the vote. Two were conducted for Pajamas Media by the Republican firm CrossTarget; the others were done by InsiderAdvantage and by the Merriman River Group for InsideMedford.com. These four surveys are mostly responsible for the small understatement of Coakley's support in our overall trend estimate.

For what it's worth, all four used an automated IVR methodology and were completed in a single day, but all four also reported slightly higher undecided percentages than the other surveys conducted over the final weekend. So perhaps their significantly lower estimates of Coakley's support had something to do with their calling procedures, or perhaps they were not pushing undecideds hard enough.

I also produced the following table, which calculates the error on each poll for each candidate and the error on the margin. The two surveys that missed Brown's margin by the most were the Pajamas/CrossTarget and BlueMassGroup/Research 2000 polls conducted more than four days before the election - and they managed to miss in opposite directions.
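The error calculations in the table follow directly from the definitions: each candidate's error is the poll's estimate minus the actual result, and the error on the margin is the poll's Brown-minus-Coakley spread minus the actual spread. A minimal sketch (the poll and result figures in the example are illustrative, not taken from the table):

```python
def poll_errors(poll_brown, poll_coakley, actual_brown, actual_coakley):
    """Error on each candidate's share and on the margin (Brown - Coakley).

    Positive values mean the poll overstated that quantity.
    """
    err_brown = poll_brown - actual_brown
    err_coakley = poll_coakley - actual_coakley
    err_margin = (poll_brown - poll_coakley) - (actual_brown - actual_coakley)
    return err_brown, err_coakley, err_margin

# Illustrative: a poll showing Brown 50, Coakley 46 against a 52-47 result
# understates Brown by 2, Coakley by 1, and the margin by 1.
errors = poll_errors(50, 46, 52, 47)
```

Note that two polls can have identical margin error while missing the individual candidates by very different amounts, which is why the table reports both.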

[Table: error on each candidate and on the margin, by poll (2010-01-20-MA-Poll-Error.png)]


 

Comments
Al:

I understand you're looking at polls in the week before the election. But since the major purpose of polls (regardless of when they are taken) is to predict the outcome of an election, wouldn't it be useful to look at all results from a pollster and compare them to the actual election result?

A poll that does a good job telling you who is going to win three months out is more valuable than one that tells you three days before the election!

And early polls have much greater impact on news coverage, voter enthusiasm, campaign contributions, and even decisions to withdraw from the race.
