Charles Franklin | January 18, 2010
There has been a wider than normal range of polling results in the last two weeks from the Massachusetts Senate special election. This has been further clouded by a number of leaked internal polls and polling by relatively unknown and unproven pollsters, some partisan but others not. And most importantly, the rapid shifts in the race, reflected across all the polls, make this a fast-moving target. So let's take a moment to consider what we could reasonably conclude based on the data.
But no matter how you slice the data, the only reasonable conclusion is that Scott Brown has moved from well behind to a lead somewhere between 4 and 11 points.
The chart above shows all the polls we have available as of 12:36 a.m. Monday morning. That includes new PPP and Pajamas Media/CrossTarget polls released late Sunday evening. The chart also includes the leaked polls, mostly from the Coakley campaign but one from Brown as well. These leaked polls are NOT included in most of the estimates above, though they are not out of line with the rest of the data.
So what might you believe about these data? You could refuse to cherry-pick the polls. That has long been our view here at Pollster.com. Our job is to summarize the trends as best we can, without partisan favor. If you do that, we get an 8.8-point Brown lead.
Perhaps you only trust non-partisan polls. Then the Brown lead is 6.8 points.
Maybe you are a Dem who doesn't trust the Republican pollsters. Then Brown leads by 6.5 points.
Or you are a Dem who doesn't trust the non-partisan pollsters either and who does believe in the leaks from the Coakley campaign. Then Brown's lead is 3.8 points. (This is the only estimate that includes the leaks.)
Or you are a Rep who trusts GOP and nonpartisan polls only. Then Brown leads by 11.3. (There aren't enough Rep polls to run a Rep-only estimate to parallel the Dem-only one, but I'd think an 11-point lead would be satisfying enough for Reps.)
There may be other ways to cut these data (IVR vs conventional phone, pollsters you've heard of vs ones you haven't) but it seems quite unlikely that any but the most selective reading of these data can find that the race remains a dead heat. Brown has a lead, as of Sunday night.
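The poll-slicing exercise above can be sketched in a few lines of Python. The margins and pollster labels below are hypothetical, not the actual polls in the chart; the point is only the mechanics of subsetting by pollster affiliation and reading the trend at election day.

```python
import numpy as np

# Hypothetical polls: (days before election, Brown-minus-Coakley margin, pollster type).
# These numbers are illustrative only, not the real Massachusetts polls.
polls = [
    (-14, -9, "nonpartisan"), (-12, -5, "dem"), (-10, -2, "nonpartisan"),
    (-8,   3, "gop"),         (-6,   1, "nonpartisan"), (-5, 3, "dem"),
    (-4,   4, "nonpartisan"), (-3,   6, "gop"),         (-2, 7, "nonpartisan"),
    (-1,   8, "nonpartisan"),
]

def trend_lead(subset):
    """Fit a linear trend to (day, margin) points and evaluate it at day 0."""
    days = np.array([p[0] for p in subset], dtype=float)
    margins = np.array([p[1] for p in subset], dtype=float)
    slope, intercept = np.polyfit(days, margins, 1)
    return intercept  # trend margin at day 0 (election day)

all_polls = polls
nonpartisan = [p for p in polls if p[2] == "nonpartisan"]
no_gop = [p for p in polls if p[2] != "gop"]

print(f"All polls:        Brown {trend_lead(all_polls):+.1f}")
print(f"Nonpartisan only: Brown {trend_lead(nonpartisan):+.1f}")
print(f"Excluding GOP:    Brown {trend_lead(no_gop):+.1f}")
```

Each filter changes which polls enter the fit, which is all the "cuts" above amount to: the estimates differ, but every reasonable subset of rising margins ends with Brown ahead.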
Let's back up a step to look at the data without the clutter. Here are just the polls, no trends fit.
Without the lines it is quite clear that the movement has been sharply towards Brown. Trace out what you like and ignore what you don't: in the early polls Coakley is convincingly ahead. Then between about day -8 and -5 the polls are balanced above and below dead even. Since then no poll has shown a Coakley lead.
In my models in the first chart, I use linear fits rather than our usual local regressions. The reason is that there are still not very many polls, and once we subset them by party there simply aren't enough cases to get good local regression fits. That subsetting is the main point here. But it also turns out that the local regression on all the data isn't very far from the linear fits I use above. Here is the comparison:
The blue trend is our standard estimate, and it wiggles a bit because there are only 12 cases. If we use a somewhat less sensitive local regression, we get the black line. And the red linear fit isn't very far from either of the two local fits. So I'm willing to give up some flexibility in the fit for a bit more robustness, and especially for the ability to fit the models by party of pollster, which was the lede above.
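For concreteness, the linear-versus-local comparison can be sketched with a hand-rolled, loess-style smoother. Everything here is an assumption for illustration: the data are made up, and `local_linear` with its tricube weights is a generic stand-in, not the actual Pollster.com estimator.

```python
import numpy as np

def local_linear(x, y, x0, bandwidth):
    """Tricube-weighted local linear fit evaluated at x0 (a loess-style smoother)."""
    d = np.clip(np.abs(x - x0) / bandwidth, 0.0, 1.0)
    w = (1.0 - d**3) ** 3                       # tricube kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                              # fitted value at x0

rng = np.random.default_rng(0)
days = np.linspace(-14, -1, 12)                 # 12 hypothetical polls
margins = 0.9 * days + 8 + rng.normal(0, 2, size=days.size)

slope, intercept = np.polyfit(days, margins, 1)
wiggly = local_linear(days, margins, -1.0, bandwidth=5.0)    # sensitive fit
smooth = local_linear(days, margins, -1.0, bandwidth=15.0)   # less sensitive fit
linear = slope * -1.0 + intercept
print(f"Local (narrow) {wiggly:+.1f}, local (wide) {smooth:+.1f}, linear {linear:+.1f}")
```

As the bandwidth widens, the local fit converges toward the global linear fit, which is the flexibility-for-robustness trade described above: a narrow window chases every poll, a wide one behaves like the straight line.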
Finally but significantly, we are seeing more pollster variation in this race than normal. Looking at the residuals around the trend estimates, past experience with 2004, 2006 and 2008 state and national contests has pretty consistently found that most of the polls (about 95%) fall within +/- 5 points of the trend estimate. That is an empirical observation, not a theoretical one, but it has been generally consistent in our data. How do these polls compare?
Only half of the current polls are inside +/-5 points of the linear trends. The number of polls is small, and this race is more dynamic than most. But one has to wonder about the problems of polling in a special election, the role of partisan and new players in the polling and the heavy use of IVR polls. This is much more variation in polls than we normally see in general elections.
Let's also recall the NY-23 special election, which was not polling's finest hour. The last three polls there had Hoffman up by 5, 5 and 17 points. Our final trend estimate based on all the polls had Hoffman up by 5, 41.8 to 36.8.
Polling special elections is hard. Tuesday we'll see how hard, and who was good and/or lucky.