Looking Back at the UK Polls

Topics: Anthony Wells, David Shor, Exit Polls, Joe Lenski, Nick Moon, Politics Home, Poll Accuracy, UK elections

My column for this week looks back at how the exit poll, the pre-election polls and seat projections did in last week's elections in Great Britain. Please click through and read it all.

Just after I filed my column on Friday, Joe Lenski, the co-founder of Edison Research, the company that conducts exit polling for the U.S. television networks, posted the following comment on AAPOR's member listserv (reproduced here with his permission). After noting the remarkable accuracy of the projection they made as the polls closed (a subject I explore in the column), he concludes:

When you consider how complicated the British electoral system is and that the projections are being made on the results of 650 separate constituency races using exit poll interviews from only 130 polling stations this is a spectacular achievement.

Their success also highlights the challenge that Lenski faces every 2 to 4 years. The U.K. exit pollsters conducted just one survey. On November 4, 2008, the U.S. exit pollsters fielded 51 separate surveys for which they dispatched over 1,000 interviewers to conduct more than 100,000 interviews.

Last week, the U.K. exit pollsters asked just one question (vote preference) because they were charged with just one task: estimating the number of seats won by each party. The typical November 2008 U.S. exit poll asked voters to answer at least two dozen questions about who they were (demographically) and why they made the choices they did.

U.S. exit polls are designed mostly to help explain the results. Their predictive role is mostly supplementary. They help confirm that blow-out contests are really blow-outs, but when the outcome is in any doubt, network election analysts wait for actual vote counts from randomly selected precincts or, if necessary, from all precincts before "calling" the outcome.

The point here is that the term "exit poll" can mean very different things in different places, and the design choices are ultimately up to the television networks and other news organizations that pay for them.

Two minor notes about the column: Technically, the results are known for all but one of the 650 constituencies. The Thirsk & Malton constituency will not hold its election until May 27 due to the death of one of the minor party candidates in late April. Since the Conservatives won Thirsk & Malton by a huge margin in 2005, most consider this a "safe" Conservative seat. Thus, with Thirsk & Malton included, the final result is likely to be 307 seats for the Conservatives, 258 for Labour and 57 for the Liberal Democrats.

Finally, for those wanting to dig deeper into scoring the accuracy of individual pollsters and prognosticators, David Shor has taken a first stab on his Stochastic Democracy blog.

Update: PoliticsHome just posted their own look back at how their model performed. It's worth reading in full, but here are two key excerpts of the commentary from Rob Ford (which follows up on the lengthy four-part exchange between Ford and Nate Silver):

The [PoliticsHome] model did perform well, although there was a large slice of luck involved. We were in fact wrong to assume that the Tories would outperform in the marginals, but this was balanced by Lib Dem underperformance everywhere to deliver roughly the right result.

We did see very clear patterns of differential swing in Scotland, as we predicted, although the differences were even larger than the polls had suggested. There were also differential patterns in Wales and in seats with large ethnic minority populations. These would both have been near the top of my list of expected differential effects, but we had no polling evidence on them so did not incorporate them in our model.


The big story, though, with regard to the UNS vs differential swing debate is that the pattern of swing was remarkably uniform:

The change in Conservative vote varied by less than two percentage points moving from their weakest to their strongest areas, and they actually underperformed somewhat in their weakest areas relative to the average.

The change in Labour vote varied somewhat more, but there was no systematic relationship with prior strength - if anything the party performed worse in areas where it started off somewhat weaker.

The change in Liberal Democrat vote showed more evidence of proportionality, falling back three points in the strongest areas while rising in the weaker areas. But even here the evidence of proportional swing was weak and patchy at best.

Given the lack of any clear relationship between prior strength and outcomes, we would expect proportional swing based models to perform quite poorly, and so it has proved.
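To make the distinction concrete, here is a minimal sketch of the two competing projection methods discussed above. The constituency figures and national shares are invented for illustration; neither model here reproduces the actual PoliticsHome or FiveThirtyEight implementations, which included adjustments (e.g., regional effects) beyond these bare formulas.

```python
# Illustrative sketch: uniform national swing (UNS) vs. proportional swing.
# All numbers are hypothetical, not real 2005/2010 constituency data.

def uniform_swing(prior_share, national_change):
    """UNS: add the nationwide change in a party's vote share
    (in percentage points) to its prior share in every seat."""
    return prior_share + national_change

def proportional_swing(prior_share, old_national, new_national):
    """Proportional swing: scale each seat's prior share by the
    ratio of the party's new to old national share, so the party
    gains (or loses) more where it started stronger."""
    return prior_share * (new_national / old_national)

# Suppose a party held 30% in a given seat and its national share
# moved from 33% to 36% (a +3 point swing).
print(uniform_swing(30.0, 36.0 - 33.0))       # 33.0
print(round(proportional_swing(30.0, 33.0, 36.0), 2))  # 32.73
```

The flat pattern Ford describes (roughly the same point change in weak and strong areas alike) is exactly the situation where the additive UNS model beats the multiplicative proportional one.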

Update 2: I neglected to link to the "early postmortem" from Anthony Wells of the UK Polling Report, who has posted, and will continue to post, much more on this subject.

Also, the British Polling Council issued its own statement and scoring of the accuracy of this year's final polls:

While not proving as accurate as the 2005 polls, which were the most accurate predictions ever made of the outcome of a British general election, the polls nevertheless told the main story of the 2010 election -- that the Conservatives had established a clear lead. All but one of the nine pollsters came within 2% of the Conservative share, and five were within 1%.

The tendency at past elections for polls to overestimate Labour came to an abrupt end, with every pollster underestimating the Labour share of the vote, though all but one were within 3%. However, every pollster overestimated the Liberal Democrat share of the vote.



I'd like to see how both models perform given the real vote shares (since everyone overestimated Lib Dem), but unfortunately Silver's proportional model doesn't seem fully determined with just the polling data, so we can't predict exactly how the 538 model would have turned out. (During their liveblog, 538 did post a model run with exit polling data, and it did better - I wonder how uniform swing would compare when run with the exit polling.)

It'd be nice to also see the seat-by-seat breakdown, to tally up exactly what got missed by each model.

Without that info, I'd withhold judgment - or, more precisely, it seems like both sides came out losers given that the polling on the Lib Dems was too high. Could you imagine a net miss of 30 contests in a two-party contest here (roughly Congress+Prez+Governors in a Presidential election year)?



Silver reanalyzed things with the actual nationwide vote and says that PoliticsHome's model (uniform swing + regional adjustment) did indeed do better.


Two comments:
1. The exit polls in Britain have a long history of producing very accurate forecasts. Harris (and the UK company it bought in 1970) began forecasting elections based on exit polls, without adjusting the forecasts with any actual voting numbers, in 1969, and with one conspicuous exception (our exit poll for the BBC in the second 1974 general election) these were always remarkably accurate, with results comparable to the recent UK exit poll. But as noted by Mark, the only purpose of these exit polls was to forecast the result -- in terms of parliamentary seats -- at 10 PM as the polling stations closed, and well before any actual votes were available.

2. In this UK election, the media reported all the main polls whether they (like ours) were conducted online or by telephone. In the UK, as in the rest of Europe, there are no media we are aware of that are reluctant to publish online polls. At the risk of hubris, we feel that the relative accuracy of our final online poll justifies this media coverage.

