# Pollster.com

## Articles and Analysis

### Jacob Eisenstein: Using Kalman Filtering to Project the Senate

##### Topics: 2006, The 2006 Race

Today's Guest Pollster Corner contribution comes from Jacob Eisenstein. While not technically a pollster -- Eisenstein is a PhD candidate in computer science at MIT -- he recently posted an intriguing U.S. Senate projection (and some familiar-looking charts) based on a statistical technique called "Kalman filtering" that he applied to the Senate polls. He explains the technique and its benefits in the post below.

Polls are inexact measurements, and they become irrelevant quickly as events overtake them. But the good news about polls is that we're always getting new ones. Because polls are inexact, we can't just throw out all our old polling data and accept the latest poll results. Instead, we check to see how well our poll coheres with what we already believe; if a poll result is too surprising, we take it with a grain of salt, and reserve judgment until more data is available.

This can be difficult for the casual political observer. Fortunately, there are statistical techniques that allow this type of "intuitive" analysis to be quantified. One specific technique, the Kalman Filter, gives the best possible estimate of the true state of an election, based on all prior polling data. It does this by weighing recent polls more heavily than old ones, and by subtracting out polling biases. In addition, the Kalman Filter gives a more realistic margin-of-error that reflects not only the sample sizes of the polls, but also how recent those polls are, and how many different polling results are available.

The Kalman Filter assumes that there are two sources of randomness in polling: the true level of support for a candidate, which changes on a day-to-day basis by some unknown amount; and the error in polling, which is also unknown. If the true level of support for a candidate never changed, we could just average together all available polls. If the polls never had errors, we could simply take the most recent poll and throw out the rest. But in real life, both sources of randomness must be accounted for. The Kalman Filter provides a way to do this.
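To make the two-noise-source idea concrete, here is a minimal one-dimensional Kalman filter sketch in Python. Everything here -- the function names, the poll values, the variances, the daily drift -- is invented for illustration; this is not Eisenstein's actual model or his parameters.

```python
# A minimal 1-D Kalman filter for combining poll results.
# All numbers are illustrative assumptions, not the model's actual parameters.

def kalman_predict(est, est_var, daily_var, days):
    """Let the estimate go stale: support drifts by an unknown amount each day."""
    return est, est_var + daily_var * days   # mean unchanged, uncertainty grows

def kalman_update(est, est_var, poll, poll_var):
    """Fold one new poll into the current estimate of candidate support."""
    k = est_var / (est_var + poll_var)   # Kalman gain: trust the poll more when we are uncertain
    new_est = est + k * (poll - est)     # move toward the poll, weighted by the gain
    new_var = (1 - k) * est_var          # combining data always shrinks the variance
    return new_est, new_var

# Start from a poll showing 52% support with a 3-point standard error.
est, var = 52.0, 3.0 ** 2
# Five days pass (assume support drifts with variance 0.5 per day)...
est, var = kalman_predict(est, var, daily_var=0.5, days=5)
# ...then a new poll arrives at 49% with a 2.5-point standard error.
est, var = kalman_update(est, var, poll=49.0, poll_var=2.5 ** 2)
```

Note how the two extremes in the paragraph above fall out of the same code: with `daily_var=0` the filter converges to a plain average of all polls, and with `poll_var=0` it simply adopts the newest poll.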

Pollsters are happy to tell you about margin-of-error, which is a measure of the variance of a poll; this reflects the fact that you can't poll everybody, so your sample might be too small. What pollsters don't like to talk about is the other source of error: bias. Bias occurs when a polling sample is not representative of the population as a whole. For example, maybe Republicans just aren't home when the pollsters like to call -- then that poll contains bias error that will favor the Democratic candidates.

We can detect bias when a poll is different from other polls in a consistent way. After repeated runs of the hypothetical biased poll that I just described, careful observers will notice that it rates Democratic candidates more highly than other polls do, and they'll take this into account when considering new results from this poll. My model considers bias as a third source of randomness; it models the bias of each pollster, and subtracts it out when considering their poll results.
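A rough sketch of how a "house effect" could be estimated from scatter alone, along the lines described above. The pollster names and margins are hypothetical, and this deliberately simplifies the actual model, which treats bias as a third random variable inside the filter rather than a fixed offset.

```python
# Estimate each pollster's bias as its average deviation from the all-poll
# consensus, then subtract it out. Polls here are invented for illustration.
from collections import defaultdict
from statistics import mean

polls = [  # (pollster, margin for the Democrat, in points)
    ("A", 7.0), ("B", 4.0), ("C", 5.0),
    ("A", 8.0), ("B", 3.0), ("C", 6.0),
]

consensus = mean(m for _, m in polls)

# A pollster's estimated bias is how far its polls sit from the consensus on average.
deviations = defaultdict(list)
for pollster, margin in polls:
    deviations[pollster].append(margin - consensus)
bias = {p: mean(devs) for p, devs in deviations.items()}

# Debias each poll before handing it to the Kalman filter.
corrected = [(p, m - bias[p]) for p, m in polls]
```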

The Kalman Filter can be mathematically proven to be the optimal way to combine noisy data, but only under a set of assumptions that are rarely true (these assumptions are listed at my own site). However, the Kalman Filter is used in many engineering applications in the physical world -- for example, the inertial guidance of rockets -- and is generally robust to violations of these assumptions. In the specific case of politics, I think the biggest weakness of this method is that elections are fundamentally different from polls, and my model does not account for the difference between who gets polled and who actually shows up to vote. I think this can be accounted for, but only by looking at the results of past elections.

Paul Horwitz:

So what results does he get with his Kalman filter approach? Come on, don't leave us in suspense!

____________________

Nathaniel Lichtin:

The results he finds don't seem to match reality: less than a 1% chance of Maryland going Republican, 1.5% for New Jersey, and only 14.5% for Montana. These races are closer than the results he finds.

____________________

Benjamin Schak:

Yeah, the Kalman filter is a good method. I don't have time to do any election prediction this cycle, but it'd be my basic tool if I did.

I used a multivariate Kalman filter (the 51 variables being the 50 states and DC) to predict the 2004 election. That had the benefit of allowing measured swings in states that get polled to influence one's predictions of results in states that don't get polled.

I've never gotten around to writing up what I did, but here are a couple results I remember off the top of my head: 1) I discovered several months before the election that OH would be the state most likely by far to produce a situation like FL in 2000 (with WI as a distant runner-up). 2) I discovered several months before the election that Kerry was far more likely than Bush to win the electoral college while losing the popular vote, so I wasn't the least bit surprised when that almost happened. 3) I called every state except WI correctly. (And I blame WI on the preponderance of biased numbers from the Badger Poll.) 4) I correctly predicted both candidates' shares of the two-party popular vote with some insanely low margin of error. (I think it was something like 0.15%.)

One question for you about an issue that I struggled with: In its basic form, the Kalman filter requires you to assume some value for the additional daily variance that accrues as polls grow stale. How much variance do you add per day, and why? (I see someone above criticized your results as being unreasonable. This might be because your daily added variance is too low, particularly towards the end of the race.)

____________________

Nathaniel -- One very important caveat is the "percent chance of winning the election" numbers assume that the election is held TODAY. The method does not predict future events. If things break their way, the Republicans can come back and win in New Jersey. But as of today, something like nine of the last ten polls in New Jersey show Menendez ahead -- so if the election were held today, I'm very confident that he'd win.

____________________

Benjamin --

I think it's a really good point that variance in support increases towards the end of the race, since that's when people are paying attention to politics. I haven't figured out how to model this yet, but it's something I plan to think about after the election. What did you do?

____________________

Eric Applegate:

How well would this approach have modeled past elections? For instance, using poll data from 1-2 weeks before the election, what would the results have been for the Kalman filter, versus the actual results?

____________________

Bruce Caswell:

In response to the second poster, this is not a forecasting technique; it is a more precise way of interpreting the results of the "snapshot" of the election taken by the polls. Hence, the "1% chance of the Republican winning in Maryland" means that there is only a 1% chance of the average of the polls being wrong at this time. It is a statistical statement about who is winning now, not who will win on election day. Many things could happen between now and election day that could change the final outcome.

____________________

Amit Lath:

If bias is indeed hardwired, say pollster A always favors Democrats, then your analysis should be able to measure and correct for it. You can compare his output to actual (previous) election results. Of course pollsters can change their methods after the reality check of an actual election, but hopefully this is a second order effect.

____________________

Alan:

That is a nice piece of work and puts the social scientists to shame ;-) (I'm also an engineer, so if you are a social scientist, please don't take the comment seriously :-)

The number that's currently called "the chance of winning each seat" is really the chance of winning that day's unbiased poll. If you really wanted to use poll figures to estimate the probability of winning an election, I think you could/should add a "fudge factor" to account for the differences between what happens in a poll and what happens on election day. In other words, if you could poll every voter, your sampling error or so-called "margin of error" would be zero, but you still would not know with 100% certainty what the outcome of an election would be, even if it were held the same day. I guesstimate that the difference between a well-done poll and actual election results in a Senate race is something on the order of 2-4% ON TOP OF the sampling error. Therefore, I think you would get a better estimate of the chance of winning each seat if you added this "fudge factor", i.e., used 6-8% as the uncertainty of each poll, rather than 4% (the margin of error). What do the real social scientists think about this number? Is an additional 2-4% too much or too little?

Also, the little "ramps" in the graphs from one day to the next are a little strange. In reality, the changes in the estimates reflect information impulses. The ramp comes from drawing a straight line from one day's estimate to the next, as if the time axis were continuous. In fact, it's not continuous; it's discrete (sampled), with a sample period of one day. The actual model and how it works would be clearer if you drew the center lines using only horizontal and vertical line segments, i.e., made each change a step up or a step down, not a ramp. Similarly, the 95% confidence intervals could be drawn as continuously widening cones with the wide ends cut "flat", i.e., drawn with vertical ends.

Alan

____________________

Alan:

P.S. You could in fact get an estimate of what might happen on election day if you ran the Kalman filter forward to Nov 7 without any new inputs. The cones would widen, reflecting greater uncertainty over what might happen in the future, but I believe the math behind the numbers would continue to be valid. This would work as long as the cones stay near the center of the graph, because of course, the cones can never extend below zero or above 100, so the linear math would break down if you tried to run the filter too far into the future.
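Concretely, that forward run is just the filter's prediction step repeated with no measurement updates; the win probability is then read off the widened distribution. A sketch with invented numbers (the margin, variance, and daily drift are all hypothetical):

```python
# Project today's filtered estimate forward to election day with no new polls,
# under a normal model. All numbers are illustrative assumptions.
from math import erf, sqrt

def win_probability(margin, var):
    """P(margin > 0) for a normally distributed margin."""
    return 0.5 * (1 + erf(margin / sqrt(2 * var)))

margin, var = 4.0, 2.0 ** 2   # today's snapshot: Dem +4, standard deviation 2
daily_var = 0.5               # assumed day-to-day drift in support
days_to_election = 14

p_today = win_probability(margin, var)
p_election = win_probability(margin, var + daily_var * days_to_election)
# The projected probability is closer to 50% than today's snapshot,
# because the cone widens as uncertainty accumulates.
```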

____________________

Amit Lath:

Alan, shouldn't a well-done estimate (poll) come closer than 2-4% to the actual measurement (election)? Rather than take this "fudge factor" as an additional error, one could figure out the mechanism and correct for it (and of course, there will be some uncertainty associated with this correction factor, but hopefully it will be less than the correction itself).

My thought on getting these correction factors was to look at individual polling firms and how far off they were in 2004.
Maybe that's not the best idea for estimating this factor, but I find it curious that polling firms that are known to be off don't pay a higher price in terms of error bars in the final fit.

____________________

Amit --

Just to be clear -- the model already corrects for pollster bias. It does this by counting pollster bias as a source of "error" in polling results, and correcting for it. A poll will be assigned a high bias if its results are consistently different from those of other polls.

Considering previous election results might be a more accurate way to do this, but as you say, there is a feedback phenomenon.

____________________

Alan:

Hello Amit, I'm assuming the factor is random, unknown, or unmodelable, so you can't "correct" for it, but you can account for it. One way to determine the factor might be to run the Kalman filter against past elections and determine what factor best accounts for the variance in the difference between the Kalman filter's best estimate and the actual election results. This would also be a good check of the Kalman filter: if it's a good model, then the mean difference should approach zero.

____________________

Amit Lath:

Thanks Jacob. From your original post and your webpage, I understand that bias is estimated by scatter (by the way, dangerous, no? The guy out on the tail could be the one guy who's actually managed to phone the factory workers on the owl shift...)

But never mind all that. However you estimate bias, do you correct the reported values, or do you take the bias and blow up the error bars so they have a bigger chi-squared/lower weight in the overall fit?

I do think you are doing pretty much the right thing here. But one foolproof way to check that you got the bias/correction factors right is: apply the corrections and see whether the variance gets smaller. But if you are actually using the distance from the mean to estimate the bias, then by definition the scatter is going to be lower after corrections. If you had another way to estimate bias, you could see how well it worked by looking at what it did to the variance, no?

Of course, if EVERYONE is biased one way, then this correcting by looking at the scatter fails overall.

Anyway, thanks for replying. I was course 8 many moons ago but 6.002 and 6.003 were some of the most fun courses I took.

____________________

Amit Lath:

Thanks Alan. It is a good idea, running the Kalman filter on past elections, so I am sure Jacob must have done that (preprints?). One could then ask how much of the change in the numbers is due to a real time dependence and how much is scatter due to bias.

But assuming the correction is random is itself a kind of model. Then the mean of N polls will converge to the right point, modulo the time dependence that Jacob is accounting for with the Kalman filter.

But what if it is not random? Here is a toy model: the scatter is due to a pool of voters who all vote one way (say, Republican) and whom various pollsters sample with different efficiencies. This is a simple, one-parameter model, but there is no way to get at it by looking at the scatter. Looking at real elections, you may have a shot.

Of course, to make things more complicated, I gather most if not all of the pollsters correct their data for numbers of R vs D, men vs. women, old vs young, and probably don't even add the systematic uncertainty from this correction to their numbers.

____________________

ekg:

In your attempt to model "momentum", do you explicitly account for the implied preferences of the undecideds who become "decideds" between one poll and its immediate follow-on? I.e., if prior undecideds seem to be breaking for one candidate over another, is this explicitly considered? There are some polls out there (e.g., IL Governor, ID Governor, AK Governor) with well over 10% undecided at this late stage.

____________________

Hi ekg. The model pretty much ignores undecideds -- it just considers who's ahead among decided voters, and how that lead compares to the variance. You could say that this is equivalent to assuming undecided voters will break 50/50. It would be interesting to look at all the ways in which election results deviate from polls; for example, that undecided voters usually break 2:1 for the challenger. Maybe in 2008.
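To see why the break assumption matters, here's a toy allocation with invented numbers (the race and all percentages are hypothetical):

```python
# Allocate undecideds under two different break assumptions (hypothetical race).
dem, rep, undecided = 44.0, 42.0, 14.0

# The model's implicit assumption: undecideds split evenly, so the lead is unchanged.
even = (dem + undecided / 2, rep + undecided / 2)

# The alternative rule: undecideds break 2:1 for the challenger (here, the Republican).
two_to_one = (dem + undecided / 3, rep + 2 * undecided / 3)
```

With a large undecided pool, the two rules can even disagree about who is ahead: an even split keeps the incumbent's lead, while a 2:1 break flips it.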

____________________

Ian Valenzuela:

Wow, it seems as though each and every prediction was on target. Interestingly, the skeptical reactions to the Maryland and New Jersey projections were misguided; those races were much less close than conventional wisdom allowed. The tight races in MT, MO, TN, and VA were also predicted, as were the eventual winners (assuming Webb's lead holds). My advice: take these prediction results and market yourself to the media in the next election cycle.

____________________

Benjamin Schak:

Jacob--

I did something fairly ad hoc. I think I took whatever my assumed daily variance was (actually, in my case it was a variance/covariance matrix since I used a multivariate version of the KF) and just multiplied it by factors during the week before the election. Essentially, I ended up treating 10/28 as if it were two days, 10/29 as if it were three days, and so on. I think I boosted the assumed variance on some other days too, like the major parties' conventions.

____________________

