House Districts vs. Poll Results: Part II

Topics: 2006, The 2006 Race

On Monday, I looked at how well our averages of polls in U.S. House Districts did in comparison to the unofficial vote counts, and when we averaged the averages, they compared quite well. A related and important question is how well those averages did within individual districts. How often did our House poll averages - sometimes based on polls conducted over a span of more than a month - provide a misleading impression of the eventual result on Election Day? In most cases the pre-election averages in House races coincided with the eventual results, but there were a handful of districts where those averages gave a misleading impression of the outcome of the race. The tougher question is whether that misimpression was the fault of the polls or of the combination of their timing and subsequent "campaign dynamics" that changed voter preferences.

That last point is important. Pre-election polls attempt to be snapshots of voter preferences "if the election were held today." No one should expect a head-to-head vote preference question asked in the first week of October to forecast the outcome of an election held a month later. And as noted here previously, our final averages often included polls stretching back a month or more before Election Day. So consider today's discussion as much about the merits of averaging polls in House races as about the merits of the polls themselves.

Let's start with the averages that we posted on our House map and summary tables. We averaged the last five polls in each district (including those conducted by partisans and sponsored by the campaigns or political parties). We then classified each race as either a "toss-up" or "lean" or "strong" for a particular candidate based on our assessment of the statistical strength of that candidate's lead.

We were able to find at least one poll in 87 districts, but only 34 with five or more polls. As such, the House race averages often spanned far more time than our statewide poll averages. The final averages were based on 304 polls, but 58 of those polls (in 38 districts) were conducted before October. More than a third of the polls used in the averages (124) were conducted before October 15. So it would not be surprising to see averages of these results produce misleading results in any district with a late trend.

In comparing the averages to the results, I see ten districts with "reversals" - districts that we had designated as "leaning" or better to one candidate while a different candidate prevailed. Specifically:

It is worth noting that all but two of these "reversals" were seats we classified as either "lean" Democrat or Republican (a lead beyond one standard error, but not two). That is to say, the lead of the ultimately unsuccessful candidate was relatively small, though obviously not small enough to rate "toss-up" status. The exceptions were New Hampshire-1 and Florida-13, which we had classified as strong Republican and strong Democrat (based on average margins of 11.8% and 7.2%, respectively).
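To make the rating scheme concrete, here is a minimal sketch of the one/two standard-error classification described above. The function name, the use of the standard error of the mean across the last five poll margins, and the tie-breaking choices are my assumptions for illustration; the actual Pollster.com rules may have differed in detail.

```python
import statistics

def classify_race(margins):
    """Classify a race from its last five poll margins (Dem % minus Rep %).

    A hypothetical implementation: lead within one standard error of the
    average is a "toss-up", between one and two is "lean", beyond two is
    "strong". Not the actual Pollster.com code.
    """
    avg = statistics.mean(margins)
    if len(margins) > 1:
        se = statistics.stdev(margins) / len(margins) ** 0.5
    else:
        se = float("inf")  # a single poll cannot support a lead rating
    lead = abs(avg)
    if lead <= se:
        return "toss-up"
    party = "Democrat" if avg > 0 else "Republican"
    rating = "lean" if lead <= 2 * se else "strong"
    return f"{rating} {party}"
```

For example, five polls showing consistent five-point Democratic margins would rate "strong Democrat", while a one-point average lead with typical poll-to-poll scatter would rate only "lean".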

Some of these reversals are explicable. For example, all of the public polls released in Ohio-15 and Kansas-2 were conducted prior to October 11, so it is entirely possible that those early surveys were right and that late trends moved the ultimate winner ahead by Election Day. Also, the results for Pennsylvania-4, Arizona-5, and New Hampshire-1 all showed trends toward the ultimate winner. The polls in Florida-13 also showed a late trend to the current nominal leader, Republican Buchanan. In Nebraska-3 and Kansas-2, partisan polls with results highly favorable to their sponsors also helped skew the averages in what may have been a misleading direction.

Finally, as many readers know, the results from Florida-13 remain in dispute due to an unusually high rate of "under-votes" in one county that appear to result from a poorly designed layout of the touch-screen electronic voting equipment in that county. A compelling draft analysis by four political scientists (Frisina, Herron, Honaker and Lewis) argues that Democrat Christine Jennings would have prevailed but for the roughly 15,000 votes lost because of the touch-screen equipment.

I had anticipated some of these issues and, in a post just before the election, presented a variety of different "scorecards" based on applying various filters (only late polls, only independent polls, etc.). At the time, the various alternative averages made very little bottom-line difference in terms of the number of seats we classified as leaning Democrat or Republican. For the sake of brevity, I will not go through every permutation, but the following table summarizes the number of reversals that would have resulted given various screens we could apply to the averages (that I described in my post on Monday).


Not surprisingly, applying the various filters does reduce the number of "reversal" districts, those where one candidate led in the poll averages but another won. As we throw out early polls or those conducted by partisans, however, a different kind of "miss" increases, those where we miss a switch in party because no polls are available. Our rule on Pollster.com was to assume no change in party for districts with no polls available. However, had we included only independent polls conducted after October 15, we would have made the wrong assumption about four districts previously held by Republicans where Democrats prevailed: Florida-16, Kansas-2, New York-24 and Pennsylvania-7. So remarkably, the rate of "missed" outcomes is roughly the same regardless of the filter applied.
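The trade-off described above can be sketched in a few lines. The poll records below are invented for illustration, and `filtered_average` is a hypothetical helper, not the actual Pollster.com code; it shows how a late-polls-only, independents-only screen can leave a district with no polls at all, forcing the "no change in party" assumption.

```python
from datetime import date

# Hypothetical poll records: (district, end_date, partisan_sponsor, margin),
# where margin is Dem % minus Rep %. These rows are invented for illustration.
polls = [
    ("FL-16", date(2006, 9, 20), True, -4),
    ("FL-16", date(2006, 10, 28), False, 3),
    ("KS-02", date(2006, 10, 10), False, -6),
]

def filtered_average(polls, district, after=date(2006, 10, 15),
                     independent_only=True):
    """Average the margins for one district after applying the screens
    discussed above. Returns None when no polls survive the filter."""
    kept = [m for (d, end, partisan, m) in polls
            if d == district and end > after
            and not (independent_only and partisan)]
    return sum(kept) / len(kept) if kept else None
```

In this toy data, KS-02 has no independent poll after October 15, so the filter returns `None` and the fallback rule (assume the seat stays with the incumbent party) would have called it wrong.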

Of course, there are a few districts mentioned above where the reasons for a late "reversal" are not immediately apparent. I'll try to take up some of these, as well as the question of how some of the more prolific pollsters fared in a subsequent post.


Bill Fitzsimmons:


Just went through your file, and you are missing several publicly posted polls (available on Real Clear Politics, at least).

September 1-4 Arizona Daily Star poll for AZ-08
September 13-18 Greenberg Quinlan Rosner Research poll for AZ-08
August 27-29 RT Strategies/Constituent Dynamics poll for AZ-08
August 27-29 RT Strategies/Constituent Dynamics poll for FL-13
October 11-13 Research 2000 poll for FL-16
August 20-24 Benenson Strategy Group poll for FL-22
September 24-27 Anzalone Liszt Research poll for FL-22
September 18-19 Garin Hart Yang poll for IN-08
August 27-29 RT Strategies/Constituent Dynamics poll for MN-06
September 13-17 Grove Insight poll for NH-02
October 14-16 Mellman poll for NV-02
September 22-24 Zogby poll for NY-20
August 27-29 RT Strategies/Constituent Dynamics poll for NY-24
August 28-29 Tarrance Group poll for NY-24
September 20-21 Benenson Strategy Group for NY-25
September 21-24 Goodwin Simon poll for OH-15
September 17-18 Cooper & Secrest poll for OH-18
September 10-12 Public Opinion Strategies poll for OH-18
September 24 Rasmussen poll for VT-AL
September 18-19 Research 2000 poll for VT-AL
August 27-29 RT Strategies/Constituent Dynamics poll for WI-08

Misidentified one Selzer & Co poll as IVR rather than phone

Didn't identify IL-08 DCCC poll as Benenson Strategy Group

Maybe some were excluded for methodological issues (using generics before primary, for example), but curious if they overall help or hurt your averages.


Mark Blumenthal:


Thank you for bringing these to our attention, and to anyone else who knows of a poll or polls we missed, please do not hesitate to do the same.

Please note that, as you guessed, we intentionally omitted the specific RT Strategies/Constituent Dynamics surveys you listed above because they used a "generic" vote question rather than using candidate names (because primaries had not been held or candidates not selected). Similarly, the Research 2000 poll in Florida-16 posed a question using the name of candidate Joe Negron, even though his name did not appear on the ballot.

Most of the others were internal campaign polls that we missed, and we appreciate the heads up. Since the handful that we should have included were mostly conducted in September, they should not have a noticeable impact on the averages, but we'll check.

Thanks again



I have a question that's probably unrelated to polling, but there may have been a poll at some point addressing it. I've been throwing out the hypothesis, without any evidence at this point, that the party in power gains voter support simply from being in power--that is, a segment of voters are neither partisan nor ideologically bound, and will vote for their incumbent Member of Congress if his or her party is in power, because he or she can "bring home more bacon". (I really don't think the vast majority of Americans know how much bacon their Members of Congress bring home, or how much more or less they'd have under a different Member of Congress; however, they do see news reports and get franked mailings in which their Members tout their ability to deliver federal dollars, so the voter has a general concept.) Under this theory, the wave of Democrats in this cycle who won by narrow margins will be more secure in two years; some voters who selected their incumbent Republican to keep the bacon flowing will switch allegiances. In turn, those districts narrowly won by incumbent Republicans this cycle would be in even greater jeopardy in '08. Is there any evidence of this as a phenomenon? It would help explain how the Republicans, after their huge sweep in 1994, continued to hold the House for five cycles, but by historically narrow margins each time.




I thought about this as well and I (unscientifically) attributed it to "An object at rest tends to stay at rest."

Really, what we're talking about is the power of incumbency, which at least has a lot of anecdotal evidence if not actual hard facts.

I agree that the Democrats are likely to hold the house in 2 years, unless they really screw up.

And, based on the senate seats up for re-election, prospects look good for Democrats there, too.

The Democratic majority--which, last time I checked, stood at 30 seats--is larger than any the Republicans have enjoyed since the Truman era.

Barring any major national wave (like those in '94 and now '06), I think the Democrats are in a solid position.



Thanks for responding, Shane, but I'm really talking about something else. You're describing "inertia," a tendency to leave things alone without cause to disrupt them. I'm describing, for lack of a better term, civic greed--the desire to make your elected official a member of the majority party, regardless of political stripe. I came up with this theory when Rove and Bush were baldly predicting the GOP would hold the House, contrary to what all the polls showed. I came to the conclusion Rove was simply lying when he claimed to have seen polls in the Republicans' favor--he was trying to persuade non-partisan voters to stick with their Republican incumbents, because if they didn't, they'd be saddled with a Democratic Representative in a Republican House, and no clout.


Bill Fitzsimmons:

Mark, thanks for the quick response.

I certainly see your reasoning on the exclusions, which is very appropriate for this project. My interest is more in a compendium of all public polls, with the reasoning that if a pollster puts their name on it, they are basically making a judgment about the accuracy of the instrument (generic, listing Negron in the Foley race, whatever).

By the way, I found one other issue. You have listed OnPoint Polling and Research as doing Phone surveys, but they are IVR.

