

Generic House vs. National Vote: Part II

Topics: 2006, The 2006 Race

So how did national estimates of the "generic" House vote compare to the national vote for Congress? We learned in my last post on this topic that the national House vote is still being counted and is not yet set in stone. My estimate of the Democratic victory margin (roughly 7 points, 52% to 45%) is still subject to change. The survey side of the comparison is even murkier, with an unusually wide spread of results among likely voters on the final round of national surveys.

To try to make sense of all the numbers, we need to revisit the "generic" House vote and its shortcomings. By necessity as much as design, national surveys make no attempt to match respondents to their individual districts or to ask vote preference questions that name the actual candidates. Instead, they ask some version of the following:

If the elections for Congress were being held today, which party's candidate would you vote for in your Congressional district -- the Democratic Party's candidate or the Republican Party's candidate?

The problem is that the question implicitly assumes respondents know who is running in their district and which candidate is the Democrat and which the Republican. Such knowledge is rare, even in competitive districts, so most campaign pollsters consider the generic ballot a better measure of how respondents feel about the political parties than a tool to measure actual candidate preference.

In 1995, two political scientists -- Robert Erikson and Lee Sigelman -- published an article in Public Opinion Quarterly that compared every generic House vote result measured by the Gallup organization from 1950 to 1994 to the Democratic share of the two-party vote (D / (D + R)). Among registered voters, when they recalculated the results to ignore undecided respondents, they found that the generic ballot typically overstated the Democratic share of the two-party vote by 6.0 percentage points, and by 4.9 points for polls conducted during the last month of the campaign. When they allotted undecided voters evenly between Democrats and Republicans, they found a 4.8 point overstatement of the Democratic margin, and a 3.4 point overstatement in polls taken during October. (See also Charles Franklin's analysis of the generic vote, and the pre-election Guest Pollster contributions by Erikson and Wlezien and by Alan Abramowitz that used the generic ballot and other variables to model the House outcome.)
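For concreteness, here is a small Python sketch of the two calculations described above: dropping undecideds and taking the two-party share, versus allotting undecideds evenly first. The poll numbers (48/41/11) and the 52-48 "actual" result are made-up illustrations, not figures from any of the studies cited.

```python
def two_party_share(dem, rep):
    """Democratic share of the two-party vote: D / (D + R)."""
    return dem / (dem + rep)

def allocate_undecided_evenly(dem, rep, undecided):
    """Split the undecided percentage evenly between the two parties."""
    return dem + undecided / 2, rep + undecided / 2

# Hypothetical generic-ballot poll: 48% Democratic, 41% Republican, 11% unsure.
poll_dem, poll_rep, poll_und = 48.0, 41.0, 11.0

# Method 1: ignore undecideds and take the two-party share.
share_ignoring_undecided = two_party_share(poll_dem, poll_rep)

# Method 2: allot undecideds evenly, then take the two-party share.
d2, r2 = allocate_undecided_evenly(poll_dem, poll_rep, poll_und)
share_with_allocation = two_party_share(d2, r2)

# Hypothetical actual result: Democrats win the two-party vote 52% to 48%.
actual_share = two_party_share(52.0, 48.0)

# Overstatement of the Democratic share, in percentage points.
overstatement = 100 * (share_ignoring_undecided - actual_share)
print(f"{overstatement:.1f}")  # prints 1.9
```

On these toy numbers, ignoring undecideds overstates the Democratic share by about 1.9 points, while the even allocation yields a slightly smaller 1.5-point overstatement, the same direction of difference the POQ authors reported.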

Two years later, Gallup's David Moore and Lydia Saad published a response in Public Opinion Quarterly. They made the same comparison of the total House vote to the generic ballot "but included only the final Gallup poll results before the election -- poll numbers that are closest to the election and also based on likely voters" (p.605). Doing so, they reduced the Democratic overstatement from 3.4 points in October to an average of just 1.28 percentage points. In 2002 the Pew Research Center used their own final, off-year pre-election polls from 1994 and 1998 to extend that analysis. Their conclusion:

The average prediction error in off-year elections since 1954 has been 1.1%. The lines plotting the actual vote against the final poll-based forecast vote by Gallup and the Pew Research Center track almost perfectly over time.


Last year, Chris Bowers of MyDD put together a compilation of the final generic House ballot polls from 2002 and 2004 "conducted entirely during the final week" of each respective campaign. When I apply the calculations used by the various POQ authors to the 2002 and 2004 final polls (evenly distributing the undecided vote), the average Democratic overstatement was smaller still -- roughly half a percentage point in both 2002 and 2004.


Which brings us to the relatively puzzling result from this year. The following table shows the results for both registered and likely voters for the seven pollsters that released surveys conducted entirely during the final week of the campaign. The most striking aspect of the findings is the huge divergence of results among likely voters. The Democratic margin among likely voters ranges from a low of 4 percentage points (Pew Research Center) to a high of 20 (CNN).


Not surprisingly, the results show a much smaller spread when we look at the larger and more comparable sub-samples of self-identified registered voters. And some of this remaining spread comes from the typical "house effect" in the percentage classified as other or unsure. As we have seen on other measures, the undecided percentage is greater for the Pew Research Center and Newsweek (and Fox News among likely voters), less for the ABC News/Washington Post survey.

If we factor out the undecided vote by allotting it evenly, and compare the results to my current estimate of the actual two-party vote (with the big caveat that counting continues and this estimated "actual" vote is still subject to change), an interesting pattern emerges:


The results of three surveys -- Gallup/USA Today, Pew Research Center, and ABC News/Washington Post -- fall well within the margin of error of the current count. The average result for these three surveys understates the Democratic share of the current count by about half a percentage point. The likely voter models used by these surveys also show the usual pattern -- a narrower Democratic margin among likely voters than among all registered voters.

But three surveys -- CNN, Time and Newsweek -- show big overstatements of the Democratic vote, roughly 5 percentage points on average. And none of these three show the usual narrower Democratic margin among likely voters than among all registered voters. On the CNN survey, the likely voter model actually increases the Democratic margin.
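As a rough sketch of the error calculation, using only the two likely-voter extremes quoted earlier (Pew at 4 points, CNN at 20 points) and my approximate 7-point estimate of the count -- the full table values are not reproduced here:

```python
# Final-week generic-ballot margins (Dem minus Rep, likely voters) cited in
# the post, in percentage points, plus the post's estimate of the count.
ACTUAL_MARGIN = 7.0  # current estimate of the national House margin

poll_margins = {
    "Pew Research Center": 4.0,   # low end among likely voters
    "CNN": 20.0,                  # high end among likely voters
}

for pollster, margin in poll_margins.items():
    error = margin - ACTUAL_MARGIN
    direction = "overstates" if error > 0 else "understates"
    print(f"{pollster}: {direction} the Democratic margin "
          f"by {abs(error):.0f} points")
```

Note, as a design point, that allotting the undecided vote evenly leaves the Democratic-minus-Republican margin itself unchanged; the even allocation matters only when comparing vote shares rather than margins.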

It is not immediately apparent why the likely voter models of those three surveys yielded such different results, although, as always, the precise mechanics used on the final surveys were not publicly disclosed. Other than the general information some of these pollsters have provided previously, all we know for certain is the unweighted number of interviews each organization classified as "likely voters," and that information is not helpful in sorting out the differences. As indicated in the table below, each pollster identified roughly two-thirds of their initial sample of adults as likely voters.


What can we make of this unusual inconsistency? The overstatement of the Democratic margin among registered voters is generally consistent with past results, but the wide spread of results among likely voters is far more puzzling. The different behavior of the likely voter models looks like a big clue, but without knowing more about the mechanics of the models each pollster employed, conclusions are difficult.


Mark Lindeman:

One point I think you didn't make: the three surveys with the smallest deviations are the three that were in the field for four days. Another big clue?



Instead of posting a substantive comment, let me take this moment to wish a Happy Thanksgiving to the hardworking MP team...many thankful fellow junkies out here!



Bruce Moomaw:

Do the nationwide polls of the generic House vote take into account that the turnout in poor districts -- which tend to vote Democratic -- is very often far smaller than the turnout in rich districts, so that the percentage of the actual total nationwide House vote won by the Dems is usually distinctly smaller than the Dem candidate's percentage of the vote in the average House district?



Let me suggest two reasons that the generic preferences don't reflect the final tally.

#1 Gerrymandering. If there are four districts and one votes 95% Democratic, then even if the total vote is 55% Democratic overall, the final result can be 25% Democratic congressmen and 75% Republican congressmen. Such effects are common in Florida and Texas, but are found elsewhere as well.

#2 Voter suppression and fraud. Just because a voter intends to vote does not mean they will be allowed to vote -- they may be kept from voting by insufficient numbers of machines -- or, even when they can vote, that their vote will be recorded and not "flipped".

These problems happen all over and favor Republican friendly results overwhelmingly.


Mark Lindeman:

Bruce, that's an interesting question. Generic polls wouldn't make any effort to weight each district equally; they 'simply' try to identify likely voters. So the generic percentage "should" be closer to the % of actual vote than to the % of vote in the average district, regardless of turnout differentials across districts. (In principle, the likely voter models should pick up any differences in turnout.) However, folks who live in uncompetitive (even if not literally uncontested) districts are probably more likely to state a preference in a generic poll and then not bother to vote -- and those uncompetitive districts are disproportionately Democratic.

Freedem, the same response is relevant to your first point. It doesn't necessarily matter how many seats are won by Democrats and how many by Republicans. But if Democrats are gerrymandered into House districts so uncompetitive that they don't bother to vote in the House races, that could create a gap between generic results and vote counts.

Your second point is hard to assess, but the evidence seems weak. In 2006, looking at all the House races for which pollster.com calculated poll averages (and using preliminary vote totals), the average Democratic vote margin was 0.16 points smaller than the poll margin, and the median was 0.19 points greater. That doesn't look to me like vote suppression and fraud happening all over and favoring Republicans overwhelmingly. (Of course, the telephone polls might underrepresent the same people whose votes are ultimately suppressed.) Given the divergence between generic and district-specific polls, and the wide spread in generic results, I can't see trying to use a generic "poll of polls" average to measure vote suppression and fraud.


The least competitive Dem districts are far less competitive than their GOP analogues.

Voting Rights Act compliance/interpretation creates majority minority districts that are usually well more than 3-1 D, whereas rock-solid R seats are usually no more than 2-1 GOP. Turnout in these seats, for both socioeconomic and low general competition reasons (in most of these areas, local races are determined in Dem primaries), tends to be very low, and thus reduces D midterm turnout relative to Rs.

I suspect in a Pres year with the same public attitudes, the relative (and absolute) D House vote would have gone up as, say, Rangel and John Lewis and Bennie Thompson and others would have won by larger total vote margins.

Of course, in theory this ought to be captured by likely voter models....

On the other hand, I'm curious what the net impact on the (functionally meaningless) net vote total is of the netroots/Dean 50 state strategy of contesting every seat. Dems contested 15-20 more seats than the GOP, which obviously raises the Dem vote relative to GOP -- but if Dems pinned down a bunch of nearly but not completely safe [and very risk averse] GOP incumbents to work their districts harder/spend more of their warchests... the netroots may have pinned down a lot of GOP money to bring in additional GOP votes in strategically noncentral districts.


Samuel Knight:

Let me echo the concern that voter fraud might account for part of the 5% difference between nation-wide polls and the reported final election results. Florida 2000, Georgia 2002, Ohio 2004, Florida 2006, etc. consistently show Democratic votes not being counted -- and thus not reported.

Palast actually estimates that cumulatively that adds up to about 5% nation-wide. Of course it would be very difficult to prove -- but it should be considered as a possibility.


The Oracle:

Karl Rove visited Ohio in October. The article I read reported that he visited the state, but didn't mention whether he visited anyone while there or where he travelled while there. Strange.

Diebold has its headquarters in Ohio.

Shortly after Rove's mysterious visit to Ohio, he started proclaiming that the Republican Party would retain control of Congress, although by his "math," Democrats would make gains.

Someone on another blog mentioned that if any "fixing" of voting machines was planned, the "fixing" would have had to take place weeks in advance of the Nov. 7th elections.

Rove made a mysterious visit to Ohio several weeks before the elections.

Therefore, who did Rove see on his October trip to Ohio? Someone at Diebold headquarters, maybe? To discuss, in person, what percentage of "fixing" was necessary to assure a Republican victory on Nov. 7th? Which is why Rove had his own "math," "math" that contradicted many of the pre-election polls? "Math" that even had members of the Republican Party scratching their heads?

Immediate election reform and investigating Karl Rove should be high on the priority list of the new Democratic Congress next year.


Nick Panagakis:

I noticed a finding in the national sample exit poll that may have a bearing on this discussion. When asked "When did you finally decide for whom to vote in the U.S. House election?" 10% of voters decided "just today"; i.e., on election day. This means that 10% decided after these polls were taken. Another 9% decided "in the last three days", some after they were interviewed.

That 10% easily exceeds the percentage of undecideds in most polls -- 4% in both the CNN and Time surveys. That could mean that some voters changed their minds about a candidate on election day.

I have no similar data for 2002 and 1998, so I don't know if this level of late deciding was more of a factor this year.



Bruce Moomaw:

Mark: Given the wildly erratic variations in the "likely voter" predictions by the various pollsters, I'm much more interested in whether they took the lower average turnout in pro-Democratic districts into account in their published measurements of the nationwide generic House vote among REGISTERED voters. If they don't do so, that by itself could explain their habitual tendency to overestimate Democratic strength in House elections.

(I myself am just about to try calculating the Democratic margin in the average House seat -- which, as I say, should be distinctly higher than the Democratic margin in the nationwide total vote for the House -- and see how well it matches up to the pollsters' pre-election results among registered voters.)


Bruce Moomaw:

By my calculations (based on CQ's latest election results), the AVERAGE Democratic margin in each House seat was 13.15%. So, as I say, the smaller total turnout in poorer districts -- which tend to be more Democratic -- does indeed make for a major difference between this figure and the TOTAL vote for each party in the national House races.
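A toy illustration of the turnout effect under discussion here, with entirely made-up districts, shows why the average district margin and the pooled national margin can diverge so sharply:

```python
# Two hypothetical districts: a low-turnout, heavily Democratic district and
# a high-turnout, mildly Republican one. Tuples are (dem_votes, rep_votes).
districts = [
    (70_000, 30_000),    # D+40 margin, 100,000 votes cast
    (135_000, 165_000),  # R-10 margin, 300,000 votes cast
]

# Average of the per-district margins (each district counts equally).
avg_district_margin = sum(
    100 * (d - r) / (d + r) for d, r in districts
) / len(districts)

# Margin of the pooled national totals (each vote counts equally).
total_d = sum(d for d, _ in districts)
total_r = sum(r for _, r in districts)
national_margin = 100 * (total_d - total_r) / (total_d + total_r)

print(avg_district_margin)  # prints 15.0 (the average district is D+15)
print(national_margin)      # prints 2.5  (the pooled vote is only D+2.5)
```

Because the Democratic landslide happens in the low-turnout district, averaging district margins overstates the Democratic share of the pooled vote, which is the direction of the gap described above.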

