Pollster.com

November 19, 2006 - November 25, 2006

 

Generic House vs. National Vote: Part II

Topics: 2006 , The 2006 Race

So how did national estimates of the "generic" House vote compare to the national vote for Congress? We learned in my last post on this topic that the national House vote is still being counted and is not yet set in stone. My estimate of the Democratic victory margin (roughly 7 points, 52% to 45%) is still subject to change. The survey side of the comparison is even murkier, with an unusually wide spread of results among likely voters on the final round of national surveys.

To try to make sense of all the numbers, we need to revisit the "generic" House vote and its shortcomings. By necessity as much as design, national surveys have made no attempt to match respondents to their individual districts and ask vote preference questions that involve the names of actual candidates. Instead, they have asked some version of the following:

If the elections for Congress were being held today, which party's candidate would you vote for in your Congressional district -- the Democratic Party's candidate or the Republican Party's candidate?

The problem is that the question assumes that respondents know the names of the candidates and can identify which candidate is a Democrat and which is a Republican. Such knowledge is rare, even in competitive districts, so most campaign pollsters consider it a better measure of the way respondents feel about the political parties than a tool to measure actual candidate preference.

In 1995, two political scientists -- Robert Erikson and Lee Sigelman -- published an article in Public Opinion Quarterly that compared every generic House vote result as measured by the Gallup Organization from 1950 to 1994 to the Democratic share of the two-party vote (D / (D + R)). Among registered voters, when they recalculated the results to ignore undecided respondents, they found that the generic ballot typically overstated the Democratic share of the two-party vote by 6.0 percentage points, and by 4.9 points for polls conducted during the last month of the campaign. When they allotted undecided voters evenly between Democrats and Republicans, they found a 4.8 point overstatement of the Democratic margin, and a 3.4 point overstatement in polls taken during October (see also Charles Franklin's analysis of the generic vote, and the pre-election Guest Pollster contributions by Erikson and Wlezien and by Alan Abramowitz that made use of the generic ballot and other variables to model the House outcome).
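The two calculations in the Erikson and Sigelman comparison are simple arithmetic. A minimal sketch, using hypothetical poll numbers rather than their data, shows how the two-party share and the overstatement are computed:

```python
def two_party_share(dem, rep):
    """Democratic share of the two-party vote: D / (D + R)."""
    return dem / (dem + rep)

def allocate_undecided(dem, rep, undecided):
    """Split the undecided percentage evenly between the two parties."""
    return dem + undecided / 2, rep + undecided / 2

# Hypothetical generic-ballot reading: 48% Dem, 42% Rep, 10% undecided
poll_dem, poll_rep = allocate_undecided(48, 42, 10)

# Hypothetical actual vote: 52% Dem, 47% Rep
overstatement = two_party_share(poll_dem, poll_rep) - two_party_share(52, 47)

# Overstatement of the Democratic two-party share, in percentage points
print(round(overstatement * 100, 1))
```

With these made-up numbers the poll overstates the Democratic two-party share by about half a point; plugging in each pollster's final reading and the actual count reproduces the comparisons discussed in the POQ articles.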

Two years later, Gallup's David Moore and Lydia Saad published a response in Public Opinion Quarterly. They made the same comparison of the total House vote to the generic ballot "but included only the final Gallup poll results before the election -- poll numbers that are closest to the election and also based on likely voters" (p.605). Doing so, they reduced the Democratic overstatement from 3.4 points in October to an average of just 1.28 percentage points. In 2002 the Pew Research Center used their own final, off-year pre-election polls from 1994 and 1998 to extend that analysis. Their conclusion:

The average prediction error in off-year elections since 1954 has been 1.1%. The lines plotting the actual vote against the final poll-based forecast vote by Gallup and the Pew Research Center track almost perfectly over time.

[Image: Pew Research Center chart plotting the actual House vote against final poll-based forecasts]

Last year, Chris Bowers of MyDD put together a compilation of the final generic House ballot polls from 2002 and 2004 "conducted entirely during the final week" of each respective campaign. When I apply the calculations used by the various POQ authors to the 2002 and 2004 final polls (evenly distributing the undecided vote), the average Democratic overstatement was smaller still -- roughly half a percentage point in both 2002 and 2004.

[Image: Table of final 2002 and 2004 generic House ballot polls, compiled by Chris Bowers]

Which brings us to the relatively puzzling result from this year. The following table shows the results for both registered and likely voters for the seven pollsters that released surveys conducted entirely during the final week of the campaign. The most striking aspect of the findings is the huge divergence of results among likely voters. The Democratic margin among likely voters ranges from a low of 4 percentage points (Pew Research Center) to a high of 20 (CNN).

[Image: Table of final 2006 generic House ballot results among registered and likely voters, by pollster]

Not surprisingly, the results show a much smaller spread when we look at the larger and more comparable sub-samples of self-identified registered voters. And some of this remaining spread comes from the typical "house effect" in the percentage classified as other or unsure. As we have seen on other measures, the undecided percentage is greater for the Pew Research Center and Newsweek (and Fox News among likely voters), less for the ABC News/Washington Post survey.

If we factor out the undecided vote by allotting it evenly and compare the results to my current estimate of the actual two-party vote (with the big caveat that counting continues and this estimated "actual" vote is still subject to change), an interesting pattern emerges:

[Image: Chart of each survey's error relative to the current estimate of the two-party vote]

The results of three surveys -- Gallup/USA Today, Pew Research Center, and ABC News/Washington Post -- fall well within the margin of error of the current count. The average result for these three surveys understates the Democratic share of the current count by about half a percentage point. The likely voter models used by these surveys also show the usual pattern -- a narrower Democratic margin among likely voters than among all registered voters.

But three surveys -- CNN, Time and Newsweek -- show big overstatements of the Democratic vote, roughly 5 percentage points on average. And none of these three show the usual narrower Democratic margin among likely voters than among all registered voters. On the CNN survey, the likely voter model actually increases the Democratic margin.

It is not immediately apparent why the likely voter models of those three surveys yielded such different results, although, as always, the precise mechanics used on the final surveys were not publicly disclosed. Other than the general information some of these pollsters have provided previously, all we know for certain is the unweighted number of interviews each organization classified as "likely voters," and that information is not helpful in sorting out the differences. As indicated in the table below, each pollster identified roughly two-thirds of its initial sample of adults as likely voters.

[Image: Table of unweighted sample sizes of adults and likely voters, by pollster]

What can we make of this unusual inconsistency? The overstatement of the Democratic margins among registered voters is generally consistent with past results, but the wide spread of results among likely voters is far more puzzling. The difference in the behavior of the likely voter models looks like a big clue, but without knowing more about the mechanics of the models employed by each pollster, conclusions are difficult.


USAToday's "Gallup Guru" Blog by Frank Newport

Topics: Pollsters

One recent development I overlooked in the run-up to the election is the new blog on USAToday.com by Frank Newport, editor in chief of the Gallup Poll.  Dubbed "Gallup Guru," Newport's blog promises to chew over many of the same topics we examine here. 

In an item last week, for example, Newport reacted to the apparent pre-election belief by White House political advisor Karl Rove that "polls were obsolete because they relied on home telephones in an age of do-not-call lists and cell phones:"

Karl Rove mentions cell phones.  The impact of cell phones on survey research has been a topic of extraordinary analysis by survey researchers over the last several years.  The American Association of Public Opinion Research, as a matter of fact, is devoting a special track of research sessions on the impact of cell phones at its annual conference next May (I am the associate chair of that conference). 

Indeed, it might have been useful if Rove had read a research paper published by Dr. Scott Keeter of the Pew Research Center just two weeks before the election this year entitled:  "Cell-Only Voters Not Very Different".  Keeter concluded his analysis by noting that "… the absence of the "cell-only" population from telephone surveys is not creating a measurable bias in the overall findings."

Frank Newport contributed the debut item to our Guest Pollster corner and responded to comments left by readers.  As such, the comments section in his new blog may offer readers more routine interaction with this Gallup Guru.  One more thing we all need to read regularly. 


2006 Data Available for Download

Topics: 2006 , The 2006 Race

We are pleased to announce that we have posted spreadsheet files that include every poll result we gathered for the 2006 Senate, House, and Gubernatorial races. These are now available for download from each respective national summary page. For each poll we've included a link, the name of the pollster, the sample size, population, margin of error, and polling dates, where available.


Generic House vs. National Count - Part I

Topics: 2006 , The 2006 Race

For more than a week -- with a bout of the flu causing an unfortunate interruption -- I have been trying to come up with a reasonable tabulation of the total House vote to use as a comparison to the final rendering of the "generic" House vote by various national surveys. As it turns out, coming up with a precise total is not easy, as some votes are still being counted and other votes have not been reported.  The comparison comes with a number of caveats before we even reach the unusual spread in the generic results. 

The weekend before last, I copied the raw vote numbers reported by the Associated Press as posted on WashingtonPost.com (largely because the latter reported raw votes for all districts in a format easily conducive to spreadsheet copy-and-paste). Then last week, I went to various Secretary of State web sites to try to check any district where a large portion of the precincts (3% or more) was still uncounted on the Post/AP tallies. I was able to obtain complete counts in most districts, but not all. In some areas, counting either continues or remains incomplete pending the release of final, "certified" results.

For example, nearly a third of the vote apparently remains uncounted in California's Riverside County (mostly absentee and provisional ballots). Yet check the Associated Press tallies for Riverside's 44th and 45th Districts (as reported by CNN.com or WashingtonPost.com) and you will see that "100%" of precincts have been counted. The reports on the California Secretary of State web site are not much more help.

Or check the results for any of the House districts in Washington, where a significant share of the votes had still not been counted when news sites stopped updating their results. For example, consider the results below for Washington's 5th District. The last report from WashingtonPost.com indicated that only 64% of precincts had been counted. The last update from CNN.com indicated 75% of precincts counted. And the current unofficial tabulation available from the Washington Secretary of State's office shows a total of 232,379 votes cast. So how many votes have been counted? According to a press spokesperson for the Washington Secretary of State, the "uncounted vote" tally is maintained separately by each county in Washington. The Secretary of State's web site currently shows a total of 48,190 votes still uncounted (roughly 2% of the total).

[Image: Table of reported vote totals for Washington's 5th District]

The last line of the table shows what the total would be if we extrapolate from the percentage of precincts counted. These data suggest either that 5% to 10% of the ballots are uncounted, or that extrapolations based on previous reports of precincts counted were too high, or -- most likely -- a mix of both. One lesson to take away is that extrapolations based on the percentage of precincts counted are sometimes shaky.
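The precinct-based extrapolation is the same naive arithmetic throughout. A short sketch, with hypothetical numbers for votes counted and precincts reporting, makes the assumption explicit:

```python
def extrapolate_total(votes_counted, pct_precincts_counted):
    """Project the final vote total from the share of precincts reporting.

    This assumes every precinct casts roughly the same number of votes --
    the shaky assumption discussed above: absentee and provisional ballots
    are often reported outside the precinct counts entirely.
    """
    if not 0 < pct_precincts_counted <= 1:
        raise ValueError("pct_precincts_counted must be in (0, 1]")
    return votes_counted / pct_precincts_counted

# Hypothetical: 150,000 votes reported with 64% of precincts counted
print(round(extrapolate_total(150_000, 0.64)))  # projects 234,375 total votes
```

If late-reporting precincts are smaller than average, or if mail ballots sit outside the precinct tallies, the projection can miss badly in either direction.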

With the caution in mind that counting continues, and that the following totals are unofficial and incomplete, here are the current totals as I have them:

[Image: Table of current national House vote totals]

Now consider that these totals still leave out the votes cast in 22 districts (19 held by Democrats and 3 by Republicans) where no votes have been reported. In six of those districts -- all in Florida and all held by Democrats -- vote counts will never be available (assuming that no write-in candidates qualified in any of the districts) because no votes were cast. Florida law leaves uncontested races off the ballot.

The totals above include vote counts for another 12 no-contest races (involving 11 Democrats and 1 Republican). The incumbent received an average of 108,608 votes in those districts, roughly 60% of the total votes cast elsewhere. If we assume that each of the incumbents in the 16 missing districts outside Florida received roughly that number of votes (a big if -- totals varied widely), we would add 2.3 million votes to the Democratic total, and a little over a half million votes to the Republican total. The Democratic advantage would thus increase to a roughly seven-point margin (52% to 45%), though the exact size will depend on the assumptions we are willing to make about the various sources of uncounted votes.

And how does this estimate compare to the "generic" House vote on national surveys? I will take that up in a subsequent post, but the survey side of this comparison makes the vote count look relatively complete, precise and pristine.

Continued in Part II.


Bush Approval: 6 Post-Election Polls

Topics: George Bush

[Image: Chart of the Bush approval trend estimate, 2005 through November 15, 2006]

I'm finally back from nearly three weeks on the road. I'm one flight from home returning from Germany where my network connection seemed cursed at hotel after hotel after hotel. I'm very much looking forward to being home and getting to the backlog of post-election posts. Here is a downpayment.

President Bush has suffered a significant drop in approval since the mid-term election. Six post-election polls bring the trend estimator down to 35.0%, and that despite one rather high reading at 41%. The huge question now is what happens to presidential approval over the next couple of months leading up to the first meetings of the new Congress. As it stands, approval is only 1.2 percentage points above the all-time low of 33.8% on May 15, 2006. A new downward trend would threaten that record, adding a strongly negative public judgment of presidential performance to the rebuff of Republicans at the polls. A short-term downward shock due to the election that quickly stabilizes would not be a strong endorsement, but would certainly be better for the White House facing the new Congress. How such a stabilization, or even a new upward trend, can be engineered is a problem for the President's political advisers.

Time to run for that last flight to Madison. More after a good night's sleep.

Cross-posted at Political Arithmetik.


 
