
Handicapping the House: Part I

Topics: 2006, IVR, IVR Polls, Measurement, Sampling, The 2006 Race

With the addition of House race data to Pollster.com, it is a good time to talk about the difficulty of measuring the status of the race to control Congress at the district level. Political polling is always subject to a lot of variation and error (and not all of it the random kind), but Congressional district polls have their own unique challenges.

First, we are tracking something different in terms of voter attitudes and preferences than in other races, particularly contests for President. Two years ago, voters received information about George Bush and John Kerry from nearly every media source for most of the year. Huge numbers of voters tuned in to watch live coverage of nationally televised candidate debates. In races for the Senate and House, news coverage is far less prevalent and voters pay considerably less attention until the very end of the campaign. Even then, voters still get much of their information about House candidates from paid television and direct mail advertising.

Of course, in the top 25 or 30 House races, the candidates (and political parties) have already been airing television advertising. However, if you expand the list to the next 30-40 races that could be in play, the flow of information to voters drops off considerably. Middle-tier campaigns in districts in expensive media markets (like New York or Chicago) will depend on direct mail rather than television to reach voters.

So generally speaking, voter preferences in down ballot races are more tentative and uncertain. The (Democratic affiliated) Democracy Corps survey of Republican swing districts released last week reported 26% of likely voters saying there is at least a "small chance" they may still change their minds about their choice for Congress. When they asked the same question about the presidential race in mid-October 2004, only 14% said they saw a "small chance" or better of changing their mind about voting for Kerry or Bush.

This greater uncertainty means that minor differences in methodology can have a big impact on the results. Specifically, pollsters may vary widely in the size of the undecided vote they report, depending on how hard they push uncertain voters.

Second, the mechanics of House race polling can be very different from statewide methodology. The biggest challenge involves how to limit the sample to voters within a particular House district. In statewide races the selection is straightforward: since area code boundaries do not cross state lines, it is easy to sample within individual states. So most of the statewide polls we have been tracking use a random digit dial (RDD) methodology that can theoretically reach every voter with a working land line telephone.
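To make the mechanics concrete, here is a minimal sketch in Python of how state-level RDD sampling can work. The area codes and digit ranges are simplified for illustration and are not any pollster's actual sampling frame.

    import random

    # Hypothetical area codes for one state (e.g., Connecticut); a real
    # frame would use the full, current set plus known working exchanges.
    STATE_AREA_CODES = ["203", "860"]

    def rdd_number():
        """Generate one random-digit-dial number within the state."""
        area = random.choice(STATE_AREA_CODES)
        exchange = random.randint(200, 999)  # exchanges do not begin with 0 or 1
        line = random.randint(0, 9999)
        return f"({area}) {exchange}-{line:04d}"

    # Draw a handful of numbers; every working landline in the state
    # has some chance of being generated.
    print([rdd_number() for _ in range(5)])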

No such luck with Congressional districts, whose gerrymandered borders frequently divide counties, cities, even small towns and suburbs. Since very few voters know their district numbers, pollsters use a variety of strategies to sample House districts. Most of the partisan pollsters, as well as the Majority Watch tracking project, use samples drawn from lists of registered voters (sometimes referred to as "registration based sampling" or RBS). These lists make it easy to select voters within a given district, but the lists frequently omit telephone numbers for large numbers of voters (typically 20% to 40%**). Remember the real fear that RDD surveys are missing cell-phone-only households? Right now the missing cell phone households represent roughly 6-8% of all voters. Lists, obviously, miss many more. If the uncovered households differ systematically from those with working numbers on the lists, a bias will result.
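To see how that bias arises, consider a toy simulation in Python (all percentages here are invented for illustration): when the voters a list fails to match lean differently from those it covers, a list-based sample drifts away from the true result.

    import random

    N = 100_000
    MATCH_RATE = 0.70   # hypothetical share of voters with a listed number

    # Purely illustrative: matched voters back the Democrat at 52%,
    # unmatched voters at only 45%.
    voters = []
    for _ in range(N):
        matched = random.random() < MATCH_RATE
        dem = random.random() < (0.52 if matched else 0.45)
        voters.append((matched, dem))

    true_support = sum(dem for _, dem in voters) / N
    covered = [dem for matched, dem in voters if matched]
    poll_estimate = sum(covered) / len(covered)

    print(f"True Democratic support:  {true_support:.1%}")   # ~49.9%
    print(f"List-based poll estimate: {poll_estimate:.1%}")  # ~52.0%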

Again, most partisan pollsters (including my firm) are comfortable sampling from lists, because the benefits of sampling actual voters within each district appear to outweigh the risks of coverage bias (see the research posted by list vendor Voter Contact Services for a sampling of arguments in favor of RBS). Media pollsters are generally more wary. SurveyUSA, for example, conducted a parallel test of RDD and RBS in a 2005 experiment that found a large and consistent bias in RBS sampling favoring one candidate. "SurveyUSA rejects RBS as a substitute for RDD," their report read, "because of the potential for an unpredictable coverage bias." So in House polls they often use RDD and screen for voters in the given district based on their ability to select their incumbent member of Congress from a list of all members of Congress from their area.

These various challenges have made many media outlets and public pollsters wary of surveys in House races. As of two weeks ago, we had logged more than 1,000 statewide polls for Senate or Governor into our Pollster.com database for 2006. As of yesterday, we had tracked only 173 polls conducted in the most competitive House races, but as the table below shows, only 47 of those came from independent media pollsters using conventional telephone methods.

[Table: 2006 House race polls by pollster type]

Nearly half of all the House race polls come from two automated pollsters: SurveyUSA (23) and especially the Majority Watch project of RT-Strategies and Constituent Dynamics (56). Also, more than a quarter of the total (52) are partisan surveys conducted by the campaigns, the party committees or their allies, with far more coming from Democrats (44) than Republicans (8).

The sample sizes for House race surveys are also typically smaller. While national surveys typically involve 800 to 1000 likely voters, and statewide surveys 500 to 600, many of the House polls involve only 400 to 500 interviews (although the Majority Watch surveys have been interviewing at least 1000 voters in each district).
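For a rough sense of what those sample sizes buy, here is a back-of-envelope calculation of the 95% margin of error, assuming simple random sampling and a 50/50 split (conditions real polls only approximate):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% confidence half-width, in percentage points."""
        return 100 * z * math.sqrt(p * (1 - p) / n)

    for n in (400, 500, 800, 1000):
        print(f"n = {n:4d}: +/- {margin_of_error(n):.1f} points")
    # n =  400: +/- 4.9 points
    # n =  500: +/- 4.4 points
    # n =  800: +/- 3.5 points
    # n = 1000: +/- 3.1 points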

Finally, very few districts have been surveyed by public pollsters more than a few times since Labor Day. Only two of the 25 seats now held by Republicans rated as "toss-ups" by the Cook Political Report have been polled 5 or more times. Most of these critical seats have been polled 2 to 4 times. Put this all together, and the results are likely to be more varied and more subject to all sorts of error than other kinds of political polls. After the 2004 election, SurveyUSA put together a collection of results for every pre-election public opinion poll released in the U.S. from October 1 to November 2, 2004. Their spreadsheets included 64 House race surveys, and their calculations indicate that those few House polls had substantially more error on the margin (5.82) than the polls conducted in the presidential race (3.43).

[Table: Mosteller error measures for 2004 House and presidential polls]
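For those curious how an "error on the margin" figure like the ones above is computed, here is a small sketch in the spirit of the Mosteller measures; the poll and election numbers below are hypothetical, not SurveyUSA's data.

    def error_on_margin(poll_dem, poll_rep, actual_dem, actual_rep):
        """Absolute difference between a poll's candidate margin and
        the margin in the actual vote, in percentage points."""
        return abs((poll_dem - poll_rep) - (actual_dem - actual_rep))

    # Hypothetical House poll showing D 50, R 44 in a race the Democrat
    # actually won 53 to 46: the polled margin (6) missed the real
    # margin (7) by 1 point.
    print(error_on_margin(50, 44, 53, 46))  # -> 1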

All of which is to say that while we too will be watching the House polls more closely over the next three weeks, for all the tables and numbers, we know far less about these races than meets the eye. More on what we do know tomorrow.

**Correction: Colleagues have emailed to point out that quoted match rates for list samples have improved in recent years and now typically range from 60% to 80%. I won't quarrel, although I have had past experiences where the quoted rate exaggerated the actual match once non-working numbers are purged from the sample.

 

Comments
DemFromCT:

Posts like this are why I love this site. Nonetheless, I eagerly await the next batch of SUSA polls (later today?). ;-)

Sigh. After November, CT will no longer be the center of the political universe. I'll have to adjust.

____________________

Guy:

Perhaps the best indicator here is not the survey results, per se, but the D:R ratio of released polls by the parties. Democrats have deemed it advantageous to release five times more CD polls than the Republicans (44 to 8). I bet that's a very solid leading indicator......

____________________

Gary Kilbride:

I'm astonished there haven't been more parallel RDD and RBS experiments, in different locales and conditions. SurveyUSA brags about their Cleveland 2005 experience and making the correct decision within the last 12 hours, then matter-of-factly mentions that Yale and Mitofsky came to the opposite conclusion in 2002, that RBS was superior to RDD, or showed no discernible difference. SurveyUSA lists six variances between the 2002 and 2005 studies, implying they may have influenced the different results and comparative conclusions.

OK, where are the statewide and district follow-up studies, without the six differences? Or with experimentation that alters a few of the variables here and there? It reminds me of the January 2005 Edison/Mitofsky paper evaluating the 2004 exit polls, saying well maybe this needs to be changed or perhaps that was the reason for the discrepancy. Lots of guesswork in this business. At least the exit pollsters have an excuse, one day to work with and certain restrictions, like where they can be. The pre-election phone pollsters should be able to launch any type of sampling, and often, to the point where you're not baffled about whether using a professional newscaster to ask the questions makes a difference, or being able to ask for a specific voter by name. Those were two of the six.

____________________

Another interesting study of the RDD v. RBS debate is by Donald Green and Alan Gerber of Yale.

http://www.yale.edu/isps/publications/regsamp.pdf

____________________

MIkeW:

Good article on congressional polls. As a former candidate who is now working on what used to be a close Dem race, I have been wondering about a couple of issues on the phone polling. (1) We are getting only about 15% of the people we call to actually pick up the phone. With caller ID and VM a lot more people screen out these calls than in the past. Is there any inherent bias in this (i.e., people who can afford caller ID and VM vs. the poor)? (2) What is the effect of the younger people who don't have land-based lines? (3) Likely voters who have voted regularly in the past may not vote this year and may lie to pollsters about their current intentions. This would seem to apply mostly to religious voters who have been reluctantly voting in past elections because their ministers told them to but are now disgusted and will return to their old ways of not getting themselves dirtied by politics. Is there any way to pick this up? Are the Dems even more ahead than the LV polls are showing?

____________________

Sean:

Great post, and true. Longtime poll junkies will remember late polls showing Jeff Fortenberry and Emanuel Cleaver in trouble in their respective districts in '04. I distinctly remember Kos and Bowers opining that Democrats would likely be competitive in Texas, and polls seeming to buttress that view. And who can forget October '02 polls showing Simmons up by double-digits on Courtney, Nancy Johnson tied with Maloney, Kahn up on Gingrey, Chocola losing to Long-Thompson (or up by 12), Julia Carson narrowly defeating Brose McVey, Helen Bentley ahead of Ruppersberger, Pearce only up by 2 on Smith, and Reiser defeating Chris Bell.

Well, lots of us may forget them, but it just reminds us how mercurial House polling can be.

____________________


