
The 2006 Exit Polls: How Did They Perform?

Topics: 2006, Exit Polls, The 2006 Race

Today's Guest Pollster's Corner contribution comes from Mark Lindeman, assistant professor of Political Studies at Bard College.

In the wake of allegations that the 2004 U.S. presidential exit polls pointed to a stolen election, many observers wondered how the 2006 exit polls would turn out. One widespread rumor asserted that no exit poll results whatsoever would be made public until after the polls had been forced to match the official vote counts. But in fact, CNN.com once again posted a preliminary national House tabulation a bit after 7 PM Eastern, and posted tabulations in state races soon after the polls closed in each state. (Other outlets may have done so as well: at the time I had my hands full with just one.) These tabulations appear to show discrepancies fairly similar to the 2004 discrepancies, as I report below.

Using tabulations to estimate exit poll "red shifts"

The tabulations are not intended to project the final vote counts. Rather, they offer crude but useful insights into why voters voted as they did. Nonetheless, each tabulation is based on a particular vote estimate made at a particular time. The exit pollsters use different estimates for different purposes. Before vote counts begin to arrive, the pollsters can refer to at least three estimates. These are (as described in the post-election evaluation report on the 2004 exit polls):

  • The "best survey" or "Best Geo" estimate -- based on interview data (from exit polls and, in some states, telephone surveys of early and absentee voters), and also incorporating data on past results from the exit poll precincts
  • The "prior" estimate -- based primarily on public pre-election surveys (something like the averages posted on Pollster.com)
  • The "composite" estimate -- a hybrid which combines the interview data (Best Geo estimate) and pre-election polls (prior estimate).

The initial tabulations posted by CNN.com are based on the composite estimate -- not just on interview data. Therefore, they probably tend to understate the disparity between the exit poll results and the vote counts. For instance, the initial 2004 "screen shot" of Pennsylvania indicates that Kerry had about 54% of the vote, and the evaluation report confirms that the composite estimate was 54.2% [p. 22]. But the report also reveals that the interview-only Best Geo estimate gave Kerry almost 57% of the vote, or a 13.8-point margin [p. 22]. The official result -- Kerry won by 2.3 points -- constitutes an 11.5-point "red shift," or reduction in Kerry's net margin, compared to the Best Geo estimate. Because pre-election polls showed a very tight race in Pennsylvania, the composite estimate gave Kerry "only" an 8.5-point margin, or 6.2-point red shift. Overall in 2004, the average discrepancy was a 5.0-point red shift in the Best Geo estimate, but "only" a 3.6-point red shift in the composite estimate.
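
To make the arithmetic explicit, here is a minimal sketch (in Python) of the red-shift calculation, using the Pennsylvania figures quoted above. The function and variable names are illustrative only; this is not the NEP's procedure, just the subtraction spelled out.

    def red_shift(poll_margin, official_margin):
        # Red shift = the number of points by which the Democratic (here, Kerry)
        # net margin shrinks between an exit poll estimate and the official count.
        return poll_margin - official_margin

    # Pennsylvania 2004, figures from the evaluation report (p. 22):
    best_geo_margin = 13.8    # Kerry margin in the interview-only ("Best Geo") estimate
    composite_margin = 8.5    # Kerry margin in the composite estimate
    official_margin = 2.3     # Kerry's official margin of victory

    print(round(red_shift(best_geo_margin, official_margin), 1))   # 11.5 points
    print(round(red_shift(composite_margin, official_margin), 1))  # 6.2 points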

(Once vote counts start to arrive, the pollsters continually generate a variety of estimates that incorporate vote count data at both precinct and county levels. These dynamic estimates are used to inform the decisions to "call" -- or not to call -- each race. Intermittently the pollsters also generate new tabulations based on a current vote estimate. Updating the tabulations has been described by critics as replacing "pristine" exit poll results with "soiled" ones. [*] Actually, if "pristine" means "based on interviews only," none of the tabulations is pristine.)

To "estimate the estimates" from the early tabulations, I use each table in a tabulation to figure approximate party or candidate shares, then take the median of the differentials across all the tables. For instance, take this snippet of the preliminary national House poll:

[Table: vote for U.S. House by gender, preliminary national tabulation]

                     Democrat   Republican
      Men (49%)         53%         45%
      Women (51%)       57%         42%

We can use these percentages to estimate that 49% * 53% (or about 26%) of voters were men who voted for Democrats, and 51% * 57% (or about 29%) were women who voted for Democrats. So, based on this table, apparently Democrats got about 26% + 29% = 55% of the vote. Applying the same logic, apparently Republicans got about (49% * 45%) + (51% * 42%) = 43.5% of the vote, for about an 11.5% Democratic margin. However, other tables imply somewhat larger or smaller margins, due to the influence of rounding error. Using a median of estimates from all the tables reduces this rounding error, and a computer program interpreting the HTML tables can do the calculations almost instantly. (Because of mistakes I made on election night, I have cruder approximations for two uncompetitive Senate races -- Minnesota and Utah -- and no data for the gubernatorial races in Illinois and Tennessee.)
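
For concreteness, here is a short Python sketch of that arithmetic, using the gender table above as the lone input. It is an illustration only (the HTML-parsing step is omitted, and the variable names are mine): each crosstab yields an implied Democratic margin, and the median across all the tables serves as the estimate.

    from statistics import median

    # One crosstab = a list of (share of electorate, Dem %, Rep %) rows.
    # The gender table from the preliminary national House tabulation:
    gender_table = [
        (0.49, 0.53, 0.45),   # men:   49% of voters, 53% Dem, 45% Rep
        (0.51, 0.57, 0.42),   # women: 51% of voters, 57% Dem, 42% Rep
    ]

    def implied_margin(table):
        # Democratic margin (in points) implied by a single crosstab.
        dem = sum(share * d for share, d, r in table)
        rep = sum(share * r for share, d, r in table)
        return 100 * (dem - rep)

    # In practice every table in the tabulation is converted this way, and the
    # median across tables damps the rounding error in any one of them.
    tables = [gender_table]   # ... plus age, ideology, region, and so on
    print(round(median(implied_margin(t) for t in tables), 1))
    # -> 11.6 with unrounded arithmetic; the rounded hand calculation above gives 11.5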

What I found

Overall, I estimate that the initial national House tabulation gave Democrats an 11.3-point margin in the total vote. The final tabulation currently available, weighted to the pollsters' vote estimate at that time, gives Democrats a 7.6-point margin, so these figures imply a 3.7-point "red shift" -- close to the 3.6-point average in the 2004 presidential composite estimates. If the final official margin is closer to 7 points, as Mark Blumenthal has estimated, the red shift may be above 4 points. [*] However, the vote proportions are influenced at the margins by uncontested races, which appear on the ballot in some states and not in others. Without knowing exactly how NEP handles these uncontested races (or whether voters accurately report their votes and non-votes in these races), it is unclear which vote totals we should compare to the exit poll estimates.

(Note also that the House tabulation is not quite like the state-level tabulations I discuss next. The state-level tabs, posted as the polls closed in each state, should incorporate all the interview data. The House tabulation, posted long before the polls closed in many states, relies on partial data from much of the country. I have no reason to think that the complete results would be much different.)

State-level races yield broadly similar red shift estimates. In the Senate races, I estimate that the average red shift was 2.3 points, and the median red shift was 2.8 points. In races for governor, I estimate that both the average and the median red shift were 4.0 points.

[Chart: estimated red shifts by race]

As in 2004, most of the largest exit poll discrepancies were in uncompetitive races. Some observers have cited (here, here, and here) the red shifts in the Virginia and Montana Senate races as pointing to vote miscount favoring the Republican incumbents -- who nonetheless lost both races. But since those two races had near-average red shifts, there is little reason to single them out. Perhaps the most striking discrepancy is in the Minnesota governor's race. Democratic challenger Mike Hatch appeared to have an 8-9% lead in the initial exit poll tabulation, but lost to incumbent Tim Pawlenty by about 1%. The pollster.com 5-poll average gave Hatch a narrow 2.6-point margin, so the election result was closer to expectations than the exit poll result was. Minnesota also experienced one of the largest "red shifts" in 2004.

The House red shift has also been cited as evidence of vote miscount, most elaborately in a paper issued by the Election Defense Alliance (EDA). The paper argues that respondents' reports of their presidential votes in 2004 can be used as an "objective yardstick" to evaluate the 2006 poll. In the final House tabulation, (self-reported) Bush voters outnumber Kerry voters by 6 percentage points, more than double Bush's popular vote margin. EDA's analysts reason that the exit pollsters in effect had to invent millions of Bush voters (and/or delete Kerry voters) in order to match the House vote counts -- which, therefore, must be wrong. The basic flaw in this argument is that reported past vote is not an objective yardstick. On the contrary, as I have noted elsewhere, exit polls and other polls often -- even usually -- overstate past winners' vote shares. Worse, because the authors believe that Kerry won the popular vote and that Democrats had higher turnout in 2006, they end up conjecturing in a footnote that Democrats actually won the House vote by 23(!!) percentage points, a double-digit deviation from the initial tabulation. So much for defending the reliability of exit polls!

Comments

After the events of election night 2004, the NEP pollsters (Edison Media Research and Mitofsky International) announced efforts to reduce exit poll bias. Among other things, Edison/Mitofsky planned to improve interviewer training in order to minimize any selection bias on the part of interviewers. (See, for instance, Joe Lenski's interview with Mark B.) Despite the strong evidence of red shift in the 2006 data, we cannot conclude that these efforts were ineffectual. Participation bias easily could have been larger than ever in 2006. As Mark Blumenthal has noted, a Fox News/Opinion Dynamics pre-election poll found that 44% of Democrats, versus only 35% of Republicans, said that they would be "very likely" to participate in an exit poll. Differences in levels of concern about electronic voting and election fraud may (or may not) contribute to that disparity. In any case, no methodological refinement can force Democratic and Republican voters to participate at equal rates.

Interestingly, the pilot "Election Verification Exit Polls" (EVEP) conducted by Steve Freeman and Ken Warren reported similar or larger red shifts. Freeman's initial report indicates red shifts ranging from 5 to 8 percentage points in four distinct races (two House races, Senate, and governor) in the 28 Pennsylvania precincts surveyed. Freeman argues that the survey "eliminated most of the potential sources of error" (7), presumably through careful training of the interviewers. However, Freeman also reports that several interviewers who obtained relatively low completion rates "subjectively felt that Republicans were disproportionately avoiding participation" (8).

A first glance at the EVEP data files shows at least one case of large red shift where the exit poll result seems implausible. In this precinct, the exit poll registered a 63% majority for House Democratic challenger Lois Murphy (PA-06), while the official returns gave her just 44% of the vote. Registration statistics for the precinct (Chester County precinct 021, East Bradford North 2) show that registered Republicans outnumber Democrats by more than 2 to 1 (57% to 26%). If we concede Freeman's premise that the EVEP methodology was close to ideal, this result hardly inspires confidence in exit polls' inherent accuracy.

 

Comments
Elizabeth Liddle:

Very nicely put together. Interesting that it is not clear which official count should be compared with the generic poll. It didn't occur to me that there would be uncontested races. In the UK, wannabe MPs are given the job of fighting unwinnable seats as training for a winnable seat later. Party worthies are given "safe seats" - although they aren't always safe.

Re exit poll bias: it might be possible to minimise selection bias, but if the underlying cause of selection bias is an underlying differential in willingness to participate, there is no reason to suppose that being more rigorous about selection wouldn't simply move the problem on into the realm of response bias. If you don't want to participate in a poll, the easiest option is probably to evade selection. If that isn't possible, your next option would be to refuse. I don't see that "eliminating most of the potential sources of error" could ever be claimed for an exit poll, unless the pollsters had not only managed to achieve completely random selection, but also very high response rates.
