Joe Lenski Interview: Part 2

Topics: 2006, Exit Polls, The 2006 Race

Joe Lenski is the co-founder and executive vice-president of Edison Media Research. Under his supervision, and in partnership with Mitofsky International, the company of his late partner Warren Mitofsky, Edison Media Research currently conducts all exit polls and election projections for the six major news organizations -- ABC, CBS, CNN, Fox, NBC and the Associated Press. In Part I of his interview with Mark Blumenthal, he spoke generally about how the networks conduct exit polls and how they use them to project winners on Election Night. The interview concludes with a discussion of the problems the exit polls experienced in 2004 and what will be done differently this year.

I want to ask more generally about how things will be different this year. First, let's talk about the issue of when and how you will release data to members of the National Election Pool (NEP) consortium and other subscribers. In the past, and please correct me if I'm wrong, hundreds of producers, editors and reporters had access to either the mid-day estimates or early versions of the crosstabulations that you would do, and the top-line estimate numbers would inevitably leak. How is that process going to be different this year?

The news organizations are really taking this challenge of controlling the information seriously, for a couple of reasons. First, each of these news organizations has made a commitment to Congress over the years that it would not release data characterizing the winner or leader in a race before the polls have closed. So in essence, leaked data was undermining that promise they had made to Congress.

The other thing is that we know these are partial survey results. No polling organization leaks its partial survey results. If it's a four-day survey, they don't leak results after two days. Similarly, if it's a twelve-hour exit poll survey in the field, you're not going to release results after just three hours of interviews. So the data will not be distributed to the news organizations until 5:00 p.m. in 2006, and that's a change from all the previous elections. The goal is that this will be more complete data, and also that we will have more time to review the data and deal with any issues that look questionable and need to be investigated. It will still give the news organizations time to look at the data before the polls start closing.

In 2004 at least one network started posting the demographic cross-tabulations online for specific states. I believe these started appearing almost as soon as the polls closed, maybe shortly thereafter. Do you have any idea if they are planning to repeat that, or if they will hold off on posting tabulations until most of the votes have been counted?

Again, that's an editorial decision by the news organizations, but they are well within their rights, as soon as the polls close within a state, to publish those results.

Let's switch gears a bit. There are a bunch of stories out now about the increase in demand for early voting or absentee ballots. In Maryland, where there are concerns about the voting equipment stemming from problems that occurred during the primary there, there are apparently so many requests that they are running out of ballots. What are you doing to cope with the fact that fewer and fewer voters are voting at polling places?

In the states with significant amounts of absentee or early voting, we are doing telephone surveys the week before the election of voters who have either already voted absentee or have their absentee ballot and plan to send it in or vote in person before Election Day.

Have you had time to adjust for the demand in Maryland? Is Maryland going to be one of those states?

No, Maryland will not. Most of these decisions were based on the share of the vote that was absentee in 2004, and Maryland's share in 2004 was relatively small. Yes, there will be big increases in several states. Maryland is one. Ohio may be another. We saw in 2004 really large increases in states like Florida and Iowa in the number of people who voted early and absentee. So as absentee and early voting increases, the need for us to do more of these telephone survey supplements to the exit poll is going to continue, and we will need to budget for more of them.

As long-time readers of my blog will certainly know, you and Warren Mitofsky co-authored an analysis in January 2005 of the problems experienced with the 2004 exit polls and the overall system. In particular, there appeared to be a problem with interviewers either deviating from the random selection procedure or facing greater challenges in completing interviews. What will you be doing differently this year to try to fix those problems?

Well, a lot of it has already been done. We sat down with all of the NEP members after that report came out and did a thorough review of all the recruiting and training procedures for the exit poll interviewers, and we got a lot of input from the professionals at the news organizations who do their own surveys. They looked at the materials we were using. We had discussions and came up with an improved training manual. We also prepared and filmed a training video that all interviewers are required to watch. We developed a new, more rigorous training script, with a quiz or evaluation at the end of it to make sure that the interviewers understand the important facets of their job.

In addition to all that, we have the input rehearsals that we have done every year, where we have two days in which we act like it's Election Day. People call in with test results using the same phone numbers and the same questionnaires they will use on Election Day, just to make sure they understand how the process works. So all of that has already gone into effect, and I think our interviewers are much better trained this year than they were in 2004.

Another factor that came into play is that we found the error rates tended to be higher on average in precincts with younger interviewers, especially interviewers under the age of 25. This isn't to malign the abilities of interviewers under the age of 25; there just seems to be an interaction between older voters and younger interviewers that makes older voters less likely to fill out surveys presented to them by younger interviewers. So we've also made a concerted effort to increase the average age of our interviewers.

I wrote just the other day about a question buried at the end of a recent Fox News poll that showed Democrats were significantly more likely to agree to participate in exit polls than Republicans. I'm wondering if you think that differential willingness to participate is worse now than in 2004? And how can you do an unbiased random sample at the precinct level under those conditions?

Well, typically in past non-presidential off-year elections there haven't tended to be large exit poll biases. I have a feeling, though, that 2006 has, like 2000 and 2004, more passion than the typical off-year election. So that does worry me to some extent. Again, the more we can train our interviewers to follow the proper sampling procedures, the more we can eliminate a good bit of the bias that comes from people in essence volunteering to take an exit poll when they weren't randomly selected to participate.

That still doesn't correct for the differential non-response that might exist even within the sampled voters. You could properly select every fifth voter, say, as they are leaving a polling place, but if 55 percent of the Republicans fill out a questionnaire and only 50 percent of Democrats fill it out, you are still going to have a bias from non-response even if your sampling is absolutely perfect. What we are going to do, and what all the decision teams looking at this data are going to do, is know that the possibility for that type of bias exists, and we'll be especially careful before projecting any winners in states where we have seen that type of survey bias.
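The arithmetic behind that point can be made concrete. As a rough illustrative sketch (the 50/50 party split and the 55/50 response rates below follow the hypothetical in the answer above; none of these are actual NEP figures):

```python
# Back-of-the-envelope illustration: perfect systematic sampling
# (every fifth exiting voter) cannot correct for differential
# non-response. All numbers here are hypothetical.

true_rep_share = 0.50          # actual Republican share of this precinct's voters
response_rate = {"R": 0.55,    # 55% of sampled Republicans complete the questionnaire
                 "D": 0.50}    # 50% of sampled Democrats do

# Expected composition of the completed questionnaires, even if the
# interviewer flawlessly approaches every fifth voter:
rep_responses = true_rep_share * response_rate["R"]
dem_responses = (1 - true_rep_share) * response_rate["D"]
observed_rep_share = rep_responses / (rep_responses + dem_responses)

print(round(observed_rep_share, 3))  # 0.524 -- about a 2.4-point overstatement
```

So even flawless interviewer procedure leaves a built-in tilt of a couple of points whenever the two parties' willingness to respond differs, which is exactly why the decision teams treat close margins in such states with caution.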

The NEP exit polls are designed for a variety of different purposes, to provide an analytical tool to help people understand why people voted the way they did, to help assist with the projections, but they are not designed, at least as I understand it, to help detect fraud. So my question is, if you were in the business of designing an exit poll to do that in the United States, how would you design it differently than you do for NEP?

In answering this question I want to be careful not to malign the design of the exit polls as they now exist, because they really serve the purpose of the news organizations and what they need. What they need is a lot of information about who voted, how they voted, and why they voted, and to be able to present that information as quickly as possible on Election Night.

So there are some things we do in designing these exit polls that we wouldn't do in designing an exit poll whose sole purpose was to validate the results precinct by precinct, county by county, state by state. We interview on average about 100 voters per polling location, basically because of several costs. One, there are printing and shipping costs to get the questionnaires to polling places. Two, there is the interviewer cost -- you would probably need to hire more than one interviewer per location if you were doing more interviews. And three, we need to get all of this data into our system by the time the polls close in that state so that it can be reported on Election Night. So there is the time cost -- it would be cost-prohibitive to get all that data into the system on Election Day.

So if you were going to design a system solely for the purpose of validating election results: one, you would try to interview everyone, or at least approach everyone, at the polling locations where you are trying to do that validation. Two, you would have a shorter questionnaire. You would not have the twelve to twenty-five questions that are asked in our questionnaires for analytical purposes. You wouldn't be asking a lot of the questions about the important issues in the race, or asking about religion, whether people are married, income, education, etc. The studies I've seen -- and there have not been enough of them, and some of them are fairly old -- tend to conclude that you get the highest response rate and the lowest error from exit poll questionnaires that are about six to seven questions long. That would give you very little data to analyze the election, and that's why the NEP questionnaires are longer than that. But if you were trying to increase response rates and decrease bias and within-precinct error, you would have shorter questionnaires as well.

The other thing you would do is interview for all the hours in which the polls are open. Because of the time constraints, we tend to stop interviewing at most polling places about 30 to 60 minutes before the polls close, so we can get all the data for that day into our system. If you were going to do a survey to validate the entire day's voting, you would interview from poll opening all the way to poll closing, including that last hour. So those are some of the things you would change if you were designing exit polls solely to validate the voting results at the precinct level.