Pollster.com

Articles and Analysis

 

More on the CT Exit Poll Experiment

Topics: Exit Polls

Today I want to catch up and fill in a few details provided by our intrepid reader/reporter Melanie about that experimental CBS/New York Times exit poll conducted last week in Connecticut.   As regular readers may recall, Melanie first brought the exit poll to our attention after being interviewed last Tuesday.   I asked her what she remembered about the experience, starting with the interviewer.  Here is her report: 

The poll person was a young guy, maybe 20 - looked like a college student (tall, a little messy, scraggly hair, very diffident). I think he had some sort of ID around his neck or clipped to his pocket, but I didn't notice what it said  (he was in the midst of a discussion with the woman running the voting site about where he could be set up - she said she had just gotten a faxed letter from someone giving this young guy permission to be there, wherever he wanted to be as long as he didn't interrupt anyone, so she told him he could stay out of the sun - he was about 5 feet from the door of the site).

Melanie's experience gives us a unique window into the real-world challenge of trying to select voters randomly as they exit a polling place.  She happened to overhear the interviewer's most important interaction of the day, the one that enabled him to stand just outside the door of the polling place.  Had he been forced to stand farther away, his ability to sample exiting voters randomly would have been severely compromised.  The post-election report provided by the two companies that conducted the 2004 exit polls (Edison Research and Mitofsky International**) showed that errors in John Kerry's favor were more than twice as large (-12.3) when interviewers were forced to stand 100 feet or more from the door of the polling place than when they could stand right outside it (-5.3, p. 37).

At first I assumed this conversation occurred first thing in the morning, just as the polling place opened.  But I asked her to clarify and she said:

I got there about 9, which is early for me, but not for them (they had opened at 6 - the folks inside told me about 10% of the possible voters had already been in by then).

So put it all together and notice the reference to a faxed permission letter that the polling place official "had just gotten."  From the description, it appears that when the interviewer first arrived, the polling place officials would not let him stand near the exit door.  So he presumably called his supervisor and they found a way to fax a letter to the polling place official, and just as Melanie was voting, they relented and allowed the interviewer to stand near the door.  Up until that time, the interviewer had to try to intercept 10% of the day's sample from a less advantageous position. 

Second, notice the clear impression that the interviewer's appearance made on Melanie.  He "looked like a college student" with "messy scraggly hair" and a "diffident" attitude, yet she did not notice what appeared on his ID.  Now dear reader, ask yourself what you might guess about the politics or personality of that interviewer.  How would you react to an approach from such a person?  Is it possible that your choice -- whether you make eye contact, approach with interest or walk briskly in the opposite direction -- might have some relationship to your politics?  The data in the Edison/Mitofsky report from 2004 strongly suggests that it does.

I will let Melanie continue with her description of how she happened to be interviewed: 

He was alone -- he actually didn't approach me -- I saw a couple of other folks doing his poll and when I walked over to see if he'd ask me to do it too he just handed me the device.  (My voting site is pretty small -- just one voting machine -- with about 700 registered democrats -- in a part of a smallish suburb east of New Haven -- we never get exit polls here). 

An exit poll interviewer is usually instructed to select exiting voters at random using a predetermined selection rate.  In other words, they are given an interval (usually between 1 and 10) and told to select, say, every third or every fifth voter.  Some are told to select every voter that exits the polling place.  We do not know the interval used here, although with roughly 700 registered voters it was likely greater than one -- that is, not every voter should have been selected.  We will also never know whether Melanie was one of the voters who should have been selected, but it is awfully interesting that in this instance she approached the interviewer rather than the other way around.
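The selection rule described above is simple systematic sampling. A minimal sketch of how it works (the interval of 5 is purely illustrative -- we do not know what rate this interviewer was actually assigned):

```python
import random

def select_exiting_voters(num_voters, interval):
    """Systematic sampling: pick a random starting offset, then take
    every `interval`-th exiting voter thereafter."""
    start = random.randrange(interval)  # random offset gives every voter an equal chance
    return [v for v in range(num_voters) if v % interval == start]

# With roughly 700 voters and an illustrative interval of 5,
# about 140 voters should be approached over the day.
sample = select_exiting_voters(700, 5)
print(len(sample))  # 140
```

The random starting offset matters: if the interviewer instead waits for willing-looking voters to wander over, as apparently happened here, the "every nth voter" discipline breaks down and self-selection creeps in.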

The data from the Edison/Mitofsky report on the 2004 exit polls showed that errors in Kerry's favor were nearly three times greater where interviewers had to approach every tenth voter (-10.5) than in smaller precincts where interviewers were instructed to interview every voter or every other voter (-3.6, p. 36).   The clear implication is that where there was more potential for the random selection procedure to break down (either because of more chaos at the polling place or just more room for error), Kerry voters were more likely than Bush voters to volunteer to participate. 

So here we have another anecdote that helps convey the most important challenges in conducting an exit poll.  Some have argued that collecting "almost perfect" random samples outside a polling place is easy.  It is not. 

Now let's hear about the most novel aspect of the CBS/New York Times Connecticut experiment, per Melanie's report: the use of an "electronic device" to conduct the interviews.  Exit polls have traditionally been conducted with a paper "secret ballot" given to respondents on a clipboard so they can fill it out and drop it into a "ballot box" without revealing their answers to the interviewer.  In Connecticut, CBS and the Times experimented with something new: 

The device was sort of like what the UPS guys use, except horizontal -- about 11" wide and 7" high. There was a LCD screen making up the middle of it, and the questions were in the center of the screen -- you just tapped the answers with a pencil to register your choices. He didn't get involved at all -- just handed it to me and took it back when I was done.  He seemed to have 2 or 3 of them.  He also had a little thing that looked sort of like a Palm pilot or blackberry that (I think) he inserted into the back of the poll device after it was filled out -- I assumed that was the way each set of info was transmitted to the collection folks, but I could be completely wrong about that.

The downside of the paper ballot is that it forces the interviewer to stop intercepting voters several times during the day to tally up responses and call in respondent-level data, reading off each answer to someone on the other end of the line who enters everything into a computer.  The NEP exit polls (and those done by their forerunner, VNS) typically had interviewers call in only half of the respondent-level data in order to cut down on the phone-call bottleneck.  This new technology automates both tabulation and transmission, so interviewers can cover the polls constantly all day and 100% of the collected data gets transmitted immediately. 

How hard was it to use the electronic device?  Melanie continues:

The last question on the poll was about how easy or difficult the text on the screen was to read, so clearly they are experimenting with that part of it too (it was a little tough to read, especially with sunlight reflecting off the LCD screen -- or maybe that's just my aging boomer eyes).  But it was very user-friendly -- I think it was much quicker than a paper ballot would have been.

One last footnote:  In my speculation about the exit poll problems in 2004, I have given great weight -- probably too much, in retrospect -- to the presence of the network logos on the survey questionnaire, the exit poll "ballot box," and on the interviewer name tags.  Note that Melanie did not notice that network identification until she was holding the device in her hands.  I asked her specifically when she first noticed the logos. Her answer:

I noticed the logo after I had started to read the questions on the device -- it was about 2-3" square, at the top left of the device -- he didn't say anything about it and I just happened to notice it after I had read the first question or 2.  It was the usual CBS logo, but it wasn't a particularly bright color -- it sort of blended in to the dark gray or black color of the device, so it wasn't splashy. I assumed that was on purpose, as was his failure to tell me whose poll it was.

Of course, Melanie was just one respondent, and her experience amounts to a sample size of one.  Then again, as a colleague of mine once told me, the plural of anecdote is data.

**An MP source reports that Edison-Mitofsky, the company that conducts the pooled exit polls for the NEP consortium, did not conduct the CBS/New York Times exit poll in Connecticut last week. 

 

Comments
Nathan:

When we had that discussion of exit polls 2 years ago, especially after the release of the Mitofsky report, I noted that the hiring of exit poll-takers from Craigslist and college poli-sci classes inevitably presented a bias issue. Throw in that the worst discrepancies in Kerry's favor were with graduate-student interviewers. It's not hard to conclude that straitlaced Republican MBA students were not volunteering for a low-paying exit pollster job.

____________________

Alan:

In Quinnipiac's August 17, 2006, poll in Connecticut (http://www.quinnipiac.edu/x11362.xml?ReleaseID=948), the results are reported by party affiliation, but the release doesn't include the % of total respondents in each party, making it difficult to tell how much weight the Democratic, Republican and Independent voters carry in the totals.

In case anyone else is looking for this, I asked Quinnipiac if they could provide this information, and this is what they gave me:

-------

Generally speaking, do you consider yourself a Republican, a Democrat, an Independent, or what?

LIKELY VOTERS

Republican 25%
Democrat 36
Independent 34
Other 4
DK/NA 1

____________________

Regarding State Exit Poll non-response in 2004, the EVIDENCE of Mitofsky's own polling and response rate data contradicts the Bush nonresponse hypothesis.

Not to mention the Final Exit Poll, where the 43% Bush/37% Gore 2000 vote weightings contradict the theory as well.

Of course the 43% weighting was physically impossible since 43% of 122.3mm = 52.57mm and Bush only got 50.45mm votes in 2000. Of these about 1.75mm died, leaving just 48.7mm alive to vote in 2004. And 48.7/122.3 = 39.8% (this assumes 100% Bush 2000 voter turnout).
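Here's a quick check of that arithmetic in Python (using the same turnout and death figures as above, not independently sourced):

```python
# Verify that a 43% Bush-2000 weighting is arithmetically impossible.
total_2004 = 122.27   # 2004 turnout, millions (figure used above)
bush_2000 = 50.45     # Bush's 2000 vote, millions
deaths = 1.75         # estimated Bush 2000 voters deceased by 2004

implied = 0.43 * total_2004        # voters a 43% weight implies, in millions
max_alive = bush_2000 - deaths     # ceiling even at 100% turnout
max_weight = max_alive / total_2004

print(round(implied, 2))           # over 52.5 million -- more than Bush received in 2000
print(round(max_weight * 100, 1))  # just under 40% -- the maximum possible weight
```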

Let's compare the Final National Exit Poll using the IMPOSSIBLE 43%/37% weightings to a plausible scenario assuming 98% turnout of both Gore and Bush voters -- WITH NO CHANGE IN VOTE SHARES.

________________________________________
FINAL NATIONAL EXIT POLL
2:05PM, 13660 RESPONDENTS

HOW VOTED IN 2000
(Impossible 43%/37% weights)

BUSH WINS: 62.5-59.3mm (51.1%-48.5%)

Voted in  2004 Votes           Vote Share                 Votes (mm)
2000        (mm)   Weight   Kerry   Bush   Other     Kerry   Bush  Other
No         20.79      17%     54%    45%      1%     11.22   9.35   0.21
Gore       45.24      37%     90%    10%      0%     40.72   4.52   0.00
Bush       52.57      43%      9%    91%      0%      4.73  47.84   0.00
Nader       3.67       3%     71%    21%      8%      2.60   0.77   0.29

Total     122.27     100%  48.48% 51.11%   0.41%     59.28  62.49   0.50

_________________________________________________

FINAL NATIONAL EXIT POLL
2:05PM, 13660 RESPONDENTS

HOW VOTED IN 2000
(Adjusted for plausible weights)

MAXIMUM WEIGHTS (100% turnout): 40.25% Gore/39.82% Bush
PLAUSIBLE WEIGHTS (95% turnout): 38.24% Gore/37.83% Bush
NO CHANGE in vote shares.

KERRY WINS: 62.6-59.2mm (51.2%-48.4%)

Voted in  2004 Votes           Vote Share                 Votes (mm)
2000        (mm)   Weight   Kerry   Bush   Other     Kerry   Bush  Other
No         26.22   21.44%     54%    45%      1%     14.16  11.80   0.26
Gore       46.75   38.24%     90%    10%      0%     42.08   4.68   0.00
Bush       46.25   37.83%      9%    91%      0%      4.16  42.09   0.00
Nader       3.04    2.49%     71%    21%      8%      2.16   0.64   0.24

Total     122.27     100%  51.17% 48.42%   0.41%     62.56  59.20   0.51

[Graph: State Vote vs. Exit Poll Completion Rate]

____________________

Here is a corrected link to the graph mentioned in the previous post:

http://www.geocities.com/electionmodel/StateVotevsExitPollCompletionRate1_27680_image001.png

____________________

Elizabeth Liddle:

TruthIsAll:

Regarding response rates:

You seem to have fallen into the same fallacy that USCV fell into, and that Steve Freeman has since fallen into as well.

At the precinct level (which is where it counts) there was no significant correlation between response rate and proportion of votes for Bush. This was stated clearly in the E-M report. Moreover, Warren Mitofsky demonstrated, in a slide presented at AAPOR in 2005, that there was no correlation between refusal rate and proportion of votes for Bush.

However, it IS true that response rates were not LOWER in precincts with a higher proportion of votes for Bush. So one possible interpretation is that Bush voters were not more likely to refuse to participate in the poll than Kerry voters.

But this interpretation would be fallacious. Bias does not depend on absolute response rates (which were extremely variable, across all degrees of voteshare) but on relative response rates. And you cannot compute the relative response rates because you do not know who your non-responders voted for (by definition).

It would only take a single factor (such as urban density) to be correlated both with response rate and with Bush's vote share, for the inference to fail.

In other words, the response rate data do not either support or contradict the non-response bias theory at all. They tell you nothing at all about non-response bias.

The fallacy is known as the ecological fallacy.
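To make this concrete, here is a small constructed example (all numbers invented purely for illustration): suppose Kerry voters in every precinct are 1.2 times as likely to respond as Bush voters, but a confound (say, rural cooperativeness) makes heavily-Bush precincts more cooperative overall, holding the completion rate constant. The poll is biased everywhere, yet response rate and Bush share are completely uncorrelated:

```python
# Invented numbers: within every precinct Kerry voters respond 1.2x as
# often as Bush voters, while a confound holds the overall completion
# rate at exactly 50% regardless of the precinct's Bush share.
RELATIVE = 1.2
TARGET = 0.5

for bush_share in [0.2, 0.35, 0.5, 0.65, 0.8]:
    kerry_share = 1 - bush_share
    bush_rate = TARGET / (RELATIVE * kerry_share + bush_share)
    overall = RELATIVE * bush_rate * kerry_share + bush_rate * bush_share
    kerry_in_poll = RELATIVE * bush_rate * kerry_share / overall
    # overall is 0.50 in every precinct (zero correlation with Bush share),
    # yet the poll overstates Kerry everywhere.
    print(f"Bush {bush_share:.2f}: rate {overall:.2f}, "
          f"poll Kerry {kerry_in_poll:.3f} vs true {kerry_share:.3f}")
```

Flat aggregate response rates, real nonresponse bias: the precinct-level correlation test cannot tell the two apart.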

____________________


