
The Art and Science of Choosing Likely Voters

Topics: ANES, Barack Obama, Likely Voters, Validation

On Wednesday Nate Silver posted a helpful table that compared registered voter and likely voter samples on seven recent national surveys, including both the "traditional" and "expanded" likely voter models reported every day by Gallup.

[Table: FiveThirtyEight's comparison of registered voter and likely voter results on seven national surveys]

He noticed that the polls "appear to segregate themselves into two clusters," one showing a 4-6 point difference between the likely and registered voter models and one showing essentially no difference:

The first cluster coincides with Gallup's so-called "traditional" likely voter model, which considers both a voter's stated intention and his past voting behavior. The second cluster coincides with their "expanded" likely voter model, which considers solely the voter's stated intentions. Note the philosophical difference between the two: in the "traditional" model, a voter can tell you that he's registered, tell you that he's certain to vote, tell you that he's very engaged by the election, tell you that he knows where his polling place is, etc., and still be excluded from the model if he hasn't voted in the past. The pollster, in other words, is making a determination as to how the voter will behave. In the "expanded" model, the pollster lets the voter speak for himself.

Nate offered several good reasons why the traditional likely voter models may be missing the mark this year, as well as some reasonable suggestions of ways pollsters might check their assumptions. His bottom line, however, is that he considers the 4-6 point gap between registered and likely voters "ridiculous" and issued a "challenge" to the pollsters showing closer margins to "explain why you think what you're doing is good science."

Now I'm a fan of Nate's work at FiveThirtyEight.com and I share his skepticism about placing too much faith this year in more restrictive likely voter models that place great emphasis on past voting. But having said that, I think it's a bit unfair to imply that the models used by pollsters like Franklin & Marshall and GfK amount to bad "science."

The science and art of likely voter models is worth considering. I've long argued that political polling is a mix of both science and art (just check the masthead of my old blog), and nowhere is the "art" of this business more evident than in the way pollsters select likely voters. Whether it's the likely voter model or screen, decisions about what sort of sample to use, or how to weight the results, pollsters typically make a series of subjective judgments that are at best informed by science. One reason that no two pollsters use exactly the same "model" is that the science of predicting whether a given individual will vote is so imprecise.

As I wrote in my column earlier this week, likely voter models had their origins in a series of "validation" studies first done by pollsters in the 1950s, when they mostly interviewed respondents in person. Since the interviewer visited each respondent at home, they could easily obtain names and addresses. After the election, pollsters with sufficient resources could send their interviewers to the offices of local election clerks to look up whether each respondent had actually voted. Gallup used proprietary validation studies to help develop its traditional likely voter model, and the validation data collected by the University of Michigan's American National Election Studies (ANES) from the 1950s through the 1980s helped guide a generation of political pollsters.

Unfortunately, the ANES stopped doing validation studies in 1980, but the data are readily available online, so I downloaded the 1980 survey and ran the cross-tabulations that follow. In 1980, ANES followed its standard practice, conducting an in-person interview with a nationally representative random sample of adults in October, then following up with a second interview with the same respondents after the election in November.

The following table shows results from questions asked before the 1980 election about whether the respondent was registered and whether they intended to vote, plus a question asked afterwards about whether they had actually voted. (A few caveats: first, the data shown here are unweighted, as I could find no documentation or weight variables in the materials online. Second, roughly 18% of the respondents are omitted from this table because the researchers could not confirm their registration status. Third, obviously, the study is 28 years old, although a more recent validation study conducted in Minnesota by Rob Daves, now a principal of Daves & Associates Research, yielded very similar findings.)

[Table: 1980 ANES vote validation results, by confirmed registration and turnout status]
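For readers who want to replicate this, the cross-tabulation itself is straightforward once the file is loaded. Here is a minimal sketch in Python; the variable names and toy rows are my own illustrative stand-ins, since the actual 1980 ANES file uses numeric codes documented in its codebook:

```python
# A minimal sketch of the cross-tabulation described above, assuming
# hypothetical variable names; the real 1980 ANES file uses numeric codes.
import pandas as pd

# anes1980 = pd.read_csv("anes_1980.csv")  # the downloaded study file
anes1980 = pd.DataFrame({  # toy stand-in rows for illustration
    "validated":   ["voted", "reg_nonvoter", "not_registered", "voted"],
    "intend_vote": ["yes",   "yes",          "no",             "yes"],
})

# Unweighted percentages within each validated-status column, matching
# the unweighted presentation of the table above.
table = pd.crosstab(anes1980["intend_vote"],
                    anes1980["validated"],
                    normalize="columns") * 100
print(table.round(1))
```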

The middle column represents respondents who were actually registered to vote but had no record of voting in the 1980 general election. And no, that's not a typo. Eighty-four percent (84%) of these confirmed non-voters said they planned to vote. Their answers were more accurate after the election, but still, nearly half (44%) of the non-voters claimed inaccurately a few weeks later that they had voted.

The far right column shows the respondents who were confirmed as non-registrants. Nearly a third (30%) told the interviewer that they were registered to vote during their first, pre-election interview, and 45% said they intended to vote. After the election one in five of those with no record of being registered to vote (21%) claimed they had cast a ballot.

These results are not unusual. They are broadly consistent with previous ANES studies. Collectively, they illustrate the fundamental challenge of identifying "likely voters." If you "let the voter speak for himself," he (or she) often overstates their true likelihood of voting. Looking back, many also claim to have voted when they have not -- something to keep in mind when looking at the cross-tabulations out this week for those reporting they have voted early.

Now check the patterns on two additional questions about past voting and interest in the campaign. Again, you see strong but imperfect correlations. Those who say they usually vote and who express high interest in the campaign tend to vote more often than those who do not.

[Table: 1980 ANES validated turnout by reported past voting and interest in the campaign]

Since voters tend to overstate their intentions, pollsters like Gallup (and most of the others in Nate Silver's table) typically combine questions about intent to vote, past voting, interest in politics and (sometimes) knowledge of voting procedures into an index. A respondent who says they are registered, plans to vote, has voted in all previous elections and is very interested in politics might get a perfect score. A respondent who reports doing none of those things gets a zero. The higher the score, the more likely they are to vote. [I should add: I'm giving you the over-simplified, "made-for-TV-movie" version of how this typically works -- as per one of the comments below, Gallup and many others give "bonus points" to younger voters to try to compensate for their inability to say they've voted in previous elections.]
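To make the index idea concrete, here is a minimal sketch of how such a score might be computed. The question names and the youth-voter adjustment are my own illustrative assumptions, not Gallup's actual (and proprietary) items:

```python
# A minimal sketch of a likely voter index, built from hypothetical cues.
def likely_voter_score(resp):
    """Add one point for each turnout cue a respondent reports."""
    score = 0
    if resp.get("registered"):           # says they are registered
        score += 1
    if resp.get("intends_to_vote"):      # says they plan to vote
        score += 1
    if resp.get("voted_in_past"):        # reports voting in past elections
        score += 1
    if resp.get("high_interest"):        # very interested in the campaign
        score += 1
    if resp.get("knows_polling_place"):  # knows where to vote
        score += 1
    # Hypothetical "bonus point" so first-time-eligible voters aren't
    # penalized for having no past elections in which they could have voted.
    if resp.get("first_time_eligible") and resp.get("intends_to_vote"):
        score += 1
    return min(score, 5)  # cap so new voters top out at the same maximum

print(likely_voter_score({"registered": True, "intends_to_vote": True,
                          "high_interest": True,
                          "first_time_eligible": True}))  # -> 4
```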

Some pollsters (such as Gallup and others who use variants of their "traditional" model) use that index to select the portion of their adult sample that corresponds to the level of turnout they expect (they use the index to screen out the unlikely voters). A few pollsters (CBS News/New York Times and Rob Daves when he conducted the Minnesota Star Tribune poll) prefer to weight all respondents based on their probability of voting. The table below (from my post four years ago on the CBS model) shows a typical scale used for this purpose, based on the same 1980 validation data presented above.


[Table: probability-of-voting scale based on the 1980 validation data (via Traugott)]
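The difference between the two approaches is easy to see in code. Here is a minimal sketch contrasting a cutoff model with probability weighting; the turnout probabilities and toy respondents are illustrative stand-ins, not any pollster's actual figures:

```python
# Two uses of the same likely voter index: screen out low scorers (cutoff)
# versus weight everyone by turnout probability (CBS-style weighting).
# P_VOTE values are illustrative, not the actual published scale.
P_VOTE = {0: 0.28, 1: 0.40, 2: 0.55, 3: 0.68, 4: 0.80, 5: 0.87}

def cutoff_estimate(sample, expected_turnout):
    """Keep only the top scorers, enough to match expected turnout."""
    ranked = sorted(sample, key=lambda r: r["score"], reverse=True)
    n_keep = max(1, round(expected_turnout * len(ranked)))
    kept = ranked[:n_keep]
    return 100 * sum(r["obama"] for r in kept) / len(kept)

def weighted_estimate(sample):
    """Weight every registered voter by their probability of voting."""
    weights = [P_VOTE[r["score"]] for r in sample]
    supported = sum(w * r["obama"] for w, r in zip(weights, sample))
    return 100 * supported / sum(weights)

# Each respondent carries an index score (0-5) and a candidate preference
# (1 = Obama, 0 = McCain) -- toy data for illustration only.
sample = [{"score": s, "obama": o}
          for s, o in [(5, 0), (4, 1), (3, 0), (2, 1), (1, 1), (0, 1)]]
print(cutoff_estimate(sample, expected_turnout=0.5))  # top scorers only
print(weighted_estimate(sample))                      # everyone, weighted
```

With the same toy data, the two methods produce noticeably different numbers, which is exactly the pattern in Nate Silver's table when marginal voters tilt toward one candidate.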

So given all this evidence, why am I skeptical of more restrictive models? Look again at any of the tables above. Neither the individual questions nor the more refined index can perfectly predict which voters will turn out. For example, in the table above, more than a quarter (27.6%) of the voters with the lowest probability of voting -- those who would be disqualified as "likely voters" by most "cut-off" models -- did in fact vote in 1980. And almost as many of the respondents scored with the highest probability of voting did not vote (that's one reason I like the CBS model, which weights all registered voters by their probability of voting rather than tossing out the least likely).

Still, the best any of these models can do, as SurveyUSA's Jay Leve put it in an email to me last week describing his own procedures, is "capture gross changes" in turnout from year to year. "We believe," he continued, "no model in 2008 is capable of capturing fine changes" in turnout. I agree. I also fear, as I did four years ago, that models that try to closely "calibrate" to a particular level of turnout overlook the strong possibility that the respondents willing to participate in a 5-to-15-minute interview on politics are probably more likely to vote than those who hang up or refuse to participate. In other words, some non-voters have already screened themselves out before the calibration process begins.

The best use of these highly restrictive "likely voter models," in my view, is to determine when the level of turnout has the potential to affect the outcome of an election. Put another way, the likely voter models typically produce results that differ only slightly from the larger pool of registered voters. However, in relatively rare elections -- and 2008 appears to be such an example -- the marginal voters tilt heavily to one candidate. Surveys have been showing for months that Barack Obama stands to benefit if his campaign can help increase turnout among the kinds of registered voters that typically do not vote.

The fact that the likely voter models are producing inconsistent results provides additional confirmation of that finding. As Nate Silver points out, some likely voter models (presumably the ones putting more emphasis on past voting) are showing closer results than other models that appear to be less restrictive. The problem is that determining which model is the most appropriate is not a matter of separating science from non-science, and the differences between the models are sometimes subtle. Many of the presumably less restrictive models used by national pollsters (ABC/Washington Post and CBS/New York Times, for example) likely include at least some measures of past voting. The true margin that currently separates Obama and McCain probably falls somewhere in between these various "likely voter" snapshots.

Once the votes are counted, we will have a better idea which models are coming closest to reality. Either way, no single model can claim unique "scientific" precision. All involve judgment calls by the pollsters.

[Typo corrected]

 

Comments
ClarkAMiller:

This is a nice analysis. Thanks. I have to say that, despite Nate's objection to the practice (vis-a-vis his model building), I greatly appreciate the Gallup organization for offering multiple RV and LV models. Perhaps the most annoying behavior this entire season is people, especially in news organizations, just reading the topline numbers and giving no attention to the internals. Equally annoying are the pollsters who publish no internals and tell us nothing about how they are making what you correctly label as the *judgments* they are making. I've got nothing against such judgments; they are an inevitable part of all science and are why we hire experts.

But real scientists are committed to openness and transparency about the models they use. It would be nice if pollsters would follow suit and publish their models.

Real scientists are also committed to model sensitivity studies that highlight where models are subject to significant uncertainties based on assumptions. Again, it would be nice if pollsters would publish sensitivity studies, following Gallup's lead.

These two choices would significantly help our ability to interpret polling data in a sophisticated and knowledgeable fashion, rather than just reading off the topline.

____________________

AySz88:

I don't think Silver is saying that these based-on-previous-elections ("traditional") LV models have no justification at all. I think he is referring to the evidence that, this time around, there will be a large shift in the composition of who turns out. The correlation between one's likelihood of voting and one's previous voting record has probably changed significantly for this cycle. When you already have evidence that the correlations have changed, using an LV model based on old correlations doesn't seem very applicable. His question seems to be, why should you assume continuity and continue with the "traditional" LV model, despite rising uncertainty in the applicability of the model?

Silver seems inclined to increase uncertainty in the face of an "unknown unknown". (An example: he would rather forecast to Election Day, which mostly ends up increasing uncertainty in his prediction, than attempt to answer "if the election were held today...", where he would have no ground truth data to validate his prediction against.) So instead of making a judgment call about likely voter models, he might rather just use the 'expanded' LV model and admit a larger uncertainty in that process. This makes sense on some level - if you're really not certain whether the extra massaging is justified, you may as well neither attempt it nor claim that it is.

I get the sense that the question of whether or not "traditional" LV is valid this year is too much of an unknown for Silver's taste. I think that, in a strict sense, avoiding the "traditional" model is the Right Thing to do - it avoids using the numbers that are being questioned. But on the other hand, perhaps the pollsters' gut (like the baseball manager's gut in Silver's primary industry) gets to have some leeway here.

____________________

Gary Kilbride:

First of all, you're never going to get 250+ comments under an analysis like this. Keep that in mind and sharpen up. Eric dominates in the number of comments per thread.

This is probably a simplistic summary, but I have no idea why major variables like Party ID or RV/LV relationship are seemingly guessed at every 2 or 4 years. It reminds me of the Reluctant Bush Responder stuff, waiting until the next cycle to see if it shows up again.

Isn't there room for major scrutiny between cycles, some type of common ground or brainstormed new application, or a modern day followup to the 1980 data? It seems like the same dilemmas show up every cycle, unresolved.

Anyway, I don't think this is equivalent to baseball. I have literally hundreds of situational and statistical angles for football, basketball and golf. With a sample size like that, there's no reason to get cute and subjectively try to turn a 57% angle into somewhat higher. You allow the math to work in your favor over an extended period. But in a one-and-done high profile situation like a presidential race I would definitely use subjectivity to apply the variables unique to this cycle, and outwit my scumbag competitors.

____________________

brambster:

Mark, I'm sure that you realize that some things have obviously changed since the 1980s. Motor voter bills have increased registration rates, and there seems to be more coverage of, and certainly more money spent on, big elections.

This year in particular clearly promises a rise in the youth vote and the AA vote. From past discussions of Gallup's traditional model, we know that they definitely discount at least the youth vote way too much by using an incredibly simplistic model that severely penalizes new voters, nothing like the SurveyUSA one. That traditional model might work fine if the shape and enthusiasm of the electorate were static or traditional, but it's clearly not this time.

Although I don't have stats on this, there also seems to be a runaway effect, where a candidate who can't win and is at the top of the ticket will underperform the polling. It's getting so bad with McCain that Republicans are now jumping over each other to either disown him or even endorse Obama.

Then there's the issue of new battlegrounds this cycle bringing in new voters and increasing turnout. I know that my vote won't make a difference at all for any single race in front of me this cycle, so it is almost a waste of time to even show up, and my state will likely have very low voter turnout. This won't be the case in Virginia, North Carolina, Colorado, Georgia, Indiana, or the other states that haven't been pushed hard in past elections, and states that have been, like Ohio and Florida, are likely to see even more gains too.

So where do all of these new voters come from when you produce a rise in turnout? They come from the registered voters that are discounted by the likely voter models, and these voters are overwhelmingly supporters of Democrats.

It's clear that Research 2000 is expecting increased turnout, and specifically among certain groups, and that's why their model is on the top of the tracking polls in Obama's favor. Gallup's traditional model however assumes that this election will be no different than others in recent history, and Zogby doesn't think that party ID has moved at all since 2000.

So given an average of RV and LV results, I would pick a number two-thirds of the way from the LV result toward the RV result as the likely target this cycle (e.g., with an LV margin of +4 and an RV margin of +10, that would be +8). Of course that's just a guess, as I don't run a polling organization.

____________________

Mark Lindeman:

@Gary: It would be wonderful to have a vote validation study every election, although as you can imagine, it is very labor-intensive. (The economics may be changing as digital voter history files proliferate -- but getting the voter match right is non-trivial, as we've heard in other contexts.)

If the campaign remains pretty stable, Gallup and a few others who have collected lots of data may be in a decent place to analyze how tinkering with the LV model would have changed the results. It's a bit hard to do well with a single survey, even a big one like a Pew survey. And meta-analysis like Nate's can be problematic because so many things may vary across pollsters. (And, even beyond the hype, uncounted ballots do make it harder to tell what the gold standard of accuracy should be.)

But I don't see any reason to assume that past answers about LV/RV or party ID would apply to this election. That's why we argue about it.

____________________

kglore in PA:

Mark,
I wish you would comment on the motivation behind polling organizations for publishing misleading polls without any explanations for how they determined their likely voter models. For example, the AP/GfK poll last week made the news because it showed the race nearly even at 44% Obama and 43% McCain among likely voters, yet among registered voters the same poll showed Obama ahead by 10 points 47% to 37% leaving the remaining 16% for other candidates or undecided. When one looks at their internals, you can see why. Their overall sample included 45% Evangelical Christians, 36% were from the south, and 35% were from the age group 30-44, while only half as many people from the other regions of the other age groups were sampled. This is not representative of the general voting population and is clearly skewed to McCain's electoral strengths. I am sure if a similar poll overrepresented African Americans, the 18-29 age group and the northeast region, Obama would be up by 60%! But that would not be a useful poll to predict the general population's voting outcome now would it? The research 2000 internals are published every day and it is very clear where each candidate's strengths lie and how they are changing. AP/GfK should do the same!

____________________

franzneumann:

The most salient point in this analysis is that the likely voter intent questions do not perfectly match actual voting behaviour. There's a significant 'lag' between the two, whether it's low likelihood (27.6% vote) or high likelihood (25.4% don't vote). Put in the context of statistical MOEs, that is a rather huge discrepancy. Even non-registered voters vote at 10% based on this data.

I often caution clients against using segmentations in a firm manner for just this reason. I have seen a major retailer completely ignore its 35% male customer base because someone did a segmentation for them that showed nothing but female segments simply because they were over-represented in each category.

There is truth to the notion that traditional voting demographics could have a notable impact on the election outcome - i.e., that Obama has strong support from subgroups that are traditionally less likely to vote. However, if you look across the internals of many polls, you will also find that Obama's supporters are much more enthusiastic than McCain's. Moreover, this enthusiasm is higher relative to past presidential winners. It is quite likely that the relative strength of support on either side of the ticket will offset the demographic issue of voter turnout.

Further, the levels of engagement in this election appear to be significantly higher than in recent elections. That too bodes well for Obama, as the key issue with those less likely demographic segments is engagement and cynicism.

Thus, it is likely that the "marginal" voters will be less marginal in this election than the past. The data shows this both in terms of support strength/enthusiasm and primary results.

Finally, polling and survey research is always a blend of 'art' and 'science'. There is always a subjective element in any result/interpretation.

____________________

s.b.:

First of all, people aren't automatically eliminated from Gallup's traditional LV model if they haven't voted before. That is false. In fact, 8% of the people in that sample are first-time voters, as stated in Gallup's article on first-time voters. First-time voters have also not increased and are 13% of registered voters, exactly the same as in 2004.

So it is completely misleading to state that first-time voters are eliminated from LV models with 4-6 point differences from RV. Not true. There is also no evidence that there will be more first-time voters than in 2004, an historically high-turnout election year.

____________________

But one thing we never know is how voter suppression efforts, like those encouraged by the White House, affect likely voter models.

____________________

kglore in PA:

Not only does voter suppression skew the polling results which overwhelmingly predict an Obama/Biden victory, but let's not forget actual vote fraud which seems to be happening right now in early voting in places like West Virginia, Tennessee and Texas. Recent CNN reports of early voting in West Virginia have already noted electronic touch-screen voting machines that are flipping votes from Democrat to Republican when the box is touched. Eyewitnesses have repeated their votes only to find the machine continuing to flip them. The machines are made by ES&S and they happen to be the same machines implicated in vote flipping in Ohio in 2004 and in a Florida congressional race in 2006 where 18,000 votes were unaccounted for in an election decided by 356 votes. In every one of these cases from 2004, 2006, and 2008 early voting, the votes are flipped from Democrat to Republican, never the other way around, so machine malfunction cannot be the only explanation. This must be investigated by the proper authorities and this must be stopped before election day. There are currently 97,000 of this company's iVotronic machines being used in this election, so GOP vote fraud is alive and well. When pollsters skew their polling results like AP/GfK did, this offers cover for why the vote may be closer than the polls suggest. Will anyone question a come-from-behind McCain victory?
http://wvgazette.com/News/200810180251

____________________

tom brady:

As he is wont to do, Nate has slyly slipped in his personal interpretation of the polling models in the guise of just presenting the numbers. His comparison of the two sets of polling data implies that all the ones showing closer races have adopted the traditional Gallup likely voter model, but in fact we don't know if that's true, do we? I understand that he is a partisan for Obama, and thus prefers that pollsters use the expanded model, but he ought to be more upfront that it is simply his partisan leanings that are driving his choice, and not any objective evidence. In fact, we simply don't know what the turnout is going to be, and there are very good reasons to rely on the traditional model until proven wrong.

____________________


