Likely Voters


More on the Arkansas Surprise

Before moving on to the more important issues raised by both Nate Silver's new pollster accuracy ratings and their apparent role in the parting of ways between DailyKos and pollster Research 2000, I want to consider some possible lessons from last night's Arkansas surprise.

Let's start with the assertion that Del Ali, president of Research 2000, made to me earlier today. He says that the final result -- Blanche Lincoln prevailed by a 52.0% to 48.0% margin -- fell within the +/- 4% margin of error of his final poll, which showed Halter at 49% and Lincoln at 46%. That much appears to be true. However, Research 2000 did three polls on the Lincoln-Halter run-off, including a survey conducted entirely on the evening of the first primary, and all three gave Halter roughly the same margin as the final poll.

[Chart: Research 2000 pre-runoff Arkansas polls]

I'll spare you the math (and the argument about how we might calculate the margin of error for such a pooled sample), but if you treat all three polls as if they were one, the difference between the vote count and the consistent Research 2000 result looks far more statistically meaningful.
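For readers who do want the math, here is a minimal sketch of that pooling logic. The sample size is an assumption inferred from the reported +/- 4% margin of error, not a figure from Research 2000, and the naive pooling ignores weighting and design effects:

```python
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# A +/-4% margin of error implies roughly 600 interviews (at p = 0.5)
n_single = 600
print(f"one poll: +/-{100 * moe(0.5, n_single):.1f} pts")   # ~4.0

# Naively treat three same-sized polls showing the same margin as one sample
n_pooled = 3 * n_single
print(f"pooled:   +/-{100 * moe(0.5, n_pooled):.1f} pts")   # ~2.3
```

With three times the interviews, the margin of error shrinks by a factor of roughly the square root of three, which is why a consistent miss across three polls is much harder to dismiss as sampling noise than a single-poll miss.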

One big problem in this case is that Research 2000 was the only pollster releasing results into the public domain. Had other pollsters been active, producing the sort of pollster-to-pollster variation we typically see, those who followed the race might have been less surprised by the outcome.

I am told, however, that the Lincoln campaign and allies of the Halter campaign (presumably organized labor) did conduct internal polling that was not publicly released. I communicated with senior advisors to both campaigns today who say that each side polled immediately after the first primary and found Lincoln ahead. Lincoln's internal poll showed her leading by ten points, while two post-primary polls conducted by Halter's allies showed Lincoln leading by six and four points. The advisors also claim that neither campaign fielded a tracking poll in the final week, as all remaining resources were devoted to advertising and get-out-the-vote efforts.

Now in fairness to Research 2000, all of these claims were made to me today, on background, and I have no way to verify them independently. So take this information with a grain of salt.

Are there lessons to be learned here?

First, let's remember the point I made a week ago, with the help of Nate Silver's data: Whatever the reason, polls show far more error in primaries, especially primary elections in southern states.

Second, consider something largely overlooked: Arkansas has one of the largest cell-phone-only populations in the nation. A year ago, the Centers for Disease Control and Prevention's National Center for Health Statistics (NCHS) published estimates of wireless-only percentages by state. Arkansas ranked fourth for the percentage of cell-phone-only households (22.6%) and seventh for the percentage of cell-phone-only adults (21.2% -- for rankings, see the charts in our summary). And the national-level NCHS estimates of the cell-phone-only population have risen another 4.5 percentage points over the past year.
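To see why coverage matters, here is a toy calculation of how excluding cell-only adults can bias a landline-only estimate. Every number below is invented for illustration; neither the support levels nor the direction of the bias comes from any actual Arkansas poll:

```python
# Landline-only coverage bias (all shares invented for illustration)
cell_only = 0.21            # share of adults reachable only by cell phone
support_landline = 0.46     # a candidate's support among landline-reachable adults
support_cell_only = 0.54    # that candidate's support among cell-only adults

# True support blends both groups in proportion to their population shares
true_support = (1 - cell_only) * support_landline + cell_only * support_cell_only
bias = support_landline - true_support
print(f"landline-only poll misses true support by {100 * -bias:.1f} points")  # 1.7
```

The size of the miss grows with both the cell-only share of the population and the gap in preferences between the two groups, which is why a state ranking near the top on wireless-only households deserves extra caution.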

Nationally, the cell-phone-only population is largest among younger Americans, those who rent rather than own their homes, and non-whites. Those patterns could have made a difference in Arkansas.

Third is a point I made in my column earlier this week: The results of pre-election surveys are sometimes only as good as the assumptions that pollsters make in "modeling" likely voters.

For example, many pollsters stratify their likely voter samples regionally based on past turnout. In other words, they divide the state into regions and use past vote returns to determine each region's size as a percentage of the likely electorate. As should be obvious, these judgments are often subjective and rely heavily on the assumption that past turnout patterns will hold in future elections.
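Mechanically, that kind of regional stratification is simple. In this sketch the regions, vote totals and sample size are all invented for illustration; none of them come from any actual pollster:

```python
# Hypothetical past vote returns by region (all numbers invented)
past_votes = {"Northwest": 80_000, "Central": 120_000, "Delta": 60_000, "South": 40_000}
total_votes = sum(past_votes.values())

sample_size = 600
# Each region's interview quota is its share of the past vote applied to the new sample
quotas = {region: round(sample_size * votes / total_votes)
          for region, votes in past_votes.items()}
print(quotas)  # {'Northwest': 160, 'Central': 240, 'Delta': 120, 'South': 80}
```

The subjectivity enters in choosing which past election (or blend of elections) supplies `past_votes`; the arithmetic after that choice is mechanical.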

For the Arkansas runoff, however, pollsters could rely on a very proximate turnout model: The first primary on May 18 between Lincoln, Halter and D.C. Morrison. In fact, according to Del Ali, that's exactly what Research 2000 did for their runoff polls. They used the regional distribution of voters on May 18 to set regional quotas. They also conducted a survey of self-identified voters on primary election night, weighted the survey so their self-reported preferences matched the result, and relied on the resulting demographics to guide their demographic weighting on subsequent polls.
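A minimal sketch of that weighting step follows, with all shares invented rather than taken from Research 2000's actual data. The idea is to give each candidate's self-identified voters a weight that forces the weighted stated vote to match the official result, then read demographics off the weighted sample:

```python
# Election-night survey of self-identified voters (all shares invented)
sample_share = {"Lincoln": 0.41, "Halter": 0.47, "Morrison": 0.12}   # stated vote in survey
actual_share = {"Lincoln": 0.45, "Halter": 0.42, "Morrison": 0.13}   # official result

# One weight per candidate group so the weighted stated vote matches the result
weight = {c: actual_share[c] / sample_share[c] for c in sample_share}

# Those weights then adjust each respondent's contribution to the demographics
respondents = [("Lincoln", "65+"), ("Halter", "18-34"), ("Halter", "65+")]
weighted_age: dict[str, float] = {}
for vote, age in respondents:
    weighted_age[age] = weighted_age.get(age, 0.0) + weight[vote]
```

Any error in the election-night sample's demographics propagates directly into the targets used for the subsequent runoff polls, which is one way a consistent miss can arise.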

But here's the problem: As is typical, total turnout declined between the two elections. Roughly 70,000 voters (21% of those who voted in the first primary) did not vote in the runoff. More important, the fall-off was not consistent throughout the state, and the pattern favored Lincoln: Turnout held up in her base and fell off most where she was weakest.

I took the vote by county as reported by the Associated Press (here and here) and calculated each county's runoff turnout as a percentage of the total vote cast in the first primary. As the scatterplot below shows, the fall-off in turnout was typically greatest in counties where Lincoln's percentage of the vote on May 18 was lowest. (I omitted results from Baxter and Newton counties, which showed increases in the total vote, suggesting clerical errors or omissions in AP's totals.)
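The county-level calculation is easy to reproduce. This sketch uses invented counties and totals, not the AP figures:

```python
# Turnout retained in the runoff vs. Lincoln's May 18 share, by county
counties = [
    # (county, may18_total, runoff_total, lincoln_share_may18) -- all invented
    ("Strong-Lincoln", 10_000, 8_800, 0.55),
    ("Middling",       10_000, 8_000, 0.47),
    ("Weak-Lincoln",   10_000, 7_000, 0.38),
]
# Share of each county's first-primary vote that showed up again for the runoff
retained = {name: runoff / may18 for name, may18, runoff, _ in counties}
for name, _, _, share in counties:
    print(f"{name:14s} Lincoln {share:.0%} on May 18, {retained[name]:.0%} of turnout retained")
```

Plotting `retained` against the May 18 Lincoln share for all counties is exactly the scatterplot described above.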


The pattern is most likely explained by the fact that there were also Congressional run-off elections held yesterday in the 1st and 2nd Districts of Arkansas, which kept turnout higher in areas that are also Lincoln's base of support.

I don't want to make too much of the turnout pattern since, by my calculations, re-weighting the May 18 vote to match yesterday's county-level turnout would add less than a percentage point to Lincoln's lead. But hopefully it gives you some idea of what can happen when assumptions go awry. Region is just one variable. Other assumptions, such as those for race and age, may have been even more consequential. Other pollsters making different assumptions might have produced very different results. When just one public pollster is active in a race, the odds of misreading the horse race are greater.
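For the curious, here is the shape of that re-weighting calculation, reduced to two invented counties. With these made-up numbers the shift is well under a percentage point, in line with the author's own calculation against the real returns:

```python
# Two invented counties: the Lincoln-strong county retains more of its turnout
counties = [
    # (may18_total, runoff_total, lincoln_share_may18) -- all numbers invented
    (10_000, 8_800, 0.55),
    (10_000, 7_200, 0.40),
]
# Statewide Lincoln share using each election's turnout as the county weights
may18_share = sum(t * s for t, _, s in counties) / sum(t for t, _, _ in counties)
reweighted  = sum(r * s for _, r, s in counties) / sum(r for _, r, _ in counties)
print(f"May 18 weights: {may18_share:.1%}; runoff weights: {reweighted:.1%}")
```

Because the county where Lincoln ran strongest keeps more of its vote, re-weighting nudges her statewide share up slightly, here by about three-quarters of a point.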

On Polling Political Junkies

This morning, Nate Silver flagged a pretty glaring difference between two similarly worded and structured questions asked on surveys conducted by Rasmussen Reports and CBS News at roughly the same time. In so doing, he's highlighted a critical question at the heart of an important debate about not just Rasmussen's automated polls, but about all surveys that compromise aspects of their methods: Are the respondents to these surveys skewed to the most attentive, interested Americans? Do Rasmussen's samples skew, to use Nate's phrase, to "political junkies?"

Here is his chart:


CBS News found just 11% of adults who say they are "very closely" following "news about the appointment of U.S. Solicitor General Elena Kagan to the U.S. Supreme Court" in a survey fielded May 20-24. Rasmussen found 37% who say they are "very closely" following "news stories about President Obama's nominee for the Supreme Court" in an automated survey fielded from May 24-25. The answer categories were identical.

In addition to the minor wording differences, the big potential confounding factor is that Rasmussen screened for "likely voters," while CBS interviewed all adults. Nate does some hypothetical extrapolating and speculates that the likely voter model alone cannot account for all of the difference.

Whether you find that speculation convincing or not, the theory that more politically interested people "self select" into automated surveys is both logical and important. GWU Political Scientist John Sides put it succinctly in a blog post last year about an automated poll by PPP:

A reasonable question, then, is whether this small self-selected sample is -- even with sample weighting -- skewed towards the kind of politically engaged citizens who are more likely to think and act as partisan[s] or ideologues.

It is difficult to answer that question definitively, especially about Rasmussen's surveys, in a way that rests on hard empirical evidence rather than informed speculation. The reason is that the difference in mode (automated or live interviewer) is typically confounded by equally significant differences in question wording (examples here and here), or by Rasmussen's use of likely voter filtering where other polls use none. The Kagan example is helpful because the question wording is much closer, but the likely voter confound remains.

I have long argued that Rasmussen could help resolve some of this uncertainty by being more transparent about their likely voter samples, which dominate their releases to a far greater degree than almost any other media pollster. What questions do they use to select likely voters? What percentage of adults does Rasmussen's likely voter universe represent? What is the demographic composition of their likely voter sample, by age, gender, race, income? That sort of information is withheld even from Rasmussen's subscribers.

They could also start reporting more results among both likely voters and all adults. They call everyone anyway, and they would incur virtually zero marginal expense by keeping all respondents on the phone for a few additional questions.

Back to Silver's post. He includes some extended discussion on some of the differences in methodology that might explain why political junkies would be more prone to self-select into Rasmussen's surveys than those done by CBS. I have to smile a little because I outlined the same issues in a presentation at the Netroots Nation conference last August on a panel that happened to include Silver. I've embedded the video of that presentation below. It's well worth watching if you want more details on the stark methodological differences between Rasmussen and CBS News (my presentation begins at about the 52-minute mark; I review much of the same material in the first part of my Can I Trust This Poll series).

Finally, I want to follow-up on two of Nate's comments. Here's the first:

I've never received a call from Rasmussen, but from anecdotal accounts, they do indeed identify themselves as being from Rasmussen and I'm sure that the respondent catches on very quickly to the fact that it's a political poll. I'd assume that someone who is in fact interested in politics are significantly more likely to complete the poll than someone who isn't.

I emailed Rasmussen's communications director and she confirmed that they do indeed identify Rasmussen Reports as the pollster at the beginning of their surveys.

But here's the catch: So does CBS. I also emailed CBS News polling director Sarah Dutton and she confirms that their scripted introduction introduces their surveys as being conducted by CBS News or (on joint projects) by CBS and the New York Times. According to Dutton, they "also tell respondents they can see the poll's results on the CBS Evening News with Katie Couric, or read them in the NY Times."

Second, he argues that CBS will "call throughout the course of the day, not just between 5 and 9, which happens to be the peak time for news programming." I checked with Dutton, and that's technically true: CBS does call throughout the day, up until 10 p.m. in the time zone of the respondent. However, they schedule most of their interviewers to work in the evenings. As such, most of their calling occurs during evening hours because, as Dutton puts it, "that's when people are at home."

More important, CBS makes at least 4 dials to each selected phone number over a period of 3 to 5 days, and they make sure to call back during both evening and daytime hours. The idea is to improve the odds of catching the full random sample at home. That said, if a pollster does not do callbacks -- and Rasmussen does not -- it's probably better to restrict their calling to the early evening because, again, that's when people are at home.

But I don't want to get bogged down in the minutiae. The question of whether automated surveys have a bias toward interested and informed respondents is big and important, especially when we move beyond horse race polling to surveys on more general topics. I'm sure Nate Silver will have more to say on it. So will we.

When to Watch Likely Voters?

My column for this week looks at the issue of "likely voters" as identified on public opinion surveys from a slightly different perspective: When should we pay more attention to the subset of voters that pollsters consider most likely to vote? The answer is obvious when looking at horse race numbers a few weeks before an election, but far less so when considering opinion on issues of public policy. Please click through to read it all.

And thanks to Mollyann Brody, Claudia Deane and Carolina Gutierrez at the Kaiser Family Foundation for providing the data tabulation included in the column.

McDonald: Does Enthusiasm Portend High Turnout in 2010?

This guest contribution comes from Michael McDonald, an Associate Professor of Government and Politics in the Department of Public and International Affairs at George Mason University and a Non-Resident Senior Fellow at the Brookings Institution.

As Nate Silver notes, a recent USA Today/Gallup poll finds that 62% of registered voters say they are "more enthusiastic than usual about voting" in the upcoming midterm elections.

Nate focuses his attention on differential enthusiasm between Democrats and Republicans. Republicans appear more enthusiastic than Democrats, but enthusiasm among partisans of both stripes is at record levels in Gallup polling for a midterm election. I'd like to focus on a different question: What does this level of enthusiasm potentially tell us about voter participation in the 2010 November elections?

This 62% is indeed the highest level of enthusiasm among registered voters in a midterm election since Gallup began asking the question in October, 1994. The next highest level, 49%, was recorded in a June, 2006 poll, a difference of 13 percentage points.


USA Today notes that this is "a level of engagement found during some presidential election years but never before in a midterm." Indeed, this is the case. Looking back at the same question asked in presidential elections since 1996, enthusiasm peaked at 69% in June, 2004 and again at 69% in October, 2008. At a similar point in February, 2008, 63% of registered voters said they were more enthusiastic than usual about voting in that election.


The enthusiasm question appears to tap into underlying voting propensities. Voter turnout rates among those eligible to vote have been relatively stable across the 1994, 1998, 2002, and 2006 midterm elections, as has the self-reported enthusiasm measure. In presidential elections, enthusiasm appears to be related to voter participation: turnout rates have risen from a low point in 1996 to progressively higher levels in 2000, 2004, and 2008, along with the enthusiasm measure.


If this high enthusiasm for congressional elections translates into turnout rates similar to recent presidential elections, it would be exceedingly rare. In the course of U.S. history, midterm turnout rates exceeded presidential turnout rates only around the time of the country's Founding, when Congress was the preeminent branch of government and presidential elections were sometimes uncontested or presidential electors were still chosen by state legislatures. Over the past century, midterm turnout rates have averaged about 15 percentage points lower than contemporaneous presidential elections. History tells us that it is unlikely that the 2010 midterm turnout rate will equal recent presidential turnout rates of 60%+ of those eligible to vote.

Still, absent any knowledge about enthusiasm, we might expect turnout rates to increase in 2010. The long-term pattern has been for midterm turnout rates to move with presidential turnout, and the recent increase in presidential turnout has occurred without a corresponding breakout to the upside in the midterm rates. Looking back to the 1960s, the aggregate election data alone might lead us to expect midterm turnout to rise to near 50% in 2010.

Further tamping down expectations is that the 39% level of enthusiasm in the October, 2000 survey is on par with the 41% in October, 1998 and the 41% in October, 2002, yet the turnout rate in that presidential election was still approximately 15 percentage points higher than in either of those midterm elections. Indeed, the lowest level of enthusiasm, 17%, was registered in the October, 1996 survey. The 1996 presidential turnout rate of 51.7% is a modern low, but it still easily exceeds any recent midterm election.

This disconnect may have something to do with the question wording. The question asked is, "Compared to previous elections, are you more enthusiastic than usual about voting, or less enthusiastic?" Note that the question asks respondents to refer back to previous elections as a comparison point. It may be that respondents are thinking about comparable midterm or presidential elections when answering, rather than a baseline enthusiasm that can be compared across different types of elections.

There is one further caveat to consider. The presidential data show that enthusiasm can swiftly wane. In 2008, voters' enthusiasm in the primaries faded by summer, dropping from 63% in February to 48% in June, before peaking again at 69% in October as the election neared. The enthusiasm observed at this point in time may be a product of circumstances that will not be sustained until November. Then again, even if enthusiasm wilts in the summer, it may well perk up again as November draws near.

At this point, the most reasonable conclusion to draw from the totality of the evidence is that turnout in 2010 will most likely exceed the 41.4% of 2006, and if these current conditions hold the turnout rate may come in just shy of 50%.

How Tight is the Screen? (2010 Edition)

Nate Silver has an interesting catch this morning: On the most recent USA Today/Gallup poll, conducted roughly a week after health care reform passed the House (and 2-4 days after the presidential signing ceremony), both Democrats and Republicans expressed record levels of enthusiasm about voting in the mid-term elections. As Silver points out, the “big problem” for Democrats is that Republican enthusiasm — 69% say they are “more enthusiastic than usual” about voting — is still greater than for Democrats (57%).

As he points out, some of this jump is “very probably…a temporary bounce and will fade as memories of the health care legislation become more distant,” but he concludes with a point worth discussing further:

What I wish the pollsters would do, actually, is to publish the percentage of people in each party who are screened out by their likely voter model. You don’t have to tell us how you’re doing it — but at least let us know in broad strokes how much impact it’s having. How much of Rasmussen Reports’ apparent house effect, for instance, is because they’re applying a likely voter screen when most other pollsters aren’t, and how much of it is because there are some differences — or bugs — in other parts of their data collection and massaging routine? We shouldn’t have to guess; this should be an easy thing for the pollsters to disclose.

That’s half right. What pollsters can do easily, and do not do nearly often enough, is publish the percentage of adults (or of registered voters) who they screen out with their likely voter questions or models. This has been a hobby horse of mine since I started asking pollsters about their likely voter models in 2004. I wrote a two-part series about it in the context of primary polling in the summer of 2007, pushed harder for the percentage of adults that qualified during the run-up to the Iowa caucuses, and returned to the question many times during the 2008 primaries.

That said, we need to take care with the percentage-passing-the-screen statistic. Any pre-election survey probably includes at least some response bias toward genuinely likely voters — truly unlikely voters are presumably more likely to hang up at some point regardless of how they answer a screen question — so a sample of “adults” may begin with a slight skew toward actual voters (though the actual evidence of this phenomenon is surprisingly thin). This is a complicated point, but if such response bias exists, we probably want the percentage of adults that pass the screen to be bigger than the actual turnout percentage among adults.
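A toy model makes the response-bias point concrete. All rates below are invented; the point is only the direction of the skew:

```python
# If unlikely voters are likelier to hang up, even an "adults" sample starts
# skewed toward actual voters (all rates invented for illustration)
true_share = {"likely": 0.40, "unlikely": 0.60}   # of the adult population
response   = {"likely": 0.12, "unlikely": 0.06}   # chance each group completes the call

completed = {g: true_share[g] * response[g] for g in true_share}
likely_in_sample = completed["likely"] / sum(completed.values())
print(f"{likely_in_sample:.0%} of completed interviews are likely voters")  # 57%
```

With these made-up rates, likely voters are 40% of the population but 57% of completed interviews, which is why the percentage passing the screen should run higher than actual turnout if any such bias exists.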

Silver is wrong, however, to say that it’s an “easy thing” for all pollsters to publish the percentage screened out in each party. Yes, it would be relatively easy for pollsters using the sometimes controversial Gallup likely voter model, which typically begins with a sample of all adults, retains the answers to the party identification question for all adults, and then applies a filter and weighting to select and model a likely electorate (though Gallup’s practice of weighting down a middle category of voters on-the-bubble between likely and not likely would complicate things a bit).

But it would be impossible for pollsters who screen for registered and/or likely voters at the beginning of the survey, and who terminate the interview with those who do not qualify, to report the percentage of each party that passes the screen: they usually hang up before asking a party ID question. The same is true for pollsters who begin with samples drawn from official lists of registered voters. They might be able to tell you something about the party registration of those who get screened out (in states with party registration), but only to the extent that they were able to identify the individual they talked to in each household before ending the call. And they can tell you nothing at all about the party preferences of non-registrants, and whatever statistics they produce would be comparable only with other similarly designed polls in the same state. Those two practices, the use of screening and of list samples, apply to virtually all internal campaign polls and most media polls conducted at the state level.

What would make far more practical sense would be for all pollsters to publish the party composition of their likely voter sample. In other words, what percentage of likely voters identify as Democrats, Republicans or independents? Among the most prolific statewide pollsters, SurveyUSA, PPP and Research2000/DailyKos now routinely publish those results. Rasmussen Reports and Quinnipiac do not.

So What's a Likely Voter? Answers from Rasmussen and PPP

I spent the morning at a Midterm Election Preview panel discussion sponsored by our competitor colleagues at the CQ Roll Call Group that featured pollsters Peter Brown of the Quinnipiac University Polling Institute, Tom Jensen of Public Policy Polling and Scott Rasmussen of Rasmussen Reports. During the question-and-answer period I asked a question about my favorite hobby horse: what a "likely voter" is and how pollsters select them.

I directed the question (which begins at about the 1:00 mark) at Rasmussen and Jensen largely because their national surveys on presidential job approval and other issues are among the few that currently report results for likely voters or "voters" and because their reports provide little definition of those terms. The persistent and noticeable "house effect" in the Rasmussen results has led some to conclude that they are "polling a different country than other polling outfits."

I promise a longer post tomorrow summarizing my take on why Rasmussen is different, but since I'm running out of blogging time today, here are the verbatim answers from earlier today followed by a few comments. First, Scott Rasmussen of Rasmussen Reports:

First of all, we actually do have something in our daily presidential tracking poll that says that it's likely voters not adults, and we we do have a link to a page that explains something about the differences, maybe not as concisely or as articulate as I will say here...

There's a challenge to defining a likely voter. The process is a little different than in the week before an election for us than it is in two months before an election than it is in a year before an election. And to give a little history, normally if you would go do a sample of all adults, you go and interview whoever picks up the phone and you model your population sample to the population at large. When you begin to sample for likely voters you do it by asking a series of screening questions.

At this point in time, we use a fairly loose screening process, in the sense that we don't ask details about how certain you are to vote in a particular election next November. In fact, even the term "likely voters" is probably not the best term. I used to use the phrase "high propensity voters," because it was suggesting that these people who were most likely to show up in a typical mid-term election. We're not claiming this is a particular model of who will show up in 2010. When we used the phrase, "high propensity voters" -- I got a bunch of journalists who wrote back saying, "what does that mean?" I tried to explain it and they said, "oh you mean likely voters." So I finally just gave up.

Now for us [what] happens is that from this point in time, from now until Labor Day right before the election we will continue to use this model. These are people who are generally likely to show up in a mid-term election. When we get closer to the election, we add additional screens based on their interest in the election and their certainty of voting in this particular race and so the number does get more precise.

What does it mean in practical terms? Rasmussen Reports and Gallup are the only two polls out there with a daily tracking poll of the President's job approval. If you go back from January 20th on, most of the time you will see that Gallup's reported number is about three or four or five points higher than ours, because these are surveys and there is statistical noise. Sometimes the gap is bigger, sometimes its smaller. In fact there are some days when our number is a little bit higher than Gallup's. But typically, the gap between the adults and the likely voter sample is in the four or five point range.

The reason: Likely voters are less likely to include young adults, people who [as] Tom mentioned were very supportive of the President. They are less likely to include minority voters who are, again, very strongly supportive of this President. And so the gap is consistent.

Now I would explain that, at this point and time, it's a little like the difference between measuring something in inches or in meters, inches or in centimeters: the trends are the same in both cases, the implications are the same in both instances. And, by the way, the ultimate answers are that Republicans strongly disapprove of this President, Democrats strongly approve of this President, and independent voters have grown a little bit disenchanted, but they're not anywhere near the level of discontent that Republicans show. And that's true whether you measure it with likely voters or adults.

Next, Tom Jensen of PPP:

Well, I'll give a very concise answer. For our national polls, we're just pulling a list from Aristotle Incorporated of registered voters, period. We don't do any sort of likely voter sampling on our national polls. On our state level polls for 2010 races, we're polling lists of people who voted in the 2004, 2006 or 2008 general elections. If we were a live interviewer pollster that would be too liberal a sampling criteria, but we do automated polling and people who don't tend to vote in an election aren't going to answer an automated poll, so they just hang up. So we figure the 2008 wave voters we should be calling because some of them will come out in 2010, and those who will not, just hang up.

A few quick notes. First, very little of Rasmussen's explanation of his voter screen appears on the Rasmussen Reports methodology page (the one linked from their daily presidential tracking poll). Second, I'm still not quite clear on the question or questions they currently use to screen for likely voters, although he implies that they ask how often respondents typically vote. I understand that media pollsters often treat these screen questions like a proprietary "secret sauce," although the partisan pollsters that rely on screen questions, including Democracy Corps, Resurgent Republic and Public Opinion Strategies, typically include them in their filled-in questionnaires. Rasmussen Reports could help consumers of its data better understand "what country they are polling" if it did the same.

Finally, about Jensen's comment that "people who don't tend to vote in an election aren't going to answer an automated poll, so they just hang up:" He assumes that to be true -- and it's a perfectly reasonable assumption -- but I am not sure anyone has produced hard evidence yet that non-voters "just hang up." If they do, however, it calls into question the wisdom of assuming that an initial sample of adults called with an automated poll is really a sample of all adults (a question I've wondered about for years, even for pre-election surveys conducted with live interviewers).

Likely Voters and Mid-Term Elections, Part I

It would be a political miracle if the Democrats did not lose seats in the 2010 Congressional elections, yet the polls so far give little hint of such losses. I think that's because most polls are providing a rosier picture for the Democrats by reporting the voting intentions of the general public, or of registered voters, rather than of the much smaller segment of "likely voters" who will ultimately turn out to cast a ballot.

That the Democrats will almost certainly lose House seats in 2010 is attested to by several factors. The most important, of course, is that since the advent of the current two-party system (Republicans and Democrats), the party of the president almost always loses seats in a mid-term election. The best theory for this phenomenon is that disgruntled people (i.e., those who identify with the "out" party) are more motivated to cast a protest vote than the relatively satisfied people (i.e., those who identify with the party of the president) are to cast a vote of support.

The second factor is that, in the wake of the protracted war in Iraq and the sagging economy, Democrats won many seats in 2006 and 2008 that would "normally" go to Republicans. In 2010, with Bush gone and a Democratic administration in charge, Democratic House members in those "normally" Republican seats are going to be quite vulnerable.

The final factor is that as a general rule, Republicans are more likely to turn out than Democrats, because Republicans tend to be higher on the socio-economic scale - generally more educated, with higher incomes, and more actively involved in politics than Democrats.

So, if all of these reinforcing factors suggest the Republicans are likely to gain seats, why aren't the polls showing that? Here are some interesting recent poll results (see pollingreport.com):


July-August 2009 Polls Measuring Support for Congressional Candidates, 2010

Dates           Sample              Pollster                     Democratic advantage (pct. pts.)
Aug 10-13       general public      Daily Kos/Research 2000      --
July 31-Aug 1   general public      CNN/Opinion Research Corp    --
July 24-27      general public      NBC/Wall Street Journal      --
July 22-26      likely voters       --                           --
July 19-23      likely voters       GWU - Tarrance/Lake          --
July 9-13       registered voters   --                           --
July 10-12      registered voters   --                           --

Note that there is little difference in the lead the polls show for Democrats when the sample is either the general public or registered voters -- from six to ten percentage points. However, the two polls that reported results based on "likely voters" show essentially a dead heat (a three-point Democratic lead or a one-point Republican lead).

Nate Silver (at fivethirtyeight.com) suggests caution in relying on likely voter models this early in the 2010 campaign. Generally, I agree that early polls - especially in specific races (as opposed to the more general generic ballots reported above) - need to be viewed with caution. Many people are undecided 10 to 12 months ahead of the election, though some pollsters obscure that fact by using a forced choice format. See, for example, the contrast between Diageo/Hotline and Gallup above, the former showing 30 percent of registered voters undecided, Gallup showing just 7 percent.

Furthermore, different polling organizations use different screeners to arrive at their presumed "likely voters," some more "aggressive" than others. So, it's difficult to make direct comparisons with polls showing different leads, even if they base their results on likely voters, rather than registered voters or the general public.

That said, I would argue that in general we get a more realistic view of voter sentiment if the sample has been screened fairly tightly to produce a relatively small segment of likely voters rather than a much larger group of people - the general public or even "registered voters." In mid-term elections, turnout is only about half of turnout in presidential elections. Thus, screening out the non-voters matters much more for understanding mid-term elections than presidential elections.

So, contrary to Nate Silver's advice, I would suggest that when polls diverge, one based on likely voters is probably a better reflection of the actual electorate than a poll based on the general population or even registered voters.
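A stylized arithmetic example makes the point. The numbers below are my own hypothetical illustration, not any actual poll: adults split 53-47 for the Democrats, but Republican supporters turn out at a higher rate, so the roughly half of adults who actually vote tell a different story.

```python
# Hypothetical illustration of why a likely-voter screen matters more in a
# low-turnout mid-term. All numbers are invented for the example.

dem_adults, rep_adults = 53.0, 47.0      # preference split among all adults
dem_turnout, rep_turnout = 0.45, 0.55    # assumed turnout rates by preference

dem_voters = dem_adults * dem_turnout    # Democratic supporters who vote
rep_voters = rep_adults * rep_turnout    # Republican supporters who vote

adult_margin = dem_adults - rep_adults   # margin among all adults
voter_margin = 100 * (dem_voters - rep_voters) / (dem_voters + rep_voters)

print(round(adult_margin, 1))   # Democratic lead among all adults: 6.0
print(round(voter_margin, 1))   # among actual voters, Republicans lead: -4.0
```

With total turnout near 50 percent (consistent with a mid-term), a six-point Democratic lead among all adults becomes a four-point Republican lead among those who actually vote.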

(In Part II I will discuss the exceptions to the general rule that the president's party loses House seats in mid-term elections, and whether those exceptions are relevant to 2010.)

Murray: Estimating Turnout in Primary Polling

Patrick Murray is the founding director of the Monmouth University Polling Institute and maintains a blog known as Real Numbers and Other Musings.

There are a couple of pieces of accepted wisdom when it comes to contested primary elections versus general elections: 1) turnout has a bigger impact on the ultimate margin of victory in primaries and 2) primaries are more difficult to poll (see point #1).

The voters who show up for primaries come disproportionately from either end of the ideological spectrum. Even in states with closed primaries (i.e. one has to pre-register with a party to vote in its primary), there is still a particular art to determining which groups of voters should be included in the likely voter sample.

Voters' likelihood to turn out generally correlates with their ideological inclination. Last year's Democratic presidential nomination contest provides a good illustration of this. Lower turnout caucus states saw a bigger proportion of highly educated liberal activists participate in the process. These same voters also showed up in the primary states, but they were joined by a good number of less educated, blue-collar Democrats. Result: Obama basically swept the caucus states, while Hillary Clinton held her own in the primaries. Texas, which held both a primary and a caucus that were won by different candidates, is a stark illustration of this turnout effect.

The same is true for Republican primaries. Lower turnout means a larger proportion of the electorate will be staunchly conservative in their views. As turnout increases, it's moderates who are joining the fray, thus diminishing the conservative voting bloc's overall power. And with the GOP being in its present ideologically-splintered state, small changes in turnout can have a real impact in primaries cast as battles between the party's ideological factions.

To some extent, we saw this play out in New Jersey's recent gubernatorial primary where the two leading candidates were seen as representing different wings of the Republican party. Former mayor Steve Lonegan cast himself as the keeper of the conservative flame, while former U.S. Attorney Chris Christie claimed to adhere to core conservative principles (e.g. anti-abortion), but presented himself as a more centrist option. New Jersey's Republican voters agreed - a plurality of 47% described Christie as politically moderate while a majority of 56% tagged Lonegan as a conservative.

The Monmouth University/Gannett New Jersey Poll released a poll nearly two weeks before the June 2 primary showing Christie with an 18 point lead over Lonegan - 50% to 32%. New Jersey has a semi-open primary - meaning both Republicans and "unaffiliated" voters are permitted to vote (although unaffiliateds have their registration changed to Republican if they do vote). So, technically about 3.5 million out of New Jersey's more than 5 million registered voters were eligible to vote in the recent GOP primary. But in the last two contested gubernatorial primaries only between 300,000 and 350,000 ballots were actually cast.

So, how do you design a sampling frame for that? First, it's worth noting that state voter statistics show that extremely few unaffiliated voters ever show up for a primary - certainly not enough to impact a poll's estimates. So we are left with about one million registered Republicans, of whom still only one-third will vote. That is, of course, IF turnout is typical (more on that below).

Our poll for this primary used a listed sample of registered Republican voters who were known to have voted in recent primaries. It was further screened and weighted to determine the propensity of voting in this particular election (based on a combination of known past voting frequency and self-professed likelihood to vote this year). In the end, our model assumed a turnout of about 300,000 GOP voters, based on turnout in the past two gubernatorial primaries.

However, turnout in other recent GOP gubernatorial primaries in New Jersey has gone as low as 200,000 - that was in 1997 when incumbent Christie Whitman went unchallenged. Turnout in contested U.S. Senate primaries is also generally around the 200,000 level. On the other hand, turnout has been much higher than 300,000 as well. It even surpassed 400,000 as recently as 1981.

The GOP primary saw higher than average turnout in 1993 - another year when a trio of Republicans were vying to take on an unpopular Democratic incumbent. So, it was fair to speculate that Governor Jon Corzine's weak position in the polls would give GOP voters extra incentive to turn out in the expectation of scoring a rare general election win. On the other hand, perhaps the state's Republicans have become so demoralized by their poor standing nationally and 12-year statewide electoral drought that turnout could be lower than the 300,000 used for our poll estimate.

Because we had information on actual primary voting history for each voter in our sample - i.e. rather than needing to rely on notoriously unreliable self-reports - it was possible to re-model the data from two weeks ago with alternative turnout estimates. If the GOP primary turnout model was set well above 430,000 - a 40-year record turnout for a non-presidential race - the Christie margin in our poll grew to 23 points. Alternatively, if the turnout model was pushed down to about 200,000 - a typical U.S. Senate race level - the gap shrank to 13 points. In other words, adjusting the primary poll's turnout estimate from 5% to 12% of eligible voters could swing the results by 10 points!

Why? The analysis showed that "strong" conservatives comprise about half of New Jersey's 200,000 "core" GOP turnout - and this group was largely for Lonegan. But when we widened the turnout estimate, more and more moderates entered the mix. As a result, Chris Christie gained one point on the margin for approximately every 25,000 extra voters who "turned out."
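For readers who like to see the arithmetic, the sensitivity described above can be sketched as a simple linear rule. The anchoring values and the one-point-per-25,000-voters slope are rough approximations taken from the figures in this post, not the actual turnout model:

```python
# Rough linear sketch of the turnout sensitivity described above.
# Assumptions (mine): a 13-point Christie margin at the 200,000-voter "core"
# turnout, plus roughly one point per 25,000 additional voters.

def modeled_margin(turnout):
    """Approximate Christie-minus-Lonegan margin (pct. points) at a given turnout."""
    return 13 + (turnout - 200_000) / 25_000

print(modeled_margin(200_000))  # 13.0 - the conservative "core" electorate
print(modeled_margin(430_000))  # about 22.2 - near the 23-point record-turnout scenario
```

Plugging in the poll's assumed turnout of 300,000 gives a margin of about 17 points, in the neighborhood of the 18-point lead the poll actually reported.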

On primary day, Christie ended up beating Lonegan by a respectable 13 point margin - 55% to 42% - on a 330,000 voter turnout. Based on the model above, if Republicans had been a lot less enthusiastic, Lonegan may have been able to narrow this gap to 8 points. On the other hand, record level turnout would have given Christie a 16 or 17 point win.

Turnout: From NJ to VA

Apologies for missing this, but on Monday Patrick Murray, founding director of the Monmouth University Polling Institute, posted a terrific primer on this Tuesday's New Jersey primary. His post includes an intriguing description of their methodology and how they "modeled" turnout that may have lessons for the polls out now on next week's Virginia primary:

The Monmouth University/Gannett New Jersey Poll released two weeks ago showed Christie with an 18 point lead - 50% to 32% for Lonegan. For the record, that poll was conducted using a listed sample of registered Republican voters in the state who were known to have voted in recent primaries. It was further screened to determine the propensity of voting in this particular election (based on a combination of known past voting frequency and self-professed likelihood to vote this year). In the end, our model assumed a turnout of about 300,000 GOP voters on June 2 (give or take 10,000).


Variations in turnout tend to have more impact on primary results than they do on general elections. In general elections, the preferences of non-voters tend to line up fairly well with those who actually go out to the polls on election day. However, for primary elections, particularly with an ideologically-fractured GOP electorate, a factor of just a few thousand voters simply deciding whether or not to show up can swing a close race.

It doesn't look like we have a particularly tight race in this case, although that 18 point poll gap may have narrowed since our last sounding on May 20. I did re-examine our data using alternative turnout estimates. If the GOP primary turnout model is set to well above 430,000 - i.e. a 40-year record turnout for a non-presidential race - the Christie margin in our poll grows to 23 points. Alternatively, if the turnout model is pushed down to about 200,000 - i.e. a typical U.S. Senate race - the gap shrinks to 13 points. That's a swing of 10 points based on turnout alone!

I asked Murray if he would provide us with some post-primary thoughts via a "guest pollster" post, and if all goes well, we should have that posted for you tomorrow. But consider his observations about turnout in New Jersey in the context of the polls released in the last week or two in Virginia:

  • The two pollsters that have shown Terry McAuliffe doing best -- SurveyUSA and Research2000 -- have used random digit dial (RDD) samples that cannot use the sort of actual vote history information available for individual respondents on list samples.
  • The two pollsters that have sampled using registered voters lists -- Public Policy Polling (PPP) and Moran's pollster, Greenberg-Quinlan-Rosner (GQR) -- have consistently shown McAuliffe running 7 to 10 percentage points lower than the polls using RDD samples. I reported details of the PPP sampling method here. I assume that GQR uses lists in this race because their first release says they identified likely voters using both "vote history in Virginia and self-reported likelihood to vote in the upcoming gubernatorial primary" (emphasis added). Again, vote history is only available with a voter list.

[Note: If you click on any data point in our Virginia chart, embedded below, you can connect-the-dots for surveys from individual pollsters and see how each compares to the overall trend]

  • On their last two polls, PPP provides crosstabulations that compare two groups: (1) households with vote history in either of the very low turnout primaries in 2005 or 2006 with (2) households where voters participated in only the much higher turnout 2008 presidential primary. Both polls show Deeds doing better (by 8-10 points) in the lower turnout households. McAuliffe scored 9 points lower in the low turnout households two weeks ago, but just two points lower earlier this week.
  • SurveyUSA's summary of their latest survey out today includes these findings that suggest a similar correlation: McAuliffe does best among the subgroups with the historically lowest levels of turnout:

McAuliffe's constituents are Independent and young. In SurveyUSA's turnout model, 20% of likely Primary voters are Independent. If this group votes in smaller numbers, McAuliffe's support is overstated here. In SurveyUSA's turnout model, 19% of likely voters are age 18 to 34. If this group votes in smaller numbers, McAuliffe's support is overstated here.

Combine these findings with the considerable self-reported uncertainty -- half (52%) of SurveyUSA's respondents and 44% of the voters on the last PPP survey say they could still change their minds -- and we get a race where the final result may look very different from whatever the final round of polls "predict." Hang on to your hats.

PS: Several big unknowns remain in this race, but one big one is now a bit clearer. Creigh Deeds's campaign just sent out a release announcing that they will begin airing a television advertisement touting his recent Washington Post endorsement "on broadcast and cable stations in Northern Virginia." Note, however, that the release provides no details about how much time Deeds is buying on the very expensive DC broadcast stations (that also reach into Virginia, Maryland and DC). If they are committing to a decent sized broadcast buy in the DC market, it's a major gamble. If any of our readers catches this new ad on Washington DC broadcast television, please email me or leave a comment below.

[Prior association disclosed: David Petts, currently the pollster for the Deeds campaign, was my business partner through 2006].

How Do Polls and Exit Polls Handle Early Voting?

The most common questions I have been getting via email the last two weeks are about early voting. Specifically, how are pollsters dealing with early voting on the pre-election polls we report and how will exit pollsters deal with the early and absentee voters that do not show up at polling places on Election Day?

The answer to the first question is that just about every pollster is either modifying their screen questions or asking additional questions to attempt to identify early voters. Here is a sampling of how some of the national pollsters ask about early voting.

  • CBS News/New York Times: How likely is it that you will vote in the 2008 election for President this November - would you say you will definitely vote, probably vote, probably not vote, or definitely not vote in the election for President, or have you already voted?
  • Fox News/Opinion Dynamics: When do you plan to vote in the presidential election -- did you already vote, do you plan to vote early -- meaning sometime before Election Day, or will you vote on Election Day?
  • Gallup/USA Today: Which of the following applies to you - you have already voted in this year's election, either by absentee ballot or early voting opportunities in your state, you plan to vote before Election Day, either by absentee ballot or early voting opportunities in your state, or you plan to vote on Election Day itself?
  • GWU/Battleground: What is the likelihood of your voting in the elections to be held in November -- are you extremely likely, very likely, somewhat likely, or not very likely at all to vote? (Accepts "already voted" as a volunteered response).
  • Pew Research Center: Do you plan to vote in the presidential election, have you ALREADY voted, or don't you plan to vote?

While verbatim questionnaires are harder to come by for state level polling, the questions are presumably similar. It is worth keeping in mind that, as with self-reported measures of voting, these questions may overstate the degree of early voting, as some respondents will claim to have voted when they have not.

But a key point that some seem to miss: None of the pre-election polls (or at least none that I know of) are excluding early voters from their samples. The totals reported include both early voters and those still considered "likely" to vote next week, so no, we do not have to try to somehow account for early voting in interpreting the poll numbers posted and estimated on Pollster.com or other poll aggregation sites.

What about the exit polls? The exit pollsters have, for several elections, conducted telephone surveys the week before the election among those who have already voted, in states with a rate of early voting they consider significant enough to affect the results. On election night, they combine the early voting telephone survey results with interviews conducted at polling places (except for Oregon, where all voters cast ballots by mail). In 2004, they did telephone surveys of early voters in 12 states -- Arizona, California, Colorado, Florida, Iowa, Michigan, Nevada, New Mexico, North Carolina, Tennessee, Texas and Washington -- and nationally (for their national exit poll).

A few days ago, Kate Phillips of the New York Times reported these helpful details on this year's plans, which will apparently include six more states:

Joe Lenski, the executive vice president of Edison Media Research, which along with Mitofsky International, conducts the exit polls for a consortium of news organizations, said the group has already expanded its plans for telephone surveys of early voters to 18 this year from a dozen states in 2004. The states are selected based on their competitiveness in the election and on their high rates of voters who cast ballots before Election Day.


Beginning this week through the weekend, Edison/Mitofsky will conduct random phone surveys in those 18 states, asking detailed questions of people who actually say they voted early. Mr. Lenski wouldn't release the list of all 18 states, but it's pretty apparent that California, Colorado, Nevada, Florida, North Carolina, Georgia and New Mexico will be among the targets.

We're told that Pennsylvania and Virginia - still considered battleground states - won't be among those surveyed before Election Day because those states' rates of early voting/absentee voting are traditionally lower than others.

One caveat: This survey is conducted among landline telephone users only, despite pollsters' growing practice of capturing cellphone users as well. Mr. Lenski and others asserted that shouldn't make much of a difference, because recent research indicates that there aren't huge differences on issues between landline and cellphone respondents. But the Pew Research Center has detected a slight difference when it comes to horse-race figures, suggesting that cellphone surveys capture more younger voters who heavily favor Senator Barack Obama. On Election Day, exit poll interviews will include questions about cells.

It is probably worth adding that exit poll interviews are just one component of the data that the networks use to estimate the election result and (ultimately) weight the exit poll tabulations we will see on Election Night. They will be looking at samples of actual returns very shortly after the polls close. Some states will make separate tabulations of early voting available immediately. Needless to say, the "decision desk" analysts will consider the potential impact of early voting in their projections.

Phillips' article has much more on the early voting phenomenon. It's worth reading in full.

[An earlier version of this post mangled the Fox News early vote questions -- apologies for the error].

Miller: What Pollsters Can Learn From Climate Modelers

Guest Pollster Clark A. Miller is an Associate Professor at Arizona State University. His post expands on a comment left on Pollster.com on Friday.

As Mark Blumenthal and Nate Silver have both noted in detail of late, the design of likely voter models can significantly impact how pollsters interpret and transform the raw data of voter samples into the topline results we see at pollster.com, fivethirtyeight.com, and other sites covering election polling. In turn, Mark and Nate observe, likely voter model design depends significantly on judgments that pollsters make about how to model the likelihood that any voter sampled will actually turn out and vote in the election. As we have all seen in the last few days, differences in how such judgments get made by different pollsters, combined with differences in the samples of voters collected by each poll, can mean the difference between a 1-point and a 14-point spread between the respective candidates for President.

A key challenge for consumers of polls - whether citizens, journalists, or politicians - is sorting out to what extent the likely voter model or the underlying raw data sample is responsible for variations in poll outcome. In fact, this sorting out of how judgments made by modelers impact model design and outputs is a general challenge in the use of science to inform policy choices, which I have studied for much of the past two decades. Judgments like this are inevitable in any scientific work, which is why policy officials turn to experts to make judgments on the basis of the best available knowledge, evidence, and theories.

One case that I have looked at in detail is the use of computer models of the Earth's climate to make predictions about whether the planet is experiencing global warming. As I'm sure most of you know, models of climate change have been viewed skeptically by many people. I believe the trials and tribulations of climate modelers - and also their approaches to addressing skepticism about their judgments - offer three useful insights for pollsters working with likely voter models.

  1. Transparency - climate models are far more complex than most polls, but climate modelers have made significant efforts to make their models transparent, in a way that many pollsters haven't. (In much the same way, computer scientists have called for the code used in voting machines to be open source.) By making their models transparent, i.e., by telling everyone the judgments they use to design their model, pollsters would enhance the capacity of other pollsters and knowledgeable consumers of polls to analyze how the models used shape the final reported polling outcome. They would also do well to publish the internal cross-tabs for their data.
  2. Sensitivity - climate modelers have also put a lot of effort into publishing the results of sensitivity analyses that test their models to see how they are impacted by embedded judgments (or assumptions). This is precisely what Gallup has done in the past week or so, in a limited fashion, with its "traditional" and "extended" LV models and its RV reporting. By conducting and publishing sensitivity analyses, Gallup has helped enhance all of our capacity to properly understand how their model responds to different assumptions regarding who can be expected to vote.
  3. Comparison - climate modelers have also taken a third step of deliberate comparisons of their models using identical input data. The purpose of such comparison is to identify where scientific judgments were responsible for variations among models, and where those variations resulted from divergent input data. Since the purpose of polling is to figure out what the data are saying, it is essential to know how different models are interpreting that data, which can only be done if we know how different models respond to the same raw samples.

The reason climate modelers have carried out this activity is to help make sure that the use of climate model outputs in policy choices was as informed as possible. This can't prevent politicians, the media, or anyone else from inappropriately interpreting the outputs of their models, but it can enable a more informed debate about what models are actually saying and, therefore, how to make sense of the underlying data. As the importance of polling grows, to elections and therefore to how we implement democracy, pollsters should want their polls to be as informative as possible to journalists, politicians, and the public. Adopting model transparency, sensitivity analyses, and systematic model comparisons could go a long way toward creating such informed conversations.
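To make the third point concrete, here is a toy sketch of what a model comparison on identical input data might look like. The respondents and screening rules below are entirely hypothetical, loosely echoing the "expanded" (stated intention only) versus "traditional" (intention plus past vote) distinction discussed elsewhere on this site:

```python
# Toy model comparison: run the SAME raw sample through two different
# likely-voter screens and compare the toplines. All data is hypothetical.

respondents = [
    # (candidate preference, says they will vote, voted in the last election)
    ("D", True, False), ("D", True, True), ("D", True, False),
    ("R", True, True), ("R", True, True), ("D", False, False),
    ("R", True, True), ("D", True, False), ("R", False, True),
    ("D", True, True),
]

def margin(sample):
    """Democratic margin in percentage points."""
    d = sum(1 for pref, *_ in sample if pref == "D")
    return 100 * (2 * d - len(sample)) / len(sample)

expanded = [r for r in respondents if r[1]]              # stated intention only
traditional = [r for r in respondents if r[1] and r[2]]  # intention AND past vote

print(margin(expanded))     # 25.0: D +25 under the looser screen
print(margin(traditional))  # -20.0: R +20 under the stricter screen
```

Identical raw data, two defensible screens, and the topline swings from a 25-point Democratic lead to a 20-point Republican one; knowing which judgment produced which number is exactly what systematic comparison buys you.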

The Art and Science of Choosing Likely Voters

On Wednesday Nate Silver posted a helpful table that compared registered voter and likely voter samples on seven recent national surveys, including both the "traditional" and "expanded" likely voter models reported every day by Gallup.

081023_538 table

He noticed that the polls "appear to segregate themselves into two clusters," one showing a 4-6 point difference between the likely and registered voter models and one showing essentially no difference:

The first cluster coincides with Gallup's so-called "traditional" likely voter model, which considers both a voter's stated intention and his past voting behavior. The second cluster coincides with their "expanded" likely voter model, which considers solely the voter's stated intentions. Note the philosophical difference between the two: in the "traditional" model, a voter can tell you that he's registered, tell you that he's certain to vote, tell you that he's very engaged by the election, tell you that he knows where his polling place is, etc., and still be excluded from the model if he hasn't voted in the past. The pollster, in other words, is making a determination as to how the voter will behave. In the "expanded" model, the pollster lets the voter speak for himself.

Nate offered several good reasons why the traditional likely voter models may be missing the mark this year, as well as some reasonable suggestions of ways pollsters might check their assumptions. His bottom line, however, is that he considers the 4-6 point gap between registered and likely voters "ridiculous" and issued a "challenge" to the pollsters showing closer margins to "explain why you think what you're doing is good science."

Now I'm a fan of Nate's work at FiveThirtyEight.com and I share his skepticism about placing too much faith this year in more restrictive likely voter models that place great emphasis on past voting. But having said that, I think it's a bit unfair to imply that the models used by pollsters like Franklin & Marshall and GfK amount to bad "science."

The science and art of likely voter models is worth considering. I've long argued that political polling is a mix of both science and art (just check the masthead of my old blog), and nowhere is the "art" of this business more evident than in the way pollsters select likely voters. Whether it's the likely voter model or screen, or decisions about what sort of sample to use or how to weight the results, pollsters typically make a series of subjective judgments that are at best informed by science. One reason that no two pollsters use exactly the same "model" is that the science of predicting whether a given individual will vote is so imprecise.

As I wrote in my column earlier this week, likely voter models had their origins in a series of "validation" studies first done by pollsters in the 1950s, when they mostly interviewed respondents in person. Since the interviewer visited each respondent at home, they could easily obtain their name and address. After the election, pollsters with sufficient resources could send their interviewers to the offices of local election clerks to look up whether each respondent had actually voted. Gallup used proprietary validation studies to help develop its traditional likely voter model, and the validation data collected by the University of Michigan's American National Election Studies (ANES) from the 1950s through the 1980s helped guide a generation of political pollsters.

Unfortunately, the ANES eventually stopped doing validation studies, but the data are readily available online, so I downloaded the 1980 survey and ran the cross-tabulations that follow. In 1980, ANES followed its standard practice, conducting an in-person interview with a nationally representative random sample of voters in October, then following up with a second interview with the same respondents after the election in November.

The following table shows results from questions asked before the 1980 election about whether the respondent was registered and whether they intended to vote, plus a question asked afterwards about whether they had actually voted. (A few caveats: first, the data shown here are unweighted, as I could find no documentation or weight variables in the materials online. Second, roughly 18% of the respondents are omitted from this table because the researchers could not confirm their registration status. Third, obviously, the study is 28 years old, although a more recent validation study conducted in Minnesota by Rob Daves, now a principal of Daves & Associates Research, yielded very similar findings).

081024 NES1980_A.png

The middle column represents respondents who were actually registered to vote, but had no record of voting in the 1980 general election. And no, that's not a typo. Eighty-four percent (84%) of these confirmed non-voters said they planned to vote. Their answers were more accurate after the election, but still, nearly half (44%) of the non-voters claimed inaccurately a few weeks later that they had voted.

The far right column shows the respondents who were confirmed as non-registrants. Nearly a third (30%) told the interviewer that they were registered to vote during their first, pre-election interview, and 45% said they intended to vote. After the election one in five of those with no record of being registered to vote (21%) claimed they had cast a ballot.

These results are not unusual. They are broadly consistent with previous ANES studies. Collectively, they illustrate the fundamental challenge of identifying "likely voters." If you "let the voter speak for himself," he (or she) often overstates their true likelihood of voting. Looking back, many also claim to have voted when they have not -- something to keep in mind when looking at the crosstabulations out this week for those reporting they have voted early.

Now check the patterns on two additional questions about past voting and interest in the campaign. Again, you see strong but imperfect correlations. Those who say they usually vote and who express high interest in the campaign tend to vote more often than those who do not.

081024 NES 1980-2.png

Since voters tend to overstate their intentions, pollsters like Gallup (and most of the others in Nate Silver's table) typically combine questions about intent to vote, past voting, interest in politics and (sometimes) knowledge of voting procedures into an index. A respondent who says they are registered, plans to vote, has voted in all previous elections and is very interested in politics might get a perfect score. A respondent who reports doing none of those things gets a zero. The higher the score, the more likely they are to vote. [I should add: I'm giving you the over-simplified, "made-for-TV-movie" version of how this typically works -- as per one of the comments below, Gallup and many others give "bonus points" to younger voters to try to compensate for their inability to say they've voted in previous elections].
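In code, that kind of index is just a count of affirmative answers. The sketch below is deliberately over-simplified; the items and scoring are illustrative, not Gallup's actual instrument:

```python
# Hypothetical cutoff-style likely voter index: count affirmative indicators.
# The item list and equal weighting are my simplifications for illustration.

def lv_score(resp):
    """Count of affirmative likely-voter indicators (0 through 5)."""
    items = ("registered", "plans_to_vote", "voted_before",
             "high_interest", "knows_polling_place")
    return sum(bool(resp[item]) for item in items)

habitual = {"registered": True, "plans_to_vote": True, "voted_before": True,
            "high_interest": True, "knows_polling_place": True}
disengaged = dict.fromkeys(habitual, False)

print(lv_score(habitual))    # 5 - the "perfect score"
print(lv_score(disengaged))  # 0
# A cutoff model then keeps only respondents whose score clears a threshold
# chosen to match the pollster's expected level of turnout.
```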

Some pollsters (such as Gallup and others who use variants of their "traditional" model) will use that index to select the portion of their adult sample that corresponds to the level of turnout they expect (they use the index to screen out the unlikely voters). A few pollsters (CBS News/New York Times and Rob Daves when he conducted the Minnesota Star Tribune poll) prefer to weight all respondents based on their probability of voting. The table below (from my post four years ago on the CBS model) shows a typical scale used for this purpose, based on the same 1980 validation data presented above.

081024 traugott table.png

So given all this evidence, why am I skeptical of more restrictive models? Look again at any of the tables above. Neither the individual questions nor the more refined index can perfectly predict which voters will turn out. For example, in the table above, more than a quarter (27.6%) of the voters with the lowest probability of voting -- those who would be disqualified as "likely voters" by most "cut-off" models -- did in fact vote in 1980. And almost as many of the voters scored with the highest probability of voting did not vote. (That's one reason I prefer the CBS model, which weights all registered voters by their probability of voting rather than tossing out the least likely.)
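The difference between the two approaches is easy to see on a hypothetical mini-sample run through both a cutoff screen and a CBS-style probability weighting (all numbers below are invented for illustration):

```python
# Contrast on the same hypothetical data: a cutoff model discards
# low-probability respondents outright, while a CBS-style model keeps
# everyone, weighted by estimated probability of voting.

# (candidate preference, estimated probability of voting)
sample = [("D", 0.9), ("R", 0.8), ("D", 0.3), ("D", 0.2), ("R", 0.9)]

def weighted_margin(rows):
    """Democratic margin (pct. points), each respondent counted by weight."""
    d = sum(w for pref, w in rows if pref == "D")
    r = sum(w for pref, w in rows if pref == "R")
    return 100 * (d - r) / (d + r)

# Cutoff model: keep only respondents with probability >= 0.5, unweighted.
cutoff = [(pref, 1.0) for pref, p in sample if p >= 0.5]

print(round(weighted_margin(cutoff), 1))  # -33.3: marginal D-leaners tossed out
print(round(weighted_margin(sample), 1))  # -9.7: they still count, partially
```

When the marginal voters lean toward one side, tossing them out moves the topline much further than down-weighting them does, which is the heart of the disagreement between the two designs.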

Still, the best any of these models can do, as SurveyUSA's Jay Leve put it in an email to me last week in describing his own procedures, is "capture gross changes" in turnout from year to year. "We believe," he continued, "no model in 2008 is capable of capturing fine changes" in turnout. I agree. I also fear, as I did four years ago, that models that try to closely "calibrate" to a particular level of turnout overlook the strong possibility that the respondents willing to participate in a 5 to 15 minute interview on politics are probably more likely to vote than those who hang up or refuse to participate. In other words, some non-voters have already screened themselves out before the calibration process begins.

The best use of these highly restrictive "likely voter models," in my view, is to determine when the level of turnout has the potential to affect the outcome of an election. Put another way, the likely voter models typically produce results that differ only slightly from the larger pool of registered voters. However, in relatively rare elections -- and 2008 appears to be such an example -- the marginal voters tilt heavily to one candidate. Surveys have been showing for months that Barack Obama stands to benefit if his campaign can help increase turnout among the kinds of registered voters that typically do not vote.

The fact that the likely voter models are producing inconsistent results provides additional confirmation of that finding. As Nate Silver points out, some likely voter models (presumably the ones putting more emphasis on past voting) are showing closer results than other models that appear to be less restrictive. The problem is that determining which model is the most appropriate is not a matter of separating science from non-science, and the differences between them are sometimes subtle. Many of the presumably less restrictive models used by national pollsters (ABC/Washington Post and CBS/New York Times, for example) likely include at least some measures of past voting. The true margin that currently separates Obama and McCain probably falls somewhere in between these various "likely voter" snapshots.

Once the votes are counted, we will have a better idea which models are coming closest to reality. Either way, no single model can claim unique "scientific" precision. All involve judgment calls by the pollsters.

[Typo corrected]

A Likely Voter Story

My NationalJournal.com column for the week is now posted online. It looks at pollster likely voter models and the question of whether they will be able to capture an increase in turnout should it occur this year. The short version is that very few are placing great weight on measures of past voting, and virtually none are using methods that would systematically exclude new registrants.

The topic of likely voter models is rich and complex and next to impossible to summarize in an 800-word column. Four years ago this week, I did an eight-part series on the topic (including a guide to the methods used by almost all of the best known pollsters), and most of what I wrote then still applies. I tried to use today's column to concentrate on the degree to which the current models stress past vote behavior (answer: not much). In preparation for this column, I sent some additional questions to various pollsters about this topic, and I will try to blog those over the coming week.

I'll have more to say about this over the next two weeks, but the combination of cell phone interviewing (or the lack thereof), party weighting and the emphasis given to reports of past voting in likely voter models or screen questions appears to explain why some polls (IBD/TIPP, Battleground, Zogby/Reuters and possibly Rasmussen) are showing a slightly closer race nationally than other surveys.

Gallup's New Likely Voter Model

Though I caught reference to it elsewhere, I managed to overlook the detailed description in today's Gallup Daily release of how they will report "likely voter" results for the rest of the campaign:

Likely Voter Estimates

Obama's current advantage is slightly less when estimating the preferences of likely voters, which Gallup will begin reporting on a regular basis between now and the election. Gallup is providing two likely voter estimates to take into account different turnout scenarios.

The first likely voter model is based on Gallup's traditional likely voter assumptions, which determine respondents' likelihood to vote based on how they answer questions about their current voting intention and past voting behavior. According to this model, Obama's advantage over McCain is 50% to 46% in Oct. 9-11 tracking data.

The second likely voter estimate is a variation on the traditional model, but is only based on respondents' current voting intention. This model would take into account increased voter registration this year and possibly higher turnout among groups that are traditionally less likely to vote, such as young adults and racial minorities (Gallup will continue to monitor and report on turnout indicators by subgroup between now and the election). According to this second likely voter model, Obama has a 51% to 45% lead over McCain.

With a fifty-year time series of presidential polling to consider, Gallup has often demonstrated a reluctance to change its methods. As such, Gallup does deserve credit for trying, as Jay Carney put it today, "to apply a [new] model that accounts for the electorate's likely new complexion," even if they are essentially "hedging their bets by going with two models."

Of course, that hedging presents us with a difficult decision to make about which Gallup results to include in our national trend charts. Our usual rule is to give preference to results among registered voters over samples of adults, and to "likely voter" samples over registered voters. Charles Franklin and I had a two-part exchange on this subject back in August that explains the rationale for our usual rule. As Franklin put it, "our first rule for Pollster is that we don't cherry pick." So we rely on a simple inclusion rule that relies on the pollsters' judgements:

Our decision rule says "trust the pollster" to make the best call their professional skills can make. It might not be the one we would make, but that's why the pollster is getting the big bucks. And our rule puts responsibility squarely on the pollsters shoulders as well, which is where it should be.

Unfortunately, in this case, Gallup is producing two different "likely voter" models without expressing a clear preference for either. So in this rare case, we will exercise our own judgement and opt to plot Gallup's newer "Likely Voter Model II," at least for the time being. Why? First, the Likely Voter II (the one based only on respondents' current voting intention) splits the difference between the registered voter results we have been reporting for the Gallup Daily and the traditional model.

Second, and more important, the traditional Gallup likely voter model has been producing samples that have significantly fewer 18-to-29-year-olds than both the likely voter models of other pollsters and available estimates of the 2004 electorate. While no one can be certain about who will vote, the least likely outcome is a 2008 electorate that is older than those who voted in 2004.

Now, should Gallup change and express a clear preference for either model, we will yield to their judgement. Until then, we will plot the "likely voter II" model for both Gallup Daily (as of today) and the remaining USA Today/Gallup polls.

Update: Nate Silver comes to the same conclusion. 

Polling Registered vs. Likely Voters: 2004

As Pollster.com readers have no doubt noticed, there has been much discussion in the posts and the comments here about the merits of polling registered voters (RV) versus likely voters (LV). Mark and Charles have been debating this point in their most recent exchanges about whether it is better to include LV or RV results in the Pollster.com poll averages. Charles's last post on this topic raised the following questions:

"There is a valid empirical question still open. Do LV samples more accurately predict election outcomes than do RV samples?"

Ideally, I'd have time to go back over 30 or more years of polling to weigh in on this question. Instead, I thought I'd go back to 2004 and get a sense of how well RV versus LV samples predicted the final outcome. To do this, I used the results from the final national surveys conducted by eight major survey organizations. For each of these eight polls (nearly all of which were conducted during the last three days of October), I tracked down the Bush margin among both RVs and among LVs. The figure below demonstrates the difference in the Bush margin for the LV subset relative to the RV sample from the same survey.


For most polls, LV screens increased Bush's margin, including three surveys (Gallup, Pew, and Newsweek) where Bush did 4 points better among LVs than he did among RVs. But using a LV screen did not always help Bush. In three polls (CBS/New York Times, Los Angeles Times, and Fox News), his margin remained the same, and in the Time poll (which was conducted about a week earlier than the other surveys) Bush actually did 2% worse among LVs.

Of course, this doesn't really tell us which method was more accurate in predicting the general election outcome, just which candidate benefited more from the LV screens. To answer which was more accurate, we can plot each poll's Bush margin among both RVs and LVs to see which came closest to the 2.4% margin that Bush won in the popular vote. This information is presented in the figure below, which includes a dot for each survey along with red lines indicating the actual Bush margin.


Presumably, the best place to be in this plot is where the red lines meet. That would mean that both your RV and LV margins came closest to predicting the eventual outcomes. But, if you are going to be closer to one line over the other, you'd rather be close to the vertical line than the horizontal line. This means that the polling organization's LV screen helped them improve their final prediction over just looking at RVs. If the opposite is true (an organization is closer to the horizontal line than they are to the vertical line), their LV screen actually reduced their predictive accuracy.

The CBS/New York Times poll predicted a 3 point Bush margin for both its RV and LV samples, meaning it was just 6/10ths of a point off regardless of whether they employed their LV screen. Four organizations (Pew, Gallup, ABC/Washington Post, and Time) increased the accuracy of their predictions by employing the LV screens, coming closer to the vertical line than they do to the horizontal line. Gallup's LV screen appeared to be most successful, since it brought them closest to the actual result (predicting a 2 point victory for Bush despite the fact that their RV sample showed a 2 point advantage for Kerry).

On average, the RV samples for these eight polls predicted a 0.875-point Bush advantage while the LV samples predicted a 2.25-point advantage for Bush, remarkably close to the actual result. Of course, this is just one election, but it does appear as though likely voters did a better job of predicting the result in 2004 than registered voters. On the other hand, this analysis reinforces some other concerns about LV screens, the most important of which is the fact that some LV screens created as much as a 4 point difference in an organization's predictions while in three cases LV screens produced no difference at all. It is also important to note that these are LV screens employed at the end of a campaign, not in the middle of the summer, when it is presumably more difficult to distinguish LVs. Ultimately, the debate over LV screens is an important one and the 2008 campaign may very well provide the biggest challenge yet to pollsters trying to model likely voters.
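The comparison logic above can be restated as a short sketch. Only the actual Bush margin (+2.4 points), the two eight-poll averages, and the Gallup RV/LV example come from the post; the helper function is just a restatement of the "closer to the vertical line than the horizontal line" test.

```python
# Re-checking the 2004 RV-vs-LV comparison described above.

ACTUAL = 2.4  # Bush's 2004 popular-vote margin, in percentage points

def lv_screen_helped(rv_margin, lv_margin, actual=ACTUAL):
    """True if the likely-voter margin landed closer to the actual result
    than the registered-voter margin from the same survey."""
    return abs(lv_margin - actual) < abs(rv_margin - actual)

# Gallup's final poll: RVs showed Kerry +2 (i.e. Bush -2), LVs showed Bush +2.
print(lv_screen_helped(rv_margin=-2, lv_margin=+2))  # True

# The eight-poll averages cited above:
rv_error = abs(0.875 - ACTUAL)  # RV samples missed by about 1.5 points
lv_error = abs(2.25 - ACTUAL)   # LV samples missed by about 0.15 points
print(round(rv_error, 3), round(lv_error, 2))
```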

Time/SRBI: Another Take on Modeling Likely Voters

Are all "likely voter" models created equally? Not at all.

Case in point: the comment left yesterday by George Mason University political science Professor Michael McDonald about the latest Time/SRBI poll:

Continuing my war on likely voter models...

Here we have 808 "Registered Likely Voters." Q1 reports 100% of the sample is registered and Q2 reports 90% are "definitely" going to vote and 10% "probably." I guess this means that registered likely voters must have to respond affirmatively to being registered and "definitely" or "probably" to voting. This is different from Gallup, which requires likely voters to have a past history of voting and to express an interest in the campaign. There is no indication of weighting in this survey, so who knows what it going on there.

If I am correct, then this two-question likely voter model seems less biased against young voters and less volatile due to changing interest. This may explain the stability since June in this poll compared with the USAToday/Gallup poll.

Mike's theory seemed plausible, so I sent an email to Mark Schulman, CEO of Abt SRBI, the firm that conducts the Time poll. Here is his full response:

Mike, the Time sample is indeed weighted based upon the entire cross-section sample, as are most election surveys. We retain demographics for the entire sample, registered or not, and weight the entire cross-section sample on the usual Census demographic variables. The 100% you cite is the total of self-reported registered voters who are then asked about likelihood to vote. It does not include unregistered screen outs, who skip straight to the weighting demographics. I see that this can cause confusion. I'm glad that you requested this clarification.

You are correct in that we are not currently using past vote in our model. My objective in the pre-convention polling is to be fairly inclusive in the voter model until after the nominating conventions, when the campaigning starts in earnest. We're likely being a bit too inclusive with the light voter screen, but this still improves upon reporting based upon registered voters. Research on models which include "interest in campaign" and related questions finds variability in the composition of the likely voter profile during early campaign period, leading to some volatility in the estimate. This volatility is reduced as the election approaches.

We always tighten the model a notch after the nominating conventions. To be perfectly honest, I don't claim to have all the answers at this point on which approach we will use to tighten the model. I'm concerned about the likely influx of new voters, young voters, newly registered voters, newly activated voters. In 2004, we had an increase in turnout, even with an incumbent whose job rating was still just below 50% at that time. I don't have a fix at the moment on what to expect in 2008. Our plan is to consult with several leading experts in turnout models later this month and then make some decisions on which approach to take on our turnout model and targets. We're not wedded to any one approach. FYI, for internal purposes, we do break out our horse-race data by likelihood to vote to gauge the impact of smaller vs. larger turnouts.

I do wish to emphasize that we should not strictly abide by past turnout percentages reported by the U.S. Census. Our landline telephone universe is smaller than the Census CPS universe because of undercoverage. Therefore, our target turnout number will be higher than Census turnout trend data would suggest.

Thank you again for requesting this clarification.

If all of this detail confuses you, here is the short version: The Gallup Likely voter model, as applied to the last two USA Today/Gallup polls, uses self-reports of past voting and interest in the election to help identify "likely voters" (in addition to questions about registration and intent to vote). In surveys conducted before the conventions, the Time/SRBI poll does not -- it uses only questions about registration and intent to vote.

Update: Although the Time/SRBI poll uses a simplified likely voter model that should produce less volatility, their sample of registered voters managed to include an even smaller percentage of 18-to-29-year-olds (9% - see QF1) than the "likely voters" in the USA Today/Gallup survey (10%) discussed earlier, and six points fewer than the self-ID'd registered voters in the Gallup survey (15%).

A Likely Story

My NationalJournal.com column for the week is now online. It revisits the nearly two-week-old USA Today/Gallup poll that showed a big difference between registered voters and those selected as "likely voters," with a focus on the age of the likely voter pool.

After you read the column, the following data may be of interest. First, notice that while the most recent poll, conducted in late July, showed a net shift of seven points between registered and likely voters, no such gap existed in the poll conducted just a month before. In mid-June, Obama led by six percentage points among both registered and likely voters.

08-07 gallup last2.png

What makes that difference interesting is the additional data generously provided by Jeff Jones of the Gallup organization showing how respondents in different age groups answered the four questions used to identify likely voters. As noted in the column, younger voters tend to score lower on all four questions. Notice that the percentage of 18-29-year-olds who said they had given "quite a lot of thought" to the election plummeted from June (60%) to July (45%). Similarly, the percentage who rated their chances of voting as a 9 or 10 on a 1-10 scale dropped ten points (from 69% to 59%).

08-06 Gallup likely questions.png

Thoughts anyone?

Update:  Nate Silver has additional thoughts.  Note that the method he describes as "the most logical way to handle" the likely voter problem is, in essence, the way the CBS/New York Times poll will model likely voters in October.  Their most recent release provides results for registered voters, but not likely voters.

Also, see the related comments we just posted from Time/SRBI pollster Mark Schulman. 

Likely Voters 2008: The Sequel

Ever had a day where everything seemed to fall on your desk at once? Today, for me, has been one of those days, though only one of those items is obvious: those seemingly contradictory numbers on the presidential race from the Gallup organization. As blogged yesterday, the most important differences are the result of Gallup's well-known (and often controversial) "likely voter" model.

If you are a long-time reader of my old blog, Mystery Pollster, you will remember that over the final weeks of the 2004 campaign, I did a seven-part review of how pollsters select likely voters, including an explanation of the Gallup model and a review of criticism of it. Most of the issues reviewed then are relevant now, but in the context of this latest controversy, let's consider (a) why pollsters try to identify "likely voters," (b) the approach Gallup and USA Today took this week and (c) some thoughts about what it all means about the state of the race.

Why Screen for Likely Voters?

Four years ago, according to the website maintained by Professor (and frequent Pollster commenter) Michael McDonald, 122 million Americans cast a ballot for president, which amounts to a turnout of 60% of the eligible adults in the United States (a significant increase from 54% in 2000).

A pre-election survey of all adults that made no effort to identify likely voters would have included the 40% who did not vote, and -- as should be obvious -- those extra interviews create the potential for error in the results, since non-voters might have different preferences than actual voters. So all pollsters care about trying to identify the likely electorate.

But there is a big problem: Simply asking respondents whether they plan to vote does not work. Many more Americans will report they are likely to vote, or will claim they have voted in the past, than actually do.

However, when the Pew Research Center conducted their final survey before Election Day 2004 (interviewing 2,804 adults, October 27-30), they found the following:

  • 83% of adults said they were registered to vote (excluding the tiny percentage who live in North Dakota, the one state without voter registration)
  • 71% of adults said they had already voted (5%) or rated their likelihood of voting as 10 -- "definitely will vote" -- on a 1-10 scale (66%).
  • 68% of adults said they vote "always" (51%) or "nearly always" (17%)

So screening for just self-identified registered voters is a good idea, but would still include roughly 20% non-voters. And simple questions about past voting or vote intent would greatly overstate the size of the electorate. As such, the Pew Center and most other media pollsters used various indirect techniques (with some success) to screen for or otherwise "model" the likely electorate.
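The overstatement problem can be put in simple numbers. Here is a small sketch using the Pew percentages quoted above and the 60% turnout figure cited earlier in this post; every self-report screen, taken alone, still passes more adults than actually voted.

```python
# Comparing self-report screens (Pew, October 2004) to actual 2004 turnout.

actual_turnout = 0.60  # share of eligible adults who voted in 2004

screens = {
    "self-reported registered":          0.83,
    "already voted or 10-of-10 intent":  0.71,
    "votes 'always' or 'nearly always'": 0.68,
}

for name, share in screens.items():
    excess = share - actual_turnout
    print(f"{name}: {share:.0%} of adults pass, {excess:+.0%} vs. turnout")
```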

How Does Gallup Do it Now?

As explained in an article posted yesterday by Gallup's Frank Newport, the USA Today/Gallup poll has been using a three-question scale to identify likely voters on the nine surveys they have conducted so far this year that asked a presidential vote question. Actually, that should be four questions, as they first ask adults if they are registered to vote, then ask:

1. How much thought have you given to the upcoming election for president -- quite a lot, or only a little? (quite a lot or volunteer "some" = 1 point)

2. How often would you say you vote -- always, nearly always, part of the time, or seldom? (always or nearly always = 1 point)

3. Do you, yourself, plan to vote in the presidential election this November, or not? ("yes" = 1 point)

They award points as noted above to those who give responses that have been shown to correlate strongly, in past elections, with actual turnout. How do they know? They have conducted studies in past elections where they checked the voter registration rolls to see which respondents actually voted and which did not (how long ago? - I'm not sure if Gallup has disclosed that).

When they scored the three questions, they found that 56% of adults scored a perfect 3, answering all three questions as a highly likely voter would. The next category -- those scoring 2 out of 3 -- amounted to another 17% of adults, which would add up to 73%. But Gallup wanted their likely voter tabulations to "model" a turnout of 60% of adults, so they weighted down the "2s" (those getting 2 out of 3 points) to a little less than one third of their original value.

I am leaving out a few details (involving Gallup's standard demographic weighting) but that is the gist of it: Likely voters are the 3s on their likely voter scale plus the 2s weighted down to roughly a third of their original value.
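The arithmetic behind that weighting step can be checked directly. This is my assumed back-of-the-envelope version, not Gallup's published procedure: with 56% of adults scoring 3, 17% scoring 2, and a 60% turnout target, the weight w on the 2s solves 56 + 17w = 60.

```python
# Solving for the weight Gallup would apply to the "2s" on these figures.

score3_share = 56.0    # percent of adults scoring 3 of 3
score2_share = 17.0    # percent of adults scoring 2 of 3
target = 60.0          # percent of adults the model should represent

w = (target - score3_share) / score2_share
print(f"weight applied to the 2s: {w:.2f}")
```

On these rounded published percentages, w comes out to roughly 0.24 -- under a third of full weight, broadly in line with the post's "a little less than one third" once rounding of the 56% and 17% figures (and Gallup's demographic weighting) is allowed for.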

Gallup will use more or less the same procedure in the surveys they conduct in September and October, except that they will add four more questions to the scale (involving past voting behavior, knowledge of their polling place and a ten-point scale to rate vote intent).

What are the problems here? First, as should be obvious, this is not the most precise method of identifying a true likely voter. If you are not yet registered, you are not included. If you registered this year for the first time and respond honestly that you have never voted before, your preferences are weighted down by a factor of 2 as compared to other voters or thrown out altogether if you say you are not paying much attention to the campaign.

Second, as Robert Erikson and his colleagues reviewed in the pages of Public Opinion Quarterly four years ago, the classic 7-question Gallup model "exaggerates" reported volatility in ways that are "not due to actual voter shifts in preference but rather to changes in the composition of Gallup's likely voter pool" (also summarized here).

Third, as Mike McDonald points out in a comment earlier today, a higher than ever turnout will challenge these models and their assumptions. Other surveys continue to show a huge Democratic advantage on measures of supporter "enthusiasm" for the two candidates. Those measures have not previously been included in the Gallup-style model, but they may be important this year.

Fourth, and this is the really important one, no one knows how accurate this technique is in terms of predicting turnout in November based on an application to survey data gathered in July. We have a lot of evidence that the Gallup-style "cutoff model," clunky as it may seem, does make surveys more accurate when applied to data collected the week before the election. But I have yet to see any comparable evidence regarding data collected in July.

So I tend to agree with Gallup's Frank Newport when he told Jill Lawrence yesterday that "'registered voters are much more important at the moment,' because Election Day is still 100 days away." For now, the poll of self-identified registered voters may be too broad a representation of the likely electorate, but at least it allows for a consistent measurement. Looking at vote preference among typically higher-turnout subgroups is useful, analytically, but may or may not improve our conception of where the race stands.

So What Do These Results Say About Where the Race Stands?

First, to put the question as several readers did in emails over the last 24 hours: Which poll or approach is the most accurate right now? Listen closely now: We. Don't. Know.

If the election were being held today, past evidence would argue for placing more trust in the Gallup "likely voter" model than in the preferences of registered voters. But the election is not today, and I am not convinced that any pollster has a monopoly on wisdom when it comes to predicting turnout 100 days out.

As Brian Schaffner (our new contributor) reminds us, if we look at all the recent polls, and not just one, we can still say with considerable confidence that Barack Obama is ahead. The precise margin probably depends on what assumptions one makes about turnout, which is more art than science at this point. However, as progressive blogger Chris Bowers has been pointing out lately, it is far better to be ahead than behind.

Having said that, we should not discount that two recent polls -- USA Today/Gallup and ABC/Washington Post -- show McCain doing better when the classic Gallup "likely voter" model is applied. What is truly interesting about that finding is that the opposite was true on six of seven surveys that Gallup conducted from January to May: Obama did slightly better among "likely voters" (defined as they were above) than among registered voters.

I have a theory (that someone at Gallup can probably test empirically): What changed is that the Democratic primaries ended. From February to June, Republicans who usually vote had a perfectly good reason to say they were paying "only a little attention" to the presidential campaign. All of the news was about the Obama-Clinton race. Now that the media has started to focus on the McCain-Obama contest, Republicans have greater reason to be engaged. At least, that is something worth checking.

Also, finally, consider something from the perspective of a no-longer-practicing campaign pollster: Campaigns matter. So I am less concerned at this stage about "projections" that predict the outcome than in understanding what each campaign needs to accomplish to win. If the "likely voter" pattern evident in the recent USA Today/Gallup and Washington Post/ABC polls is accurate, it tells us what the Obama campaign needs to do to win this election: They need to mobilize Americans who are ready to support Obama but who do not typically vote. That comes through loud and clear.

**PS - My conclusions above raise an obvious question: If registered voters are a better subgroup to watch, why does Pollster.com use the likely voter numbers on our tables and charts? I will blog on that highly pertinent question next, I promise.

[Typo corrected.  I know it's "likely voter" season because I'm misspelling Erikson again].

Obama's Overseas Screen Test

Note: We are pleased to add Republican pollster Steve Lombardo, the president and CEO of Lombardo Consulting, as a regular contributor. His weekly email update, the LCG Election Monitor, is well known to political journalists and insiders as a source of straight-shooting analysis of political poll trends. Starting today, the LCG monitor will also be published every week right here on Pollster.com.

It is often difficult to accurately assess the electoral impact of events during a campaign - especially those that occur more than 3 months prior to Election Day. But in the case of Obama's overseas trip, I think we can mark this down as a substantial tactical and strategic victory.

First, as I have said before - in the words of my friend and colleague, the late Mike Deaver - elections are about impressions. And this trip (and the accompanying coverage and photos) has created an impression of Barack Obama as an engaged, serious and strong person. Second, the trip serves to negate the preexisting notion that Obama is not up for the job of President. While it likely has not completely reversed the "inexperienced" impression, the trip has begun the process. Time will tell if other moments can serve Obama in the same way. The campaign will be looking for them, to be sure.

From a micro perspective Obama has swamped McCain in terms of positive media coverage, driven largely by this overseas trip. Media reports have, to this point, been almost uniformly glowing. This has been helped along, of course, by comments from Iraqi Prime Minister Maliki, which seemed to support Obama's plans for a withdrawal of U.S. troops from Iraq.

Yes, there has been some criticism that this is a media stunt, but the vast majority of the coverage has been positive, suggesting that this was a sound strategy. Our sense is that when most Americans turn on their televisions, visit their favorite websites or open up their newspapers and see Obama sitting down with foreign leaders and chatting with American soldiers, most of them will say: "Sure, he looks presidential." In the end, that's all that matters.

John McCain has been hammering away at Obama on the stump and in this ad. This, too, is a pretty good strategy: trying to move the conversation away from whether Obama supported the war to whether he supported the surge. Obviously, Obama is vulnerable here. He stated that the surge would be counterproductive, and this line of attack serves to underscore the idea that he is not ready for the job. But this somewhat narrow approach may be obscured by events abroad (Afghanistan, Iran) and at home (gas prices, the economy). Remember that the economy is by far the number one issue in the country right now. Obama only needs to be in the ballpark with McCain on handling Iraq; if he dominates on the issue of the economy, he wins.

As we said in our last Election Monitor, this campaign will be a referendum on Barack Obama. If the American public comes to the conclusion that he can be an effective commander-in-chief - basically, if they become comfortable with the idea of him as President - then he should win the race. But the American public isn't there yet; the one area where Obama still trails McCain is on this key question of leadership and whether he has the "experience" to be president. This is obviously something that the Obama Iraq trip is designed to address. Our sense is that it is working; the question is whether the leadership "bounce" that Obama gets from the trip can be sustained.

Electoral Vote Projection Map

Our electoral vote map has not changed in the last two weeks. To this point, nothing has fundamentally altered the race, either nationally or in any key states. We will have to wait for next week's batch of polling data to see if Obama's overseas trip has any quantifiable impact on the race.

LCG electoral vote map 2008-7-22.png

However, there is some new polling data that does confirm a couple of our earlier predictions, as well as hint at one of the LCG Big Ten moving into the Obama column:

  1. Michigan (Toss-up). The upper Midwest is clearly the Obama campaign's center of gravity. With his campaign headquarters and personal and political roots in Chicago, he has taken the sensible strategy of making strong plays for Iowa (which was won by less than 1% of the vote in both 2000 and 2004) and Michigan, a state that went Gore +5.2, Kerry +3.4. Horserace polling in Michigan has consistently shown Obama and McCain within the margin of error. However, the three most recent polls in Michigan (Rasmussen, Quinnipiac/WSJ/WP and PPP) show an average of Obama +8. If this recent bounce continues, we may have to move Michigan into the Obama column.

  michigan 7-22.PNG

  2. Iowa (Obama). We debated putting Iowa--a state that Bush won in 2004--in the Obama column so early, but every publicly released poll conducted in Iowa since the end of 2006 has shown Obama leading McCain, and now a new poll confirms a significant Obama advantage. A Rasmussen survey of 500 likely voters has Obama at a comfortable +10.
  3. North Carolina (Toss-up). As we mentioned in our initial comments on this electoral map, the fact that a state Bush won by at least 12 points in both 2000 and 2004 is now a toss-up underlines the enormous structural advantage the Democratic Party has this year. We still think that McCain is likely to win this state; nevertheless, three new surveys (Rasmussen, SurveyUSA and PPP) show an average lead of just 3-4 points for McCain, and we will continue to treat this as a toss-up until something changes.

  NC 7-22.PNG

The Independent Vote

Just one more note before we go. So much has been made of the Independent vote that we decided to take a look at it, both in terms of how Independents are trending in 2008 and how that compares with previous elections. The chart below makes it clear that structural changes and disaffection with the current administration haven't translated into increased support for Obama--yet. For all the talk of Bush's base-pandering and Obama's popularity among swing voters, the middle is being split between the two candidates, and it's been that way for the last eight years. For historical perspective, the small edge Obama currently enjoys is nothing compared to the huge Independent support garnered by Ronald Reagan and George H.W. Bush.

Ind 7-22.PNG

However, our sense is that McCain is doing better with likely voters and therefore, to win, Obama will need to open up a 4-7 point lead with Independents (think Clinton in '92 and '96).

We will be back again next week. Thanks to Pete Ventimiglia and John Zirinsky for their insights.

Comment of the Day

Posted by "hobetoo" in response to my post on likely voter models and what effect they may be having this year:

On the possible effect of the enthusiasm gap on the representativeness of polls using registered vs. likely voter screens, I would suggest the following point for consideration.

If candidate A is generating a lot more enthusiasm among his supporters than Candidate B is among his own supporters, then it also seems likely that candidate A's supporters would be more likely to participate in polls. Rather than being underrepresented, then, Candidate A's supporters would perhaps be more likely to be overrepresented than Candidate B's. (I'm thinking of John Brehm's argument that participation in polls is akin to participating in politics, and so the same factors that predispose people to vote are likely to predispose them to consent to an interview.)

"Likely Voters" and 2008

TNR's Noam Scheiber wonders whether national polls that report on the preferences of "registered voters" might "understate the support of the candidate with the enthusiasm on his side--Obama in this case" as compared to state level surveys that are typically reporting on the preferences of "likely voters."

He sees some suggestive evidence in the apparent enthusiasm gap identified in the ABC/Post poll (as per today's Post article):

But [McCain] starts that campaign with several deficits, including an enthusiasm gap. A majority of voters, 55 percent, said they are enthusiastic about Obama's candidacy, while 42 percent said the same for McCain. Three times as many said they are "very enthusiastic" about Obama as said so about McCain.

Even among McCain and Obama supporters, there is a clear difference in interest.

Ninety-one percent of Obama's supporters are enthusiastic about his candidacy, including 54 percent who are very enthusiastic. Fewer of McCain's backers are as ardent: 73 percent are enthusiastic about his run, but just 17 percent are very much so. There appears to be some leftover animosity toward him on the right. Overall, 13 percent of conservatives are very enthusiastic about McCain, compared with nearly half of liberals who feel as strongly about Obama.

The theory that Obama's enthusiasm advantage may translate into a turnout edge is intriguing but difficult to prove with the data we have available right now. The main reason is that this far from an election, the process of identifying true "likely voters" is a sketchy exercise at best.

True, media pollsters have spent decades developing likely voter "models" to identify the true electorate, but most of that research identifies characteristics shown to predict turnout a few weeks before the election (for background, see my blogging on this topic from October 2004). The most elaborate approaches, like the classic Gallup likely voter model, use self-reported registration, intent to vote, past vote history, interest in the campaign and knowledge of voting procedures to score each respondent's probability of voting. They then separate likely voters from less likely voters (or weight the most likely more heavily than the least likely) based on their assumptions about the level of turnout.

The basis for these models is a set of validation studies that measure how well these variables predict turnout, and almost all were conducted in the final weeks of the campaign, not in June. We also have other evidence -- most notably a 2004 POQ article by Robert Erikson and his colleagues -- showing that the Gallup model may introduce too much volatility into the survey results before October. As a result, most national pollsters report on registered voters until the fall.
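The cutoff logic described above can be sketched in a few lines of code. To be clear, the question battery and the scoring below are illustrative assumptions, not Gallup's actual (and more elaborate) model; the point is only the mechanic of ranking respondents by turnout indicators and keeping a slice sized to an assumed turnout level.

```python
# Hypothetical sketch of a cutoff-style "likely voter" model.
# The items and equal point values are assumptions for illustration.

def likely_voter_score(respondent):
    """Score a respondent 0-7 on self-reported turnout indicators."""
    items = [
        respondent["registered"],           # says they are registered
        respondent["intends_to_vote"],      # intends to vote in November
        respondent["voted_last_election"],  # reports voting last time
        respondent["knows_polling_place"],  # knows where to vote
        respondent["high_interest"],        # follows the campaign closely
        respondent["thought_given"],        # has given the election thought
        respondent["always_votes"],         # says they always vote
    ]
    return sum(bool(x) for x in items)

def screen_likely_voters(sample, expected_turnout):
    """Keep the top-scoring slice of the sample matching assumed turnout."""
    ranked = sorted(sample, key=likely_voter_score, reverse=True)
    cutoff = int(len(ranked) * expected_turnout)
    return ranked[:cutoff]
```

Note that the assumed turnout level does real work here: set it at 50% and you keep half the sample; set it at 30% and a different, more engaged electorate emerges. That assumption is exactly what the validation studies mentioned above were calibrated against -- in the final weeks of a campaign, not in June.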

With those warnings in mind, we do have Gallup data for both registered and "likely" voters (using their traditional model) for the seven surveys they have conducted so far during 2008 in partnership with USA Today. I copied those into the table below. They show a very slight pattern supporting Scheiber's theory. Obama did a point or two better among likely voters (but never more than that) on six of the seven surveys. Averaged across all seven surveys, however, this "effect" works out to just four tenths of a percentage point.


Back in 2004 we typically saw the reverse pattern: Bush did slightly better against Kerry among "likely voters" under the Gallup-style model than among all registered voters.

Unfortunately, we know little about the "likely voter" models used by most state level polls, as pollsters tend to divulge few details about their methods. However, those that have shared details typically use relatively simple screens: registered voters who say they are likely to vote in November. Since virtually all self-described registered voters say they are likely to vote, these "likely voter" screens are functionally not much different from the registered voter results we are seeing in national surveys (this conclusion does not extend to the handful of state level polls using list samples to select those with past voting history, but that is another topic altogether).

Re: 46-45 Plus or Minus 3

Update: In the comments, Chris G argues that I am "way off" to conclude that "there has been far more stability than change in the national Obama-Clinton vote preference since Super Tuesday." He writes:

[T]hat simply does not follow from the simulations. the only thing that can be inferred is that if we're looking at these 2 time series alone, any meaningful changes in support are swamped by the noise. that's all we can conclude.

Since I may have been unclear, let me try to clarify: I am not arguing that the Gallup Daily and Rasmussen Reports tracking data prove the complete absence of change in candidate preference since Super Tuesday. Chris is absolutely right: No survey can do that. The best we can do is conclude that any changes have been too small to detect with confidence.

The point I was trying to make is that the changes since Super Tuesday have been (a) short lived, (b) small enough that they are indistinguishable from random noise, or (c) both. I do not consider changes of that sort to be very meaningful substantively, though your definition of "meaningful" may differ.

I am also not arguing that we should ignore the Gallup Daily. We just need to be patient and wait to see big, persistent changes. Look back at the numbers they reported in January through early February and you can see a very large, sustained and meaningful trend toward Obama:


In the midst of writing this update, I discovered that Gallup's Frank Newport made essentially the same point in his daily video report today:

As I look at the Gallup Daily election tracking, I am struck by the fact that neither candidate, Hillary Clinton or Barack Obama has been able to move ahead to a sustained and significant lead over the other [emphasis added].

Polling a Semi-Open Primary as Closed

The Columbus Dispatch released a mail-in survey of registered Democrats and Republicans in Ohio this morning. We have chosen not to include that survey in our chart for the Ohio primary because the Dispatch made the odd choice of sampling only registered Democrats and Republicans in a semi-open primary that allows non-partisan registrants to participate. We did briefly and inadvertently include the poll in our chart earlier this afternoon, but have removed it.

The Columbus Dispatch has long conducted pre-election polls by mail, but our issue with this particular survey is unrelated to its mode. The Dispatch sends out poll "ballots" to voters randomly selected from Ohio's list of registered voters. This method has been surprisingly accurate in general elections since 1980, something I wrote about approvingly in October 2004. On the other hand, the Dispatch poll produced a disastrous result on a set of ballot initiatives in 2005, owing partly to some deviations from their usual methodology, such as not replicating the exact ballot language, including an undecided option and fielding the survey a week earlier than usual.

However, in this case, the key issue is that the Dispatch sampled only registered partisans, that is, voters with some previous history of voting in primaries. Why does that matter?

Ohio has a "semi-open" primary. The state has no formal "party registration," in that voters do not choose a party when they register to vote. However, those who vote in primaries have their party affiliation recorded in the voter lists. Those who have previously voted in a primary and want to switch their party affiliation can do so by filling out a form on primary day (or when they request an absentee ballot). But those who have never voted in a primary before [and are registered to vote] -- those considered "non-partisan" by the registrar of voters -- can opt to participate in any primary simply by showing up on Election Day (for more details, see the blog post by Pollster reader Tom Fox).

As of 2006, Ohio had 7.6 million registered voters, but only 2.4 million voted in the primary election (of either party) in 2004. Slightly more, 2.5 million, voted in the Ohio primary in 2000, and the turnouts in off-year primaries are lower.

As such, the majority of Ohio's registered voters do not participate in primaries and are, therefore, registered as "non-partisan" but yet still fully eligible to participate in Tuesday's primary. The current voter file maintained by Voter Contact Services (a political list vendor) includes 7.9 million registered voters of whom 20% are "registered Democrats," 19% are registered Republicans and 60% are non-affiliated.

Keep in mind that "party registration" in Ohio is very different from the self-reported "party identification" that most surveys measure. While 60% are unaffiliated on the voter lists, a recent SurveyUSA poll of registered voters finds only 23% identifying as "independent" (while 44% identify as Democrats and 29% as Republicans).

Typically, most primary voters in Ohio have voted in primaries before. So the choice by the Dispatch to sample only those with previous primary history may have been appropriate for typical off-year primaries. The 2008 primary will be anything but typical, however, and their decision to exclude non-partisan voters from their sample is questionable.

Ohio's Secretary of State Jennifer Brunner is predicting that 52% of Ohio's registered voters will participate this week, a level that the Associated Press appropriately described as "incredibly high." They also reported that Brunner cited as evidence the early requests for absentee ballots and the experience of other states this year.

As the Dispatch observed, if Brunner is right and this week's turnout hits 4 million, it would mean that "well more than a quarter of Ohio's 5 million-plus nonpartisan voters will vote" in the primary. That means that roughly 30% of the voters in the two primaries would be unaffiliated. Presumably, given the interest in the Obama-Clinton race, the percentage of non-affiliated voters in the Democratic primary would be even higher.
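The arithmetic behind that roughly-30% figure is worth checking against the voter-file numbers cited above. A quick back-of-the-envelope sketch (the file shares are approximate, and the 0.25 multiplier simply restates the Dispatch's "more than a quarter" floor):

```python
# Back-of-the-envelope check using the figures cited in this post.

registered = 7_900_000
partisan_share = 0.20 + 0.19              # registered D + R on the voter file
partisans = registered * partisan_share   # ~3.1 million
nonpartisans = registered - partisans     # ~4.8M ("5 million-plus" per the Dispatch)

turnout = 4_000_000                       # Brunner's ~52% projection

# Lower bound: even if every registered partisan turned out, the remaining
# ballots would have to come from nonpartisans (~23% of the electorate).
floor = (turnout - partisans) / turnout

# The Dispatch's "more than a quarter" of 5 million-plus nonpartisans
# works out to at least ~31% of a 4 million-vote electorate,
# consistent with the roughly 30% share discussed above.
dispatch_estimate = (0.25 * 5_000_000) / turnout
```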

Of course, the percentage of unaffiliated voters that turn out on Tuesday is a matter of speculation. However, since the Dispatch sample design entirely excludes this potentially critical category of voters from its sample, we are not including it in our trend chart.

Re: SurveyUSA Texas

A few comments on our post about the new SurveyUSA Texas poll raised two questions worthy of further discussion.

First, reader s.b. notes:

[W]ith an automated survey, if its in English, they aren't sampling spanish only or mostly spanish speakers. I think it skews these results.

Some pollsters (such as Gallup) offer voters the opportunity to complete the survey in Spanish when they encounter Spanish speaking respondents. Most pollsters, however, will simply end the interview in these instances. I asked SurveyUSA's Jay Leve about their procedure in Texas and he notes that while they do have the facility to offer respondents the option to complete a survey in either English or Spanish (and have done so in mayoral elections in New York and Los Angeles and some congressional districts), they did not offer a Spanish interview for their Texas poll.

However, before leaping to conclusions about the SurveyUSA results, keep in mind that only one of the other Texas pollsters reports using bilingual interviewing for any of their surveys [Correction: interviews for the Washington Post/ABC News poll "were conducted in English and Spanish"]. Three of the other pollsters -- Rasmussen Reports, PPP and IVR polls -- also interview with an automated methodology rather than live interviewers.

And before leaping to conclusions about all the Texas polls, we might want to know just how many Latino voters in Texas speak only Spanish. I have not done survey work in Texas, but my memory from conversations with pollsters that do is that the percentage that will actually complete an interview in Spanish when offered is typically in the low single digits.

Second, several commenters have speculated about the small changes in the demographic composition of the last two SurveyUSA Texas polls. For example, "Mike in CA" points out:

Hispanic turnout at 28% sounds just about right. The last SUSA survey had it at 32% which was way too high. It seems SUSA has scaled back their Hispanic estimates, so they must have a reason. Additionally, the boosted AA to 23%, from 18%. Seems reasonable considering the extraordinary increases in early voting turnout from Houston and Dallas [emphasis added].

That's not quite right. Keep in mind that SurveyUSA's approach to likely voter modeling is comparable to that used by Iowa's Ann Selzer, in that they do not make arbitrary assumptions about the demographic composition of the likely electorate. As SurveyUSA's Jay Leve explains, they "weight the overall universe of Texas adults to U.S. census" demographic estimates, then they select "likely voters" based on screen questions and allow their demographics to "fall where they may." So some of the demographic variation from survey to survey is random, but large and statistically significant variation should reflect real changes in the relative enthusiasm of voters. Leve goes into more detail in the email reproduced after the jump, which also includes the full text of the questions they use to select likely voters.
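The weight-then-screen approach Leve describes can be sketched as follows. The census targets, racial categories and the single screen question below are assumptions for illustration, not SurveyUSA's actual targets or battery; the point is the order of operations: weight all adults to census first, screen second, and let the likely-voter demographics fall where they may.

```python
# Illustrative sketch of a weight-then-screen likely voter design.
# Targets and the screen field are hypothetical.

from collections import defaultdict

CENSUS_TARGETS = {"white": 0.50, "hispanic": 0.32, "black": 0.12, "other": 0.06}

def weight_to_census(adults):
    """Give each adult a weight so the full sample matches census shares."""
    counts = defaultdict(int)
    for a in adults:
        counts[a["race"]] += 1
    n = len(adults)
    for a in adults:
        a["weight"] = CENSUS_TARGETS[a["race"]] / (counts[a["race"]] / n)
    return adults

def likely_voter_composition(adults):
    """Screen on the turnout question, then report the weighted racial mix."""
    lv = [a for a in weight_to_census(adults) if a["certain_to_vote"]]
    total = sum(a["weight"] for a in lv)
    mix = defaultdict(float)
    for a in lv:
        mix[a["race"]] += a["weight"] / total
    return dict(mix)
```

Under this design, a shift in a group's share of likely voters from one survey to the next reflects a change in how many of that group pass the screen, not a change in the pollster's assumptions -- which is exactly why Leve argues that large swings signal real movement in relative enthusiasm.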

Continue reading "Re: SurveyUSA Texas"

Why So Much Volatility in Texas?

Not surprisingly, the three new Texas polls we posted yesterday provoked quite a bit of discussion. We have three polls showing very different results for the Democrats, but much more consistency for the Republicans. How can that be?

First, a quick summary: A survey sponsored by the Texas Credit Union League and conducted by two campaign pollsters, Hamilton Campaigns (D) and Public Opinion Strategies (R) has Clinton leading Obama by eight points (49% to 41%). A new automated survey from Rasmussen Reports has Clinton leading by sixteen (54% to 38%) and a new survey from American Research Group (ARG) shows Obama leading by six (48% to 42%). The Republican results are far more consistent, showing John McCain leading Mike Huckabee by margins of four to eight points.

One likely reason for much of the apparent "volatility" in the Democratic results is that the Obama-Clinton vote preference shows large variation on five critical variables: race and ethnicity, gender, age, socio-economic status and party affiliation (percent non-Democratic on party ID). Small changes in pollster methods (such as whether they sample from a list, how they select respondents within each sampled household, what time of day they call, whether they use live interviewers or an automated methodology and how they weight their data) can produce important differences in sample composition that will in turn affect the vote preference results.
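To make the mechanics concrete: hold the subgroup preferences fixed and vary only the composition of the sample, and the topline moves on its own. The subgroup splits below are hypothetical round numbers (loosely echoing the roughly two-to-one Latino margin for Clinton seen in some of these surveys), chosen only to show how a modest shift in the racial mix can flip the apparent leader.

```python
# Toy illustration: identical subgroup preferences, different sample mixes.

def topline(composition, preferences):
    """Weighted average of subgroup vote preferences."""
    return {
        cand: sum(composition[g] * preferences[g][cand] for g in composition)
        for cand in ("clinton", "obama")
    }

# Hypothetical subgroup splits (not taken from any of the three polls).
prefs = {
    "latino": {"clinton": 0.62, "obama": 0.31},
    "black":  {"clinton": 0.15, "obama": 0.80},
    "white":  {"clinton": 0.50, "obama": 0.42},
}

# Two plausible sample compositions differing only in the Latino/black mix.
sample_a = {"latino": 0.32, "black": 0.18, "white": 0.50}
sample_b = {"latino": 0.25, "black": 0.25, "white": 0.50}
```

With these numbers, sample_a produces a small Clinton lead and sample_b a modest Obama lead, even though no individual voter's preference changed -- only the mix of who got sampled.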

Here is the data available online from the three most recent surveys (some of which was posted by our readers in comments yesterday):


Unfortunately, only the TCUL/Hamilton/POS poll provides complete information on its sample composition, although the ARG summary provides percentages for selected subgroups. From these data we can see that the TCUL survey includes slightly more Latino voters and slightly fewer African-American voters than the ARG survey. That explains a few points of the difference between them but (as noted below) not all.

The table above also includes sample composition statistics from the 2004 Texas Democratic exit poll, although the 2008 composition will likely be different. Just how different we will not know until the votes are cast, but the exit polls so far this year in other states provide some guidance. The Washington Post's Cohen and Agiesta have put up a very helpful compilation showing the demographic shifts from 2004 to 2008 in 17 states that have held primaries or caucuses so far this year. Women have made up a slightly greater share of Democratic electorates almost everywhere (averaging about a 4 percentage point gain). The percentage of 18 to 29 year olds has also increased in just about every state, up 4 points on average.

The changes in race and ethnicity have been less consistent. Most relevant to Texas are California and Arizona, the two states with the largest Latino populations. In California, the Latino contribution surged (+14), while the African American percentage was roughly constant (-1). In Arizona, the African American percentage was up far more (+6) than the Latino contribution. Cohen and Agiesta also note that the black percentage of the Democratic electorate is down slightly in two states (Florida and Virginia) where the Latino percentage increased.

The racial and ethnic composition of the three most recent surveys does not explain their different Obama-Clinton results. As the following table shows, the biggest difference among the three is that the ARG survey reports an even race among Texas Latino Democrats, while the Hamilton/POS and Rasmussen surveys give Clinton a roughly two-to-one lead, comparable to her showing in other states with large Hispanic populations.


Another factor in the "volatility" of these polls -- a factor that is next to impossible to evaluate from the data available -- is how tightly (and accurately) they screen to identify "likely voters." In 2004, the Texas Democratic primary attracted 839,231 voters, 6% of all eligible adults and 5% of all adults in the state. Democratic turnout has increased everywhere this year, nearly doubling on average in primary states (as a percentage of eligible adults) although the state-by-state patterns have varied widely. Texas is all but certain to see a big turnout boost, but just how big is anyone's guess.

The key point here is that polls may yield different results depending on how broadly or narrowly they conceive of the Texas primary electorate. Unfortunately, the degree to which they screen for "likely voters" is hidden from our view.

A Response from Gallup's Frank Newport

In response to the dialogue we've been having about the Gallup Daily tracking survey (here and here), Gallup's editor-in-chief Frank Newport sent the following response. Say what you will about Gallup, they are consistently among the most transparent and responsive of the public pollsters.

We are always glad to discuss and analyze Gallup poll data. We generally learn from the insights, comments and questions of others.

The particular reader to whom Mark spends time responding was focusing on the fact that Gallup's daily election tracking was not in exact sync with the vote totals across the 22 Super Tuesday states.

We never reported the Daily Tracking results as projective of what would happen on Super Tuesday. Had that been our intention, we would have used a strict likely voter screen. We would have made specific assumptions about what turnout would be in each state and adjusted each state accordingly. This is what we normally do when trying to predict the actual vote in a state or national election. We did not design the tracking survey methods for that purpose. The general patterns of trends among the broad sample of voters we look at are extremely important. But the exact numbers are not projections of the vote in any state or combination of states.

As we reported, candidate support levels in the Super Tuesday states were not dramatically different from the national support levels. This suggests that the momentum and trends observed nationally could be hypothesized to be reflected in the Super Tuesday states.

But for a reader to take that as a prediction by Gallup about the precise vote outcome in all Super Tuesday states (or certainly any individual state) is incorrect.

Our data suggested that among all voters across the country and in Super Tuesday states prior to Feb. 5th, Hillary Clinton had a lead over Barack Obama. Of course not all voters went to the polls -- they never do. Initial estimates are that there was only an average 30% turnout - and a turnout which varied widely across states.

The Gallup Daily election tracking uses a mild screen that filters out just those respondents who say they are not likely to vote in response to a four part question. For Republican voters in February so far that has been 16.9%. For Democratic voters it has been 13.7%. In other words, the screen leaves in more than 80% of national adults, making it functionally similar to the typical registered voter screen.

It certainly wouldn't be expected that a large sample of 80% + of all adults would mirror the actual vote total in a widely disparate group of states with on average just about 30% turnout - and with different turnout within each state. By way of example, when we retrospectively go back and look at the sample of voters from Super Tuesday States from the last five days before Super Tuesday -- screened only among those who are extremely likely to vote -- we find that the vote totals are near a tie, with Obama at 48% and Clinton at 45%.

But we didn't get into that before Super Tuesday because that was not our purpose. The purpose of the national tracking is to monitor the mood of all Democratic and all Republican voters across the country as this primary season progresses. After Jan 3rd, of course, some of these people had already voted, and that proportion continues to go up.

One of the great values of Gallup's tracking is the ability to monitor on a daily basis the changing dynamics of the campaign and to see where the momentum is. (The second value is to be able to aggregate data and look at detailed subgroup analysis). Obama had been gaining in the week or two prior to Super Tuesday to the point where he was essentially tied with Clinton among the broad sample of all voters. But then Clinton retook the momentum. Thus, we hypothesize that had the election been held on Saturday, for example, it looks like Obama would have done better than he eventually ended up doing. But we were not attempting to say what the exact vote totals would be.

[UPDATE (2/10)]: The comments left for this entry are unusually well-expressed and definitely worth a read. They have inspired a few additional thoughts of my own (delayed, admittedly, by a much needed 36 hour break):

First, we ought not pick just on Gallup. Gallup's broad approach to selecting the "voters" that get asked presidential primary questions is more or less what the other national polls do. I first wrote about this issue almost a year ago and warned about it just last week, on the eve of Super Tuesday when headlines told us of a "dramatic shift" toward Obama.

Second, I am certainly sympathetic to the nearly insurmountable challenges that would be involved in creating a combination actual (past) voter/"likely voter"/"likely caucus goer" model that would apply at the national level and somehow take into account the myriad of different rules for participation and historically varying turnout rates. It would not be at all easy.

Also, be careful what you wish for: Those who remember Gallup's daily during the 2000 election will recall that they applied their "likely voter model" to data as early as Labor Day. Critics made a strong case that while the model works well a week before the election, earlier in the campaign it introduces a lot of variation in the kinds of voters selected as "likely," much of it questionable.

Third, I agree with Mark Lindeman that there is value to Gallup's approach. "it's very interesting," he wrote, "to know what Democrats and Republicans (including leaners) around the country are thinking of "their" candidates, whether their states have already voted or not." However, I tend to agree even more with reader DTM's reaction:

[Quoting Newport] "One of the great values of Gallup's tracking is the ability to monitor on a daily basis the changing dynamics of the campaign and to see where the momentum is."

I think it is fair to say the campaigns are directed at eventually getting actual votes in caucuses and primaries, and the kind of momentum the campaigns care about is the kind of momentum that would further such an end. But given the way in which Gallup is defining "voters", the relationship between what is going on in their tracking polls and what the campaigns are actually trying to accomplish is less than clear.

And this is precisely the sort of confusion which worries me. Indeed, they seem to be more or less encouraging people to use these tracking polls for "horse race" coverage, while at the same time admitting they are not really even trying to screen for actual voters in the upcoming contests, which is what the "race" is all about.

Most people who follow the national poll numbers -- including journalists and political professionals -- treat them as if they measure the views of actual voters in party primaries or caucuses. Pollsters could do a much better job making it clear that they also include far more "leaned partisans" than are likely to actually participate in the party primaries and caucuses (regardless of what respondents claim on vote likelihood questions).

Re: Gallup Daily Vs. Super Tuesday

While I was finishing my National Journal column late yesterday afternoon, Gallup posted a longer than usual Gallup Daily update that answers most of the questions we asked here yesterday. It is a must read for those closely following the Gallup Daily numbers and other national surveys. Our readers blogged the key passages in the comments last night, but for those who missed it here are the key paragraphs:

The vote opinions of those in Gallup Daily tracking will not, of course, represent the actual vote in various states or in particular combinations of states on Election Day. One reason is that the tracking represents a broad sample of all respondents who say they are at least somewhat likely to vote, removing a small percentage who are unable to vote or not engaged in the campaign to any degree. The "not likely to vote" group is less than 20% in general (among both Republicans and Democrats), meaning that over 80% of American adults are included in the voter figures Gallup reports, making it similar to a typical "registered voter" figure.

Those who track voter turnout in various states that voted on Super Tuesday estimate that actual turnout was around 30%, and varied considerably among states. Thus, a broad sample of over 80% of American adults would not be expected to match the actual voting patterns of the much smaller group that turn out to vote in either party's primary.

There is, in fact, strong evidence in the tracking data from the days prior to Super Tuesday that Obama did significantly better when those who reported the highest likelihood of voting are isolated in the sample. Retrospectively, Gallup analysis can isolate just voters who say they are extremely likely to vote -- about 50% of the sample (this still overestimates actual turnout). The vote preferences of Democrats within that smaller slice for the five days prior to Super Tuesday (and after John Edwards left the race) show that Clinton (45%) and Obama (48%) were basically tied [emphasis added].

This finding is significant since it says something important, not just about the Gallup Daily tracking but about most of the other national surveys that ask about the Democratic primary vote preference among similarly broad samples (that overrepresent primary turnout). Back in April of last year, Open Left blogger Chris Bowers (then with MyDD) wondered whether these overly broad samples in national polls might be inflating Hillary Clinton's advantage. At the time narrower slices of national surveys -- like the one that Gallup did above -- did not support the theory. However, this new evidence, coupled with Obama's consistently better performance in lower-turnout caucuses on Tuesday, suggests that other national surveys may be overstating Clinton's advantage.

Two weeks ago I wondered again if the national screens are "tight" enough. This new evidence from Gallup suggests that if we are interested in the preferences and opinion of Democratic primary voters nationwide, they are not tight enough.

Reacting to these new findings, FlyOnTheWall, the Pollster reader whose question started this discussion, asked:

If Gallup is saying that the sample which includes 80% was wildly off the mark as a predictor of actual voting, but that the sample which included just the 50% of highly likely voters came darn close to predicting how actual voters actually vote - then why the heck don't they use the tighter screen all the time?

If they're trying to find out how all Americans feel, they shouldn't use any screen. But if they're tracking voter sentiment, then they should be screening for voters. And since a loose screen produces results that aren't predictive, and a tight screen produces those that are, I really wish they'd just use the tight screen going forward.

To report daily results based on a rolling average of "extremely likely" to vote respondents, Gallup would either need to call twice as many Americans every night or report a rolling six-day average in order to keep the sample size the same. Read my National Journal column later today (I will add a link when it's up) to get a sense for why it would be a bad idea to do daily tracking based on a smaller sample.
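The trade-off here is just sample-size arithmetic. A quick sketch (using an assumed nightly sample of 1,000 adults and a 50% "extremely likely" qualification rate, both illustrative rather than Gallup's actual figures) shows why a tighter screen forces a choice between more calls and a longer rolling window:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

nightly_interviews = 1000   # assumed nightly sample of adults
qualify_rate = 0.5          # roughly half report being "extremely likely" to vote

# A three-night rolling window of all adults vs. only the tightly screened subset
full_sample = 3 * nightly_interviews
tight_sample = int(3 * nightly_interviews * qualify_rate)

print(f"full sample:  n={full_sample}, MoE = +/-{moe(full_sample):.1%}")
print(f"tight screen: n={tight_sample}, MoE = +/-{moe(tight_sample):.1%}")

# To restore the original sample size under the tighter screen, either double
# the nightly interviews or double the window (six nights instead of three).
restored = int(6 * nightly_interviews * qualify_rate)
assert restored == full_sample
```

Halving the qualifying respondents inflates the margin of error by a factor of roughly the square root of two, which is exactly why the choices are "call twice as many" or "average twice as many nights."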

However, Fly makes a very good point. It would certainly be helpful if Gallup could report a weekly average based on just the "extremely likely" to vote respondents. Since they interview 7,000 adults a week, they are uniquely positioned to regularly compare high turnout Democrats to all the rest.

Gallup Daily vs. Super Tuesday

In response to a reader's question about the Gallup Daily survey, I left a comment last night that was not correct. It concerned the screen that Gallup applies to the results on the Democratic and Republican presidential primary contests. I had assumed, wrongly as it turns out, that Gallup reported the results for all adults nationwide who identify or lean Democratic. An alert reader noticed the word "voter" in their methodological blurb and alerted me. I emailed Gallup's Jeff Jones to check, and he kindly replied with the precise explanation of how they select the primary "voters" whose preferences they report every day:

Republicans or Republican-leaning independents who say they are extremely, very or somewhat likely to vote in their state’s primary or caucus when it is held.

Democrats or Democratic-leaning independents who say they are extremely, very or somewhat likely to vote in their state’s primary or caucus when it is held.

We [also] make provisions for those residing in states that have already held their primary or caucus – those who indicate they have already voted are considered extremely likely to vote, and those who did not vote in their state’s primary or caucus would be excluded from the base.

One important note: The screen that Jones describes is similar to what other pollsters use in statewide surveys, but it is not the more rigorous and sometimes controversial Gallup "likely voter model" that they use in general elections and used for their surveys in New Hampshire.

Back to the question from "FlyOnTheWall" that prompted this discussion:

Today's Gallup polling was done yesterday [Tuesday]. It's of likely Democratic primary voters. And it attempts to show for whom they're going to vote. Today's snapshot shows a 13-point lead for Clinton.

Only we ran this experiment on a broader basis yesterday, and found (based on the tallies of the popular vote that I've seen) less than a point separating the two candidates. And it gets worse. Gallup broke out the February 5 states a few days ago, and found that voters there were more - not less - favorably disposed to Clinton than their entire sample. So, presumably, what Gallup is telling us is that voters in February 5 states favor Clinton by some 15 points - when the voters themselves turn out to be evenly divided.

That's as an egregious an error as Zogby, from a pollster who's supposed to be a whole lot more reputable. What gives?

And that's a fair question. First, to clarify, the results released yesterday that showed the 13-point lead were based on interviews conducted from Sunday afternoon (before the Super Bowl started) through Tuesday night. While some Tuesday night respondents on the West Coast may have been aware of the results, most were not. So Fly is right to suggest that the Sunday to Tuesday window was a good time period to compare to the actual results.

Of course, Gallup did not report on the vote preference of voters in Super Tuesday states in their Sunday to Tuesday data. They did that on Monday:

Forty-nine percent of Democrats and (where eligible to participate) Democratic-leaning independents in Super Tuesday states favor Clinton for the nomination, while 44% choose Obama. This analysis is based on tracking data from Jan. 30-Feb. 3, all collected since John Edwards suspended his campaign.

But note the last sentence. They reported a result nationally on Monday from the last three nights of interviewing (Friday to Sunday) showing Clinton 4 points ahead of Obama (47% to 43%). However, the results for the Super Tuesday states were culled from interviews over the prior five nights of calling (Wednesday through Sunday). The different time period might have made a difference, although the national Clinton-Obama margin looks to have been roughly four points over the five-day period as well. Perhaps Gallup can clarify.

In the spirit of my op-ed piece this morning, the disclosure of some additional statistics from the Gallup data would help in comparing the Super Tuesday results to the Gallup Daily results of the last week or so:

  • What percentage of adults, nationally, qualified as "Democratic and Democratic-leaning voters" over the last week or so? What percentage of adults qualified as "Republican and Republican-leaning voters?" How does the combination of the two compare to the 29.1% turnout of eligible adults that Michael McDonald's invaluable primary turnout web page estimates for Super Tuesday?
  • Gallup may have to reach back more than a week to get sufficient sample sizes, but what are the same statistics when we compare Super Tuesday primary states to Super Tuesday caucus states? McDonald's data indicates the obvious -- that caucus turnout was significantly lower than primary turnout -- though I will need to crunch the data more to get an overall comparison of turnout in Democratic primaries to Democratic caucuses.
  • And while we are at it, what was the vote preference over the last five or six nights of interviewing if we looked only at primary states that voted on Super Tuesday? And how does that preference compare to the actual votes cast in just the primary states?
  • [Update - I left one out: Did "extremely likely" Democrats in Super Tuesday states differ from those just somewhat likely to vote?]

Perhaps someone at Gallup could take a crack at this. It would make a terrific Gallup Guru item, don't you think?

February 5 Polls: Four Cautions

Over the last 48 hours we have had an avalanche of new polls,** and given the discussion both in our comments section and elsewhere across the blogosphere, everyone seems unsure of what to make of the results and what they say about where things stand, especially in the Democratic presidential race. As is evident from our charts, the trends are highly favorable to both John McCain and Barack Obama, but from there things get murkier, especially in the Democratic race. Here is my sense of what the poll results tell us and what they do not.

The Republican race is easier to gauge, largely because of the "winner-take-all" rules that apply in so many Republican primaries. The National Journal's Campaign Tracker shows that more than two-thirds of the Republican delegates up for grabs tomorrow will be awarded on a winner-take-all basis either by state or congressional district or some combination of the two. As such, John McCain's roughly twenty-point leads in most of the national surveys, combined with similar margins in the winner-take-all states in the Northeast (New York, New Jersey, Connecticut and Delaware) and narrower leads elsewhere, position him to take a commanding delegate lead tomorrow night. Mitt Romney's hopes, on the other hand, ride on surpassing McCain in states like California and Missouri.

The Democratic contest is obviously much closer, although in some ways the process of selecting delegates is more straightforward. The allotment of delegates is proportional to votes in each congressional district and each state. While the rules may make for some odd outcomes in individual states (see more detailed explanations here and here), the allotment across all states should be a good reflection of the overall votes cast. While winning individual states may have symbolic value in terms of the way the media covers the results, the total delegate counts amassed across all states are what really matter.
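As a rough illustration of why proportional allotment can still produce "odd outcomes" in individual districts, here is a sketch of largest-remainder allocation. The candidate names and vote totals are hypothetical, and this simplification ignores real-world wrinkles such as the Democrats' 15% viability threshold:

```python
def allocate_delegates(votes, seats):
    """Largest-remainder (Hamilton) allocation of `seats` delegates
    in proportion to `votes`, a {candidate: vote count} mapping."""
    total = sum(votes.values())
    quotas = {c: seats * v / total for c, v in votes.items()}
    alloc = {c: int(q) for c, q in quotas.items()}          # whole-number quotas
    leftover = seats - sum(alloc.values())
    # hand any remaining delegates to the largest fractional remainders
    for c in sorted(quotas, key=lambda c: quotas[c] - alloc[c],
                    reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

# hypothetical six-delegate district with a near-even vote
print(allocate_delegates({"Clinton": 51000, "Obama": 49000}, 6))
```

In this example a 51-49 district splits its six delegates 3-3, which is exactly the kind of result that makes "winning" a state symbolically but not arithmetically important.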

So what do the polls tell us about how tomorrow's Democratic contest will translate into delegates? While Barack Obama appears to be gaining support, there are four reasons to be cautious about what the various polls are reporting [about where the race will end up -- see the clarification below]:

1) Polls are of little use in the caucus states. Roughly 13% of the Democratic delegates chosen tomorrow are from six states and one territory that hold party caucuses (Alaska, Colorado, Idaho, Kansas, Minnesota, North Dakota and American Samoa). Accurate polling in these contests is next to impossible because past turnout has been so light. Only a tiny fraction of the eligible adults in the six states participated in the Democratic caucuses in 2004 (ranging from 0.1% in Alaska to 2.2% in North Dakota).

Turnout in the February 5 caucuses is anyone's guess, and as such, pollsters have wisely stayed away. We have logged only two polls in the six caucus states fielded since December. One of these was the Minnesota Public Radio News/Humphrey Institute poll (pdf) that explicitly warned it was "not a prediction of Tuesday night's precinct caucuses" because "the interviews did not identify likely caucus participants." The second, a Mason-Dixon survey in Colorado, is now nearly two weeks old and gave no indication what percentage of Colorado adults were deemed "likely caucus goers."

2) National polls may be misleading. Given the proportional allotment of delegates across such a large number of states, the national polls may provide a reasonable assessment of where the race stands. While we have a lot of very recent national polling data showing Barack Obama gaining, we have to remember that the February 5 states may look different than those not holding contests tomorrow.

So far, I have seen only two national surveys attempt to break out results for the February 5 states, and those show contradictory results.

The report released yesterday by the Pew Research Center allows a comparison across their last three surveys of Democrats in the February 5 states to those who will vote in later primaries. In the December and January surveys, Pew showed no significant difference between these two categories of states. Now, however, Obama does slightly (though not quite significantly) better in the February 5 states. Looking at it another way, virtually all of Obama's recent gains on the Pew survey have come from the February 5 states.

02-04 Pew-2-5.png

On the other hand, the new CBS News survey, which shows the national Clinton-Obama contest deadlocked at 41% each, yields the opposite result. The CBS summary reports the following about a similarly small sample of Democratic primary voters:

The picture in the states voting on Super Tuesday is not nearly as close as the overall picture and offers some good news for Clinton. Among voters in those states, she leads Obama, 49 percent to 31 percent, with 16 percent still undecided.

As Josh Marshall points out, the entire CBS survey was based on 491 Democratic primary voters, so the subgroup of February 5 state voters may have been as small as 200 interviews.

Perhaps our friends at Gallup, who have interviewed nearly 2,200 Democrats over the last five days, can run a tabulation that helps clarify how the February 5 states compare to the rest of the nation. [Update: They did just that -- details here].

3) Are they sampling truly "likely voters?" Some national surveys, such as ABC/Washington Post and CBS, have reported the results of respondents who describe themselves as likely primary voters. Others, however, have reported on the views of registered voters or adults that identify as Democrats. While turnout is likely to be higher tomorrow than in 2004, the percentage of adults that vote in the Democratic primaries is still likely to be smaller than the percentage represented by most of these national surveys.

Here are two sets of turnout statistics to chew over. First, consider how turnout has increased in the Democratic contests held so far:

02-04 turnout so far.png

As should be obvious, turnout has increased dramatically in all the early states, even (or perhaps especially) in states that featured little or no active campaigning by the candidates. If nothing else, this pattern suggests that turnout will exceed 2004 levels in all the February 5 states.

It is still worth considering that past turnout has amounted to a relatively small percentage of eligible adults in each state. The following table shows the turnout levels from 2004 for the February 5 primary states as a percentage of all adults and of eligible adults (as reported by Michael McDonald):

02-04 turnout 2004.png

Here's the main point: Even if Democratic turnout doubles tomorrow as compared to 2004, the percentage of adults participating in the Democratic primaries will still be a fraction of the adults identified as Democrats or Democratic "primary voters" on most national polls. Do the truly "likely" voters look different than all Democratic identifiers? Are the statewide surveys doing a better job of selecting "likely voters" than the national polls? Unfortunately, we can only guess, as only a small handful of the statewide surveys report the percentage of adults that their likely voter samples represent.
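To make the "still a fraction" point concrete, here is the back-of-the-envelope comparison. The numbers below are assumed for illustration; actual 2004 turnout and poll screens varied by state and pollster:

```python
# Illustrative (assumed) numbers: 2004 Democratic primary turnout in a typical
# Feb. 5 state ran well under 10% of adults, while national polls often count
# 35-40% of adults as Democratic identifiers or "primary voters."
turnout_2004 = 0.08    # assumed: 8% of adults voted in the 2004 primary
poll_universe = 0.38   # assumed: share of adults a poll treats as Dem voters

doubled = 2 * turnout_2004
print(f"Even doubled, turnout covers {doubled:.0%} of adults, "
      f"vs. a polled universe of {poll_universe:.0%} -- "
      f"about {poll_universe / doubled:.1f}x too broad.")
```

Even under the generous doubling assumption, the polled universe is still more than twice the size of the actual electorate, so the poll's "Democratic primary voters" inevitably include many non-voters.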

4) Uncertainty remains high. If you pay attention to nothing else, remember this: As in New Hampshire, a lot of Democrats are having a hard time deciding between Hillary Clinton and Barack Obama. According to the Pew Research Center, both candidates now receive overwhelmingly positive ratings from Democrats:

  • Clinton: 80% favorable, 15% unfavorable
  • Obama: 76% favorable, 16% unfavorable

Again, as in New Hampshire, voters are expressing considerable uncertainty. In California, for example, both the Mason-Dixon and Rasmussen surveys report 29% of Democrats as either completely undecided or indicating there is still a chance they could change their mind about their preference.

This high degree of uncertainty creates the potential for a volatility that the final tracking polls may not reveal. Many voters will likely carry their sense of indecision into the voting booth, so the news and events of the next 24 hours could prove crucial.

Update: Adam's question in the comments suggests the need for a clarification. I have no doubt that support for Barack Obama has been increasing steadily over the last week. Virtually all of the surveys in all of the states are showing evidence of that trend, and as each pollster measures the same population (however it is defined), those trends are reliable. What I am urging caution about is where the Clinton-Obama contest ends up when votes are cast tomorrow. As my AAPOR colleague, Professor Robert Shapiro, put it over the weekend, "I would trust the trends but not the magnitude - [it] could be greater or less."

**If you have appreciated the constant flow of updates over the weekend, please post a thank you to the indefatigable Eric Dienstfrey for his exceptionally hard work (and for putting up with a boss who sometimes misspells his name).

Correction: The original version of this post incorrectly identified the CBS News survey as a CBS/New York Times survey.

Polls and Early Voting in Florida

One note on early voting and the Florida Republican primary. Charles Franklin’s excellent “endgame” summary shows a roughly eight point drop in Rudy Giuliani’s support since December. But the Giuliani campaign sees some hope in early voting. As Newsday reports:

Giuliani’s campaign made a case that it could win here on the back of its get-out-the-vote efforts aimed at early and absentee voters, who are expected to top 450,000 and to account for a third of the turnout.

Are polls showing Giuliani running ten to fifteen percentage points behind frontrunners Mitt Romney and John McCain missing the impact of early voting? Not likely. Of the eight organizations with recent Florida surveys, only three reported the number of Republican primary voters who said they had “already voted” at the time they were interviewed: 27% by SurveyUSA, 25% by PPP and 19% by the Suffolk University poll. All three also provided tabulations comparing early voters to those yet to cast a ballot:

01-28 florida early voting.png

The SurveyUSA and PPP results show Giuliani running a few percentage points higher among early voters, although neither difference is large enough to be statistically significant given the relatively small sample sizes involved. The Suffolk survey finds few Giuliani supporters in its even smaller subgroup of early voters, but even if you ignore the Suffolk result and treat the differences measured by SurveyUSA and PPP as statistically meaningful, they offer Giuliani little hope. Those results are also consistent with Giuliani campaign manager Mike DuHaime’s acknowledgement to Newsday that early voting “could make a difference of only a few percentage points.”
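DuHaime's "only a few percentage points" is easy to verify with a little blending arithmetic. The support figures below are assumptions for illustration, not results from any of the polls cited:

```python
def blended_support(early_share, early_pct, eday_pct):
    """A candidate's overall vote share when early voters make up
    `early_share` of turnout and the two groups split differently."""
    return early_share * early_pct + (1 - early_share) * eday_pct

# Assumed illustrative numbers: the candidate runs a few points better among
# the roughly one-third of Floridians voting early or absentee.
early, eday = 0.18, 0.14   # 18% among early voters, 14% on election day
overall = blended_support(1/3, early, eday)
print(f"overall support: {overall:.1%}")
```

A four-point edge among a third of the electorate nets out to barely more than a point overall, which is why a ten-to-fifteen-point polling deficit cannot plausibly be an early-voting artifact.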

Rudy Giuliani just appeared on Morning Joe on MSNBC and reiterated his confidence in the early vote:

I think the early voting is unprecedented in number. I believe we’ll do very well because we campaigned all during that period bringing out that early vote.

One handicap Giuliani faces is that the period of early voting covered only the last fifteen days. As Charles Franklin’s endgame graphic shows, significant erosion in Giuliani’s support had already occurred by January 1, so the potential gain from early voting is limited:

01-28 endgame giuliani.png

What about the five or six other polls whose public releases said little or nothing about how they handled early voting? As poll consumers, unfortunately, we are once again in the dark. I assume that most handled early voting as the three cited above did: Their screen questions presumably gave respondents the option to say they had already voted, and hopefully rephrased their trial heat questions to ask early voters about their preferences in the past tense. But we do not know that for sure. Given that early voters may cast a third of Florida’s ballots, disclosure of the way media polls handle early voting ought to be a no-brainer. Why don’t their media sponsors demand it?

PS: Tonight’s exit polls in Florida will include a sample of interviews conducted by telephone among early voters, and those results will be weighted with the interviews conducted at polling places today to the best estimate available of absentee ballots as a percentage of all votes cast.

Clarification: Daniel is right to point out a distinction I had missed that Florida makes between “early” and “absentee” voting:

There is an important point of clarification. Early voting *at the polls* has only been going on for the last 15 days. However, it’s my understanding that absentee voting by mail has been going on since early December. I have no data on what percentage of the 400K votes were at the polls vs by mail, nor do I know anything about the exact wording of the voter screens. My point is that “early voting” and “absentee voting” are not the same concept in Florida.

Unfortunately, the Florida Secretary of State's absentee voting page does not indicate when "absentee" voting begins. Either way, the available survey data suggest that the preferences of early and absentee voters are, at best, a few percentage points more favorable to Giuliani.

Either way, it is extremely unlikely that pollsters systematically excluded or screened out early and absentee voters. My assumption is that most accounted for early/absentee voters as SurveyUSA and PPP did but reported nothing about their procedures. The worst case would be a pollster that made no modification to its screen and trial heat questions to accommodate early or absentee voters. Under that scenario, I would imagine that those who had already voted would choose the "very likely" to vote option (or the interviewers would choose it for them), and that voters would report their actual choice as the candidate they would support "if the election were held today." So one way or another, the choices of those early voters are probably included in the surveys we have before us.

South Carolina: Why So Much Variation?

We've had quite a bit of discussion today in the comments section about the wide variation in results from the South Carolina polls. Reader Ciccina noticed some "fascinating" differences in the percentages reported as undecided, differences that led reader Joshua Bradshaw to ask, "how is it possible to have so widely different poll numbers from the same time period?" There are many important technical reasons for the variation, but they all stem from the same underlying cause: Many South Carolina voters are still uncertain, both about their choices and about whether they will vote (my colleague Charles Franklin has a separate post up this afternoon looking at South Carolina's "endgame" trends).

Take a look at the results of eight different polls released in the last few days. As Ciccina noticed, the biggest differences are in the "undecided" percentage, which varies from 1% to 36%:

01-24 SC polls.png

1) "Undecided" voters -- Obviously, the differences in the undecided percentage are about much more than the random sampling variation that gives us the so-called "margin of error," but they are surprisingly common. Differences in question wording, context, survey mode and interviewer technique can explain much of the difference. In fact, variations in the undecided percentage are usually the main sources of "house effect" differences among pollsters.

The key issue is that many voters are less than completely certain about how they will vote and will hesitate when confronted by a pollster's trial heat question. How the pollster handles that hesitation determines the percentage that ultimately get recorded as undecided.

On one extreme is the approach taken by the Clemson University Palmetto Poll. First, their trial-heat question, as reproduced in an online report, appears to prompt for "undecided" as one of the four choices. And just before the vote question, they asked another question that probably suggests to respondents that "undecided" is a common response:

Q1. Thinking about the 2008 presidential election, which of the following best describes your thoughts on this contest?

1. You have a good idea about who you will support
2. You are following the news, but have not decided
3. You are not paying much attention to the news about it
4. Don’t know, no answer

So two of the categories prime respondents with the idea that other South Carolina voters either "have not decided" or are "not paying much attention."

Most pollsters take the opposite approach. They try to word their questions, train their interviewers or structure their automated calls in a way to push voters toward expressing a preference. Most pollsters include an explicit follow-up to those who say they are uncertain, asking which way they "lean." The pollsters that typically report the lowest undecided percentages have probably trained their interviewers to push especially hard for an answer. And SurveyUSA, the pollster with the smallest undecided in South Carolina (1%), typically inserts a pause in their automated script, so that respondents have to wait several seconds before hearing they can "press 9 for undecided."

But it is probably best to focus on the underlying cause of all this variation: South Carolina voters feel a lot of uncertainty about their choice. Four of the pollsters followed up with a question about whether voters might still change their minds, and 18% to 26% said that they might. So many South Carolina Democrats -- like those in Iowa and New Hampshire before them -- are feeling uncertain about their decision. Thus, as reader Russ points out, "the last 24 hours" may count as much in South Carolina as elsewhere.

2) Interviewer or automated? - A related issue is what pollsters call the survey "mode." Do they conduct interviews with live interviewers or with an automated methodology (usually called "interactive voice response" or IVR) that uses a recording and asks respondents to answer by pressing keys on their touch-tone phones?

Three of the pollsters that released surveys over the last week (SurveyUSA, Rasmussen and PPP) use the IVR method (as does InsiderAdvantage), while the others use live interviewers. One thing to note is that the so-called "Bradley/Wilder effect" (or the "reverse" Bradley/Wilder effect - via Kaus) assumes that respondents alter or hide their preferences to avoid a sense of "social discomfort" with the interviewer. Without an interviewer, there should be little or no effect.

In this case the difference seems to be mostly about the undecided percentage, which is lower for the IVR surveys. In the most recent surveys, the three IVR pollsters report a smaller undecided percentage (7%) than the live interviewer pollsters (17%). That pattern is typical, although pollsters disagree about the reasons. Some say voters are more willing to cast a "secret ballot" without an interviewer involved, while others argue that those willing to participate in IVR polls tend to be more opinionated.

If the Bradley/Wilder effect is operating, we would expect to see it on surveys that use live interviewers, but in this case, the lack of an interviewer seems to work in Obama's favor. He leads Clinton by an average of 17 points on the IVR polls (44% to 27%, with 19% for Edwards), but by only 9 points on the interviewer surveys (37% to 28%, with 17% for Edwards).

3) What Percentage of Adults? -- Four years ago, the turnout of 289,856 South Carolina Democrats amounted to roughly 9% of the eligible adults in the state.* Turnout tomorrow will likely be higher, but how much higher is anyone's guess. Thus, selecting "likely voters" in South Carolina may not be as challenging as the Iowa or Nevada caucuses, but it comes close.

For Iowa, I spent several months requesting the information necessary to try to calculate the percentage of adults represented by each pollster. With the exception of SurveyUSA (who tell us their unweighted Democratic likely voter sample amounted to 33% of the adults they interviewed), none of the pollsters have reported incidence data.

So some of the variation in results may come from the tightness of the screen, but we have no way to know for certain.

4) List or RDD? One important related issue is the "sample frame." Three of the South Carolina pollsters (SurveyUSA, ARG and Rasmussen) typically use a random-digit dial (RDD) technique that samples from all landline phones. They have to use screen questions to select likely Democratic primary voters.

At least two (PPP and Clemson) drew samples from lists of registered voters and used the records on the lists to narrow their sampled universe to those they knew had a past history of participating in primaries.

These two methods may also contribute to different results, and pollsters debate the merits of each approach.

5) Demographics? Differences in likely voter selection methods mean that the South Carolina polls sampled different kinds of people. One of the most important characteristics is the percentage of African-Americans, which varies from 42% to 55% among the five pollsters that reported it (I extrapolated an approximate value for Rasmussen from their results-by-race crosstab).

01-24 SC AA.png

Another important difference largely hidden from view is the age composition of each sample. Only three pollsters reported an age breakdown. SurveyUSA reports 50% under the age of 50, compared to 43% on the McClatchy/MSNBC/Mason-Dixon survey. PPP had an older sample, with only 23% under the age of 45.

So the bottom line? All of these surveys indicate quite a bit of uncertainty, both about who will vote and about the preferences that their "likely voters" express. Obama appears to have an advantage, but we will not know how large until the votes are counted.

*Kevin: thank you for the edit.

How Tight is the Screen? (Redux)

My National Journal column, in which I revisit the issue of the primary voting screens used in national surveys, is now online.

Why So Few Polls in Nevada? (Redux)

One interesting footnote to the angst over New Hampshire: After email and comments on what went wrong in the Granite State, the question I've received most often over the last week is, "why so few polls in Nevada?"

Today, after a drought of more than a month, we finally have a new survey from Nevada, and will likely see another survey or two by week's end. Keep in mind that our inherently conservative trend estimator will try to essentially split the difference between the new results and the old trend.

But the answer to "why so few polls" in Nevada is something I tackled back in November. The problem is the exceptionally low turnout in past Nevada caucuses, which leaves pollsters guessing about turnout this time. Even optimistic turnout projections leave pollsters attempting to select and model a very small "likely caucus goer" universe (raising all the challenges of polling Iowa, and then some). Here are the critical statistics, as blogged in November:

In 2004, Nevada held traditional caucuses in mid-February that drew an estimated 9,000 participants (according to the Rhodes Cook Letter). That amounts to roughly one half of one percent (0.5%) of the state's voting age population at that time.
Of course, Nevada is switching to a party-run primary (the main difference being far fewer polling places). The states of Michigan and New Mexico have used a similar system that produces a higher turnout than traditional caucuses (outside Iowa) typically get, but not much higher. The 2004 Democratic turnout, as a percentage of the voting age population, was 2.2% in Michigan and 7.3% in New Mexico (both events occurred a week before Nevada but a week after the New Hampshire primary).
So who turns out this time is anyone's guess.

So, while we will have survey results for Nevada, take them with a larger-than-usual grain of salt.

Times/Bloomberg IA Poll - What % of Adults?

Here are some additional details on the new Los Angeles Times/Bloomberg poll in Iowa. The last Times/Bloomberg poll in September drew a sample of "caucus voters" that represented a much larger slice of the Iowa population than other polls. The Democratic sample represented 39% of Iowa adults, while the Republican sample represented 29% of adults. While this statistic varied greatly among pollsters, most have reported "likely caucus goer" samples representing a range of 9-17% of Iowa adults for the Democrats and 6-11% for the Republicans (see the second table in my Disclosure Project post).

For this most recent survey, the Times release did not report the percentage of adults represented by each sample, but they did provide the unweighted sample sizes for the four different Iowa subgroups they released. All four are considerably closer to the low-incidence samples reported by most of the other pollsters that have disclosed these methodological details, although even the smaller Democratic "likely caucus goer" sample (17% of adults, unweighted) appears to be on the high side of what other pollsters reported to our Disclosure Project.


I put "appears to be" in italics above because the more accurate weighted values may be different. The methodology blurb in the Times release, though unclear on the details, says they "designed" their sample to "yield greater numbers of voters and thus a larger pool of likely caucus goers for analysis." That design implies that the weighted sizes of the caucus voter and likely caucus-goer samples may be slightly smaller than the unweighted counts. I emailed a request for the weighted values and, as of this writing, have not received a response.

Update: Just received a response and added the weighted values to the table above. The weighting does bring down the size of the two "likely caucus goer" subgroups slightly, to 15% for the Democrats and 7% for the Republicans.
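To see how weighting can shrink a subgroup's apparent share, here is a toy sketch with invented numbers: if the "likely caucus goers" happen to carry below-average weights, their weighted share of the sample falls below their unweighted share.

```python
# Toy sketch with invented weights (not the Times/Bloomberg data).
def subgroup_share(weights, in_subgroup):
    """Weighted share of the full sample falling in the subgroup."""
    in_sub = sum(w for w, s in zip(weights, in_subgroup) if s)
    return in_sub / sum(weights)

# Four respondents; the two "likely caucus goers" carry below-average
# weights, so their weighted share is smaller than their unweighted share.
weights     = [0.5, 0.5, 1.5, 1.5]
likely_goer = [True, True, False, False]

unweighted_share = sum(likely_goer) / len(likely_goer)   # 0.50
weighted_share   = subgroup_share(weights, likely_goer)  # 0.25
```

The same mechanism, in milder form, is what moved the Democratic subgroup from 17% of adults unweighted to 15% weighted.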

Update 2: "So what does this mean?" Two commenters ask that question, so I obviously neglected to explain. For those interested in all the details, the complete context can be found in this section of my Disclosure Project results post. The key issue is that the previous historical highs for caucus turnout are 5.5% of adults for the Democrats in 2004 and 5.3% of adults for the Republicans in 1988. Pollsters are generally not trying to screen all the way down to a combined 11% of adults, since (a) no one knows what turnout will be next week, (b) low incidence screens cannot select truly "likely" caucus goers with precision and (c) all political surveys presumably have some non-response bias toward voters (on the theory that non-voters are less interested and are more likely to hang up).

On the other hand, I consider it highly questionable to report results representing 68% of adults as representative of "caucus voters" as the Times/Bloomberg survey did in September.

So the results above mean two things. First, the latest Times/Bloomberg surveys are a vast improvement in terms of the portion of Iowa adults they represent. Second, at least in theory, the "likely caucus goers" are the more appropriate subgroups to watch. Of course, the percentage of adults sampled is just one aspect of accurately modeling the likely electorate. The kinds of voters selected are just as important, and can vary widely across polls that screen to the same percentage of adults. See the full Disclosure project post for more details.

Iowa: Where Things Stand

Notice the deluge of polls from Iowa and New Hampshire over the last few days? It has been pretty hard to miss. We have seen six new Iowa polls in the last three days. Have we reached the point where, as one valued reader put it via email, we have "too many polls, too little meaning?" Is it time to stop watching polls altogether?

The big problem, particularly in Iowa, is the way a close race (especially for the Democrats) combines with wide variations in "likely caucus goer" methodology to thoroughly confuse everyone. And for good reason. Consider the screen shot from our Iowa Democrats chart (below), which shows the results for Obama (yellow), Clinton (purple) and Edwards (red) over the last two months (the light blue grid lines are 5 percentage points apart). Forget the lines for the moment and look at the points. They are all over the place.

12-21 cloud.png

Put another way, consider the following results from the last six Iowa polls, all fielded over the last week. The support for the candidates ranges between:

  • 24% and 30% for Clinton
  • 25% and 33% for Obama
  • 18% and 26% for Edwards
  • 6% and 20% (on the Republican side) for McCain

Some of this variation is the purely random sort that comes with doing a survey (the part that the "margin of error" quantifies), some comes from how hard each organization pushes those who are initially undecided, but a large portion also comes from how they define and select "likely caucus goers." What makes Iowa different is that this last source of variability is bigger and more consequential than for other types of polls. So if we take into account both the closeness of the Democratic race and all sources of potential poll error, we really have no idea who is truly "ahead" at this point in the race. The polls are simply too blunt an instrument, especially given all the uncertainty about who will participate.

So what should we keep in mind when looking at the new polls?

1) The fact that results vary with methodology tells us something important: For the Democrats, the nature of the turnout -- what kinds of voters show up on January 3 -- will likely determine the outcome. We can see the same thing within individual polls. As I noted earlier in the week, the ABC News analysis puts this best.

Applying tighter turnout scenarios can produce anything from a 10-point Obama lead to a 6-point Clinton edge -- evidence of the still-unsettled nature of this contest, two weeks before Iowans gather and caucus. And not only do 33 percent say there's a chance they yet may change their minds, nearly one in five say there's a "good chance" they'll do so.

2) Apples-to-oranges comparisons can be very misleading. Different methodologies can produce different results, so it's a fool's errand to directly compare, say, yesterday's ARG result to the Post/ABC poll from earlier in the week. Averaging five or six polls at a time can help reduce the purely random variation, but in this instance (to torture the metaphor), it leaves us comparing a basket of apples, oranges and pears from this week to a basket of apples, bananas and grapefruit from the week before. Put another way, notice that the last 13 polls have been done by 11 different pollsters. An average of the last six polls has only two pollsters in common (Strategic Vision and Rasmussen) with the seven polls released the previous week.

12-21 thirteen IA polls.png
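To make the basket problem concrete, here is a toy sketch (the pollster names and house-effect sizes are invented): even with true support frozen, swapping one basket of pollsters for another moves the average.

```python
# Toy sketch: pollster names and house-effect sizes are invented.
TRUE_SUPPORT = 27.0  # held constant: no real movement at all

HOUSE_EFFECTS = {"A": -2, "B": -1, "C": 0,   # last week's basket
                 "D": +2, "E": +1, "F": 0}   # this week's basket

def poll(pollster):
    # Each pollster reads true support plus its own house effect
    # (sampling noise omitted to isolate the basket problem).
    return TRUE_SUPPORT + HOUSE_EFFECTS[pollster]

week1_avg = sum(poll(p) for p in ("A", "B", "C")) / 3  # 26.0
week2_avg = sum(poll(p) for p in ("D", "E", "F")) / 3  # 28.0
# A two-point "trend" appears even though nothing actually changed:
# the basket of pollsters changed, not the voters.
```

With real polls the house effects are unknown and sampling noise comes on top, which is why a shifting mix of pollsters can masquerade as movement in the race.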

3) Apples-to-apples comparisons are safer, but unless the race shifts dramatically, they don't tell us much about day-to-day or even week-to-week variation. The table below shows how results from five of this week's polls compare to results from the same organizations in mid-to-late November. It shows a slight average gain (+3%) for Obama, with changes of less than a point for the other candidates. Interesting, but we had to look back nearly a month to get a decent, averaged comparison, and even then the direction of the change is inconsistent across individual polls.

12-21 apples to apples.png

4) Our charts illustrate most of the variation that we can "see." Some readers may be frustrated that the lines do not shift as much as a rolling average, but more often than not, the day-to-day variation is just the "noise" of pollster house effects. If we were looking at a few hundred interviews conducted every night using a constant methodology, we might be able to see more genuine day-to-day variation. Given the data available, however, our lines are showing us about as much of the real variation as we can truly "see" through the methodological clouds.

One caveat on the above, however: Professor Franklin can alter the sensitivity of those trend lines to check for any short-term shifts that may better fit the data. Requests for another "sensitivity analysis" have filled my email inbox over the last few weeks. We have heard you, and we will have another sensitivity analysis later today, and updates next week.

So what do we know? Among Republicans, Mike Huckabee has clearly seen a dramatic increase in support over the last month, and now leads nominally in eight of the last nine polls (the individual margins may not be statistically significant, but the mostly consistent direction tells us that Huckabee's advantage is most likely real). Still, Huckabee's support remains soft and the Republican ad war is turning negative. Things can still change a lot over the next two weeks.

For the Democrats, Obama has gained over the last month, but the latest round of surveys is neither consistent nor powerful enough to tell us who would win if the Iowa caucuses were held today. And obviously, with the race as close as it appears to be, changes over the next two weeks could also prove decisive.

And now the race goes "behind the dark side of the moon," as it were, given the challenges of polling between Christmas and New Year's. I will have more to say about that very soon.

NH: The Gallup "Likely Voter Model" Arrives

One quick note about the new Gallup poll from New Hampshire that we linked to a few moments ago (see Gallup's releases on the Democratic & Republican samples). As the Gallup release indicates, it is based on their well-known but sometimes controversial "likely voter model." That fact alone makes their results different from -- and not entirely comparable to -- the other New Hampshire polls we have seen.

To the extent that they have disclosed their methods, the other polls we have reported on typically use some sort of screen: They include self-reported registered voters who indicate some degree of intent to vote in either the Democratic or Republican primary. The Gallup model, whether applied here or in a general election, builds on the idea that self-reported intent to participate alone tends to overstate the true turnout. So they use other measures that tend to correlate with turnout, such as attention paid to the campaign, self-reported voting in past elections, and knowledge of the location of their polling place, to narrow the sample to a percentage approximating the likely turnout. The new survey from New Hampshire applies exactly that model.

I emailed Gallup and they kindly provided this detailed document describing the mechanics. The model uses eight questions to build an eight-point scale, on which a score of eight indicates the highest probability of voting. In the current survey, they used the following questions and awarded respondents one point for every bolded answer below:

1A. Generally speaking, how much interest would you say you have in politics -- a great deal, a fair amount, or only a little?

1B. How often would you say you vote -- always, nearly always, part of the time, or seldom?

1C. Do you happen to know where people who live in your neighborhood go to vote? (Yes or no)

1D. Have you ever voted in the polling place or ward where you now live? (Yes or no)

1E. How much thought have you given to the coming primary election for president -- a great deal, a moderate amount, not much, or none at all?

D8. Next, I'd like you to rate your chances of voting in the primary election for president on a scale of ten to one. If '10' represents a person who definitely will vote and '1' represents a person who definitely will not vote, where on this scale of ten to one would you place yourself? (7-10 or 1-6)

D9. Thinking back to the election in November of 2006, when John Lynch ran against Jim Coburn for governor of New Hampshire, did things come up that kept you from voting, or did you happen to vote in that election? (Yes or no)

D10. Please tell me whether you, yourself, ever voted in each of the following kinds of elections. How about...

A. A Republican or Democratic primary for president
B. A Republican or Democratic primary for U.S. Senator or Congressman
C. A Republican or Democratic primary for Governor
(A yes on any earns a point).

Because of the built-in penalty for those who were not old enough to have voted in previous elections, Gallup gives extra points to those aged 18-21. They also give an extra point to those who did not live in New Hampshire in 2006 but say they "always" or "nearly always" vote. [Clarification: Since Marc Ambinder quoted this paragraph, it made me (a) notice the typo, now corrected, and (b) want to point out that 18-21 year-olds get an extra point or two to the degree that they score high on the other questions. The point is that those who would otherwise earn a perfect likelihood score are not penalized because they were not old enough to have voted before.]
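A minimal sketch of how such a scale can be computed, with hypothetical field names and answer codes (the bolding that marked Gallup's scoring answers did not survive this copy, so which categories earn a point is a guess here, as is the exact size of the age adjustment):

```python
# Hypothetical sketch of an eight-point likely-voter scale; field names,
# scoring answer categories, and the age adjustment are assumptions, not
# Gallup's actual implementation.
def likely_voter_score(r):
    """Score a respondent 0-8; higher means more likely to vote."""
    score = 0
    score += r["interest"] in ("great deal", "fair amount")           # 1A
    score += r["vote_frequency"] in ("always", "nearly always")       # 1B
    score += r["knows_polling_place"]                                 # 1C (yes/no)
    score += r["voted_in_current_precinct"]                           # 1D (yes/no)
    score += r["thought_given"] in ("great deal", "moderate amount")  # 1E
    score += r["chance_of_voting"] >= 7                               # D8 (7-10)
    score += r["voted_2006_general"]                                  # D9 (yes/no)
    score += r["voted_any_prior_primary"]                             # D10 (any yes)
    # Assumed version of the age adjustment: 18-21 year-olds could not
    # have built a vote history, so give back points for the history
    # items they could not earn (capped at the scale maximum).
    if 18 <= r["age"] <= 21:
        score = min(8, score + 2)
    return score

r = {"interest": "great deal", "vote_frequency": "always",
     "knows_polling_place": True, "voted_in_current_precinct": True,
     "thought_given": "great deal", "chance_of_voting": 9,
     "voted_2006_general": True, "voted_any_prior_primary": True,
     "age": 45}
likely_voter_score(r)  # 8: highest probability of voting
```

The booleans sum as 0/1, so each qualifying answer adds a point; a 19-year-old with no vote history but perfect answers elsewhere is not stuck several points below the top of the scale.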

I'm oversimplifying a little here (see their description for full details), but Gallup then selects some combination of respondents scoring 6 or higher weighted so that their sample size matches the expected turnout. Here is how Gallup explains their assumptions about New Hampshire turnout:

Turnout has been fairly high in recent primaries, roughly 20-25% of New Hampshire adults have voted in the Democratic primary and 25% have voted in the Republican Primary.
Given the usual incidence in our polls of 40% of New Hampshire adults saying they will vote in the Democratic primary and 40% saying they will vote in the Republican primary, the typical turnout assumptions are that 50%-55% of self-reported Democratic primary voters and 60% of self-reported Republican primary voters should turnout (roughly half of New Hampshire residents). . . Given a higher proportion of Democrats scoring as likely voters this year than in previous years, the expected turnout for Democrats was increased to 60%.

I wrote at great length during the 2004 campaign about the Gallup likely voter model and the shortcomings identified by critics. The short version is that while this model produces a lot of questionable variation when applied months before an election, as well as results that differ from other likely voter models, its track record is strong when applied to the last survey before the election.

The even shorter version: The Gallup model may produce different results than other recent polls in New Hampshire. Debate which model makes the most sense -- and I know you will -- but be careful about comparisons across polls.

The Insider Advantage Crosstabs

For today's puzzle, we have two new polls in Iowa, one from the ABC News/Washington Post partnership and another from the public relations firm InsiderAdvantage. The ABC/Post poll shows both Obama (at 33%) and Clinton (at 29%) significantly ahead of John Edwards (at 20%). The InsiderAdvantage survey -- or at least the result they chose to lead with -- shows that John Edwards (with 30%) has "leapfrogged ahead" of Clinton (26%) and Obama (24%). As our friends at NBC's First Read note, conflicting results like these make it "hard to know what's right or wrong."

Before digging deeper, it is worth highlighting this point from the ABC story:

Applying tighter turnout scenarios can produce anything from a 10-point Obama lead to a 6-point Clinton edge -- evidence of the still-unsettled nature of this contest, two weeks before Iowans gather and caucus. And not only do 33 percent say there's a chance they yet may change their minds, nearly one in five say there's a "good chance" they'll do so.

However, I want to pass along some problematic details on the recent InsiderAdvantage polls. One issue is that InsiderAdvantage sometimes conducts surveys using live interviewers and sometimes using an automated interactive voice response (IVR) method (in which respondents answer by pressing buttons on their touch-tone phones), but almost never specifies which method it used in its public releases. In this case, I checked with InsiderAdvantage and they confirm that the latest Iowa surveys were done with the automated IVR method.

The second problem is potentially bigger. InsiderAdvantage typically emails us a few pages of cross-tabulations that we have sometimes posted to the site, but which they rarely post to their own site. We did not receive those crosstabulations for today's survey, perhaps because of the story I am about to share. The site RealClearPolitics has posted a more limited version for the Republican and Democratic results.

Take a look at the Democratic tab, and if you look closely, you'll see the problem: According to the crosstabs, Barack Obama gets 19.6% of the vote from men, 17.8% from women but 24.3% from all voters. Needless to say, that result is impossible, especially since they report 392 interviews conducted among men, 585 interviews among women and 977 overall (and since 392+585=977).**

We had posted the crosstabs for the InsiderAdvantage poll of Republicans in South Carolina earlier this month, but pulled them back when a reader noticed similar inconsistencies (for this posting, we have put the Democratic and Republican crosstabs back up on our server). The story of what happened next should give pause to anyone wondering how much faith to put in their surveys.

I emailed InsiderAdvantage to say that "something seems amiss" in their tabs. Mistakes happen, and I assumed I was simply reporting an error in the cross-tabulations that they would want to correct. Instead, I got some curious replies. I heard first from Matt Towery, the public face of InsiderAdvantage. He referred me to the statistician who weights their data and then offered this explanation:

We have produced many a poll that showed the male female column not seeming to "fit" with the totals. But as [the person who weights the data] will explain, the other weights applied cause the numbers to appear to "disagree" with the male female column. I can only tell you that we've used the same weighting system for going on ten years and it has rarely failed us.

Next, I heard from Gary Reese, an analyst at InsiderAdvantage, who shared his "guess" that "because of gender and age and race weightings, that may make individual cross-tabs read slightly off." The person that weights the data was not available, Reese wrote, but he would check with him and get back to me. The next day, Reese replied with a confirmation:

Was as I wrote yesterday. Multiple weightings of various demographics skew individual weightings that they don't necessarily add up to match the top line.

Now here I have to interject: I too have weighted data for many years, and this explanation is simply wrong. Either the data are weighted consistently (in a process that changes the "weight" given each respondent when the data are tabulated) or they are not. If cross-tabulations are based on weighted data, then the results in subgroups (men, women, etc) should be internally consistent with the total.
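The arithmetic behind that consistency requirement can be sketched with invented respondents: when every column is tabulated from the same weighted data, the topline is a weighted average of the subgroup figures, so it must fall between them (which is why a 24.3% topline cannot coexist with 19.6% among men and 17.8% among women).

```python
# Invented respondents: (supports_candidate, weight, gender).
def weighted_pct(values, weights):
    """Weighted percentage of respondents with value 1."""
    return 100 * sum(v * w for v, w in zip(values, weights)) / sum(weights)

data = [(1, 1.2, "M"), (0, 0.8, "M"), (1, 0.9, "F"), (0, 1.1, "F")]

men   = [(v, w) for v, w, g in data if g == "M"]
women = [(v, w) for v, w, g in data if g == "F"]

men_pct   = weighted_pct(*zip(*men))              # ~60
women_pct = weighted_pct(*zip(*women))            # ~45
total_pct = weighted_pct([v for v, _, _ in data],
                         [w for _, w, _ in data])  # ~52.5

# Because the topline is a weighted average of the subgroup figures, it
# must lie between them; a topline outside that range means the columns
# were not tabulated from the same weighted data.
assert min(men_pct, women_pct) <= total_pct <= max(men_pct, women_pct)
```

No choice of consistent weights can push the total outside the range spanned by the subgroups, which is exactly the test the InsiderAdvantage crosstabs fail.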

They gave me a number for the statistician who weights the data. I called but heard nothing back, then got caught up in our office move and other more pressing stories. I finally heard back yesterday from Jeff Shusterman, the president of Majority Opinion Research (the company that conducts the InsiderAdvantage surveys), and he confirmed what should have been obvious to Towery and Reese: Only the total column in their crosstabs is weighted. Thus, for reasons that still perplex me, they choose to leave the columns for subgroups unweighted.

Before posting this item, I went back to Towery and Shusterman and asked for an explanation of the purpose of releasing weighted values for all respondents, but unweighted results for subgroups. Here is Shusterman's answer:

The purpose of the InsiderAdvantage/Majority Opinion polls are to provide a snapshot for major media outlets of the race at the time of polling and, as the election day approaches, to accurately predict the outcome of the election for which we have a substantial record of success. This snapshot and eventual prediction are contained in the total column of the cross-tabulations, which is accurately weighted. By contrast, our polls are not conducted to advise campaigns or to provide interesting subtext for academics or bloggers, so we do not weight or place emphasis on the other banner points.

If that's the case, I am not sure I understand why they choose to run "inaccurate" cross-tabulations at all, much less send them to us and to RealClearPolitics. Readers ought to take all of this "interesting subtext" into account when trying to decide which polls to rely on (and we will save for another day the issue of what weighting up subgroups by factors of three or more does to the reported "margin of error").

Back to the issue of the conflicting results from Iowa. As we have reported, pollsters in Iowa have taken many different approaches to defining likely voters. The ABC News/Washington Post surveys have at least disclosed the demographics of their likely caucus-goers and the methods used to select them. InsiderAdvantage has not. Without more of these details, it is hard to do much more than speculate and pass on the good advice from First Read:

Look at the trends of the pollsters who have surveyed the state for multiple cycles, and be careful of pollsters who haven't polled Iowa before.

**Update: Several commenters are fixated on the footnoted paragraph above but appear to have paid little attention to the rest of this post. So to be clear: The contradictory results are "impossible" only if all of the crosstabs columns were weighted consistently, which they obviously were not. The results are also "impossible" in terms of the reality the data are supposed to represent, and that is the point. If you are ready to weight all Democratic voters to 48% black, then it makes no sense to release results for the same survey by gender where men are 10.9% black and women are 18.4% black.

Disclosure Project: Results from Iowa

It is time -- actually long past time -- to summarize the returns from the Pollster.com "Disclosure Project." Back in September I declared my intent to request disclosure of key methodological details from pollsters doing surveys in Iowa, New Hampshire, South Carolina and the nation as a whole. I sent off the first batch of requests to the Iowa pollsters, and then began a long slog, delayed both by other activity and, frankly, by a surprising degree of resistance from far too many pollsters. The result is that now, nearly three months later, I can report results from Iowa only.

I should note that many organizations (particularly ABC/Washington Post, CBS/New York Times, Los Angeles Times/Bloomberg, the Pew Research Center, Rasmussen Reports and Time/SRBI) either put much of the information into the public domain or responded within days (or hours) to my requests. With others, however, the responses were slower, incomplete or both. A few asked for more time or assured me that responses were imminent, yet ultimately never responded despite repeated requests. Sadly, such is the state of disclosure in my profession, even upon request.

So while the results described below are far from a complete review of all the polls in Iowa, they do tell a very clear story: No two Iowa pollsters select "likely caucus goers" in the same way. Moreover, each pollster has a unique conception -- sometimes radically unique -- of the likely electorate.

This post is a bit long, so it continues after the jump...

Continue reading "Disclosure Project: Results from Iowa"

Polling Nevada

I have been focusing heavily on the Iowa caucuses, both because our Disclosure Project started with polls there and because the competition, particularly on the Democratic side, is so intense. With a Democratic debate in Nevada tonight, we have two new polls of "likely voters" in the Nevada Democratic caucuses, from Zogby and CNN.** Their results are quite different, though for reasons that are probably explicable.

Both show Hillary Clinton leading, followed by Obama, Edwards and Richardson, in that order, but the percentages are very different. CNN shows Clinton leading Obama by 28 points (51% to 23%), with Edwards far behind (at 11%). Zogby shows Clinton with a narrower, 18-point lead over Obama (37% to 19%), with Edwards closer (at 15%).

The biggest obvious difference is that the CNN survey effectively pushed respondents harder for a choice: It shows only 4% with no opinion, while Zogby shows 17% as unsure. This is a very common source of variation across polls, leaving pollsters to debate which approach -- pushing for a choice or allowing uncertain voters to register their indecision -- is most appropriate when the election is still months away.

One likely contributor to that difference is that the CNN question includes the job title of each candidate ("New York Senator Hillary Clinton," "Former North Carolina Senator John Edwards"), which may frame the question a bit differently. Of course, since Zogby fails to disclose the full text of its vote question, we cannot know for certain.

But there is one other potential source of variation: How the pollster handles the expected low turnout. The CNN release tells us that they conducted 389 interviews with voters "who say they are likely to vote in the Nevada Democratic presidential caucus" out of a total sample of 2,084 adults. Thus, CNN screens rather tightly to identify a Democratic sample that represents 19% of Nevada adults. Once again, as Zogby fails to disclose it, we have no idea what portion of Nevada their sample represents (ditto for Mason-Dixon, ARG and Research 2000, the three other pollsters that have released Nevada surveys).

But at 19%, even the CNN survey may be a shot in the dark at the turnout in Nevada on January 19. In 2004, Nevada held traditional caucuses in mid-February that drew an estimated 9,000 participants (according to the Rhodes Cook Letter). That amounts to roughly one half of one percent (0.5%) of the state's voting age population at that time.

Of course, Nevada is switching to a party-run primary (the main difference being far fewer polling places). Michigan and New Mexico have used a similar system, which produces higher turnout than traditional caucuses (outside Iowa) typically get, but not much higher. The 2004 Democratic turnout, as a percentage of the voting age population, was 2.2% in Michigan and 7.3% in New Mexico (both events occurred a week before Nevada but a week after the New Hampshire primary).

So who turns out this time is anyone's guess. Will the voters sampled in these surveys bear any resemblance to those that turn out in Nevada on January 19? In size, at least, that seems very unlikely.

**Zogby has also released results for likely Republican caucus-goers. According to its release, CNN also sampled likely Republican caucus-goers, but it has not yet released those results.

A Pollster Grinch Effect?

While pondering some new poll results from Iowa last night, MyDD's Jon Singer asked some good questions:

How do you come up with a turnout model when you don't know what day the caucuses are going to be held? Specifically, does anyone actually believe that turnout for a Thursday night January 3 caucus, when many voters just won't have the time to take two hours to participate, would be the same as the turnout on a Saturday afternoon January 5 caucus, when significantly fewer voters will be working or have just gotten off of work? Might not the turnout also be different were the Democratic caucuses to be held on Tuesday night January 14, which Ben Smith says is a possibility?

It could be the case that the sentiments of voters 1 through 125,000 are not terribly different from those of voters 125,001 through 150,000 or 175,000 or 200,000. But then again, it also could be the case that those going to caucus for the first time ever or even the first time in many years are a whole lot different from those who are already pretty determined to keep up their streak of making it to the caucuses every four years.

So do we need to consider a "pollster Grinch Effect"? Does uncertainty surrounding the date of the Iowa caucuses make it even more difficult for pollsters to identify and sample "likely caucus goers"? Yes and yes, but...

While Singer is asking all the right questions, he is probably giving pollsters too much credit for our ability to divine likely caucus goers with laser-like precision, regardless of our assumptions about the level of turnout. A public opinion poll is basically a blunt instrument when it comes to "modeling" likely caucus participants. The primary measures that most public pollsters use to select likely caucus goers are self-reports of interest in the caucus and intent to participate, plus (in a few cases) self-reports of past participation. Unfortunately, respondents notoriously overstate their intent to vote. Most want to show an interest in doing their civic duty, especially when asked by a stranger on the telephone. So rather than take responses at face value, most pollsters use several different questions in combination to try to narrow their "likely voter" subgroup to some reasonable number.

A few public polls in Iowa have sampled from registered voter lists, a procedure that at least provides an accurate way to screen out non-registrants and to sort out those registered as Democrats, Republicans or with no affiliation. But as ABC's Gary Langer points out, those lists eliminate only the roughly 17% of the adult population that is either not registered or identified as "inactive" voters by Iowa's Secretary of State. The record of actual party affiliation is helpful to pollsters but not a conclusive indicator of a voter's caucus of choice, since Iowa voters can register or declare their party affiliation on caucus night.

Only one or two public Iowa polls have used actual vote history to select their respondents, and -- except for the recent polls conducted for the One Campaign -- none have used past caucus participation to select their likely caucus-goer samples.

So the bottom line is that even if we knew exactly how many voters planned to participate, modeling the likely caucus goers comes down to methodology decisions that amount to an educated guess, at best. And even then, we have very little idea how many Iowans will participate. Consider the estimated turnout from past years (from an offline source: Rhodes Cook's invaluable Race for the Presidency: Winning the 2004 Nomination):


Look closely at the contested Democratic races, 1980, 1984, 1988, 2000 and 2004. "Estimated" turnout varied enormously, from an estimated 60,000 to 124,000. And as we learned last week, some have expressed doubt about the 2004 estimate, since caucus organizers ran out of sign-in sheets and failed to record name and address information for nearly twenty thousand participants.

And finally, we have to consider that every campaign is doing everything it can to identify and, ultimately, turn out voters who are not typical caucus goers. Some are devoting literally millions of dollars to microtargeting, field staff and various forms of "voter contact" to alter the turnout in their favor.

So - before we contemplate the Grinch Effect - what level of turnout is likely in 2008? Who knows?

What this means for the polls we plot and obsess over is that they are, at best, blunt measures of voter preferences in Iowa, and no two pollsters define "likely caucus goers" alike. They do give us a decent sense of trends -- who is gaining or falling -- especially for surveys done by the same pollster using a constant methodology. However, the "point estimate" for any candidate in any one poll has a lot of room for error, the kind that has absolutely nothing to do with the statistical "margin of error."

The Pollster.com Disclosure Project

Over the last few months I have written a series of posts that examined the remarkably limited methodological information released about pre-election polls in the early presidential primary states (here, here and here, plus related items here). The gist is that these surveys often show considerable variation in the types of "likely voters" they select yet disclose little about the population they sample beyond the words "likely voter." More often than not, the pollsters release next to nothing about how tightly they screen or about the demographic composition of their primary voter samples.

Why do so many pollsters disclose so little? A few continue to cite proprietary interests. Some release their data solely through their media sponsors, which in the past limited the space or airtime available for methodological details (limits now largely moot given the Internet sites now maintained by virtually all media outlets and pollsters). And while none say so publicly, my sense is that many withhold these details to avoid the nit-picking and second guessing that inevitably comes from unhappy partisans hoping to discredit the results.

Do pollsters have an ethical obligation to report methodological details about who they sampled? Absolutely (and more on that below), and as we have learned, most will disclose these details on request as per the ethical codes of the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP). Regular readers will know that we have received prompt replies from many pollsters in response to such requests (some pertinent examples here, here, here and here).

The problem with my occasional ad hoc requests is that they arbitrarily single out particular pollsters, holding their work up to scrutiny (and potential criticism) while letting others off the hook. My post a few weeks back, for example, focused on results from Iowa polls conducted by the American Research Group (ARG) that seemed contrary to other polls. Yet as one alert reader commented, I made no mention of a recent Zogby poll with results consistent with ARG. And while tempting, speculating about details withheld from public view (as I did, incorrectly, in the first ARG post) is even less fair to the pollsters and our readers.

So I have come to this conclusion: Starting today we will begin to formally request answers to a limited but fundamental set of methodological questions for every public poll on the primary election released in, for now, a limited set of contests: Iowa, New Hampshire, South Carolina and the nation as a whole. We are starting today with requests emailed to the Iowa pollsters and will work our way through the other early states and national polls over the next few weeks, expanding to other states as our time and resources allow.

These are our questions:

  • Describe the questions or procedures used to select or define likely voters or likely caucus goers (essentially the same questions I asked of pollsters just before the 2004 general election).
  • The question that, as Gary Langer of ABC News puts it, "anyone producing a poll of 'likely voters' should be prepared to answer:" What share of the voting-age population do they represent? (The specific information will vary from poll to poll; more details on that below).
  • We will ask pollsters to provide the results of demographic questions and key attitude measures among the likely primary voter samples. In other words, what is the composition of each primary voter sample (or subgroup) in terms of gender, age, race, etc.?
  • What was the sample frame (random digit dial, registered voter list, listed telephone directory, etc)? Did the sample frame include or exclude cell phones?
  • What was the mode of interview (telephone using live interviewers, telephone using an automated, interactive voice response [IVR] methodology, in-person, Internet, mail-in)?
  • And in the few instances where pollsters do not already provide it, what was the verbatim text of the trial heat vote question or questions?

Our goal is to both collect this information and post it alongside the survey results on our poll summary pages, as a regular ongoing feature of Pollster.com. Obviously, some pollsters may choose to ignore some or all of our requests, but if they do our summary table will show it. We are starting with Iowa, followed by New Hampshire, South Carolina and the national surveys, in order to keep this task manageable and to determine the feasibility of making such requests for every survey we track.

Again, keep in mind that the ethical codes of the professional organizations of survey researchers require that pollsters adequately describe both the population they surveyed and the "sample frame" used to sample it. The Code of Ethics of the American Association for Public Opinion Research, for example, lists "certain essential information" about a poll's methodology that should be disclosed or made available whenever a survey report is released. The relevant information includes:

The exact wording of questions asked . . . A definition of the population under study, and a description of the sampling frame used to identify this population . . . A description of the sample design, giving a clear indication of the method by which the respondents were selected by the researcher . . . Sample sizes and, where appropriate, eligibility criteria [and] screening procedures.

The Principles of Disclosure of the National Council on Public Polls (NCPP) and the Code of Standards and Ethics of the Council of American Survey Research Organizations (CASRO) include very similar disclosure requirements.

We should make it clear that we could ask many more questions that might help assess the quality of the survey or help identify methodological differences that might influence the results. We are not asking, for example, about response rates, the method used to select respondents within each household, the degree to which the pollster persists with follow-up calls to unavailable respondents or the time of the day in which they conduct interviews. We have limited our requests to try to make it easier for pollsters to respond while also focusing on the issues that seem of greatest importance to the pre-primary polls.

What can you do? Frankly, we would appreciate your support. If you have a blog, please post something about the Pollster Disclosure Project and link back to this entry (and if you do, please send us an email so we can keep a list of supportive blogs). If not, we would appreciate supportive comments below. And of course, criticism or suggestions on what we might do differently are also always welcome.

(After the jump - a more exhaustive list of the questions that we will use to determine the percentage of the voting age population represented by each sample)

Continue reading "The Pollster.com Disclosure Project"

More on ARG and Iowa

Following up on yesterday's post, in which I speculated -- wrongly, as it turns out -- about the incidence of eligible adults selected by the American Research Group (ARG) as likely caucus goers for their most recent surveys of Democrats and Republicans in Iowa: I emailed Dick Bennett, and can now report on how their surveys compare to the others that have provided us with similar details.

First, according to Bennett, I was incorrect in speculating that they use only one question to screen for "likely caucus goers." They start with a random digit dial (RDD) sample of adults in Iowa in households with a working telephone and then ask four different questions (although they provide only the last question on the page reporting Iowa results):

  • They ask whether respondents are registered to vote, and whether they are registered as Democrats or Republicans. Non-registrants are terminated and not interviewed.
  • They ask registrants how likely they are to participate in the Caucus on "a 1-to-10 scale with 1 meaning definitely not participating and 10 meaning definitely participating." Those who answer 1 through 6 are terminated and not interviewed.
  • They ask unaffiliated registrants ("independents" registered as neither Democrats nor Republicans) whether they plan to participate in the Democratic or Republican caucus. Registered Democrats and independents who plan to caucus with the Democrats get the Democratic vote question; registered Republicans and independents who plan to caucus with the Republicans answer the Republican question.
  • After asking the vote question, they ask the question that appears on the web site: "Would you say that you definitely plan to participate in the 2008 Democratic presidential caucus, that you might participate in the 2008 Democratic presidential caucus, or that you will probably not participate in the 2008 Democratic presidential caucus?" Only those who say they definitely plan to participate are included in the final sample of likely caucus goers.

So the process involves calling a random sample of adults until they reach a quota of 600 interviews with voters of each party. In their most recent Iowa survey, they were able to fill the quota for Democrats first, so they continued dialing the random sample until they had interviewed 600 Republicans, terminating 155 Democrats in the process. Bennett reports that they also terminated another 4,842 adults on their various screen questions (740 who said they were not registered to vote, 3,598 who rated their likelihood of participating at 6 or lower and 504 who were less than "definite" about participating on the final question).

So, the "back of the envelope" calculation for ARG is that their most recent sample of Democrats represents 12% of Iowa adults (755 Democrats divided by 755+600+4,842). Their most recent sample of Republicans represents roughly 10% of Iowa adults (600 Republicans divided by 755+600+4,842). We can compare the Democratic statistic to those provided by other Iowa pollsters:

And again, for those just joining this discussion, the 2004 Democratic caucus turnout was reported as 122,200, which represented 5.4% of the voting age population and 5.6% of eligible adults.
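The back-of-the-envelope incidence arithmetic above can be sketched in a few lines (illustrative Python using the termination counts Bennett reported; the variable names are mine, not ARG's):

```python
# Incidence of ARG "likely caucus goers" among all adults reached,
# based on the counts Bennett reported.
democrats = 600 + 155              # quota fill plus Democrats terminated after the quota closed
republicans = 600
screened_out = 740 + 3_598 + 504   # non-registrants, low 1-to-10 ratings, not "definite"

adults_reached = democrats + republicans + screened_out   # 6,197 adults in total
print(f"Democratic incidence: {democrats / adults_reached:.0%}")    # -> 12%
print(f"Republican incidence: {republicans / adults_reached:.0%}")  # -> 10%
```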

So, if we take all of these pollsters at their word, my "blogger speculation" yesterday was off-base: ARG's incidence of Democratic likely voters as a percentage of eligible adults is very close to the surveys done by Time and ABC/Washington Post. Apologies to Bennett.

But we still have a mystery. Why the consistent difference between the result from ARG and other surveys that appears to favor Clinton? Professor Franklin is working on a post as I write that will chart the difference, but when we exclude ARG's surveys from our estimate for Iowa, Clinton's current 2 point margin over Edwards (26.2% to 24.2%) becomes a 1.3 point deficit (24.6% to 25.9%). [See Franklin's in-depth discussion, now posted here].


I asked Bennett whether he had any theories that might explain the difference. Here is his response:

Our sample size is larger and our likely voter screen is more difficult to pass. As you have pointed out, many surveys (although they are not designed to project participation) project unrealistic levels of participation. A likely voter/participant does not need to vote/participate to represent the pool of likely voters/participants, but the likely voter/participant pool is not much larger than the actual turnout.

Our results in Iowa show that John Edwards has a slight lead over Hillary Clinton among those voters saying they have attended a caucus in the past. Hillary Clinton has a greater lead among those saying this will be their first caucus. Hillary Clinton also has very strong support among women who say they usually do not vote/participate in primary/caucus races - this is true in Iowa and the other early states.

Sample size is largely irrelevant to the pattern in our chart. Smaller samples would explain greater variability, but not a consistent difference across a large number of samples. The observation in his second paragraph is much more important. Since ARG's previous releases did not mention these results, I asked for the question about past caucus participation and the associated results. His response:

The question is: Will this be the first Democratic caucus you have attended, or have you attended a Democratic caucus in the past?

We first asked this in Feb:

Feb - 41% first, 59% past
Mar - 44% first, 55% past
Apr - 39% first, 60% past
May - 45% first, 55% past
Jun - 42% first, 57% past
Jul - 40% first, 60% past
Aug - 43% first, 57% past

We can compare this result to similar questions or reports from other recent surveys. The differences among the four pollsters are huge, and they show a clear pattern consistent with what Bennett reports in his own surveys: John Edwards does better against Clinton as the percentage of past caucus goers increases.


So what is the right number of past caucus goers? Bennett can certainly argue that the entrance polls from the 2000 and 2004 Caucuses are on his side. Bennett used exactly the same question as the network entrance poll, which reported the percentage of first-time Democratic caucus goers as 53% in 2004 and 47% in 2000. Of course, as we learned three years ago, exit polls have their own problems, and I am guessing that other pollsters will debate what past-caucus goer number is correct. We will pursue this point further.

Finally, it is worth saying that this exchange and my arguably unfair "blogger speculation" yesterday make one thing clear: If we are going to dig deeper into these issues, we have an obligation to ask these questions (about incidence and sample characteristics) about all polls, not just those from ARG, Time and a handful of others.

Stay tuned.

Iowa: A Tale of Two New Polls

So today we have another installment in that pollster's nightmare known as the Iowa caucuses: Two new polls of "likely Democratic caucus goers" conducted over the last ten days that show very different results. The American Research Group (ARG) survey (conducted 8/26-29, n=600) shows Hillary Clinton (with 28%) leading Barack Obama (23%) and John Edwards (20%). And a new survey from Time/SRBI (conducted 8/22-26, n=519, Time story, SRBI results) shows essentially the opposite, Edwards (with 29%) leading Clinton (24%) and Obama (22%).

Is one result more trustworthy than the other? That is always a tough question to answer, but one of these polls is considerably more transparent about its methods. And that should tell us something.

While I have been opining lately about both the difficulty in polling the Iowa Caucuses and the remarkable lack of disclosure of methodology in the early states (especially here and here and all the posts here), the new Time survey stands out as a model of transparency:

The sample source was a list of registered Democratic and Independent voters in Iowa provided by Voter Contact Services. These registered voters were screened to determine their likelihood of attending the 2008 Iowa Democratic caucuses.

Likely voters included in the sample included those who said they were

  • 100% certain that they would attend the Iowa caucuses, OR
  • probably going to attend and reported that they had attended a previous Iowa caucus.

The margin of error for the entire sample is approximately +/- 5 percentage points. The margin of error is higher for subgroups. Surveys are subject to other error sources as well, including sampling coverage error, recording error, and respondent error.

Data were weighted to approximate the 2004 Iowa Democratic Caucus "Entrance Polls," conducted January 19, 2004.

Turnout in primary elections and caucuses tends to be low, with polls at this early stage generally overestimating attendance.

The sample included cell phone numbers, which, to the extent SRBI was able to identify them, were dialed manually.

I emailed Schulman at SRBI to ask about the incidence and he quickly replied with a "back of the envelope" calculation: Their sample of 519 likely caucus goers represents roughly 12% of eligible adults in Iowa (details on the jump), exactly the same percentage as obtained by the recent ABC News/Washington Post poll, but higher than the reported 2004 Democratic caucus turnout (5.5% of eligible adults). Keep in mind, however, that the ABC/Post poll used a random digit dial methodology and screened from the population of all Iowa adults.

The Time/SRBI survey started with a list of registered Democrats and independents - so theoretically did a better job screening out non-registrants and Republicans. On the Time survey, 92% of respondents report having "ever attended" Iowa precinct caucuses (see Q2). On the Post/ABC survey, 68% report having "attended any previous Iowa caucuses" (see Q12). Readers will notice that on the 2004 entrance poll, 55% of the caucus-goers said they had participated before.

What is the American Research Group Methodology? All they tell us on the website is that they completed 600 interviews and that respondents were asked:

Would you say that you definitely plan to participate in the 2008 Democratic presidential caucus, that you might participate in the 2008 Democratic presidential caucus, or that you will probably not participate in the 2008 Democratic presidential caucus?

Blogger speculation alert: If this was the only question used to screen, it is likely that ARG's incidence of eligible adults was much higher. Such a difference likely explains why they show Clinton doing consistently better in Iowa than other pollsters, but that is just an educated guess. [Update: A guess that turns out to be wrong....]. We owe Dick Bennett the opportunity to respond with more details. I have emailed him with questions and will post a response when I get it. [Update: Details of Bennett's response here. They ask four questions to screen for likely voters and their Democratic sample in this case represented roughly 12% of adults in Iowa. Apologies to ARG].

I suspect that if we could know all about every pollster's methods in Iowa, we would see evidence of a disagreement about how tightly to screen and about what percentage of the completed sample should report having participated in a prior caucus.

The resolution of that argument is neither simple nor obvious, but seems to have a profound impact on the results. Surveys that appear to include more past caucus goers (Time, Des Moines Register and One Campaign survey -- see our Iowa compilation) tend to favor John Edwards, while Hillary Clinton does better on surveys that define the likely caucus-goer universe more broadly. [Update: The disagreement may have more to do with the appropriate number of self-reported past caucus goers].

Details on Time's "back of the envelope" incidence calculation after the jump...

Continue reading "Iowa: A Tale of Two New Polls"

A Different Approach: The Univ. of Iowa Caucus Poll

A few additional notes on the poll of likely Iowa caucus-goers from the University of Iowa that we linked to earlier, based on information provided via email by U. of Iowa Assoc. Prof. David Redlawsk:

First, the survey used a sample drawn from a list of Iowa households listed in telephone directories. As such, it has a potential coverage problem because it misses Iowans with unlisted telephone numbers. The survey screened to interview 907 self-reported registered voters.

Second, "because of a programming glitch," Redlawsk said he "cannot distinguish the 'no registered voters' from other refusals." However, we know that as of the fall of 2006, 84% of Iowa's adults were registered voters (1.9 million** registered voters divided by 2.26 million voting age adults).

Based on that statistic, we can make the following assumptions about the percentage of adults represented by the various subgroups reported on for this survey:

  • 425 Democratic Caucus Goers = 40% of adults
  • 319 "Most Likely" Democratic Caucus Goers = 29% of adults
  • 306 Republican Caucus Goers = 28% of adults
  • 223 "Most Likely" Republican Caucus Goers = 21% of adults

In short, the various subgroups of likely caucus goers in the U. of Iowa poll represent a much broader slice of Iowa voters than the recent ABC/Washington Post survey or the Des Moines Register survey from last year.

Put another way, even the "most likely" caucus-goer definitions for this survey project to a combined Democratic and Republican turnout of 1.1 million participants - half the adults in Iowa. By comparison, Democratic turnout was an estimated 124,000 in 2004, and estimated Republican turnout was 108,000 in 1988.

Finally, even putting screening issues aside, this survey used an entirely open-ended vote preference question. Respondents had to volunteer the name of their choice without prompting. This method undoubtedly provides a tougher test of voter commitment, but it also produces a much larger undecided vote and renders the results incomparable to other Iowa polls. As such, we have not included either of the U. of Iowa polls in our Iowa charts.

**UPDATE: In doing these calculations, I should have added a decimal to the registered voter number (i.e., 1.97 million registered voters rather than 1.9 million), which would have shown 87% as registered to vote rather than 84%. That change would increase my estimate of the percentage of adults represented by each sample to 30.6% for the "most likely" Democratic caucus goers and 21.4% for the "most likely" Republicans.
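The arithmetic behind these subgroup estimates can be sketched as follows (an illustrative Python rendering of my own back-of-the-envelope calculation; the survey itself reported only the raw subgroup sizes):

```python
# Share of Iowa adults represented by each U. of Iowa subgroup:
# subgroup share of the registered-voter sample x statewide registration rate.
voting_age_adults = 2_260_000
registered = 1_970_000                      # corrected count; 1.9 million yields the original 84%
reg_rate = registered / voting_age_adults   # roughly 87% of adults registered

sample = 907                                # self-reported registered voters interviewed
subgroups = {
    "Democratic caucus goers": 425,
    '"Most likely" Democratic': 319,
    "Republican caucus goers": 306,
    '"Most likely" Republican': 223,
}
for name, n in subgroups.items():
    print(f"{name}: {n / sample * reg_rate:.1%} of adults")

# The combined "most likely" groups project to roughly half of Iowa's adults:
projected = (319 + 223) / sample * reg_rate * voting_age_adults
print(f"Projected combined turnout: {projected:,.0f}")
```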

Gallup Looks at Likely Primary Voters

Gallup Guru Frank Newport followed up on the discussion here and elsewhere about "possible differences between broad samples of voters and likely voters" when Gallup asks about the 2008 party nomination contests on national surveys. His conclusion:

[O]ur analysis suggests at this point there is little difference at the national level in candidate preferences even when we analyze smaller groups of more hard-core voters. For our latest national poll, we narrowed the sample down to those Democrats who said they were "extremely likely" to vote in the Democratic primary in their state next year. No difference. Hillary Clinton heads by 20 points over Obama. We also looked at "pure Democrats" -- excluding those independents who lean Democratic. Hillary does even better among her party faithful, beating Obama by 30 points.


What about likely voters on the Republican side? Fred Thompson picks up a little among Republicans who are extremely likely to vote in the Republican primary, such that Giuliani's lead is trimmed to 8 points, 32% to 24%. Among hard-core Republicans -- excluding independents who lean Republican -- Giuliani is ahead of Thompson 30% to 20%.

Bottom line: The basic structure of the national presidential race for both parties appears to be similar regardless of whether one looks at all voters, or just those voters who are most likely to actually vote.

A further analysis of the same data posted this morning by Gallup's Lydia Saad provides more numbers for the Democrats, plus more information on the subgroups that Newport examined. First, for the Democrats:

  • All Democratic identifiers and "leaners" (initially independent adults that say they lean to the Democratic Party) - 48% of adults.
  • All Democratic identifiers and "leaners" that also say they are "extremely likely" to vote in the Democratic primaries or caucuses" - 27% of adults; 58% of all Democrats & leaners.
  • "Pure Democrats" (excludes independent "leaners") - 30% of adults.
  • Pure Democrats that are registered to vote plus registered Democratic "leaners" that say they are "extremely likely" to participate in the Democratic primaries or caucuses - 30% of adults; 63% of all Democrats & leaners.**

The table below shows the full results included in the Saad report for the first and last groups, plus the Clinton margins reported in Newport's Gallup Guru post. As Saad notes, looking at the last group (registered Democratic identifiers plus "extremely likely" registered leaners):

Clinton still dominates the field, although by a bit smaller margin than among all Democrats. Support for Clinton remains about the same, at 47%, but the percentage choosing Obama is slightly higher, at 31%.


One take-away point from these data: how the pollster defines a "likely voter" matters as much as how tightly they screen. Notice that the third and fourth columns above capture slices of Democrats that are the same size (30% of adults) but with very different compositions. Clinton leads Obama by 30 points among "pure Democrats," but remove non-registrants and add back independents that are "extremely likely" to vote in a Democratic primary, and Clinton's lead drops to just 16 points.

Also, bear in mind that the actual turnout in all of the 2004 Democratic primaries and caucuses amounted to less than 10% of adults in the United States.

**The definition of the fourth subgroup in the Gallup report is a bit ambiguous. I emailed Gallup to request confirmation.

Screens & RDD: The ABC/Post Survey

It was probably Murphy's Law. Within hours of my posting a review of the sorry state of disclosure of early primary poll methodology, ABC News and The Washington Post released a new survey of likely caucus goers in Iowa that disclosed the two critical pieces of information I had searched for elsewhere. The two ABC News releases posted on the web (on Democratic and Republican caucus results) disclosed both the sample frame and the share of the voting age population represented by each survey. ABC News polling director Gary Langer also devoted his online column last Friday to a defense of his use of the random digit dial (RDD) methodology to sample the Iowa caucuses.

Let's take a closer look.

Langer concluded his column with a note on "likely voter screening," a subject I have been posting on lately. He writes:

Some polls of likely caucus-goers, or likely voters elsewhere, may include lots of people who aren't really likely to vote at all. Drilling down, again, is more difficult and more expensive. But if you're claiming to home in on likely voters, you want to do it seriously. Anyone producing a poll of "likely voters" should be prepared to answer this question: What share of the voting-age population do they represent?


The good news is that Langer and ABC News also provided an answer. For the Democratic sample:

This survey was conducted by telephone calls to a random sample of Iowa homes with landline phone service. Adults identified as likely Democratic caucus goers accounted for 12 percent of respondents; with an adult population of 2.2 million in Iowa, that projects to caucus turnout of 260,000.

In 2004, by comparison, just over 122,000 Democrats (5.5% of the voting age population) turned out for the caucuses.

And for the Republicans:

Adults identified as likely Republican caucus-goers accounted for seven percent of respondents; with an adult population of 2.2 million in Iowa, that projects to caucus turnout of 150,000. That's within sight of the highest previous turnout for a Republican caucus, 109,000 in 1988.

The estimated turnout for the 2000 Republican caucuses was lower (approximately 86,000), partly because John McCain focused his campaign on the New Hampshire primary. Thus, Republican turnout amounted to 4% to 5% of the voting age population in the last two contested Iowa caucuses.
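ABC's turnout projections follow directly from the reported incidence and the adult population. A quick check of the arithmetic (illustrative Python; ABC publishes rounded figures, so the exact products differ slightly from the release):

```python
# Projected caucus turnout = screened-in incidence x Iowa adult population.
iowa_adults = 2_200_000

dem_projection = 0.12 * iowa_adults   # 12% identified as likely Democratic caucus goers
rep_projection = 0.07 * iowa_adults   # 7% identified as likely Republican caucus goers
print(f"Projected Democratic turnout: {dem_projection:,.0f}")  # 264,000; ABC rounds to 260,000
print(f"Projected Republican turnout: {rep_projection:,.0f}")  # 154,000; ABC rounds to 150,000

# Historical benchmark as a share of the voting-age population:
print(f"2004 Democratic turnout: {122_000 / iowa_adults:.1%}")  # ~5.5%
```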

So first, let's give credit where it is due. Of the thirteen organizations that have released surveys in Iowa so far this year, only ABC News has published full information about how tightly they screened likely caucus voters.

Having said that, two questions remain: First, is the screen used by the ABC/Washington Post poll tight enough? After all, their screen of Democrats projects to a "likely voter" population of 260,000, a number more than double both the 2004 turnout (122,000) and the all-time record for Democrats set in 1988 (125,000). The ABC release seems to anticipate that question with the following passage:

A more restrictive likely voter definition, winnowing down to half that turnout, or about what it was in 2004, does not make a statistically significant difference in the estimate -- Edwards, 28 percent; Obama, 27 percent; and Clinton, 23 percent, all within sampling tolerances given the relatively small sample size. The more inclusive definition was used for more reliable subgroup analysis.

The full sample had Obama at 27% and Edwards and Clinton at 26% each. While the release does not specify the "more restrictive" definition they used, The Washington Post's version of the results shows that exactly half (50%) of the likely Democratic caucus goers said they are "absolutely certain" they will attend.
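Cutting to the "absolutely certain" half would put the projected electorate close to the 2004 turnout, which is presumably what ABC means by "about what it was in 2004." A quick sketch (my arithmetic, not ABC's published calculation):

```python
# Halving the broad 12% incidence to the "absolutely certain" half.
iowa_adults = 2_200_000
broad_incidence = 0.12                        # likely Democratic caucus goers, broad definition
restrictive_incidence = broad_incidence * 0.50

projected = restrictive_incidence * iowa_adults
print(f"Restrictive projection: {projected:,.0f}")  # ~132,000, near the 2004 turnout of ~122,000
```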

The Republican release makes essentially the same assertion: "A more restrictive likely voter definition, winnowing down to lower turnout, makes no substantive difference in the results."

So ABC's answer is: We could have used a tighter screen but it would have made no significant difference in the results.

Their decision is reasonable considering that the Des Moines Register poll used essentially the same degree of screening for their first poll of Democrats in 2006, using a list-based methodology that nailed the final result in 2004. Also keep in mind that no screen based on self-reports of past behavior or future intent can identify the ultimate electorate with anything close to 100% accuracy. Pollsters know that some respondents will falsely report having voted in the past, and that respondents often provide wildly optimistic reports about their future vote intent that typically bear little resemblance to what they actually do on Election Day. And while we know what turnout has been in the past, we can only guess at the Iowa Caucus turnout this coming January (or perhaps even December). The ideal methodology defines the likely electorate a bit more broadly than expected turnout, but also examines narrower turnout groups within the sample, as this survey did.

The second and more complex question involves the ABC/Washington Post decision to use a random digit dial (RDD) sample frame rather than a sample drawn from a list of registered voters.

Langer makes the classic case for RDD, by pointing out the potential flaws in samples drawn from the list of registered voters provided by the Iowa secretary of state. Roughly 15% of the voters on the Secretary of State's list lack a telephone number and about as many will turn out to be non-working or business numbers (according to data he cites from a Pew Research Center Iowa poll conducted in 2003). Include the traditionally small number of Iowans that may still register to vote (or participate after having been inactive for many years), and we have, he writes, "a lot of noncoverage - certainly enough, potentially, to affect estimates." Langer acknowledges that RDD samples now face their own non-coverage problem due to the growth of cell phone only households (12-15% now lack landline phone service), but concludes that RDD "produces far less noncoverage than in list-based sampling."

True enough. But Langer leaves out some pertinent information. First, campaign pollsters that make use of registered voter lists typically use a vendor that attempts to match the names and the addresses on the list to telephone listings. Two vendors I spoke with today tell me that they are able to use such a process to increase the "match rate" to over 90%, a level that makes Iowa's lists among the best in the nation for polling.

Second - and this is a more complicated issue that really demands another post - the potential value of sampling from a registered voter list is not the ability to call only registered voters with the confidence that "people are reporting their registration accurately." It also allows pollsters to use the rich past vote history data available on the list for individual voters to inform their decisions about which voters to sample and interview. Pollsters can also make use of data providing the precise geographic location, party registration, gender and age of each sampled voter provided on the list to correct for non-response bias.

Finally, the campaign pollsters on the Democratic side that shell out "up to $100,000" to the Iowa Democratic Party for access to the list do not conduct polls that "entirely exclude" first time caucus goers (as Langer suggests). The Iowa party appends past caucus vote history to the full list of registered voters, and pollsters can use the additional data to greatly inform their sample selection methodology (Democrat Mark Mellman gives a hint of how this works here; Mellman's complete procedure probably resembles the methodology proposed by Yale political scientists Donald Green and Alan Gerber here and here).

Ultimately, the choice of sample frame involves a trade-off between the potential for greater coverage error (when using a list) and greater measurement error in identifying true likely voters (when using RDD). It is a judgment call for the pollster. Those of us who have grown comfortable with list samples believe that the increased accuracy in sampling true likely voters offsets the risk of missing those without accurate phone numbers on the lists. But the choice is not obvious. The fact that ABC and the Post have gone in a different direction -- and have disclosed the pertinent details -- will ultimately enrich our understanding of both the poll methodology and the Iowa campaign.

How Tight is the Screen? Part II

I want to pick up where I left off on Tuesday, when I wrote about the way national surveys screen for primary voters. How well have the pollsters in early primary states done in disclosing how tightly they "screen" to identify the voters that will actually turn out to vote (or caucus)? Not very well, unfortunately.

For those just dropping in, here is the basic dilemma: Voter turnout in primary elections, and especially in caucus states like Iowa, is typically much lower than in the general election. A pre-election survey that aims to track and ultimately project the outcome of the "horse-race" -- the measure of voter preferences "if the election were held today" -- needs to represent the population of "likely voters." When the expected turnout is very low, that becomes a difficult task, especially when polling many months before an election.

And in Iowa and South Carolina, if history is a guide, that turnout will be a very small fraction of eligible adults,** as the following table shows:


When a pollster uses a random digit telephone methodology, they begin by randomly sampling adults in all households with landline telephone service. They need to use some mechanism to identify a probable electorate from within a sample of all adults. If recent history is a guide, the probable electorate in Iowa -- Democrats and Republicans -- will fall in the high single digits as a percentage of eligible adults. South Carolina's turnout is better, but is still unlikely to exceed 30% of adults. And while the New Hampshire primary typically draws the highest turnout of any of the presidential primaries, it still attracts less than half of the eligible adults in the state. Despite all the attention the New Hampshire primary receives, many voters who ultimately cast ballots in the November general election (roughly 30% in 2000) choose to skip their state's storied primary.

A pollster may not want to "screen" so that the size of their likely voter universe matches the exact level of turnout. Most campaign pollsters I have worked with prefer to shoot for a slightly more expansive universe, both to capture those genuinely uncertain about whether they will vote and to account for the presumption that "refusals" (those who hang up before answering any questions) are more likely to be non-voters.

Nonetheless, the degree to which pollsters screen matters a great deal. If, hypothetically, one Democratic primary poll captures 10% of eligible adults while another captures 40%, the results could easily be very different (and I'd definitely put more faith in the first).

It also matters greatly how pollsters go about identifying likely voters. I wrote quite a bit about that process in October 2004 as it applies to random digit dial (RDD) surveys of general election voters. In extremely low turnout contests, such as the Iowa caucuses, most campaign pollsters now rely on samples drawn from lists of registered voters that include the vote history of individual voters. Most of the Democratic pollsters I know agree with Mark Mellman, who asserted in a must-read column in The Hill earlier this year that, "the only accurate way to poll the Iowa caucuses starts with the party's voter file."

So, based on the information they routinely release, what do we know about the way the recent polls in Iowa, New Hampshire and South Carolina screened for likely voters? As the many question marks in the tables below show, not much.


The gold star for disclosure goes to the automated pollster SurveyUSA. Of 22 survey organizations active so far in these states, they are the only organization that routinely releases (and makes available on their web site) all of the information necessary to determine how tightly they screen. Every release includes a simple statement like the one from their May poll of New Hampshire voters:

Filtering: 2,000 state of New Hampshire adults were interviewed by SurveyUSA 05/04/07 through 05/06/07. . . Of the 2,000 NH adults, 1,756 were registered to vote. Of them, 551 were identified by SurveyUSA as likely to vote in the Republican NH Primary, 589 were identified by SurveyUSA as likely to vote in the Democratic NH Primary, and were included in this survey.

I did the simple math using the numbers above (which are weighted values). For SurveyUSA's May survey, Democratic likely voters represented 29% of adults and Republican likely voters represented 28%, for a total of 57% of all New Hampshire adults. Their screen is a very reasonable fit for a survey fielded eight months before the primary.
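For readers who want to check the arithmetic, here is the calculation sketched in a few lines of Python, using the weighted counts from the SurveyUSA release quoted above (the `pct` helper simply rounds half up to whole percentages, matching the way the figures are reported):

```python
def pct(numerator, denominator):
    """Percentage of denominator, rounded half up to a whole number."""
    return (200 * numerator + denominator) // (2 * denominator)

adults = 2000        # NH adults interviewed (weighted)
dem_likely = 589     # identified as likely Democratic primary voters
gop_likely = 551     # identified as likely Republican primary voters

dem_screen = pct(dem_likely, adults)                 # 29% of adults
gop_screen = pct(gop_likely, adults)                 # 28% of adults
total_screen = pct(dem_likely + gop_likely, adults)  # 57% of adults
```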


Honorable mention for disclosure also goes to two Iowa polls. First, the Des Moines Register poll conducted by Selzer and Company. Ann Selzer provided me with very complete information upon request last year. Her first Iowa caucus survey last year used a registered voter list sample and screened to reach a population that represents roughly 11% of eligible adults (assuming 2.0 million registered voters in Iowa and 2.2 million eligible adults).

Second, the poll conducted in March by the University of Iowa. While their survey asked an open-ended vote question (rendering the results incomparable with those included in our Iowa chart), their release did at least provide the basic numbers concerning their likely voter screen. They interviewed 298 Democratic likely caucus goers and 178 Republican caucus-goers out of 1,290 "registered Iowa voters" (for an incidence of 37% of registered voters). Unfortunately, they did not specify whether they used a registered voter list or a random digit sample, although given the incidence of registered voters in Iowa, we can assume that the percentage of eligible adults that passed the screen was probably in the low 30s.
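The low-30s estimate for the University of Iowa poll follows from the same arithmetic, combined with the registration figures assumed above for the Selzer calculation (roughly 2.0 million registered voters out of 2.2 million eligible adults). A quick sketch:

```python
dem_cg = 298        # Democratic likely caucus-goers interviewed
gop_cg = 178        # Republican likely caucus-goers interviewed
rv_sample = 1290    # "registered Iowa voters" screened

incidence_rv = (dem_cg + gop_cg) / rv_sample   # ~0.37 of registered voters

# Assumed statewide figures (the same ones used for the Selzer calculation)
registered = 2.0e6
eligible = 2.2e6

# Re-expressed as a share of eligible adults: roughly the low 30s
incidence_eligible = incidence_rv * (registered / eligible)
```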


And speaking of the sampling frame, only 6 of 22 organizations -- SurveyUSA, Des Moines Register/Selzer, Fox News, Rasmussen Reports, Zogby, and Winthrop University -- specified the sampling method they used (random digit dial, RBS or listed telephone directory). I will give honorable mention to two more organizations -- Chernoff Newman/MarketSearch and the partnership of Hamilton Beattie (D) and Ayres McHenry (R) -- that disclosed their sample method to me upon request earlier this year.

The obfuscation of this information by the remaining 14 pollsters is particularly stunning given that the ethical codes of both the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP) explicitly require the disclosure of the sampling method, also known as the sample "frame." The NCPP's principles of disclosure require the following of its member organizations for "all reports of survey findings issued for public release:"

Sampling method employed (for example, random-digit dialed telephone sample, list-based telephone sample, area probability sample, probability mail sample, other probability sample, opt-in internet panel, non-probability convenience sample, use of any oversampling).

The AAPOR code mandates disclosure of:

A definition of the population under study, and a description of the sampling frame used to identify this population.

Finally, while virtually all of these surveys told us how many "likely primary voters" they selected, very few provided details on how they determined that voters (or caucus goers) were in fact "likely" to participate. The most notable exceptions were the Hamilton Beattie (D)/Ayres McHenry (R) and Chernoff Newman/MarketSearch polls in South Carolina, and the News 7/Suffolk University poll in New Hampshire. All of these included the questions used to screen for likely primary voters in the "filled-in" questionnaires that included full results.

So what should an educated poll consumer do? I have one more category of diagnostic questions to review, and then I want to propose something we might be able to do about the very limited methodological information available to us. For now, here's a two-word hint of what I have in mind: "upon request."

Stay tuned.

**Political scientists typically use two denominators to calculate turnout among adults: all adults of voting age (also known as the voting age population or VAP), or all adults who are eligible to vote (the voting eligible population or VEP). George Mason University Professor Michael McDonald has helped popularize VEP as a better way to calculate voter turnout, because it excludes adults ineligible to vote, such as non-citizens and ineligible felons. The perfect statistic for comparison to telephone surveys of adults would fall somewhere in between, because adult telephone samples do not reach those living in institutions or who do not speak English, but might still include non-citizens who speak English (or Spanish, where pollsters use bilingual interviewers).

In a state like California, with a large non-citizen population, VAP is probably the better statistic for comparisons to the way polls screen for likely voters. In Iowa, New Hampshire and South Carolina, however, the choice has very little impact. Had I used VAP rather than VEP above, the turnout statistics in the table would have been roughly a half a percentage point lower.
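To see why the choice of denominator matters so little in these states, here is a toy calculation with made-up round numbers (not actual figures for any state): when the ineligible share of the adult population is small, dividing by VAP instead of VEP shaves only a fraction of a point off the turnout rate.

```python
votes = 220_000    # hypothetical ballots cast
vap = 2_300_000    # voting age population (includes non-citizens, ineligible felons)
vep = 2_200_000    # voting eligible population (excludes them)

turnout_vap = 100 * votes / vap   # a bit under 9.6%
turnout_vep = 100 * votes / vep   # exactly 10.0%
# The gap here is under half a percentage point, because the
# ineligible population is a small slice of all adults.
```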

CORRECTION: Due to an error in my spreadsheet, the original version of the turnout table above incorrectly displayed turnout as a percentage of VAP rather than VEP. For reference, the table below has turnout as a percentage of VAP.


How Tight is the Screen? Part I

The questions we seem to get most often here at Pollster, either in the comments or via email, concern the variability we see in the presidential primary polls, especially in the early primary states. Why is pollster A showing a result that seems consistently different than what pollster B shows? Why do the results from pollster C seem so volatile? Which results should we trust? I took up one such conflict last Friday.

Unfortunately, definitive answers to some of these questions are elusive, given the vagaries of the art of pre-election polling in relatively low turnout primaries. When confronted with such questions, political insiders tend to rely on conventional wisdom and pollster reputation. Our preference is to look at differences in how survey results were obtained and take those differences into account in analyzing the data.

At various AAPOR conferences in recent years, I have heard the most experienced pollsters repeatedly confirm my own intuition: To find the most trustworthy primary election polls, we need to look closely at how tightly the pollsters "screen" for likely primary voters. The reason is that primary and caucus turnout is usually low in comparison to general elections. In 2004 (by my calculations), Democratic turnout amounted to 6% of the voting age population for the Iowa Caucuses and 22% for the New Hampshire primary. In other states, 2004 turnout averaged 9% in primaries and 1.4% in caucuses.

A pollster that begins with a sample of adults has to narrow the sample down to something resembling the likely electorate, which is not easy. As few pollsters approach the task exactly the same way, this is an area of polling methodology that is much more about art than science. Nonetheless, in most primary polls, relatively tighter screens are preferable in trying to model a likely electorate.

Thus, to try to make sense of the polls before us, we want to know two things. First, how narrowly did the pollsters screen for primary voters? Second, as no two such screens are created equal, what kind of people qualified as primary voters?

In this post, I will look at what some recent national polls have told us about how tightly they screened their samples before asking a presidential primary trial-heat question and what kinds of voters were selected. I will turn to statewide polls in Part II. The table below summarizes the available data, including the percentage of adults that get the Democratic or Republican primary vote questions (if you click on the table, you will get a pop-up version that includes the sample sizes for each survey).


Unfortunately, of the 20 national surveys checked above, only five (Gallup/USA Today, AP-IPSOS, CBS/New York Times, Cook/RT Strategies and NBC/Wall Street Journal) provide all of the information necessary to quantify the tightness of their screen question. Others fall short. Here is a brief explanation of how I arrived at the numbers above.

The calculation is easiest when the pollster reports results for a random sample of all adults as well as the weighted sizes of the subgroups that answered the primary vote questions. In various ways, these five organizations included the necessary information in readily available public releases.

Five more organizations (CNN/ORC, Newsweek, LA Times/Bloomberg, the Pew Research Center and Time) routinely provide the subgroup sizes for respondents that answer primary vote questions, though they do not specify whether the "n-sizes" are weighted or unweighted. Pollsters typically provide unweighted counts because they are most appropriate for calculating sampling error. However, since the unweighted statistic can provide a slightly misleading estimate of the narrowness of the screen, I have labeled the percentages for these organizations as approximate.

Of those that report results among all adults, only the ABC News/Washington Post poll routinely omits information about the size of the subgroups that answer primary vote questions. Even though their articles and reports often lead with results among partisans, they have provided no information about the sub-group sizes or margin of error for party subgroups since February. While the Washington Post provided results for party identification during 2005 and 2006, that practice appears to have ended as of February 2007.

[CORRECTION: The June and July filled-in questionnaires available at washingtonpost.com include the party identification question, and those tables also present time series data for the February and April surveys. However, as these releases do not include the follow-up question showing the percentage that lean to either party (which had been included in Post releases during 2006), they still do not provide information sufficient to determine the size of the subgroups that answered presidential primary trial-heat questions].

Determining the tightness of the screen gets much harder when pollsters report overall results on their main sample for only registered or "likely" voters. Three more organizations (Diageo/Hotline, Fox News/Opinion Dynamics and Quinnipiac) provide overall results only for those who say they are registered to vote. For these three (denoted with a double asterisk in the table), I have calculated an estimate of the screen based on the educated guess that roughly 85% of adults typically identify themselves as registered voters on other surveys of adults.
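The adjustment itself is simple. Here is a sketch using hypothetical numbers (a poll reporting a 40% Democratic primary subgroup among registered voters) together with my assumed 85% registration rate:

```python
REG_RATE = 0.85              # assumed share of adults who say they are registered

subgroup_share_of_rv = 0.40  # hypothetical: primary subgroup as share of the RV sample

# Deflate by the registration rate to re-express the screen
# as a share of all adults: 0.40 * 0.85 = 0.34
subgroup_share_of_adults = subgroup_share_of_rv * REG_RATE
```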

Four more organizations (Rasmussen Reports, Zogby, Democracy Corps, and McLaughlin and Associates) report primary results as subgroups of samples of "likely voters." Since their standard releases provide no information on how narrowly they screen to select "likely voters," we have no way to estimate the tightness of their primary screens. If we simply divided the size of the subgroup by the total sample, we would overstate the size of the primary voting groups in comparison to the other surveys.

Finally, the American Research Group follows a procedure common to many statewide surveys: It provides only the number of respondents asked the primary vote question, with no information about the size of the universe called to select those respondents.

All of the discussion above concerns the first question: How narrowly did the pollsters screen? We have somewhat better information -- at least with regards to national surveys -- about the second question: how those people were selected. The last column in the table categorizes each pollster by the way they select respondents to receive primary vote questions:

  • Leaned Partisans -- This is the approach taken by Gallup/USA Today, ABC News/Washington Post and AP-IPSOS. It includes, for each party, all adults that identify with or "lean" to that party.
  • Leaned Partisan+ -- The approach taken by NBC/Wall Street Journal includes both party identifiers and leaners and those who say they typically vote in the primary election of the given party. The LA Times/Bloomberg poll takes a similar approach although its screen appears to exclude leaners.
  • RV/Leaned Partisan or RV/Partisan -- This approach is taken by a large number of pollsters. It takes only those partisans or "leaned" partisans that say they are also registered to vote. Those labeled RV/Partisan exclude party "leaners" from the subgroup.
  • Primary Voters -- This category includes the surveys that use questions about primary voting (rather than party identification) to select respondents that will be asked primary vote questions.

As should be apparent from the table, the pollsters that use the "leaned partisan" or "leaned partisan+" approaches select partisans more broadly than those that include only registered voters or those that claim to vote in primaries. But all of these approaches are getting a much broader slice of the electorate than is likely to actually participate in a primary or caucus in 2008. As should be obvious, most of the national pollsters are not trying to model a specific electorate -- they are mostly providing data on the preferences of "Democrats" or "Republicans" (or Democratic or Republican "voters"). I wrote about that issue and its consequences back in March.

In Part II, I will turn to statewide polls in the early primary states and then discuss what to make of it all. Unfortunately, while the information discussed above is incomplete, the national polls look like a model of disclosure as compared to what we know about most of the statewide polls.

To be continued...

National GOP Contest: Why are ABC/Post & Rasmussen So Different?

A suggestion from alert reader and frequent commenter Andrew:

I write to suggest that you analyze the huge discrepancy between the latest Rasmussen and Washington Post/ABC polls. I'm talking about the Republican nomination. Rasmussen says Thompson is up by 4 over RG, while WP/ABC says Rudy is up by 20 pts over FT, who isn't even in second place here (36 RG to 14 FT). One of these pollsters is obviously very wrong. Two polls cannot both be accurate, if their margin of victory do not approximate each other. This is a humongous 24 point discrepancy.

Here, with a little assist from Professor Franklin, is a chart showing the discrepancy that Andrew noticed. The two surveys do seem to show a consistent difference that is clearly about more than random sampling error. The ABC News/Washington Post survey shows Giuliani doing consistently better, and Thompson doing consistently worse, than the automated surveys conducted by Rasmussen Reports, although the discrepancy has been largest in terms of how the most recent ABC/Post poll compares to Rasmussen surveys conducted over the last month or so.


To try to answer Andrew's question, it makes sense to take two issues separately. First, why are the surveys producing different results for the Republican primary?

At the most basic level, these surveys seem to be measuring the same thing: Where does the Republican nomination contest stand nationally? And both surveys begin with a national sample of working telephone numbers drawn using a random digit dial (RDD) methodology. Take a closer look, however, and you will see some pretty significant differences in methodology:

  • The ABC/Post survey uses live interviewers. Rasmussen uses an automated recorded voice that asks respondents to enter their answers by pushing buttons on a touch tone keypad. This method is known as Interactive Voice Response (IVR). The response rates -- and more importantly, the kinds of people that respond -- are likely different, although neither pollster has released specific response rates for any of the results plotted above.
  • The ABC/Post survey attempts to select a random member of each household to be interviewed by asking "to speak to the household member age 18 or over at home who's had the last birthday" (more details here). Rasmussen interviews whatever adult member of the household answers the telephone. Both organizations weight the final data to reflect the demographics of the population.
  • Rasmussen Reports weights each survey by party identification, using a rolling average of recent survey results as a target (although their party weighting should have little effect on a sub-group of Republican primary voters). The ABC/Post survey does not weight national surveys at this stage in the campaign by party ID.
  • [Update -- one I overlooked: The ABC/Post survey includes Newt Gingrich on their list of choices. Gingrich receives 7% on their most recent survey. If the Rasmussen survey prompts Gingrich as a choice, they do not report it. It is also possible that Rasmussen omits other candidates as well; their report provides results for just Giuliani, Thompson, Romney and McCain. Update II -- Scott Rasmussen informs via email: "We include all announced candidates plus Fred Thompson"].
  • And perhaps most important for Andrew's question: The ABC/Post survey asks the presidential primary question of all adults that identify with or "lean" to the Republicans. The Rasmussen survey screens to a narrower slice of the population: Those they select as "likely Republican primary voters."

Unfortunately, neither pollster tells us the percentage of adults that answered their Republican primary question, but we can take a reasonably educated guess: "Leaned Republicans" have been somewhere between 35% and 42% of the adult population on surveys conducted in recent months by Gallup and the Pew Research Center. If Rasmussen's likely voter selection model for Republicans is analogous to their model for Democrats, their "likely Republican primary" subgroup probably represents 20% to 25% of all adults.

Consider also that, even before screening for "likely voters" and regardless of the response rate, those willing to complete an IVR study may well represent a population that is better informed or more politically interested than those who complete a survey with an interviewer.

Put this all together, and it is clear that the Rasmussen survey is reaching a very different population, something I would wager explains much of the difference in the results charted above.

Now, the second question: which result is more "accurate"? It is tempting to say that this question is impossible to answer, since we will never have a national primary election to check it against. But a better answer may be that "accuracy" in this case depends on what we want to use the data for.

If we were trying to predict the outcome of a national primary, and if all other aspects of methodology were equal (which they're not), I would want to look at the narrower slice of "likely voters" rather than all adult "leaned Republicans." Since the nomination process involves a series of primaries and caucuses starting with Iowa and New Hampshire, and since the results from those early contests typically influence preferences in the states that vote later, we really need to focus on early states for a more "accurate" assessment of where things stand now. While interesting and fun to follow, these national measurements provide only indirect indicators of the current status of the race for the White House.

Why would the ABC/Post survey want to look at all Republicans, rather than likely voters? Here is the way ABC polling director Gary Langer explained it in his online column this week:

I like to think there are two things we cover in an election campaign. One is the election; the other is the campaign.

The campaign is about who wins. It's about tactics and strategy, fundraising and ad buys, endorsements and get-out-the-vote drives. It's about the score of the game - the horse race, contest-by-contest, and nothing else. We cover it, as we should.

The election is the bigger picture: It's about Americans coming together in their quadrennial exercise of democracy - sizing up where we're at as a country, where we want to be and what kind of person we'd like to lead us there. It's a different story than the horse race, with more texture to it, and plenty of meaning. We cover it, too.

We ask the horse race question in our national polls for context - not to predict the winner of a made-up national primary, but to see how views on issues, candidate attributes and the public's personal characteristics inform their preferences.

Questions like Andrew's are more consequential in the statewide surveys we are tracking here at Pollster.com, and those surveys have been producing some discrepancies even bigger than the one charted above. We will all be in a better position to make sense of those differences if we know more about the methodologies pollsters use. I'll be turning to that issue in far more detail next week.

Of Generic Votes and Likely Voters

Today's flood of new national surveys provides enough raw material for a week's worth of blog posts.  The new surveys are from ABC News/Washington Post, CBS/NY Times, CNN/ORC and USAToday/Gallup, plus one more from Newsweek released over the weekend.  I want to highlight a few key results, particularly what the new surveys tell us about shifts in the so called "generic congressional" ballot.

As Charles Franklin notes in the previous post, these surveys do indicate an improvement in the Democratic margins.  I want to take a closer look at an issue that inevitably confronts us when considering the generic ballot question: whether to watch results among all registered voters or just the sub-samples of "likely voters" as defined by each pollster.  The surveys from CNN, Gallup and Newsweek provide both, so the following table includes both.


While some of the surveys (Gallup and CNN) show more change than others, all but the CBS/NYTimes poll showed at least some improvement in the Democratic margin since September.  When we average the results of the registered voter samples, the Democratic margin increases from 11 to 14 percentage points.

When we shift to likely voters, things get a little murky.  Only two of the newest polls -- the ones from Gallup and CNN -- reported results for likely voters in both September and October.  And both show swings toward the Democrats among both their likely and registered voter subgroups, with the bigger movement among likely voters.  The change for Gallup among likely voters is simply enormous (from a dead heat to a 23 point Democratic advantage).  CNN also shows an eight point gain in the Democratic margin among likely voters (from 13 to 21 points).

So which population should we follow?  Frank Newport made the case for the Gallup likely voter model in a post here last week, and a lively debate ensued that will no doubt continue over these new results.  Consider the following table that shows how likely and registered voter results have compared since Labor Day on the polls that reported both.


There is no apparent consistency in the differences between the registered and likely voter samples.  On average they seem to make the margin about a point less Republican, but even that disappears when we remove the mid-September USAToday/Gallup poll from the analysis.  Consistent with past criticism, the likely voter model appears to be producing more volatile results, particularly for Gallup.  But for all the sound and fury of the debate, likely voters and registered voters are looking more or less the same.