Pollster.com

Articles and Analysis

 

How Tight is the Screen? Part I

Topics: 2008, Disclosure, Likely Voters, The 2008 Race

The questions we seem to get most often here at Pollster, either in the comments or via email, concern the variability we see in the presidential primary polls, especially in the early primary states. Why is pollster A showing a result that seems consistently different from what pollster B shows? Why do the results from pollster C seem so volatile? Which results should we trust? I took up one such conflict last Friday.

Unfortunately, definitive answers to some of these questions are elusive, given the vagaries of the art of pre-election polling in relatively low turnout primaries. When confronted with such questions, political insiders tend to rely on conventional wisdom and pollster reputation. Our preference is to look at differences in how survey results were obtained and take those differences into account in analyzing the data.

At various AAPOR conferences in recent years, I have heard the most experienced pollsters repeatedly confirm my own intuition: To find the most trustworthy primary election polls, we need to look closely at how tightly the pollsters "screen" for likely primary voters. The reason is that primary and caucus turnout is usually low in comparison to general elections. In 2004 (by my calculations), Democratic turnout amounted to 6% of the voting age population for the Iowa Caucuses and 22% for the New Hampshire primary. In the other states that held contests in 2004, turnout averaged 9% in primary states and 1.4% in caucus states.
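
To make that arithmetic concrete, here is a minimal sketch of the turnout calculation, using round, hypothetical placeholder figures rather than the actual 2004 counts:

    # Turnout as a share of the voting age population (VAP).
    # These are round, hypothetical placeholders, not the actual 2004 counts.
    caucus_attendees = 125_000          # hypothetical Democratic caucus turnout
    voting_age_population = 2_200_000   # hypothetical statewide VAP

    turnout = caucus_attendees / voting_age_population
    print(f"Turnout: {turnout:.1%}")    # -> Turnout: 5.7%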

A pollster that begins with a sample of adults has to narrow the sample down to something resembling the likely electorate, which is not easy. Since few pollsters approach the task in exactly the same way, this is an area of polling methodology that is much more art than science. Nonetheless, in most primary polls, relatively tighter screens are preferable in trying to model a likely electorate.

Thus, to make sense of the polls before us, we want to know two things. First, how narrowly did the pollsters screen for primary voters? Second, since no two such screens are created equal, what kind of people qualified as primary voters?

In this post, I will look at what some recent national polls have told us about how tightly they screened their samples before asking a presidential primary trial-heat question and what kinds of voters were selected. I will turn to statewide polls in Part II. The table below summarizes the available data, including the percentage of adults asked the Democratic or Republican primary vote questions.

[Table: National surveys -- percentage of adults asked the Democratic or Republican primary vote questions, by pollster, with sample sizes for each survey]

Unfortunately, of the 20 national surveys checked above, only five (Gallup/USA Today, AP-IPSOS, CBS/New York Times, Cook/RT Strategies and NBC/Wall Street Journal) provide all of the information necessary to quantify the tightness of their screen questions. The others fall short. Here is a brief explanation of how I arrived at the numbers above.

The calculation is easiest when the pollster reports results for a random sample of all adults as well as the weighted size of the subgroups that answered the primary vote questions. In various ways, these five organizations included the necessary information in readily available public releases.
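
In code, that calculation is a single division. The sketch below uses invented sample sizes purely for illustration:

    # Screen tightness: what share of the full adult sample was asked
    # the primary trial-heat question? Numbers are invented for illustration.
    total_adults_n = 1_000              # full adult sample
    dem_primary_weighted_n = 430        # weighted n asked the Democratic question

    screen = dem_primary_weighted_n / total_adults_n
    print(f"Democratic primary screen: {screen:.0%}")   # -> 43%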

Five more organizations (CNN/ORC, Newsweek, LA Times/Bloomberg, the Pew Research Center and Time) routinely provide the subgroup sizes for respondents who answer primary vote questions, though they do not specify whether the "n-sizes" are weighted or unweighted. Pollsters typically provide unweighted counts because those are most appropriate for calculating sampling error. However, since the unweighted statistic can provide a slightly misleading estimate of the narrowness of the screen, I have labeled the percentages for these organizations as approximate.
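
A quick, hypothetical illustration of why the unweighted count is only approximate: if demographic weighting shifts the effective size of the subgroup, the unweighted ratio will drift from the weighted one. All of the numbers below are invented:

    # Hypothetical example: unweighted vs. weighted subgroup shares.
    unweighted_total, unweighted_subgroup = 1_000, 450
    approx_screen = unweighted_subgroup / unweighted_total      # 45.0%, approximate

    # Suppose weighting shrinks the subgroup's effective size slightly:
    weighted_total, weighted_subgroup = 1_000.0, 425.0          # hypothetical weighted sums
    true_screen = weighted_subgroup / weighted_total            # 42.5%, the preferred figure

    print(f"Unweighted: {approx_screen:.1%}, weighted: {true_screen:.1%}")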

Of those that report results among all adults, only the ABC News/Washington Post poll routinely omits information about the size of the subgroups that answer primary vote questions. Even though their articles and reports often lead with results among partisans, they have provided no information since February about the sizes of, or the margins of error for, the party subgroups. While the Washington Post provided results for party identification during 2005 and 2006, that practice appears to have changed as of February 2007.

[CORRECTION: The June and July filled-in questionnaires available at washingtonpost.com include the party identification question, and those tables also present time series data for the February and April surveys. However, as these releases do not include the follow-up question showing the percentage that lean to either party (which had been included in Post releases during 2006), they still do not provide sufficient information to determine the size of the subgroups that answered presidential primary trial-heat questions.]

Determining the tightness of the screen gets much harder when pollsters report overall results on their main sample for only registered or "likely" voters. Three more organizations (Diageo/Hotline, Fox News/Opinion Dynamics and Quinnipiac) provide overall results only for those who say they are registered to vote. For these three (denoted with a double asterisk in the table), I have calculated an estimate of the screen based on the educated guess, drawn from other surveys of adults, that roughly 85% of adults typically identify themselves as registered voters.
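
Here is a minimal sketch of that adjustment; the 85% figure is the educated guess described above, and the sample sizes are invented:

    # Estimated screen when results are reported only for registered voters (RVs).
    # The 85% figure is an educated guess from other surveys of adults,
    # not a measured value for any particular poll; sample sizes are invented.
    SHARE_OF_ADULTS_CLAIMING_RV = 0.85

    rv_sample_n = 900                   # hypothetical registered-voter sample
    dem_primary_n = 400                 # hypothetical n asked the Democratic question

    estimated_screen = (dem_primary_n / rv_sample_n) * SHARE_OF_ADULTS_CLAIMING_RV
    print(f"Estimated screen (share of all adults): {estimated_screen:.0%}")   # -> 38%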

Four more organizations (Rasmussen Reports, Zogby, Democracy Corps, and McLaughlin and Associates) report primary results as subgroups of samples of "likely voters." Since their standard releases provide no information on how narrowly they screen to select "likely voters," we have no way to estimate the tightness of their primary screens. If we simply divided the size of the subgroup by the total sample, we would overstate the size of the primary voting groups in comparison to the other surveys.
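
A hypothetical illustration of the problem: without knowing how tightly a pollster screened adults down to "likely voters," the subgroup-to-sample ratio is not comparable to the adult-based screens above. Every number here is invented, including the likely-voter screen rate, which is exactly what these releases do not disclose:

    # Why dividing by a "likely voter" (LV) sample overstates the screen.
    lv_sample_n = 800
    dem_primary_n = 380
    share_of_lvs = dem_primary_n / lv_sample_n          # 47.5% of likely voters

    hypothetical_lv_rate = 0.60                         # unknown in practice
    share_of_adults = share_of_lvs * hypothetical_lv_rate
    print(f"Of LVs: {share_of_lvs:.1%}; of all adults: {share_of_adults:.1%}")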

Finally, the American Research Group follows a procedure common to many statewide surveys: It provides only the number of interviews asked the primary vote question, with no information about the size of the universe called to select those respondents.

All of the discussion above concerns the first question: How narrowly did the pollsters screen? We have somewhat better information -- at least with regard to national surveys -- about the second question: how those people were selected. The last column in the table categorizes each pollster by the way it selects respondents to receive primary vote questions:

  • Leaned Partisans -- This is the approach taken by Gallup/USA Today, ABC News/Washington Post, and AP-IPSOS. It includes, for each party, all adults who identify with or "lean" to that party.
  • Leaned Partisan+ -- The approach taken by NBC/Wall Street Journal includes party identifiers and leaners, plus those who say they typically vote in the primary election of the given party. The LA Times/Bloomberg poll takes a similar approach, although its screen appears to exclude leaners.
  • RV/Leaned Partisan or RV/Partisan -- This approach is taken by a large number of pollsters. It takes only those partisans or "leaned" partisans who say they are also registered to vote. Those labeled RV/Partisan exclude party "leaners" from the subgroup.
  • Primary Voters -- This category includes the surveys that use questions about primary voting (rather than party identification) to select the respondents who will be asked primary vote questions.

As should be apparent from the table, the pollsters that use the "leaned partisan" or "leaned partisan+" approaches select partisans more broadly than those that include only registered voters or those who claim to vote in primaries. But all of these approaches capture a much broader slice of the electorate than is likely to actually participate in a primary or caucus in 2008. The larger point is that most of the national pollsters are not trying to model a specific electorate -- they are mostly providing data on the preferences of "Democrats" or "Republicans" (or Democratic or Republican "voters"). I wrote about that issue and its consequences back in March.

In Part II, I will turn to statewide polls in the early primary states and then discuss what to make of it all. Unfortunately, incomplete as the information discussed above may be, the national polls look like a model of disclosure compared to what we know about most of the statewide polls.

To be continued...

 
