
The Pollster.com Disclosure Project

Topics: 2008, Iowa, Likely Voters, The 2008 Race

Over the last few months I have written a series of posts that examined the remarkably limited methodological information released about pre-election polls in the early presidential primary states (here, here and here, plus related items here). The gist is that these surveys often show considerable variation in the types of "likely voters" they select yet disclose little about the population they sample beyond the words "likely voter." More often than not, the pollsters release next to nothing about how tightly they screen or about the demographic composition of their primary voter samples.

Why do so many pollsters disclose so little? A few continue to cite proprietary interests. Some release their data solely through their media sponsors, which in the past limited the space or airtime available for methodological details (a limit now largely moot given the websites maintained by virtually all media outlets and pollsters). And while none say so publicly, my sense is that many withhold these details to avoid the nit-picking and second-guessing that inevitably come from unhappy partisans hoping to discredit the results.

Do pollsters have an ethical obligation to report methodological details about who they sampled? Absolutely (and more on that below), and as we have learned, most will disclose these details on request as per the ethical codes of the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP). Regular readers will know that we have received prompt replies from many pollsters in response to such requests (some pertinent examples here, here, here and here).

The problem with my occasional ad hoc requests is that they arbitrarily single out particular pollsters, holding their work up to scrutiny (and potential criticism) while letting others off the hook. My post a few weeks back, for example, focused on results from Iowa polls conducted by the American Research Group (ARG) that seemed contrary to other polls. Yet as one alert reader commented, I made no mention of a recent Zogby poll with results consistent with ARG. And while tempting, speculating about details withheld from public view (as I did, incorrectly, in the first ARG post) is even less fair to the pollsters and our readers.

So I have come to this conclusion: Starting today, we will formally request answers to a limited but fundamental set of methodological questions for every public poll on the primary election released, for now, in a limited set of contests: Iowa, New Hampshire, South Carolina and the nation as a whole. The first requests went out by email to the Iowa pollsters today; we will work our way through the other early states and the national polls over the next few weeks, expanding to other states as time and resources allow.

These are our questions:

  • Describe the questions or procedures used to select or define likely voters or likely caucus goers (essentially the same questions I asked of pollsters just before the 2004 general election).
  • The question that, as Gary Langer of ABC News puts it, "anyone producing a poll of 'likely voters' should be prepared to answer": What share of the voting-age population do they represent? (The specific information will vary from poll to poll; more details on that below).
  • The results of demographic questions and key attribute measures among the likely primary voter samples. In other words, what is the composition of each primary voter sample (or subgroup) in terms of gender, age, race, etc.?
  • What was the sample frame (random digit dial, registered voter list, listed telephone directory, etc.)? Did the sample frame include or exclude cell phones?
  • What was the mode of interview (telephone using live interviewers, telephone using an automated, interactive voice response [IVR] methodology, in-person, Internet, mail-in)?
  • And in the few instances where pollsters do not already provide it, what was the verbatim text of the trial heat vote question or questions?

Our goal is both to collect this information and to post it alongside the survey results on our poll summary pages as a regular, ongoing feature of Pollster.com. Obviously, some pollsters may choose to ignore some or all of our requests, but if they do, our summary table will show it. We are starting with Iowa, followed by New Hampshire, South Carolina and the national surveys, in order to keep the task manageable and to determine the feasibility of making such requests for every survey we track.

Again, keep in mind that the ethical codes of the professional organizations of survey researchers require that pollsters adequately describe both the population they surveyed and the "sample frame" used to sample it. The Code of Ethics of the American Association for Public Opinion Research, for example, lists "certain essential information" about a poll's methodology that should be disclosed or made available whenever a survey report is released. The relevant information includes:

The exact wording of questions asked . . . A definition of the population under study, and a description of the sampling frame used to identify this population . . . A description of the sample design, giving a clear indication of the method by which the respondents were selected by the researcher . . . Sample sizes and, where appropriate, eligibility criteria [and] screening procedures.

The Principles of Disclosure of the National Council on Public Polls (NCPP) and the Code of Standards and Ethics of the Council of American Survey Research Organizations (CASRO) include very similar disclosure requirements.

We should make it clear that we could ask many more questions that might help assess the quality of the survey or help identify methodological differences that might influence the results. We are not asking, for example, about response rates, the method used to select respondents within each household, the degree to which the pollster persists with follow-up calls to unavailable respondents or the times of day at which they conduct interviews. We have limited our requests to try to make it easier for pollsters to respond while also focusing on the issues that seem of greatest importance to the pre-primary polls.

What can you do? Frankly, we would appreciate your support. If you have a blog, please post something about the Pollster Disclosure Project and link back to this entry (and if you do, please send us an email so we can keep a list of supportive blogs). If not, we would appreciate supportive comments below. And of course, criticism or suggestions on what we might do differently are also always welcome.

(After the jump - a more exhaustive list of the questions that we will use to determine the percentage of the voting-age population represented by each sample)

Appendix - Primary Voter Sample as a Percentage of Adults

We aim to report the percentage of the voting-age population represented by each primary sample. In most cases, primary surveys have reported results from samples of likely Democratic and Republican primary voters, although a few have reported results for only one party.

1) If the pollster has collected and reported results from a sample of adults, this calculation is relatively easy: What is the size of each primary voter subgroup (the respondents that answered the primary vote questions for each party) as a weighted percentage of the full sample? More specifically, we need to know:

What is the weighted n of respondents asked the Democratic primary trial heat questions (or the size of the subsample as a weighted percentage of all interviews)?

What is the weighted n of respondents asked the Republican primary trial heat questions (or the size of the subsample as a weighted percentage of all interviews)?
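
To make the case 1 arithmetic concrete, here is a minimal sketch in Python. Every number in it is a hypothetical illustration (no actual poll is represented); the point is simply that the weighted n asked each party's trial heat, divided by the weighted n of all adult interviews, gives the share of adults each primary sample represents.

```python
# A minimal sketch of the case 1 arithmetic. All numbers are hypothetical
# illustrations, not figures from any actual poll.

total_weighted_interviews = 1000.0   # weighted n of all adult interviews
weighted_n_dem_trial_heat = 180.0    # weighted n asked the Democratic trial heat
weighted_n_rep_trial_heat = 150.0    # weighted n asked the Republican trial heat

dem_share_of_adults = weighted_n_dem_trial_heat / total_weighted_interviews
rep_share_of_adults = weighted_n_rep_trial_heat / total_weighted_interviews

print(f"Democratic primary sample: {dem_share_of_adults:.0%} of adults")  # 18% of adults
print(f"Republican primary sample: {rep_share_of_adults:.0%} of adults")  # 15% of adults
```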

2) If the pollster screened out some contacts in order to report results from all registered voters or all likely voters, we also need to know:

How many otherwise eligible adults were terminated on screen questions concerning vote registration, votes cast in previous general elections or intent to vote in the upcoming general election?

3) If the pollster screened for and reported the results for likely primary voters only (regardless of the type of sample used), we need to know:

How many otherwise eligible adults were terminated on screen questions concerning intent to vote in the primary election (or caucus) or votes cast in past primary elections (or caucuses)?

4) Some pollsters use "over sampling" to fill a quota of completed interviews with primary voters of each party, such as 600 Democrats and 600 Republicans. They call a random sample of all adults or all registered voters until they fill the quota for one party's primary, then screen out likely primary voters of the first party until they fill the quota for the second (in other words, once they have interviewed 600 Democrats, they terminate any additional likely Democratic primary voters until they have interviewed 600 Republicans). In this case, we need to know:

How many otherwise eligible likely primary voters were terminated due to a filled quota for primary voters, and in which party primary did they intend to vote?
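
Why do the termination counts in items 2 through 4 matter? Because when a pollster screens, the adult denominator has to be rebuilt from the completed interviews plus everyone screened out along the way. Here is a rough sketch of that accounting with entirely invented numbers; treating quota terminations as otherwise-qualified primary voters is our assumption for illustration, not a description of any pollster's procedure.

```python
# An illustrative sketch of how the counts requested in items 2 through 4
# might be combined. All figures are hypothetical, and the exact accounting
# will depend on how a given pollster structures its screens.

completes = 600              # completed likely primary voter interviews (one party's quota)
screened_out_general = 250   # item 2: failed registration / general election screens
screened_out_primary = 400   # item 3: failed primary intent / past primary vote screens
quota_terminations = 50      # item 4: qualified primary voters turned away after the quota filled

# Otherwise-eligible adult contacts: completed interviews plus everyone
# terminated at one of the screens along the way.
adult_contacts = completes + screened_out_general + screened_out_primary + quota_terminations

# Qualified likely primary voters include the quota terminations, since they
# would have been interviewed had the quota not already been filled (our assumption).
qualified_primary_voters = completes + quota_terminations

print(f"Likely primary voters: {qualified_primary_voters / adult_contacts:.0%} of adult contacts")
```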

5) If the pollster sampled from a list of registered voters, we also need to know the size of the target population they sampled in order to determine the percentage of the voting-age population represented by each primary sample.

If the pollster drew a sample of all registered voters or all registered partisans, the size of the target population should be available through the statistics reported by the official registrar of voters (usually the Secretary of State) in each state.

If the pollster used prior vote history or some other criteria to narrow the target population, its size will not be available through public sources. If so, we need to know the size (or "count") of the sampled target population as determined by the sample vendor.
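
For case 5, the survey data alone are not enough; the calculation has to pull in the size of the sampled list and the state's voting-age population from outside sources. A sketch with invented figures shows how the pieces would fit together.

```python
# A hypothetical sketch of the case 5 calculation, where the sample comes from
# a registered voter list. None of these figures describe any actual state,
# list or vendor; they only show how the pieces fit together.

voting_age_population = 2_300_000  # e.g., a Census estimate of adults in the state
target_list_count = 550_000        # size of the sampled list, per the registrar or sample vendor
share_passing_screens = 0.40       # weighted share of the list qualifying as likely primary voters

share_of_vap = (target_list_count * share_passing_screens) / voting_age_population
print(f"Primary sample represents roughly {share_of_vap:.0%} of the voting-age population")
```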

 

Comments

HOW LONG ARE YOU GOING TO IGNORE RON PAUL??????????

ARE YOU SURE YOUR SITE IS BEING FAIR!!!!!!!!!!!

____________________

This is fabulous!--about as close to primary source material as a student of survey research can get, short of interning with you. I've blogged it, so all 5 of MY readers will know about it.

____________________

Matthew:

This is going to be extremely valuable information, I think. For one thing, it will help further elucidate the question of whether younger people are being polled as effectively as older ones. As I understand it, that is one of the central arguments about the polls and the under-representation of support for one campaign in particular. Also, with some analysis, which I'm not sure I'm able to do (from a technical standpoint), it would be possible to do some comparisons with recent data about actual voting trends. While there's a bit of an apples-and-oranges comparison there, it would help enormously to see whether the core data shows things to be way off the scale.

Lastly, this is just the right thing for these groups to do. Thanks for doing this.

____________________

It sounds like the sort of disclosure we're used to as a matter of course from pollsters in the UK is a bit of a novelty in the US. Best of luck with this - it certainly is good news in the UK to get decent details automatically on all reputable polls.

____________________

Thanks for doing this. This is a crucial project. Too few decisions of political significance are made on the basis of a careful analysis of what we know and what we don't know.

____________________

Jack:

This is crucial. I'll be here for every poll that comes out.

____________________

Michael:

Fantastic idea from a fantastic blog. Keep up the good work.

ps. Toby Mickle, they're not ignoring Ron Paul, your question was answered a month ago.

____________________

Willie:

Great Idea Mark, this is a public service for all the political junkies, scientists, and professionals who digest this information on a daily basis.

I appreciate what you're doing. Keep up the good work!

____________________

julie:

FINALLY!!!

Thank you for finally addressing the things that make so many polls seem so biased. I have long been asking, "How can the polls be accurate if they can't call cell phones?"

THANK YOU.

____________________

You may also want to ask about preset quotas for interviewing. Some pollsters use them instead of weighting, and I didn't see it in your list.

If not monitored properly, quotas can lead to skew in underlying demos, like a bunch of old dems, young indies, and middle age Republicans, even while the overall demos look fine.

____________________

stlounick:

This is crucial information and thank you for requesting it. It will make the public polls much more meaningful to the voters.

____________________

ohiomeister:

Yes, please.

Thank you!

____________________

Nancy Mathiowetz:

As President of the American Association for Public Opinion Research, I believe that this is an excellent opportunity for public opinion researchers to help improve the public understanding of polling methodology and interpretation.

Pollster.com is to be applauded for this effort.


More information from AAPOR about disclosure:

http://aapor.org/disclosuresfaqs


____________________

You don't need yet another person to praise you for this effort, but I will--great work.

Would you consider omitting entirely the polls that fail to disclose methodology? The absence of methodology is in some ways worse than useless.

____________________

Katie Robinette:

I feel like I am back in University sitting in my Statistical Analysis Poli Sci course.

I went to Carleton and my prof was Derryl Bricker, VP of Ipsos Reid. So I had a great prof, and Ipsos always outlines how they conduct the surveys they publish.

In fact, they did our polling when I was at the Ontario Medical Association.

One question: Do we really want two pages of print per ad to explain such info, as the pharmaceutical companies now have to do? Do we want a great Viagra ad ruined by saying "stop taking this drug if your erection lasts longer than 4 hours"? At some point, the consumer should be responsible for finding this information out on their own.

If I were a campaign manager who hired a polling company, chances are we would be doing a poll in an area where fantastic ads are running and our ground team is very strong.

If I were then to run an ad based on our strong poll results, would I want to be forced to pay for air time explaining that this was such an insignificant sample as to nullify the poll? Why poll if you can't skew the results?

Let the competition tear it apart if they want (like the American Medical Association could, and/or other Viagra companies - to come back to that example)?

OK - so there are huge liability issues regarding the drug companies and the doctors who prescribe them...but how do you really want to increase the liability costs in spin doctoring??

If a campaign is bragging about a recent poll, the onus should be on the voter (yes, yes, personal responsibility is a drag) to go on the pollster's website and look up that info.

Just rambling thoughts.

____________________


