Pollster.com

Disclosure

 

AAPOR Adds Transparency Initiative Endorsements


And speaking of AAPOR's Transparency Initiative, the organization announced via email yesterday the names of 11 more survey organizations that recently pledged their support for the evolving program:

  • Elon University Poll
  • The Elway Poll
  • Magellan Data and Mapping Strategies
  • Monmouth University Polling Unit
  • Muhlenberg College Institute of Public Opinion
  • NORC
  • Public Policy Institute of California
  • Quinnipiac University Poll
  • University of Arkansas at Little Rock Survey Research Center
  • University of Wisconsin Survey Center
  • Western New England College Polling Institute

Among the new names, the two most notable for regular readers are probably Quinnipiac University, the pollster active in many important races in 2010, and Magellan Data and Mapping Strategies, a relatively new firm that has released mostly automated pre-election polls in recent months. When newcomers like Magellan choose to endorse the Transparency Initiative, it's apparent that the "carrot" that past AAPOR President Peter Miller hopes to offer participating pollsters as an incentive is beginning to work.

The new names bring the total number of participants up to 44. When the initiative is launched in about a year, participating pollsters will routinely release essential facts about their methodology and deposit that information in a public data archive. AAPOR has also posted an update on its work-in-progress on the initiative. I wrote about it in more detail here.


Transparency and Pollster Ratings: Update


[Update: On Friday night, I linked to my column for this week, which appeared earlier than usual. It covers the controversy over Nate Silver's pollster ratings, and an exchange last week between Silver, Political Wire's Taegan Goddard and Research 2000's Del Ali over the transparency of the FiveThirtyEight pollster ratings. In linking to the column, I also posted additional details on the polls that Ali claimed Silver had missed and promised more on the subject of transparency that I did not have a chance to include in the column. That discussion follows below.]

Although my column discusses the transparency of the database Nate Silver created to rate pollster accuracy, it does not address transparency with regard to the details of the statistical models used to generate the ratings.

When Taegan Goddard challenged the transparency of the ratings, Silver shot back that the transparency is "here in an article that contains 4,807 words and 18 footnotes," and explains "literally every detail of how the pollster ratings are calculated."

Granted, Nate goes into great detail describing how his rating system works, but several pollsters and academics I talked to last week wanted to see more details of the model and the statistical output in order to better evaluate whether the ratings perform as advertised.

For example, Joel David Bloom, a survey researcher at the University at Albany who has done a similar regression analysis of pollster accuracy, said he "would need to see the full regression table" for Silver's initial model that produces the "raw scores," a table that would include the standard error and level of significance for each coefficient (or score). He also says he "would like to see the results of statistical tests showing whether the addition of large blocks of variables (e.g., all the pollster variables, or all the election-specific variables) added significantly to the model's explanatory power."

Similarly, Clifford Young, pollster and senior vice president at IPSOS Public Affairs, said that in order to evaluate Silver's scores, he would "need to see the fit of the model and whether the model violates or respects the underlying assumptions of the model," and more specifically, "what's the equation, what are all the variables, are they significant or aren't they significant."
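To make the kind of output Bloom and Young describe more concrete, here is a minimal sketch -- in Python, using hypothetical data and variable names, not Silver's actual model or database -- of what a full regression table and a block test of explanatory power look like in practice:

```python
# Illustrative sketch only: hypothetical columns and model specification,
# not the actual FiveThirtyEight pollster-ratings model.
import pandas as pd
import statsmodels.formula.api as smf

polls = pd.read_csv("polls.csv")  # hypothetical file: one row per poll

# Baseline model: poll error explained by election-specific factors only.
base = smf.ols(
    "error ~ sample_size + days_to_election + C(election_type)",
    data=polls,
).fit()

# Expanded model: add the block of pollster indicator variables.
full = smf.ols(
    "error ~ sample_size + days_to_election + C(election_type) + C(pollster)",
    data=polls,
).fit()

# The "full regression table" Bloom describes: every coefficient with its
# standard error and significance level.
print(full.summary())

# The block test he mentions: does adding all the pollster variables
# significantly improve the model's explanatory power?
f_stat, p_value, df_diff = full.compare_f_test(base)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, df = {df_diff:.0f}")
```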

I should stress that no one quoted above doubts Silver's motives or questions the integrity of his work. They are, however, trying to understand and assess his methods.

I emailed Silver and asked both about estimates of the statistical uncertainty associated with his error scores and about his decision not to provide more complete statistical output. On the "margin of error" of the accuracy scores, he wrote:

Estimating the errors on the PIE [pollster-introduced error] terms is not quite as straightforward as it might seem, but the standard errors generally seem to be on the order of +/- .2, so the 95% confidence intervals would be on the order of +/- .4. We can say with a fair amount of confidence that the pollsters at the top dozen or so positions in the chart are skilled, and the bottom dozen or so are unskilled i.e. "bad". Beyond that, I don't think people should be sweating every detail down to the tenth-of-a-point level.

In a future post, I'm hoping to discuss the ratings themselves and whether it is appropriate to interpret differences in the scores as indicative of "skill" (short version: I'm dubious). Today's post, however, is about transparency. Here is what Silver had to say about not providing full statistical output:

Keep in mind that we're a commercial site with a fairly wide audience. I don't know that we're going to be in the habit of publishing our raw regression output. If people really want to pick things apart, I'd be much more inclined to appoint a couple of people to vet or referee the model like a Bob Erikson. I'm sure that there are things that can be improved and we have a history of treating everything that we do as an ongoing work-in-progress. With that said, a lot of the reason that we're able to turn out the volume of academic-quality work that we do is probably because (ironically) we're not in academia, and that allows us to avoid a certain amount of debates over methodological esoterica, in which my view very little value tends to be added.

To be clear, no one I talked to is urging FiveThirtyEight to start regularly publishing raw regression output. Even in this case, I can understand why Silver would not want to clutter up his already lengthy discussion with the output of a model featuring literally hundreds of independent variables. However, a link to an appendix in the form of a PDF file would have added no clutter.

I'm also not sure I understand why this particular scoring system requires a hand-picked referee or vetting committee. We are not talking about issues of national security or executive privilege.

That said, the pollster ratings are not the fodder of a typical blog post. Many in the worlds of journalism and polling are taking these ratings very seriously. They have already played a major role in getting one pollster fired. Soon these ratings will appear under the imprimatur of the New York Times. So with due respect, these ratings deserve a higher degree of transparency than FiveThirtyEight's typical work.

Perhaps Silver sees his models as proprietary and prefers to shield the details from the prying eyes of potential competitors (like, say, us). Such an urge would be understandable but, as Taegan Goddard pointed out last week, also ironic. Silver's scoring system gives bonus accuracy points to pollsters "that have made a public commitment to disclosure and transparency" through membership in the National Council on Public Polls (NCPP) or through commitment to the Transparency Initiative launched this month by the American Association for Public Opinion Research (AAPOR), because, he says, his data show that those firms produce more accurate results.

The irony is that Silver's reluctance to share details of his models may stem from some of the same instincts that have made many pollsters, including AAPOR members, reluctant to disclose more about their methods or even to support the Transparency Initiative itself. Those instincts are what AAPOR's leadership hopes to change with its Initiative.

Last month, AAPOR's annual conference included a plenary session that discussed the Initiative (I was one of six speakers on the panel). The very last audience comment came from a pollster who said he conducts surveys for a small midwestern newspaper. "I do not see what the issue is," he said, referring to the reluctance of his colleagues to disclose more about their work, "other than the mere fact that maybe we're just so afraid that our work will be scrutinized." He recalled an episode in which he had been ready to disclose methodological data to someone who had emailed with a request but was stopped by the newspaper's editors, who were fearful "that somebody would find something to be critical of and embarrass the newspaper."

Gary Langer, the director of polling at ABC News, replied to the comment. His response is a good place to conclude this post:

You're either going to be criticized for your disclosure or you're going to be criticized for not disclosing, so you might as well be on the right side of it and be criticized for disclosure. Our work, if we do it with integrity and care, will and can stand the light of day, and we speak well of ourselves, of our own work and of our own efforts by undertaking the disclosure we are discussing tonight.


Transparency and Pollster Ratings


My column for next week has been posted a little earlier than usual. It covers the controversy over Nate Silver's pollster ratings, and a bloggy exchange over the last day or two between Silver, Political Wire's Taegan Goddard and Research 2000's Del Ali over the transparency of the FiveThirtyEight pollster ratings. I have a few important footnotes and another aspect of transparency to review, but real life intrudes. So please click through and read it all, but come back to this post later tonight for an update.

***

I'm going to update this post in two parts. First I want to add some footnotes to the column, which covers the questions that have been raised about the database of past polls that Nate Silver created and used to score pollsters. The second part will discuss the transparency regarding additional aspects of Nate's model and scoring.

I want to emphasize that nothing I learned this week leads me to believe that Silver has intentionally misled anyone or done anything intentionally sinister. I have questions about the design and interpretation of the models used to score pollsters, and I wish he would be more transparent about the data and mechanics, but these are issues of substance. I'm not questioning his motives.

So on the footnotes: Earlier today, Del Ali of Research 2000 sent us a list of 12 of his poll results he claimed that Silver should have included in his database and 2 more that he said were in error. Later in the morning he sent one more omitted result. We did our best to review that list and confirm the information provided. Here is what we found.

First, the two polls included in Silver's database with errors:

  • 2008-FL President (10/20-10/22) - Error (+3 Obama not +4)
  • 2008-ME President (10/13-10/15) - Error (+17 Obama not +15)

These are both relatively small errors, and we noticed that the apparent mistake on the Maine poll was also present in the DailyKos summary of the poll published at the time.

There were four more polls in the omitted category that were either conducted more than 21 days before the election (Hawaii and the Florida House race) or outside the range of races that Silver said he included (he did not include any gubernatorial primaries before 2010). [Correction: We overlooked that the NY-23 special election was omitted intentionally because of Silver's criterion of excluding races "where a candidate who had a tangible chance of winning the election drops out of it prematurely."]

  • 2010-HI-01 Special Election House (4/11-4/14)
  • 2006-FL-16 House (10/11-10/13)
  • 2002-IL Dem Primary Governor (3/11-3/13)
  • 2009-NY-23 Special (10/26-28)

Some may quarrel with Silver's decisions about the range of dates he sets as a cut-off, and I'm hoping to write more about that aspect of his scoring system. But as long as Silver applied his stated rules consistently, these examples do not qualify as erroneous omissions.

That leaves ten more Research 2000 polls that appear to be genuine omissions in the sense that they meet Silver's criteria but were not included in the database (nine, once the NY-23 special election is set aside per the correction above):

  • 2000-IN President (10/28-10/30)
  • 2000-NC President (10/28-10/30)
  • 2000-NC Governor (10/28-10/30)
  • 2002-IN-02 House (10/27-10/29)
  • 2004-IA-03 (10/25-10/27)
  • 2004-NV Senate (10/19-10/21)
  • 2008-ID Senate (10/21-10/22)
  • 2008-ID-01 (10/21-10/22)
  • 2008-FL-18 (10/20-10/22)
  • 2009-NY-23 Special (10/26-28)

Do these omissions indicate sloppiness? We were able to find the NY-23 special election results on Pollster.com and elsewhere, the 2004 Nevada Senate and 2002 Indiana House polls on the Polling Report, and the Iowa 3rd CD poll from 2004 via a Google search at KCCI.com. So those examples should have been included but were not.

However, we could not find the 2000 North Carolina poll anywhere except the subscriber-only archives of The Hotline (although, oddly, with different field dates: 10/30-31). The Hotline database is not among Silver's listed resources.

We also checked the three results (from two polls) missing for 2008 and found they were also missing from the compilations published by our site, RealClearPolitics and the Polling Report during the campaign (though we did find mention of the Idaho poll on Research2000.com). We could not find the Indiana presidential result from 2000 anywhere.

The point of all of this is that only a small number of examples qualify as mistakes attributable to Silver's team. Most of the other oversights were also made by their sources. And even if we correct all of the errors and include all of the inside-the-21-day-window omissions, it changes the average error for Research 2000 hardly at all (as summarized in the column [and leaving out NY-23 does not change the average error]). These examples still represent imperfections in the data that should be corrected, and we can assume that more exist for the other pollsters. As argued in the column, I'm all for greater transparency. But if you are looking for evidence of something "sinister," it just isn't there.

We created a spreadsheet that includes both the original list of Research 2000 polls included in the FiveThirtyEight database and a second tab that includes the corrections and appropriate omissions. It is certainly possible that our spreadsheet contains errors of its own, so in the spirit of transparency, we've made it available for download. Feel free to email us with corrections.
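For readers who want to check the arithmetic themselves, here is a minimal sketch of how one could recompute the average error from the two tabs of such a spreadsheet. The tab and column names below are hypothetical assumptions; the actual file may be organized differently.

```python
# Sketch only: assumes hypothetical tab names ("original", "corrected") and
# columns "poll_margin" and "actual_margin"; the real spreadsheet may differ.
import pandas as pd

tabs = pd.read_excel("r2000_polls.xlsx", sheet_name=None)  # read every tab

def average_error(df):
    # Average absolute difference between the poll's margin and the outcome.
    return (df["poll_margin"] - df["actual_margin"]).abs().mean()

print("Average error, original list:  %.2f" % average_error(tabs["original"]))
print("Average error, corrected list: %.2f" % average_error(tabs["corrected"]))
```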

[I corrected a few typos and cleaned up one mangled sentence in the original post above -- Part II of the update coming over the weekend.]

Update (6/14): Since I did not finish the promised update until Monday afternoon, I posted it as a separate entry. Please click through for more.


Column: AAPOR's Transparency Initiative


My new column reviews AAPOR's Transparency Initiative as described in detail this past weekend by outgoing AAPOR President Peter Miller. I hope you'll click through and read it all. It may not be quite as consequential as health care reform, but in the polling world it has the potential to be a very big deal (to paraphrase our Vice President).

This column follows up on two items posted last week: The first reviews the rationale for the initiative, and the second features a video interview with Miller and includes the full list of participants "so far." Regular readers will know that AAPOR's initiative jibes neatly with my own ongoing interest in improving disclosure of polling methodology (discussed most completely here).


AAPOR's Transparency Initiative


Later this week, I'll be in Chicago for the annual conference of the American Association for Public Opinion Research (AAPOR). One of the more newsworthy aspects of this year's conference is the "Transparency Initiative" of AAPOR's current president, Peter Miller.

Until this week, the initiative has been mostly an idea, born out of AAPOR's recently higher profile in publicizing "the failure of survey organizations to be open about their research methods," as Miller put it last fall. Regular readers may recall AAPOR's work to investigate the polling failures in New Hampshire and elsewhere during the 2008 presidential primaries, and its recent public censure of two non-members for failing to disclose basic facts about their methodologies: Dr. Gilbert Burnham, regarding research he published on civilian deaths in Iraq, and Strategic Vision, LLC, regarding pre-election polling data they released in 2008.

Last fall, Miller concluded that AAPOR's efforts to date had been inadequate. "Despite decades of work," he wrote, "transparency in public opinion and survey research remains an elusive goal." The investigation of the polls in New Hampshire in 2008 was a focal point:

AAPOR's Ad Hoc Committee that studied pre-primary polls in the winter and spring of 2008 intended to release its report in time for our annual meeting in May of that year. The members of the committee hoped their findings would inform polling practice in the general election. Instead, the committee issued its report in April 2009, about a year late, because many organizations that published pre-primary poll results took so long providing methodological information. In the end, the committee had to publish its findings based on partial data.

It is obvious that if an AAPOR committee cannot efficiently gather methodological information for a report commissioned in the aftermath of a significant polling failure (in New Hampshire in 2008), then transparency is not the guiding norm that it should be in our profession.

So Miller proposed that AAPOR follow a different course. Rather than focus solely on violations of the AAPOR ethical code, Miller proposed to create positive incentives, to "give AAPOR's stamp of approval to survey organizations for timely and complete methodological disclosure." Toward that end, he also proposed to create an AAPOR-administered archive -- a "system for collecting and storing disclosed information in one place" -- and to "provide education and assistance" to survey organizations that pledge to routinely deposit information about their surveys to that archive.

This morning, Miller gave a hint of what is coming later this week. He sent an email message to the entire AAPOR membership, urging members "in a position to decide whether your survey organization can participate...to join the initiative." Miller writes that he plans "to publicize the names of organizations that have agreed to help during my Presidential Address" on Friday. He adds:

This is a big commitment for AAPOR, maybe the biggest thing that the Association has ever tried to do. At the same time, it appears that such a program is essential at this time when the status and credibility of our profession is under unprecedented threat. It can move AAPOR from an occasional, largely ineffective re-actor in the realm of survey standards to a proactive positive force for the profession. And it can coalesce polling and survey organizations around a common goal of openness and integrity.

Regular readers will know that Miller's initiative jibes neatly with my own ongoing interest in improving disclosure of polling methodology (discussed most completely here). As such, it should come as no surprise that I strongly support this initiative. Among other things, I will be a participant along with Miller in the opening plenary session on Thursday night. I look forward to reporting more details about Miller's transparency initiative, and about the rest of the AAPOR conference. This week especially, you will want to stay tuned...

[Interests disclosed: I served on AAPOR's Executive Council from 2006 to 2008]


Re: Automated or Not?


I received the following email from InsiderAdvantage CEO Matt Towery in response to today's column:

Mark, I take criticism now constructively and we will do more to make clear we use IVR. Out of a sense of equal fairness would you share with your readers that PPP, I assume using a phone room, had virtually the same numbers in Crist-Rubio one day before we released ours? Transparency should flow both ways, don't you agree? All my best Matt

Consider that point shared, along with links to the releases by PPP and InsiderAdvantage that have been posted all along on our Florida Governor Republican primary chart.

And for the record, PPP also uses an automated methodology, not a "phone room," though Towery's observation raises a fair point about PPP: Their blog posts and releases rarely disclose that their surveys use an automated methodology, although their embrace of that technology is no mystery. If nothing else, their web site's mission page makes it crystal clear.


Minimal Disclosure and Pollster.com


My column today concludes with the argument that news media outlets, including Pollster.com, need to do a better job holding pollsters to the minimal disclosure standards set by organizations like the National Council on Public Polls (NCPP). What follows are some thoughts about how we plan to do better on that score here at Pollster.com.

One challenge we have confronted in recent months is what to do about polls released by organizations that are either newly formed or that have not previously released surveys on the campaigns we track. We saw that happen in the special election for U.S. Senate in Massachusetts, and given the emergence of vendors offering to conduct automated surveys for less than a thousand dollars, we will likely see many more such polls over the next six months.

So as a first step, starting today, when we encounter polls from a new organization (or an organization that is new to us), we are going to require that their publicly accessible reports meet all of NCPP's minimal (Level 1) disclosure requirements before including their results in our charts and tables:

Level 1 Disclosure: All reports of survey findings issued for public release by a member organization will include the following information:

  • Sponsorship of the survey
  • Fieldwork provider (if applicable)
  • Dates of interviewing
  • Sampling method employed (for example, random-digit dialed telephone sample, list-based telephone sample, area probability sample, probability mail sample, other probability sample, opt-in internet panel, non-probability convenience sample, use of any oversampling)
  • Population that was sampled (for example, general population; registered voters; likely voters; or any specific population group defined by gender, race, age, occupation or any other characteristic)
  • Size of the sample that serves as the primary basis of the survey report
  • Size and description of the subsample, if the survey report relies primarily on less than the total sample
  • Margin of sampling error (if a probability sample)
  • Survey mode (for example, telephone/interviewer, telephone/automated, mail, internet, fax, e-mail)
  • Complete wording and ordering of questions mentioned in or upon which the release is based
  • Percentage results of all questions reported

Member organizations reporting results will endeavor to have print and broadcast media include the above items in their news stories.

Note that the last sentence provides something of a loophole: Disclosure of the specified information is required in "all reports of survey findings issued for public release," but not necessarily in newspaper and television stories based on those reports. Since virtually every pollster or sponsoring news organization now maintains some sort of web site, we will interpret the rule to mean that while news stories may not disclose all of this detail, the pollster needs to make a more complete report available somewhere on the web.

Discerning readers will immediately see some big shortcomings in this first step. Let's consider the most obvious:

1) It's not fair. Many polls that Pollster.com currently publishes fall short of meeting NCPP's minimal disclosure guidelines.

True. Exhibit A, as reported in today's column, is Insider Advantage, a pollster that almost never discloses their survey mode in their public reports. But we don't have to stop there. Other items on the NCPP list that many pollsters frequently neglect to disclose include the sampling method (or "frame"), the fieldwork provider and -- all too often -- the complete wording and ordering of survey questions.

However, given how little the NCPP code requires, these are shortcomings that pollsters can easily correct going forward. The survey mode, sample frame and fieldwork provider can be specified in just a sentence or two. And how hard is it to post the complete text and order of survey questions in the form of a PDF on a web site?

To address the inconsistency of applying this rule to some pollsters but not others, I pledge a second step: Over the next month or so, we will examine all of the polls published in Pollster.com charts over the last year to determine more precisely how many pollsters are falling short on the NCPP standards. We will report those findings here and, at that point, consider whether any pollsters merit a "delisting."

2) That's a weak standard. Shouldn't pollsters disclose more about their work?

Absolutely. I am certainly on record asking pollsters to disclose much more, especially with respect to party identification, and the demographics and mechanics of "likely voter" samples. Back in August, I called for a system of scoring the quality of disclosure based on much more than the NCPP Level 1 information.

Also, the American Association for Public Opinion Research (AAPOR) is currently in the process of revising their own disclosure guidelines. Their proposed minimal disclosure standards mandate a few things that NCPP's standards do not, including "a description of the variables used in any weighting or estimating procedures" and the name of the supplier that provided the survey sample.

So I'll pledge two additional steps: First, in examining the polls we have published over the last year, we will also look at whether pollsters are meeting AAPOR's minimal standards and consider whether to require that polls meet both the AAPOR and NCPP minimal standards.

Second, we will gather whatever methodological details pollsters have published, including those listed in NCPP's Level 2 and Level 3 disclosure and the items that AAPOR's proposed code asks pollsters to make available after 30 days.

Again, my ultimate goal is to move toward using all of this information to score the quality of disclosure of public polls. The steps described above will move us in that direction.

3) But disclosure isn't quality. A pollster could tell you everything you want to know about a crappy poll, and it would still be a crappy poll.

Unfortunately, that's mostly true. There is probably some correlation between a pollster's ability to answer basic questions about their methodology and the quality of their work. It is hard to have much confidence in a pollster that will not describe their sampling frame or weighting variables or that cannot release a disposition report on the numbers dialed.

But I won't quarrel with the basic point: Disclosure is not quality. The unfortunate problem is that pollsters have a very hard time agreeing among themselves about what defines a quality poll. If we want to make judgments about survey quality, full disclosure is a necessary prerequisite. When a survey's methodology is a mystery, it is much harder to conclude much of anything about its quality.

So we'll start by asking newcomer pollsters to meet the NCPP minimal standards, but that's just a start.


Automated or Not?


My National Journal column for this week looks at the failure of one pollster, Insider Advantage, to disclose whether it uses live interviewers or an automated method in its reports and the resulting consequences.

I have more to add on this topic -- please check back later today.

Update: I posted thoughts on how we plan to do better at holding pollsters to minimal standards for disclosure here at Pollster.com.

Update II: A response from InsiderAdvantage CEO Matt Towery.


Strategic Vision: Back, But Not Here


They're back. As reported yesterday by Politico, Strategic Vision, LLC posted results** from what they claim is a survey of Georgia. As per our previous entries on this subject (here and here), we will no longer publish their results as "poll updates" or in our poll charts. Yesterday's release does virtually nothing to answer the questions raised by well over 200 purported surveys released by Strategic Vision since 2004. It also falls well short of the minimal standards of disclosure that got the company into trouble in the first place.

For the uninitiated, the saga began with a rare censure by the American Association for Public Opinion Research (AAPOR) last fall resulting from Strategic Vision's failure to comply with requests for information about their response rates and weighting procedures -- information that 21 other organizations provided upon request in connection with AAPOR's investigation of the primary election polls of 2008.

Following AAPOR's action, blogger Nate Silver raised the possibility of fraud and subsequently found a pattern in the trailing digits of the percentages reported in Strategic Vision polls suggesting a "possibility of fraud." Michael Weissman, a retired professor of physics at the University of Illinois and frequent commenter on Silver's site, did some additional number crunching (a Fourier analysis) and concluded that the odds were 1 in 5,000 that the pattern in Strategic Vision's results could have been produced by chance alone.
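For those curious about what a trailing-digit test involves, here is a generic sketch: a simple chi-square check of digit frequencies, under the naive equal-frequency assumption that Weissman's analysis relaxed. It illustrates the general idea only, and is not a reproduction of Silver's or Weissman's actual calculations.

```python
# Generic illustration of a trailing-digit check, not the actual analyses.
from collections import Counter
from scipy.stats import chisquare

# Hypothetical input: every percentage reported in a pollster's releases.
reported = [48, 43, 52, 37, 41, 56, 49, 44, 38, 45, 51, 47, 39, 42, 58, 46]

# Tally how often each trailing digit (0 through 9) appears.
counts = Counter(value % 10 for value in reported)
observed = [counts.get(digit, 0) for digit in range(10)]

# Compare against the expectation that each digit appears about equally
# often; a very small p-value would flag an unusual digit pattern.
stat, p_value = chisquare(observed)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```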

The issues raised by Silver and Weissman were highly technical and difficult for mathematical mortals to evaluate, but even more troubling was Strategic Vision's strange pattern of half-truths and evasion. Commenters on FiveThirtyEight discovered that the four offices listed on the Strategic Vision web site were UPS store mailboxes. In the wake of the initial stories, Strategic Vision CEO David Johnson announced to at least five news organizations that he would soon take legal action against AAPOR and Silver. He promised to release additional subgroup tabulations of the contested data. None of this ever happened.

"We intend to vindicate ourselves," Johnson told Politico a few days after the AAPOR Censure. If his surveys were real, if they had been conducted by live interviewers at actual call centers, Johnson should have a wealth of evidence at his disposal to silence his critics. The public polls released by Strategic Vision since 2004 (archived here and here by Harry Enten) add up to more than 200,000 interviews. That many interviews would leave a lot of witnesses: Call center managers, supervisors, probably hundreds of interviewers, any one of which could come forward to vouch for the process that produced the numbers. And there should be electronic records of the actual survey data somewhere -- at least for the most recent projects. Why hasn't Strategic Vision taken any steps to present some of this evidence and vindicate themselves?

Strategic Vision stopped releasing public polls in September and went dark until this week. Yesterday, Johnson tried to explain why. Here's what he told Politico:

[Johnson said] that the lull in his firm's work had represented a deliberate choice to take some time off in the light of the allegations and let the scrutiny subside. He also said a family illness prevented him from polling the Georgia gubernatorial race earlier in the year.

"Some of the stuff was getting to me. I felt it was best to take some time off," Johnson said. "You know the old adage - lawyers should never defend themselves. I should never try to be my own PR person."

He also told Atlanta Journal Constitution columnist Jim Galloway that his threat of a libel suit "was me speaking in anger because I was really outraged at the time."

Yesterday's release includes two new twists. For the first time, Strategic Vision sent Galloway and other reporters a set of tables in a compressed file showing results tabulated by gender, age, race and income, and all of their percentages are computed to one decimal point. It is more likely, for what that's worth, that these results are based on some sort of interview data.

Mark Grebner, a Michigan-based, Democratic political consultant, left this comment on Pollster.com last month when Johnson started promising new surveys:

I've got a counter-intuitive guess: maybe SV-LLC will start doing real polling. It's not hard, it doesn't cost much money, and it would serve more than one purpose.

One thing is that it would restore their status as serious participants in conservative politics. A second benefit is that it would undermine the case against them, at least in the public's mind.

Maybe they'll never release another result, but if they do, I'd guess it would be genuine.

Doesn't affect the utter bogusness of everything they've done to date, of course.

He's right that new surveys, even if "genuine," do nothing to resolve the serious questions raised about Strategic Vision's previous work.

But let's ponder the meaning of "genuine" as we consider what Strategic Vision's latest release does not tell us: They say nothing about the mode of the survey (whether it used live interviewers or some automated method), the sample frame (whether telephone numbers were selected from some sort of list or via a random digit method), or the weighting procedure (whether results were weighted and the variables used to weight them), and they do not identify who conducted the survey (the call center or fieldwork provider, if they used one). These basic facts are part of the minimal disclosure requirements of both AAPOR and the National Council on Public Polls (NCPP).

NCPP also requires that its members describe the "size and description of the subsample, if the survey report relies primarily on less than the total sample." It is not clear whether NCPP's mandate applies to cross-tabulations, but it is very clear that Strategic Vision's tables provide no information about the size of each demographic subgroup.

Both organizations also mandate that releases tell us, "who paid for the poll?" Strategic Vision's release says nothing about how this poll was paid for and, as an alert Pollster reader informs me, fails to disclose a significant conflict of interest: A search of Georgia campaign finance records shows that Strategic Vision was paid $3,500 to conduct a poll in 2009 for Ralph Hudgens, a candidate in the Republican primary contest for Insurance Commissioner tested in the new survey.

So again, for all of these reasons, we will no longer publish results by Strategic Vision, LLC on Pollster.com. But that raises a much bigger problem: Strategic Vision is not the only polling organization that has fallen far short of the minimal disclosure requirements of organizations like AAPOR and NCPP, and their results do appear on Pollster.com. That shortcoming is something I want to discuss at greater length this week. Stay tuned.

**For what it's worth: That link and the rest of the Strategic Vision, LLC web site remain inaccessible to computers in our offices and to our colleagues at the National Journal Group and Atlantic Media.


Conflicts at UW-Madison


On Sunday, the Associated Press published a lengthy report on a controversy brewing at the University of Wisconsin-Madison that involves some friends of Pollster.com.

What AP reporter Ryan Foley describes as a "fiasco" involves a year-old agreement between the University and the Wisconsin Policy Research Institute (WPRI), a conservative think-tank, to conduct statewide polls this year in partnership with the University. Under the agreement, WPRI would help fund statewide polling, including a $13,000 contract with UW political scientist Ken Goldstein. According to the AP report, however, the University never had a formal contract with WPRI. And then there are these details uncovered by a liberal activist:

Scot Ross, a liberal muckraker who runs the group One Wisconsin Now, was critical of the deal from the beginning. He said his "worst fears were confirmed" after he obtained e-mails under the open records law showing WPRI President George Lightbourn lobbied Goldstein to publicize results from one question in a way favorable to its agenda.

The question asked whether government funding should be used for school vouchers, which WPRI supports. A majority of residents statewide were opposed, but those surveyed from Milwaukee County were in favor.

Lightbourn wrote Goldstein he was concerned critics would portray the data as showing a lack of support for vouchers and asked for the Milwaukee County results to be emphasized. The university's press release read: "School choice remains popular in Milwaukee."

The AP story -- which is well worth reading in full -- includes complete details plus a reaction from Goldstein, who says he is "stunned, flabbergasted, amazed -- every single adjective you can come up with" at the criticism he has received.

Our own interests in this story are as follows: Pollster.com co-creator and contributor Charles Franklin is a member of the UW-Madison political science department and a friend and colleague of Goldstein but, he tells me, was not personally involved in the WPRI polling. Also, well before the WPRI polling project, my assistant Emily Swanson worked for Goldstein as an undergraduate at UW-Madison.

If nothing else, this episode demonstrates the increasing difficulty consumers of polling data have in identifying potential conflicts in the sponsorship and funding of public polling. Simply identifying polls sponsored by a political campaign or political action committee or conducted by a campaign pollster -- something we try to do on Pollster.com -- is obviously not enough. In this case, a University of Wisconsin news release billed WPRI as a "non-partisan, non-profit think tank [that] has been conducting independent, annual polls on politics and issues for more than 20 years." Yet, as the AP report put it, the Institute acknowledges a "free-market, limited government slant and receives funding from the Bradley Foundation, a Milwaukee group that supports numerous conservative causes."


Stranger and Stranger


The strange saga of Strategic Vision, LLC and the continuing promises of its CEO David Johnson to produce data or new surveys continues. The latest installment: Johnson appeared on a conservative talk radio show in Wisconsin promising a new survey there in a few weeks.

The relevant background: Back in September, following a censure from the American Association for Public Opinion Research (AAPOR) and accusations of fraudulent data from blogger Nate Silver, Johnson threatened to sue everyone in sight to clear his name and promised to release cross-tabular data that reporters had requested. Five months later, as far as I know, no one has been sued, and no crosstabs have been released.

Last month, Johnson surfaced long enough to inform a columnist from the Savannah Morning News that he planned to conduct a Georgia survey during January. As of this afternoon, no new poll results from any state have been posted to strategicvision.biz since September 2009.

Yesterday, our own Charles Franklin recorded Johnson giving an interview to conservative talk-radio host Vicki McKenna on Madison station WIBA-1310 and promising yet another new survey.

McKenna: You're going back into the field here in Wisconsin in a couple of weeks, aren't you?

Johnson: Yes we are.

McKenna: Awesome. Um, yea, we need to find out just how soft Russ Feingold is. You have got a target rich environment here in Wisconsin when you start making those phone calls, David. But you have been polling elsewhere, and I guess my question to you is just a very broad question: Does the polling suck for Republicans anywhere?

Johnson: No it doesn't, we're looking at a real Tsunami...

For those interested, I have also uploaded the complete interview, which covers the full spectrum of politics from the Right but includes no further discussion of Strategic Vision or its polling.

As I wrote in December, if and when Strategic Vision resumes "making phone calls" or otherwise reporting results, absent significantly better methodological disclosure from Johnson, we will no longer include their numbers in our charts or publish them as "poll updates."

P.S. You can't make this up:  Johnson apparently also appeared on camera on ESPN earlier today discussing the subject of "Tiger Woods and crisis communications."


Un-Disclosing Data is Hard


While I was out shoveling snow and trying to keep my snow-bound children entertained last week, Dartmouth undergraduate Harry Enten -- our Pollster.com intern-to-be for 2010 -- was busy blogging up a storm. One item he posted last week provides yet another epilogue to an epilogue on the story of Strategic Vision LLC.

My last installment on the odd twist in this story noted that after apparently blocking our offices at the National Journal Group from accessing strategicvision.biz, the webmasters at Strategic Vision also sought to block the Internet Archive from displaying content previously released on strategicvision.biz.

But Harry noticed something: "The web pages can still be accessed online right now even without the Internet Archive!" How?

Well, it turns out that, despite not having one single page to display the polling data from 2005-2007 (they do for 2008 and 2009), one can still retrieve the original individual pages the polling data was displayed upon. In what can only be deemed as one of the WORST coverups of all time, Strategic Vision, LLC left the individual polling pages on its servers.

All you need to access the data is the original link to any poll. Those links are easily available from polling aggregation sites such as RealClearPolitics.com, Pollster.com, and even Wikipedia.org.

He even provides a video to show, step-by-step, how the links can be found.

And just in case our friends at Strategic Vision, LLC decide to take those not-quite-removed-pages down, Harry "downloaded every single poll file from 2005-2009 and have uploaded it in a single zip file for anyone to download."

Thanks Harry!

PS: He has a new post today that catches a prominent blogger's oversight and teaches a lesson about putting too much trust in Wikipedia.


WSJ's Bialik on Strategic Vision


The Wall Street Journal's Carl Bialik weighed in today on the Strategic Vision, LLC controversy in both his print column and a separate blog item. Collectively, like Shaila Dewan's New York Times story on Sunday, they provide a decent, concise overview of the story for those who have not been following it. Unfortunately, there is little news for those of us who have been following the story closely. According to Bialik, Strategic Vision CEO David Johnson "didn't respond to Wall Street Journal requests for comment."

Bialik did seek comment from various mathematicians regarding the claims of statistical irregularities in Strategic Vision's results made by Nate Silver (here, here and here) and others at FiveThirtyEight.com:

This week, Mr. Silver brought in a physicist and commenter on his blog to calculate the probability, which shrank to 5,000 to 1 against, when removing what he said was an unproven assumption that each digit should appear equally often. Several mathematicians said the shift in odds doesn't diminish Mr. Silver's finding that the Strategic Vision numbers were unlikely to arise by a quirk of fate.

He added additional details in the blog post:

Mathematicians said the Silver analysis -- finding that certain digits showed up far more often than others in Strategic Vision polls -- was troubling but want to see more evidence. Jordan Ellenberg, a University of Wisconsin, Madison, mathematician, blogged that the case isn't as persuasive as investigations into possible fraud in the Iranian election. "It's not so substantial that I would have gone public with it, if it were me," Ellenberg said, but he does think it merits further investigation.

"To strengthen the argument that Strategic Vision's (or any other polling group's) numbers seem unusual, the next step would be to assess the observed variation across a number of similar polling organizations and see where various groups fall," said Lance Waller, a biostatistician at Emory University.

One bit of news that Bialik passes along is that some of Strategic Vision's clients are now pressing the company for more verification of their data:

Two think tanks that are clients of Strategic Vision also are seeking more details on the firm's methods in light of Silver's analysis. The Goldwater Institute, which calls itself a free-market think tank, and the Oklahoma Council of Public Affairs hired Strategic Vision to test high-school students' civic knowledge in Arizona and Oklahoma, respectively.

After Silver questioned the Oklahoma results as being too bleak, both think tanks sought verification from Strategic Vision. "Although I find it very unlikely that Strategic Vision manufactured this data, I have asked for receipts from the marketing firm from which they purchased the contact data just to make certain," Matthew Ladner, vice president of research for Goldwater Institute, said.

Brandon Dutcher, vice president for policy for the Oklahoma group, isn't making up his mind just yet. "I have requested voluminous survey data from them, as well as answers to some methodological questions -- all of which I expect they can and will provide so that they can go about defending their firm and I can go about defending this survey," Dutcher said. "If not, however, then of course I would want my money back and wouldn't hire them again."

See both the article and blog post for all the details.

Bialik observes that while the controversy has gotten "bogged down in threats of litigation and arcane calculations," the controversy "has shed light on an inconvenient truth about widely reported political polls: Verifying their numbers is nearly impossible." That conclusion is hard to argue with and a big reason why better disclosure -- the issue behind the AAPOR reprimand that helped bring these issues to a head -- is so important. Only greater transparency will prevent these sorts of controversies in the future.


So Why Isn't AAPOR More Transparent?


The crux of the reprimand issued last week by the American Association for Public Opinion Research (AAPOR) against Strategic Vision, LLC is that the pollster failed to provide "any information about response rates, weighting, or estimating procedures." But if you look closely at the materials posted online in connection with AAPOR's "Ad Hoc" investigation of the primary polling mishaps of 2008, you will see several other pollsters for whom no response rate or weighting information is available. So why did AAPOR single out Strategic Vision? And why isn't AAPOR itself more transparent about the identity of the person who filed the complaint or about its communication with Strategic Vision's CEO, David Johnson? Let's take a closer look.

As of this writing, AAPOR has published information disclosed by pollsters in response to the requests of its primary polling investigation in two ways. Their final report, released this past April, summarizes the information that had been disclosed at the time (see especially Tables 4, 5, 7, 9 and 18). In partnership with the Roper Center, AAPOR has also created an online archive that includes responses and data received from pollsters, as well as many of their initial public reports. Some of the responses on the Roper site were received after the report was written.

If you take the time to sift through the various documents, you will still find (as of this writing) no responses on weighting procedures from three organizations: Strategic Vision, Clemson University and Ebony/Jet. Response rate information is still missing for those three plus two more, LA Times and Rasmussen Reports. Both response rates and weighting information are among the "minimal disclosure" items that the AAPOR code mandates that all pollsters disclose. So why did AAPOR single out Strategic Vision for public condemnation and not any of the others?

I put that question to AAPOR and received a two-part answer from standards chair Stephen Blumberg. First, the Roper/AAPOR archive does not include all of the latest information:

We recognize that there may be discrepancies between the ad hoc committee report, the information on the Roper Center site, and the information available to the ad hoc committee. Some information that was received after the ad hoc committee report was finalized has not yet been posted. More information will be posted soon to update the Roper Center site.

Second, while some organizations were apparently unable to provide all the information requested, they convinced AAPOR that they had made a good faith effort to disclose whatever information they had retained or otherwise had available:

Several organizations provided responses indicating that they did not produce, obtain, or retain sufficient information to provide the methodological information listed in the AAPOR Code and requested by the ad hoc Committee. Hence, it was not always possible for each organization to provide equally detailed information.

So why was Strategic Vision singled out for public reprimand?

Strategic Vision LLC, however, was the only polling firm that explicitly refused to provide such information in response to multiple requests. Strategic Vision LLC never indicated that such information was not produced, obtained, or retained.

Blumberg also expanded on why AAPOR is not commenting on the actions of other pollsters or disclosing the identity of the person who filed the initial complaint against Strategic Vision:

Regarding any judgments that may have been made during an AAPOR Standards Investigation of the adequacy of disclosure for any organization, you are aware (as an active AAPOR member and former Council member) that the confidentiality provisions in our Procedures do not permit AAPOR to comment. We cannot reveal whether complaints were filed, evaluation committees were formed, judgments were made, or actions other than public censure were taken.

[And yes, interests disclosed once again: I am an active AAPOR member and served as a member of its Executive Council from 2006 to 2008].   

Blumberg's reference to confidentiality raises an objection voiced frequently by Strategic Vision CEO David Johnson in response to the AAPOR action. "We've asked for a copy of the complaint that was filed against us, and who filed it," Johnson told Jim Galloway of the Atlanta Journal Constitution. "How can you respond to something when you don't know who filed the complaint." He also told the website Research that he "find[s] it unusual that an organisation that says they are all about transparency won't supply us with details of the complaint. What they were asking for were trade secrets."

AAPOR's refusal to name the person who filed the complaint is, as Blumberg says, consistent with its extensive "Schedule of Procedures for Code Violations," which includes numerous safeguards to "maintain confidentiality of the subject(s), information sources, and methods of investigation." Why the lack of transparency?

A good clue to the answer can be found in Sidney Hollander's chapter of the official AAPOR history posted on the organization's website. Ironically, the emphasis on anonymity and confidentiality was partly a reaction to concern about potential lawsuits and legal liability of the sort that David Johnson is now threatening. Hollander writes (p. 76):

In early 1974, some Council members began exploring what legal liability the organization might incur if it were to adopt stronger measures. The legal advice obtained recommended explicit procedures that could be applied uniformly as a means of minimizing the possibility of retaliation by liability suits.

Hollander does not address the issue, but it seems likely that the authors of those procedures wanted to protect against those who might try to use their process to promote frivolous or unfounded complaints. So they set up a procedure to carefully evaluate and investigate reported violations of their code before making any comment.

The chapter also reports that complaints about unidentified complainants are not new. Hollander cites a 1973 complaint against a polling organization that (p. 76),

declined to respond to the complaint without knowing the identity of the complainants. Anonymity of the complaint's source was an issue that has been continually debated as the Code developed. Although Council member Cisin said that concealing complainants' identities made the Standards Committee party to a 'security action,' the Standards Committee took the position that once a claim is accepted, the committee itself becomes the plaintiff in criminal law.

And what about another of David Johnson's complaints: Why would AAPOR expect a non-member to conform to its rules? That issue, Hollander writes, was also the subject of internal debate from the very beginning. He notes that in 1964, an argument in favor of acting on complaints against non-members was that AAPOR members had shown "overwhelming support for action against pseudo-surveys, for instance, and other violations by non-members that threatened to impair interviewer access to respondents" (p. 74). As a professional organization, AAPOR has always been concerned about unethical actions that threaten the image of its profession.

They resolved the debate, Hollander writes, by setting up rules that would hold practitioners "responsible for their work" through disclosure requirements, but not prescribe specific standards or best practices for how survey research should be conducted. And that brings us back to the Strategic Vision reprimand.

What is striking about AAPOR's action last week, especially in light of the responses of other organizations, is that other pollsters that fell short of full disclosure were not the subject of public reprimand. For example, the AAPOR Code says researchers should release response rates with their public reports, but as far as I know, only 1 of 21 pollsters disclosed a response rate at the time their surveys were released in 2008.  However, the other organizations either  released response rate information on request or responded in good faith about the information they could and could not "produce, obtain, or retain." AAPOR singled out Strategic Vision because it was, they say, the only organization that flat out refused to answer even cursory questions about its response rates and weighting procedures. It was the only organization, in effect, that refused to take responsibility for its work.


Strategic Vision: Time for Transparency


Troubling new details continue to emerge about Strategic Vision following their reprimand from AAPOR for a lack of methodological disclosure earlier this week. We ought to take great care before making allegations of outright fraud, but there is now enough conflicting information -- including lack of evidence of any physical Strategic Vision office -- that the burden of proof has shifted to Johnson. His obligation is not to AAPOR but to the general public. Transparency is the first crucial ingredient that allows us to determine whether their polls, or any other, deserve our trust.

I want to set aside Nate Silver's trailing digit analysis for the moment (beyond my comments yesterday), as I understand that he is working on further analyses that respond to some of the questions raised by his commenters. Instead, I want to focus on the conflicting statements of Strategic Vision CEO David Johnson and other details reported by Politico's Ben Smith and others yesterday.

1) Johnson and AAPOR - Johnson told Smith that "he was refusing to cooperate with AAPOR" because the organization refuses to tell him the identity of the person who filed the complaint:

"What we have asked from the very beginning was we would share all the methodology we wanted, - we wanted a copy of who filed the complaint," he said. "If they want transparency there has to be full transparency."

The problem with that statement is the phrase "from the beginning." AAPOR made multiple requests for methodological information from Strategic Vision -- and from 20 other polling organizations that did surveys in four primary states in 2008 -- starting in March 2008. These included two letters sent by Federal Express to the primary Strategic Vision mailing address listed on their web site. But these had nothing to do with any "complaint," and Johnson and Strategic Vision ignored them all.

Johnson's claim that he is "refusing to cooperate" with AAPOR contradicts two statements he made earlier in the week saying that he had cooperated fully. The journal Research reported that Johnson "said the firm had supplied AAPOR with all the information it had requested on 19 June this year." He also told ABC's Gary Langer that "I'm a little confused because we provided them the information on June 19."

AAPOR's press release this week is consistent with the more detailed statement given to ABC's Gary Langer by AAPOR Standards Chair Stephen Blumberg: After receiving a request for information in March 2009, Johnson finally "responded with an explicit refusal to provide the requested information," and then, "in response to notification of AAPOR's initial findings of a violation, Mr. Johnson provided some, but not all, of the information requested." At that point, he stopped responding to their queries.

This whole episode is both puzzling and troubling. No, pollsters were not generally as cooperative with the AAPOR investigation last year as some of us had hoped. But Strategic Vision's bizarre stonewalling of AAPOR's requests this year and last year (and of mine in 2007) was unusual. None of it makes much sense.

2) Where are The Cross-tabs? - Ben Smith writes: "Details of Strategic Vision's polls have long raised flags among pollsters, in part because it refuses -- unlike other pollsters -- to release "cross-tabs" -- the detailed demographic breakdowns of individual polls."

Strategic Vision is not the only pollster that fails to regularly post cross-tabular tables on its web site the way SurveyUSA, PPP and others do. Some hold back such tables for paying subscribers. Others prefer to report subgroup results selectively. However, Strategic Vision is the only pollster that, as far as I know, has refused to release cross-tabulations of its political surveys to anyone, including reputable journalists.

Three years ago (3/31/2006), for example, Johnson promised my colleagues at The Hotline that he would "honor requests for crosstabs and will make them available online in 4/06, when their website is revamped to handle the files." No such files ever appeared on the Strategic Vision website.

A helpful reader also alerts me to requests for cross-tabs made to Johnson by Jim Galloway of the Atlanta Journal Constitution in March 2006 and twice (here and here) in 2007, but finds no evidence that Galloway ever received any of the promised cross-tabs.

3) Where's the Office? - Some alert commenters on FiveThirtyEight discovered something that Ben Smith also reported: "[Strategic Vision's] website, as recently as last month, listed offices in Atlanta, Madison, Seattle, and Tallahassee -- all of which match the locations of UPS stores, rather than actual offices."

And to underline the point, the Atlanta mailing address (2451 Cumberland Parkway SE, Suite 3607) that, as of this hour, remains the only address on the Strategic Vision contact page is also a post office box at a UPS Store. That was the same address to which AAPOR sent its FedEx requests during 2008. I called the UPS Store to confirm, and Suite 3607 is a mailing address that they maintain.

FiveThirtyEight reader inferno asks a good question: "[D]o they have any sort of actual physical address? i.e., an office?" If there is, it's awfully hard to find any record of it.

The annual business registration for Strategic Vision filed with the Georgia Secretary of State indicates that their most recent filing -- which, notably, appears to be "active" but in "noncompliance" -- also lists the Cumberland Parkway address. The only other address for the firm, in a 2002 filing, is a suite in a Peachtree Street office tower that, according to a Google search, now appears to be occupied by a law office.

4) What Does It Cost? - Ben Smith gets to the crux of the issue: "Another question is how the firm pays for its polls. Its website lists at least 172 public polls, and at a stated cost of $30,000 a poll, that's an expenditure of more than $5 million -- quite a sum for a small firm."

The "stated" $30,000 cost comes from a 2006 interview Johnson gave to then Hotline polling editor Aoife McCarthy (3/31/2006). She wrote:

So how much does all of this cost? Strategic Vision uses these political polls as marketing tool for the company. Johnson says each poll costs an average of $30K to conduct. Do the math -- 39 polls in '05 and 10 polls so far in '06 at the bargain price of $30K results in nearly $1.5M in just 15 months. That does not include the first year of polling in '04.

Would it really cost Strategic Vision $30,000 to conduct surveys that typically include 800-1,000 interviews? I doubt it. Not if they are really the "10-question-per-state-polls" that Johnson claimed to the Hotline in 2004 (roughly the length of most polls on their website). Not if, as Johnson reported in the 2006 Hotline interview, "callers are paid directly by Strategic Vision." I'll save the specific numbers for another post (if anyone is interested), but I have a hard time figuring out how his costs could be much higher than $5 per interview (though that still would amount to nearly a million dollars since 2004).
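For readers who want to check my back-of-the-envelope math, here is the arithmetic behind those figures. The per-interview cost and interviews-per-poll numbers are my own rough assumptions, not anything Strategic Vision has disclosed:

```python
# Back-of-the-envelope arithmetic; the cost and interview assumptions are mine,
# not figures Strategic Vision has disclosed.
interviews_per_poll = 900      # midpoint of the typical 800-1,000 completes
cost_per_interview = 5.00      # a generous estimate for a short, 10-question phone poll
public_polls_listed = 172      # public polls listed on the Strategic Vision website

estimated_per_poll = interviews_per_poll * cost_per_interview
print(f"Estimated cost per poll: ${estimated_per_poll:,.0f}")                    # $4,500
print(f"Estimated total: ${estimated_per_poll * public_polls_listed:,.0f}")      # $774,000 -- "nearly a million"
print(f"At the stated $30,000 per poll: ${30_000 * public_polls_listed:,.0f}")   # $5,160,000 -- "more than $5 million"
```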

Why would he cite such a big number? I haven't a clue, but given all the other issues now swirling, it is a question David Johnson needs to answer. In 2004 he told the Hotline that "no client is paying," but he told Smith yesterday that some of their surveys "are piggybacked onto other polls." So which is it?   

So my bottom line: I have no idea whether Nate Silver's insinuations of fraud are real, but the burden of proof is shifting. Strategic Vision has to become considerably more transparent about their methods and data, or we will have little choice but to reach an ugly conclusion.

But there is a much bigger problem here for the rest of us. The larger issue is not whether Strategic Vision may be fabricating numbers or whether another less sensational explanation exists for all this obfuscation and contradiction. The problem is that anyone could theoretically make up a set of numbers and -- without a lot more transparency about methods and data than we now typically see -- pass it off as a real poll. The way those of us in the "new media" consume polling releases from every conceivable source, and that certainly includes Pollster.com, makes that possibility all too real. PPP's Tom Jensen has this exactly right:

I could leave PPP, start Tom Jensen Polling, put out a bunch of topline numbers the day before an election that just copied the Pollster, RCP, or Nate Silver predictions and be one of the most accurate pollsters in the country. That would be pretty darn easy and anyone could do it. And that's why public pollsters should hold themselves to a higher standard and also be held to a higher standard by the media.

That's perhaps the most extreme reason why, last month, I argued for a system of scoring pollsters for the quality of their disclosure and posting those scores online, as a matter of routine, alongside polling results. This episode makes the need for such a system more clear and more urgent than ever.

So who's with me?


US: News Interest (Pew 9/18-21)


Pew Research Center
9/18-21/09; 1,000 adults, 3.5% margin of error
Mode: Live telephone interviews
(Pew release)

National

All things considered... these days have you been hearing too much, too little, or the right amount about Barack Obama?
37% Too much, 12% Too little, 46% Right amount

Most Closely Followed Story
36% Debate over health care reform
15% Reports about the condition of the U.S. economy
14% Reports about swine flu and the availability of a vaccine
12% The murder of Yale graduate student Annie Le in a campus lab building
6% The U.S. military effort in Afghanistan
4% Obama cancelling a planned missile defense system in Poland and the Czech Republic

How much, if anything, have you heard about each of the following?

Senator Max Baucus unveiling his health care reform proposal:
19% A lot, 36% A little, 45% Nothing at all

Employees of the community organizing group "ACORN" appearing to give advice to a couple posing as a pimp and prostitute:
31% A lot, 25% A little, 43% Nothing at all

Charges that racism is a factor in criticisms of President Obama and his policies:
40% A lot, 35% A little, 24% Nothing at all

A September 12th rally in Washington to protest government spending and policies:
23% A lot, 37% A little, 40% Nothing at all


AAPOR "Raises Objections" to Strategic Vision's Non-Disclosure


As the final act of a yearlong investigation of the polling mishaps leading up to the New Hampshire and other primary elections last year, the American Association for Public Opinion Research (AAPOR) today criticized the firm Strategic Vision LLC for refusing to disclose "essential facts" about surveys it conducted prior to the 2008 New Hampshire and Wisconsin primaries. An AAPOR press release described Strategic Vision's nondisclosure as "inconsistent with the association's Code of Professional Ethics and Practices and contrary to basic principles of scientific research."

When AAPOR's special ad hoc committee released its report last April, over a year after the New Hampshire miscues it was created to investigate, it had still not received "minimal disclosure" from 3 of 21 firms that had received requests a year earlier (though Strategic Vision's evasions were noteworthy and consistent with my own experience). Today's AAPOR release completes the story:

For more than one year, AAPOR was unable to obtain the following basic information about Strategic Vision LLC's polling in New Hampshire and Wisconsin: who sponsored the survey; who conducted it; a description of the underlying sampling frame; an accounting of how "likely voters" were identified and selected; response rates; and a description of any weighting or estimating procedures used. AAPOR considers the release of this information for public polls to be a minimum requirement for professional behavior among those who conduct public opinion research.

Following Strategic Vision LLC's failure to respond to AAPOR's inquiries, a complaint was filed alleging a violation of the association's Code of Professional Ethics and Practices. The investigation process included two notices of non-compliance to Mr. David Johnson, CEO of Strategic Vision LLC, who explicitly refused to provide the requested information. Later, after receiving notification of the association's initial findings of a violation, Mr. Johnson offered partial but incomplete information. AAPOR never received any information about response rates, weighting, or estimating procedures. The AAPOR Executive Council now concludes that the repeated noncompliance by Strategic Vision LLC was a violation of the AAPOR Code.

AAPOR's action comes with no penalty, since no one associated with Strategic Vision is an AAPOR member.

The release also notes that this action "concludes AAPOR's official evaluation" of the 2008 primary polling mishaps while also noting that Strategic Vision was the "only polling firm" that failed to meet its minimal disclosure standards in response to their committee's requests. That statement implies that other firms have provided additional information since the release of the committee's report in April. As of this hour, however, the reports available on the Roper Center web page appear to include no new information on the South Carolina surveys by Clemson University and Ebony/Jet/Lester & Associates beyond what they publicly released in 2008 (both firms had been singled out in the April report, along with Strategic Vision, for failing to respond to committee requests).

So there you have it. Twenty months after announcing its intention to request data related to the New Hampshire primary polls, six months after reporting that only 7 of 21 firms had gone beyond the minimal disclosure that AAPOR mandates for public release "in any report of research results," AAPOR today "raises objections" about the response of one firm.

To put it simply: The process of "on demand" disclosure backed by the sort of punitive sanction issued today is not working. As I wrote in August, there may be a better way.  

[Interests disclosed: I'm an active AAPOR member and served on AAPOR's Executive Council from 2006 to 2008].

Update: ABC's Gary Langer has more, including news that "AAPOR also said an updated and final version of its report on the pre-primary polls is now available."  An AAPOR spokesperson tells me that Gary is in error and that the report has not been recently updated.


A Tale of Two Doctor Polls


My National Journal column for the week explores two recent surveys of physicians that produced widely divergent results. One was published in the New England Journal of Medicine (NEJM), conducted by two researchers at the Mount Sinai School of Medicine in New York and funded by the Robert Wood Johnson Foundation. The second was conducted by Investor's Business Daily (IBD) and the TechnoMetrica Institute of Policy & Politics (TIPP).

The column focuses on the huge difference in the quality of disclosure between the two -- the NEJM study tells us much, much more about its methods and questions than the IBD/TIPP survey. The latter fails to meet even minimal standards for disclosure mandated by the National Council on Public Polls and the American Association for Public Opinion Research.

Late Update - One of the subsequent stories published by Investor's Business Daily on their survey of doctors now includes the following information not disclosed in their initial report:

The questionnaires were sent out Aug. 28 to 25,600 doctors nationwide. The sample was purchased from a list broker, Lake Group Media of Rye, N.Y. One hundred of those responding were retired, and their answers were not included in the final results.

If it was there last week, I missed it, but this new information tells us two things: First, the IBD/TIPP survey did not use the same sample frame as the NEJM survey. Second, as reader Franzneumann points out, the response rate is significantly lower: Even if they had included the 100 retired doctors along with 1,376 reported interviews, their response rate is only 6%. Compare that to the 43% response rate reported by the NEJM survey.
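To show the arithmetic (the calculation is mine, using only the figures IBD has now disclosed):

```python
# Response-rate arithmetic using the figures from IBD's follow-up story.
questionnaires_mailed = 25_600
completed_interviews = 1_376
retired_excluded = 100

# Counting the excluded retired respondents as completes is the most generous assumption.
response_rate = (completed_interviews + retired_excluded) / questionnaires_mailed
print(f"IBD/TIPP response rate: {response_rate:.1%}")   # ~5.8%, roughly 6%
print("NEJM survey response rate (as reported): 43%")
```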

Update 1: So why do the two surveys produce such different results? As I argued in the column, without even minimal disclosure from the IBD/TIPP poll we can only speculate, but here are the most likely theories:

1) Different questions. The NEJM study asks a three-way, forced choice between a private-only option (involving "tax credits or low income subsidies to buy private insurance"), a public-only option (that would "eliminate private insurance and cover everyone in a single plan like medicare") and a combination offering a choice between public and private plans. It makes no reference to current legislation, the Democrats or President Obama. We do not know exactly what the IBD question asks, although it appears to be simply about "the proposed plan" or possibly about a "proposed government expansion."

We have seen some pretty big differences on samples of all adults between surveys that simply ask about health reform "proposals" being debated in Congress (without further definition) and those that attempt to describe a "public option." For example, in August, CBS News found 57% of adults in favor and 35% opposed to "the government offering everyone a government administered health insurance plan -- something like the Medicare coverage that people 65 and older get -- that would compete with private health insurance plans." At about the same time, the Pew Research Center reported 38% of adults in favor and 44% opposed to "the health care proposals being discussed in Congress."

The three-option question that the NEJM survey reported also produces a bigger percentage supporting a public option, alone (10%) or in combination with private plans (63%), than an up-or-down, favor-or-oppose question. The NEJM questionnaire actually asked about each of the three proposals separately (see the column for complete wording), then followed up with a three-way choice.

They have not yet reported on the individual results, although Alex Federman, one of the two medical researchers at the Mount Sinai School of Medicine in New York who conducted the survey, tells me that 65% said they supported a choice between public and private options when asked about it separately. They plan to publish the remaining results from the survey in future articles. Federman added that they considered the three-way choice most appropriate, both because of what they learned from questionnaire pretesting and because they wanted doctors to make the same choice facing legislators between a public-only option, private-only options or a choice between the two. In pre-testing, Federman says, they learned that doctors had all sorts of opinions on health reform, but "a lot of people were more mixed" between their views of public and private options.

2) Different samples? I put a question mark on this theory because, as reviewed in the column, we know almost nothing about how the IBD/TIPP poll sampled "practicing physicians." As Nate Silver points out, they do not even define what they mean by "practicing." However, if IBD/TIPP used a different sample frame than the AMA Physician Masterfile (a comprehensive list of all physicians, not an AMA membership list), it might well explain some of the difference in results.

[Clarification: See Late Update above -- The IBD/TIPP survey did use a different sample frame although for now we know only where it came from, not how it differed. They also imply that "practicing" means non-retired].

[One dissent worth noting: This morning, a public health researcher used Twitter to take issue with my observation that the AMA file is a "very accurate" list. "Best out there?," she asked. "Yes, but problematic."]

3) Response bias? Even if the two lists (or "sample frames") were identical, it is possible that different kinds of doctors responded to the two surveys. For example, did the IBD survey prominently identify itself as sponsored by Investor's Business Daily? Those who know or subscribe to it know that its editorials tend to be more conservative, and its editorial criticism of the Obama health care reforms has drawn sharp rebukes. As such, reform supporters might be less likely to participate in an IBD survey. Of course, we do not know anything about how the IBD/TIPP survey recruited respondents.

[Clarification: See the Late Update above -- The IBD/TIPP survey has a response rate of roughly 6% compared to the 43% response rate on the NEJM survey].

According to Federman, the NEJM survey identified itself as coming from the Mount Sinai School of Medicine and prominently identified its support from the Robert Wood Johnson Foundation. The headline of the initial postcard they sent identified it as "The National Physician's Survey on Health Care Reform" (emphasis in original). The subhead: "Congress wants to hear from doctors on health care reform."

Some relatively recent research has shown that "interest in the survey topic, reactions to the survey sponsor, and the use of incentives" are the three most critical variables in the linkage between the response rate and the potential for non-response bias. Perhaps those interested in "health care reform" enough to do a survey are more supportive of reform than those opposed?

4) Different survey dates. The surveys did have different field periods. The first NEJM questionnaires were sent out on June 25 and they started analyzing data on September 4. The IBD poll, published on September 15, says only that it was "conducted by mail the past two weeks."

While it is possible that the field dates contributed to the difference in results, since opposition to reform has increased over the course of the year, it's unlikely. First, as Charles Franklin noted on Friday, opposition to health care reform among all adults has been mostly stable since early July, about the time when the NEJM survey started. Second, according to Federman, they sent out their questionnaires in multiple waves, analyzed the results by wave and found no trends over the course of the field period.

Again, much of this commentary is just speculation. The larger point is that when one survey discloses its methods and the other does not, we are left guessing.


'Can I Trust This Poll?' - Part III


In Part II of this series on how to answer the question, "can I trust this poll," I argued that we need better ways to assess "likely voter" samples: What kinds of voters do pollsters select and how do they choose or model the likely voter population? Regular readers will recall how hard it can be to convince pollsters to disclose methodological details. In this final installment, I want to review the past efforts and propose an idea to promote more complete disclosure in the future.

First, let's review the efforts to gather details of pollster methods carried out over the last two years by this site, the American Association for Public Opinion Research (AAPOR) and the Huffington Post.

  • Pollster.com - In September 2007, I made a series of requests of pollsters that had released surveys of likely caucus goers in Iowa. I asked for information about their likely voter selection methods and for estimates of the percentage of adults represented by their surveys. A month later, seven pollsters -- including all but one of the active AAPOR members -- had responded fully to my requests, five provided partial responses and five answered none of my questions. I had originally planned to make similar requests regarding polls for the New Hampshire and South Carolina primaries, but the responses trickled in so slowly and required so much individual follow-up that I limited the project to Iowa (I reported on the substance of their responses here).
  • AAPOR - In the wake of the New Hampshire primary polling snafu, AAPOR appointed an Ad Hoc Committee to investigate the performance of primary polls in New Hampshire and, ultimately, in three other states: South Carolina, California and Wisconsin. They made an extensive request of pollsters, asking not only for things the AAPOR code requires pollsters to disclose but also for more complete information, including individual-level data for all respondents. Despite allowing pollsters over a year to respond, only 7 of 21 provided information beyond minimal disclosure, and despite the implicit threat of AAPOR censure, three organizations failed to respond with even the minimal information mandated by AAPOR's ethical code (see the complete report).
  • HuffingtonPost - Starting in August 2008, as part of their "Huffpollstrology" feature, the Huffington Post asked a dozen different public pollsters to provide response and refusal rates for their national polls. Six replied with response and refusal rates, two responded with limited calling statistics that did not allow for response rate calculations and four refused to respond (more on Huffpollstrology's findings here).

The disclosure requirements in the ethical codes of survey organizations like AAPOR and the National Council on Public Polls (NCPP) gained critical mass in the late 1960s. George Gallup, the founder of the Gallup Organization, was a leader in this effort, according to Albert Golin's chapter in a published history of AAPOR (The Meeting Place). In 1967, Gallup proposed creating what would ultimately become NCPP:

The disclosure standards [Gallup] was proposing were meant to govern "polling organizations whose findings regularly appear in print and on the air....also [those] that make private or public surveys for candidates and whose findings are released to the public." It was clear from his prospectus that the prestige of membership (with all that it implied for credentialing) was thought to be sufficient to recruit public polling agencies, while the threat of punitive sanctions (ranging from a reprimand to expulsion) would reinforce their adherence to disclosure standards [p. 185].

Golin adds that Gallup's efforts were aimed at a small number of "black hat" pollsters in hopes of "draw[ing] them into a group that could then exert peer influence over their activities." Ultimately, this vision evolved into AAPOR's Standards for Minimal Disclosure and NCPP's Principles of Disclosure.

Unfortunately, as the experiences of the last year attest, larger forces have eroded the ability of groups like AAPOR and NCPP to exert "peer pressure" on the field. A new breed of pollsters has emerged that cares little about the "prestige of membership" in these groups. Last year, nearly half the surveys we reported at Pollster.com had no sponsor other than the businesses that conducted them. These companies either disseminate polling results for their market value, make their money by selling subscription access to their data, or both. They know that the demand for new horse race results will drive traffic to their websites and expose their brand on cable television news networks. As such, they see little benefit to a seal of approval from NCPP or AAPOR and no need for exposure in more traditional, mainstream media outlets to disseminate their results.

The recent comments of Tom Jensen, the communications director at Public Policy Polling (PPP), are instructive:

Perhaps 10 or 20 years ago it would have been a real problem for PPP if our numbers didn't get run in the Washington Post but the fact of the matter is people who want to know what the polls are saying are finding out just fine. Every time we've put out a Virginia primary poll we've had three or four days worth of explosion in traffic to both our blog and our main website.

So when pressured by AAPOR, many of these companies feel no need to comply (although I should note for the record that PPP responded to my Iowa queries last year and responded to the AAPOR Ad Hoc Committee request for minimal disclosure, but no more). The process of "punitive sanctions" moves too slowly and draws too little attention to motivate compliance among non-AAPOR members. Although the AAPOR Ad Hoc Committee made its requests in March 2008, its Standards Committee is still processing the "standards case" against those who refused to comply. In February, AAPOR issued a formal censure, its first in more than ten years, of a Johns Hopkins researcher for his failure to disclose methodological details. If you can find a single reference to it in the Memeorandum news compilation for the two days following the AAPOR announcement, your eyes are better than mine.

Meanwhile, the peer pressure that Gallup envisioned continues to work on responsible AAPOR and NCPP members, leaving them feeling unfairly singled out and exposed to attack by partisans and competitors. I got an earful of this sentiment a few weeks ago from Keating Holland, the polling director at CNN, as we were both participating in a panel discussion hosted by the DC AAPOR chapter. "Disclosure sounds like a great idea in the confines of a group full of AAPOR people," he said, "but it has real world consequences, extreme real world consequences . . . as a general principle, disclosure is a stick you are handing to your enemies and allowing them to beat you over the head with it."

So what do we do? I have an idea, and it's about scoring the quality of pollster disclosure.

To explain what I mean, let's start with the disclosure information that both AAPOR and NCPP consider mandatory -- the information that their codes say should be disclosed in all public reports. While the two standards are not identical, they largely agree on these elements (only AAPOR considers the release of response rates mandatory, while NCPP says pollsters should provide response rate information on request):

  • Who sponsored/conducted the survey?
  • Dates of interviewing
  • Sampling method (e.g. RDD, List, Internet)
  • Population (e.g. adults, registered voters, likely voters)
  • Sample size
  • Size and description of the subsample, if the survey report relies primarily on less than the total sample
  • Margin of sampling error
  • Survey mode (e.g. live interviewer, automated, internet, via cell phone?)
  • Complete wording and ordering of questions mentioned in or upon which the release is based
  • Percentage results of all questions reported
  • [AAPOR only] The AAPOR response rate or a sample disposition report

NCPP goes farther and spells out a second level of disclosure -- information pertaining to publicly released results that its members should provide on written request:

  • Estimated coverage of target population
  • Respondent selection procedure (for example, within household), if any
  • Maximum number of attempts to reach respondent
  • Exact wording of introduction (any words preceding the first question)
  • Complete wording of questions (per Level I disclosure) in any foreign languages in which the survey was conducted
  • Weighted and unweighted size of any subgroup cited in the report
  • Minimum number of completed questions to qualify a completed interview
  • Whether interviewers were paid or unpaid (if live interviewer survey mode)
  • Details of any incentives or compensation provided for respondent participation
  • Description of weighting procedures (if any) used to generalize data to the full population
  • Sample dispositions adequate to compute contact, cooperation and response rates

They also have a third level of disclosure that "strongly encourages" members to "release raw datasets" for publicly released results and "post complete wording, ordering and percentage results of all publicly released survey questions to a publicly available web site for a minimum of two weeks."

The relatively limited nature of the mandatory disclosure items made sense given the print and broadcast media into which public polls were disseminated when these standards were created. But now, as Pollster reader Jan Werner points out via email, things are different:

When I argued in past decades for fuller disclosure, the response was always that broadcast time or print space were limited resources and too valuable to waste on details that were only of interest to a few specialists. The Internet has now removed whatever validity that excuse may once have had, but we still don't get much real information about polls conducted by the news media, including response rates.

So here is my idea: We make a list of all the elements above, adding the likely voter information I described in Part II. We gather and record whatever methodological information pollsters choose to publicly release into our database for every public poll that Pollster.com collects. We then use the disclosed data to score the quality of disclosure of every public survey release. Aggregation of these scores would allow us to rate the quality of disclosure for each organization and publish the scores alongside polling results.
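To make the idea a bit more concrete, here is a minimal sketch of how such a disclosure score might be computed. The checklist items, weights and 0-100 scale are hypothetical placeholders for illustration only, not a proposed standard:

```python
# Illustrative sketch of a disclosure-quality score.
# The item names, weights and 0-100 scale are hypothetical, not a proposed standard.

MANDATORY = [
    "sponsor", "field_dates", "sampling_method", "population", "sample_size",
    "margin_of_error", "mode", "question_wording", "full_results", "response_rate",
]
ON_REQUEST = [
    "coverage", "respondent_selection", "contact_attempts",
    "weighting_description", "sample_dispositions",
]

def disclosure_score(disclosed):
    """Return a 0-100 score, weighting the mandatory items more heavily."""
    mandatory_hits = sum(item in disclosed for item in MANDATORY)
    on_request_hits = sum(item in disclosed for item in ON_REQUEST)
    # Hypothetical weighting: 70 points for mandatory items, 30 for the rest.
    return round(70 * mandatory_hits / len(MANDATORY)
                 + 30 * on_request_hits / len(ON_REQUEST), 1)

# Example: a release that includes most mandatory items but nothing beyond them.
print(disclosure_score({"sponsor", "field_dates", "population", "sample_size",
                        "margin_of_error", "mode", "question_wording",
                        "full_results"}))  # 56.0
```

Aggregating scores like these across all of an organization's releases would yield the pollster-level ratings described above.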

Now imagine what could happen if we made the disclosure scores freely available to other web sites, especially the popular poll aggregators like RealClearPolitics, FiveThirtyEight and the Polling Report. What if all of these sites routinely reported disclosure quality scores with polling results the way they do the margin of error? If that happened, it could create a set of incentives for pollsters to improve the quality of their disclosure in a way that enhances their reputations rather than making them feel as if they are handing a club to their enemies.

Imagine what might happen if we could create a database, available for free to anyone for non-commercial purposes (via a Creative Commons license), of not just poll results, sample sizes and survey dates, but also a truly rich set of methodological data appended to each survey. We might help create the tools that would allow pollsters to refine their best practices and the next wave of ordinary number crunchers to find ways to decide which polls are worthy of our trust.

The upside is that this system would not require badgering of pollsters or a reliance on a slow and limited process of "punitive sanctions." It would also not place undue emphasis on any one element of disclosure (as the "Huffpollstrology" feature does with response rates). We would record whatever is in the public domain, and if pollsters want to improve their scores, they can choose what new information to release. If a particular element is especially burdensome, they can skip it.

The principal downside is that turning this idea into a reality requires considerable work and far more resources than I have at my disposal. We would need to expand both our database and our capacity to gather and enter data. In other words, we would need to secure funding, most likely from a foundation, to make this idea a reality.

The scoring procedure would have to be thought out very carefully, since different types of polls may require different kinds of disclosure. We would need to structure and weight the index so that different categories of poll get scored fairly. I am certain that to succeed, any such project would need considerable input from pollsters and research academics. The index and scoring would also need to be utterly transparent. We would want to set up a page or data feed so that anyone on the Internet could see the disclosed information for any poll, to evaluate how any survey was scored.

For the moment, at least, this is more an idea than a plan, and it may be little more than fanciful "pie in the sky" that gets not much further than this blog posting. Nevertheless, in my five years of participating in this amazing revolution of news and information on the internet that we used to call "the blogosphere," I have come to a certain faith that ideas become a reality when we put them out in the public domain and offer them up for comment, criticism and revision.

So, dear readers, what do you think? Want to help make it reality?

[Note: I will be participating in a panel tomorrow on "How to Get the Most Out of Polling" at this week's Netroots Nation conference. This series of posts previews the thoughts I am hoping to summarize tomorrow].


Did the Dog Eat the Data?


Here is an update on Strategic Vision, one of the three polling firms that never responded to repeated requests for information by the American Association for Public Opinion Research (AAPOR) investigation of the problems with primary election polling in New Hampshire and elsewhere in 2008. Jim Galloway of the Atlanta Journal Constitution contacted Strategic Vision's CEO David Johnson about a new Georgia poll they released yesterday and asked him to comment on his firm's lack of cooperation with the AAPOR committee:

Johnson, the CEO of Strategic Vision, said he received a single request from the organization. "I got the request for this two days before the report was released," he said. "And I've got the e-mails to prove it." Johnson said the AAPOR says it sent a request by certified mail, but he never received it.

I forwarded Johnson's comments to Nancy Mathiowetz, the former AAPOR president who oversaw the task of requesting information from the 21 polling organizations that released surveys in the four states studied by the AAPOR committee. She replied with two Federal Express receipts showing that documents were sent to Johnson at the Atlanta "headquarters" address listed on the Strategic Vision web site, one on March 5, 2008 and the second on October 1, 2008 -- a full year and six months, respectively, before the release of the AAPOR report.


[Image: FedEx delivery receipt]

(Click here for the PDF of both receipts)

While we cannot know what happens once a document arrives at an organization, the FedEx receipts confirm that the AAPOR documents were received and signed for on both occasions.

Regardless of when they first learned of the requests, nothing prevents Strategic Vision from disclosing the requested information right now. The AAPOR report indicates that their investigators were unable to obtain Strategic Vision's response rate, their method of selecting a respondent in each sampled household, a description of their weighting procedures and information about their sampling frame or the method or questions used to identify likely voters -- all information that, according to AAPOR's code of ethics, a pollster should always disclose with a public poll report. Johnson could share this information with all of us right now if he wanted to.

And as for the raw data for all individuals contacted and interviewed -- as well as all of the other information requested -- the AAPOR report makes clear that it is not too late. The committee has deposited all of the information they received in the Roper Center Data Archive where, according to the report, "it will be available to other analysts who wish to check on the work of the committee or to pursue their own independent analysis of the pre-primary polls in the 2008 campaign." Moreover, "If additional information is received after the report's release, the database at the Roper Center will be updated."

Johnson's response to the AJC may sound familiar. Long time readers will remember that I made my own requests of pollsters that had fielded surveys in Iowa during 2007. Strategic Vision was one of five organizations that never answered any of my questions. Unlike AAPOR, I relied on email since I lacked the budget to send requests via Federal Express. Thanks to my Gmail archive, I can report the following:

I sent an initial request by email to David Johnson on September 27, 2007 and heard nothing back.

I followed up with a reminder on October 17, 2007 that produced the following response (from the same email address for David Johnson I had used for the original request):

Mark,

I did not receive this email of 9/27. I am not sure why unless it has to do with our hosting company or server. I will be glad to get you responses and as things would have it, will be releasing an Iowa poll tomorrow

Two days later, having heard nothing further, I sent Johnson another reminder and received this response:

Mark,

I am working on your responses now. I was slammed the past two days with deadlines.

It was certainly a busy time, so I waited another eleven days before reporting on the degree of cooperation I received from the Iowa pollsters and six weeks more before posting an analysis of the information I had received. Unfortunately, I never heard anything more from David Johnson.

This sort of episode makes it clear that we are naive to expect all pollsters to provide meaningful methodological disclosure even "on request," even to organizations like AAPOR. Last Friday, I attended a conference on survey quality at Harvard University, where UNC professor Phil Meyer said that our best hope is a "real accountability system" based on public pressure, "a more efficient market on the demand side." He is absolutely right.

Update: A belated hat-tip to reader EC for the tip on Galloway's AJC item.  


Re: AAPOR's Report - 2008 vs 1948


Via email, Kathy Frankovic, former Director of Surveys at CBS News, sends this comment about my post yesterday on the disappointing pollster cooperation with AAPOR's ad hoc committee report on the New Hampshire primary polling mishap:

There is a big difference between 1948 and where we are today in the field of survey research. In 1948, there was an accepted academic standard for survey research – probability sampling – one that was not used by the public pollsters. That – in addition to the lack of polling close to the election – was an obvious conclusion the SSRC researchers could make to explain what went wrong. There is no such obvious methodological improvement available as an explanation for the 2008 problems in NH. It’s not cell phones, it’s not respondent selection, and it’s not ballot order. Timing (and the possibilities of last minute changes) may once again be everything. And while more organizations should have disclosed more, I think that it’s unlikely that more data would have told us anything more definitive than we learned from the report as written.

For what it's worth, the report itself specifically referenced the contrast with 1948:

The work of the committee, and hence this report, has been delayed by a slow response from many of the pollsters who collected data from the four states in which the committee focused its efforts – New Hampshire, South Carolina, Wisconsin, and California. This is quite a different situation than after the 1948 general election, when there were fewer firms engaged in public polling, the threat to the future of the industry seemed to be greater, and the polling firms were fully cooperative. In 2008, many of the firms that polled in New Hampshire had studies in the field for primaries that occurred right after that. Today, there are well-publicized standards for disclosure of information about how polls are conducted. AAPOR, an organization of individuals engaged in public opinion research; the National Council on Public Polls (NCPP), an organization of organizations that conduct public opinion research; and the Council of American Survey Research Organizations (CASRO), also an organization of organizations, have all promulgated standards of disclosure. Despite the norms, at the time this report was finalized, one-fifth of the firms from which information was requested had not provided it. For each of these four firms, we were able to retrieve some of the requested information through Internet searches, but this was incomplete at best. If additional information is received after the report’s release, the database at the Roper Center will be updated.

So if and when pollsters who did not share raw, respondent level data share it with AAPOR, it will be posted to the Roper Center's listing (which is open to anyone, not limited to member institutions). I am told that Roper will also soon add pdf reproductions of all the responses received from pollsters, not just those that shared respondent level data.


AAPOR's Report: Why 2008 Was Not 1948


As someone who writes about polling methodology, I consider last week's report from the American Association for Public Opinion Research (AAPOR) on the mishaps in the New Hampshire and other primary election polling last year manna from heaven. Republican pollster David Hill was right to call it "the best systematic analysis of what works and what doesn't for pollsters" in decades. The new findings and data on so many aspects of polling arcana, from "call backs" to automated-IVR polls, are invaluable, especially given that the AAPOR researchers lacked access to all of the public polling data from New Hampshire or the three other states they focused on.

But that lack of information was also important. Valuable as it is, the report was also hindered by a troubling lack of disclosure and cooperation from many of the organizations that played a part in what even prominent pollsters described as an unprecedented "fiasco" and "one of the most significant miscues in modern polling history."

Last week, the Wall Street Journal's Carl Bialik summed up the problem:

Just seven of 21 polling firms contacted over a year ago by the American Association for Public Opinion Research for the New Hampshire postmortem provided information that went beyond minimal disclosure -- such as data about the interviewers and about each respondent.

Last year, two days after the New Hampshire primary, I wrote a column reminding my colleagues of the investigation that followed the 1948 polling debacle that created the infamous "Dewey Defeats Truman" headline (emphasis added):

[A] week after the [1948] election, with the cooperation of virtually every prominent public pollster, the independent Social Science Research Council (SSRC) convened a panel of academics to assess the pollsters' methods. After "an intensive review carried through within the span of five weeks," their Committee on the Analysis of Pre-election Polls and Forecasts issued a report that would ultimately reshape public opinion polling as we know it.

[...]

[SSRC Committee] members moved quickly, as their report explains, out of a sense that "extended controversy regarding the pre-election polls ... might have extensive repercussions upon all types of opinion and attitude studies."

The American Association for Public Opinion Research "commended" the SSRC effort and urged its member organizations to cooperate. "The major polling organizations," most of which were commercial market researchers competing against each other for business, "promptly agreed to cooperate fully, opened their files and made their staffs available for interrogation and discussion."

But that was 1948. Things were different last year.

On January 15, 2008, AAPOR announced it would form an ad-hoc committee to evaluate the primary pre-election polling in New Hampshire. Two weeks later, it announced the names of the eleven committee members. They convened soon thereafter and decided to broaden the investigation to include the primary pre-election polls conducted in South Carolina, California and Wisconsin (p. 16 of the report explains why).   On March 4, 2008, AAPOR President Nancy Mathiowetz sent a six page request to the 21 organizations that had released public polls in the four states, including 11 that had polled in New Hampshire.

The request (reproduced on pp. 83-88 of the report) had two categories: "(1) information that is part of the AAPOR Standards for Minimal Disclosure and (2) information or data that goes beyond the minimal disclosure requirement." The first category included items typically disclosed (such as survey dates, sample sizes and the margin of error), some not always available (including exact wording of questions asked and weighting procedures) and some details that most pollsters rarely release (such as response rates). The second category of information beyond minimal disclosure amounted to the 2008 equivalent of the "opening of files" from 1948. Specifically, they asked for "individual level data for all individuals contacted and interviewed, records about the disposition of all numbers dialed, and information about the characteristics of interviewers."

The Committee had originally hoped to complete its report in time for AAPOR's annual meeting in May 2008, but by then, as committee chair Michael Traugott reported at the time, only five firms had responded to the request (the first to respond, Mathiowetz tells me, was SurveyUSA, which provided complete electronic data files for the two states it polled on April 8, 2008). In fairness, many of the pollsters had their hands full with surveys in the ongoing primary battle between Barack Obama and Hillary Clinton. Nevertheless, when I interviewed Traugott in May, he still hoped to complete the report in time for the conventions in August, but as cooperation lagged, the schedule slipped once again.

By late November 2008, with the elections completed, some firms had still not responded with answers to even the "minimal disclosure" questions asked back in March. At that point, Mathiowetz tells me, she filed a formal complaint with AAPOR's standards committee, alleging violations of AAPOR's code of ethics. Since the standards evaluation committee has not yet completed its work, and since that committee is bound to keep the specifics of such complaints confidential, Mathiowetz could not provide further details. However, she did say that some pollsters supplied information subsequent to her complaint that the Ad Hoc Committee included in last week's report.

So now that the report is out, let's use the information it provided to sort the pollsters into three categories:

The best: Seven organizations, CBS News/New York Times, the Field Poll, Gallup/USA Today, Opinion Dynamics/Fox News, Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR provided complete "micro-data" on every interview conducted. These organizations lived up to the spirit of the 1948 report, opening up their (electronic) files and, as far as I can tell, answering every question the AAPOR committee asked. They deserve our praise and thanks.

The worst: Three organizations -- Clemson University, Ron Lester & Associates/Ebony/Jet and Strategic Vision -- never responded.

The rest in the middle: Eleven organizations -- American Research Group (ARG), Datamar, LA Times/CNN/Politico, Marist College, Mason-Dixon/McClatchy/MSNBC, Public Policy Polling (PPP), Rasmussen Reports, Research 2000/Concord Monitor, RKM/Franklin Pierce/WBZ, Suffolk University/WHDH and Zogby/Reuters/C-Span -- fell somewhere in the middle, providing answers to the "minimal disclosure" questions but no more.   

The best deserve our praise, while those that evaded all disclosure deserve our scorn. But what can we say about the pollsters in the middle?

First, remember that their responses met only the "minimal disclosure" requirements of AAPOR's code of ethics. They provided the "essential information" that the pollsters should include, according to AAPOR's ethical code, "in any report of research results" or at least "make available when that report is released." In other words, the middle group provided information that pollsters should always put into public domain along with their results, and not months later or only upon request following an unprecedented polling failure.

Second, consider the way that minimal cooperation hindered the committee's efforts to explain what happened in New Hampshire, especially on the question of whether a late shift to Senator Clinton in New Hampshire explained some of the polling error there. That theory is popular among pollsters (yours truly is no exception), partly because of the evidence -- most polls finished interviewing on the Sunday before the primary and thus missed reactions to Clinton's widely viewed "emotional" statement the next day -- and partly because the theory is easier for pollsters to accept, as it lets other aspects of methodology off the hook. The problem wasn't the methodology, the theory goes, just a "snapshot" taken too soon.

While the committee found evidence that several other factors influenced the polling errors in New Hampshire, they concluded that late decisions "may have contributed significantly." They based this conclusion mostly on evidence from two panel-back surveys -- conducted by CBS News and Gallup -- that measured vote preferences for the same respondents at two distinct times. The Gallup follow-up survey was especially helpful, since it recontacted respondents from their final poll for a second interview conducted after the primary.

Although the evidence suggested that a late shift contributed to the problem, the committee hedged on this point because, as they put it, "we lack the data for proper evaluation." Did more data exist that could shed light on this issue? Absolutely.

First, four pollsters continued to interview on Monday. ARG, Rasmussen Reports, Suffolk University and Zogby collectively interviewed approximately 1,500 New Hampshire voters on Monday, but the publicly released numbers combined those interviews with others conducted on Saturday and Sunday. The final shifts these pollsters reported in their final releases were inconsistent, but none of the four ever released tabulations that broke out results by day-of-the-week, and all four refused to provide respondent level data to the AAPOR committee.

That omission is more than just a missed opportunity. It also leaves open the possibility that at least one pollster -- Zogby -- was less than honest about what his data said about the trend in the closing hours of the New Hampshire campaign. See my post from January 2008 for the complete details, but the last few days of Zogby's tracking numbers simply do not correspond with his characterization of that data the day after the primary. Full cooperation with the AAPOR committee would have resolved the mystery. Zogby's failure to cooperate should leave us asking more troubling questions.

But it was not just the "outlaw pollsters," to quote David Hill, that failed to share important data with the AAPOR committee. Consider the Marist Poll, produced by the polling institute at New York's Marist College. Marist is not a typical pollster. Its directors, Lee Miringoff and Barbara Carvalho, are long-time AAPOR members. More important, Miringoff is a former president of the National Council on Public Polls (NCPP), and both Miringoff and Carvalho currently serve on its board of trustees. NCPP is a group of media pollsters that has its own, slightly less stringent disclosure guidelines that nonetheless encourage members to "release raw datasets (ASCII, SPSS, CSV format) for any publicly released survey results."

The day after the New Hampshire primary, Marist reported its theories about what went wrong and promised "to re‐contact in the next few days the voters we spoke with over the weekend to glean whatever additional insights we can." Seven weeks later, Miringoff participated in a forum on "What Happened in New Hampshire" sponsored by AAPOR's New York chapter and shared some preliminary findings from the re-contact study. "Our data," he said, "suggest there was some kind of late shift to Hillary Clinton among women."   

Given the importance of that finding, the academic affiliation of the Marist Poll, Miringoff's role as a leader in NCPP and that organization's stated commitment to disclosure, you might think that Marist would be first in line to share its raw data with the respected scholars on the AAPOR committee.

You might think that, but you would be wrong.

As of this writing, the Marist Institute has yet to share raw respondent level data for either their final New Hampshire poll or the follow-up study. In fact, the Marist Institute has not yet shared any of the results of the recontact study with Professor Traugott or the AAPOR committee -- not a memo, not a filled-in questionnaire, not a Powerpoint presentation...nothing.

I was surprised by their failure to share raw data, so I emailed Miringoff for comment. His answer:

First, we did provide information on disclosure as required by AAPOR and I spoke, along with Frank Newport, on the NH primary results at a meeting of NYAAPOR. It was a great turnout and provided an opportunity to discuss the data and issues.

Unfortunately, the "information on disclosure" they provided was, again by AAPOR standards, the minimum that any researcher ought to include in any publicly released report. To be fair, Marist had already included much of that "minimal disclosure" information in their original release. According to Nancy Mathiowetz, however, Marist did not respond to her requests -- filling in information missing from the public report such as the order of questions, a description of their weighting procedure and response rate data -- until November 17, 2008. And that transmission said nothing at all about the follow-up study.

Miringoff continued:

Second, we did conduct a post-primary follow-up survey to our original pre-primary poll. We think both these datasets should be analyzed in tandem. We are preparing them to be included at the Roper Center along with all of our pre-primary and pre-election polling from 2008 for anyone to review.

What's the hurry?

I am not sure what is more depressing: That a group of "outlaw pollsters" can flout the standards of the profession with little or no fear of recrimination or that a former president of the NCPP can so blithely dismiss repeated requests from AAPOR's president with little more than a "what me worry" shrug. Does it really require 14 months (and counting) to prepare these data for sharing?

Just after the primary, I let myself hope that the pollsters of 2008 might follow the example of the giants of 1948, put aside the competitive pressures and open their files to scholars. Fortunately, the survey researchers at CBS News, the Field Poll, Gallup, Opinion Dynamics, PPIC, SurveyUSA and the University of New Hampshire (and their respective media partners) did just that. For that we should be grateful. But the fact that only 7 of 21 organizations chose to go beyond minimal disclosure in this case is profoundly disappointing.

The AAPOR Report is a gift for what it tells us about the state of modern pre-election polling in more ways than one. The question now is whether polling consumers can find a way to do something about the sad state of disclosure this report reveals.

Correcting the Correction: I had it right the first time. The CBS News/New York Times partnership conducted their first New Hampshire survey in November 2007, but CBS News was solely responsible for the panel-back study. The original version of this post incorrectly identified the CBS News New Hampshire polling as a CBS/New York Times survey. While those organizations are partners for many projects, the New York Times was not involved in the New Hampshire surveys.


AAPOR Releases More Details on Burnham Censure


In early February, the American Association for Public Opinion Research (AAPOR) censured Dr. Gilbert Burnham, a faculty member at the Johns Hopkins Bloomberg School of Public Health, for violating the AAPOR ethical code by failing to disclose "essential facts about his research," a study (pdf) of civilian deaths in Iraq originally published in the journal Lancet.

When I blogged the story, one reader asked for more specifics about what exactly Burnham had failed to disclose. The response from AAPOR's standards chair, Mary Losch, was a somewhat vague summary: "Included in our request were full sampling information, full protocols regarding household selection, and full case dispositions -- Dr. Burnham explicitly refused to provide that information for review."

On Tuesday, AAPOR's executive committee issued a statement (pdf) with more specifics on what they requested and how Burnham responded:

As part of the investigation, the AAPOR Standards Chair requested information from Dr. Burnham. The specific requests related to AAPOR’s finding of violation of minimum disclosure were as follows:

1. The survey sponsor(s) and sources of funding for the survey.

2. A copy of the original questionnaire or survey script used in the 2006 survey, in all languages into which it was translated.

3. The consent statement or explanation of the survey purpose.   

4. A full description of the sample selection process, including any written instructions or materials from interviewer training about sample selection procedures.

5. A summary of the disposition of all sample cases.

6. How were streets selected? How were the starting street, and the starting household, selected? Once the starting point was selected, how were interviewers instructed to proceed (e.g., when they came to an intersection)? How were houses and respondents chosen at housing units?

7. The survey description says that, “The interview team were given the responsibility and authority to change to an alternate location if they perceived the level of insecurity or risk to be unacceptable.” In how many clusters did the team change location, and what were the reasons for the changes?

8. The survey description says that, “Empty houses or those that refused to participate were passed over until 40 households had been interviewed in all locations.” Were such cases included in the number of not-at-home and refusal cases counted in each cluster?

Dr. Burnham responded with the following information related to the detailed request:

• “This study was carried out using standard demographic and household survey methods.”

• “The methods we employed for this study were set out in the Lancet paper reporting our findings (Lancet, 2006;368:1421-28). The dataset from the study was released some time ago.”

Despite repeated requests from the AAPOR Standards Chair for the information detailed above, Dr. Burnham refused to provide any additional information. He did not indicate that the information was unavailable, nor did he suggest that disclosure of this information would risk revealing the identities of survey participants.

Keep in mind that AAPOR asked Burnham to disclose these details to their standards committee as a part of a confidential inquiry. They were not asking him to make these details public, at least not at that stage of their investigation. They have not provided information on the nature of the original complaint made by an AAPOR member, which may have involved aspects of the research other than disclosure. Either way, the AAPOR code is very clear about a researcher's obligation to disclose such details, on request. Failure to disclose is grounds for censure.

Mary Losch will present an overview of AAPOR's code and the Burnham case and will be available for questions today at 3:00 p.m. at an event sponsored by AAPOR's DC Chapter (more details at DC-aapor.org).

Interests disclosed: I am an AAPOR member and served on AAPOR's Executive Council for two years, from May 2006 to May 2008, but was not involved in the Standards Committee's investigation of the Lancet study.


Disclose This!


In case you haven't seen it yet (via Jim Carroll of the Courier Journal, via TPM & Atlantic Politics), Kentucky's Republican Senator Jim Bunning had a classic response when asked about an internal poll he apparently conducted on his 2010 reelection bid:

"Let's say I did the polling," the senator told reporters on a conference call this morning.

What does that mean?

"That means it's none of your g--d--- business," Bunning said, who then followed up with a laugh. "If you paid the 20 grand for the poll, you can get some information out of it."

In fairness, Bunning is under no obligation, ethical or otherwise, to disclose the results of an internal poll. Results from internal campaign polling are typically not disclosed, although it is likely that Bunning would share good news if it helped bolster perceptions of his current standing. Bunning was ready with an answer:

Asked if people could infer he was not happy with the results, Bunning replied: "You are going to infer any damn thing you choose, so why should I try to influence it?...I'm not going to say a word. So you can only speculate."


The Demographics of Texas Polls


Last fall, I asked all of the organizations that conducted Iowa Caucus surveys to disclose the demographic composition of their samples and several other aspects of their methodology. Although cooperation was mixed, our "disclosure project" demonstrated how polls that are theoretically reporting on the same population of "likely voters" can sample very different kinds of people.

With so much attention focused on Ohio and Texas this week, I thought it would be worthwhile to attempt a less ambitious version. Earlier this week, I asked all the pollsters that had released surveys in either state in recent weeks to disclose some details of their samples' demographic composition and to estimate the percentage of adults that their surveys represent.

While new polls have been appearing every day, I want to report the responses so far, starting with Texas.

The demographic mix is especially important in Texas given the large percentages of both African American and Latino voters there. Fortunately, in this case at least, we now have a fairly complete look at how these polls of "likely Democratic primary voters" differ demographically. When the data was not already in the public domain, I received quick cooperation (in Texas) from the pollsters at Washington Post/ABC News, Constituent Dynamics, Hamilton Campaigns and Public Policy Polling (PPP). Also, an encouraging number of pollsters have included demographic profile data in their Texas releases, including several that are typically more reticent, such as ARG and Rasmussen Reports. And thanks to the Houston Chronicle, even the Zogby/Reuters/C-SPAN poll helped make the world a "better place" by making cross-tabs featuring demographic composition available on Chron.com.


03-01 texas demos4.png

As in Iowa, the results show considerable variation, particularly on the Latino or Hispanic percentage of the samples, which varies from a low of 24-26% (ARG) to a high of 39% (Post/ABC). Other categories also show wide variation, including the percentages of African Americans (from 14% to 23%), women (from 51% to 58%) and voters over 65 years of age (from 15% to 26%; comparisons by age categories are especially difficult, since no two pollsters report exactly the same age breaks [Update: At "Joe's" suggestion, I've added some additional age breaks where available]).

I have included comparable numbers from the 2004 exit poll,* although we will not know what the "right" answer is until the votes are cast and results from this year's exit poll are available.

If they had not yet done so in their public release, I also asked pollsters to estimate the percentage of Texas adults represented by their samples, which is a decent measure of how tightly they screened for likely voters. Turnout in the Texas Democratic presidential primary was just 6% of eligible adults in 2004 and 9% in 2000 (calculated as a percentage of all adults, including non-citizens, those figures would be 5% and 8%, respectively).


03-01 Texas adults percentage.png

Even though the following table includes results for just four pollsters, the range of adults represented** is huge, from a low of 7-8% for the Texas Credit Union League (TCUL)/Hamilton Campaigns/Public Opinion Strategies poll to a high of 40% on the first poll from SurveyUSA. The TCUL poll obviously comes closest to past turnout, although turnouts have been much higher in other states so far this year than in 2004. The ABC News release concedes that an actual turnout of 24% of adults is "unlikely" but reports that "vote preference results are similar in likely voter models positing much lower turnout."

Next, Ohio... and then after posting all the statistics, I'll come back and speculate about what they might mean for what everyone cares about: where the race stands heading into the final weekend. Given the time crunch, I put these tables together quickly. So if you spot a typo or can help fill in a blank that I've missed please send an email (to questions at pollster dot com).

Footnotes:

*UPDATE: All of the 2004 exit poll results in the table above are from the final weighted data available from the Roper Center archives. Some of the percentages differ slightly from those posted on election night 2004 by CNN and still available online. The difference is likely due to final weighting done after 10:43 p.m. on March 9, 2004, the time the CNN tables were last updated. An earlier version of the table posted above was based, in part, on the CNN results.

**For the ABC/Post, CNN and SurveyUSA polls, we estimate the percentage of adults represented by dividing the number of interviews conducted among likely primary voters by the number of adults interviewed. Since those samples of adults will include non-citizens, and since non-citizens are 12% of the Texas adult population, I calculated a range for the two polls -- PPP and TCUL -- that sample from lists of registered voters (see "Update II" of this post for more explanation).
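To make that arithmetic concrete, here is a minimal sketch in Python of both calculations. The counts used below are hypothetical placeholders, not figures from any of the polls above; only the 12% non-citizen share comes from the footnote.

```python
# A minimal sketch of the adults-represented arithmetic described in the footnote.
# The counts below are hypothetical placeholders, not figures from any actual poll.

def share_of_adults_rdd(likely_voters: int, adults_interviewed: int) -> float:
    """For polls that screen down from a random sample of adults (e.g. RDD),
    the share of adults represented is likely voters / adults interviewed."""
    return likely_voters / adults_interviewed

def share_of_adults_from_voter_list(target_voters: float,
                                    adult_population: float,
                                    noncitizen_share: float = 0.12) -> tuple[float, float]:
    """For polls drawn from registered-voter lists, the sample excludes non-citizens,
    so report a range: against all adults, and against citizen adults only."""
    citizen_adults = adult_population * (1 - noncitizen_share)
    return target_voters / adult_population, target_voters / citizen_adults

# Hypothetical RDD-style poll: 600 likely primary voters screened from 2,500 adults.
print(f"RDD-style share of adults: {share_of_adults_rdd(600, 2500):.0%}")

# Hypothetical list-based poll targeting 1.1 million voters in a state of 17 million adults.
low, high = share_of_adults_from_voter_list(1_100_000, 17_000_000)
print(f"List-based share of adults: {low:.1%} to {high:.1%}")
```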

[Table updated on 3/1 to include Below/WFAA/Public Strategies surveys].


A New York Times Op-Ed...and an Epilogue


I am honored to report that I have an op-ed column in today's New York Times on a subject that will be familiar to regular readers: The need for better disclosure of methodological details by pollsters. I hope you will go read it all.

As it happens, John Zogby provides an epilogue on his hugely inaccurate California survey in his post-Super Tuesday commentary:

About California: Some of you may have noticed our pre-election polling differed from the actual results. It appears that we underestimated Hispanic turnout and overestimated the importance of younger Hispanic voters. We also overestimated turnout among African-American voters. Those of you who have been following our work know that we have gotten 13 out of 17 races right this year, and so many others over the years. This does happen.

So now he tells us. Although, if you notice, he is still not ready to disclose the racial and ethnic composition of his California survey. By how much, exactly, did they "underestimate" Hispanic turnout and "overestimate" the contribution of younger Hispanics and African Americans? He did not make these details available in his survey release on Tuesday (at least not to non-subscribers), and is apparently not making them available now. SurveyUSA, Field, McClatchy/Mason Dixon, and Suffolk University did report demographic composition details. That ought to tell us something.

Incidentally, it is also worth noting that while the results of the final SurveyUSA poll nailed the final ten-point Clinton margin, and the "sturdy" 13-point Obama lead forecast by Zogby never materialized, in Missouri the roles were reversed. In the final hours, SurveyUSA showed Clinton leading by eleven points (54% to 43%), while Zogby gave Obama a slight advantage (45% to 42%). Obama won 49% of the actual vote to 48% for Clinton.

The lesson?: Better disclosure puts us in a better position to understand and interpret the data, but all pollsters are fallible and all polls are subject to error (random and otherwise).


Times/Bloomberg IA Poll - What % of Adults?


Here are some additional details on the new Los Angeles Times/Bloomberg poll in Iowa. The last Times/Bloomberg poll in September drew a sample of "caucus voters" that represented a much larger slice of the Iowa population than other polls. The Democratic sample represented 39% of Iowa adults, while the Republican sample represented 29% of adults. While this statistic varied greatly among pollsters, most have reported "likely caucus goer" samples representing a range of 9-17% of Iowa adults for the Democrats and 6-11% for the Republicans (see the second table in my Disclosure Project post).

For this most recent survey, the Times release did not report the percentage of adults represented by each sample, but they did provide the unweighted sample sizes for the four different Iowa subgroups they released. All four are considerably closer to the low-incidence samples reported by most of the other pollsters that have disclosed these methodological details, although even the smaller Democratic "likely caucus goer" sample (17% of adults, unweighted) appears to be on the high side of what other pollsters reported to our Disclosure Project.

12-28%20times%20bloomberg.png

I put "appears to be" in italics above because the more accurate weighted values may be different. The methodology blurb in the Times release implies that the weighted size of each sample may be slightly smaller. Though unclear on the details, they say they "designed" their sample to " yield greater numbers of voters and thus a larger pool of likely caucus goers for analysis." That design may mean that the weighted value of the caucus voter and likely caucus-goer samples may be slightly smaller. I emailed a request for the weighted values and, as of this writing, have not received a response.

Update: Just received a response and added the weighted values to the table above. The weighting does bring down the size of the two "likely caucus goer" subgroups slightly, to 15% for the Democrats and 7% for the Republicans.

Update 2: "So what does this mean?" Two commenters ask that question, so I obviously neglected to explain. For those interested in all the details, the complete context can be found in this section of my Disclosure Project results post. The key issue is that the previous historical highs for caucus turnout are 5.5% of adults for the Democrats in 2004 and 5.3% of adults for the Republicans in 1988. Pollsters are generally not trying to screen all the way down to a combined 11% of adults, since (a) no one knows what turnout will be next week, (b) low incidence screens cannot select truly "likely" caucus goers with precision and (c) all political surveys presumably have some non-response bias toward voters (on the theory that non-voters are less interested and are more likely to hang up).

On the other hand, I consider it highly questionable to report results representing 68% of adults as representative of "caucus voters" as the Times/Bloomberg survey did in September.

So the results above mean two things. First, the latest Times/Bloomberg surveys are a vast improvement in terms of the portion of Iowa adults they represent. Second, at least in theory, the "likely caucus goers" are the more appropriate subgroups to watch. Of course, the percentage of adults sampled is just one aspect of accurately modeling the likely electorate. The kinds of voters selected are just as important, and can vary widely across polls that screen to the same percentage of adults. See the full Disclosure project post for more details.


Re: Disclosure Project: Results from Iowa


Last week's Disclosure Project report produced two good questions worthy of follow-up.

Q: Given the almost complete lack of overlap in the way pollsters are defining likely caucus goers, how useful are poll averages?

Good question. Averaging polls with differing methodologies is always a bit risky if those differences affect the results in a big way. Simple averages of the most recent polls can get distorted when one "outlier" value enters the average. That's one reason why we have greater confidence in our regression trend lines. Because they draw on all of the available data rather than just a handful of recent polls, they are less likely to be thrown off by a single odd value. But Iowa is a tough case because, as the reader understands, the selected "likely voter" universes are so different, and because those differences affect the results.

It may be helpful to think of this process like a game of darts. Suppose twenty people all threw darts at a bullseye. Some throws would be more accurate, some less so. If we imagine that we could see only where the darts landed (but not the target) and then picked the center point of the pattern of throws, that point would probably be pretty close to the bullseye.

In a sense -- and like all metaphors, this one is imperfect -- that's why poll averaging works. When all of the pollsters are aiming for roughly the same target (or the same universe of "likely voters"), the average of their efforts typically gets us closer to reality than any one poll, largely because of the inherent random variability that affects individual surveys.

But what if those "throwing the darts" can't see the target and are guessing at its location? What if some aim carefully while others throw carelessly? What if some players guess at the target by looking at the throws made by other players? In that case, the mid-point of the various throws may be off completely.

And that is the fear with polling in Iowa. If the average of the pollsters' guesses about the size and characteristics of the likely caucus-goers is about right -- even with all the obvious variation -- then averages or trend lines based on the combined results will get us closer to reality than the individual polls. However, if the consensus "best guess" about the pool of likely caucus goers is way off, then we may be in for a big surprise on January 3.
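For readers who like to see the dart-board logic spelled out, here is a rough simulation sketch with entirely made-up numbers: when poll errors are purely random, the average lands near the truth; when the polls share a common bias (everyone guessing at the same wrong "likely voter" target), averaging cannot remove it.

```python
# A rough simulation of the dart-board argument. All numbers are hypothetical.
import random

random.seed(0)
TRUE_SUPPORT = 0.30          # hypothetical true caucus-night support for a candidate
N_POLLS = 20
RANDOM_NOISE = 0.03          # poll-to-poll random error (sampling, house effects)
SHARED_BIAS = 0.04           # error common to all polls (a wrong turnout model)

unbiased = [TRUE_SUPPORT + random.gauss(0, RANDOM_NOISE) for _ in range(N_POLLS)]
biased = [TRUE_SUPPORT + SHARED_BIAS + random.gauss(0, RANDOM_NOISE) for _ in range(N_POLLS)]

avg = lambda xs: sum(xs) / len(xs)
print(f"Truth: {TRUE_SUPPORT:.1%}")
print(f"Average of unbiased polls: {avg(unbiased):.1%}  (close to the truth)")
print(f"Average of polls with a shared bias: {avg(biased):.1%}  (off by roughly the bias)")
```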

Q: So, as a close reader of the polls, where do you think the Democratic race in Iowa stands today?

This is a tougher question, but obviously the one that everyone is asking.

The safest thing we can say is that polling in Iowa represents too blunt an instrument to tell us with any precision who would be ahead if the caucuses were held today. This has less to do with the statistical "margin of sampling error" than with both the wide divergence in likely caucus goers and the practical difficulties of modeling the caucus process. But let's look more closely at the results.

12-16 Iowa.png

Our chart for the Democrats, which draws a regression line through the cloud of results, currently shows Obama with an estimated 28.2%, Clinton with 26.7%, Edwards with 22.7% and other candidates running far behind. This result represents the rough consensus of all the polls, drawing on both recent results and the apparent trend over the course of the year. But a look at the range of results for each candidate on the chart, or in the table below, shows considerable variation in the margin between Obama, Clinton and Edwards.

12-17 Iowa Dem chart.png

Three recent polls released since December 1 -- by Research2000, Strategic Vision and Newsweek -- show margins in Barack Obama's favor of 9, 8 and 6 percentage points, respectively, over Hillary Clinton (though only the Research2000 result is large enough to be statistically significant in its own right, assuming a 95% confidence level). Four other surveys conducted during the same period -- by Diageo/Hotline, RasmussenReports, Mason Dixon/MSNBC/McClatchy and Zogby -- show either an exact tie or Clinton ahead by 2-3 statistically insignificant points.
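For those curious how one might check whether a lead of that size clears the bar of statistical significance, here is a back-of-the-envelope sketch. The formula is the standard one for the difference between two candidate shares measured in the same sample; the sample size and candidate shares below are hypothetical round numbers, since the actual n's of these polls are not listed here.

```python
# A back-of-the-envelope check on whether a reported lead is statistically significant.
# For two candidates' shares p1 and p2 measured in the same poll of n respondents,
# the standard error of the lead (p1 - p2) is sqrt((p1 + p2 - (p1 - p2)**2) / n).
import math

def lead_margin_of_error(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for the difference p1 - p2 from one multinomial sample."""
    se = math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)
    return z * se

# Hypothetical poll of 500 likely caucus goers showing a 33% to 24% lead.
p_leader, p_trailer, n = 0.33, 0.24, 500
moe = lead_margin_of_error(p_leader, p_trailer, n)
lead = p_leader - p_trailer
print(f"Lead: {lead:.0%} +/- {moe:.1%} -> "
      f"{'significant' if lead > moe else 'not significant'} at 95% confidence")
```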

Does methodology explain the apparent divergence in these results? Perhaps. Newsweek's sample represents a larger share of Iowa's adults (24%) than most of the other polls that disclosed their Iowa methodologies. However, both Research2000 and Strategic Vision have failed to disclose comparable details about their methodologies, so we cannot be certain. It is entirely possible that these three polls sampled a broader slice of the Iowa population than the other pollsters.

If so, these results are generally consistent with what cross-tabulations show within individual surveys: Obama should do better the more the samples include younger voters, first-time caucus goers and independents.

So what do we make of this? The three frontrunner campaigns can all make a reasonable case why polls have been systematically under-representing their true caucus night strength. Obama supporters argue that polls aiming to replicate past turnout are missing their younger, first-time supporters. Clinton and Edwards supporters argue just the opposite: that polls are including more younger, independent voters than have voted in past caucuses. The Edwards campaign also argues that, given the 15% threshold requirement to win delegates, its organizational advantage and supposed strength in rural Iowa will add 2-3 points to the actual results as compared to his poll standing.

If I had to guess, I would say there is some truth to all three arguments, and that they may effectively cancel each other out. So even if the pollsters are far apart in their individual "models" of the likely electorate, their collective average may be close to reality, and the overall average suggesting a very close race is probably right.

But it may not be. It is always possible that they are all (or mostly) aiming at the wrong target, a possibility that makes this entire exercise so terrifying to pollsters and so interesting to everyone else.


Disclosure Project: Results from Iowa


It is time -- actually long past time -- to summarize the returns from the Pollster.com "Disclosure Project." Back in September I declared my intent to request disclosure of key methodological details from pollsters doing surveys in Iowa, New Hampshire, South Carolina and the nation as a whole. I sent off the first batch of requests to the Iowa pollsters, and then began a long slog, delayed both by other activity and, frankly, by a surprising degree of resistance from far too many pollsters. The result is that now, nearly three months later, I can report results from Iowa only.

I should note that many organizations (particularly ABC/Washington Post, CBS/New York Times, Los Angeles Times/Bloomberg, the Pew Research Center, Rasmussen Reports and Time/SRBI) either put much of the information into the public domain or responded within days (or hours) to my requests. With others, however, the responses were slower, incomplete or both. A few asked for more time or assured me that responses were imminent, yet ultimately never responded despite repeated requests. Sadly, such is the state of disclosure in my profession, even upon request.

So while the results described below are far from a complete review of all the polls in Iowa, they do tell a very clear story: No two Iowa pollsters select "likely caucus goers" in the same way. Moreover, each pollster has a unique conception -- sometimes radically unique -- of the likely electorate.

This post is a bit long, so it continues after the jump...



HuffPo's OffTheBus Polling Project


This morning, the Huffington Post announced its OffTheBus Polling Project, which aims to scrutinize pollsters -- "an industry devoted to scrutinizing us" -- by creating a forum for survey respondents to report and share their experiences. Here is how Arianna Huffington describes it:

Our aim is simple: to get a better understanding of how polling is being used across the country. We want to get to the bottom of how pollsters conduct their surveys, how they gather and build their stats, how they target who they contact, and, ultimately, how they reach their conclusions -- conclusions that often fuel the very races they are supposed to be analyzing.

We are launching this non-partisan effort to examine the polling industry with a wide variety of co-sponsors reaching across the political spectrum, including: Talking Points Memo, Instapundit, Politico, The Center for Independent Media, The Nation, Pajamas Media, Mother Jones, WNYC Radio, My Silver State, and Personal Democracy Forum.

Our methods are simple and direct, and stress transparency - the key ingredient missing from a lot of polling data. With the help of our co-sponsors we are looking to ask as many people as we can reach to share their polling experiences via this form, telling us exactly how they have been polled. Who called them? At what time? Did they agree to participate in the poll or refuse to (one of the least transparent aspects of polling continues to be the refusal of most polling companies to release response rates, which have plummeted in recent years to around 30 percent)? What questions were they asked? Did the questions seem fair or were they worded in a way that seemed loaded? Did they feel like they were being targeted because of their age, gender, or ethnicity? Did the pollster seem to be guiding them toward a predetermined answer?


At Pollster.com, we share the goals of greater transparency and helping survey data consumers gain a better understanding of how polls are conducted and what the data mean. Greater transparency has great potential to improve surveys, and to help reduce the abuses of the sort we have seen in recent days. Those values are also at the core of our own Disclosure Project. As such, we have signed on as formal sponsors of the HuffPost's Polling Project, and encourage our readers to participate with their own experiences.

As someone who earned his living for more than 20 years as a survey researcher, I believe the respondent is often forgotten by too many in our industry. After all, virtually every number on this site depended on respondents who donated their time to answer the pollsters' questions. So having a forum for respondents to report their experiences, both good and bad, should provide a way for pollsters themselves to get a sense of what they are doing well and what not so well. I am convinced that the Huffington Post is committed to creating a resource that is both non-partisan and itself transparent.

Those in the survey industry will remember the "Partnership for a Poll Free America" that Arianna Huffington led a few years ago, and may wonder why we are sponsoring a project led by the person who once imagined what would happen if "all 270 million of us collectively decided to hang up the next time some stranger from a polling company interrupted our dinner." The controversy led the American Association for Public Opinion Research (AAPOR) to invite Huffington to speak at its 2003 conference,** because as the conference chair put it,

I believe it is our responsibility to formulate answers [to Huffington's criticisms of polls] and to educate the public about the value and validity of our work, rather than essentially asking the public to give us the benefit of the doubt.

Moreover, as I heard it, Huffington's criticism pertained mostly to how polls are interpreted and used. She argued that pollsters need to be more transparent about their methods, that poll consumers too often fail to understand the limitations of surveys, and that politicians are too often "slaves to polls." I never agreed with her bottom-line prescription ("hang up on all polls"), as it threatened to disrupt a lot of vital non-political research, to say nothing of failing to distinguish the good from the bad of political polling. Still, I see much in her basic critique that I can agree with, and regardless, this latest Polling Project is a very positive step.

So we at Pollster.com enthusiastically support it and encourage our readers to check it out.

**Interests disclosed: I currently serve on AAPOR's Executive Council and attended Huffington's speech to the 2003 Conference.


Polling Nevada


I have been focusing heavily on the Iowa caucuses, both because our Disclosure Project started with polls there and because the competition, particularly on the Democratic side, is so intense. With a Democratic debate in Nevada tonight, we have two new polls of "likely voters" in the Nevada Democratic caucuses, from Zogby and CNN.** Their results are quite different, though for reasons that are probably explicable.

Both show Hillary Clinton leading, followed by Obama, Edwards and Richardson, in that order, but the percentages are very different. CNN shows Clinton leading Obama by 28 points (51% to 23%), with Edwards far behind (at 11%). Zogby shows Clinton with a narrower, 18-point lead over Obama (37% to 19%), with Edwards closer (at 15%).

The biggest obvious difference is that the CNN survey effectively pushed respondents harder for a choice. They show only 4% with no opinion, while the Zogby shows 17% as unsure. This is a very common source of variation across polls, leaving pollsters to debate which approach - pushing for a choice or allowing uncertain voters to register their indecision - is most appropriate when the election is still months away.

One likely contributor to that difference is that the CNN question includes the job title of each candidate ("New York Senator Hillary Clinton," "Former North Carolina Senator John Edwards"), which may frame the question a bit differently. Of course, since Zogby fails to disclose the full text of its vote question, we cannot know for certain.

But there is one other potential source of variation: How the pollster handles the expected low turnout. The CNN release tells us that they conducted 389 interviews with voters "who say they are likely to vote in the Nevada Democratic presidential caucus" out of a total sample of 2,084 adults. Thus, CNN screens rather tightly to identify a Democratic sample that represents 19% of Nevada adults. Once again, as Zogby fails to disclose it, we have no idea what portion of Nevada their sample represents (ditto for Mason-Dixon, ARG and Research 2000, the three other pollsters that have released Nevada surveys).

But at 19%, even the CNN survey may be a shot in the dark at the turnout in Nevada on January 19. In 2004, Nevada held traditional caucuses in mid-February that drew an estimated 9,000 participants (according to the Rhodes Cook Letter). That amounts to roughly one half of one percent (0.5%) of the state's voting age population at that time.
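The screen-tightness arithmetic here is simple enough to show directly. In the sketch below, the CNN figures come from their release; the Nevada voting-age population is an assumed round number chosen only to be consistent with the 0.5% turnout figure cited above.

```python
# Screen tightness for the Nevada example. CNN figures are from the release;
# the voting-age population is an assumption, not a figure from this post.

cnn_likely_dem_voters = 389
cnn_adults_sampled = 2_084
print(f"CNN screen: {cnn_likely_dem_voters / cnn_adults_sampled:.0%} of adults")  # ~19%

caucus_participants_2004 = 9_000
assumed_voting_age_pop = 1_800_000   # assumed, consistent with the ~0.5% figure above
print(f"2004 caucus turnout: "
      f"{caucus_participants_2004 / assumed_voting_age_pop:.1%} of adults")       # ~0.5%
```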

Of course, Nevada is switching to a party-run primary (the main difference being far fewer polling places). The states of Michigan and New Mexico have used a similar system, which produces a higher turnout than traditional caucuses (outside Iowa) typically get, but not much higher. The 2004 Democratic turnout, as a percentage of the voting age population, was 2.2% in Michigan and 7.3% in New Mexico (both events occurred a week before Nevada but a week after the New Hampshire primary).

So who turns out this time is anyone's guess. Will the voters sampled in these surveys bear any resemblance to those that turn out in Nevada on January 19? In size, at least, that seems very unlikely.

**Zogby has also released results for likely Republican caucus-goers. According to their release, CNN sampled likely Republican caucus-goers, but they have not yet released those results.


Re: Details on the New University of Iowa Poll


Sorry to blog one more time on the University of Iowa "Hawkeye" poll, but we want to clarify a few issues. Attentive readers may have noticed that we added the latest results from the Hawkeye poll to our Democratic and Republican presidential charts for Iowa, despite having excluded the poll from the charts previously and despite my words of caution about its methodology yesterday. Why the change?

The main reason for our prior exclusion had nothing to do with their method of sampling or likely voter selection. We left it out because it used an open-ended vote question, which required respondents to volunteer the name of the candidate they support. Earlier in the year, we were concerned that the idiosyncrasies of the open-ended vote question would skew our trend lines given the small number of polls available. At this point, however, the undecided percentages reported by the University of Iowa poll are in line with other recent surveys, and we now have sufficient polls that any one survey does not make a noticeable difference in the trends.

Our bigger concern, particularly with Iowa, was the perception that we were "drawing a line" in seeming to condemn one survey while including others of either largely unknown or questionable methodology. Yes, the University of Iowa survey has a relatively loose screen but it is not alone in that regard. And other pollsters continue to withhold the necessary details which would allow a fair comparison of all Iowa polls.

Our philosophy for Pollster.com is to track all polls that claim to provide representative, projective measurements of vote preferences and provide the tools and analysis to let readers sort out good from bad. So we decided that further exclusion of the University of Iowa poll serves no useful purpose.

But that brings me to the Disclosure Project. I have delayed an update in hopes that a few pollsters that have promised to do so would provide answers to our questions. But at this point, we have delayed too long. I will have a Disclosure Project update either later today or first thing tomorrow.


Details on the New University of Iowa Poll


As promised, we have a new Iowa poll today. But be sure you read the fine print below.

Let's start with the basic "poll update" that the estimable Eric Dienstfrey usually posts in this space. A new University of Iowa "Hawkeye" poll of 689 likely caucus goers in Iowa (conducted 10/17 through 10/24; summary, methodology, presentation) finds:

  • In an open-ended question where 306 Democrats had to volunteer, without prompting, the name of the candidate they are supporting, Sen. Hillary Clinton (at 29%) edges out Sen. Barack Obama (at 27%) in a statewide caucus; Sen. John Edwards runs at 20%, Gov. Bill Richardson at 7%, Sen. Joe Biden at 5%. All other candidates receive less than five percent each.
  • In an open-ended question where 282 Republicans had to volunteer the name of the candidate they are supporting, former Gov. Mitt Romney leads former Mayor Rudy Giuliani (36.2% to 13.1%) in a statewide caucus; former Gov. Mike Huckabee trails at 12.8%, former Sen. Fred Thompson at 11%, former Sen. John McCain at 6%. All other candidates receive less than five percent each.

Readers should consider that the methodology of this survey, as in August, is different from most of the other Iowa caucus surveys we have seen. According to Professor David Redlawski, who spoke at a Washington press briefing this morning, the October Hawkeye poll used essentially the same methodology as the August survey. That is, it used an open-ended vote question, the same screening questions and sampled from a list of telephone numbers drawn from listed telephone directories (i.e. not a registered voter list and not using a random digit dial methodology).

The summary posted by the Hawkeye pollsters includes information on their screening procedure. They report that the two "likely caucus goer" samples represented 55.8% of their "registered voter contacts" (that is, of adults who said they were registered to vote), that 58.6% of these said they would attend the Democratic caucus and 41.4% would attend the Republican* caucus.

If we assume that 87% of Iowa adults are registered to vote (1,970,110 "active" registered voters divided by 2,264,010 Iowa adults), that means that the Democratic sample represents 28% of Iowa adults and the Republican sample represents 20% of Iowa adults. [Correction: The screening information initially provided by the University of Iowa and quoted above was in error and skewed these calculations. The correct percentages of Iowa adults represented were 17% for the Democrats and 13% for the Republicans. See the update at the end of this post for more details].

The problem with that is that it projects to a "likely caucus goer" universe of nearly half the adults in Iowa - more than a million. The estimated Democratic turnout in 2004 was 124,000 - the previous all-time high was 126,000 in 1988. The all-time high for Republicans was 106,000, also in 1988. So this poll is sampling a considerably broader population of Iowa adults than has turned out to attend past caucuses.

So interpret these results in that context and with great caution. The trends observed by comparing the August and October Hawkeye polls are meaningful -- because they used the same methodology for both polls -- but apply only to the very broad population of Iowa adults sampled. It helps that the trends in this poll bear a resemblance to what we have seen lately in other Iowa polls, but we advise huge grains of salt before comparing the support for any particular candidate on this survey to that measured by any other survey.

*Typo corrected. Thanks James.

Update - 12/16/2007: The original PDF release put out by the University of Iowa that I used in making the above calculations included the following sentences:

Respondents were asked whether they were very likely, somewhat likely, not very likely, or not at all likely to attend their party’s caucus in 2008. Responses of “not at all likely” were screened out of the sample. Remaining respondents were further asked which party's caucus they would attend. Those unable to name which party were also screened out of the sample. Of registered voter contacts, 36.2 percent were eliminated on the initial screen. Another 8.0 percent were screened out because they could not name the party with whom they would caucus.

On December 13, 2007, The University of Iowa's Caroline Tolbert notified me via email that the paragraph above was incorrect. They subsequently revised the paragraph in their summary to read as follows:

Respondents were asked whether they were very likely, somewhat likely, not very likely, or not at all likely to attend their party’s caucus in 2008. Among all registered voters contacted, 37.2% said they were "not at all" likely to caucus, while another 21.3% said they were "Not very likely". These two groups were not considered "likely caucus goers". The remaining 41.5% said they were "Very Likely" (24.1%) or "Somewhat Likely" (17.4%) to caucus. A second screen then asked which party's caucus the voter planned to attend. Of the initial screen of likely caucus goers, 4.4% could not name a party, and were dropped. Approximately 35% of the original registered voter sample is thus classified as "likely caucus goers". Of the total original registered voter sample, about 19.1% are likely Democratic caucus goers and 14.4% are likely Republican Caucus Goers.

Since my calculations were based on the erroneous information included in the first release, they too were in error. The correct statistics, based on this new information, are as follows: The Democratic sample represented 17% of Iowa adults, the Republican sample represented 13% of Iowa adults.
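For anyone who wants to retrace the corrected arithmetic, the sketch below simply combines the registration figures quoted earlier in this post with the shares from the revised University of Iowa summary.

```python
# Reproducing the corrected calculation with the figures quoted above.
registered = 1_970_110            # "active" registered voters in Iowa
adults = 2_264_010                # Iowa adults
registered_share = registered / adults            # ~87%

dem_share_of_registered = 0.191   # likely Democratic caucus goers, per the revised summary
rep_share_of_registered = 0.144   # likely Republican caucus goers, per the revised summary

print(f"Democratic sample: {dem_share_of_registered * registered_share:.0%} of Iowa adults")  # ~17%
print(f"Republican sample: {rep_share_of_registered * registered_share:.0%} of Iowa adults")  # ~13%
```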


About that Des Moines Register Iowa Poll


NBC's First Read said it best: No Iowa poll "gets (and deserves) more attention than the Des Moines Register poll by ace Iowa pollster Ann Selzer." That reputation was earned, in part, from their final 2004 pre-caucus survey, the only public poll to correctly predict the rank order of the top four Democratic candidates. Political web sites have been buzzing since Sunday about their latest release, which to the credit of all involved includes a "methodology and questions" page that answers many of the questions asked by our Pollster Disclosure project. Today, Ann Selzer provides us with a few additional answers.

Their methodology page includes the full text of the substantive questions asked, plus a reasonably complete general description of how they selected "likely caucus goers." Follow the link for full details, but the gist is that they started with a random sample of telephone numbers drawn from "the Iowa secretary of state's voter registration list." They then interviewed those who said they would "definitely" or "probably" attend the caucuses on a question that offered those two choices plus one more ("probably not").

Selzer also informs us via email that their completed interviews included a small number of voters interviewed on their cell phones. They sent their original sample to a service that identified the known cell phone numbers among those provided by the secretary of state. Selzer dialed those numbers separately.

The data released on the Des Moines Register site did not address two questions we have been asking pollsters as part of our Disclosure Project. The first involves the percentage of adults represented by each sample. In other words, how tight was the screen?

Ann Selzer has provided an answer via email. I will spare you the wonky math: The Democratic sample represents roughly 12%, and the Republican sample 10%, of Iowa's voting age population.

While the Register did not include data on the demographic composition of their samples on their results pages, the Register's David Yepsen (via First Read), included some of this information in his Sunday column:

Among likely Democratic caucusgoers, 62 percent are women, and Clinton carries more of them - 34 percent - than any other candidate...

Only 2 percent of likely Democratic caucusgoers are under age 25, while 51 percent are over age 55. On top of that, only 23 percent of the Democrats say this will be their first caucus...

[T]he poll shows 49 percent of the likely Democratic attendees are from rural and small-town Iowa. Among Republicans, 54 percent say they live in those places...

Among likely Republican caucusgoers, 51 percent describe themselves as "born again" or fundamentalist Christians...

A majority of GOP caucusgoers - 58 percent - are men, a contrast with the 62 percent female majority among Democrats...

Among Democrats, 76 percent have at least some college or more and 56 percent of them earn more than $50,000 a year. Among Republicans, 80 percent have some college or more and 60 percent earn more than $50,000.

We will have more on returns from the Disclosure Project later in the week.


How Many Calls Per Minute?


Last week, the InsiderAdvantage poll released new surveys of 6,357 likely Republican primary voters in five states all conducted in just two nights, October 2-3, 2007. This feat prompted reader Chantal to ask some reasonable questions:

Maybe I'm missing something, but how many phone calls per minute do you have to make in order to get 1,339 likely Republican caucus attendees [in Iowa] over the course of just two nights? What kind of incidence rate are we talking about?

And this doesn't take into consideration the fact that InsiderAdvantage was also polling in four other states these evenings. Who is paying for this? Are these robopolls? Was the call center in the North Poll?

I forwarded Chantal's question to InsiderAdvantage CEO Matt Towery, along with a request to provide answers to our Disclosure Project questions for the Iowa survey.

Regarding the number of calls made, Towery replied on Saturday that he does not have "the exact number on a weekend, but clearly they are [in the] thousand[s]." He added, "we have a very high completion rate on these because we ask only a very few questions."

Towery did not mention that his surveys typically sample from lists of registered voters and make use of past vote history to help select "likely voters," so that they need to screen out relatively few contacted respondents.

How many interviewers would it require to complete 6,357 interviews with likely Republican primary voters? In the absence of a more specific answer from Towery, we can guess, but the answer will depend on a variety of issues involving how the poll was fielded: The exact length of the interview, how many "unlikely" voters they terminated, how many "call backs" they made to phone numbers yielding no answer on the first dial, whether they called during the day or just during early evening hours and whether they used a "predictive" auto-dialer that waits until a human being answers the phone before connecting an interviewer (something many pollsters avoid but that can certainly boost interviewer productivity).

Given the sort of incidence that InsiderAdvantage reported for their recent Florida survey and the variables mentioned above, a single interviewer might be able to complete anywhere from 5 to 15 interviews per hour. If we assume the more conservative estimate of 5 an hour, such a survey could require roughly 1300 interviewer hours. If we assume they dialed during evening hours only, the project would require somewhere between 100 and 150 interviewers. That's not an implausible number, especially if the interviews were farmed out to more than one call center. And obviously, any number of compromises in methodology (daytime interviewing, predictive dialers, and so on) could enable completion of a project like this with far fewer interviewers.
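As a rough sketch of how that estimate comes together, the short calculation below uses the conservative five-completes-per-hour figure; the length of an evening calling shift is my assumption, not anything reported by InsiderAdvantage.

```python
# The rough arithmetic behind the interviewer estimate above. The completion rate at the
# conservative end and the shift length are assumptions, not InsiderAdvantage figures.

total_interviews = 6_357
completes_per_interviewer_hour = 5        # conservative end of the 5-15 range discussed above
field_nights = 2
hours_per_evening_shift = 4.5             # assumed length of an evening calling window

interviewer_hours = total_interviews / completes_per_interviewer_hour
interviewers_needed = interviewer_hours / (field_nights * hours_per_evening_shift)

print(f"Interviewer hours: ~{interviewer_hours:,.0f}")      # ~1,270
print(f"Interviewers needed: ~{interviewers_needed:,.0f}")  # ~140, within the 100-150 range
```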

As for the question of who is paying for the interviews, Towery had this reply:

We are, as I noted, owned by a holding company (Internet News Agency, LLC) which is comprised of investors including the family owners of one of the nation's largest privately owned newspaper chains, the largest privately held real estate development fund in the Southeast, as well as numerous other investors. We employ some of the region's top journalists such as Tom Baxter, former national editor and chief political correspondent for the Atlanta Journal-Constitution; Hastings Wyman, founder of the Southern Political Report in D.C.; Lee Bandy, 40 year political editor for The State newspaper in South Carolina and the like. I myself am syndicated by Creators Syndicate, the largest independent newspaper syndication company in the nation. We also have a non-political research/consulting divisions with clients primarily composed of Fortune 500-1000 publicly held companies, as well as large associations, such as the Florida Chamber of Commerce. We started in January of 2000 and were founded by a Democrat and a Republican. I hope this sheds some light on who we are and how and why we are able to poll so frequently.

Readers - does this information answer your questions?

PS: Other than the answers above, I have received no response from Insider Advantage to our Disclosure Project questions regarding their Iowa poll.


When Pollsters Attack: Epilogue


Nearly two weeks ago, just before we kicked off our Disclosure Project, InsiderAdvantage pollster Matt Towery used a syndicated column headlined "What's a Quinnipiac?" to attack the Florida polls conducted by the Quinnipiac University Polling Institute. Towery not only highlighted how his polls differed from a recent Quinnipiac survey but also commissioned Mason-Dixon Polling and Research to conduct a parallel poll to prove his point. Towery's unusually jocular broadside amounted to "one pollster whack[ing] the other upside the head," as Politico's Jonathan Martin put it.

"At the very least," Towery argued,

Quinnipiac numbers should stop being taken at face value as the paragon of accuracy in Florida. Somewhere in their methodology they continue to misread the state they claim to know so intimately.

When I looked at the four polls of Florida Republicans conducted recently by InsiderAdvantage, Quinnipiac and Mason-Dixon, the differences between them seemed explained mostly by the inclusion of Newt Gingrich as a candidate in the Quinnipiac poll and the fact that Fred Thompson's announcement occurred during the fielding of the Quinnipiac poll but before the others. Still, Towery's suggestion that the Quinnipiac differences might be found "somewhere in their methodology" led me to ask the same kinds of methodological questions we have been asking as part of our Disclosure Project. Their responses follow, and the difference in sampling methodology adds another possible explanation: Quinnipiac's sample of "registered Republicans" represents a population roughly four times the size of the "likely voters" surveyed by InsiderAdvantage and Mason-Dixon.

Interview Dates - One question I asked only of Quinnipiac was to provide the number of interviews conducted before and after Fred Thompson's announcement of candidacy. Doug Schwartz at Quinnipiac reports that 199 (or 45%) of their 438 interviews were conducted on or before the evening of September 5. Thompson declared his intentions later that night and received a burst of positive coverage in the week that followed. While the methodologies of these surveys differ, it is worth remembering that the other polls by InsiderAdvantage and Mason-Dixon were fielded in their entirety after September 5.

Sample Frame - Although the term is a bit wonky, one of the most important ways these polls differ is in what pollsters call the "sampling frame." Put more simply, the issue is the source for the random sample of voters called by each pollster.

Quinnipiac uses a random digit dialing (RDD) methodology that contacts a random sample of all the working landline telephone numbers in Florida and then uses screen questions to select a random sample of registered Republicans. By contrast, both InsiderAdvantage and Mason-Dixon select voters at random from the list of registered Republicans provided by the Secretary of State, using both actual vote history and screen questions to identify and interview "likely" Republican primary voters.

For more information on the debate about RDD versus list sampling, see our prior posts here and here.

How Did They Select Republican Registered or Likely Voters? - The pollsters at Quinnipiac provided a complete and relatively straightforward answer. They asked two questions about vote registration and party affiliation:

Some people are registered to vote and others are not. Are you registered to vote in the election district where you now live, or aren't you?

[IF REGISTERED] Are you registered as a Republican, Democrat, some other party or are you not affiliated with any party?

Both Towery and Brad Coker at Mason-Dixon were initially reluctant to describe the specifics of their likely voter selection procedures, citing the need to protect "proprietary" methods. After a bit of email back-and-forth, however, both were willing to describe their methods in general terms. Let's start with Coker, answering on behalf of Mason-Dixon:

Our sample design and screening method takes into account voter registration, party registration, past primary voting history and likeliness to vote in the primary. Other factors that were taken into account were the age, county, gender and race of the voting population based on previous Republican primary elections.

What that means -- as I read it -- is that Mason-Dixon uses information on past vote history on the voter list to draw a random sample of a subset of Republicans that they consider most likely to vote. They ask those sampled individuals questions on "likeliness to vote in the primary" and screen out unlikely voters. Finally, they weight the demographics of the final sample based on the demographics of voters in previous Republican primaries.

Towery reports using a similar procedure at InsiderAdvantage: "We do poll off of [a list of] registered voters, but we do then cull that number down based on a voting history that gives us a more likely voter sample." They then ask a screen question to identify likely primary voters: "are you likely to vote in the_____presidential primary to be held ____."

What Percentage of the Voting Age Population Did Each Poll Represent? - The calculation for the Quinnipiac poll is relatively straightforward: They report starting with a random sample of 1,325 adults and using the questions above to identify and interview 438 registered Republicans. So the Quinnipiac Republican sample amounted to 33% of Florida adults (438 divided by 1,325).

Again, Coker and Towery were initially reluctant about sharing specific numbers, but ultimately provided the information necessary to answer my question. Let's start again with Coker and Mason-Dixon:

The population we were trying to capture was the roughly 1 million Republican voters who will be most likely to vote in January. In a universe of approximately 3.8 million registered Republicans, we targeted a population of about 1.2 million Republican voters and had an incidence of 83%.

So Mason-Dixon interviewed a sample designed to represent approximately 1 million voters (1.2 million * .83) out of 14.2 million Florida adults, or 7.0% of Florida adults.

Next, Towery and InsiderAdvantage:

The [target] universe based on our sample system was around 1.6 million. This reflects the slightly higher than normal turnout you see in a Presidential primary. Incidence rate, based on data I just received was around 75%. Based on your description this would mean a final "universe" of around 1.2 million voters (all registered) which I believe reflects the likely turnout for a GOP Pres. primary turnout.

So InsiderAdvantage interviewed a sample designed to represent approximately 1.2 million voters, or 8.5% of Florida adults.
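Putting the three calculations side by side, the sketch below reproduces the incidence math using only the figures quoted above (including the 14.2 million Florida adults used for the Mason-Dixon calculation).

```python
# The three target-population calculations side by side, using the figures quoted above.
FLORIDA_ADULTS = 14_200_000

# Quinnipiac: 438 registered Republicans identified out of 1,325 adults contacted (RDD).
print(f"Quinnipiac: {438 / 1_325:.0%} of Florida adults")                          # ~33%

# Mason-Dixon: targeted ~1.2 million voters from the list, with 83% incidence.
print(f"Mason-Dixon: {1_200_000 * 0.83 / FLORIDA_ADULTS:.1%} of Florida adults")    # ~7.0%

# InsiderAdvantage: ~1.6 million targeted, 75% incidence -> ~1.2 million voters.
print(f"InsiderAdvantage: {1_600_000 * 0.75 / FLORIDA_ADULTS:.1%} of Florida adults")  # ~8.5%
```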

As should be obvious, Mason-Dixon and InsiderAdvantage sampled significantly narrower populations of Republican voters than Quinnipiac. Via email Brad Coker argues that a narrower "screen" is more appropriate to a pre-election poll aimed at projecting the preferences of likely voters: "I would question the validity of a poll of ‘registered Republican voters' simply on the grounds that 75% of those sampled probably won't be voting in January."

According to the Florida Secretary of State, the vote for Republican primary candidates totaled roughly 1 million in 2006 (Governor), 1.2 million in 2004 (Senate) and 700,000 in the 2000 presidential primary.

In response to my initial questions, Quinnipiac's Doug Schwartz sent this statement:

The methodology used by the Quinnipiac poll is similar to that of all the other major polling operations in the country. It has correctly predicted the outcome of every major race it has polled on in Florida during the past three years. For details on the methodology used, contact Doug Schwartz or visit http://www.quinnipiac.edu/x271.xml

That is true, but we should note that the final pre-election survey conducted by Quinnipiac in 2006 reported the results among "likely voters" rather than all registrants. So their primary voter "methodology" may shift as we get closer to Election Day. Pollsters continue to debate the merits of various likely voter models months prior to the election, something I covered in great detail in 2004 in the context of general elections. Putting that debate aside, however, the point is that the universes sampled in this instance are very different.

What Are the Demographics? - I asked each pollster to provide the results to demographic questions asked of their Republican samples. Both Quinnipiac and Mason-Dixon were quick to respond. The table below shows that the Quinnipiac sample is a bit younger. This is not surprising given that voters are typically older than non-voters.

10-02%20demographics.png

Both Quinnipiac and Mason-Dixon also included the regional composition of their samples. While their regions were not identical, their definition of South Florida came close. Mason-Dixon had fewer Republicans in their Southeast Florida region (18% in Palm Beach, Broward, Dade and Monroe Counties) than Quinnipiac (23%; although the Quinnipiac South Florida region also includes Hendry County, which accounts for just 0.1% of registered Republicans statewide).

This last difference is important because, according to the Mason-Dixon cross-tabulations that Coker also provided, Rudy Giuliani ran far ahead of Fred Thompson (33% to 11%; n=70) in Southeast Florida, but trailed Thompson narrowly elsewhere (22% to 26%; n=330). So the fact that Quinnipiac had a greater percentage of respondents in South Florida provides yet another explanation for Giuliani doing better statewide in their poll.

Coker also provided this information:

Since we have the Florida voter file, we know the precise demographic profile of those who have voted in previous elections (at least in terms of county, age, gender and stated race/ethnicity). Our sample matches it within 1-2% of the actual figures from the average of 2004 & 2006 GOP primary turn-outs. Deaths and out-migration could easily account for any differences.

Towery, on the other hand, was more reticent:

We don't give out our weighting percentages or our demographic regional breakdowns because those are proprietary and if we did so, it would be like Coke giving away the secret formula, well not that big, but important to us!

Which brings us back to the whole point of our Disclosure Project. We should congratulate all three pollsters for providing the "incidence" data necessary to help us answer, in essence, not just "What is a Quinnipiac?" (to borrow Towery's headline) but also "What is a Mason-Dixon?" and "What is an InsiderAdvantage?" As a result of their disclosure, we can see how different the "target populations" were and take those differences into account in assessing the results.

It took some coaxing, to be sure. Coker has previously refused similar requests on the grounds of protecting proprietary interests. Given his extensive experience in Florida (he tells me he has conducted more than 200 statewide polls in Florida since 1984), Coker was understandably reluctant to respond in this instance. So his cooperation here is noteworthy. Hopefully, other pollsters will follow his lead, because the general descriptions and incidence calculations provided above could be easily replicated by every pollster and released online for every poll. Similarly, a demographic composition table, like the one above, would be an easy addition to the online documentation virtually every pollster and news organization makes available for every poll.

On the other hand, Towery's "secret formula" dodge has a fundamental flaw. Coke need not give away its "secret formula" when it prints on every can, as required by law, a list of ingredients, the number of calories and the grams of carbohydrate and other nutrients contained in each serving. As should be obvious, our Constitution's First Amendment precludes the sort of mandatory labeling for pollsters that the FDA requires for food. However, pollsters like Towery ought to start thinking about how to better label their own products in terms of their sample composition, lest some snarky blogger ask, "What's an InsiderAdvantage?"


A Different Perspective on Disclosure


Last Friday, in my semi-regular "remainders" wrap of notable poll blogging for the week, I neglected to link to a column from Kathy Frankovic of CBS News that offers a different twist on our recent focus on disclosure.

She writes of the laws in "at least 30" other countries that prohibit the publication of pre-election poll results, but then also points out the unusual new law in Greece:

In Greece, however, the restriction on reporting pre-election polls was brand new, and it also carried disclosure requirements. A published opinion poll there has to be based on at least 1,000 interviews; and the questionnaire, the collected data and the survey report must be deposited with a special public committee.

After noting that the First Amendment of the U.S. Constitution would prevent any such prior restraint in our country, she considers that disclosure requirement:

The Greek law's requirement of disclosure is something that professional survey research organizations have long desired. The American Association for Public Opinion Research (AAPOR), the National Council on Public Polls (NCPP), and the World Association for Public Opinion Research (WAPOR), among others, require disclosure of information that would allow a reader or listener to judge the value of a poll.

However, these organizations also oppose government restrictions on publication of pre-election polls. (The Internet has made those restrictions more difficult to enforce. How could the French government enforce its law prohibiting the publication of poll results, if those results appeared on a Web site based in Switzerland?)

She goes on to consider the implications of a lack of pre-election polling in the last two weeks of the Greek campaign. It's worth reading in full.


More on the Newsweek Poll


Family obligations and a nagging cold virus kept me mostly off the grid this weekend while the blogs were abuzz over the latest Newsweek poll of Iowa likely caucus goers. So while late, let me add a few thoughts to those already offered elsewhere.

First, the margins of sampling error reported by Newsweek -- +/- 7% for the likely Democratic caucus goers and +/- 9% for the Republicans -- mean that statistically meaningful conclusions are all but impossible regarding Barack Obama's "slight edge" (28% to 24%) over Hillary Clinton. Strictly speaking, even Mitt Romney's 9 point advantage does not attain the usual 95% confidence level that pollsters require to describe a lead as "statistically significant."

Noam Scheiber wonders about what the pollsters could say about the probability of an Obama lead among likely caucus goers, if not 95%. My best guess (assuming that the reported margins of error were based on the usual 95% confidence level) is that the probability of an Obama lead based on the Newsweek poll is about 50%. In other words, the odds of Obama "leading" on this poll are no better than a coin-flip, if we were to take repeated samplings of exactly the same design.

But Matt Yglesias makes the more important point:

It seems to me that there's no real point in arguing about the significance of the rather large +/- 7 points margin of error on this Newsweek poll . . . For something like this, uncertainty about the likely voter screen are probably going to be a bigger problem than sampling error anyway.

He is exactly right. Since July we have seen 12 public polls released in Iowa by 9 different organizations, and each appears to define and sample the likely caucus-goer universe differently. To the extent that pollsters have revealed the details, their snapshots of the electorate are poles apart, to say nothing of the candidates that those voters support. A month ago, for example, I found the percentage of first-time caucus-goers reported on four different polls of Democrats varying from 3% to 43%, with Edwards doing worse (and Clinton better) as the percentage of newcomers increased. The Newsweek survey reports 36% of likely Democratic caucus goers saying "this would be your first caucus."

Unfortunately, the Newsweek release omits many of the same methodological details left out of the other Iowa polling releases (including, remarkably enough, the number of interviews conducted with likely Democratic and likely Republican caucus goers). I have emailed Newsweek's pollsters the same questions we sent last week to the other Iowa pollsters and will include their responses when we begin reporting on the Disclosure Project.

By the way, Yglesias also makes another important point: In a truly close race, the ultimate winner among the Democrats may depend on the second choices forced by the convoluted Caucus rules on those whose first choice fails to achieve "viability" (usually 15% of the vote) in their precinct. Remember that the official results for the Democrats will not be a head-count of the first preference of all caucus goers (as in a poll) but rather the estimated share of state delegates won by each candidate based on the final choices at the end of the night. So even if pollsters agreed on how to sample "likely caucus goers," the numbers would still be inconclusive in a close race.

Update: Slate's Christopher Beam, who called just before I wrote this item, has more.


Disclosure Project Update


Since kicking off our Disclosure Project on Monday, we are pleased to report some very favorable early mentions and links from a variety of bloggers including The Atlantic's Marc Ambinder, Time's Ana Marie Cox, USA Today's Memmott and Lawrence, Politico's Ben Smith, MSNBC's Clicked, DailyKos' DemFromCt, MyDD's Jerome Armstrong and Jonathan Singer, and The Democratic Strategist's Ed Kilgore. Other names you may not recognize have left comments or endorsed the effort on their blogs.

As of yesterday, we can add an important name to that list: My colleague Nancy Mathiowetz, the current president of the American Association for Public Opinion Research (AAPOR), left the following comment here at Pollster.com:

As President of the American Association for Public Opinion Research, I believe that this is an excellent opportunity for public opinion researchers to help improve the public understanding of polling methodology and interpretation.

Pollster.com is to be applauded for this effort.

More information from AAPOR about disclosure:

http://aapor.org/disclosuresfaqs

Regular readers will know that I serve with Mathiowetz on AAPOR's Executive Council, but it is nonetheless an honor to have her support.

I want to recognize the prompt and complete replies we have already received to our queries from the pollsters at ABC News/Washington Post, LA Times/Bloomberg and Time. Other organizations have requested more time to gather and report the data we requested. Given that we are doing something new here while also "debugging" our own process, we are going to allow as many organizations as possible to respond before publishing the first set of replies (and before sending out similar queries for polls done in New Hampshire, South Carolina and the nation as a whole).

I have also been in contact with the pollsters at Quinnipiac, Mason-Dixon and InsiderAdvantage regarding my post last Friday on their recent polls in Florida and will have more on that subject very soon.

I can say that all of those that have responded appear to be making a good faith effort to be transparent and, as Nancy Mathiowetz put it, "help improve public understanding of polling methodology and interpretation." For that we are grateful.

However, not all of the pollsters have responded to our queries, which is why your support is important. As Ana Marie Cox put it:

The questions will probably seem unbearably wonky to many, but the reason for wanting the answers is important: They'll help reporters (and readers) better judge the accuracy and importance of these primary state polls.

Or to quote DemfromCT from DailyKos:

The bottom line is that if you want better data to analyze, then we, the consumers of all things political, ought to support pollster.com in asking for it. And if we expect and appreciate the analysis done by pollster.com, Swing State Project, Open Left, Slate, Real Clear Politics or any of the other sites that digest and analyze polling data, let's help make the data a bit more "open source" and transparent.

If you can comment or blog your endorsement of this, we would greatly appreciate it.


On Plouffe's Memo and the Disclosure Project


A public memo circulated today by the Obama campaign and authored by campaign manager David Plouffe (via Marc Ambinder) argues that "Iowa is fundamentally a close three-way race with Obama, Clinton and Edwards all within the same range in most public polling." His characterization is reasonable, especially if one defines that "same range" as roughly seven percentage points wide. Our own trend estimates for Iowa based on all available public polls show Clinton and Edwards running a few points apart in the mid-twenties (Clinton does slightly worse and Edwards slightly better if we exclude the polls by the American Research Group) with Obama trailing at roughly 20%.

But Plouffe goes on to make an assertion that is harder to evaluate:

[P]olls consistently under-represent in Iowa, and elsewhere, the strength of Barack's support among younger voters for at least three reasons. In more than one survey, Barack's support among Iowa young voters exceeded the support of all the other candidates combined. First, young voters are dramatically less likely to have caucused or voted regularly in primaries in the past, so pollsters heavily under-represent them. Second, young voters are more mobile and are much less likely to be at home in the early evening and thus less likely to be interviewed in any survey. Third, young voters are much less likely to have a landline phone and much more likely to rely exclusively upon cell phones, which are automatically excluded from phone surveys. So all of these state and national surveys have and will continue to under-represent Barack's core support – in effect, his hidden vote in each of these pivotal early states. Of course, there are organizational challenges associated with maximizing this support, but we are heavily focused on that task.

Each of Plouffe's three arguments is at least theoretically plausible, particularly in Iowa, but hard to prove or disprove conclusively with the data available.

Consider the cell phone effect. We know that younger adults are much more likely to live in cell-phone-only households, and that unweighted national poll samples tend to skew older as a result, but that age-related bias tends to fade to just a percentage point or two (at most) when pollsters adjust their adult samples to match census age estimates. However, in a state like Iowa, the big polling challenge is to select the "likely caucus goers" that will hopefully represent the tiny sliver of adults that will choose to participate in the caucus. The "census norms" available for all adults are of much less utility when trying to determine the appropriate demographic composition of the roughly one-in-ten adults we hope will represent likely caucus-goers.
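As an aside for readers unfamiliar with the mechanics, here is a minimal sketch of how that census adjustment typically works; the age groups, sample shares and targets below are hypothetical, not taken from any poll discussed here:

```python
# Minimal sketch of adjusting an adult sample to census age targets
# (post-stratification weighting). All figures below are hypothetical.
sample_share = {"18-34": 0.20, "35-54": 0.38, "55+": 0.42}   # unweighted poll
census_target = {"18-34": 0.30, "35-54": 0.37, "55+": 0.33}  # population benchmark

weights = {age: census_target[age] / sample_share[age] for age in sample_share}

for age, w in weights.items():
    print(f"{age}: weight = {w:.2f}")
# Under-represented groups (here, 18-34) receive weights above 1.0, which is
# how the age skew from landline-only samples gets corrected -- but only when
# a trustworthy benchmark for the target population exists, which is the rub
# for a "likely caucus goer" universe.
```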

Pollsters will argue and disagree among themselves about the best way to model and weight likely voters in a state like Iowa. We will not be able to resolve those arguments here. What the rest of us should be looking for, at least, is whether the various public polls are showing variation in their age composition and whether any such variation is making a tangible difference in the results.

Although Plouffe may be cherry-picking an unusually favorable result, the national surveys consistently show Obama doing better among younger voters. For example, in a combined crosstabulation of its five most recent national polls (conducted since June), the Cook Political Report/RT Strategies survey shows Obama receiving 30% of the vote among 18-24 year olds, 24% among 25 to 49 year olds and only 17% among those over 50. So if early state polls are under-representing younger voters, they may be slightly understating Obama's support.

But how large is the age difference in Iowa, and how much do the Iowa polls (or those in the other early states) vary in their age composition? Who knows? As far as I can tell, only the Time survey conducted by SRBI in late August has reported its composition by age and other demographics.

All of which brings me back to the Pollster.com disclosure project. One of the most important reasons why we are requesting additional details on the polls conducted in Iowa and the other early states is to allow us all to better evaluate arguments like the one Obama's campaign manager made today. So please read my post from earlier today and comment or blog if you think this is a worthy idea. We would appreciate your support.


When Pollsters Attack


A few hours ago, we posted results from two new surveys of likely primary voters in Florida, both sponsored by the Southern Political Report. One survey was conducted by InsiderAdvantage (which is essentially part of the Southern Political Report) and one by Mason-Dixon Polling and Research. The remarkable thing about the summary by InsiderAdvantage pollster Matt Towery has less to do with the numbers than with his unusual frontal assault on the Quinnipiac University poll.

I'll get to Towery's blast, but let's start with the survey results. The table below shows the results from the two Southern Political Report surveys, both conducted September 17-18, alongside the results from the most recent Quinnipiac University poll, conducted September 3-9. The most obvious difference is that Rudy Giuliani holds an eleven point lead over Fred Thompson (28% to 17%) in the Quinnipiac poll, but Giuliani leads Thompson by a single, statistically insignificant percentage point (24% to 23%) on the two most recent surveys sponsored by Towery's company. The previous InsiderAdvantage survey showed Thompson with an eight point lead (27% to 21%).

[Table: Florida Republican primary results from the InsiderAdvantage, Mason-Dixon and Quinnipiac surveys]

So, asks Towery, "why would Quinnipiac have shown Giuliani with an 11 point lead over Thompson -- especially a week ago, when Thompson arguably was at his apex in the Sunshine State?" Two fairly obvious and important differences jump out. First, the Quinnipiac survey includes non-candidate Newt Gingrich (who gets 6% in their poll), while the other two polls did not. That was the explanation that Towery - himself a former Gingrich campaign chair - provided in his own release last week.

Second, the Quinnipiac survey was the only one of the four fielded (at least in part) before Thompson's official declaration of candidacy on "The Tonight Show" on the evening of September 5. The Quinnipiac poll was in the field September 3-9, so at least some of the interviews occurred before the burst of publicity that Thompson received from his announcement. The last InsiderAdvantage poll that showed Thompson "at his apex" kicked off on September 6, the day that Thompson's announcement appeared most prominently in the news.

Our chart of national Republican polls shows a continuing upward trend for Thompson. Moreover, by my hand calculations, the nine national polls conducted entirely after Thompson's announcement (by Cook/RTStrategies, Reuters/Zogby, USA Today/Gallup, Fox News, AP-IPSOS, ARG, NBC/Wall Street Journal and CNN) show his share of the Republican vote increasing by an average of four points (from 18% to 22%) as compared to the average of what the same organizations showed for Thompson before his announcement.

But for Towery the conclusion appears foregone:

In the instance of the Quinnipiac poll showing Giuliani with a monster lead over Thompson, it became all too obvious that it's time to call out this polling organization.

Maybe they're right and everybody else is wrong. But it's unlikely. At the very least, Quinnipiac numbers should stop being taken at face value as the paragon of accuracy in Florida. Somewhere in their methodology they continue to misread the state they claim to know so intimately.

In case the point is not obvious: Towery concluded a week ago that it was "time to call out" Quinnipiac and had his company take the remarkable step of sponsoring parallel studies this week to do so.

Earlier in today's release, after recalling some questionable Quinnipiac results, an old joking reference from former Governor Jeb Bush ("What's a Quinnipiac?") and a jocular answer from Mason Dixon pollster Brad Coker ("I figured [Quinnipiac] wanted more name identification in Florida so they could build recruiting for a Division 1-A football team!"), Towery focuses on what he considers the "bigger issue:"

Are universities that publish polls presumed by the media to somehow be more reliable because there are professors and students involved?

Once we get past all the snark, that's a fair question. It is also fair to ask, as Towery did last week, why Quinnipiac continues to include Gingrich as a candidate. That is especially important since, unlike most other pollsters, they ask no "second choice" question allowing for a recalculation of the results without Gingrich.

And finally, it is also fair, especially given some of the past inconsistencies, to want to probe further into the methodologies of all three pollsters. In writing this post, here are some questions I tried to answer:

  • What was the "sample frame" for these surveys? Who knows? In the past, Quinnipiac has used a random digit dial (RDD) methodology while InsiderAdvantage has sampled from voters lists. But none of the organizations specifies the sample frame used in their most recent releases.
  • How did the pollsters define the Republican electorate? Quinnipiac says they interviewed "registered Republicans" in Florida. InsiderAdvantage and Mason-Dixon say they interviewed "likely Florida primary voters." But what defined a "likely voter?" What questions were used to identify them? You won't find the answers anywhere in the various online releases.
  • How tight was the screen? More specifically, what percentage of Florida adults qualified as registered or likely voters? Again, nothing at all from InsiderAdvantage or Mason-Dixon. If we do the math on the Quinnipiac sample sizes, we can at least tell that their Republican sample amounts to 38% of Florida registered voters. However, we do not know what percentage of adults qualified as registered voters.
  • Finally, how did the composition of the samples vary demographically? InsiderAdvantage does provide results for gender, age and race, but Mason-Dixon and Quinnipiac do not.

Here is a suggestion for all three pollsters involved: Provide us with answers to all of the questions above, and we will take a closer look at what differences can be found "somewhere" in their methodologies.


What Are the Demographics?


Before my vacation in August, I used two posts to review the sorry state of disclosure with respect to how tightly (or not-so-tightly) the pollsters screen for likely primary voters, especially those in early primary or caucus states. Today I want to take a look at another question we might want to ask about these polls to help sort out the differences in results among them: What are the demographics?

An issue that becomes much more acute in pre-primary surveys is that the pollster is trying to do two things at once: (1) identify the voters that will participate in the primary and (2) measure the attitudes and vote preferences of those voters. When different primary polls produce seemingly contradictory results, the culprit is usually a difference in the people selected. So if we want to tease out the reasons why polls show different results, we want to know as much as we can about the "likely voters" they sample.

Just last week, for example, I looked at the differing results in some polls of Iowa Democratic "likely caucus goers" and found wide variation in the number reporting past caucus participation. As explained in the post, that variation appears related to support for various candidates (previous caucus goers are more likely to support John Edwards, newcomers more apt to support Hillary Clinton and, to a lesser extent, Barack Obama).

For the Democratic candidates, it is not hard to imagine that variation in demographic variables like gender, age and race might have similar effects. For example, various public polls have shown that Clinton does better among women and Obama better among younger voters. Clinton and Obama also dominate among African Americans to the relative detriment of John Edwards and other candidates. Demographic patterns in polls of Republican primary voters have been relatively inconsistent, although Giuliani and McCain tend to do better among moderates and Republican leaning independents.

The demographic composition can vary widely. Consider the African American percentage of the likely Democratic primary electorate in South Carolina in polls released over the last several months.

Needless to say, the variation above is huge (although the relationship between racial composition and the Clinton-Obama result has been weak so far). While these results certainly cannot all be right, the right answer is not obvious. African-Americans comprised 47% of Democratic primary voters in the 2004 South Carolina primary, according to the network exit poll, but of course an exit poll is also a survey with potential problems of its own. And whatever the past result, the composition in 2008 may be different.

The important point is that educated poll consumers will want to know all they can about the demographics of pre-primary polling, and unfortunately, such information is very hard to find. I went back to the public releases from 23 different organizations that have released public polls in the last six months or so in Iowa, New Hampshire and South Carolina. Here are the few that provided a reasonably complete demographic profile of their samples, along with the variables they reported:

  • Hamilton Beattie/Ayres McHenry (SC) - gender, age, race, income, party
  • PPP (IA, SC) - gender, age, race (SC only), party
  • SUSA (NH) - gender, age, ideology
  • Time (Iowa) - gender, age, education, race, party affiliation, percentage of past caucus goers

As I recall, the Garin-Hart survey of South Carolina also included some demographics (as I included their result for race in my post on 5/3), but their PDF release is no longer available online. A few organizations provide more limited information: The American Research Group routinely provides the percentage of independents included in their samples (but not standard demographics). The ABC/Washington Post survey of Iowa caucus goers provided results to a question on past caucus attendance (but nothing more). And two surveys of South Carolina - from Clemson University and CNN - both provided the African-American composition only.

The national surveys are not much better. Only two organizations routinely provide full data on the demographic composition of the subgroups that hear primary trial-heat questions: Cook Political Report/RT Strategies and Diageo/Hotline.

For more than a month, I have been promising some ideas about what we might do about this paucity of information. I'll have more on that in the next post.


More on ARG and Iowa


This is a follow-up to yesterday's post, in which I speculated -- wrongly, as it turns out -- about the incidence of eligible adults selected by the American Research Group (ARG) as likely caucus goers for their most recent surveys of Democrats and Republicans in Iowa. I emailed Dick Bennett and can now report on how their surveys compare to the others that have provided us with similar details.

First, according to Bennett, I was incorrect in speculating that they use only one question to screen for "likely caucus goers." They start with a random digit dial (RDD) sample of adults in Iowa in households with a working telephone and then ask four different questions (although they provide only the last question on the page reporting Iowa results):

  • They ask whether respondents are registered to vote, and whether they are registered as Democrats or Republicans. Non-registrants are terminated and not interviewed.
  • They ask registrants how likely they are to participate in the Caucus on "a 1-to-10 scale with 1 meaning definitely not participating and 10 meaning definitely participating." Those who answer 1 through 6 are terminated and not interviewed.
  • They ask unaffiliated registrants ("independents" registered as neither Democrats nor Republicans) whether they plan to participate in the Democratic or Republican caucus. Registered Democrats and independents who plan to caucus with the Democrats get the Democratic vote question; registered Republicans and independents who plan to caucus with the Republicans answer the Republican question.
  • After asking the vote question, they ask the question that appears on the web site: "Would you say that you definitely plan to participate in the 2008 Democratic presidential caucus, that you might participate in the 2008 Democratic presidential caucus, or that you will probably not participate in the 2008 Democratic presidential caucus?" Only those who answer "definitely" are included in the final sample of likely caucus voters.

So the process involves calling a random sample of adults until they reach a quota of 600 interviews for each party. In their most recent Iowa survey, they were able to fill the quota for Democrats first, so they continued dialing the random sample until they had interviewed 600 Republicans, terminating 155 Democrats in the process. Bennett reports that they also terminated another 4,842 adults on their various screen questions (740 who said they were not registered to vote, 3,598 who rated their likelihood of participating as 6 or lower and 504 who were less than "definite" about participating on the final question).
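For those who want to check the arithmetic, here is a minimal sketch of the incidence calculation walked through in the next paragraph, using only the counts Bennett reported:

```python
# Sketch of the ARG incidence calculation using the counts reported above.
dem_interviewed = 600    # Democrats interviewed before the quota was filled
dem_terminated = 155     # Democrats terminated after the Democratic quota filled
rep_interviewed = 600    # Republicans interviewed
screened_out = 4_842     # adults terminated by the screen questions
#   740 not registered + 3,598 rated 6 or lower + 504 less than "definite"

total_adults_contacted = dem_interviewed + dem_terminated + rep_interviewed + screened_out

dem_incidence = (dem_interviewed + dem_terminated) / total_adults_contacted
rep_incidence = rep_interviewed / total_adults_contacted

print(f"Adults contacted:     {total_adults_contacted:,}")  # 6,197
print(f"Democratic incidence: {dem_incidence:.0%}")          # ~12%
print(f"Republican incidence: {rep_incidence:.0%}")          # ~10%
```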

So, the "back of the envelope" calculation for ARG is that their most recent sample of Democrats represents 12% of Iowa adults (755 Democrats divided by 755+600+4,842). Their most recent sample of Republicans represents roughly 10% of Iowa adults (600 Republicans divided by 755+600+4,842). We can compare the Democratic statistic to those provided by other Iowa pollsters:

And again, for those just joining this discussion, the 2004 Democratic caucus turnout was reported as 122,200, which represented 5.4% of the voting age population and 5.6% of eligible adults.

So, if we take all of these pollsters at their word, my "blogger speculation" yesterday was off-base: ARG's incidence of Democratic likely voters as a percentage of eligible adults is very close to the surveys done by Time and ABC/Washington Post. Apologies to Bennett.

But we still have a mystery. Why the consistent difference, favoring Clinton, between ARG's results and those of other surveys? Professor Franklin is working on a post, as I type, that will chart the difference, but when we exclude ARG's surveys from our estimate for Iowa, Clinton's current 2 point margin over Edwards (26.2% to 24.2%) becomes a 1.3 point deficit (24.6% to 25.9%). [See Franklin's in-depth discussion, now posted here].

[Chart: Iowa trend estimates for the leading Democratic candidates]

I asked Bennett whether he had any theories that might explain the difference. Here is his response:

Our sample size is larger and our likely voter screen is more difficult to pass. As you have pointed out, many surveys (although they are not designed to project participation) project unrealistic levels of participation. A likely voter/participant does not need to vote/participate to represent the pool of likely voters/participants, but the likely voter/participant pool is not much larger than the actual turnout.

Our results in Iowa show that John Edwards has a slight lead over Hillary Clinton among those voters saying they have attended a caucus in the past. Hillary Clinton has a greater lead among those saying this will be their first caucus. Hillary Clinton also has very strong support among women who say they usually do not vote/participate in primary/caucus races - this is true in Iowa and the other early states

Sample size is largely irrelevant to the pattern in our chart. Smaller samples would explain greater variability, but not a consistent difference across a large number of samples. The observation in his second paragraph is much more important. Since ARG's previous releases did not mention these results, I asked for the question about past caucus participation and the associated results. His response:

The question is: Will this be the first Democratic caucus you have attended, or have you attended a Democratic caucus in the past?

We first asked this in Feb:

Feb - 41% first, 59% past
Mar - 44% first, 55% past
Apr - 39% first, 60% past
May - 45% first, 55% past
Jun - 42% first, 57% past
Jul - 40% first, 60% past
Aug - 43% first, 57% past

We can compare these results to similar questions or reports from other recent surveys. The differences among the four pollsters are huge and show a clear pattern, consistent with the differences Bennett reports in his own surveys: John Edwards does better against Clinton as the percentage of past caucus goers increases.

[Chart: percentage of first-time caucus goers reported by four Iowa polls]

So what is the right number of past caucus goers? Bennett can certainly argue that the entrance polls from the 2000 and 2004 Caucuses are on his side. He used exactly the same question as the network entrance poll, which reported the percentage of first-time Democratic caucus goers as 53% in 2004 and 47% in 2000. Of course, as we learned three years ago, exit polls have their own problems, and I am guessing that other pollsters will debate which past-caucus-goer number is correct. We will pursue this point further.

Finally, it is worth saying that this exchange and my arguably unfair "blogger speculation" yesterday make one thing clear: If we are going to dig deeper into these issues, we have an obligation to ask these questions (about incidence and sample characteristics) about all polls, not just those from ARG, Time and a handful of others.

Stay tuned.


Iowa: A Tale of Two New Polls


So today we have another installment in that pollster's nightmare known as the Iowa caucuses: Two new polls of "likely Democratic caucus goers" conducted over the last ten days that show very different results. The American Research Group (ARG) survey (conducted 8/26-29, n=600) shows Hillary Clinton (with 28%) leading Barack Obama (23%) and John Edwards (20%). And a new survey from Time/SRBI (conducted 8/22-26, n=519, Time story, SRBI results) shows essentially the opposite, Edwards (with 29%) leading Clinton (24%) and Obama (22%).

Is one result more trustworthy than the other? That is always a tough question to answer, but one of these polls is considerably more transparent about its methods. And that should tell us something.

While I have been opining lately about both the difficulty in polling the Iowa Caucuses and the remarkable lack of disclosure of methodology in the early states (especially here and here and all the posts here), the new Time survey stands out as a model of transparency:

The sample source was a list of registered Democratic and Independent voters in Iowa provided by Voter Contact Services. These registered voters were screened to determine their likelihood of attending the 2008 Iowa Democratic caucuses.

Likely voters included in the sample included those who said they were

  • 100% certain that they would attend the Iowa caucuses, OR
  • probably going to attend and reported that they had attended a previous Iowa caucus.

The margin of error for the entire sample is approximately +/- 5 percentage points. The margin of error is higher for subgroups. Surveys are subject to other error sources as well, including sampling coverage error, recording error, and respondent error.

Data were weighted to approximate the 2004 Iowa Democratic Caucus "Entrance Polls," conducted January 19, 2004.

Turnout in primary elections and caucuses tends to be low, with polls at this early stage generally overestimating attendance.

The sample included cell phone numbers, which, to the extent SRBI was able to identify them, were dialed manually.

I emailed Schulman to ask about the incidence and he quickly replied with a "back of the envelope" calculation: Their sample of 519 likely caucus goers represents roughly 12% of eligible adults in Iowa (details on the jump), exactly the same percentage as obtained by the recent ABC News/Washington Post poll, but higher than the reported 2004 Democratic caucus turnout (5.5% of eligible adults). Keep in mind, however, that the ABC/Post poll used a random digit dial methodology and screened from the population of all Iowa adults.

The Time/SRBI survey started with a list of registered Democrats and independents -- so it theoretically did a better job screening out non-registrants and Republicans. On the Time survey, 92% of respondents report having "ever attended" Iowa precinct caucuses (see Q2). On the Post/ABC survey, 68% report having "attended any previous Iowa caucuses" (see Q12). Readers will notice that on the 2004 entrance poll, 55% of the caucus-goers said they had participated before.

What is the American Research Group Methodology? All they tell us on the website is that they completed 600 interviews and that respondents were asked:

Would you say that you definitely plan to participate in the 2008 Democratic presidential caucus, that you might participate in the 2008 Democratic presidential caucus, or that you will probably not participate in the 2008 Democratic presidential caucus?

Blogger speculation alert: If this was the only question used to screen, it is likely that ARG's incidence of eligible adults was much higher. Such a difference likely explains why they show Clinton doing consistently better in Iowa than other pollsters, but that is just an educated guess. [Update: A guess that turns out to be wrong....]. We owe Dick Bennett the opportunity to respond with more details. I have emailed him with questions and will post a response when I get it. [Update: Details of Bennett's response here. They ask four questions to screen for likely voters and their Democratic sample in this case represented roughly 12% of adults in Iowa. Apologies to ARG].

I suspect that if we could know all about every pollster's methods in Iowa, we would see evidence of a disagreement about how tightly to screen and about what percentage of the completed sample should report having participated in a prior caucus.

The resolution of that argument is neither simple nor obvious, but seems to have a profound impact on the results. Surveys that appear to include more past caucus goers (Time, Des Moines Register and One Campaign survey -- see our Iowa compilation) tend to favor John Edwards, while Hillary Clinton does better on surveys that define the likely caucus-goer universe more broadly. [Update: The disagreement may have more to do with the appropriate number of self-reported past caucus goers].

Details on Time's "back of the envelope" incidence calculation after the jump...



Screens & RDD: The ABC/Post Survey


It was probably Murphy's Law. Within hours of my posting a review of the sorry state of disclosure of early primary poll methodology, ABC News and The Washington Post released a new survey of likely caucus goers in Iowa that disclosed the two critical pieces of information I had searched for elsewhere. The two ABC News releases posted on the web (on Democratic and Republican caucus results) disclosed both the sample frame and the share of the voting age population represented by each survey. ABC News polling director Gary Langer also devoted his online column last Friday to a defense of his use of the random digit dial (RDD) methodology to sample the Iowa caucuses.

Let's take a closer look.

Langer concluded his column with a note on "likely voter screening," a subject I have been posting on lately. He writes:

Some polls of likely caucus-goers, or likely voters elsewhere, may include lots of people who aren't really likely to vote at all. Drilling down, again, is more difficult and more expensive. But if you're claiming to home in on likely voters, you want to do it seriously. Anyone producing a poll of "likely voters" should be prepared to answer this question: What share of the voting-age population do they represent?

Amen.

The good news is that Langer and ABC News also provided an answer. For the Democratic sample:

This survey was conducted by telephone calls to a random sample of Iowa homes with landline phone service. Adults identified as likely Democratic caucus goers accounted for 12 percent of respondents; with an adult population of 2.2 million in Iowa, that projects to caucus turnout of 260,000.

In 2004, by comparison, just over 122,000 Democrats (5.5% of the voting age population) turned out for the caucuses.

And for the Republicans:

Adults identified as likely Republican caucus-goers accounted for seven percent of respondents; with an adult population of 2.2 million in Iowa, that projects to caucus turnout of 150,000. That's within sight of the highest previous turnout for a Republican caucus, 109,000 in 1988.

The estimated turnout for the 2000 Republican caucuses was lower (approximately 86,000), partly because John McCain focused his campaign on the New Hampshire primary. Thus, Republican turnout amounted to 4% to 5% of the voting age population in the last two contested Iowa caucuses.
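For readers following the arithmetic, here is a minimal sketch of how those turnout projections follow from the reported incidence figures; the 2.2 million adult population is the figure ABC cites above:

```python
# Projecting caucus turnout from the share of adults who pass the screen,
# using the figures in the ABC News releases quoted above.
iowa_adults = 2_200_000

dem_incidence = 0.12   # likely Democratic caucus goers as a share of adult respondents
rep_incidence = 0.07   # likely Republican caucus goers as a share of adult respondents

print(f"Projected Democratic turnout: {iowa_adults * dem_incidence:,.0f}")  # 264,000; ABC rounds to 260,000
print(f"Projected Republican turnout: {iowa_adults * rep_incidence:,.0f}")  # 154,000; ABC rounds to 150,000
print(f"2004 Democratic turnout: {122_000 / iowa_adults:.1%} of adults")    # roughly 5.5%
```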

So first, let's give credit where it is due. Of the thirteen organizations that have released surveys in Iowa so far this year, only ABC News has published full information about how tightly they screened likely caucus voters.

Having said that, two questions remain: First, is the screen used by the ABC/Washington Post poll tight enough? After all, their screen of Democrats projects to a "likely voter" population of 260,000, a number more than double both the 2004 turnout (122,000) and the all-time record for Democrats set in 1988 (125,000). The ABC release seems to anticipate that question with the following passage:

A more restrictive likely voter definition, winnowing down to half that turnout, or about what it was in 2004, does not make a statistically significant difference in the estimate -- Edwards, 28 percent; Obama, 27 percent; and Clinton, 23 percent, all within sampling tolerances given the relatively small sample size. The more inclusive definition was used for more reliable subgroup analysis.

The full sample had Obama at 27% and Edwards and Clinton at 26% each. While the release does not specify the "more restrictive" definition they used, The Washington Post's version of the results indicates that exactly half (50%) of the likely Democratic caucus goers indicated that they are "absolutely certain" they will attend.

The Republican release makes essentially the same assertion: "A more restrictive likely voter definition, winnowing down to lower turnout, makes no substantive difference in the results."

So ABC's answer is: We could have used a tighter screen but it would have made no significant difference in the results.

Their decision is reasonable considering that the Des Moines Register poll used essentially the same degree of screening for its first poll of Democrats in 2006, using a list-based methodology that nailed the final result in 2004. Also keep in mind that no screen based on self-reports of past behavior or future intent can identify the ultimate electorate with anything close to 100% accuracy. Pollsters know that some respondents will falsely report having voted in the past, and that respondents often provide wildly optimistic reports about their future vote intent that typically bear little resemblance to what they actually do on Election Day. And while we know what turnout has been in the past, we can only guess at the Iowa Caucus turnout this coming January (or perhaps even December). The ideal methodology defines the likely electorate a bit more broadly than the expected turnout, but also examines narrower turnout groups within the sample, as this survey did.

The second and more complex question involves the ABC/Washington Post decision to use a random digit dial (RDD) sample frame rather than a sample drawn from a list of registered voters.

Langer makes the classic case for RDD, by pointing out the potential flaws in samples drawn from the list of registered voters provided by the Iowa secretary of state. Roughly 15% of the voters on the Secretary of State's list lack a telephone number and about as many will turn out to be non-working or business numbers (according to data he cites from a Pew Research Center Iowa poll conducted in 2003). Include the traditionally small number of Iowans that may still register to vote (or participate after having been inactive for many years), and we have, he writes, "a lot of noncoverage - certainly enough, potentially, to affect estimates." Langer acknowledges that RDD samples now face their own non-coverage problem due to the growth of cell phone only households (12-15% now lack landline phone service), but concludes that RDD "produces far less noncoverage than in list-based sampling."
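As a rough illustration of the trade-off Langer describes -- this is my own back-of-the-envelope sketch, not a reproduction of his analysis -- the coverage figures he cites imply something like the following:

```python
# Illustrative coverage comparison implied by the figures cited above; the
# shares are treated as simple fractions of each frame, which is a rough
# approximation rather than Langer's calculation.
list_no_phone = 0.15      # registered voters with no phone number on the list
list_bad_numbers = 0.15   # "about as many" non-working or business numbers
list_coverage = 1 - list_no_phone - list_bad_numbers

cell_only = 0.13          # within the 12-15% cell-phone-only range cited
rdd_coverage = 1 - cell_only

print(f"Approximate list-sample coverage: {list_coverage:.0%}")  # ~70%
print(f"Approximate RDD coverage:         {rdd_coverage:.0%}")   # ~87%
```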

Langer's argument is true enough as far as it goes. But he leaves out some pertinent information. First, campaign pollsters that make use of registered voter lists typically use a vendor that attempts to match the names and the addresses on the list to telephone listings. Two vendors I spoke with today tell me that they are able to use such a process to increase the "match rate" to over 90%, a level that makes Iowa's lists among the best in the nation for polling.

Second - and this is a more complicated issue that really demands another post - the potential value of sampling from a registered voter list is not just the ability to call only registered voters with the confidence that "people are reporting their registration accurately." It also allows pollsters to use the rich past vote history data available on the list for individual voters to inform their decisions about which voters to sample and interview. Pollsters can also make use of data providing the precise geographic location, party registration, gender and age of each sampled voter on the list to correct for non-response bias.

Finally, the campaign pollsters on the Democratic side that shell out "up to $100,000" to the Iowa Democratic Party for access to the list do not conduct polls that "entirely exclude" first time caucus goers (as Langer suggests). The Iowa party appends past caucus vote history to the full list of registered voters, and pollsters can use the additional data to greatly inform their sample selection methodology (Democrat Mark Mellman gives a hint of how this works here; Mellman's complete procedure probably resembles the methodology proposed by Yale political scientists Donald Green and Alan Gerber here and here).

Ultimately, the decision about what sample frame to use involves a trade-off between the potential for greater coverage error (when using a list) and greater measurement error in identifying true likely voters (when using RDD). The decision between the two is ultimately a judgment call for the pollster. Those of us who have grown comfortable with list samples believe that the increased accuracy in sampling true likely voters offsets the risk of missing those without accurate phone numbers on the lists. But the choice is not obvious. The fact that ABC and the Post have gone in a different direction -- and have disclosed the pertinent details -- will ultimately enrich our understanding of both the poll methodology and the Iowa campaign.


How Tight is the Screen? Part II


I want to pick up where I left off on Tuesday, when I wrote about the way national surveys screen for primary voters. How well have the pollsters in early primary states done in disclosing how tightly they "screen" to identify the voters that will actually turn out to vote (or caucus)? Not very well, unfortunately.

For those just dropping in, here is the basic dilemma: Voter turnout in primary elections and, especially in caucus states like Iowa, is typically much lower than in the general election. A pre-election survey that aims to track and ultimately project the outcome of the "horse-race" -- the measure of voter preferences "if the election were held today" -- needs to represent the population of "likely voters." When the expected turnout is very low, that becomes a difficult task, especially when polling many months before an election.

And in Iowa and South Carolina, if history is a guide, that turnout will be a very small fraction of eligible adults,** as the following table shows:

[Table: primary and caucus turnout as a percentage of eligible adults (VEP)]

When a pollster uses a random digit telephone methodology, they begin by randomly sampling adults in all households with landline telephone service. They need to use some mechanism to identify a probable electorate from within a sample of all adults. If recent history is a guide, the probable electorate in Iowa -- Democrats and Republicans -- will fall in the high single digits as a percentage of eligible adults. South Carolina's turnout is better, but is still unlikely to exceed 30% of adults. And while the New Hampshire primary typically draws the highest turnout of any of the presidential primaries, it still attracts less than half of the eligible adults in the state. Despite all the attention the New Hampshire primary receives, many voters that ultimately cast ballots in the November general election (roughly 30% in 2000) choose to skip their state's storied primary.

A pollster may not want to "screen" so that the size of their likely voter universe matches the exact level of turnout. Most campaign pollsters I have worked with prefer to shoot for a slightly more expansive universe, both to capture those genuinely uncertain about whether they will vote and to account for the presumption that "refusals" (those who hang up on their own before answering any questions) are more likely to be non-voters.

Nonetheless, the degree to which pollsters screen matters a great deal. If, hypothetically, one Democratic primary poll captures 10% of eligible adults while another captures 40%, the results could easily be very different (and I'll definitely put more faith in the first).

It also matters greatly how pollsters go about identifying likely voters. I wrote quite a bit about that process in October 2004 as it applies to random digit dial (RDD) surveys of general election voters. In extremely low turnout contests, such as the Iowa caucuses, most campaign pollsters now rely on samples drawn from lists of registered voters that include the vote history of individual voters. Most of the Democratic pollsters I know agree with Mark Mellman, who asserted in a must-read column in The Hill earlier this year that, "the only accurate way to poll the Iowa caucuses starts with the party's voter file."

So, based on the information they routinely release, what do we know about the way the recent polls in Iowa, New Hampshire and South Carolina screened for likely voters? As the many question marks in the tables below show, not much.

[Table: New Hampshire primary polls and their likely voter screens]

The gold star for disclosure goes to the automated pollster SurveyUSA. Of 22 survey organizations active so far in these states, they are the only organization that routinely releases (and makes available on their web site) all of the information necessary to determine how tightly they screen. Every release includes a simple statement like the one from their May poll of New Hampshire voters:

Filtering: 2,000 state of New Hampshire adults were interviewed by SurveyUSA 05/04/07 through 05/06/07. . . Of the 2,000 NH adults, 1,756 were registered to vote. Of them, 551 were identified by SurveyUSA as likely to vote in the Republican NH Primary, 589 were identified by SurveyUSA as likely to vote in the Democratic NH Primary, and were included in this survey.

I did the simple math using the numbers above (which are weighted values). For SurveyUSA's May survey, Democratic likely voters represented 29% of adults and Republican likely voters represented 28%, for a total of 57% of all New Hampshire adults. Their screen is a very reasonable fit for a survey fielded eight months before the primary.
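Here is that simple math as a short sketch, using the weighted counts from the SurveyUSA release quoted above:

```python
# Screen tightness from the SurveyUSA New Hampshire release (weighted counts).
nh_adults_interviewed = 2_000
registered = 1_756
likely_rep_primary = 551
likely_dem_primary = 589

dem_share = likely_dem_primary / nh_adults_interviewed
rep_share = likely_rep_primary / nh_adults_interviewed

print(f"Democratic likely voters: {dem_share:.0%} of adults")              # ~29%
print(f"Republican likely voters: {rep_share:.0%} of adults")              # ~28%
print(f"Combined:                 {dem_share + rep_share:.0%} of adults")  # ~57%
```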

[Table: Iowa caucus polls and their likely voter screens]

Honorable mention for disclosure also goes to two Iowa polls. First, the Des Moines Register poll conducted by Selzer and Company. Ann Selzer provided me with very complete information upon request last year. Her first Iowa caucus survey last year used a registered voter list sample and screened to reach a population representing roughly 11% of eligible adults (assuming 2.0 million registered voters in Iowa and 2.2 million eligible adults).

Second, the poll conducted in March by the University of Iowa. While their survey asked an open-ended vote question (rendering the results incomparable with those included in our Iowa chart), their release did at least provide the basic numbers concerning their likely voter screen. They interviewed 298 Democratic likely caucus goers and 178 Republican caucus-goers out of 1,290 "registered Iowa voters" (for an incidence of 37% of registered voters). Unfortunately, they did not specify whether they used a registered voter list or a random digit sample, although given the incidence of registered voters in Iowa, we can assume that the percentage of eligible adults that passed the screen was probably in the low 30s.
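And here is the same back-of-the-envelope logic for the University of Iowa poll; the 2.0 million registered voter and 2.2 million eligible adult figures are the same assumptions used for the Selzer calculation above, not numbers from the university's release:

```python
# Sketch of the University of Iowa screen calculation. The registered voter
# and adult population figures are assumptions carried over from the Selzer
# example above, not figures from the university's release.
dem_likely = 298
rep_likely = 178
registered_interviewed = 1_290

iowa_registered = 2_000_000   # assumed registered voters
iowa_adults = 2_200_000       # assumed eligible adults

share_of_registered = (dem_likely + rep_likely) / registered_interviewed
share_of_adults = share_of_registered * (iowa_registered / iowa_adults)

print(f"Share of registered voters passing the screen: {share_of_registered:.0%}")  # ~37%
print(f"Implied share of eligible adults:              {share_of_adults:.1%}")      # low-to-mid 30s
```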

[Table: South Carolina primary polls and their likely voter screens]

And speaking of the sampling frame, only 6 of the 22 organizations -- SurveyUSA, Des Moines Register/Selzer, Fox News, Rasmussen Reports, Zogby and Winthrop University -- specified the sampling method they used (random digit dial, registration-based sampling or listed telephone directory). I will give honorable mention to two more organizations -- Chernoff Newman/MarketSearch and the partnership of Hamilton Beattie (D) and Ayres McHenry (R) -- that disclosed their sample method to me upon request earlier this year.

The omission of this information by the remaining 14 pollsters is particularly stunning given that the ethical codes of both the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP) explicitly require the disclosure of the sampling method, also known as the sample "frame." The NCPP's principles of disclosure require the following of its member organizations for "all reports of survey findings issued for public release:"

Sampling method employed (for example, random-digit dialed telephone sample, list-based telephone sample, area probability sample, probability mail sample, other probability sample, opt-in internet panel, non-probability convenience sample, use of any oversampling).

The AAPOR code mandates disclosure of:

A definition of the population under study, and a description of the sampling frame used to identify this population.

Finally, while virtually all of these surveys told us how many "likely primary voters" they selected, very few provided details on how they determined that voters (or caucus goers) were in fact "likely" to participate. The most notable exceptions were the Hamilton Beattie (D)/Ayres McHenry (R) and Chernoff Newman/MarketSearch polls in South Carolina, and the News 7/Suffolk University poll in New Hampshire. All of these included the questions used to screen for likely primary voters in the "filled-in" questionnaires that accompanied their full results.

So what should an educated poll consumer do? I have one more category of diagnostic questions to review, and then I want to propose something we might be able to do about the very limited methodological information available to us. For now, here's a two-word hint of what I have in mind: "upon request."

Stay tuned.

**Political scientists typically use two statistics to calculate turnout among adults: all adults of voting age (also known as the voting age population or VAP), or all adults who are eligible to vote (or the voter eligible population or VEP). George Mason University Professor Michael McDonald has helped popularize VEP as a better way to calculate voter turnout, because it excludes adults ineligible for voting such as non-citizens and ineligible felons. The perfect statistic for comparison to telephone surveys of adults would fall somewhere in between, because adult telephone samples do not reach those living in institutions or who do not speak English, but might still include non-citizens that speak English (or Spanish where pollsters use bilingual interviewers).

In a state like California, with a large non-citizen population, VAP is probably the better statistic for comparisons to the way polls screen for likely voters. In Iowa, New Hampshire and South Carolina, however, the choice has very little impact. Had I used VAP rather than VEP above, the turnout statistics in the table would have been roughly a half a percentage point lower.
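To make the distinction concrete, here is a minimal illustration with purely hypothetical numbers; none of these figures come from the table above:

```python
# Illustration of the VAP vs. VEP distinction with hypothetical numbers.
votes_cast = 120_000
voting_age_population = 2_250_000   # all adults of voting age (VAP)
ineligible = 50_000                 # non-citizens, ineligible felons, etc.
voting_eligible_population = voting_age_population - ineligible  # VEP

print(f"Turnout as % of VAP: {votes_cast / voting_age_population:.1%}")       # 5.3%
print(f"Turnout as % of VEP: {votes_cast / voting_eligible_population:.1%}")  # 5.5%
# Because VEP is the smaller denominator, turnout as a share of VEP is always
# at least as high as turnout as a share of VAP.
```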

CORRECTION: Due to an error in my spreadsheet, the original version of the turnout table above incorrectly displayed turnout as a percentage of VAP rather than VEP. For reference, the table below has turnout as a percentage of VAP.

[Table: primary and caucus turnout as a percentage of the voting age population (VAP)]


How Tight is the Screen? Part I


The questions we seem to get most often here at Pollster, either in the comments or via email, concern the variability we see in the presidential primary polls, especially in the early primary states. Why is pollster A showing a result that seems consistently different than what pollster B shows? Why do the results from pollster C seem so volatile? Which results should we trust? I took up one such conflict last Friday.

Unfortunately, definitive answers to some of these questions are elusive, given the vagaries of the art of pre-election polling in relatively low turnout primaries. When confronted with such questions, political insiders tend to rely on conventional wisdom and pollster reputation. Our preference is to look at differences in how survey results were obtained and take those differences into account in analyzing the data.

At various AAPOR conferences in recent years, I have heard the most experienced pollsters repeatedly confirm my own intuition: To find the most trustworthy primary election polls, we need to look closely at how tightly the pollsters "screen" for likely primary voters. The reason is that primary and caucus turnout is usually low in comparison to general elections. In 2004 (by my calculations), Democratic turnout amounted to 6% of the voting age population for the Iowa Caucuses and 22% for the New Hampshire primary. Elsewhere, 2004 turnout averaged 9% in primary states and 1.4% in caucus states.

A pollster that begins with a sample of adults has to narrow that sample down to something resembling the likely electorate, which is not easy. Since few pollsters approach the task in exactly the same way, this is an area of polling methodology that is much more art than science. Nonetheless, in most primary polls, relatively tighter screens are preferable in trying to model a likely electorate.

Thus, to try to make sense of the polls before us we want to know two things. First, how narrowly did the pollsters screen for primary voters? Second, as no two such screens are created equal, what kind of people qualified as primary voters?

In this post, I will look at what some recent national polls have told us about how tightly they screened their samples before asking a presidential primary trial-heat question and what kinds of voters were selected. I will turn to statewide polls in Part II. The table below summarizes the available data, including the percentage of adults that were asked the Democratic or Republican primary vote questions (if you click on the table, you will get a pop-up version that includes the sample sizes for each survey).

[Table: national primary screen summary by pollster]

Unfortunately, of the 20 national surveys checked above, only five (Gallup/USA Today, AP-IPSOS, CBS/New York Times, Cook/RT Strategies and NBC/Wall Street Journal) provide all of the information necessary to quantify the tightness of their screen questions. The others fall short. Here is a brief explanation of how I arrived at the numbers above.

The calculation is easiest when the pollster reports results for a random sample of all adults as well as the weighted size of the subgroups that answered the primary vote questions. In various ways, these five organizations included the necessary information in readily available public releases.
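For readers who want the arithmetic spelled out, here is a minimal sketch of that calculation in Python. The sample sizes are invented for illustration; they are not drawn from any of the releases discussed here.

```python
# A minimal sketch of the screen-tightness calculation described above.
# The numbers below are hypothetical, not taken from any actual poll release.

total_adults = 1000            # weighted size of the full adult sample
dem_primary_subgroup = 380     # weighted n asked the Democratic primary question
rep_primary_subgroup = 330     # weighted n asked the Republican primary question

# Tightness of the screen: the share of all adults asked each primary question.
dem_screen = 100.0 * dem_primary_subgroup / total_adults
rep_screen = 100.0 * rep_primary_subgroup / total_adults

print(f"Democratic screen: {dem_screen:.0f}% of adults")
print(f"Republican screen: {rep_screen:.0f}% of adults")
```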

Five more organizations (CNN/ORC, Newsweek, LA Times/Bloomberg, the Pew Research Center and Time) routinely provide the subgroup sizes for respondents that answer primary vote questions, though they do not specify whether the "n-sizes" are weighted or unweighted. Pollsters typically provide unweighted counts because they are most appropriate for calculating sampling error. However, since the unweighted statistic can provide a slightly misleading estimate of the narrowness of the screen, I have labeled the percentages for these organizations as approximate.

Of those that report results among all adults, only the ABC News/Washington Post poll routinely omits information about the size of the subgroups that answer primary vote questions. Even though their articles and reports often lead with results among partisans, they have provided no information since February about the size of, or margin of error for, the party subgroups. While the Washington Post provided results for party identification during 2005 and 2006, that practice appears to have ended as of February 2007.

[CORRECTION: The June and July filled-in questionnaires available at washingtonpost.com include the party identification question, and those tables also present time series data for the February and April surveys. However, as these releases do not include the follow-up question showing the percentage that lean to either party (which had been included in Post releases during 2006), they still do not provide information sufficient to determine the size of the subgroups that answered presidential primary trial-heat questions].

Determining the tightness of the screen gets much harder when pollsters report overall results on their main sample for only registered or "likely" voters. Three more organizations (Diageo/Hotline, Fox News/Opinion Dynamics and Quinnipiac) provide overall results only for those who say they are registered to vote. For these three (denoted with a double asterisk in the table), I have calculated an estimate of the screen based on the educated guess that roughly 85% of adults typically identify themselves as registered voters on other surveys of adults.
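Here is a minimal sketch of that rough adjustment, again with invented numbers; the 85% registration rate is the educated guess described above, not a measured figure from these surveys.

```python
# A minimal sketch of the registered-voter adjustment described above.
# All figures are hypothetical illustrations, not actual poll numbers.

ASSUMED_RV_SHARE_OF_ADULTS = 0.85   # rough guess: share of adults who say they are registered

registered_voters = 900             # reported registered-voter sample size
primary_subgroup = 400              # reported n asked the primary vote question

# Share of registered voters who passed the primary screen, then re-expressed
# as an approximate share of all adults for comparison with other polls.
share_of_rvs = primary_subgroup / registered_voters
estimated_share_of_adults = share_of_rvs * ASSUMED_RV_SHARE_OF_ADULTS

print(f"Approximate screen: {100 * estimated_share_of_adults:.0f}% of adults")
```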

Four more organizations (Rasmussen Reports, Zogby, Democracy Corps and McLaughlin and Associates) report primary results as subgroups of samples of "likely voters." Since their standard releases provide no information on how narrowly they screen to select "likely voters," we have no way to estimate the tightness of their primary screens. If we simply divided the size of the subgroup by the total sample, we would overstate the size of the primary voting groups in comparison to the other surveys.

Finally, the American Research Group follows a practice common to many statewide surveys: It provides only the number of respondents asked the primary vote question, with no information about the size of the universe called to select those respondents.

All of the discussion above concerns the first question: How narrowly did the pollsters screen? We have somewhat better information -- at least with regard to national surveys -- about the second question: How were those people selected? The last column in the table categorizes each pollster by the way it selects respondents to receive primary vote questions:

  • Leaned Partisans -- This is the approach taken by Gallup/USA Today, ABC News/Washington Post and AP-IPSOS. It includes, for each party, all adults who identify with or "lean" to that party.
  • Leaned Partisan+ -- The approach taken by NBC/Wall Street Journal includes party identifiers and leaners, as well as those who say they typically vote in that party's primary election. The LA Times/Bloomberg poll takes a similar approach, although its screen appears to exclude leaners.
  • RV/Leaned Partisan or RV/Partisan -- This approach is taken by a large number of pollsters. It takes only those partisans or "leaned" partisans that say they are also registered to vote. Those labeled RV/Partisan exclude party "leaners" from the subgroup.
  • Primary Voters -- This category includes the surveys that use questions about primary voting (rather than party identification) to select respondents that will be asked primary vote questions.

As should be apparent from the table, the pollsters that use the "leaned partisan" or "leaned partisan+" approach select partisans more broadly than those that include only registered voters or those who claim to vote in primaries. But all of these approaches capture a much broader slice of the electorate than is likely to actually participate in a primary or caucus in 2008. Put simply, most of the national pollsters are not trying to model a specific electorate -- they are mostly providing data on the preferences of "Democrats" or "Republicans" (or Democratic or Republican "voters"). I wrote about that issue and its consequences back in March.

In Part II, I will turn to statewide polls in the early primary states and then discuss what to make of it all. Unfortunately, incomplete as the information discussed above may be, the national polls look like a model of disclosure compared to what we know about most of the statewide polls.

To be continued...


 
