

Research 2000, DailyKos and Transparency

Today the polling world was rocked by claims that polls by Research2000 for DailyKos are substantially flawed:

We do not know exactly how the weekly R2K results were created, but we are confident they could not accurately describe random polls.

Coming on the heels of recent arguments by Nate Silver at FiveThirtyEight.com that Strategic Vision faked their polling over several years, this is a new blow to the credibility of public polling.

Mark Blumenthal here at Pollster.com has led a concerted effort over the last two years to increase the degree of disclosure expected from polling firms, an effort that paid off in new disclosure requirements from the American Association for Public Opinion Research (AAPOR) this spring. Some three dozen firms immediately signed on to the new disclosure requirements, but there are many firms that produce widely cited polls that have not yet agreed to disclose as much as required.

I've only had time for a single quick read of the Research2000 analysis by Mark Grebner, Michael Weissman, and Jonathan Weissman. It seems to be done seriously and it raises important doubts about Research2000's practices. But with my academic's hat on, I'd like to see it receive serious review by professional statisticians with polling experience. Academic journals typically reject 80-90% of articles submitted to them because on close inspection by experts, flaws are found in the theory or the analysis. These are serious charges, and they deserve to be vetted by professionals qualified to do such an evaluation. If flaws in the analysis are discovered, they can be fixed and the conclusions corrected. If the analysis is found to be sound, then the evidence is even more compelling and worrisome for its implications for the polling industry.

There is one element of disclosure that has not been pushed, but which could significantly and easily reduce the chance of "pollsters" making up their data. Every media firm, including DailyKos, should write into their contracts the requirement that the raw data and complete questionnaire be deposited within two months with the Roper Center Polling Archive at the University of Connecticut. Two months is long enough that there is little remaining news value, but rapid enough that meaningful vetting and analysis is possible. By forcing this disclosure, by contract, the sponsors of polling would gain credibility for their polls while insisting that their pollsters live up to the standards of disclosure by AAPOR as well as making the raw data available for subsequent scrutiny.

Most major media polls already deposit their raw data with the Roper Center (including Gallup, ABC/Post, CBS/NYT, NBC/WSJ, Pew, Time, and Newsweek), though not necessarily as quickly as two months. Their example should encourage others to also deposit their data.

But most importantly, it is in the interest of the sponsors of polling to protect their reputation by requiring full disclosure and deposit of the data. Such practice would enhance the value of their polls, not diminish it.


Michael Weissman:

Charles- Your comment is the first one which nominally takes a look at the technical arguments, so I'd like to respond to several points.

1. Vetting by professional statisticians. James Robins was of assistance throughout the process. John Marden did an extraordinarily thorough line-by-line technical vetting, which led to changes. To call these guys "professional statisticians" is a gross understatement. Did you not click on their links in the article? They are not specifically in the poll business. I'm not sure that's a negative.

(I'm also not a statistics neophyte, having well over 100 peer-reviewed publications on statistical fluctuations. I've also been an Executive Editor of Fluctuation and Noise Letters for some years.)

2. "80-90%" of articles rejected due to "flaws".
I suppose that is intended to serve as your Bayesian prior. As somebody who has helped reject more than my share of articles, and had a small number (probably less than 10 percent) rejected myself, my impression is that a large fraction of the rejections are for being boring or unoriginal or on inappropriate topics, as opposed to erroneous. Typically, Nature will say "Not hot enough for us, send to Phys. Rev. Lett." or Phys. Rev. Lett. will say "Not hot enough for us..." on down the chain. Rejecting the erroneous ones is, however, more fun and more memorable.

Needless to say, I think those priors are not only dubious in general, but inapplicable to my own very low rejection rate, to papers vetted by Robins and Marden, or to papers written and vetted in the full consciousness that there would be serious consequences if they were erroneous.


Professor M:

The defensiveness of Michael Weissman's comments is odd. He seems to be saying, "look at all of our expertise, you can put your faith in our conclusions without verifying them." But isn't that precisely what they are accusing R2K of doing? It seems to me that Grebner, Weissman, and Weissman should be welcoming critical review of their own work, not viewing the suggestion as an insult to their abilities.

In the academia that I am familiar with, lots of critical review is a measure of the importance and influence of the scholarship. If you write an article and no one bothers to engage with it, you haven't actually accomplished very much beyond adding a line to your CV.




Yes, it would help if all raw data are deposited with Roper, but what I don't understand is why the client (Kos) did not take steps to ensure that he could have his own raw data! He bought it. He owns it.



I do not understand your trust in this disclosure to the Roper Center. How will the requirement of uploading some spreadsheet files to a server in CT prevent fraud?

The motivation to commit polling fraud is enormous. Polls (esp. counter-intuitive ones) set the narrative agenda for several news cycles. This drives donations etc.

All your disclosure agreements will do is make the fraud more sophisticated. I could easily set up a parent distribution with the results I wanted my poll to show. Then I would sample from that distribution (i.e., run pseudoexperiments). My results would show the proper fluctuations and would be impossible to distinguish from reality.

I could even do a real poll, and salt it with results from my pseudoexperiments.
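The pseudoexperiment scheme this commenter describes can be sketched in a few lines of Python. This is a toy illustration with made-up numbers (the 52/43/5 parent split and the sample size of 600 are hypothetical, not anything from R2K's actual releases):

```python
import random

def pseudo_poll(parent, n=600, seed=None):
    """Draw one fake 'poll' of n respondents from a chosen parent
    distribution, so the toplines fluctuate like a real random sample."""
    rng = random.Random(seed)
    choices, weights = zip(*parent.items())
    sample = rng.choices(choices, weights=weights, k=n)
    return {c: sample.count(c) / n for c in choices}

# The fraudster picks the answer in advance: 52% Dem, 43% Rep, 5% undecided.
parent = {"Dem": 0.52, "Rep": 0.43, "Und": 0.05}

# Each weekly "poll" varies around the chosen values with binomial-sized
# noise, so topline fluctuations alone cannot distinguish it from reality.
for week in range(3):
    print(pseudo_poll(parent, n=600, seed=week))
```

Each simulated topline lands near the chosen parent values with exactly the sampling noise a real poll of 600 would show, which is the commenter's point about why topline statistics alone cannot catch this.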

And then I would happily deposit all my data into the Roper center.

How do we know people are NOT doing this now?


Michael Weissman:

Professor M- Sorry to sound defensive. We made a point of not doing the whole authority thing on the blog. Then to have Franklin say we should get vetted by stats guys without noting that we had done just that seemed to require an answer. The best thing to do is to download the plain data from our site:


and do with it whatever you think best.


SystematicError: What is deposited at the Roper Center are the original individual-level data, not toplines or crosstabs in a spreadsheet.

Typical polls include several dozen items, so simulating the raw data would require not just the marginals for each item but also the complete covariance structure for all items in the dataset, while accounting for the discrete nature of the variables. That could be done, but it isn't trivial, and anyone sophisticated enough to do it would also be capable of returning real value for real surveys, and so would have no need to fake anything.
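Franklin's covariance point can be illustrated with a toy two-item poll (all numbers here are invented for the example): faking each item independently reproduces the marginals perfectly, but it destroys the association between items, such as party ID predicting vote choice, that real respondents generate.

```python
import random

rng = random.Random(42)
n = 5000

# "Real" respondents: vote choice usually follows party ID (90% loyalty).
real = [(p, p if rng.random() < 0.9 else 1 - p)
        for p in (int(rng.random() < 0.5) for _ in range(n))]

# Faked data: same ~50/50 marginals on each item, but drawn independently.
fake = [(int(rng.random() < 0.5), int(rng.random() < 0.5)) for _ in range(n)]

def phi(pairs):
    """Phi coefficient (correlation) between two binary items."""
    m = len(pairs)
    px = sum(x for x, _ in pairs) / m
    py = sum(y for _, y in pairs) / m
    pxy = sum(x * y for x, y in pairs) / m
    return (pxy - px * py) / ((px * (1 - px) * py * (1 - py)) ** 0.5)

print(round(phi(real), 2))  # strong association between the two items
print(round(phi(fake), 2))  # near zero: the faked cross-tab is flat
```

Independently faked items leave every cross-tab looking like the product of its marginals, which is one signature an archive of individual-level data would let an analyst check.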

Of course disclosure can never guarantee fraud won't occur. But raising the norms of disclosure among reputable pollsters would make it more obvious which pollsters choose not to live up to those norms and would subject them to greater scrutiny. Not a miracle, but a marginal improvement.

Ike: I'm sure most local media lack the in-house expertise to know what to do with an SPSS dataset. Apparently that goes for DailyKos as well. Lacking the expertise, and given that the news value has a very short shelf life, I can understand why they don't demand the data. An advantage of requiring the pollster to deposit the raw data at Roper is that it would provide a minimal validity check (the SPSS file at least exists!) while requiring little expertise from the sponsor.

Michael-- I've not expressed an opinion one way or the other about your analysis. It is serious analysis that also has serious consequences for other people. I am simply pointing out that peer review of scientific analysis is the norm precisely because authors are not the best judges of the possible shortcomings of their analysis or of the reach of their conclusions. It is good that you had two distinguished statisticians consult on the analysis, but that is not the same as an arm's-length anonymous review.



Thanks for your reply, Charles.

You are right, nothing will prevent fraud, but disclosure makes it less likely.

I have a question about the Roper Center. Will the identities (contact information) of the polled be among the data archived? Will there be a credible threat that someone like Prof. Charles Franklin (or his grad students) could check up to make sure actual humans were called?


SystematicError: all personally identifying information is removed from the data before archiving, so respondent confidentiality is fully protected while still giving access to all the data.

