
Re: Quality, Standards and Disclosure


An update on Monday's post, which linked to ABC News polling director Gary Langer's recent column criticizing polls based on "opt-in" internet panels that report a margin of error. "You need a probability sample to compute sampling error," he wrote. Opt-in panels that claim sampling error are "trying to nose their way" into "the house" of probability sampling and "don't belong there."
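For reference, the "sampling error" at issue is the textbook margin of error, derived under simple random sampling:

```latex
% Textbook 95% margin of error for a proportion \hat{p} estimated
% from a simple random sample of n completed interviews:
\[
\text{MoE}_{95\%} \;=\; 1.96 \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}},
\qquad \text{e.g. } n = 1000,\ \hat{p} = 0.5 \;\Rightarrow\; \pm 3.1 \text{ points.}
\]
```

Langer's objection is that this derivation assumes every member of the population had a known, nonzero chance of selection, a claim an opt-in panel cannot make.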

Simon Jackman, professor of political science at Stanford University, responded with a blog post that takes a different perspective:

Observation: all survey respondents “opt-in”. Would-be respondents (selected via random sampling or not) decide whether to respond or not, or can’t be reached at all. We then weight the data we get to try to deal with any resulting biases. The resulting standard errors should be computed taking the weighting into account (in almost all media polling I see, they are not, and the standard error is computed a la Stats 101 with the number of completed interviews in the denominator), but in any event, even the correct standard errors are conditional on the way the weights were computed. The Stats 101 “textbook purity” of “simple random sampling” has long been left behind…particularly given some of the horror stories you hear about [random digit dial] RDD [telephone survey] response rates.

So I tend to think the “you can’t trust opt-in Internet polls” line is something of a beat-up. Sure, there is work to be done in understanding the properties of data generated this way, and how to compute a standard error with these data. I don’t see this as an impossible hill to climb. It is critical that this work get done, because if/when we can get comfortable with the bias issues (and we know what the issues are), then I think it’s game over.
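To make Jackman's point about weighting concrete, here is a minimal sketch (the function and example weights are ours, not from either column) using Kish's approximate design effect, one standard way to adjust the "Stats 101" margin of error for unequal weights:

```python
import math

def weighted_moe(weights, p_hat=0.5, z=1.96):
    """Margin of error for a proportion, adjusting for unequal
    weights via Kish's approximate design effect:
        deff  = n * sum(w^2) / (sum(w))^2
        n_eff = n / deff    (effective sample size)
    Returns the naive MoE, the weight-adjusted MoE, and n_eff."""
    n = len(weights)
    deff = n * sum(w**2 for w in weights) / sum(weights)**2
    n_eff = n / deff
    naive = z * math.sqrt(p_hat * (1 - p_hat) / n)        # "Stats 101" SE
    adjusted = z * math.sqrt(p_hat * (1 - p_hat) / n_eff)  # weight-aware SE
    return naive, adjusted, n_eff

# Hypothetical poll: 1,000 interviews, a fifth of them up-weighted 3x.
weights = [1.0] * 800 + [3.0] * 200
naive, adjusted, n_eff = weighted_moe(weights)
print(f"naive MoE: +/-{naive:.1%}, weighted MoE: +/-{adjusted:.1%}, "
      f"effective n: {n_eff:.0f}")
```

In this hypothetical, up-weighting a fifth of the sample threefold shrinks the effective sample size from 1,000 to roughly 754 and widens the margin of error from about ±3.1 to ±3.6 points, before any question of how the weights themselves were computed.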

Jackman, we should point out, was a principal investigator for the Cooperative Campaign Analysis Project (CCAP), an academic survey conducted using the opt-in internet panel maintained by YouGov/Polimetrix (the company that also owns Pollster.com). Nevertheless, this conversation is important since, as Jackman points out, "internet polling is not going away" and just about every pollster I know concedes that we need to find a way to "solve the problem" of random sampling over the internet.

 

Comments
sfcpoll:

It seems to me that equating the "opt-in" nature of online polls with that of telephone surveys is specious. One major issue that exacerbates the "opt-in" character of online surveys is that a respondent can choose to participate in as many surveys as possible (and is often monetarily encouraged to do so). One can earn regular payments for completing them, something like a part-time job. I'd like to know whether these online surveys verify that respondents aren't coming from the same IP address (which can easily be done, but isn't nearly as necessary with phone numbers sampled via RDD).
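As a minimal sketch of the kind of duplicate-IP screen the commenter describes (the record fields here are hypothetical, and shared IPs from households or offices mean flagged cases still need human review):

```python
from collections import Counter

def flag_shared_ips(responses):
    """Return IDs of completed interviews that share an IP address
    with at least one other interview, as a crude screen for repeat
    or coordinated participation in an opt-in panel."""
    counts = Counter(r["ip"] for r in responses)
    return [r["id"] for r in responses if counts[r["ip"]] > 1]

# Hypothetical completions:
responses = [
    {"id": "r1", "ip": "203.0.113.5"},
    {"id": "r2", "ip": "203.0.113.5"},   # same machine, or same household?
    {"id": "r3", "ip": "198.51.100.7"},
]
print(flag_shared_ips(responses))  # -> ['r1', 'r2']
```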

Jackman cites RDD response-rate horror stories, yet am I wrong that research has shown sample representativeness doesn't suffer dramatically from declining response rates once weights are applied? And if telemarketing is the chief driver of declining telephone response rates, how seriously could online advertising be driving down the representativeness of these online polls? My guess is much more seriously.

By Jackman's reasoning, no one should report a margin of error, so why does he?
