
Because You Asked...

Topics: Internet Polls

Pollster reader Petey posted a critical and incisive comment to my post last week showing that a new Harris Interactive Internet panel survey produced ratings of Hillary Clinton that were not wildly inconsistent with other recent conventional surveys:

Am I missing something, Mark?

Just because the Harris results are somewhat in line with random sampling poll results doesn't mean they should be treated as a real data point.

The Zogby Interactive results in the last election likewise were somewhat in line with other results, but they were still basically un-useful as polls.

No matter how good the weighting, self-selected polling is always going to be fundamentally flakey, no?

Petey -- assuming this is the same guy who often leaves astute comments on Pollster and Mystery Pollster -- does not miss much. His pointed question made me realize that a few lines of that post could have been written better. Specifically, by writing that a side-by-side test of the Clinton item using both the Internet panel and a traditional phone survey might "help resolve the sampling question," I implied that such a test might establish the validity of the Harris method. What I meant, more narrowly, was that it might resolve whether the Harris method produced a sample more hostile to Hillary Clinton in this instance (although since that post we now have another data point that allows a better comparison - see below). The larger issue of the merits of Internet panel surveys will not be resolved by any one test or blog item.

The problem with Internet panel surveys generally is their departure from random sampling. Traditional surveys begin by drawing a random sample from a "frame" (the pool of all wired telephone numbers, for example) that allows all or most of the population of interest a chance of being selected. Internet panels draw their samples from a pool of individuals who have volunteered to participate in online surveys, usually by responding to a banner advertisement. That fundamental difference, to answer Petey's question, is something that should make us more skeptical of Internet panel surveys generally.
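To see why self-selection matters in principle, consider a toy simulation. Everything here -- the population size, the true support rate, the volunteering propensities -- is invented purely for illustration and describes no actual panel:

    import random

    random.seed(0)

    # Toy population: each person holds a true opinion (1 = supports, 0 = opposes)
    # and a propensity to volunteer for online panels that correlates with it.
    population = []
    for _ in range(100_000):
        supports = 1 if random.random() < 0.50 else 0   # true support: 50%
        volunteer_prob = 0.02 if supports else 0.06     # opponents volunteer 3x as often
        population.append((supports, volunteer_prob))

    # Probability sample: everyone in the frame has the same chance of selection.
    random_sample = random.sample(population, 1_000)
    print("random sample:", sum(p[0] for p in random_sample) / 1_000)   # ~0.50

    # Opt-in panel: only volunteers can be sampled, so the estimate inherits
    # whatever tilt the decision to volunteer carries with it.
    volunteers = [p for p in population if random.random() < p[1]]
    panel_sample = random.sample(volunteers, 1_000)
    print("opt-in panel:", sum(p[0] for p in panel_sample) / 1_000)     # ~0.25

No amount of sample size fixes this: if the urge to volunteer is correlated with the opinion being measured, the opt-in estimate stays biased no matter how many panelists respond.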

However, it is important to remember that traditional telephone surveys now face fundamental challenges of their own. Response rates have fallen to below 30% on the best of public polls (and far lower on many others in the public domain), while the rapid growth of mobile phone usage has reduced the coverage of wired random digit dial (RDD) telephone samples below 90%. Since the true "science" of random probability sampling requires the assumption of 100% coverage and response - a goal that no public opinion poll comes anywhere close to reaching - we have to realize that any survey is now theoretically "flakey."
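To put rough numbers on that, multiply the two shortfalls together (the figures are the round ones cited above, not from any particular poll):

    # Back-of-the-envelope: how far does a phone "probability" sample drift from
    # the 100%-coverage, 100%-response ideal? Round figures from the text above.
    coverage = 0.90       # households reachable through a wired RDD frame
    response_rate = 0.30  # sampled numbers that yield a completed interview

    # Fraction of the population that both had a chance of selection and
    # actually responded -- the only part pure sampling theory fully covers.
    print(f"effective coverage x response: {coverage * response_rate:.0%}")  # 27%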

So how do we evaluate the relative "flakiness" of the competing methods? When it comes to non-probability Internet samples, some of my colleagues in the American Association for Public Opinion Research (AAPOR) like to remind me that (as one put it on Friday), "we can't accept a survey as valid just because we like the results." In other words, echoing Petey's point, if a survey lacks a sound theoretical basis, it should not be trusted just because it produces results that are consistent with other polls.

The problem with that argument, unfortunately, is that our continued trust in traditional surveys (despite their fundamental flaws) rests on studies that evaluate the results. We find reassurance in the way conventional surveys continue to predict election outcomes about as well as in past years. And the most rigorous academic studies -- such as those I saw presented at a workshop sponsored last Friday by the Washington, DC chapter of AAPOR -- find few examples of bias due to low response rates once the samples are weighted demographically. Consider, for example, the influential study conducted by Scott Keeter and his colleagues at the Pew Research Center, which used a side-by-side experiment to compare their standard methodology to a more rigorous design that produced a much higher response rate. The result? "[W]ithin the limits of experimental conditions, non-response did not introduce substantial biases in the estimates."
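The demographic weighting those studies test is, at bottom, simple arithmetic: up-weight the groups a survey reaches too rarely until the sample matches known population targets. A minimal post-stratification sketch, with all categories and figures invented for illustration:

    # Weight each respondent so the sample's demographic mix matches known
    # population targets (e.g., Census figures). Numbers are invented.
    population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
    sample_share     = {"18-34": 0.15, "35-54": 0.40, "55+": 0.45}  # young adults under-respond

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': ~0.67}

    # A weighted estimate then restores under-represented groups to full strength:
    support_by_group = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}  # hypothetical
    weighted = sum(sample_share[g] * weights[g] * support_by_group[g]
                   for g in weights)
    print(f"weighted support estimate: {weighted:.0%}")  # 50%

The catch, of course, is that weighting can only repair bias along the variables you weight on; if non-respondents differ in some way the demographics miss, the bias survives.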

In other words, despite fundamental theoretical flaws in our traditional methods, we still like the results.

As such, those who produce surveys based on non-probability samples are right to ask whether their methods should also be evaluated on their "track record" (as Harris Interactive's Humphrey Taylor argued in an article in the January 15 print edition of the Polling Report). And comparing track records is what we aim to do here at Pollster.

The Zogby "Interactive" polls released in 2006 suffered from more than just theoretical questions. Their results were far more variable and less valid (when compared to election results) than conventional polls. And the public Internet panel surveys conducted during the 2004 U.S. presidential campaign by all sources (Zogby, Harris and YouGov) showed a small but consistent bias toward the Democrats (Professor Franklin and I have been crunching the numbers on 2004 and 2006 and will have some posts to share on this subject very soon).

But not all Internet panel surveys are created equal, and past performance may not be a guarantee of future results - for any survey method. All of which brings me back to the Harris poll on Hillary Clinton. Since my post last week, Time has released its latest survey which, by chance, included a question closer in wording and structure to the Harris item than the surveys I looked at last week.

Time/SRBI - [Asked of 1,102 registered voters only, March 23-26] If the following candidate were to run for president and the election was being held today, how much would you support this candidate: definitely support, probably support, probably not support, definitely not support?

Harris Interactive - [2,223 adults via Internet panel, March 6-14] If Hillary Clinton was the Democratic nominee for President, which is closest to the way you think? I definitely would vote for her, I probably would vote for her, I probably would not vote for her, I definitely would not vote for her, I wouldn't vote at all

[Chart: Hillary Clinton support, Time/SRBI vs. Harris Interactive]

In this case, the Time survey shows a bigger number (46%) in the two support categories than Harris (37%), although two big differences between the measurements (other than sampling) remain. The first is that the Harris item offered respondents the explicit choice of not voting at all, while the Time question offered only four categories of support. The second is that Time asked the question of registered voters only, while Harris asked it of all adults. We could get a closer comparison if we could tabulate the Harris result among self-reported registered voters, but no such results appear in the Harris release.
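One rough way to net out those structural differences is to re-percentage support and opposition on the base of respondents expressing an opinion. A minimal sketch: the support figures (37 and 46) come from the releases quoted above, but the oppose figures below are placeholders, since the full distributions are not reproduced in this post -- substitute the published numbers before drawing conclusions:

    def repercentage(support, oppose):
        """Re-base support/opposition on those expressing an opinion."""
        decided = support + oppose
        return round(100 * support / decided), round(100 * oppose / decided)

    print("Harris   :", repercentage(37, 49))  # oppose figure hypothetical
    print("Time/SRBI:", repercentage(46, 52))  # oppose figure hypothetical

Dropping the undecided and "wouldn't vote" respondents from the base narrows the apparent gap between the two surveys considerably.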

Nonetheless, we have at least a hint here that those who volunteer to participate in Internet panels may be less supportive of Hillary Clinton than respondents interviewed by conventional methods. And that is something we need to keep an eye on.

[Update - Interests disclosed: The primary sponsor of Pollster.com is the research firm Polimetrix, Inc., which conducts online panel surveys.]

Comments
JW:

While your initial comments about online vs. RDD surveys are right on target, the example you provide is not.

To begin with, in this kind of analysis, you should percentage pro and con on a base of those expressing an opinion. This changes the ratios from what you show to HI: 43/57 vs. SRBI: 47/53 (rounded). That's not exactly compelling evidence one way or the other.

Beyond that, you could make the same case that the difference results from one sample consisting only of registered voters, or the two weeks that elapsed between the surveys, or anything else you can think of.
