Pollster.com

Mark Blumenthal: April 1, 2007 - April 7, 2007

Zogby, Hillary and the Judicial Watch Poll


Today brings another controversy involving pollster John Zogby, with two potential lessons: first, about a set of transparently biased and leading questions, and second, about the limits of such efforts to manipulate opinion.

This morning, the Washington Post's Dana Milbank tells the story of a new poll conducted by Zogby Interactive and sponsored by Judicial Watch, a group that "back in the day filed drawers full of lawsuits alleging Clinton corruption." Milbank describes the poll as "rather loaded in its language:"

"Some people believe that the Bill Clinton administration was corrupt," one question begins. In another question about Hillary Clinton, every answer included the word "corrupt," and the question was not asked about other candidates so that a comparison could be made.

The pollster, John Zogby, defended the questions as "balanced" -- a label Fitton [president of Judicial Watch] made no attempt to earn. As he presented the results yesterday, he announced that Bill Clinton's financial conflicts of interest "make the issues of Halliburton and Dick Cheney . . . pale in comparison."

Let's take a look at the first two questions:

304. Some people believe that the Bill Clinton administration was corrupt. Whether or not you believe the Clinton administration was corrupt, how concerned are you that there will be high levels of corruption in the White House if Hillary Clinton is elected President in 2008?

26% Very concerned
19% Somewhat concerned
20% Not very concerned
33% Not at all concerned
1% Not sure

305. When thinking about Hillary Clinton as a politician, which of the following best describes her?

17% Very corrupt
25% Somewhat corrupt
21% Not very corrupt
30% Not at all corrupt
7% Not sure

You can pretty much stop after the first sentence. The suggestion that "some believe the Clinton administration was corrupt" is an obvious effort to lead the respondents to the desired answer. The drumbeat of "corrupt" and "corruption" that follows - implying that the issue is not whether Clinton is corrupt but how much - makes the bias almost comic. MyDD's Jonathan Singer has it exactly right:

[T]he apparently unbalanced wording of the polling conducted by Zogby International belies the notion that the organization is serious about coming up with results that actually reflect the views of the American public rather than just the views of those who paid for its services. To harp on one example, beginning a question on the scruples of a politician by saying that some people believe his or her spouse was corrupt inserts such a bias to void the results of the question -- and perhaps even the questions that follow. Simply put, the questions in the poll were not, as Zogby insists, "balanced."

But this episode also raises a second issue. How effective were these leading questions in producing the desired response? Put another way, did Judicial Watch get their money's worth?

Putting aside the obvious - that a 53% majority is not concerned about corruption in a Hillary Clinton White House - consider how the Zogby results compare to a set of balanced (though somewhat dated) questions about honesty and trust (via Polling Report):

ABC News/Washington Post (May 11-15, 2006. n=1,103 adults) - Please tell me if the following statements apply to Hillary Clinton or not... She is honest and trustworthy

52% applies
42% does not apply
6% unsure

CNN/USA Today/Gallup (Aug. 5-7, 2005. n=1,004 adults) Thinking about the following characteristics and qualities, please say whether you think each applies or doesn't apply to Hillary Clinton. How about...Is honest and trustworthy?

53% applies
43% does not apply
4% unsure

So a year (or more) ago, roughly the same percentage of Americans considered Hillary Clinton "honest and trustworthy" as expressed little or no concern about Clinton corruption in the Zogby/Judicial Watch survey. While the comparison is obviously imperfect, the lesson here may be that well-developed opinions tend to be more resistant to manipulation by leading questions. If you are convinced that Hillary Clinton is honest (or dishonest), the leading language is unlikely to alter your answer either way.

Don't get me wrong. I am not defending the Zogby questions, which are obviously and comically biased. However, the similarity of these results to those of fairly worded questions about honesty and trust suggests that opinions toward Hillary Clinton are well developed and resistant to manipulation. Voters have a pretty clear sense of who Hillary Clinton is, and those opinions may be difficult for either Clinton or her foes to change.

UPDATE: Nancy Mathiowetz, the president-elect of the American Association for Public Opinion Research (AAPOR), just sent out the following release concerning the Zogby/Judicial Watch poll (interests disclosed - I serve on AAPOR's Executive Council):

It's always disappointing when pollsters who are internationally known and widely quoted engage in practices that are so clearly out of line with industry standards -- like using loaded and biased questions. There's no other way to describe the questions in the Zogby poll performed for Judicial Watch.

The good news is that it did not fly under the radar -- The Washington Post was quick to point out the flagrant disregard for accepted survey standards in the poll. A number of blogs whose authors are well versed in industry best practices and standards also wrote about the poll.

Industry standards, including the American Association for Public Opinion Research's Best Practices, make it clear that the manner in which questions are asked, as well as the response categories provided, can greatly affect the results of a survey.

That's why question wording and order are some of the toughest parts of designing a good survey or poll, and thoughtful practitioners will spend a significant amount of time trying to ensure that they are balanced, simple, direct and clear.


Nancy Mathiowetz
President-elect,
American Association for Public Opinion Research


Taylor's Column on Internet Polling

Topics: Internet Polls

In my post on Monday, I mentioned an article by Humphrey Taylor of Harris Interactive that appeared in the recent subscriber-only print edition of the Polling Report. We asked nicely, and the powers-that-be at the Polling Report have provided free access to Taylor's piece.

Taylor's piece is obviously not the last word on Internet polling, but his argument is well worth the click. The debate over "non-probability" sampling and Internet methodology is certainly one worth having.

Finally, in that Monday item, I neglected to include one bit of full disclosure that should accompany any commentary here on Internet polling: The primary sponsor of Pollster.com is the research firm Polimetrix, Inc. which conducts online panel surveys.


Diageo-Hotline's Generic Presidential Vote

Topics: 2008 , The 2008 Race

Last week I received a great question from astute reader JG:

I'm an avid but amateur follower of presidential polling, and I've been wondering why all the polls to date, at least the ones reported here and at pollingreport.com, are horse-race types, matching each of the presidential candidates against each other. Why haven't any firms asked a question about a generic Democrat-Republican matchup, that is, whether voters are likely to vote for the Democratic or a Republican presidential candidate, whoever that is? Wouldn't that be more informative about the state of voters than all the horse-race questions?

While I thought this was a good question to put to the media pollsters that conduct most of the national surveys, I emailed JG back that my guess was that most prefer to use candidate names whenever feasible. The main reason they ask a "generic" vote at all is that trying to identify and administer 435 different congressional match-ups is simply too complex a task for an RDD telephone survey.

Well, the Diageo/Hotline poll is about to provide JG with a better answer. The Hotline is telling its paid subscribers that it will release a survey later today that will show "a generic Dem candidate leading a generic GOP candidate 47%-29% in a WH '08 matchup." Results should be posted at diageohotlinepoll.com sometime this afternoon.

Regular readers will recall that I have never been a big fan of the generic vote (see commentary here, here and here). This far out, I believe that a generic vote question tells us mostly about the way voters perceive the national political parties. While those images apparently give the Democrats a huge early advantage - a finding that is certainly informative about the voters' current attitudes - the ultimate nominees of each party and their campaign messages will likely reshape those images. So, for my money, the generic vote remains something of questionable value in tracking where the race will be in 18 months. But for now (or whenever the Hotline posts the numbers)...have at it.

PS: The comment by reader Tlaloc below reminds me that one pollster - the Democracy Corps project led by Democrat Stan Greenberg - did create a "generic" vote question that inserted candidate names into a sample of 50 competitive congressional districts (see Q27). Greenberg's question actually debuted on an NPR survey he conducted along with Republican Glenn Bolger (that I wrote about here). Most pollsters now conduct surveys using "computer-assisted telephone interviewing" (CATI) software that makes it feasible, with a bit of programming, to insert candidate names for different districts - a sketch of the idea follows below.
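For the technically curious, the programming involved really is simple. Here is a minimal sketch in Python - with invented district codes and candidate names, not the Democracy Corps setup - of how a CATI script might fill a question template from the district attached to each respondent's record:

    # Minimal sketch: insert candidate names into a question template
    # keyed by congressional district. All data here are hypothetical.
    QUESTION_TEMPLATE = (
        "If the election for U.S. House were held today, would you vote for "
        "{dem}, the Democrat, or {rep}, the Republican?"
    )

    # Hypothetical lookup table; a real CATI system would pull this
    # from the sample file attached to each respondent's record.
    CANDIDATES = {
        "PA-06": {"dem": "Jane Smith", "rep": "John Jones"},
        "CT-04": {"dem": "Mary Davis", "rep": "Robert Brown"},
    }

    def render_question(district):
        # Fill the template with the candidates for this respondent's district.
        names = CANDIDATES[district]
        return QUESTION_TEMPLATE.format(dem=names["dem"], rep=names["rep"])

    print(render_question("PA-06"))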

The bigger limitation has to do with the sample. Greenberg's 50-district sample used 50 small samples drawn from registered voter lists, so it was a relatively simple matter to match telephone numbers to districts. However, most national surveys use a random digit dial (RDD) method that picks telephone numbers at random from working telephone exchanges (the first three digits of the seven-digit phone number). Since phone exchanges do not correspond neatly with congressional district boundaries, it is impossible to precisely match telephone numbers with districts in an RDD sample.
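To make the mismatch concrete, here is a hedged sketch of the RDD step (the exchange list and district mapping are invented for illustration): the sampler knows each generated number's exchange, but an exchange that straddles district boundaries leaves the respondent's district ambiguous.

    import random

    # Invented exchange list; a real RDD design draws from a
    # commercial database of working exchanges.
    WORKING_EXCHANGES = ["202-544", "202-546", "215-493"]

    # An exchange can straddle district boundaries, so at best we know
    # a list of *possible* districts (again, invented for illustration).
    EXCHANGE_TO_DISTRICTS = {
        "202-544": ["DC-AL"],
        "202-546": ["DC-AL"],
        "215-493": ["PA-08", "PA-06"],  # ambiguous: straddles two districts
    }

    rng = random.Random(2007)
    exchange = rng.choice(WORKING_EXCHANGES)            # sample an exchange
    number = f"{exchange}-{rng.randint(0, 9999):04d}"   # randomize last four digits
    print(number, "could be in:", EXCHANGE_TO_DISTRICTS[exchange])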


Order Effects in The Fox News Poll?


A series of unusual questions near the end of a Fox News/Opinion Dynamics survey released last week generated criticism from the left side of the blogosphere (see Jonathan Singer on MyDD, TPM Cafe, Carpetbagger Report, Keith Olbermann via Crooks and Liars). As Carpetbagger's Steve Benen puts it, "FNC [Fox News Channel] is practically offering professors of quantitative analysis case studies in what not to do in a poll." He has a point. The wording and ordering of some of the questions in the Fox News survey are a lesson in things pollsters should avoid.

Let's start near the end of the survey with a question about the recent congressional vote on the Iraq War (the Fox poll was fielded March 27-28 among 900 registered voters nationwide):

40. Last week the U.S. House voted to remove U.S. troops from Iraq by no later than September 2008 -- would you describe this as a correct and good decision or a dangerous and bad decision?

44% Correct and good
45% Dangerous and bad
11% (Don't know)

Now compare these results to findings from four other recent surveys (each sampled all adults rather than registered voters - text and results via Polling Report):

Newsweek Poll, March 28-29, 2007, n=1,004 adults - Do you support or oppose the legislation passed this week by the U.S. Senate calling for the withdrawal of U.S. troops from Iraq by March 2008?

57% Support
36% Oppose
7% Unsure

CBS News Poll, March 26-27, 2007, n=831 adults - Do you think the United States should or should not set a timetable for the withdrawal of U.S. troops from Iraq that would have MOST troops out by September 2008?

59% Should
37% Should not
4% Unsure

USA Today/Gallup, March 23-25, 2007, n=1,007 adults - Would you favor or oppose Congress taking each of the following actions in regards to the war in Iraq? How about setting a time-table for withdrawing all U.S. troops from Iraq no later than the fall of 2008?

60% Favor
38% Oppose
2% Unsure

Pew Research Center, March 22-25, 2007, n=1,245 adults - And thinking about a specific proposal: The Congress is now debating future funding for the war in Iraq. Would you like to see your congressional representative vote FOR or AGAINST a bill that calls for a withdrawal of troops from Iraq to be completed by August of 2008?

59% Vote For
33% Vote Against
8% Unsure

So we have four slightly different questions from four different survey organizations, asked during roughly the same period, that provide very similar results. Support for the congressional vote to withdraw troops from Iraq in 2008 varies between 57% and 60%. But the Fox News poll shows only 44% support. Why the difference?

One contributing factor may be the oddly double-barreled answer categories on the Fox question, which imply "danger" in the congressional vote. Put another way, the Fox item asks two questions: Was the vote "good" or "bad," and was it "correct" or "dangerous?" Good pollsters avoid double-barreled questions, but even then, the wording of this question is odd. "Dangerous" is not exactly the converse of "correct." Some Americans may see the vote as both good and dangerous.** How should they answer the question?

But the bigger potential problem has to do with question order. Consider the question that Fox asked just before the item on the Congressional vote:

39. Who do you trust more to decide when U.S. troops should leave Iraq -- U.S. military commanders or Members of Congress? (ROTATE)

69% Commanders
18% Congress
7% (Both)
3% (Neither)
3% (Don't know)

40. Last week the U.S. House voted to remove U.S. troops from Iraq by no later than September 2008 -- would you describe this as a correct and good decision or a dangerous and bad decision?

44% Correct and good
45% Dangerous and bad
11% (Don't know)

While I can only speculate, I suspect that the order of these two questions primed respondents with the notion that the House vote was at odds with the recommendations of military commanders (although the initial item said nothing about what those commanders actually recommend). The only way to know for certain would be to conduct a controlled experiment that divides respondents into two random samples, asking Q39 first for half the respondents and Q40 first for the other half. And even that sort of experiment would tell us nothing about the potential impact of the 38 questions that came before.
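For illustration, here is a minimal sketch (in Python, with the details invented for this example) of the random split such an experiment requires; comparing the Q40 marginals across the two conditions would estimate the size of the order effect:

    import random

    def assign_question_order(respondent_ids, seed=42):
        # Randomly split respondents into two conditions: one hears
        # Q39 before Q40, the other hears Q40 first.
        rng = random.Random(seed)
        ids = list(respondent_ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        return {
            "Q39_first": ids[:half],  # primed by the "commanders" item
            "Q40_first": ids[half:],  # hears the House-vote item cold
        }

    conditions = assign_question_order(range(900))  # Fox's n=900, for scale
    print(len(conditions["Q39_first"]), len(conditions["Q40_first"]))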

Academic methodologists have conducted many such experiments over the years showing that the order of questions can sometimes affect results, often in very subtle and unexpected ways. In fact, the academic evidence on these "order effects" tends to involve far more subtle examples of priming than the one theoretically at work above (for examples, see Chapter 2 of the seminal text by Howard Schuman and Stanley Presser, Questions and Answers in Attitude Surveys).

I have sympathy for the task media pollsters often face in ordering questions on long surveys covering a wide range of topics that change from month to month. They seem to struggle to resolve goals that often come into conflict: On the one hand, they want to preserve the order of questions asked previously. On the other hand, inserting new items can risk an order effect in which the older items asked earlier bias the new questions inserted later. So they often face no-win choices.

In this case, however, the intent of the author of these questions seems clear. Check the MoveOn.org question (Q36) that nearly all the liberal bloggers cited as a self-evident example of partisan bias:

36. After the 2004 presidential election, the president of the left-wing Moveon.org political action committee made the following comment about the Democratic Party, "In the last year, grassroots contributors like us gave more than $300 million to the Kerry campaign and the DNC, and proved that the Party doesn't need corporate cash to be competitive. Now it's our Party: we bought it, we own it and we're going to take it back." Do you think the Democratic Party should allow a grassroots organization like Moveon.org to take it over or should it resist this type of takeover?

16% Yes, allow a grassroots organization to run the party
61% No, don't allow a grassroots organization to run the party
24% Don't know

Could this clearly partisan "message test" have had an order effect of its own on the subsequent questions on the congressional vote on Iraq? Your guess is as good as mine, but either way, the items at the end of this Fox News survey are not exactly "by the book."

**See, for example, the last two questions on this same Fox poll. On Q41, 59% agree that the "killing or capturing of Usama bin Laden" would be a "huge" or "major" accomplishment. But on Q42, 47% go on to say that bin Laden's capture would also "encourage additional terrorist attacks," while only 24% say it would "discourage" further terrorism. Many Americans apparently consider that outcome both desirable and dangerous.


Because You Asked...

Topics: Internet Polls

Pollster reader Petey posted a critical and incisive comment to my post last week showing that a new Harris Interactive Internet panel survey produced ratings of Hillary Clinton that were not wildly inconsistent with other recent conventional surveys:

Am I missing something, Mark?

Just because the Harris results are somewhat in line with random sampling poll results doesn't mean they should be treated as a real data point.

The Zogby Interactive results in the last election likewise were somewhat in line with other results, but they were still basically un-useful as polls.

No matter how good the weighting, self-selected polling is always going to be fundamentally flakey, no?

Petey -- assuming this is the same guy who often leaves astute comments on Pollster and Mystery Pollster -- does not miss much. His pointed question made me realize that a few lines of that post could have been written better. Specifically, by writing that a side-by-side test of the Clinton item using both the Internet panel and a traditional phone survey might "help resolve the sampling question," I implied that such a test might establish the validity of the Harris method. What I meant, more narrowly, was that it might resolve whether the Harris method produced a sample more hostile to Hillary Clinton in this instance (although since that post we now have another data point that allows a better comparison - see below). The larger issue of the merits of Internet panel surveys will not be resolved by any one test or blog item.

The problem with Internet panel surveys generally is their departure from random sampling. Traditional surveys begin by drawing a random sample from a "frame" (the pool of all wired telephone numbers, for example) that allows all or most of the population of interest a chance of being selected. Internet panels draw their samples from a pool of individuals who have volunteered to participate in online surveys, usually by responding to a banner advertisement. That fundamental difference, to answer Petey's question, is something that should make us more skeptical of Internet panel surveys generally.

However, it is important to remember that traditional telephone surveys now face fundamental challenges of their own. Response rates have fallen to below 30% on the best of public polls (and far lower on many others in the public domain), while the rapid growth of mobile phone usage has reduced the coverage of wired random digit dial (RDD) telephone samples below 90%. Since the true "science" of random probability sampling requires the assumption of 100% coverage and response - a goal that no public opinion poll comes anywhere close to reaching - we have to realize that any survey is now theoretically "flakey."

So how do we evaluate the relative "flakiness" of the competing methods? When it comes to non-probability Internet samples, some of my colleagues in the American Association for Public Opinion Research (AAPOR) like to remind me that (as one put it on Friday), "we can't accept a survey as valid just because we like the results." In other words, echoing Petey's point, if a survey lacks a sound theoretical basis, it should not be trusted just because it produces results that are consistent with other polls.

The problem with that argument, unfortunately, is that our continued trust in traditional surveys (despite their fundamental flaws) rests on studies that evaluate the results. We find reassurance in the way conventional surveys continue to predict election outcomes about as well as in past years. And the most rigorous academic studies -- such as those I saw presented at a workshop sponsored last Friday by the Washington DC chapter of AAPOR -- find few examples of bias due to low response rates once the samples are weighted demographically. Consider, for example, the influential study conducted by Scott Keeter and his colleagues at the Pew Research Center, which used a side-by-side experiment to compare their standard methodology to a more rigorous design that produced a much higher response rate. The result? "[W]ithin the limits of experimental conditions, non-response did not introduce substantial biases in the estimates."

In other words, despite fundamental theoretical flaws in our traditional methods, we still like the results.
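A brief technical aside for readers wondering what "weighted demographically" means in practice: here is a minimal post-stratification sketch, with invented population targets and a single weighting variable, in which each respondent's weight is the cell's population share divided by its share of the completed sample:

    from collections import Counter

    # Invented census-style targets: each cell's share of the adult population.
    POPULATION_SHARES = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

    def poststratify(respondent_cells):
        # Weight each respondent so the weighted sample matches the
        # population shares (a one-variable version of demographic weighting).
        n = len(respondent_cells)
        sample_shares = {c: k / n for c, k in Counter(respondent_cells).items()}
        return [POPULATION_SHARES[c] / sample_shares[c] for c in respondent_cells]

    # A toy sample that under-represents younger adults:
    cells = ["18-34"] * 10 + ["35-54"] * 50 + ["55+"] * 40
    weights = poststratify(cells)
    print(round(weights[0], 2))  # prints 3.0: young respondents weighted up

Real media polls weight on several variables at once (often by iterative raking), but the principle is the same.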

As such, those who produce surveys based on non-probability samples are right to ask whether their methods should also be evaluated on their "track record" (as Harris Interactive's Humphrey Taylor argued in an article in the January 15 print edition of the Polling Report). And comparing track records is what we aim to do here at Pollster.

The Zogby "Interactive" polls released in 2006 suffered from more than just theoretical questions Their results were far more variable and less valid (when compared to election results) than other conventional polls. And the public Internet panel surveys conducted during the 2004 U.S. presidential campaign by all sources (Zogby, Harris and YouGov) all showed a small but a consistent bias toward the Democrats (Professor Franklin and I been crunching the numbers on 2004 and 2006 and will have some posts to share on this subject very soon).

But all Internet panel surveys are not created equal, and past performance may not be a guarantee of future results - for any survey method. All of which brings me back to the Harris poll on Hillary Clinton. Since my post last week, Time has released its latest survey, which, by chance, included a question closer in wording and structure to the Harris item than the other surveys I looked at last week.

Time/SRBI - [Asked of 1,102 registered voters only, March 23-26] If the following candidate were to run for president and the election was being held today, how much would you support this candidate: definitely support, probably support, probably not support, definitely not support?

Harris Interactive - [2,223 adults via Internet panel, March 6-14] If Hillary Clinton was the Democratic nominee for President, which is closest to the way you think? I definitely would vote for her, I probably would vote for her, I probably would not vote for her, I definitely would not vote for her, I wouldn't vote at all

[Chart: Hillary Clinton support, Time/SRBI vs. Harris Interactive]

In this case, the Time survey shows a bigger number (46%) in the two support categories than Harris (37%), although two big differences between the measurements (other than sampling) remain. The first is that the Harris item offered respondents the explicit choice of not voting at all, while the Time question offered only four categories of support. The second is that Time asked the question of registered voters only, while Harris asked it of all adults. We could get a closer comparison if we could tabulate the Harris result among self-reported registered voters, but no such results appear in the Harris release.
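Had Harris released a respondent-level file, that tabulation would be trivial. Here is a hedged sketch, with invented field names and data (the actual release reports only topline percentages among all adults), of the filter-and-tabulate step:

    from collections import Counter

    # Invented respondent-level records for illustration only.
    respondents = [
        {"registered": True,  "clinton": "probably would"},
        {"registered": False, "clinton": "definitely would not"},
        {"registered": True,  "clinton": "definitely would"},
        {"registered": True,  "clinton": "probably would not"},
    ]

    def tabulate_registered(rows):
        # Distribution of the Clinton item among self-reported registered
        # voters only, mirroring the Time/SRBI base.
        answers = [r["clinton"] for r in rows if r["registered"]]
        return {a: round(100 * k / len(answers))
                for a, k in Counter(answers).items()}

    print(tabulate_registered(respondents))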

Nonetheless, we have at least a hint here that those who volunteer to participate in Internet panels may be less supportive of Hillary Clinton than respondents interviewed by conventional methods. And that is something we need to keep an eye on.

[Update - Interests disclosed: The primary sponsor of Pollster.com is the research firm Polimetrix, Inc. which conducts online panel surveys].


 
