Mark Blumenthal | August 27, 2009
Topics: Health Care Reform, Measurement, Public Option
Is there a right way to "poll the public option?" Are most pollsters "getting it wrong," while only a few ask the "perfect question" about the much debated health care reform proposal to create a "public option?" Nate Silver, in a post earlier this week, argues just that and suggests "five essential ingredients" for a good poll on the public option.
While I agree with some of Nate's observations, I have to disagree with his underlying premise. When it comes to testing reactions to complex policy proposals, I would rather have 10 pollsters asking slightly different questions and allowing us to compare and contrast their results than trying to settle on a single "perfect question" that somehow captures the "truth" of public opinion. On an issue as complicated and poorly understood as "public option," that sort of polling perfection is neither attainable nor desirable. In this case, public opinion does not boil down to a single number.
Before tackling Silver's argument, let's start with a related discussion that takes a different approach to the challenges of polling the public option, also posted on Monday, by ABC News polling director Gary Langer. He gathered results and full text (PDF) of eight questions asked on recent national polls that attempt to measure support for the "public option" and found support ranging from a high of 66% (on the late July CBS/New York Times poll) to a low of 43% (on the recent NBC/Wall Street Journal poll).
Langer's post reviews possible explanations for the variation, including question structure (whether it appears as a "stand alone" question or within a list), the way "polling technique" can influence the undecided level, the potential for question order effects and, of course, the differences in question language. For example, references to the plan as "similar to Medicare" appear to create greater support, although as Langer notes, a split-sample experiment conducted by the Kaiser Family Foundation in June found "no significant overall difference when it mentioned Medicare" (details here).
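Split-sample experiments like Kaiser's test a wording effect by giving each random half-sample a different question version and comparing the two support rates. A standard way to judge whether the gap could be chance is a two-proportion z-test. The sketch below uses hypothetical counts (Kaiser's cell-level tallies are not reproduced here), so the specific numbers are illustrative only:

```python
import math

def two_prop_z(support_a, n_a, support_b, n_b):
    """Two-proportion z-statistic for a split-sample wording experiment."""
    p_a, p_b = support_a / n_a, support_b / n_b
    pooled = (support_a + support_b) / (n_a + n_b)          # pooled support rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical tallies: half-sample A hears "similar to Medicare",
# half-sample B hears the same question without that phrase.
z = two_prop_z(support_a=310, n_a=520, support_b=296, n_b=515)
# |z| < 1.96 means the wording gap is not significant at the 5% level,
# consistent with the "no significant overall difference" finding.
```

A finding of "no significant difference" in such a test means only that this particular contrast, at these sample sizes, could plausibly be sampling noise.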
He also highlights the many variations in the way these questions describe the role of government, such as "government run," "government sponsored," "government created" or "administered by the government." These varying approaches, he writes, "underscore the challenges in polling on health care reform; it's tough to come up with wording that precisely portrays proposals that themselves haven't been clearly defined."
Langer's bottom line is that the wide variability of the polling data confirms an argument he has made previously: "public opinion on health care reform has long been highly malleable" and "pushback works." "Malleable" is also the word the analysts at the Kaiser Family Foundation chose to describe views on the public option as measured by a series of questions on their July tracking poll. After finding 59% in favor of "creating a government-administered public health insurance option similar to Medicare to compete with private health insurance plans," the Kaiser pollsters followed up with "arguments commonly heard in the debate" and asked respondents if they would "still favor" or "still oppose" the plan as described. They found that one-sided arguments pro and con could push support for such a plan as low as 35% or as high as 72%.
We often hear pollsters use words like "malleable" and "fluid" to explain wide poll-to-poll variation, and many readers either find those explanations confusing or hear them as a cop-out for "bad" work. Those of us who spend our time deeply engaged in public policy debates often feel frustrated with the wording of surveys that fail to conform to our own sense of the substance of the debate. And that brings me back to Nate Silver's post.
He presents "five essential ingredients in conducting a good poll on the public option:"
1. Make clear that the 'public option' refers unambiguously to a type of health insurance, and not the actual provision of health care services by the government.
2. Make clear that by "public", you mean "government".
3. Avoid using the term 'Medicare' when referring to the public option.
4. Make clear that the public option is, in fact, an option, and that private insurance is also an option.
5. Ask in clear and unambiguous terms whether the respondent supports the public option -- not how important they think it is.
The post fleshes out each item in more detail, and there are certainly elements I agree with. For example, if our goal is to measure support for a proposal, better to ask respondents if they support it rather than whether they consider it "important." I also agree that the word "government" conveys the reality of the public option more clearly than the word "public." Generally speaking, of course, survey questions should always try to describe policy proposals accurately.
But that said, Silver's recommendations still involve a lot of subjective judgments. What I disagree with is that these various judgments add up to an objectively "perfect" question. To explain why, let's step back a bit and think about different kinds of questions we could ask respondents.
First, consider some entirely "factual" questions we might ask: How old are you? Are you male or female? What is the last grade you completed in school? Do you own a car? Do you own a cell phone? In each case, most people have a ready answer. For a respondent to answer these questions on a poll, the process is mostly about retrieving an answer from memory and selecting the most appropriate answer category.
Attitude questions can probe similarly real, currently held opinions: Do you like your car (if you have one)? Do you like your cell phone? Do you have a positive or negative impression of Barack Obama? Fitting that opinion to the pollster's sometimes-vague answer categories may get a little fuzzy (e.g. "somewhat favorable"), but here again, most people have real, ready opinions to share.
On the other hand, consider this question: Should we repeal the 1975 Public Affairs Act? Very, very few Americans could have a pre-existing opinion, since no such act ever passed. Yet that did not stop 30% to 40% from agreeing or disagreeing that the non-existent Act should be repealed in survey experiments conducted by George Bishop and his University of Cincinnati colleagues in the mid-1970s.
What Bishop and others have learned over the years is that survey respondents work hard to answer questions and that they frequently form those answers on the spot based on underlying values tapped by cues in the question language. Twenty years later, for example, the Washington Post's Richard Morin modified the experiment and found that when the question informed respondents that either "President Clinton" or the "Republicans in Congress" wanted to repeal the non-existent law, responses polarized along partisan lines.
So how many Americans are familiar enough with the "public option" to have real, pre-existing opinions about it? I am guessing very few, but unfortunately, few pollsters have tried to tackle that particularly challenging question. An AARP sponsored survey released just yesterday claims to have an answer ("only 37% able to identify" public option), but the sparse details available about its non-random internet-panel methodology and the nature of the question (anyone guessing would have a one-third chance of choosing the right answer) suggest we should interpret it with extreme caution. If anything, their result probably overstates true familiarity with the public option.
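The "one-third chance" caveat can be made concrete with the standard correction-for-guessing model: if a fraction f of respondents truly know the answer and everyone else guesses blindly among k options, the observed correct rate is f + (1 - f)/k. Solving for f shows how much a multiple-choice format can inflate apparent familiarity. The three-option format and pure-guessing assumption below are simplifications for illustration, not a reconstruction of the AARP survey's actual design:

```python
def true_familiarity(observed, k):
    """Invert observed = f + (1 - f)/k to recover the truly-familiar share f.

    Assumes every unfamiliar respondent guesses uniformly among k choices,
    which is a deliberate simplification of real answering behavior.
    """
    chance = 1.0 / k
    return (observed - chance) / (1.0 - chance)

# 37% "able to identify" the public option, with an assumed three choices:
f = true_familiarity(observed=0.37, k=3)
# f comes out to roughly 0.055 -- only about 5-6% demonstrably familiar
# under this model, far below the 37% headline figure.
```

Under that (admittedly crude) model, the 37% headline number is consistent with genuine familiarity in the single digits, which is why the figure likely overstates how many Americans really know what the public option is.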
For the sake of argument, let's assume that only about a third of Americans are familiar with the public option and have an opinion, pro or con. What do we hope to achieve by providing a very brief description and asking a random sample of all Americans whether they favor or oppose it? Are we trying to measure current attitudes or predict what views might be in the future? Or are we aiming for something in between, treating the poll like a jury panel, chosen to weigh new information and render a verdict as a proxy for the larger population?
If we are trying to describe current attitudes, then virtually every "public option" poll vastly overstates both support and opposition to the public option. "Don't know" is most likely the true prevailing view of the public option.
If we are either trying to predict future opinion or, as I think is more commonly the case, expecting the survey respondents to quickly absorb new information and render judgments on behalf of all Americans, then the task of settling on a "perfect" question involves a lot of hugely subjective decisions. How much information is enough? How much is too much? What information is "fair and balanced," what is leading?
Silver tells us, for example, that poll questions should make clear that the public option "is, in fact, an option" along with private insurance. On the other hand, he argues that likening the proposal to Medicare may be leading and, as such, provides "too much information."
Fair enough. But if "choice" is important, why not clarify that the "option" would only be available to those under-65 Americans currently without health insurance? And why stop there? If our goal is to fairly educate our jury, why not include the price tag or the need for some sort of tax increase to pay for it? Why stop at a single sentence? Why not present more complete arguments, pro and con?
My point here is not to take a side on any of these questions but to point out that reasonable people might come to different conclusions about how much information is necessary, and about what is fair and what is leading.
Four years ago, I waded into a similar discussion of polling about the Terri Schiavo controversy and, much as Nate is doing, tried to referee the "right" way to ask about the issue. An old friend and academic sent this very persuasive comment that applies just as well to this conversation about polling the public option. I will let him have the last words:
Mark, I think that your discussion here implicitly endorses a commonly held error about the best way to interpret polling data about matters of public interest . . .
The error is the incorrect belief that there is a "right" or "unbiased" way to ask a question about any given public issue. There is no such thing. Everyone who works within the polling field is well aware that small changes in wording can affect the ways in which respondents answer questions. This approach leads us into tortuous discussions of question wording on which reasonable people can differ. Further, as you have pointed out many times in the past, random variation in the construction of the sample or in response rates can skew the results of any single poll away from the true distribution of opinions in the population.
So how do we look at public opinion on an issue such as the Schiavo case? The answer is NOT to find a single poll with the "best" wording and point to its results as the final word on the subject. Instead, we should look at ALL of the polls conducted on the issue by various different polling organizations. Each scientifically fielded poll presents us with useful information. By comparing the different responses to multiple polls -- each with different wording -- we end up with a far more nuanced picture of where public opinion stands on a particular issue. If we can see through such comparisons that stressing different arguments or pieces of information produces shifts in responses, then we have perhaps learned something. Like our own personal opinions, public opinion is not some sort of simple yes/no set of answers; it is complex, and it can see both sides of complicated issues when presented with enough information.
If we were to lock pollsters of all partisan persuasions in a room and force them to pick the "best" question wording on the Schiavo issue, we might end up with everyone asking the same question, but overall we would end up with less information about public opinion, not more. We are better off having the wide variety of different polls, with questions stressing different points of view on the issues, and then comparing them all to one another.