

Dispatches: Trusting What People Say in Polls

This post is part of Pollster.com's week-long series on Stan Greenberg's new book, Dispatches from the War Room.


I'm pleased to join this conversation and to have received Stan Greenberg's book, which is a fascinating insider's look at the way polls were used in five major situations. It's great history by itself, and I would strongly recommend the book to anyone at all interested in current events and/or the role of polling consultants in public policy.


I have several comments I would like to make at some point, but this one will focus on one issue raised by Mark Blumenthal and subsequently addressed by Greenberg himself.


Mark pointed to Greenberg's query at one point in the book, where the author asked how much he could trust his own polls on public policy matters. To make it easier for the reader to follow this conversation, I will reproduce what Mark wrote:


...the most candid observation from the book concerns Greenberg's admission that his focus groups and polls misled him on the question of whether Israeli voters would ever accept a division of Jerusalem. At first, two-thirds of voters in his surveys said "it was unacceptable to have a Palestinian state with its capital in East Jerusalem." When Greenberg "saw no movement" when he presented arguments for the division in surveys, and voters "nearly cried" in focus groups, insisting that dividing Jerusalem would be "like taking away your beloved child," Greenberg advised his client that such a policy was a "dead end." [emphasis added]


Yet four weeks after Ehud Barak put that option on the bargaining table at Camp David, despite a negative approval rating and strong opposition in parliament, a majority of Israelis were ready to "go with him" on Jerusalem in Greenberg's polling. Thus Greenberg raises a critical question:


If I cannot believe what people tell me is unacceptable in my surveys on Jerusalem, then what of my findings on other subjects? Why can't a determined leader change these too?


Greenberg's question about the changeability of public opinion is an important one, because much of what polls present to us today appears to be firm public opinion - when in fact we know that for many people, polls measure only the most ephemeral of views. What's lacking in most polls is any measure of intensity.


My question to Greenberg would be: how intensely did the poll respondents feel about the "unacceptable" response they gave to the interviewers? Greenberg gives no indication that he measured it, though he did get an idea of intensity in the focus groups when participants "nearly cried" about the proposed division of Jerusalem.


Still, as we know, focus groups are poor indicators of general public opinion. The people who are willing to participate are no doubt more engaged in issues than people who can't be bothered (or paid) to participate, and the focus group experience itself can be intense - making the participants completely unrepresentative of the general population.


Greenberg is, of course, quite aware of such limits, readily acknowledging them in his book. Still, as we all know, it's very difficult to ignore numbers and visceral experiences even when we know they are technically unrepresentative of the larger population. The high emotion of the focus group participants could well have made it seem impossible that voters at large could change their minds.


In a series of experiments that Jeff Jones and I designed while I was at Gallup,[1] we discovered that on any given issue about 40 percent to 60 percent of the public had a "permissive" opinion. Though many people may have initially expressed a preference for a policy (saying that they either favored or opposed it), those with a "permissive" opinion then admitted (in response to a follow-up question) that they would not be "upset" if the government did the opposite of what they had just said. This does not mean that later on, once a policy has been implemented, those same people will not hold the leaders accountable if it doesn't work. But it does mean that political leaders have tremendous leeway in what they initially decide to do.
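The pairing described above - a directional question followed by an "upset?" probe - can be sketched in code. This is purely an illustration of the classification logic, not Gallup's actual instrument; the function and field names are my own invention.

```python
# Illustrative sketch (assumed names, not the actual Gallup instrument):
# pair a directional question with an "upset if the government did the
# opposite?" follow-up, and classify each respondent accordingly.
from collections import Counter

def classify(position, upset_if_opposite):
    """position: 'favor' or 'oppose'.
    upset_if_opposite: True if the respondent would be upset by the
    opposite policy - i.e., the opinion is held with intensity."""
    if position in ("favor", "oppose") and upset_if_opposite:
        return f"strongly {position}"
    return "permissive"  # directional answer, but thinly anchored

# Hypothetical responses: (position, upset_if_opposite)
responses = [
    ("favor", True), ("favor", False), ("oppose", True),
    ("oppose", False), ("favor", False),
]

tallies = Counter(classify(p, u) for p, u in responses)
for group, n in tallies.items():
    print(f"{group}: {n / len(responses):.0%}")
```

On this toy data, a poll reporting only the directional question would show "favor" leading 3-to-2, while the intensity-adjusted view shows one strong supporter, one strong opponent, and a three-fifths permissive middle.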


And it also means that on most public policy issues, leaders can probably do pretty much what they want to without being held accountable.


Greenberg refers to that phenomenon when he talks about the importance of public opinion to legislators (p. 395). "The fundamental lesson is that people matter because elections matter. You could only think otherwise if you haven't spent any time close to elected officials or candidates for office...The antics of the Republicans in Washington over the past decade seemed to challenge that presumption when they ignored overwhelming public sentiment on Clinton's impeachment, taxes, and the Iraq war, and escaped accountability at the polls."


What I'm suggesting here is that the "overwhelming public sentiment" described by Greenberg was not, in fact, overwhelming. The polls are misleading us. Such sentiments as Greenberg mentions may be widespread, but thinly anchored. And whether we like it or not, from a democratic point of view, it means that politicians can often get away with ignoring what appears to be a public consensus.


The Iraq War is a good example, though on the opposite side of the issue from what Greenberg is discussing. Did Americans "overwhelmingly" support the war before President Bush launched it? The polls all said yes, by about two-to-one margins or greater. And it would appear that most Democratic Senators, who might have been expected to oppose the war, were influenced by these polls to support the war resolution. But after measuring intensity on that issue, Jeff Jones and I discovered that the public was in fact evenly divided - three in ten strongly supporting the war, three in ten strongly opposed, and a plurality - four in ten - with a "permissive" opinion (not upset either way, whether the United States went to war or not). How different the political climate in Washington might have been had this picture of public opinion prevailed, instead of the erroneous depiction of a public hankering for war.


While Gallup has not measured intensity on this issue recently (nor have other pollsters), I suspect that even when it comes to withdrawing troops - which Bush refused to do - the public is more divided than unified. The point is that the public is much more in the middle of an issue, and thus willing to defer to its leaders, than the polls tell us, because most polls ignore the intensity with which people hold their poll-expressed views.


In the conclusion to his book, Greenberg writes that despite his ability, post hoc, to explain why his poll results about Jerusalem might have misled him (p. 422),


...it does not change the question I now must face whenever I see a survey result that sets such dramatic limits on what is possible. How do you know that people will not rethink their starting points? How do you know they will not be moved by a deliberative process that thinks about the problem in new ways?...How do you know you won't discourage a less fearless leader from chancing to be bold?


I do not have an answer to this question, other than to constantly remind myself that opinion is changeable, that I must always simulate changing circumstances, and that I should be wary of telling a leader the public will not join him or her in this.


Like Greenberg, I think there is a much larger portion of the public in the middle of any given issue than might be at first assumed - and than we find reflected in most current polls. It's a point that Kristen Soltis makes in a different way when she praises Morris Fiorina's Culture War? The Myth of a Polarized America.


In a follow-up commentary about Greenberg's book, Blumenthal wrote that "public opinion will ultimately limit or control the extent to which policy makers can effect change and achieve their goals, and a wise wonk will want to study public opinion -- both as it exists now and where political leaders can move it in the future."


But political leaders need realistic measures of such opinion. Crucial to that goal is measuring not only the direction of the public's preference, but the intensity with which people hold their views - and thus their potential willingness to be influenced by their political leaders.


Greenberg's Dispatches is a testament to the importance of this dimension of public opinion so often ignored by our major media polls.



[1] David W. Moore and Jeffrey M. Jones, "Permissive Consensus: Toward A New Paradigm for Policy Attitude Research," revision of a paper presented at the annual meeting of the American Association for Public Opinion Research, May 16-19, 2002.




I have worked for a major pollster for almost 10 years. I am one of those individuals who makes the phone calls, reads the survey questions, and records responses. If pollsters want to gain some invaluable insights about respondent intensity as it relates to the malleability of public opinion, they need to seriously solicit the opinions of these phone surveyors.

The premise above is that two individuals can give the same response to a policy issue and thus have their opinions given equal weight in the analysis, even though one is more likely than the other to reverse that opinion. And as noted above, this can lead to a misrepresentation of public opinion. This is a problem that first and foremost begins at the top, with the authors and clients of the major polling firms. For the most part, surveys are written less to assess public policy than to ascertain how to advance and advocate a predetermined public policy agenda. Clients come in with this agenda, and the polling firm readily accommodates. Wording, placement, and length are creative variables that start at the top and affect responses at the bottom more than policy specifics do. Wording is often confusing, untested for its auditory effect on the respondent, and often versed in Washingtonese. Proper instructions for phone surveyors on how to deal with respondent confusion and weak opinions are woefully lacking.

First of all, design a format that is simple, tested auditorily, and able to probe for intensity. Allow respondents to say that they do not have an opinion either way, and don't challenge phone surveyors to push for an answer. "Not sure" is considered undesirable by many clients but is actually more reflective of intensity than forcing an otherwise weak response.


I've been doing this for more than 20 years and the most consistent thing I've found, both in politics and consumer goods, is arrogant ignorance. People will consistently reject the simple workable solution on the grounds that "they" are trying to trick them and instead turn to the dangerously stupid because "they" don't like them. As a trivial example, consider the products now sold to keep you healthy in the air, developed by "a school teacher."

For larger decisions, the "undecided" will almost always say they "need more information." Any simple probe reveals almost complete ignorance of the subject; the phrase simply means "I don't know and I don't care."

You're all pros. You all know this.

