Pollster.com

David Moore

A Red Herring

Topics: Disclosure, Divergent Polls, Measurement

Mark Blumenthal is right to argue in his recent National Journal article ("The Problem With Polling Cap-and-Trade") that "politicians should proceed with caution when trying to anticipate public opinion on a complex policy issue." In my opinion, however, he goes too far when he implicitly expresses approval (or perhaps tolerance) for such polling, on the grounds that pollsters are just probing latent public opinion.

As Blumenthal notes, CNN and Pew each conducted polls on this issue, with CNN finding 97 percent of the public expressing an opinion, while Pew reported 89 percent with an opinion. Also, CNN found a 23-point margin in favor of cap-and-trade legislation, Pew an 11-point margin. Blumenthal emailed George Bishop, author of The Illusion of Public Opinion, asking "What should we make of such findings?"

Bishop responded by writing: "Reliable and valid measures of public opinion on such a complex policy issue cannot be so simply simulated by merely telling respondents what it's about and then asking them to react to it on the spot. Down that road lie misleading illusions and the manufacturing of public opinion - a disservice to the Congress, the president and the press that covers them."

Blumenthal takes issue with Bishop's response, arguing that "policy makers have good reason to want to probe the sorts of opinions that the venerable political scientist V.O. Key once termed latent, those likely to be stirred up should the legislation become law or the focus of a future election campaign. Pollsters can attempt to simulate such hypothetical attitudes in a telephone survey, but the results will be very sensitive to the words they use and, more importantly, to the assumptions they make about the competing arguments voters may eventually hear." (italics added)

So far, I agree with everything Blumenthal says. Note especially the words in italics, which emphasize both that pollsters can "attempt" to simulate hypothetical attitudes, and that such results are sensitive to question wording.

But then Blumenthal argues that "It is possible to anticipate how public opinion on a topic like cap-and-trade may evolve, but proceed with caution." (italics added) Unfortunately, the assertion in italics lacks supporting evidence. In fact, efforts to predict public opinion on complex issues are more likely to generate wildly conflicting polling results, such as the polls on support for and opposition to a bailout, or on the "card check" (EFCA) bill. The reason, as Blumenthal himself cautions, is that polling results in such situations are highly dependent on question wording and context.

More distressing, however, is Blumenthal's advice to "proceed" - even with caution - which implies 1) that pollsters recognize they are measuring "latent" opinion and 2) they are willing to admit as much to the public. But that is not the case.

It's important to remember that "latent" opinion is not "current" opinion. It is hypothetical, speculative, inconclusive - i.e., it is a concept about what opinion might emerge, depending on the varied ways in which the issue is covered in the press. It would be disingenuous for pollsters to (as Blumenthal says above) "attempt to simulate such hypothetical attitudes" using only one method (question wording) and then declare that the results represent current public opinion. Yet, that is exactly what media pollsters do.

Pollsters do not say they are "simulating hypothetical attitudes." Instead, recognizing that many people may not know anything about the issue (which Pew in fact acknowledges), pollsters feed their respondents information, use forced-choice questions to extract an immediate reaction, and then announce the results as though they represent what the public is currently thinking.

CNN announced its hypothetical results by saying - "CNN Poll: 6 in 10 back 'cap and trade'." Pew was no less assertive in presenting its hypothetical results, saying that "the survey finds more support than opposition for a policy to set limits on carbon emissions." Its sub-headline was also quite definite: "Modest Support for 'Cap and Trade' Policy." In no place did either polling organization admit that these results were based on "simulating hypothetical attitudes."

It's important to keep in mind that once respondents are fed information in the context of a survey, they no longer represent the larger population from which they were drawn - because the rest of the public has not been given the exact same information at the exact same time. Objective questions can often be asked of sample respondents without fatally tainting the sample, but giving respondents extra information about an issue cannot avoid contamination. Such a process inevitably means that, at best, pollsters are trying to simulate what public opinion might emerge. But they don't present their results with such a warning label.

So, it seems to be a red herring to justify polls on complex issues, such as 'cap and trade', by suggesting that policy makers may want to probe "latent" public opinion. Yes, perhaps they do. But that's not what the media pollsters admit to doing.

As Blumenthal writes in a previous post, "If subtle changes in wording can produce such different results, then we can assume that many respondents are forming opinions on the spot rather than sharing pre-existing views on the actual legislation. 'Public opinion' in this sense isn't so much 'fluid' (a favorite pollster cliché) as non-existent."

Yes, indeed. Pollsters are converting non-existent public opinion into the appearance of current public opinion - exactly the "manufacturing" charge that Bishop makes.

If the media pollsters want to simulate hypothetical public opinion, they should clearly label it as such, instead of presenting their results as reflective of actual (existing) public opinion. Until pollsters are more candid about the nature of their polls, I've got to side with Bishop in characterizing this kind of polling as "manufacturing opinion," which produces "misleading illusions" about the public and is ultimately a "disservice" to the Congress, the president, the press and the people.


Likely Voters and Mid-Term Elections, Part II (The 1934 Mid-Term Election)


In my earlier post (Part I) on this subject, I suggested it would be a political miracle if Democrats did not lose U.S. House seats in the 2010 election. A major reason is that in mid-term elections, the voters most likely to turn out are those who are disgruntled with the policies of the incumbent president, rather than his supporters. In fact, since Democrats and Republicans first began competing against each other for Congress in the mid-19th Century, the president's party has lost seats in mid-term elections, relative to the "out" party, all but three times - 1934, 1998 and 2002.

 

The two most recent times seem to be special situations that are likely to have little relevance for 2010. The 2002 mid-term election saw the president's party pick up a few seats, probably because the country was still rallying around the flag in reaction to the terrorist attacks the previous year. President Bush's approval rating was still high (Gallup showed it at 68 percent in a poll conducted Nov. 8-10, 2002).

 

And the 1998 mid-term election found voters quite dissatisfied with the Republicans' vote to impeach President Clinton, apparently a major reason why the president's party was able to pick up a few House seats. In both years, the Republicans enjoyed slim majorities, and the changes did not affect their majority control.

 

If neither of the two exceptions just noted appear to have much relevance to what might happen in 2010, the third exception (the first chronologically) is a different story. Taking place just two years after President Roosevelt came into office during the Great Depression, the 1934 mid-term election campaign focused largely on the New Deal measures adopted by Congress. The result was a net increase of 9 seats for the president's party - from 313 to 322. Democrats today no doubt hope that a similar debate about the economic stimulus bills and health care reform will be a positive inducement for voters next year.

 

That's where the differing poll results - depending on whether the pollsters use "likely voters" or not - provide an interesting story. The latest results (noted in my previous article) suggest the Democrats have an overall lead in the congressional House vote of six to seven percentage points among the general public or registered voters, but are apparently in a dead heat with Republicans based on likely voters.

 

If, indeed, the two-party aggregate vote is about even on Election Day, that would almost certainly result in major seat losses for the Democrats, though it's difficult to say how many. The aggregate two-party vote (the total voting for Democrats vs. Republicans in the country as a whole) is not a perfect indicator of how well the parties fare in winning House seats. In 2002, for example, the Republicans won 54.1 percent of the two-party vote nationwide, but got 52.6 percent of the House seats. In 2004, they won a smaller percent of the two-party vote (51.4 percent) but picked up three seats, expanding their majority to 53.3 percent of the seats.

 

In 2006, the Democrats won 54.2 percent of the aggregate vote, and won about the same percentage of House seats - 53.6 percent. Two years later, their share of the national vote increased by only 0.4 percent (to 54.6 percent), but they gained 21 seats to hold 59.0 percent of the House seats.
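The comparison above rests on two simple ratios. Below is a minimal sketch (Python) of that arithmetic; the vote totals are hypothetical round numbers chosen only to reproduce the 2006 percentages cited above, not official returns.

```python
# Illustrative arithmetic only (not official election totals): how a party's
# share of the national two-party vote and its share of House seats are computed.

def two_party_vote_share(dem_votes: float, rep_votes: float) -> float:
    """Democratic percentage of the two-party vote (third parties excluded)."""
    return 100 * dem_votes / (dem_votes + rep_votes)

def seat_share(dem_seats: int, total_seats: int = 435) -> float:
    """Democratic percentage of House seats."""
    return 100 * dem_seats / total_seats

# Hypothetical inputs chosen to reproduce the 2006 figures cited above:
print(round(two_party_vote_share(42_000_000, 35_500_000), 1))  # -> 54.2
print(round(seat_share(233), 1))                               # -> 53.6
```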

 

Despite the inconsistency between the percentage of the national vote and the percentage of House seats won by the majority party, a tie vote nationally would likely cause the Democrats to lose a significant number of seats. The Democrats beat the Republicans in 2006 and 2008 by about 8 to 9 percentage points. Currently, not many polls suggest the Democrats will win by that margin in 2010.

 

Of course, with more than a year to go, much can happen to shape the political landscape. No doubt, the most salient domestic issues will be the stimulus bills and health care reform, and no one knows for sure what will happen to the latter. And then there is always the possibility of some major international event that could influence the elections.

 

Democrats may hope for a repeat of the 1934 mid-term election, but history tells us the circumstances will have to be quite unusual for that to happen. As indicated in Part I, given the expected low turnout, I'd be especially attentive to the preferences of "likely voters." If they indicate a Democratic lead of 8 to 10 points, that would be unusual indeed.

 


Likely Voters and Mid-Term Elections, Part I


It would be a political miracle if the Democrats did not lose seats in the 2010 Congressional elections, yet the polls so far suggest such losses are doubtful at best. I think that is because most polls provide a rosier picture for the Democrats by reporting the voting intentions of the general public, or of registered voters, rather than of the much smaller segment of "likely voters" that will ultimately turn out to cast a ballot.

That the Democrats will almost certainly lose House seats in 2010 is attested to by several factors. The most important, of course, is that since the advent of the current two-party system (Republicans and Democrats), the party of the president almost always loses seats in a mid-term election. The best theory for this phenomenon is that disgruntled people (i.e., those who identify with the "out" party) are more motivated to cast a protest vote than the relatively satisfied people (i.e., those who identify with the party of the president) are to cast a vote of support.

The second factor is that, in the wake of the protracted war in Iraq and the sagging economy, Democrats won many seats in 2006 and 2008 that would "normally" go to Republicans. In 2010, with Bush gone and a Democratic administration in charge, Democratic House members in those "normally" Republican seats are going to be quite vulnerable.

The final factor is that as a general rule, Republicans are more likely to turn out than Democrats, because Republicans tend to be higher on the socio-economic scale - generally more educated, with higher incomes, and more actively involved in politics than Democrats.

So, if all of these reinforcing factors suggest the Republicans are likely to gain seats, why aren't the polls showing that? Here are some interesting recent poll results (see pollingreport.com):

 

July-August 2009 Polls Measuring Support for Congressional Candidates, 2010

Date             Sample               Poll                         Dem %   Rep %   Unsure/neither/dk %   Dem advantage (pct pts)
Aug 10-13        general public       Daily Kos/Research 2000        36      28            36                    +8
July 31-Aug 1    general public       CNN/Opinion Research Corp      44      34            22                   +10
July 24-27       general public       NBC/Wall Street Journal        46      39            15                    +7
July 22-26       likely voters        NPR/POS and GQRR               42      43            15                    -1
July 19-23       likely voters        GWU - Tarrance/Lake            43      40            17                    +3
July 9-13        registered voters    Diageo/Hotline                 39      32            30                    +7
July 10-12       registered voters    Gallup                         50      44             7                    +6

 

Note that there is little difference in the lead that polls show for Democrats when the sample is either the general public or registered voters - from six to ten percentage points. However, the two polls that reported results based on "likely voters" show essentially a dead heat (a 3-point Democratic lead or a one-point Republican lead).
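To make the comparison concrete, here is a small sketch (Python) that averages the Democratic advantage from the table above by type of sample; the grouping labels are mine, and the numbers are simply those reported in the table.

```python
from collections import defaultdict

# Democratic advantage (pct pts) from the table above, grouped by sample type.
polls = [
    ("Daily Kos/Research 2000",   "general public",     +8),
    ("CNN/Opinion Research Corp", "general public",    +10),
    ("NBC/Wall Street Journal",   "general public",     +7),
    ("NPR/POS and GQRR",          "likely voters",      -1),
    ("GWU - Tarrance/Lake",       "likely voters",      +3),
    ("Diageo/Hotline",            "registered voters",  +7),
    ("Gallup",                    "registered voters",  +6),
]

by_sample = defaultdict(list)
for _name, sample, dem_advantage in polls:
    by_sample[sample].append(dem_advantage)

for sample, values in by_sample.items():
    mean = sum(values) / len(values)
    print(f"{sample}: mean Democratic advantage = {mean:+.1f} pts")
```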

Nate Silver (at fivethirtyeight.com) suggests caution in relying on likely voter models this early in the 2010 campaign. Generally, I agree that early polls - especially in specific races (as opposed to the more general generic ballots reported above) - need to be viewed with caution. Many people are undecided 10 to 12 months ahead of the election, though some pollsters obscure that fact by using a forced choice format. See, for example, the contrast between Diageo/Hotline and Gallup above, the former showing 30 percent of registered voters undecided, Gallup showing just 7 percent.

Furthermore, different polling organizations use different screeners to arrive at their presumed "likely voters," some more "aggressive" than others. So, it's difficult to make direct comparisons with polls showing different leads, even if they base their results on likely voters, rather than registered voters or the general public.

That said, I would argue that in general we get a more realistic view of voter sentiment if the sample has been screened fairly tightly to produce a relatively small segment of likely voters, rather than a much larger group of people - the general public or even "registered voters." In mid-term elections, turnout is only about half of what it is in presidential elections. Thus, screening out the non-voters matters much more for understanding mid-term elections than presidential elections.
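To illustrate what a "tight" screen means in practice, here is a hypothetical likely-voter filter (Python). The screening items and cutoffs are invented for illustration; actual pollsters use different, often proprietary, combinations of questions.

```python
# Hypothetical likely-voter screen; the items and cutoffs are illustrative only.
def is_likely_voter(resp: dict) -> bool:
    return (
        resp.get("registered", False)
        and resp.get("voted_in_last_midterm", False)
        and resp.get("certainty_to_vote", 0) >= 9       # self-rated, 1-10 scale
        and resp.get("follows_campaign_closely", False)
    )

respondents = [
    {"registered": True, "voted_in_last_midterm": True,
     "certainty_to_vote": 10, "follows_campaign_closely": True},
    {"registered": True, "voted_in_last_midterm": False,
     "certainty_to_vote": 6, "follows_campaign_closely": False},
    {"registered": False},
]

likely = [r for r in respondents if is_likely_voter(r)]
print(f"{len(likely)} of {len(respondents)} respondents pass the screen")
```

A tighter screen (more hurdles, higher cutoffs) yields a smaller likely electorate, which is the point of the argument above.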

So, contrary to Nate Silver's advice, I would suggest that when polls diverge, one based on likely voters is probably a better reflection of the actual electorate than a poll based on the general population or even registered voters.

(In Part II I will discuss the exceptions to the general rule that the president's party loses House seats in mid-term elections, and whether those exceptions are relevant to 2010.)


Not Seeing the Forest for the Trees: Important Lessons from a SurveyUSA Experiment


In a recent conversation with Jay Leve, founder and head of SurveyUSA, I was alerted to a split sample telephone survey experiment he conducted last October in the San Francisco Bay Area. It was on the subject of the government's plan to bail out or rescue Wall Street.

 

Though the experiment dealt with an issue that is so last year (!), I'm writing about it now because it demonstrates how questions about specific policy plans can produce misleading results about the public's views of the broader issue - a classic case of not seeing the forest for the trees.

 

SurveyUSA Experiment

 

Jay tested four different ways of phrasing the bailout question, and each one found mixed to slightly negative results. But then two follow-up questions starkly contradicted those results, suggesting that a clear majority of the public supported the bailout efforts.

 

While the SurveyUSA experiment tested four different ways of wording the bailout issue, three are rigorously comparable, and so I mention them first. I'll come back to the implications of the fourth version later in this post.

 

Each of the following questions was asked of a split sample of just over 500 respondents:    

 

  1. The government may invest billions to try and keep financial institutions and markets secure. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?
  2. The government may spend billions to bail-out Wall Street. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?
  3. The government may spend billions to rescue Wall Street. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?

 

TABLE 1
Results of Three Versions of Bailout Question: "Right or Wrong Thing to Do?"

                                              Right %   Wrong %   No Opinion %     N
A. Invest billions to keep markets secure        39        37          24         532
B. Spend billions to bail out Wall Street        40        42          18         533
C. Spend billions to rescue Wall Street          34        46          20         523

 

Was the Public Ambivalent?

 

With these results, one would have to conclude that the public was at best ambivalent toward a bailout or rescue of Wall Street. Given the sample sizes, Form A results are significantly different from those in Form C, suggesting "rescue" is the most negative way to phrase the issue and "invest" is the most positive way.
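As a rough illustration of that significance claim (my own back-of-the-envelope check, not SurveyUSA's analysis), a standard two-proportion z-test on the "wrong thing to do" percentages for Forms A and C looks like this:

```python
from math import sqrt

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Form A: 37% "wrong," n = 532; Form C: 46% "wrong," n = 523 (from Table 1)
z = two_prop_z(0.37, 532, 0.46, 523)
print(round(z, 2))  # roughly 3.0, well past the conventional 1.96 threshold
```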

 

But then come two additional questions that throw a completely different light on the issue. The question below taps into a general feeling about the issue, and finds that people seem to want Congress to do something - and that they're more afraid Congress will do too little than too much.

 

 

TABLE 2
More Afraid Congress Will Under-react Than Over-react

Question: "What concerns you more: that the government will do too much to fix the economy? Or, that the government will do too little?"

Too Much %   Too Little %   No Opinion %      N
    32            58             11          2100

 

The problem with comparing the above question to the other three is that this one doesn't explicitly allow for "no opinion," while the first three questions do.

 

Public Supports Bailout

 

But the next question is comparable in offering an explicit "don't know" response, and it also suggests that a clear majority of the public wants Congress to do something.

 

TABLE 3
Should Congress Support SOME Economic Rescue Effort?

Question: "Do you want your representative in Congress to vote FOR an economic rescue? To vote against an economic rescue? Or, do you not know enough to say?"

For %   Against %   No Opinion %      N
  54        30           16          2100

 

Note that despite offering an explicit "or don't you know enough to say" option, this question shows a clear majority in favor, with only 30 percent opposed.

 

Had we depended on only the first three questions, which were asked of separate split samples, for our understanding of the public, we might well have concluded that whether it was "bailout" or "rescue" or "invest," the public was either evenly divided about a bailout or leaning against such an effort. But these last two questions suggest that less than a third of the public was opposed to an economic rescue/bailout plan in principle, while a clear, though small, majority (54 percent) of people were in favor.

 

New Complications

 

When we look at the results of the fourth version of the split sample experiment, we find even more support for some type of rescue plan. This version included three substantive options (compared with just two for the other three versions - which is why I'm treating this question differently from the first three versions), followed by the explicit option of not expressing an opinion.

 

TABLE 4
Fourth Version of Bailout Question

Question: "Congress is working on a plan to buy and re-sell up to 700 billion dollars of mortgages. What would you like the Congress to do? Pass this plan? Pass a different plan? Take no action? Or, do you not know enough to say?"

Pass this plan %   Pass different plan %   Take no action %   No Opinion %     N
       31                   40                    13               16         512

 

This version shows the least support for the current plan, but its second option - to pass a "different" plan - suggests that more than seven in 10 respondents favored some rescue plan (31 percent for the current plan, plus 40 percent for a different plan). These results also suggest that only 13 percent (rather than 30 percent, as shown in Table 3) were opposed to some kind of effort.

 

Important Lessons

 

These results, based on a telephone survey in the San Francisco Bay Area, demonstrate how variable the survey results can be - even when the survey is about an issue that is the subject of much media attention. The first three versions of the bailout question could reasonably be interpreted to suggest that the Bay Area public was either ambivalent or negative about any bailout plan, while the question reported in Table 3 gives the opposite impression - that the same public was in fact looking for some action by the federal government.

 

The important lesson: Sometimes, asking about specific plans can blind us to the larger issue of whether some type of action is still desired.

 

The last question reaffirms that when questions have two options in favor of a policy (pass the current plan, or pass a different one) and one against (take no action), the "opinion" that is measured can be quite different from when a question has just one option in favor of a policy and one opposed.

 

The last question can also be interpreted to have two options against the "current" plan - only one option says pass the current plan, while two options are against it (pass a different plan, and take no action).

 

This doesn't mean that three options should never be offered. The example does reaffirm, however, that the more options a question offers, the less similar the results will be to those from a question that offers only two options.

 

Note: My thanks to SurveyUSA's Jay Leve for pointing me to his very insightful experiment.


Dispatches: Trusting What People Say in Polls


This post is part of Pollster.com's week-long series on Stan Greenberg's new book, Dispatches from the War Room.

[Image: cover of Dispatches from the War Room]

I'm pleased to join this conversation and to have received Stan Greenberg's book, which is a fascinating insider's look at the way polls were used in five major situations. It's great history by itself, and I would strongly recommend the book to anyone at all interested in current events and/or the role of polling consultants in public policy.

 

I have several comments I would like to make at some point, but this one will focus on one issue raised by Mark Blumenthal and subsequently addressed by Greenberg himself.

 

Mark pointed to Greenberg's query at one point in the book, where the author asked how much he could trust his own polls on public policy matters. To make it easier for the reader to follow this conversation, I will reproduce what Mark wrote:

 

...the most candid observation from the book concerns Greenberg's admission that his focus groups and polls misled him on the question of whether Israeli voters would ever accept a division of Jerusalem. At first, two-thirds of voters in his surveys said "it was unacceptable to have a Palestinian state with its capital in East Jerusalem." When Greenberg "saw no movement" when he presented arguments for the division in surveys, and voters "nearly cried" in focus groups, insisting that dividing Jerusalem would be "like taking away your beloved child," Greenberg advised his client that such a policy was a "dead end." [emphasis added]

 

Yet four weeks after Ehud Barak put that option on the bargaining table at Camp David, despite a negative approval rating and strong opposition in parliament, a majority of Israelis were ready to "go with him" on Jerusalem in Greenberg's polling. Thus Greenberg raises a critical question:

 

If I cannot believe what people tell me is unacceptable in my surveys on Jerusalem, then what of my findings on other subjects? Why can't a determined leader change these too?

 

Greenberg's question about the changeability of public opinion is an important one, because much of what polls present to us today are measures of public opinions that appear to be firm - when in fact we know that for many people, polls measure only the most ephemeral of views. What's lacking in most polls are any measures of intensity.

 

My question to Greenberg would be: how intensely did the poll respondents feel about the "unacceptable" response they gave to the interviewers? Greenberg gives no indication that he measured it, though he did get an idea of intensity in the focus groups when participants "nearly cried" about the proposed division of Jerusalem.

 

Still, as we know, focus groups are poor indicators of general public opinion. The people who are willing to participate are no doubt more engaged in issues than people who can't be bothered (or paid) to participate, and the focus group experience itself can be intense - making the participants completely unrepresentative of the general population.

 

Greenberg is, of course, quite aware of such limits, readily acknowledging them in his book. Still, as we all know, it's very difficult to ignore numbers and visceral experiences even when we know they are technically unrepresentative of the larger population. The high emotion of the focus group participants could well have made it seem impossible that voters at large could change their minds.

 

In a series of experiments that Jeff Jones and I designed while I was at Gallup,[1] we discovered that on any given issue about 40 percent to 60 percent of the public had a "permissive" opinion. Though many people may have initially expressed a preference for a policy (saying that they either favored or opposed it), those with a "permissive" opinion then admitted (in response to a follow-up question) that they would not be "upset" if the government did the opposite of what they had just said. This does not mean that later on, once a policy has been implemented, those same people will not hold the leaders accountable if it doesn't work. But it does mean that political leaders have tremendous leeway in what they initially decide to do.

 

And it also means that on most public policy issues, leaders can probably do pretty much what they want to without being held accountable.

 

Greenberg refers to that phenomenon when he talks about the importance of public opinion to legislators (p. 395). "The fundamental lesson is that people matter because elections matter. You could only think otherwise if you haven't spent any time close to elected officials or candidates for office...The antics of the Republicans in Washington over the past decade seemed to challenge that presumption when they ignored overwhelming public sentiment on Clinton's impeachment, taxes, and the Iraq war, and escaped accountability at the polls."

 

What I'm suggesting here is that the "overwhelming public sentiment" described by Greenberg was not, in fact, overwhelming. The polls are misleading us. Such sentiments as Greenberg mentions may be widespread, but thinly anchored. And whether we like it or not, from a democratic point of view, it means that politicians can often get away with ignoring what appears to be a public consensus.

 

The Iraq War is a good example, though on the opposite side of the issue from what Greenberg is discussing. Did Americans "overwhelmingly" support the war before President Bush launched it? The polls all said yes, by about two-to-one margins or greater. And it would appear that most Democratic Senators, who might have been expected to oppose the war, were influenced by these polls to support the war resolution. But Jeff Jones and I discovered that after measuring intensity on that issue, in fact the public was evenly divided - three in ten strongly supporting the war, three in ten strongly opposed, and a plurality - four in ten - with a "permissive" opinion (not upset either if the United States went to war or didn't go to war). How different the political climate in Washington might have been had this picture of public opinion prevailed, instead of the erroneous depiction of a public hankering for war.
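A minimal sketch of the classification logic behind a "permissive" opinion, as described above. The field names and wording are mine, not the actual Gallup follow-up items.

```python
# Illustrative coding of "permissive" opinion: a respondent who favors or
# opposes a policy but would not be upset if the government did the opposite.
def classify(direction: str, upset_if_opposite: bool) -> str:
    if direction not in ("favor", "oppose"):
        return "no opinion"
    if not upset_if_opposite:
        return "permissive"
    return "strongly favor" if direction == "favor" else "strongly oppose"

responses = [("favor", True), ("favor", False), ("oppose", True), ("oppose", False)]
for direction, upset in responses:
    print(direction, upset, "->", classify(direction, upset))
```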

 

While Gallup has not measured intensity on this issue recently (nor have other pollsters), I suspect that even when it comes to withdrawing troops - which Bush refused to do - the public is more divided than unified. The point is that the public is much more in the middle of an issue, and thus willing to defer to its leaders, than the polls tell us, because most polls ignore the intensity with which people hold their poll-expressed views.

 

In the conclusion to his book, Greenberg writes that despite his ability, post hoc, to explain why his poll results about Jerusalem might have misled him (p. 422),

 

...it does not change the question I now must face whenever I see a survey result that sets such dramatic limits on what is possible. How do you know that people will not rethink their starting points? How do you know they will not be moved by a deliberative process that thinks about the problem in new ways?...How do you know you won't discourage a less fearless leader from chancing to be bold?

 

I do not have an answer to this question, other than to constantly remind myself that opinion is changeable, that I must always simulate changing circumstances, and that I should be wary of telling a leader the public will not join him or her in this.

 

Like Greenberg, I think there is a much larger portion of the public in the middle of any given issue than might at first be assumed - and than is reflected in most current polls. It's a point that Kristen Soltis makes in a different way when she praises Morris Fiorina's Culture War?: The Myth of a Polarized Electorate.

 

In a follow-up commentary about Greenberg's book, Blumenthal wrote that "public opinion will ultimately limit or control the extent to which policy makers can affect change and achieve their goals, and a wise wonk will want to study public opinion -- both as it exists now and where political leaders can move it in the future."

 

But political leaders need realistic measures of such opinion. Crucial to that goal is measuring not only the direction of the public's preference, but the intensity with which people hold their views - and thus their potential willingness to be influenced by their political leaders.

 

Greenberg's Dispatches is a testament to the importance of this dimension of public opinion so often ignored by our major media polls.

 

 



[1] David W. Moore and Jeffrey M. Jones, "Permissive Consensus: Toward A New Paradigm for Policy Attitude Research," revision of a paper presented at the annual meeting of the American Association for Public Opinion Research, May 16-19, 2002.

 


"Reaction" Polling


In his post on "Hired Gun" Polling, Mark Blumenthal suggests the need for pollsters to "distinguish between questions that measure pre-existing opinions and those that measure reactions."

 

He makes an important point. Much of what pollsters offer to the world as "public opinion" is in reality hypothetical, based on giving respondents information that many in the general public may not have and then immediately asking respondents for their reaction to that information.

 

Such results can be illuminating, but pollsters recognize that feeding respondents information means the sample no longer represents the American public and what Mark calls its "pre-existing" opinion. Unfortunately, many pollsters fail to acknowledge the hypothetical nature of such results, and instead treat them as though they represent the current state of the public's views.

 

The problem with this kind of approach is illustrated in the case that Mark discussed in his post, dealing with the "card check" bill, the proposed Employee Free Choice Act (EFCA) concerning the authorization of unions in the workplace.

 

The vast majority of Americans, one can reasonably assume, have little to no knowledge of the provisions of the bill. Thus, to measure "public opinion" on the issue, pollsters feel they need to tell respondents what the bill is all about. A Republican pollster explained the bill one way, a Democratic pollster another way, and - to no one's surprise - they ended up with a "public opinion" that reflected their respective party's position on the issue.

 

While one may argue the relative merits of the questions used by the two pollsters, the main point is that any attempt to inform respondents about a major policy proposal is intrinsically biased. Pollsters have to decide what is important among all the various elements of the proposal, and they can often come up with quite different conclusions. This problem applies to public policy pollsters as well, who - we can reasonably assume - have no partisan agenda, but who can nevertheless produce what appear to be partisan results.

 

Such problems have multiplied with the recent public policy polling on the bailout proposals for Wall Street and for the auto industry, and on the stimulus plan being considered by Congress. Most pollsters assume the public has little specific knowledge of such proposals, and thus pollsters provide respondents specific information to measure the public's (hypothetical) reaction to the proposal.

 

When CNN described the proposal to bail out the auto industry by characterizing it as "loans in order to prevent them from going into bankruptcy," in exchange for which the companies would produce plans "that show how they would become viable businesses in the long run," it found a 26-point margin in favor (63 percent to 37 percent). But when an ABC/Washington Post poll only a few days earlier had mentioned the word "bailout" in its question and did not refer to plans leading to the companies becoming viable, the poll showed a 13-point margin against the proposal (55 percent to 42 percent).

 

Again, one can debate the relative merits of the two questions, but the tendency of pollsters is to say that each set of results provides different insights into the dynamics of the public's views on this matter. In short, each provides a picture of potential public reaction to the proposal, if the proposal is framed to the general public in the way each polling organization presented the issue to its respondents.

 

That distinction is generally lost in the news reports. Each polling organization instead announces its results as though they reflect the current views of the public, over which the polling organization had no influence. But the reality is that the polling organization inevitably shapes its results by the very way it presents the issue to respondents.

 

As Mark argues, such "reaction" polling has a useful role to play in the public discourse on public opinion. However, it's also important that pollsters make clear that their results do not reflect "pre-existing" opinion (opinion before the polls were conducted - though one might instead use the word "extant" opinion), but rather hypothetical opinion under restricted conditions.

 

It used to be that newspapers made a formal distinction between "hard news" articles and "analysis" articles - clearly labeling the latter as such. That procedure doesn't seem to be followed these days, but it may be a useful analogous model for pollsters. Perhaps, in a similar way, pollsters can devise a method to formally separate their reports of potential "reaction" public opinion from existing public opinion.

 

I can envision, for example, one article in a newspaper that describes how many (or how few) people are actually aware of an issue and how many express ambivalence about the matter, while another article could explicitly describe how the public might react if the issue were universally framed in one way or another. Pollsters have made such distinctions sporadically, which is why we know that the public is more likely to support "rescuing" the auto industry than "bailing it out."

 

Still, a formal and widely accepted method of distinguishing "reaction" questions from those that measure existing opinion needs to be found, if pollsters are to avoid the confusion that occurs when highly reputable polls produce wildly contradictory results.

 


"Manipulating" Public Opinion


My colleague, Mark Blumenthal, has recently posted his reaction to an earlier post of mine, in which I suggested that most major media pollsters deliberately manipulate public opinion, in order to make it appear as though most of the public has an opinion on an issue. My examples included polls about the stimulus package, and how the public viewed the Democrats' control of the three branches of government.

 

In his critical remarks, Mark suggested that I was being a bit narrow in my view of public opinion and unfair in implying a nefarious motive on the part of the pollsters. In a later blog on the same issue, he noted that he had received reactions from two different pollsters, who did not want to reveal their names, one who works on campaigns and the other who works for the media. They provided somewhat different takes on his discussion of public opinion about the stimulus package, takes which I think tend to support my criticisms of media polls.

 

More about that in a moment. First, let me say that I appreciate Mark's well-considered criticisms, recognizing that they probably also reflect the views of many other practicing media pollsters (though let's hope not all). And I appreciate the opportunity that Mark offers for me to blog on this site about these issues, because I think such conversations are at the heart of the scientific enterprise - as does he. For that I am grateful.

 

As to the flaws in my concept of public opinion, I think Mark may misunderstand my recent focus on the lack of "no opinion" measures. Like Mark, I think that is just one part of measuring public opinion, but still a crucial one. Mark seems to agree, writing that "Yes, it is important to understand that many Americans lack a specific opinion on the 'economic stimulus' legislation per se, something stressed by too few pollsters. Still, that finding is just one part of 'public opinion' on this issue." (emphasis added)

 

I couldn't agree more. My view is that on important policy matters, pollsters should measure at least three dimensions of public opinion: 1) direction of support (from support to opposition), including its magnitude; 2) intensity of views; and 3) the absence of a meaningful view on the matter, or non-opinion. In my recent book, The Opinion Makers, I elaborate more fully on this concept. (For the time being, I will ignore the oft-neglected measure of intensity.)

 

If measuring the direction of public opinion and measuring non-opinion are both important, why don't we find both measures in most polls? Look at the graph below - these are the poll results that Mark assembled from the various polls in his critique of my commentary. All of them measure direction of opinion, as we would expect, but only one attempts to measure non-opinion (NBC/WSJ). (Mark suggests that Rasmussen may have provided an explicit "no opinion" option, but Rasmussen's topline with the actual question shows it was a forced-choice format, with "no opinion" a volunteered option.)

 

(The graph below takes the difference between the percentage of people who support and the percentage who oppose the stimulus package, as described in the respective polls, which is then plotted as the "margin in favor" - since all polls showed more people in favor than opposed to the stimulus. The graph also shows the percentage of people without an opinion, as reported by each poll.)
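For clarity, the two quantities plotted in the graph are computed as sketched below; the poll numbers in the sketch are placeholders, not the actual figures from the surveys discussed.

```python
# Placeholder numbers, for illustration of the calculation only.
polls = {
    "Poll A": {"favor": 52, "oppose": 36, "no_opinion": 12},
    "Poll B": {"favor": 43, "oppose": 27, "no_opinion": 30},
}

for name, r in polls.items():
    margin_in_favor = r["favor"] - r["oppose"]   # plotted as "margin in favor"
    print(f"{name}: margin in favor = {margin_in_favor:+d} pts, "
          f"no opinion = {r['no_opinion']}%")
```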

 

[Chart: margin in favor of the stimulus package and percentage with no opinion, by poll]

Mark acknowledges that too few pollsters stress the percentage of people who lack an opinion (see the italicized part of his quotation above), and his (and my) concerns are amply illustrated in the graph. Only 1 percent have no opinion according to CNN, just 3 percent according to ABC/WP, 4 percent according to Ipsos, 7 percent according to NBC/WSJ, and 11 to 12 percent according to Gallup and Hotline. Those are hardly credible numbers, if we are referring to a specific stimulus package (rather than to the general idea of some kind of stimulus), as all of these results do.

 

Indeed, Mark says the consensus of the two pollsters who contacted him in the wake of his critique was that "both imply agreement on one thing: Most Americans know little about the 'economic stimulus plan,' except that the President and the Congress are talking about it." Mark also adds at the end of his commentary that "'tepid' support is about the right phrase to use" to describe public opinion on the issue.

 

If that is the case (and I tend to agree with it), how did these pollsters arrive at that conclusion? Certainly not by looking at Gallup, Hotline, Ipsos, NBC/WSJ, CNN or ABC/WP. All of those polls suggest very few people are unsure about the specific stimulus plan being considered by Congress, and very large majorities in favor. None of these pollsters suggested "tepid" support and widespread ignorance.

 

Mark also writes that besides measuring non-opinion, "reactions to new information are also important, as are the underlying values driving responses to all of the questions reproduced above." Again, I don't disagree, though he implies that I do. It is not mutually exclusive to measure non-opinion (which most pollsters fail to do) and also to measure reactions to new information or underlying values.

 

The question is why do most pollsters fail to measure non-opinion?

 

(Separately, why do most pollsters fail to measure intensity? That's also an important dimension, but I'll talk about the failure to measure intensity at a later time.)

 

My response is that most pollsters don't measure non-opinion in general because they don't want to reveal that a sizeable number of Americans don't have an opinion on important policy matters. Mark thinks I'm being unfair, and that I'm attributing nefarious motives to such pollsters.

 

Here's the dilemma: Mark and I both agree that non-opinion is a crucial part of measuring the public's position on policy matters. We also know that all the major media pollsters do measure non-opinion from time to time. So, why don't they measure non-opinion on such major issues as the stimulus? What criteria do they use to determine when they will, and when they won't, ask questions with an explicit "no opinion" option?

 

The simple answer is that the news media wouldn't find it interesting to constantly report that large segments of the population don't have a meaningful opinion about the major policy issues facing the country. That's why pollsters feed respondents information and ask forced-choice questions, all in an effort to reinforce the myth of a highly informed, rational, and engaged public. It's a far more newsworthy myth than the reality of a public in which many people are uninformed and unengaged on issues, and thus lack meaningful opinions.

 

I'm not arguing that all people fit that category. I'm only arguing that pollsters and the media should be willing to admit the existence, and measure the size, of such large segments of the public, instead of manipulating respondents to come up with answers so that it will appear as though virtually all Americans have a meaningful opinion.

 

In 1942, Elmo Roper wrote in an essay for Fortune magazine, titled "So the Blind Shall Not Lead," that even then, less than a decade since the advent of modern polling, "the emphasis in public opinion research has been largely misplaced. I believe its first duty is to explore the areas of public ignorance."[1]


Exploring areas of public ignorance may not necessarily be the pollsters' first duty, but it is certainly an important duty they usually fail to perform.



[1] Elmo Roper, "So the Blind Shall Not Lead," Fortune, 25, No. 2, p. 102, cited in George Bishop, The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Lanham, Maryland: Rowman & Littlefield Publishers, Inc., 2005), p. 6.

 


George Bishop's and David Moore's 2009 Top Ten "Dubious Polling" Awards


My colleague at the University of Cincinnati, George Bishop, and I have launched what we expect to be an annual listing of the Top Ten "Dubious Polling" reports for the previous year. Posted on Stinky Journalism.Org, the list is intended as a satirical look at some of the practices of the major media pollsters.

 

As the opening paragraph notes: "Every year, poll watchers are confronted with poll results and commentary that defy either logic or science, often raising questions about the very utility of polls. Typically, the problems are not with the method of conducting polls, but with the pollsters themselves - as they focus on what they believe is entertaining and appealing to the audience rather than an accurate reflection of public opinion. In the process, pollsters manipulate public opinion or write commentary that makes a mockery of what the public is really thinking."

 

Each award is ranked, from a low of one set of crossed fingers to a high of five sets. Pollsters generally know in their hearts when all is not right with their polls, but they (figuratively) cross their fingers and hope that no one notices anything amiss. The five crossed-fingers icon is the ultimate in wishful thinking, perhaps the equivalent of football's "Hail Mary pass" for the truly untrustworthy poll.

 

Our top award - earning the five crossed fingers - goes to all the major media polls[1] for their prediction of Giuliani as the early Republican frontrunner. Collectively this group, beginning more than one year prior to the first statewide electoral contest in Iowa, relentlessly, and without regard for any semblance of political reality, portrayed Rudy Giuliani as the dominant Republican candidate in a fictitious national primary.

 

Other "Dubious Polling" awards are:

· Loopiest Poll Award: Pew Research Poll, for weekly pre-election polls in October that showed wild swings in Obama's lead.

· Shooting Yourself in the Foot Award: Gallup Poll, for publishing two polls on Feb. 25, 2008, that contradicted each other.

· Over-the-Top Gloating Award: Gary Langer, polling director of ABC, for writing that "What I liked best about the final New Hampshire pre-election polls [which erroneously predicted Obama to win] is that I didn't do any of them" - cleverly completing his polling in the Granite State far enough away from the election to avoid having his results compared with the election outcome.

· 180 Degree Award: CBS News/New York Times and USA Today/Gallup polls, for coming to opposite conclusions about the controversy over Rev. Jeremiah Wright.

· Waiting for Godot Award: The American Association for Public Opinion Research committee that still has not issued a report on the erroneous predictions in the N.H. Democratic Primary.

· Who Knows? Award: Pew Research, ABC News/Washington Post, and Los Angeles Times/Bloomberg polls, for contradictory conclusions about public support for the Wall Street bailout.

· Wake-Me-Up-When-It's-Over Award: NPR, Kaiser Family Foundation, and the Harvard School of Public Health survey, for a vague 131-word question.

· Flip-Flop Award: CNN, for two December polls that showed opposite results on the public's support for the auto bailout.

· For Sale! Award: Peter D. Hart Research Associates, for their General Motors-sponsored poll that found (surprise! surprise!) overwhelming public support for the auto industry bailout.

 

For a full description and rationale for the awards, go to Stinky Journalism.Org.



[1] These include polls by the Associated Press, ABC News/Washington Post, CBS News/New York Times, CNN, FOX, NBC News/Wall Street Journal, USA Today/Gallup, Newsweek, the Los Angeles Times/Bloomberg, and Pew Research.

 


Why Pollsters Manipulate Public Opinion


Two recent polls, one by Gallup and the other by CNN, illustrate how easy it is for pollsters to manipulate public opinion into something different from what it really is.

 

The Gallup poll, Jan. 6-7, 2009, attempted to measure the public's reaction to a federal government stimulus package, with the question phrased as follows:

 

Do you favor or oppose Congress passing a new 775 billion dollar economic stimulus program as soon as possible after Barack Obama takes office?

 

In almost the same time period, the NBC News/Wall Street Journal poll also attempted to measure public opinion about the stimulus package, with a question that provided for a "don't know" option:

 

Do you think that the recently proposed economic stimulus legislation is a good idea or a bad idea? If you do not have an opinion either way, please just say so.

 

The results are shown below:

 

[Chart: poll results on the economic stimulus package, Gallup vs. NBC/WSJ]

The major difference, of course, is in the percentage of people who don't have an opinion - Gallup says just 11 percent, while NBC/WSJ says almost three times that number.

 

The margin in favor of the stimulus package is virtually identical in the two polls, 16 and 17 percentage points, but instead of being able to report a majority of Americans in favor, NBC and the Journal had to report that a "plurality" of Americans were in favor, with a substantial portion of the public ambivalent or unengaged. Gallup, by contrast, could report (although erroneously) that a majority was in favor.

 

The margin in favor of a proposition is not always the same under both ways of measuring public opinion, as the following case illustrates. When CNN wanted to discover whether the public was copasetic with Democratic control of all three branches of government, it asked a forced-choice question (Nov. 6-9, 2008):

 

As you may know, the Democrats will control both the Senate and the House of Representatives, as well as the presidency. Do you think this will be good for the country or bad for the country?

 

Again, coincidentally, another polling organization, Associated Press/GfK Roper Public Affairs and Media asked a similar question at virtually the same time (Nov. 6-10, 2008), though this question allowed for a middle position:

 

As you may know, the Democrats will now control the House of Representatives, the Senate and the presidency. Do you think it good for the country, bad for the country, or does it not really make a difference that the Democrats now control the House, the Senate and the presidency?

 

The results of the two polls show two very different publics:

 

[Chart: poll results on Democratic control of the presidency and Congress, CNN vs. AP/GfK]

While CNN reports a large majority of Americans in favor of Democratic control, by a 21-point margin, the Associated Press finds a small plurality in favor (just an 8-point margin) and about a quarter of the public either saying the situation doesn't matter or not expressing an opinion.

 

In this case, both polling organizations deliberately manipulated their respondents to come up with an opinion (even if, in the case of the AP/GfK poll, to say the issue didn't matter) by giving them information up front. Why did they need to tell respondents that the Democrats controlled all three branches? Why not find out how many people knew that, and then - among those who knew it - ask whether it was good or bad, didn't it make a difference, or didn't they have an opinion?

 

But the major media pollsters are generally not interested in realistic measures of public opinion. On the matters discussed here, Gallup and CNN clearly do not want to report how many people don't have an opinion or might want to take a valid middle position on the issue. Instead, these pollsters believe it's more interesting to create a "public opinion" that reflects a highly engaged and decisive public.

 

For CNN to say that 97 percent of Americans believe Democratic control of the government is either "good" or "bad," and for Gallup to claim that nine out of ten Americans have an opinion about the stimulus package, may fit their journalistic needs - but they know, and we know, it's simply not true.


The Fluctuating Convergence Mystery


The "convergence mystery" gets even more mysterious.

 

In Survey Practice, I initially raised the question of why the national presidential polls showed a great deal of variance in their results during the month of October, but then converged to a relatively tight cluster in the final predictions. Mark Blumenthal then calculated the variance among state polls, showing that they also exhibited much greater variance during October than in their final predictions.

 

He suggested the phenomenon was probably not a deliberate effort by pollsters to change their numbers. Instead, he proposed that pollsters whose numbers were outliers probably looked to see if their polls needed "fixing" - and, sure enough, they found reasons to adjust their numbers closer to the mean. Thus the convergence at the end of the campaign.

 

My original analysis was a weekly average of the national polls, while Mark's was a weekly average of selected state polls (including 12 battleground states with at least 20 polls in October/November). In a further analysis, I looked at the eight tracking polls from October 4 through November 2. The group includes four daily tracking polls for the whole time period, and four that started a bit later - two on Oct. 6, one on Oct. 12, and the last on Oct. 16.

 

Shown below is the overall graph of their results.

 

[Graph: eight daily tracking polls, Oct. 4 - Nov. 2, 2008]

A quick examination shows a couple of times when the polls converged to a tight cluster before expanding to much greater differences - around Oct. 18 and again around Oct. 28-29.

 

The next graph shows the same polls, but with the "variance" plotted on the same graph (the pink line). What I hadn't noticed in the graph of all the polls are the three spikes in variance shown below.

 

[Graph: the tracking polls with the day-to-day variance overlaid (the pink line)]

The next graph is a scatterplot of the variance. The linear regression line indicates a significant decline in variance over the month of October, though clearly there are spikes.
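A sketch of the calculation behind these graphs: for each day, take the margin reported by each tracking poll, compute the variance across polls, and fit a simple linear trend over the month. The margins below are made-up placeholders, not the actual tracking-poll numbers.

```python
import statistics

# margins_by_day[d] = Obama-minus-McCain margin (pct pts) from each tracking poll on day d
margins_by_day = {
    1: [4, 6, 11, 7],
    2: [5, 6, 10, 8],
    3: [6, 7, 9, 7],
    4: [6, 7, 8, 7],
}

days = sorted(margins_by_day)
variances = [statistics.pvariance(margins_by_day[d]) for d in days]

# least-squares slope of variance on day; a negative slope indicates convergence
n = len(days)
mean_x, mean_y = sum(days) / n, sum(variances) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, variances)) / \
        sum((x - mean_x) ** 2 for x in days)

print("daily variances:", [round(v, 2) for v in variances])
print(f"trend in variance per day: {slope:+.2f}")
```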

 

[Graph: scatterplot of the daily variance, with a linear regression line]

The last graph shows the day-by-day fluctuation, with three major spikes, all occurring just a couple of days after each of the October debates.

 

[Graph: day-by-day variance, showing three major spikes]

The first spike occurs on Oct. 7, the second on Oct. 12-13, and the last on Oct. 20-22. In each case, the spike begins five days after a debate. It's important to keep in mind that the daily tracking polls are typically 3-day rolling averages, which means the underlying shift in responses begins about two days after each debate.

 

These results add to the mystery of convergence, because they 1) show an overall decline in variance over the month, and 2) nevertheless show sudden and temporary spikes in variance, starting just two days after a vice presidential or presidential debate. The largest spike follows the third presidential debate on Oct. 15, though it is not fully reflected in the 3-day tracking polls until five days after the debate.

 

The delayed spikes can be accounted for in this way: The vice presidential debate took place on Oct. 2. The next day, the networks broadcast their interpretations of the debate, and the following day, the polls begin to show quite different results. The debate effect is not complete until the end of the 3-day tracking period, which would mean the first full results would be manifest on Oct. 7, five days after the debate.
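A toy illustration of that timing argument (all numbers invented): if daily interviews first shift two days after a debate, a 3-day rolling average reported the day after its last interview day does not fully show the shift until about five days after the debate.

```python
# day 0 = debate; the daily (single-day) margin shifts from 6 to 9 on day 2
daily_margin = {0: 6, 1: 6, 2: 9, 3: 9, 4: 9, 5: 9, 6: 9}

# a 3-day rolling average published on day d covers interviews from days d-3 .. d-1
for report_day in range(3, 7):
    window = [daily_margin[report_day - k] for k in (1, 2, 3)]
    print(f"day {report_day}: reported margin = {sum(window) / 3:.1f}")
# the published number does not reach the full post-debate level until day 5
```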

 

Similar scenarios suggest that five days after each of the two succeeding debates, new spikes should occur - and they do. Oct. 12 (five days after the second presidential debate) and Oct. 20 (five days after the final presidential debate) find the beginnings of spikes - the first lasting two days, and the second lasting three days, before beginning the downward movement.

 

There is one last minor spike, from the end of October to the final prediction figures. It's hard to tell if this is random noise, or part of a predictable pattern.

 

In any case, the mystery is this: Why do the eight tracking polls show more variance in results following the debates? What is there about the debates that would cause different polls to show greater inconsistencies in results than normal? And why do the polls show a month-long decline in variance, except for the three temporary spikes?

 

I think that Mark's initial suggestion -- that pollsters with the outlying results tend to "fix" their methodology, and thus have their polls converge toward the mean - may need to be re-examined in light of the tracking poll data. The decline in the variance is gradual over the month, but interrupted by the debate-generated spikes.

 

Please offer any theories you might have that could explain this phenomenon.

 

 


Poll Performances: Crazy October


The final presidential contest predictions of the major media polls all came pretty close to the actual results, predicting Obama to win by anywhere from 5 to 11 percentage points (he actually won by 6.7 points).

 

However, the polls showed a great deal of variability even during the last four weeks of October leading into the election, raising questions about how to measure poll "accuracy" during the election campaign itself.

 

Shown below is a graph of the results of 10 polls that publicized results for at least the final three weeks of October.

[Graph: 2008 Oct weekly poll trends.png]

An examination of the daily tracking polls provides no better picture of poll accuracy.  

[Graph: 2008 Oct daily poll trends.png]

The differences in the overall trends are quite substantial, as are the differences on individual days.

 

On October 12, IBD/TIPP shows an Obama lead of two points, while DailyKos says it's 12. The next day, IBD/TIPP produces a 3-point lead, while GWU has a 13-point lead. On October 25, GWU's lead is just three points, while DailyKos has it at 12 points. Even right before the election, IBD/TIPP shows just a 2-point lead, while ABC/WP says the lead is 11 points.

 

More important are the many different pictures of the dynamics of the race. If we compare, say, DailyKos with GWU, one would never know they were measuring the same contest - except that both converge in their final predictions. Another example: Rasmussen shows only a little variability in the race, with Obama's lead ranging from 3 to 8 points and ending at 6, while GWU goes from a 13-point lead down to 1, finally ending at 5. Likewise, Zogby's description of the campaign dynamics shows a relatively stable race for the first half of the month, followed by a major surge, a big decline, and then a last-minute surge. DailyKos usually had the most optimistic Obama leads, mostly in double digits, except for the middle of the month and at the end, when the lead declined to just five points.

 

What can we say about poll performances when there are such different stories about the October dynamics? The notion that the polls were mostly "accurate" must be modified to reflect how divergent they were during the campaign.

 


Exit Polls and the Undecided Voters


The 2008 exit polls suggest that most of the major media pollsters missed an important part of the presidential campaign: they either failed to measure or largely ignored the large group of undecided voters just after the major party conventions officially nominated their candidates, and the diminishing size of that group over the next two months.

 

The 2008 exit polls obtained somewhat more detail than those in 2004 about how undecided voters were, and the results support the argument I have made several times previously on pollster.com: that many voters mull over their decisions until late in the campaign.

 

In the 2008 election, the exit polls show that 4 percent of voters said they made up their minds on Election Day, another 3 percent in the previous three days, and an additional 3 percent within the past week - for a total of 10 percent. That's virtually the same as the 11 percent who said they had made their decision in the past week in 2004 - with 5 percent saying the day of the election, 4 percent the previous three days, and 3 percent the past week.

 

 

When Voters Made Their Decisions
2004 and 2008 Exit Polls
(Source: CNN.com, Election 2004 and Election 2008)

                            2004 (%)   2008 (%)
Day of election                 5          4
Previous 3 days                 4          3
Past week                       3          3
TOTAL (past week)              11         10
Last month/In October          10         15
In September                  n/a         14
Before September (2008)       n/a         60
Before October (2004)          78        n/a
TOTAL before past week         88         89

 

More interesting, the 2008 exit polls suggest that only 60 percent of voters had decided whom to support before September, with about four in ten making up their minds after the major party conventions in August.

 

While it is certainly difficult for a voter to pinpoint exactly when he or she made a final decision, some pre-election polling data from this year suggest the exit poll results may be pretty good approximations. Of course, most pollsters don't measure the undecided voter directly (which they could do, by asking whom voters intend to choose on Election Day and then asking, "or haven't you made up your mind yet?"), but many will do so indirectly. After the hypothetical, forced-choice vote question, for example, the CBS/New York Times poll sometimes asks, "Is your mind made up, or is it still too early to say for sure?" CBS reports that in mid-August 2008, about a third of all registered voters were "uncommitted" - they had either not chosen a candidate initially, or they had mentioned a candidate but then said it was still too early to say for sure whether their minds were made up.

 

A somewhat larger undecided group was measured in a Gallup poll conducted Sept. 3-5, 1996, which asked voters up front if they had made up their minds - rather than the standard "who would you vote for if the election were held today" question. In that format, 60 percent said they had made up their minds, while 39 percent said they had not, and 1 percent were unsure. Those 1996 figures are similar to what the 2008 exit poll responses suggest as well.

 

Below are two graphs of voter preferences. The first is based on a reconstruction from the exit poll crosstabs, which show voter preferences including the undecided vote. Of course, such a reconstruction needs to be viewed cautiously. It's difficult for people to remember exactly when they made up their minds, so at best this graph is an approximation of what voter preferences might have looked like for each month.

 

[Graph: Voter pref Aug-Election Day (2008 Exit polls).png]

As you can see, this first graph shows more voters undecided than choosing either of the two major candidates before September, and it shows the decline in the undecided group over time. (Obviously, each time period on the X-axis is not proportional to the number of days in the time period, but the general pattern is obvious.)
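For anyone who wants to attempt a similar reconstruction, here is a minimal sketch of the cumulative method described above. The "when decided" shares are the 2008 exit poll figures cited in the text; the candidate splits within each timing group are hypothetical placeholders (the real splits come from the CNN exit poll crosstabs), and third-party candidates are ignored for simplicity.

# Sketch of the cumulative reconstruction described above. "when_decided" uses the
# exit poll timing shares cited in the text; "obama_share" is HYPOTHETICAL.
when_decided = {            # share of voters deciding in each period (percent)
    "before September": 60,
    "in September":     14,
    "in October":       15,
    "past week":         3,
    "previous 3 days":   3,
    "Election Day":      4,
}

obama_share = {             # HYPOTHETICAL Obama share within each timing group
    "before September": 0.55, "in September": 0.50, "in October": 0.50,
    "past week": 0.48, "previous 3 days": 0.48, "Election Day": 0.48,
}

periods = list(when_decided)
for cutoff in range(1, len(periods) + 1):
    decided = periods[:cutoff]          # groups that have decided by this point
    obama = sum(when_decided[p] * obama_share[p] for p in decided)
    mccain = sum(when_decided[p] * (1 - obama_share[p]) for p in decided)
    undecided = 100 - obama - mccain    # everyone who decides later counts as undecided
    print(f"through '{periods[cutoff - 1]}': Obama {obama:.0f}, McCain {mccain:.0f}, "
          f"undecided {undecided:.0f}")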

 

The second graph is a reconstruction (averaging) of Gallup's daily tracking poll, using the likely voter results when available, and the registered voter results otherwise. The "last week" results are based on just four days of the week before the election, while the "last 3 days" are based on just those days from the tracking poll. I used this method to approximate the exit poll categories and provide a comparable base of analysis.

 

[Graph: Voter pref Aug-Election Day (Gallup Daily Tracking polls).png]

In contrast with the first graph, the second graph of the Gallup tracking poll shows no significant change in the undecided voter group from August through Election Day. In fact, Gallup's daily tracking poll, which goes back to March 2008, shows a steady 5-6 percent undecided group for the whole seven months - something that not even Gallup researchers can argue (with a straight face) is accurate.

 

If we believe that the exit polls have any validity in measuring opinion, it's hard to deny the superiority of the first graph in giving poll consumers an accurate picture of the changing electorate during the campaign. The declining size of the undecided vote over the course of the campaign is clearly an important dynamic in the campaign, regardless of whether pollsters will acknowledge it.

 


Pew's Andrew Kohut Mischaracterizes Own Data


On "All Things Considered" Sunday night, Andrew Kohut, director of the Pew Research Center, reported the latest results of his organization's poll, showing Obama with only half the lead he had the previous week. In explaining the decline, Kohut misstated what his poll results actually showed.

 

According to Kohut, Obama was up by just 7 points among likely voters in the latest Pew poll, 49 percent to 42 percent, down from the 15-point lead he enjoyed the previous week. The NPR anchor asked Kohut to explain the dramatic decline in Obama's lead. "There are two things going on," he said. "First of all, John McCain has made some gains among whites and he's made some gains among independent voters. The other thing - McCain is enjoying the typical boost we get when we narrow the sample from registered voters to likely voters."  

 

Actually, Pew has been reporting the results among likely voters since early September, and the decline in Obama's lead occurred among Pew's likely voters - which favored Obama by 53 percent to 38 percent in the Oct. 23-26 poll. The 4-point drop in Obama's support and 4-point gain in McCain's support found by the Oct. 29-Nov. 1 poll could not be attributed to narrowing the sample from registered to likely voters, given that both sets of results were based on likely voters. (See Pew's chart here.)

 

This misinterpretation of the data comes in the wake of Pew's previous two October polls, which were clearly outliers compared with other national polls conducted in the same time periods. Other polling organizations in mid and late October showed Obama with only half the lead that Pew did, so when Pew's last pre-election poll found only a 7-point lead, that finally brought Pew back into line with other polls.

 

It may be, as Kohut suggests, that McCain picked up support among whites and independent voters - though it is worth further research to explain why none of the other polls report the same dramatic change. In any case, whatever the mysterious causes of Pew's outlier results, followed by the sudden bounce back into line with other polls, this unusual fluctuation cannot be easily explained away as a change from registered to likely voters.


Undersized Undecideds


Two days ago, Nick Panagakis reopened our debate about the "true" size of the undecided vote in his post on pollster.com, entitled Supersized Undecideds. Oddly, his post tends to support my argument rather than contradict it.

 

First I should note that Nick has misstated my position somewhat, which was explained here and here. In brief, my argument is that pollsters should measure the undecided vote by including in their vote choice question a tag line, "or haven't you made up your mind yet?" I also argue that pollsters should not insist on asking whom voters would choose "if the election were held today," but whom they would support on Election Day. I contend that this way of asking voters their candidate preferences produces a more realistic and accurate picture of the electorate than the way pollsters currently report the results of their hypothetical, forced-choice vote question.

 

Nick disagrees, because he thinks this approach would exaggerate the number of undecided voters. He makes the novel argument that any indecision measured as I suggest would be "calendar-induced" indecision but not "candidate-induced" indecision. I don't know of any evidence for the validity of this distinction, but it's crucial to his argument.

 

To illustrate this point, he presents recent data from the ABC/Washington Post tracking polls, which suggest that currently only 9 percent of voters say they could change their mind before election day, including 3 percent who say it's a "good" chance they could do so, and 6 percent who say it's "pretty unlikely" they would do so. The latter term Nick interprets in his own mental framework as "no chance in h*ll."

 

Then, as though it's an obvious problem, Nick says, "Imagine if polls up until last week were showing undecideds 10 to 20 points higher - or still showing 9 points greater this week." Yes, let's imagine the 9 percentage point increase in the undecided voter group over what is reported these days.

 

It's important to note that most polls have been showing just a couple of percentage points of undecided voters, including ABC and the Post. These news organizations did not highlight the 9 percent undecided in their news stories, but instead focused on Obama's lead over McCain by 52 percent to 45 percent - leaving 3 percent unaccounted for (1 percent "other" and 2 percent "undecided"). If you want to know how many voters might "change their minds," you have to look hard for the data. Of course, ABC and the Post are no different from most other polling organizations that regularly suppress the undecided vote.

 

So, if the polls were to show "9 points greater undecided this week," as Nick feared, that would still be only 10 to 11 percent. That hardly seems excessive, given that the 2004 exit poll found 9 percent of voters saying they had made up their minds in the three days just prior to the election. And just today, the AP reported that about 14 percent of voters were "persuadable" - a news story that, unlike most poll stories, emphasized rather than suppressed the size of the undecided vote.

 

Just before the New Hampshire Democratic Primary, the UNH Survey Center found 21 percent of voters who said they had not made up their minds (when asked directly, without the hypothetical, forced-choice version that is standard), and the exit poll showed that 17 percent of voters said they had made up their minds on election day.

 

These numbers suggest that measuring and reporting the size of the undecided voters is an important part of describing the state of the electorate. Not to do so is one of the continuing failures of most media polls.


Different Polls, Different Trends


As the discussion of Charles Franklin's column on house effects suggests, most people believe that "who's right" in their poll results these days will be resolved after Election Day. Then we can compare which polls came closest to the final results, and infer that the most accurate polls in the final pre-election predictions were probably the most accurate during the campaign as well.

 

But it doesn't usually work out that way. In 2004, the seven polls noted in the accompanying chart all showed Bush winning by a margin of one to three percentage points, except for Fox, which showed Kerry the winner by two. All the results were well within the polls' margins of error in comparison with the actual election results.

 

[Graph: 0810_14 Final Poll Predictions 2004 Election.png]

However, the interesting point is that during the month of September, these very same polls showed dramatically different dynamics. As shown in the next graph, there were three basic stories: CBS, Gallup, Time and ABC all showed Bush gaining momentum in the weeks following the Republican National Convention, and then falling toward the end of the month. And although these pollsters agreed on the general pattern, at the end of the month Gallup showed Bush with an 8-point lead, CBS and Time had him at one point, and ABC at 6 points.

 

The second story, reported by Fox, Zogby and TIPP, showed very little movement over the month of September, with the margin varying from a Kerry lead of one point to a Bush lead of three points.

 

Finally, Pew had its own dynamic, not found by any of the other polls, showing a significant surge for Bush after the convention, followed by a dramatic decline, then another significant surge.

 

[Graph: 0810_14 Bush Lead Sept 2004.png]

One of the most interesting comparisons is between Gallup and Pew, which diverged by 13 points in mid-September, but closed to agreement by the end of the month.

 

At the end, all the pollsters could claim they were "right" on target, and NCPP dutifully noted the fine performance of the media polls. That performance, of course, was only in the final prediction. No effort was made to evaluate the polls during the campaign, though clearly they presented contradictory results. It appears as though we need a means of evaluating the polls during the election campaign.

 

It's true, of course, that we can't know which polls are most accurate during the campaign, but we can say that collectively they often tell quite divergent stories. And that hardly qualifies them for plaudits after Election Day.

Last week (Oct. 6), the Gallup and DailyKos/Research 2000 tracking polls showed Obama up by 9 and 11 points, respectively - the same figures they show as of Oct. 13. The Diageo/Hotline, GWU/Battleground and Zogby tracking polls all showed quite different results - with quite different trends.

 

On Oct. 7, Diageo/Hotline, GWU and Zogby showed an average of a 2-point lead for Obama, while DailyKos and Gallup showed an average of a 10.5 point lead. All three of the former polls reported an increasing lead for Obama in the subsequent week, while Gallup and DailyKos told us there was essentially no change.

 

Obama's Lead Among Five Tracking Polls

          Gallup   Gallup2   DailyKos   Diageo   GWU   Zogby
6-Oct        9        -         11         2       7      3
7-Oct       11        -         10         1       4      2
8-Oct       11        -         10         6       3      4
9-Oct       10        -         12         7       8      5
10-Oct       9        -         12        10       -      4
11-Oct       7        6         13         8       -      6
12-Oct      10       10         12         6       8      4
13-Oct       9       10         11         6      13      6

 

 

After the election, will we know which tracking polls were right? If history is a guide, all will come within their margins of error compared to the final election results. And we will all forget how confusing their different prognostications were during the campaign.

 

Perhaps we need another standard by which to judge the polls' performances during the election campaign.


What the Bailout Polls Really Tell Us


Three polls, conducted at the same time, give three wildly contradictory pictures of the American public. The Los Angeles Times/Bloomberg poll says the public opposes a taxpayer bailout of Wall Street by 55 percent to 31 percent, a result cited on CNN by David Gergen the night the poll was published. He used the poll to illustrate his point that "the American people" were angry at the thought of using government funds to help Wall Street firms. That theme seemed to dominate several networks' coverage of the issue, though it was contradicted by a Pew Research poll published the same day as the Times/Bloomberg poll: Pew found that "Most Approve of Wall Street Bailout" (by a margin of 57 percent to 30 percent).

So, either a 24-point margin against the bailout, or a 27-point margin in favor. Could there be any greater demonstration of how confusing the media polls are to anyone who genuinely cares about what the public thinks? But then there is the Washington Post/ABC poll, published the very same day as the other two, showing a very different public - one with "Tepid Public Approval for Fed Action," by a statistically insignificant difference of 44 percent to 42 percent.

ABC's Gary Langer acknowledges these discrepant results, writing that "Some analysts might say the results are contradictory; I'd suggest instead that we learn more, not less, by comparing and contrasting them." The "instead" clause seems like a non sequitur to me - it is obviously true that the results are contradictory and, yes, we can also learn "more, not less" by examining their contradictions. Perhaps especially enlightening, besides the fact that each polling organization phrased the questions differently (giving different information to the respondents), is Langer's point that only 27 percent of respondents had "strongly" held views - 9 percent in favor, 18 percent opposed. It hardly portrays the public as fighting mad against the government's plan to address the economic crisis, when almost three-quarters of the public seems more tentative than decisive.

Two days later, a CBS/New York Times poll found what might best be described as "tepid opposition" to the federal government's bailout plan - 42 percent who approve to 46 percent who disapprove. But after asking respondents about that plan (without specifying the details), the poll then gave respondents limited information about the plan Congress is working on, principally that the government would "provide" $700 billion of government funds to financial service companies in danger of going bankrupt. The question then asked if respondents thought it was a good idea or a bad idea, or "don't you know enough to say?" With that formulation, the poll found 38 percent opposed, 16 percent in favor, and 46 percent without an opinion.

An examination of all the poll results suggests a public that is mostly taking a wait-and-see attitude toward whatever plan the president and Congress might finally adopt, a conclusion that was hardly the dominant theme of any network, nor of any news media organization conducting its own polls. That the public might be ambivalent is not surprising, given how confusing the actual events have proven to be. As Langer notes, the vast majority of people don't feel strongly one way or the other. Moreover, as the CBS/NYT poll shows, close to half of the public expresses no opinion, when explicitly given that option. I suspect the percentage would have been even higher, if the poll hadn't given respondents information about the plan and then asked their immediate reaction to it. (It is also likely the direction of the results would have been different, if the information provided to the respondents had been more objective - perhaps including mentions of oversight, control of CEO salaries, public equity in the companies, and/or the "investment" character of the funds, rather than the implication that the money would be handed out to the companies with no strings attached).

It's true, as Langer notes, that we pollsters can learn a great deal by examining contradictory poll results. But that doesn't address the larger problem of how the general public and political leaders view them. We may think conflicting poll results are enlightening, but I suspect that to many others they merely demonstrate how untrustworthy polls are in the first place.


Gallup Daily - The Worst Thing in 10 Years?


In Mark Blumenthal's post on how David Plouffe is polling for Barack Obama, the Democratic nominee's communications director, Dan Pfeiffer, is quoted as saying that "the Gallup Daily is the worst thing that's happened to journalism in 10 years." Gallup's Frank Newport predictably rejected the criticism, claiming that Pfeiffer's comments "are the same types of sentiments that have been expressed since George Gallup's first presidential polls in 1936."

 

I don't think Frank is correct in his boilerplate response. It is not useful to dismiss all criticism of polls these days as the same old tired comments of seven decades ago that have long been discredited. If I understand Mark's blog correctly, Pfeiffer and Plouffe object to the Gallup Daily because it does not, contrary to Frank's assertion, provide an accurate description of where the presidential race stands today.

 

According to Mark's post, Plouffe claims that the topline polling data aren't especially useful (they "don't tell you anything"). Instead, the campaign focuses on who are the "true undecideds," and what messages will persuade them to vote for Obama. Knowing how many undecided voters there are is an integral part of understanding the presidential race. That's true for the campaigns, and it is no less true for political observers and the public.  

 

But Gallup refuses to measure the undecided vote, and instead gives a hypothetical description of a presidential race, "if the election were held today" - showing us that 95 percent of voters have already made up their minds. But the election is not being held today, and the Gallup Daily does not tell us the truth about how many voters are - at this point in the campaign - committed to a candidate, and how many voters have yet to make up their minds. From Plouffe's and Pfeiffer's point of view, the Gallup Daily is useless - even in understanding the national sentiment.

 

Frank claims that the public needs "independent polling" so that it doesn't have to rely on "campaign operatives' self-promoting insights on where the race stands." I  couldn't agree with him more. But the public needs accurate independent polling, which gives the public a full picture of where the presidential race stands. Gallup Daily does not do that. But it could.


The Myth of "Obama Fatigue"


According to Pew's Andrew Kohut, the American electorate is suffering from "Obama fatigue." A close examination of the polling data suggests this conclusion is more of a personal opinion than one supported by the polling data.

 

Kohut came to his conclusion after first noting that the latest Pew Research Center poll in early August found Barack Obama's lead over John McCain "withering." He then noted that the same poll found more people saying they had been hearing "too much" about Obama's campaign than said that about McCain's campaign. Linking the two findings, Kohut concluded that Obama's greater news exposure over the summer "has proved a problem, not a blessing, for the Democratic candidate."

 

There are a couple of problems with the data interpretation. First is the assertion of what Kohut calls a "tightening race." Pew conducted three polls - one each in June, July and August - and in those polls found Obama's lead going from eight points in June (48 percent to 40 percent), to five points in July (47 percent to 42 percent), and to just three points in early August (46 percent to 43 percent). Thus, overall, Obama's support dropped two percentage points over the summer, while McCain's increased by three. That such minor differences in the polls should be treated as a definitive trend is stunning. Even with larger-than-average sample sizes, those differences are within the polls' margins of error. In other words, even according to these polls, it's quite possible that there was no decline in Obama's lead, and perhaps even an increase. We just can't know for sure (using the 95 percent confidence level).
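A back-of-the-envelope check illustrates the point. The sketch below assumes a sample size of about 1,500 per poll (a placeholder, roughly in the range of Pew's samples) and uses the standard formula for the sampling error of a lead; under those assumptions, the 5-point change in Obama's lead falls inside the margin of error for the change.

# Back-of-the-envelope check: is the change in Obama's lead (8 points in June vs.
# 3 points in August) larger than sampling error? N below is an assumed placeholder.
import math

def lead_se(p_obama, p_mccain, n):
    """Standard error of the lead (difference of two shares from one sample)."""
    d = p_obama - p_mccain
    return math.sqrt((p_obama + p_mccain - d * d) / n)

N = 1500                                   # assumed sample size per poll
june_lead, june_se = 0.48 - 0.40, lead_se(0.48, 0.40, N)
august_lead, august_se = 0.46 - 0.43, lead_se(0.46, 0.43, N)

change = june_lead - august_lead                           # 5-point drop in the lead
moe_change = 1.96 * math.sqrt(june_se**2 + august_se**2)   # 95% MoE for the change
print(f"change in lead: {change*100:.0f} pts, MoE of change: {moe_change*100:.1f} pts")
# With these assumptions the 5-point change is inside the roughly 6.7-point margin of error.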

 

There are many other polls besides Pew that are measuring the candidates' support, but only one major media organization has conducted polls on a daily basis over this same time period. Gallup has been interviewing about 1,000 respondents each day, reporting the results on a three-day rolling average. If anyone wants to know how the campaign has changed over time, Gallup provides the best set of results. And these results do not show a linear change over the time period described by Kohut, but rather many fluctuations that defy any clear trend.

 

On June 10, Gallup reported a 6-point Obama lead, which disappeared by June 25. The lead went back to as high as six points in early July, down to one point in mid-July, up to nine points in late July, then down to zero only five days later. The lead was back up to six points on August 12, but down to one point on August 21. One can "discover" a linear three-month trend only by cherry-picking Gallup's results - but the cherry-picked trend could just as easily show an increase as a decline. In any case, the notion that "Obama fatigue" could explain all of these variations is simply not credible.

 

A second problem is the almost indecipherable meaning of the question used to suggest Obama fatigue. The poll question Kohut cited asked whether people felt they had been hearing "too much, too little, or the right amount" about each of the campaigns. Forty-eight percent said too much about Obama's campaign, 26 percent about McCain's. To be sure, that's a major gap, but what does it mean? If it means people are unhappy with hearing about Obama, and that is related to their "declining" support for him, how could Pew have found Obama's support dropping by only two percentage points, given the 22-point gap in the "fatigue" question? If that sentiment truly affected voters' support of Obama, one would expect a much greater drop.

 

More important, we know that the crucial question to explain change in support is whether the explanatory variable also shows change over the same time period. Did people become more dissatisfied from June to August with media coverage of Obama's campaign and, if so, did that increased dissatisfaction in turn cause their support to "wither"? As it turns out, Pew didn't ask that question back in June, so we don't know. Thus, statistically, we can't link dissatisfaction in the August poll with the change in support from June to August. The assertion of "Obama fatigue" is not a statistical conclusion, but an intuitive one.

 

An alternative intuitive explanation of what this question measured is that many voters may well be tired of a presidential campaign that goes on for 18 months or more - in other words, not "Obama fatigue" as much as "campaign fatigue." Dissatisfaction may have appeared to be more focused on Obama in this particular poll, because the question was asked during a time when there was more media coverage of Obama for his overseas trip. Had the question been asked at a different time, or had the pollsters tried to probe beneath the surface of this superficial question, we might have obtained a better insight into what the public was thinking.

 

Instead, we are treated to the fiction of "Obama fatigue" as a cause of a "tightening race"  - a spurious explanation of a non-event.

 

(A slightly different version of this critique was posted at HuffingtonPost.)

 


The "Loopy" Zogby Polls


All pollsters, it seems, eventually find themselves with what Andy Kohut once referred to as "loopy" results. His comment was about the Gallup polls in the 2000 election, though in September 2004, Pew experienced such results itself, and of course several polls this campaign season have produced inexplicable or "wrong" numbers, as indicated by the subsequent primary election vote counts.

 

This time, it's Zogby's turn to confuse the masses. His latest Reuters/Zogby poll, based on a sample of 1,089 "likely voters" drawn from listed telephone numbers, conducted Aug. 14-16, 2008, shows McCain over Obama by 46% to 41%.

 

Two days earlier, Zogby reported substantially different results. His online poll (of self-selected people who want to be part of his Internet polling sample) of 3,339 "likely voters," conducted Aug. 12-14, showed Obama with a three-point lead, 43% to 40%.

 

By Zogby's own calculation of the margins of error of each poll, the difference between the two polls in McCain's support (46% in the later telephone poll vs. 40% in the earlier online poll) is statistically significant. The difference in Obama's support (41% vs. 43% respectively) would not be statistically significant. Still, the 8-point difference in the margin of McCain's lead would be significant - a McCain 5-point lead vs. an Obama 3-point lead in the earlier poll.
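A standard two-sample test for the difference in a candidate's share between two independent polls makes the same point. This is a sketch, not necessarily Zogby's own calculation, and it ignores any design effects from his sampling methods.

# Sketch: two-sample test for the difference in a candidate's share between two
# independent polls, using the sample sizes and percentages cited above.
import math

def diff_significant(p1, n1, p2, n2, z_crit=1.96):
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    return z, abs(z) > z_crit

# McCain: 46% of 1,089 (telephone) vs. 40% of 3,339 (online)
print(diff_significant(0.46, 1089, 0.40, 3339))   # -> significant
# Obama: 41% vs. 43% in the same two polls
print(diff_significant(0.41, 1089, 0.43, 3339))   # -> not significant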

 

If we believe both polls, the period of Aug. 13-14 must have been a real bummer for Obama and an electoral high for McCain. Whatever it was that caused millions of voters to "change" their minds and gravitate toward the Republican candidate in the two-day period, however, escaped my notice. Perhaps others have been more observant.

 

Of course, there are reasons to discount both polls. Zogby has long been known for refusing to use sound methods in designing his samples. The use of only listed telephone numbers, and the self-selected samples of voters in his online surveys, are the two most salient problems. Still, his last pre-election polls often come close to the actual election results, and many news media outlets regularly publish his results.

 

Regardless of how loopy Zogby's results or his sampling methods are, his polls contribute to what Kathy Frankovic, in her 1993 AAPOR presidential address,[i] referred to as the "noise and clamor" of the polls. Thus, they're worth noting, if only in disbelief.



[i] Kathleen A. Frankovic, "Noise and Clamor: The Unintended Consequences of Success" (AAPOR Presidential Address), Public Opinion Quarterly, Vol. 57, No. 3 (Autumn 1993), pp. 441-447.


The Accuracy of Likely Voter Models


Several recent posts have addressed whether the likely voter (LV) models are more accurate than the results based on registered voters (RV). My sense is that this is not a question that can be answered in general, but rather has to be considered separately for each polling organization.

 

Let's look, for example, at Brian Schaffner's recent post, where he compared the RV vs. LV results for several pollsters in the 2004 election. He found that "On average, the RV samples for these eight polls predicted a .875 Bush advantage while the LV samples predicted a 2.25 advantage for Bush, remarkably close to the actual result." He concludes that "it does appear as though likely voters did a better job of predicting the result in 2004 than registered voters."

 

This quick look at the data is hardly conclusive, of course, which Brian acknowledges. He had only eight polling organizations in his analysis, and despite the average results, four of the eight polls showed no advantage to using the LV model, while the other four did. As he suggests, it would be important to look at other years, but also other types of elections and other polling organizations.

 

Even then, however, an overall conclusion would not be especially helpful. Each polling organization's LV model is so different from another's that each organization has to look at its own success rate over the years to determine whether the LV model is helpful. In 1996, Gallup's senior editor, Lydia Saad, showed that in some presidential elections Gallup's LV and RV results showed little difference, but that when they did differ significantly, the LV results were more accurate in predicting the election outcome. That was at the national level. In Gallup's New Hampshire primary polls over the years, however, it is the RV results that have typically been marginally closer to the final election results.

 

Still, at the national level, I would always bet on Gallup's LV results being a better estimate than the RV results, an example that Brian found for 2004. Gallup's final RV results showed Kerry up by two percentage points, while the LV results showed Bush winning by two points. Pew, which uses the Gallup LV model, also showed a four-point swing, from a one-point Kerry victory to a 3-point Bush victory.

 

Other polling organizations, of course, could find different results. And trying to average the results across polling organizations, to determine whether "in principle" LV models should be used, I would argue, is not helpful. That question has to be addressed by each polling organization based on its experience with its own LV model.

 


Why Should Pollsters "Cringe" at the Undecided Vote? (Panagakis-Moore, cont'd.)


Nick Panagakis' response to my column on a different approach to measuring vote choice reflects, I believe, the current conventional wisdom: that a forced-choice vote question is the best predictor of how voters will cast their ballots. This approach, Nick argues, "historically comes close to the actual outcome." Not only that, he "cringes" when he sees pollsters hedge their bets on a poll by saying "candidate A is up by 9 points - but 30% could change their minds." He says that reporting such numbers "devalues polls."

 

But what is the "truth" of the matter? Are we not interested in accurately portraying what the electorate is thinking "today"? If so, how can we say, as CNN does, that 100 percent of voters have made up their minds with more than three months before the election? Or as Gallup has been telling us for the past two months, that an average of 95 percent of voters have already made up their minds? Or even, as most other pollsters say, that over 90 percent have made a choice?

 

Pollsters get away with producing such dubious numbers, I think, because most pundits take a schizophrenic approach to the polls. At one level, they treat the results as though they are the Holy Grail. At the next moment, they dismiss the numbers as being irrelevant at this time of the campaign season, saying that we need to wait until after the conventions before people begin paying attention to the election. Dan Rather's recent column encapsulates this sentiment, headlined as "Summer polls in the presidential campaign are pure folly."

 

If we are concerned about devaluing polls, we might want to think about giving an accurate portrayal of what the public is actually thinking (or not thinking) weeks and months before an election. The current vote choice question clearly does not reveal the extent of public indecision, and thus, I think, undermines the credibility of polls more generally.

 

I am not arguing that shortly before election day, in their last pre-election polls, pollsters should not press voters for their choices. I agree that in most elections, even the "undecided" voters have an inkling of whom they will support. Barring last minute media coverage that favors one candidate or the other, the faint-hearted leanings of these undecided voters usually turn out to be decent predictors of how they will act when they get in the voting booth. (Notable exceptions at the national level occurred in the 1948 and 1980 presidential elections, of course, not to mention the 2008 New Hampshire, South Carolina, and California primaries, among others).

 

Still, during the campaign leading up to the election, why should pollsters "cringe" at reporting that a large segment of the population remains undecided? In fact, that's just what CBS News has done, commendably in my view, when it headlined its latest poll results as "Poll: Obama Leads, But Race Fluid." Nick, it seems, would not favor such a headline, nor apparently would most other media pollsters - at least as indicated by their own reports.  

 

There may be better ways to get at voter indecision than asking first whether people have made up their minds. Andy Smith of the UNH Survey Center said he will be experimenting this election season with other approaches, which could include naming the candidates, as well as asking voters whom they expect to vote for in November (not "today"), with the tag line "or haven't you made up your mind yet?" A follow-up question could probe their leanings, but at least up front the question would explicitly allow undecided voters to say so.

 

It seems pretty clear that the standard vote choice question sacrifices "truth" about the electorate during the campaign, whatever the question's utility in predicting results right before the election. The research task, I believe, is to find an approach that does not produce misleading results about the state of the electorate during the campaign, while still allowing pollsters to make predictions that are as accurate as possible right before election day.


A Different Approach to Measuring Vote Choice (and Lack of Choice)

Topics: Andy Smith , Barack Obama , CNN , John McCain , Measurement , UNH Survey Center

In the recent release of the Granite State Poll, Andy Smith (director of the UNH Survey Center) noted that Barack Obama led John McCain by three percentage points, 46 percent to 43 percent, with 3 percent favoring another candidate, and 8 percent undecided. In the next paragraph, however, he noted that "only 51 percent of likely voters say they have definitely decided who they will vote for, 21 percent are leaning toward a candidate, and 28 percent say they are still trying to decide."

 

The second sentence may seem incompatible with the first - 8 percent of voters undecided, at the same time there are 21 percent leaning and another 28 percent still trying to decide - but it's a compromise that allows Smith to use the standard vote choice question, while still measuring the extent of voter indecision.

 

The standard forced-choice (who would you vote for if the election were held "today") approach produces results showing that more than nine in ten voters have already made up their minds about whom to support for president. Such a finding defies credulity, as I argued in a previous post, because of many other indicators that suggest a substantial proportion of voters have not even begun to think about the election. This was a particularly problematic result during the early primary season, when anyone who had even a dollop of experience with elections knew that primary voters had not made up their minds weeks and months ahead of their respective elections, despite what the polls said.

 

During the New Hampshire primary season, CNN's Keating Holland and Smith experimented with a different approach for measuring voter preferences. I had suggested a dichotomous question up front, asking if voters had made up their minds (or not) whom to vote for on primary election day, but Holland and Smith came up with an alternative three-part response - asking up front if voters had definitely decided whom to support on primary election day, if they were leaning toward a candidate, or if they were still trying to decide. Following this question, regardless of the answer, all respondents were asked the standard vote choice question: whom they would vote for if the election were held today.

 

With this format, the pollsters were able to determine how committed voters were to a choice in January (primary election day), and also to measure their top-of-mind preference if the election were held "today." Asking the undecided question first did not appear to influence respondents' willingness to give a preference to the second question, thus allowing CNN and the UNH Survey Center to report numbers that were comparable to what other polling organizations were doing - but still being able to indicate the size of voter indecision.

 

In the final pre-election poll, the CNN/UNH Survey Center results were as close to the actual outcome on the Republican primary as any of the other polling organizations. On the Democratic side, the polling results were similar to the average of the other polling organizations, showing Obama over Clinton, when in fact Clinton won. But CNN and the Survey Center were able to announce up front that with three days to go, 21 percent of the Democratic voters said they were still trying to make up their minds - suggesting the potential for movement.

 

Because the experiment appeared to provide additional insight into the state of voters' minds, Smith has continued to ask the undecided question up front in the general election polling. That's what gave him the results noted at the beginning of this post.

 

There are several ways to report the results. In accordance with standard practice, Smith focuses on the "today" results. Alternatively, he could focus on the results that treat the "still trying to decide" as though, in fact, they are undecided. Both results are shown in the table below:

 

 

TABLE 1

             Standard Vote      Results
             Choice Question    Filtered
                   %               %
Obama             46               38
McCain            43               32
Other              3                1
Undecided          8               29*
TOTAL            100              100

*Among those who said they had "definitely" made up their minds, 2 percent (1 percent of the whole sample) said they were undecided whom to vote for, giving 29 percent, instead of 28 percent, in the undecided column.

 

 

A more detailed table of filtered results would show the following:

 

 

TABLE 2

                               %
Definitely Obama              28
Lean Obama                    10
Other/Undecided (1%/29%)      30
Lean McCain                   10
Definitely McCain             22
TOTAL                        100

 

By the way, it's clear that McCain does better than Obama among people who say they have not yet decided whom to support, which is why the margin is 6 points in the filtered version and just 3 points in the standard version. Table 3 shows the crosstabs:

 

 

TABLE 3

                 Decided   Leaning   Still trying to decide
                    %         %               %
Obama              54        47              30
McCain             44        46              38
Other               1         3               8
Undecided           2         4              24
(Weighted N)      239        98             128
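As a rough check, the filtered toplines in Table 1 can be recovered from the Table 3 crosstabs and the weighted group sizes. A minimal sketch (rounding keeps the figures from matching the published numbers exactly):

# Sketch: recovering the "filtered" toplines in Table 1 from the Table 3 crosstabs.
groups = {          # weighted N and candidate shares (percent) by decision status
    "decided": {"N": 239, "Obama": 54, "McCain": 44, "Other": 1},
    "leaning": {"N": 98,  "Obama": 47, "McCain": 46, "Other": 3},
}
still_deciding_N = 128      # treated entirely as undecided in the filtered version
total_N = sum(g["N"] for g in groups.values()) + still_deciding_N

filtered = {}
for cand in ("Obama", "McCain", "Other"):
    filtered[cand] = sum(g["N"] * g[cand] / 100 for g in groups.values()) / total_N * 100
filtered["Undecided"] = 100 - sum(filtered.values())

print({k: round(v) for k, v in filtered.items()})
# -> roughly {'Obama': 38, 'McCain': 32, 'Other': 1, 'Undecided': 29}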

 

If the above results are typical of national polls, then one reason McCain may be competitive with Obama, despite the underlying factors that suggest a Democratic election year, is that voters who haven't yet made up their minds are more likely to have heard McCain's name. When pressed by pollsters about whom they would support "if the election were held today," they mention the more familiar name. That doesn't mean, however, that come election day they will actually vote for McCain.

 

So, which is the more accurate representation of the results - the one showing just 8 percent undecided, or the one showing 29 percent undecided?

 

My own preference would be to report the results as shown in Table 2, or in Table 1 in the "filtered" column. Those results are not comparable to the way most polling organizations present their figures, but I think they give a more accurate picture of the state of the electorate's collective preferences than the standard approach. After all, it's difficult to argue that at this time in the campaign season, 95 percent of voters have already made up their minds.

 

However, the approach that Smith follows may be seen by the news media as more acceptable - initially focus on the standard vote choice results, but also follow up that presentation with figures showing how committed or undecided the electorate is, based on the undecided question that is asked first.

 

Comments?

 


Moore: "Swing Voters" Redux - CBS/NYT vs. Gallup

Topics: 2008 , Barack Obama , CBS , CBS/New York Times , David Moore , Gallup , John McCain , UNH Survey Center

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

In a post last week, I suggested the size of the group that Gallup calls "swing voters" was probably a significant underestimate of the actual proportion of the electorate that is up for grabs. A new CBS/New York Times poll seems to confirm my suspicions, reporting the equivalent swing voter group at roughly one and a half times the size Gallup reported - 36 percent vs. 23 percent, respectively.

The Gallup report defined "swing voters" as those who were "undecided" (6 percent), plus those who initially supported one of the two major candidates but then admitted they could change their minds before election day (17 percent). The same criteria, applied to the CBS/NYT poll, suggest a larger swing voter group because this poll has a larger undecided group than the Gallup poll (12 percent), and a larger number of voters who initially chose a candidate but said it was too early to say their minds were made up (24 percent).*

There is almost a month between the two polls, Gallup's conducted June 15-19 and the CBS/NYT poll conducted July 7-14. So, the difference in the size estimates of the swing voter group could be a function of time. If so, that leads to the somewhat counterintuitive conclusion that this past month's campaigning has led to an increase in voter uncertainty, rather than the reverse, as more conventional frameworks might predict. I'm not a fan of this unconventional theory, though CBS reports that according to their poll, the undecided vote increased from 6 percent to 12 percent in the past month. Still, I suspect the differences between the two polls are mostly caused by house effects, but it would be hard to prove one way or the other.

In my post last week, I suggested the actual proportion of the electorate up for grabs is probably greater than the 40 percent figure found by Gallup in September 1996 in the Robert Dole - Bill Clinton contest. The CBS/NYT poll lends credence to my suspicions, even though it also used the forced-choice format ("who would you vote for if the election were held today?") that Gallup used last month. In September 1996, Gallup first asked if voters had made up their minds, and then asked voters whom they preferred. If the CBS/NYT poll had used that format this time, it almost certainly would have found a larger group of swing voters than the 36 percent it just reported.

In any case, CBS has rightfully emphasized in its headline the most important conclusion from these data: "Obama Leads But Race Looks Fluid." Too few pollsters, and too few news organizations, are looking at the fluidity of the electorate. Instead, like the New York Times article, they let stand the forced-choice horserace numbers as though such figures are solid estimates of voter intentions.

While I applaud CBS and Gallup for pointing to the fluidity of the race, I think it's worthwhile saying again that the uncertainty in the race is not because many voters may change their minds before election day, but rather because many voters have not yet made up their minds. The notion that 90 percent or more of voters have already come to a conclusion as to whom they will support (even a conclusion they can change) is highly misleading - an artifact of poor question wording that pollsters should have long since modified.


* The CBS/NYT poll shows that 28 percent of those who initially made a choice then admitted their minds were not made up. The table shows that 86 percent made a choice (45 percent for Obama, 39 percent for McCain, 2 percent for other). The two percentages multiplied by each other give 24 percent. The latter figure added to the 12 percent who originally said they were undecided produces a 36 percent total for "swing voters."
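The footnote arithmetic, spelled out:

# The footnote arithmetic, using the percentages reported above.
chose_a_candidate = 0.86     # 45% Obama + 39% McCain + 2% other
minds_not_made_up = 0.28     # share of those choosers who said their minds weren't made up
initially_undecided = 0.12

swing = chose_a_candidate * minds_not_made_up + initially_undecided
print(round(swing * 100))    # -> 36 percent "swing voters"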


Moore: USA Today's Cluster Analysis of Voters - How Useful?

Topics: 2008 , David Moore , UNH Survey Center , USA Today

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

On Thursday, July 10, USA Today published an analysis of voter intentions that produced "six types of voters" who the paper claims "will decide the presidential election." The types included: true believers (30 percent of the electorate), up for grabs (18 percent), decided but dissatisfied (16 percent), fired up and favorable (14 percent), firmly decided (12 percent), and skeptical and downbeat (12 percent).* As Mark Blumenthal indicated, this is a fascinating analysis, but how useful is it for understanding the election?

The six types of voters were produced using cluster analysis. This statistical technique is similar to factor analysis, except that it classifies respondents into distinctive groups, while factor analysis classifies various opinions into distinctive groups. Without going into the details of how the technique works, I think it's sufficient to note that the analyst has a great deal of control over the types of groups produced by cluster analysis. The analyst chooses the variables that are used to classify respondents, and also determines how many groups the cluster analysis produces. The fact that the analyst chose six clusters, instead of any other number between two and ten, was purely a subjective decision.
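For readers unfamiliar with the technique, here is a generic sketch of this kind of cluster analysis - not USA Today's actual procedure - using hypothetical respondent data and scikit-learn's k-means. The two analyst choices highlighted above (which variables go in, and how many clusters come out) are both explicit parameters.

# Generic sketch of a k-means cluster analysis (not USA Today's actual procedure).
# The respondent data below are hypothetical stand-ins for the survey items.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_respondents = 1000

X = np.column_stack([
    rng.integers(1, 5, n_respondents),   # enthusiasm about the election (1-4)
    rng.integers(0, 2, n_respondents),   # election will make a difference to me (0/1)
    rng.integers(1, 5, n_respondents),   # favorability toward candidate A (1-4)
    rng.integers(1, 5, n_respondents),   # favorability toward candidate B (1-4)
    rng.integers(1, 4, n_respondents),   # certainty of vote choice (1-3)
]).astype(float)

k = 6                                    # the analyst's choice of how many "types" to produce
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
sizes = np.bincount(model.labels_, minlength=k) / n_respondents
print({f"cluster {i}": f"{s:.0%}" for i, s in enumerate(sizes)})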

What is most surprising about the analysis is that it is issue free. The stereotypical complaint by political observers about the news media is that reporters focus on the horserace almost to the exclusion of any real substantive issues. This USA Today analysis fits that criticism to a T. I believe there is a widespread consensus among political observers these days that the war in Iraq (and national security more generally), the economy, and healthcare are among the most salient issues dividing the two major presidential candidates. Yet, there is nothing in the newspaper's analysis that groups voters according to their views on any of these major issues. Nor is there any mention of party identification, which often acts as a catch-all variable for a host of issues.

The variables chosen to classify respondents were 1) respondents' enthusiasm about the election, 2) whether respondents think the election would make a difference to them, 3) respondents' opinions (favorable or unfavorable) of each of the two major candidates, and 4) how certain respondents were to vote for the candidate of their choice. As these variables make clear, the classification scheme focuses almost exclusively on election turnout factors, with no mention of issues. Even the favorability ratings can be considered turnout variables in this context, because voters who are negative about both candidates are least likely to vote, while those who are positive about both candidates are most likely to vote. This is not to say that a mostly horserace-driven analysis, as this one is, doesn't provide some insights into the electorate. There are many different angles from which to analyze the electorate, and this is certainly a valid one. To me, it's just not as interesting as one that is more political in content.

Like most political junkies, I find intriguing almost any statistical analysis of polling data that goes beyond the simple marginals, and USA Today should be congratulated for making the effort. Still, I'd like to see a little more politics thrown into the mix - even if only to take these six types and describe their party identification, as well as their responses to other public policy questions. But mostly I would like to see a completely new cluster analysis that included policy attitudes as the defining variables for the groups. This is not to say that issues alone will determine the election. But I don't think we can get a good read on the electorate, and which types of voters will ultimately "decide the election," if we ignore issues altogether.


* The percentages exceed 100 percent because of rounding error.


Moore: Gallup's "Swing Voters" - A Major Underestimate?

Topics: David Moore , Gallup , NBC/Wall Street Journal , Newsweek , Time , UNH Survey Center , USA Today

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

In a recent post, Gallup's Jeff Jones reports that for the first time this election cycle, Gallup has measured the number of "swing voters" in the electorate. That's certainly a step in the right direction, but one might well wonder why it took so long for pollsters to admit that there is a substantial proportion of the public not committed to a candidate.

According to the post, Gallup finds that only 6 percent of "likely voters" are undecided as to which presidential candidate they will support. That number defies credulity. With five months to go in the campaign, neither major candidate an incumbent, no vice presidential candidates chosen, and no debates yet between the presumptive nominees, Gallup wants us to believe that 94 percent of voters have already made up their minds? Yes, indeed! Not only that, CNN says 99 percent are decided. Time says 92 percent. Newsweek claims 87 percent. USA Today with Gallup says 97 percent. ABC/Washington Post - 96 percent. The NBC/Wall Street Journal poll says 90 percent. (For sources, see The Polling Report.)

With all these major media polls (not to mention numerous other polls not affiliated with the major news media organizations) in rough agreement that about nine in ten voters or more have made up their minds, any challenge to this conventional wisdom may seem futile. But here's something to consider. In a Sept. 3-5, 1996 Gallup poll, 40 percent of voters said they were undecided about whom they would support in the November presidential election between Robert Dole and Bill Clinton. How could such a large number be undecided in that poll - taken after the major party conventions and with just two months to go before the election, in which there was a popular incumbent candidate - and yet so few voters admit they are undecided in the current polls?

The answer, of course, lies in the way the voting question is asked. The standard vote choice question, which dates to 1935 when George Gallup first asked about presidential preferences, is deliberately designed to obfuscate the number of undecided voters. Gallup knew that the press wouldn't be interested in results that showed perhaps a majority of voters undecided months before an election, so he asked respondents which candidate they would vote for "today."1 And for the past seven-plus decades pollsters have blindly followed that same format. In the September 1996 poll, however, Gallup abandoned the standard vote choice question and instead first asked voters whether they had yet made their decision as to which candidate they would support. In that context, 39 percent said they hadn't, and an additional one percent were unsure.

In the current Gallup report, mentioned at the beginning of the article, Gallup retains the forced-choice standard format, but follows up by asking respondents if they could change their minds before election day. Those who said they could - 9 percent who initially said they would vote for McCain if the election were held "today," and 8 percent who initially favored Obama - were added to the 6 percent who said they were undecided, producing the 23 percent that Gallup characterizes as "swing voters."
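For readers who want to check the construction of that measure, the 23 percent is simply the sum of three mutually exclusive groups from the published release:

```python
# Gallup's "swing voter" figure is the sum of three mutually exclusive groups
# from the published release (percentages of likely voters).
undecided      = 6   # no candidate choice in the initial forced-choice question
mccain_movable = 9   # chose McCain "today" but say they could change their minds
obama_movable  = 8   # chose Obama "today" but say they could change their minds

swing_voters = undecided + mccain_movable + obama_movable
print(f"Swing voters: {swing_voters} percent")   # 23 percent
```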

Thus, according to Gallup, about a quarter of the electorate is up for grabs. I'm skeptical about that number - I suspect the percentage is much higher, perhaps even greater than the 40 percent measured by Gallup late in the 1996 campaign. But at least it's a recognition that there is a substantial number of voters who are not yet committed to a candidate.

Still, I would argue that most of the swing voters are not people who "could" change their minds before election day, as Gallup asserts, but rather people who have not yet even decided whom to support. Gallup (and any other national poll), of course, could test that proposition. All they need to do is replicate the question that Gallup asked in its September 3-5, 1996 poll: Ask voters up front if they have made up their minds whom they will support in November.2 My prediction - much more than a quarter of the electorate is up for grabs in 2008.


1 Three times in 1935, Gallup asked if respondents would vote for Roosevelt "today." The first time he pitted Roosevelt against anyone was in January 1936, when he asked: "For which candidate would you vote today - Franklin Roosevelt or the Republican candidate?" See George H. Gallup, The Gallup Poll: Public Opinion 1935-1971, Volume One (New York: Random House, 1972), pp. 1-10.

2 The exact wording is, "Have you made up your mind yet about who you will vote for in the presidential election this fall, or are you still deciding?"


Moore: Hunter College's LGB Poll and Prevalence Rates

Topics: David Moore , George Bush , UNH Survey Center

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

The new Hunter College poll of lesbian, gay, and bisexual (LGB) Americans provides important insights into the lives of this difficult-to-reach population. The poll is an excellent example of what polls can do best - reveal how people view their own experiences, thus providing history with important information on how people lived and thought at any given point in time. The forthcoming presentation1 by the authors should provide additional information about the study.

One of the intriguing findings of the study is the percentage of people who identify as LGBs. The prevalence rate, 2.9 percent, is in line with other studies over the past couple of decades, which suggest that somewhere under five percent of Americans report they are homosexual. A decade ago, NORC's Tom Smith reported that "a series of recent national studies indicate that only about 2-3 percent of sexually active men and 1-2 percent of sexually active women are currently homosexual."2 The Hunter College poll differs somewhat from these numbers, suggesting that the percentage of men and women identifying as LGBs is about equal - though half the women, but only a third of the men, say they are bisexual.

It's important to recognize that these figures are lower-bound estimates, and that the actual percentage of Americans who are LGBs is probably higher than what the polls can measure. While public acceptance of LGBs is higher now than it was, say, a couple of decades ago, there is still considerable public disapprobation of homosexual behavior. Such an environment cannot help but deter many LGBs from admitting their true sexual orientation.

In this context, it is noteworthy that the percentage of people willing to admit they are LGBs correlates with the political environment in which they live. The Hunter College poll shows that among people living in "strong Democratic states" (where John Kerry beat George W. Bush by five percentage points or more), the number of LGBs is about 3.6 percent; in swing states, it's about 3.2 percent; and in strong Republican states (where Bush won by five percentage points or more), it's about 2.0 percent.3 These differences appear to be statistically significant, though the authors could provide statistical tests to verify the observation.
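As a rough illustration of the kind of test I mean - using my recalculated percentages and purely hypothetical subsample sizes, since the report's actual base sizes would be needed - a simple two-proportion comparison looks like this:

```python
# A rough two-proportion z-test comparing LGB identification in strong
# Democratic vs. strong Republican states. The proportions are my
# recalculations from Table 2; the subsample sizes are placeholders that
# would need to come from the report itself.
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# 3.6 percent in strong Democratic states vs. 2.0 percent in strong Republican
# states, with hypothetical subsamples of 6,000 screened adults each.
z, p_value = two_prop_ztest(0.036, 6000, 0.020, 6000)
print(f"z = {z:.2f}, p = {p_value:.4f}")
```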

If it is true that the percentages vary by political environment, there are at least two explanations. One is that LGBs move to states that are generally more accepting of homosexuals. The other is that LGBs are simply more willing to admit their sexual orientation when they live in a more favorable environment. One test would be to compare the figures by age within each political environment - the hypothesis being that older LGBs have had time to move to friendly environments, while younger LGBs have not. If migration is the explanation, the correlation between political environment and willingness to identify as an LGB should be stronger among older people than among younger people. If instead the rates are similar across age groups, that would rule out the notion that the correlation is due to LGBs moving to a more friendly environment, and suggest instead that it is the environment itself that influences whether LGBs are willing to admit their sexual preferences.
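A sketch of that age-by-environment comparison, assuming a respondent-level screening file with an LGB indicator, an age variable, and a state-type code (all file and variable names hypothetical):

```python
# Sketch of the age-by-environment comparison described above, assuming a
# respondent-level screening file with an lgb indicator (0/1), an age
# variable, and a state_type variable coded "dem", "swing", or "rep".
import pandas as pd

df = pd.read_csv("hunter_screening.csv")
df["age_group"] = pd.cut(df["age"], bins=[17, 39, 120], labels=["18-39", "40+"])

# LGB identification rate by state political environment within each age group
rates = df.pivot_table(index="age_group", columns="state_type",
                       values="lgb", aggfunc="mean", observed=True)
rates["dem_rep_gap"] = rates["dem"] - rates["rep"]
print(rates.round(3))
# A dem-rep gap concentrated among older respondents is more consistent with
# migration; a similar gap in both age groups points to the environment itself
# influencing willingness to identify as LGB.
```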

Whatever the results, the poll itself deserves careful consideration of all of its findings. The methodology appears to be rigorous, and the findings provide innovative insights into the personal experiences and political orientation of LGBs.


1 Wednesday, June 18, 2008 at the Lesbian, Gay, Bisexual and Transgender Community Center, 208 West 13th Street, New York City.

2 Tom W. Smith, "American Sexual Behavior: Trends, Socio-Demographic Differences, and Risk Behavior," GSS Topical Report No. 25, National Opinion Research Center, University of Chicago, updated December, 1998, p. 7.

3 These percentages are my recalculation of figures provided in the report in Table 2. The authors should be able to provide more precise calculations.


Moore: The Frontrunner Myths

Topics: 2008 , ABC , David Moore , Gary Langer , Hillary Clinton , Rudy Giuliani , UNH Survey Center

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

Eons ago, it seems, the press was touting Rudy Giuliani and Hillary Clinton as the dominant frontrunners in their respective party presidential contests. The press was wrong to do so, of course, but that is what the pollsters told them, and journalists believed it. Now ABC's Gary Langer has taken a "Look Back" at the 2008 primary season, and once again endorsed the myth of the two frontrunners:

"It was going to be short and simple: Hillary Clinton vs. Rudy Giuliani. Those were the long-ago and far-away days of initial preferences, when the two best-known candidates held commanding leads for their parties' presidential nominations. That it didn't end that way underscores an eternal truth of American politics: Campaigns matter."

I agree with Langer that campaigns matter, but disagree with his starting point. Indeed, that Giuliani was ever proclaimed the frontrunner is perhaps the most amazing myth of this whole campaign season.

The contest for delegates, as everyone knows, begins with voting in Iowa and continues from state to state, with election results in the early states inevitably affecting the results in later states. During the time that Giuliani enjoyed his so-called "commanding" frontrunner status (in the summer and fall of 2007), he was not the frontrunner in any of those early state contests - not in Iowa, not in New Hampshire, not in Michigan, not in Nevada, and not in South Carolina. He was the frontrunner in Florida, but if he didn't win any of the previous contests, it wasn't likely he would even be viable, much less the national frontrunner, by the time that primary was held.

This isn't just 20-20 hindsight.1 Right from the beginning, critics challenged the media pollsters' use of "national Republicans" and "national Democrats" as indicative of what the voters were thinking. In fact, Langer acknowledged the problem back in July 2007, and it's worth citing his response:

"A colleague here sent me a nice pointed challenge to our latest election poll yesterday: National surveys by themselves are 'close to meaningless,' he said, because they measure national preferences in what'll really be a series of state caucuses and primaries.

"It's a fair complaint, and a serious one - because it cuts to the heart of just what our new survey, and its multifarious brethren, are all about. It's true, of course, that a poll of current preferences nationally does not tell us about current preferences in Iowa, New Hampshire or anywhere else. Without knowing who's thriving in Iowa and New Hampshire, it's hard to predict who survives to South Carolina, much less who wins where on Mega Tuesday and wakes up with the crown on Feb. 6....

"We ask the horse race question in our national polls for context - not to predict the winner of a made-up national primary."

Langer is absolutely right - national polls of the party faithful don't predict state winners, and without an idea of who the state winners might be, there's no way to tell who the nominee might be. By this reasoning, no matter how well Giuliani might have been faring in the national polls, those numbers said nothing about how he might do in the state contests and in his effort to win the presidential nomination. So, on what grounds was he the frontrunner?

It turns out, apparently, that all along ABC was using the national numbers of what Langer calls the "made-up primary" not just "for context," but in fact to predict the winner of the actual nomination process. That's the only way in which Giuliani could be called a frontrunner.

Of course, ABC was not alone. Every major media polling organization reported results, at one time or another, based on that "made-up national primary." And in the summer and fall of 2007, they all reported that Giuliani was the dominant frontrunner - while ignoring that he trailed in all of the early state contests.

Similarly, Hillary Clinton was hardly the "solid" favorite that virtually every major news organization claimed. It's true the polls showed her leading in several of the primary states after Iowa, but in Iowa itself she was never dominant. She trailed John Edwards there for the first seven months of 2007, until she moved into a modest lead in the late summer and fall. But there were many undecided voters, and if she lost in Iowa, who could predict how she might fare elsewhere? Howard Dean's experience four years earlier, when his leading status in New Hampshire evaporated in the two-day period following his loss in the Iowa Caucuses, should have been a cautionary note for pollsters.

The reality was that in the summer and fall of 2007, there was no Republican frontrunner, and the Democratic frontrunner had only a tenuous lead. That so many pundits and politicians and members of the general public still think otherwise, because that's what the pollsters told us, should be the biggest embarrassment of the polling industry since Dewey beat Truman in 1948.


1 For my take on this matter last October 2007, see this post.


Moore: "Noise and Clamor" 2008

Topics: AAPOR , ABC , CBS , CBS/New York Times , David Moore , Divergent Polls , Gallup , Gary Langer , UNH Survey Center , USA Today

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

This month, the American Association for Public Opinion Research (AAPOR) gave its most coveted honor, the AAPOR Lifetime Achievement Award, to Kathleen A. Frankovic, director of the CBS News polls and a former AAPOR president. In her acceptance speech, she referred to her presidential address of a decade and a half ago, when she leveled several incisive criticisms at the media polls - criticisms that deserve to be re-examined today.

Since joining CBS News over thirty years ago, Frankovic has amassed an impressive set of accomplishments, including being president of both AAPOR and its sister organization, the World Association for Public Opinion Research (WAPOR); a member of the Market Research Council; a trustee both of The Roper Center for Public Opinion Research and, separately, the National Council on Public Polls; and a former chair of the Research Industry Coalition. She is also the author of numerous published articles and book chapters on public opinion. If anyone could be considered a pillar of the polling industry establishment, she would be it.

Yet, her writings on polls have not been effusive encomiums to the presumed benefits they bring to society. While always attributing much importance to the role of media polls in American politics, she has also expressed concerns about them, nowhere more evident than in her 1993 AAPOR presidential address, "Noise and Clamor: The Unintended Consequences of Success."1

Her theme is reflected in the title, as she raised questions about the increased frequency of polls and the lack of thought that goes into many poll questions - "Immediate response is more important than what the response is or what it really means. In other words, we may no longer have to think." She also worried about the decreased value and import of polls - "It's so easy to conduct polls now that it may actually cheapen the value of each one we do. Instead of meaning, we may just be getting noise - noise and clamor."

She noted that with the advent of scientific polls, we now have a "continuous ballot box," the dream of early democratic idealists. But is that good? Not always, apparently. In the several months prior to her presidential address, polls taken at two-week intervals had shown President Bill Clinton's approval rating bouncing all over the map, "from 58 percent to 53 percent to 59 percent to 53 percent to 57 percent to 49 percent to 57 percent to 45 percent." She lamented, "This is information, but how informative is it? It's almost like what Truman Capote once remarked about Jack Kerouac's novel, On The Road: 'That isn't writing - it's typing.' Continuing ballot boxes shouldn't bounce around so much." Indeed.

Today, the uninformative information provided by polls is even more acute, obvious to anyone who has followed the pollsters' fascination with the 2008 national Democratic electorate. It isn't the results at two-week intervals, but contemporaneous results, that bounce all over the place these days. One has to look only at pollster.com from April 30 to May 4 to find five polls with three different results: Gallup by itself reporting a dead heat (Obama up by two points); Gallup with USA Today and, separately, AP/Ipsos each reporting Clinton leading by 7 points; and CBS/NYT and Diageo/Hotline each reporting double-digit leads (12 and 11 points respectively) for Obama. And this isn't the only time Gallup has contradicted itself this campaign season, or that different polling organizations have come up with contrasting results when interviewing in the same time period. (See comments by ABC's Gary Langer, Dec. 12, 2007 and Feb. 26, 2008; and Mark Blumenthal's "Dueling Gallups.")
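To put some numbers on how irreconcilable these results are, one can test whether two contemporaneous polls' candidate margins differ by more than sampling error would allow. The sketch below assumes samples of roughly 1,000 each and approximates the candidate shares from the published margins, so it is illustrative rather than exact:

```python
# Rough check of whether two contemporaneous polls' candidate margins can be
# reconciled by sampling error alone. Sample sizes (~1,000 each) and candidate
# shares are assumptions; the actual bases would be needed for a precise answer.
from math import sqrt
from scipy.stats import norm

def margin_diff_test(obama1, clinton1, n1, obama2, clinton2, n2):
    """Two-sided test for the difference between two polls' Obama-Clinton margins."""
    m1, m2 = obama1 - clinton1, obama2 - clinton2
    # Variance of a margin (difference of two shares from the same sample)
    var1 = (obama1 * (1 - obama1) + clinton1 * (1 - clinton1) + 2 * obama1 * clinton1) / n1
    var2 = (obama2 * (1 - obama2) + clinton2 * (1 - clinton2) + 2 * obama2 * clinton2) / n2
    z = (m1 - m2) / sqrt(var1 + var2)
    return z, 2 * (1 - norm.cdf(abs(z)))

# Gallup/USA Today: Clinton +7; CBS/NYT: Obama +12 (shares approximated)
z, p = margin_diff_test(0.45, 0.52, 1000, 0.53, 0.41, 1000)
print(f"z = {z:.1f}, p = {p:.6f}")
```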

Despite her criticisms, Frankovic proposed no remedies, nor special panels to investigate the problems, perhaps in recognition that any remedies might entail a fundamental change in the way that polls are currently conducted. In the wake of the miscalls in the New Hampshire Democratic Primary, AAPOR's president, Nancy Mathiowetz, did in fact establish a special panel "to examine what occurred, provide a timely report of our findings, and promote future research on pre-election primary polls." No such panel has been established to examine all the subsequent conflicting polls, though the New Hampshire panel might want to consider broadening its scope. The "continuing ballot boxes" are not just bouncing around, they're running into each other going in opposite directions.

Frankovic concluded in her presidential address in 1993 that "We have achieved the ability to cut through the noise and clamor of unscientific measures, even as we risk making some noise and clamor of our own." This observation suggests that the radical question the AAPOR panel needs to address is whether the noise and clamor of "scientific" polls is any better than that of the unscientific ones.


1 Kathleen A. Frankovic, Presidential Address "Noise and Clamor: The Unintended Consequences of Success," Public Opinion Quarterly, Vol. 57, No. 3 (Autumn, 1993), pp. 441-447.


 
