Pollster.com

Guest Pollster


 

Wilson: O'Donnell's Delaware Win About Turnout and Message

Topics: 2010 Elections , Christine O'Donnell

David C. Wilson is a professor of Political Science and International Relations, and Psychology, at the University of Delaware. He studies public opinion, polling and survey methods, and political psychology. His research has appeared in the Journal of Applied Psychology, Public Opinion Quarterly, and the Du Bois Review.

Christine O'Donnell's win over the long-tenured U.S. Representative Mike Castle, 53% to 47% (a six-point margin), might have been a shocker to most. But what really happened, and what most observers missed, was that turnout was higher than normal in lower Delaware (Kent and Sussex Counties) and average in upper Delaware (New Castle County).

Polls underestimated these levels for most of the campaign and thus missed the trend. Plus, the lack of in-state polling provided no clues about the sources and substance of information that mobilized voters. It turns out that the lower Delaware counties, which are traditionally Republican, are losing their liberal and moderate appeal. This suggests that the GOP leadership may not be as in touch with its constituents as it thinks. And questions abound about the existing state GOP leadership's ability to mobilize support given the shock of the O'Donnell win. In sum, the evidence points to a geo-political realignment of the GOP within Delaware.

Castle won New Castle County 58% to 42%, but lost Kent and Sussex counties 64% to 36%; O'Donnell's support in both Kent and Sussex was nearly twice Castle's. It appears that Castle failed to mobilize liberal and moderate Republicans and relied too heavily on the state party for his campaigning. Although Castle was well funded, O'Donnell's last-minute support from outside sources allowed her to communicate her message and get out the vote, and it paid off.

Segue to the polls. The last poll conducted before the election (Public Policy Polling, 9/11-9/12) showed O'Donnell with a 47% to 44% advantage over Castle, with 8% undecided and a margin of error of roughly 4%. So how did O'Donnell beat her estimates? It could be that the formerly undecided 8% broke for O'Donnell over Castle. However, I think the answer is probably turnout.

In all, 57,582 registered Republicans voted in Tuesday's primary: an estimated 27,021 voted for Castle and 30,561 voted for O'Donnell, a difference of 3,540 votes (6 points). Interestingly enough, Castle received far more actual votes in the 2008 general election for Representative than O'Donnell received for Senate that same year, suggesting that Delawareans voted for Castle and Biden (or Castle and not O'Donnell). This splitting of the ticket in 2008 raises questions about how turnout might affect the state's mid-terms, especially across counties. O'Donnell should expect that her win will move some Castle supporters to her Democratic opponent, New Castle County Executive Chris Coons.
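
For readers who want the arithmetic behind the six-point margin, here is a quick check using the vote totals cited above (a minimal sketch; the figures come from the paragraph itself, not from a re-verified canvass):

```python
# Vote totals as reported above
castle, odonnell = 27_021, 30_561

total = castle + odonnell                 # 57,582 Republican primary voters
margin_votes = odonnell - castle          # 3,540 votes
margin_points = 100 * margin_votes / total

print(total, margin_votes, round(margin_points, 1))   # 57582 3540 6.1
```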

I think turnout will be the key in November because some of the popular media arguments about what's going on in the state are somewhat untenable. The September PPP poll found that only 24% of Republicans consider themselves "members of the Tea Party," and a plurality of 47% felt the Republican Party was "about right" in terms of their ideology; 17% felt they were "too conservative." Approximately 42% of Republicans said that a Sarah Palin endorsement would not make a difference in their vote for a candidate, and 24% said it would make them "less likely" to vote for a candidate. Thus, I see no big Tea Party movement in terms of attitudes and beliefs. However, Tea Party funding is related to turnout.

According to the state of Delaware's Elections Commissioner, the 2010 Republican primary produced a 32% turnout rate. On the surface this might seem low; however, the turnouts for past Republicans primaries were 16% in 2008, 8% in 2006, 12% in 2004, 14% in 2002, and 16% in 2000. Thus, the 2010 primary doubled Republican turnout.

The PPP poll likely underestimated this higher-than-usual turnout when calculating its likely voter estimate or when weighting its final numbers. So what does this mean going forward? It's likely that O'Donnell will continue to run the same type of campaign but receive more outside funding and attention. The interesting part will be how the electorate in Delaware, and the nation, responds to the results. Mid-term turnout percentages in the state usually hover around the mid to upper 40s, while in presidential election years turnout is in the mid to high 60s.

Coons has been leading in the polls in all head-to-head match-ups against O'Donnell. And in the general election, O'Donnell will have to convince independent voters, moderate Republicans, and Castle supporters that she will represent their interests. This will be an uphill battle given that she has already indicated she feels she can win without "them," referring to the Republican Party organization, and has suggested the GOP might be too lazy to help her.

All of this bodes well for Coons, who will certainly win the Wilmington area and much of the Wilmington suburbs, which make up the largest portion of the state's electorate. But it's tough to gauge Democratic turnout in the state because Coons did not have a primary challenger, so we cannot use primary numbers as an indicator of enthusiasm. Traditionally, Republican turnout during the primaries is slightly higher than Democratic turnout, but in 2008 the latter was 12 points higher than the former. O'Donnell's win could actually work to mobilize support for Coons. It will also be interesting to see whether Castle's supporters, and perhaps Castle himself, remain loyal to the party or decide to support Coons because he has governing experience and is not considered an outsider candidate.

According to 2008 exit poll data on that year's Senate race, 75% of Republicans voted for O'Donnell, while about 25% voted for Joe Biden, who was also running for Vice President. Biden won the contest by nearly 30 points, 64% to 35%. More telling, approximately 38% of Democrats voted for Mike Castle over his Democratic challenger, Karen Hartley-Nagel. Half of the individuals who say they voted for Castle in 2008 also voted for Democrat Joe Biden; in fact, 36% of Democrats who voted for Biden also voted for Castle. This all suggests that Castle has good standing among Democrats, which could help Coons, who according to Public Policy Polling held a 31% approval rating in early August, with 39% saying they were "unsure" about their approval of him.

What does all of this signal?

First, the media will heavily scrutinize the race and the candidates. O'Donnell is particularly vulnerable because she is a woman (yes, sexism still exists), she has no governing experience, she is not well known, or at least not revered, by the state and national GOP, and there are many questions about her personal and campaign finances, educational background, ethics issues related to non-profit work, past gender discrimination lawsuits, and her personal relationships. O'Donnell does appear to be media savvy, but as things heat up, those skills will be tested.

Second, Coons' single most important priority will need to be turnout. If he can mobilize support among the electorate in New Castle County, especially the suburbs of Wilmington, he will win the election. He should not ignore Kent and Sussex counties either; they hold more opportunities than barriers to his election. His message must be at least two-fold: he can govern, and he will represent Delawareans with pride and uphold the reputation of the state. How he frames and packages those messages will be up to his campaign.

O'Donnell's single most important priority will be to somehow move slightly more to the ideological and political center, and make friends with the state and national party. The September PPP poll showed O'Donnell having strong support only among self-described conservatives. Conservatives make up the largest portion of the Republican Party in DE, but they are heavily outnumbered in the state when moderate Republicans are combined with all Democrats regardless of ideology.

Also, the outside funding by the Tea Party movement may become a problem if Delawareans, who traditionally like to handle their own politics, perceive too much outside influence. O'Donnell must now come up with solid policy proposals that show she can actually be effective in the male-dominated, seniority-ruled world of the Senate. She also has weak support among seniors, who heavily favored Castle.

Finally, regardless of the outcome, Delaware will elect someone other than Joe Biden for the first time in almost four decades. That's big.


McGoldrick: What Voters Expect Of A GOP Majority

Topics: 2010 Election , election results , voter expectations

Brent McGoldrick is a Senior Vice President with FD, a communications strategy consulting firm. He leads public affairs research for FD's Washington, D.C. office.

In the last week, polling junkies and reporters alike have been delving into a fresh batch of post-Labor Day polls and debating just how big a majority the Republicans will win in the House of Representatives in November.

Last week my company, the communications and strategy consulting firm FD, fielded several questions on a national survey that presupposed Republicans would win majority control of the House. The question we wanted to answer was, "How do Americans feel about that prospect?" Like other polls, ours finds news to cheer the GOP. But we also find a note of caution against taking a potential takeover for granted.

Namely, in our poll, we find that voters generally believe:

  1. A GOP majority in the House will improve overall economic conditions;
  2. A GOP House would do a better job than past GOP-controlled Congresses (i.e., the party has learned their lesson);
  3. But, voters want a GOP Congress to work with President Obama and Democrats, as opposed to pursuing their own agenda.

Let's take each of these one by one.

1. More voters think economic conditions will improve as a result of a Republican takeover of The House.

Our polling finds that 47% of voters think economic conditions will significantly or somewhat improve as a result of GOP control of the House, while 38% think conditions will significantly or somewhat worsen. Among those "very likely" to vote, 49% say conditions will improve and 39% say conditions will worsen.

2. More voters think a Republican-controlled House will do a better job than past Republican Congresses.

Specifically, our poll finds that 49% of voters say that a Republican-controlled Congress would do a better job than past Republican Congresses, while 36% say it would do a worse job. Among "very likely" voters, a majority (51%) say that a Republican-controlled Congress would do a better job than previous Republican Congresses, while 37% say it would do a worse job.

Interestingly, this finding signals that the GOP has begun to repair its "brand" in less than two years. Additionally, taken together, the similar double-digit margins on these questions suggest to me that the double-digit GOP lead on the generic ballot that we have seen in other polls might not be far off.

3. That said, voters want a Republican Congress to work with President Obama and Democrats.

When asked which approach they would prefer a hypothetical GOP-controlled Congress take, a whopping 71% of voters say they would prefer them to "compromise and work with President Obama to get things done." Only 27% of voters would want Republicans to "pursue their own agenda to get things done."

Among "very likely" voters, 68% want to see the two parties to work together, while 27% want the GOP to pursue their own agenda. (I won't know until I field it, but my bet is if we had put the question to voters whether a Republican victory in November is a signal to President Obama and Democrats that it is time to compromise, we would see similar numbers.)

Most significantly, even among Republican "very likely" voters, while 50% say they want Republicans to pursue their own agenda, a sizeable 47% say they want Republicans to work with President Obama and Democrats.


So, what do all of these data tell us? By a significant margin, voters appear poised to vote for divided government, with the expectation that it will improve the economy. But, they also expect that the two parties will work together to solve economic challenges.

It seems like we hear that message from every election. But I would posit that, in the face of such dire economic conditions, the data show us that the limits of either party's pursuit of a "base" strategy have been reached. The Great Recession has added an "or else" to what seems to be the electorate's biennial electoral plea, and the failure of a party in power (or perceived to be in power) to heed that message carries major electoral risks.


Berinsky: Poll Shows False Obama Beliefs A Function of Partisanship

Topics: Barack Obama , Birthers , Obama birthplace , Obama Hawaii , Obama Indonesia , Obama Kenya

Adam J. Berinsky is associate professor of political science at The Massachusetts Institute of Technology and is the author of Silent Voices: Public Opinion and Political Participation in America and In Time of War: Understanding American Public Opinion from World War II to Iraq.

In politics, as in life, where you stand depends upon where you sit. Recent polling I have conducted demonstrates that what people believe to be true about the political world is in large part a function of whether they are a Democrat or a Republican.

Last month the Pew Research Center for the People & the Press conducted a poll which found that almost 20 percent of Americans mistakenly believe that President Obama is a Muslim, and another 43 percent cannot identify his religion. Recently released polls by Time and Newsweek confirm the prevalence of this false information.

These findings have sparked a flood of analysis. Some commentators have rightly pointed out that large numbers of Americans believe a number of crazy things. For instance, according to Gallup, 18 percent of Americans believe the sun revolves around the earth. Others have argued that Republican politicians and conservative media sources have helped perpetuate the myth of Obama's religious identity. Recent polling I have conducted seems to support the latter view. There is a strong political component to misinformation about Obama's beliefs and identity. But politically motivated misinformation is not limited to Republicans. Some Democrats are quite willing to believe false information about Republican politicians. The politics of misinformation, it seems, is not so much a product of direct reactions to Obama as it is to the polarized nature of the current political times.

At their heart, questions about Obama's religion are critical because they are tied into broader questions about his character and ability to lead. As part of a larger project on the political consequences of misinformation, I measured belief in another controversy that gets to the heart of Obama's identity as an American - whether people believe that he is a citizen of the United States.

I contracted Polimetrix/YouGov to conduct a national internet survey of 800 Americans from July 8th to July 15th, 2010. I asked, "Do you believe that Barack Obama was born in the United States or not?" Consistent with other polls on the "birther" controversy, I found that 27 percent of respondents said that Obama was not born in the U.S. and another 19 percent did not know whether he was. These findings paint a picture similarly unsettling to the Pew polling - misinformation about Obama's national and religious identity is pervasive.

My results raise a number of important questions. One question is whether some people are simply ignorant about politics - as they are about other aspects of the world (as the Gallup question mentioned above would suggest) - or if instead the uncertainty about Obama's background is politically motivated.

To adjudicate as best I could between these two explanations, I asked a follow-up question of those people who said that Obama was not born in the U.S. or were unsure about where he was born. Specifically, I gave them a multiple-choice question: "Where do you think Obama was born: Indonesia, Kenya, the Philippines, Hawaii, or some other place?"

I picked this multiple-choice question rather than an open-ended question in part because it was easier to ask the question this way, but also to see how the story dominant among "birthers" (Obama was born in Kenya) fared in relation to other possibilities, including one that could be derived from general ignorance (Hawaii was made a state in 1959; Obama was born in 1961).

The vast majority of these respondents subscribed to the dominant conspiracy story, choosing Kenya as Obama's birthplace. Among the 46 percent of respondents who either said that Obama was not born in the U.S. or were unsure if he was, two-thirds said he was born in Kenya. This pattern was especially pronounced among those who said that Obama was not born in the U.S. - almost three-quarters of these respondents said he was born in Kenya.

There is some evidence that, since the beginning of the year, the story about Obama's citizenship has consolidated. Earlier in the year, in January 2010, I designed the follow-up question described above for inclusion on a survey conducted by Angus Reid Global Monitoring. In that poll, the distribution of beliefs about Obama's citizenship was roughly similar to what it is now - 25 percent said that he was not born in the U.S. and 20 percent were not sure where he was born. However, the follow-up looked very different - only 41 percent chose Kenya (the dominant "birther" story), while 25 percent chose Hawaii (a clear demonstration of ignorance). Thus, over the last seven months, it seems that the "birther" story has become more pervasive.

Partisan differences in beliefs about Obama's citizenship also indicate that the uncertainty about Obama's background is politically motivated. Though it has been said before, the difference between partisans in their beliefs about Obama's citizenship is striking. As the data show, the vast majority of Democrats say that Obama was born in the U.S. and a plurality of Republicans say that he was not. Similar patterns emerge when beliefs are broken down by approval for Obama; the President's supporters think he is a natural-born citizen and his opponents do not. Put simply, on the question of Obama's citizenship, where you stand depends on where you sit.

This pattern of partisan misperception is striking and carries over to other political rumors. On the July Polimetrix/YouGov survey, I also asked my respondents whether they thought that the changes to the health care system enacted by Congress and the Obama administration create "death panels," and whether John Kerry lied about his actions during the Vietnam War in order to receive medals from the U.S. Navy.

The large partisan gaps found in the acceptance of false beliefs about Obama's citizenship, not surprisingly, extended to rumors about Obama's policies. But they also extended to rumors about other Democratic politicians as well - a majority of Republicans said that Kerry lied to receive medals and a majority of Democrats said that he did not.

The pervasiveness of politically motivated perceptions of reality is not limited to Republicans. On my survey I also asked respondents if they thought that "people in the federal government either assisted in the 9/11 attacks or took no action to stop the attacks because they wanted the United States to go to war in the Middle East." The overall acceptance of this particular piece of misinformation was lower than the Obama citizenship case - 18 percent thought that government officials were aware of the attack beforehand and another 18 percent were unsure - but the accusation here is certainly more severe. What is important for present purposes is that partisan differences in acceptance of this statement were large, as shown in this graph (which has been placed on the same scale as the birther graph above to facilitate comparisons).

These same differences do not, however, extend to rumors that are not grounded in partisan politics. I also asked respondents a question that has been asked on several surveys in the past, "Do you believe that a spacecraft from another planet crashed in Roswell, New Mexico in 1947?" As the graph below shows, the stark partisan differences found on the other questions do not emerge in the case of beliefs about alien life.

All of these results raise the question of what can be done to correct these persistent misperceptions. The answer is difficult, largely because incorrect beliefs about politics are as much a function of partisan perceptions as they are of genuine ignorance.

Clearly, some people hold false beliefs because they do not pay much attention to the political world. Providing these individuals with greater knowledge of politics might improve the situation. In order to assess the impact of general ignorance, I measured how much my respondents knew about politics by asking them a series of three factual questions about political figures and political processes.

The results here are somewhat heartening. I found that the more of these factual questions respondents got right, the more likely they were to think that Obama was a citizen. Contrary to the findings of some scholars who examined beliefs about rumors concerning death panels, I found that information had the same effect for both Democrats and Republicans. However, the news is not all rosy on this score; information can only get us so far. There were large differences between the beliefs of Democrats and Republicans at all levels of political attentiveness, and even among Republicans who got all three of my factual questions right, 27 percent believed that Obama was not born in the U.S.
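
A minimal sketch of the kind of tabulation described here, using invented respondent-level data (the actual survey variables are not reproduced in this post); it simply computes the share saying Obama was born in the U.S. by party and by the number of factual knowledge items answered correctly:

```python
import pandas as pd

# Hypothetical respondents: party ID, knowledge score (0-3 factual items correct),
# and whether they say Obama was born in the U.S. (1 = yes)
df = pd.DataFrame({
    "party":      ["Dem", "Dem", "Dem", "Rep", "Rep", "Rep", "Rep", "Dem"],
    "knowledge":  [3, 1, 0, 3, 0, 2, 3, 2],
    "born_in_us": [1, 1, 0, 1, 0, 0, 1, 1],
})

# Share believing Obama was born in the U.S., by party and knowledge score
print(df.groupby(["party", "knowledge"])["born_in_us"].mean())
```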

So what can be done? In a recently published paper that has received a great deal of deserved attention, Brendan Nyhan and Jason Reifler hold out little hope for the possibility of correcting false beliefs. In fact, they argue that providing misinformed people the truth can exacerbate the problem, because these people just cling more firmly to their false beliefs. In a project associated with the Polimetrix/YouGov survey, I have begun to explore other possibilities and I remain hopeful. Still, given the nature of the current political climate, it may be a long road to find a common political reality that everyone can believe in.


Abramowitz: Registered vs. Likely Voters - How Large a Gap?

Topics: 2010 Election , Generic House Vote , Likely Voters , registered voters


According to several recent national polls, Democrats may be headed toward their worst showing in a congressional election since World War II. A new NBC/Wall Street Journal Poll has Republicans leading Democrats on the generic House ballot by 9 points among likely voters while a new Washington Post/ABC News Poll has Republicans with an astonishing 13 point lead. The most recent Rasmussen weekly tracking poll has Republicans with a 12 point lead among likely voters.

If these polls prove to be accurate, Republicans could achieve their biggest popular vote margin since the 1920s. In 1946, Republicans won the national popular vote for the House of Representatives by a margin of about 9 points and that was their biggest win in the past 64 years. The Republicans' second biggest popular vote margin was 7 points in 1994.

What would such a popular vote margin mean in terms of seats? In 1946, Republicans won 246 seats in the House--a gain of 56 seats over their previous total of 190. A 12 or 13 point Republican margin would likely produce close to 260 Republican seats--a gain of about 80 seats over their current total of 179. That would be the biggest seat swing in a House election since 1932 when Republicans lost 101 seats. It would dwarf the 1994 shift when Democrats lost 52 seats, their worst showing since 1946.

It is very likely that Republicans will make substantial gains in this year's midterm election. Democrats are defending many seats in Republican-leaning districts that they picked up in 2006 and 2008, Americans are very anxious about the condition of the economy, and President Obama's approval rating has fallen into the low-to-mid 40s in recent weeks. My own forecasting model now has Republicans gaining between 40 and 50 seats in the House. But how realistic are polls that show Republicans winning the national popular vote by a double digit margin-- enough to produce record-setting Democratic losses?

There is one reason to be skeptical about some of these recent poll results--they reflect an enormous gap between the preferences of registered and likely voters. Rasmussen does not release generic ballot results for registered voters, nor do they provide any information about how they identify likely voters. But the recent NBC/Wall Street Journal Poll reported a tie on the generic ballot among registered voters. Likewise, the new Washington Post/ABC News Poll reported only a 2 point Republican advantage among registered voters.

It is not surprising that Republicans would be doing better among likely voters than among all registered voters, especially in a low turnout midterm election. Republicans generally turn out in larger numbers than Democrats because of their social characteristics and this year Republicans appear to be especially motivated to get to the polls to punish President Obama and congressional Democrats. But a double-digit gap between the preferences of registered and likely voters is unusually large.

According to data compiled by the Gallup Poll, in 13 midterm elections between 1950 and 2006 for which relevant data were available, the average gap between the preferences of registered and likely voters was 5 points. Only once, in 2002, did the gap reach double digits. In that year Democrats had a 5 point lead among registered voters but Republicans led by 6 points among likely voters. However, the gap in party preference between registered and likely voters did reach 9 points in 1962 and 8 points in both 1974 and 1982 and in every one of these years, the preferences of Gallup's likely voters were closer to the actual election margin than the preferences of registered voters. In fact, across all 13 midterm elections, the Democratic margin among likely voters differed from the actual Democratic margin in the national popular vote by an average of only 2.1 percentage points while the Democratic margin among registered voters differed from the actual Democratic margin by an average of 6.5 percentage points.
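
To make the accuracy comparison concrete, here is a minimal sketch; the registered-voter (RV), likely-voter (LV), and actual margins below are illustrative placeholders (the 2002 row echoes the figures quoted above), not Gallup's published series:

```python
# (year, rv_margin, lv_margin, actual_margin): Democratic minus Republican, in points
elections = [
    (1994, -2.0, -6.0, -7.0),
    (2002,  5.0, -6.0, -5.0),
    (2006, 11.0,  7.0,  8.0),
]

rv_error = sum(abs(rv - actual) for _, rv, _, actual in elections) / len(elections)
lv_error = sum(abs(lv - actual) for _, _, lv, actual in elections) / len(elections)

# The likely-voter margins sit much closer to the actual outcomes
print(round(rv_error, 1), round(lv_error, 1))
```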

These results appear to support two conclusions. First, while a double-digit gap between the preferences of registered and likely voters is unusual, based on the history of Gallup's generic ballot polling, it is not unprecedented. Second, the result of the final Gallup generic ballot among likely voters has been a very good predictor of the national popular vote for the House of Representatives. If that poll finds Republicans with a double-digit margin, Democratic losses in November could be substantially greater than those the party suffered in 1994.


Bafumi, Erikson, and Wlezien: A Forecast of the 2010 House Election Outcome

Topics: 2010 Election , Election forecasting , election results , Generic House Vote

Joseph Bafumi is an assistant professor in the government department at Dartmouth College. Robert S. Erikson is a professor in the political science department and faculty fellow at the Institute for Social and Economic Research and Policy at Columbia University. Christopher Wlezien is a professor in the political science department and faculty affiliate in the Institute for Public Affairs at Temple University.

How many House seats will the Republicans gain in 2010? To answer this question, we have run 1,000 simulations of the 2010 House elections. The simulations are based on information from past elections going back to 1946. Our methodology replicates that for our ultimately successful forecast of the 2006 midterm. Two weeks before Election Day in 2006, we posted a prediction that the Democrats would gain 32 seats and recapture the House majority. The Democrats gained 30 seats in 2006. Our current forecast for 2010 shows that the Republicans are likely to regain the House majority.

Our preliminary 2010 forecast will appear (with other forecasts by political scientists) in the October issue of PS: Political Science. By our reckoning, the most likely scenario is a Republican majority in the neighborhood of 229 seats versus 206 for the Democrats, a 50-seat loss for the Democrats. Taking into account the uncertainty in our model, the Republicans have a 79% chance of winning the House.

The model has two steps. Step 1 predicts the midterm vote division from only two variables, the generic poll result and the party of the president. With this estimate of the partisan tide in place, step 2 forecasts the winners of 435 House races using separate statistical models for open seats and races with incumbent candidates. At each step, the forecast takes into account uncertainty about the inputs.

First, we simulate 1,000 separate outcomes of the national vote. The pooled generic polls conducted 121 to 180 days in advance of the 2010 election show a very close division of 49.1% Democratic and 50.9% Republican. But a near tie in the polls in mid-summer projects to a significant vote plurality for the Republicans in November, close to a 53%-47% split. This prediction is not due to any bias in the polls, but rather stems from the electorate's tendency in past midterm cycles to gravitate further toward the "out" party over the election year--ultimately gaining about two extra points beyond what summer polls would otherwise show.
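
A rough sketch of what step 1 amounts to, with the generic-ballot figure taken from the paragraph above and the out-party drift and forecast uncertainty entered as assumed round numbers (they are not the authors' estimated coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)

generic_rep = 0.509      # pooled generic polls, 121-180 days out (from the text)
out_party_shift = 0.02   # typical further drift toward the "out" party by Election Day (assumed)
forecast_se = 0.02       # uncertainty around the projected national vote (assumed)

national_rep = rng.normal(generic_rep + out_party_shift, forecast_se, size=1000)
print(round(national_rep.mean(), 3))   # about 0.53, i.e. roughly the 53%-47% split cited above
```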

The national vote only tells us part of the story, and we still need to determine how it would translate into seats. For each of the 1000 simulated values of the national vote, we simulate the outcome in 435 congressional districts. Open seats and incumbent seats are treated separately. Open seat outcomes are estimated based on the simulated national vote swing plus the 2008 presidential vote in that district. Outcomes with the incumbent on the ballot are estimated based on the simulated national swing plus the incumbent's vote margin in 2008 and whether the incumbent is running as a freshman. The weight that these variables are given in predicting the final outcome depends on their explanatory power in past elections. Full details are presented in our forthcoming PS paper.
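
And a comparable sketch of step 2, using stand-in district data; the real model uses each district's actual 2008 presidential vote or incumbent margin, freshman status, and weights estimated from past elections, none of which are reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_districts = 1000, 435

# Stand-in 2008 Democratic shares by district and an approximate 2008 national share
dem_2008 = np.clip(rng.normal(0.56, 0.13, n_districts), 0.20, 0.95)
national_dem_2008 = 0.56

# Step-1 draws of the 2010 national Democratic vote (see the sketch above)
national_dem = rng.normal(0.47, 0.02, n_sims)

rep_seats = np.empty(n_sims, dtype=int)
for s in range(n_sims):
    swing = national_dem[s] - national_dem_2008          # uniform national swing
    noise = rng.normal(0, 0.06, n_districts)             # residual district variation (assumed)
    dem_share = dem_2008 + swing + noise
    rep_seats[s] = (dem_share < 0.5).sum()               # districts won by Republicans

print("P(GOP majority):", round((rep_seats >= 218).mean(), 2))
print("Mean GOP seats: ", int(rep_seats.mean()))
```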

To sum up: first, we generated 1,000 simulations of the national vote; then, for each simulated national outcome, we projected the winner in each of the 435 congressional districts and tallied the resulting partisan division of the House.

The figure below displays the range of simulated results. As can be seen from the predominance of red bars, the Republicans win the majority of seats in 79% of the trials. On average, the Republicans win 229 seats, 23 more than the Democrats and 11 more than the 218 needed for a majority. However, the simulations yield considerable variation, with a 95% confidence interval of 176 to 236 Republican seats.

This prediction comes with important caveats. Applying our model to 2010 assumes that the forces at work in 2010 are unchanged from past midterm elections. However, we should be wary of the possibility that the underlying model of the national vote works differently in 2010 or is influenced by variables we have not taken into account. Because the 2010 campaign started to heat up earlier than usual, the usual tilt toward the out party may already be complete, with no further drift to the Republicans. It is also uncertain how voters will react to the tea-party movement as the public face of the Republican Party.

The key will be to follow the generic polls from now to November. If the polls stay close, the Democrats have a decent chance to hold the House. But if the polls follow the past pattern of moving toward the "out" party and move further toward the Republicans--even by a little--the Republicans should be heavily favored.


Abramowitz: OMG! GOP Up by 7 in Gallup Tracking Poll

Topics: Gallup , Generic House Vote , Interpreting polls , Likely Voters

Alan I. Abramowitz is the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. He is also a frequent contributor to Larry Sabato's Crystal Ball.

If you heard a loud thump on Monday afternoon it just may have been the sound of worried Democrats hitting the panic button. That's when the latest Gallup weekly tracking poll was released and it showed Republicans with their largest lead yet on the generic ballot--7 points. It's the third consecutive week that Republicans have had a significant lead--following a 5 point lead two weeks ago and a 6 point lead last week. And that's among all registered voters, not just those likely to vote in November. Once Gallup begins screening for likely voters the GOP lead will almost certainly get larger since registered Republicans traditionally turn out at a higher rate than registered Democrats and this year Republicans are more enthusiastic about voting than Democrats.

But do Gallup's latest results actually mean that Republicans are likely to maintain a significant advantage on the generic ballot? Not necessarily. A closer examination of Gallup's weekly generic ballot data indicates that the current GOP advantage is likely to shrink over the next few weeks. In fact almost all of the week-to-week change in the standing of the parties appears to be due to random variation. There is little evidence of any real trend, at least so far.

Over the past 18 weeks, from April 12-18 through August 8-15, Republicans have received an average of 46% of the vote to 45% for Democrats on the generic ballot. There has been considerable week-to-week variation, from a 6 point Democratic lead only four weeks ago, to the current 7 point Republican lead, but no clear trend. Over this period, the correlation between the week of the survey and the size of the GOP lead is a very small and statistically insignificant .14.

Figure 1 displays both the week-to-week and the five week running averages for the Republican margin on the generic ballot between week 5 and week 14 of the Gallup weekly tracking poll. While the weekly average has shown considerable volatility, the five week running average has been fairly stable, fluctuating between a 2 point Democratic lead and a 2 point Republican lead with no clear trend.

The results in Figure 1 suggest that the weekly fluctuations in the generic ballot results are largely random. This conclusion is reinforced by the fact that there is a fairly large negative correlation of -.55 (p < .025) between the size of the GOP lead one week and the change in the size of that lead the next week. This means that the larger the GOP margin in a given week, the more that lead tends to shrink in the following week. These results again suggest that the week to week variation in the results is largely random.
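
The two diagnostics described here are easy to reproduce; the weekly series below is a hypothetical stand-in for Gallup's 18 tracking results, built only to illustrate the calculations:

```python
import numpy as np

# Hypothetical weekly Republican leads on the generic ballot, in points
gop_lead = np.array([-2, 1, 3, 0, -1, 4, 2, -3, 1, 0, 5, -2, 3, -6, 2, 5, 6, 7], dtype=float)

# Correlation between this week's lead and the change to next week's lead;
# if weekly movement is mostly noise around a stable mean, this should be strongly negative
lead_now = gop_lead[:-1]
change_next = np.diff(gop_lead)
print(round(np.corrcoef(lead_now, change_next)[0, 1], 2))

# Five-week running average, the smoother series plotted in Figure 1
print(np.round(np.convolve(gop_lead, np.ones(5) / 5, mode="valid"), 1))
```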

Of course the fact that the current 7 point Republican lead on the generic ballot is likely to shrink doesn't alter the fact that Republicans are poised to make substantial gains in the midterm election. Even a tie on the generic ballot, given normal turnout patterns, is good news for the GOP. So while it may not be time yet for Democrats to hit the panic button, there is plenty of reason for them to be worried.


Rivers: Random Samples and Research 2000

Topics: Daily Kos , Nate Silver , Research2000 , Sampling

Douglas Rivers is president and CEO of YouGov/Polimetrix and a professor of political science and senior fellow at Stanford University's Hoover Institution. Full disclosure: YouGov/Polimetrix is the owner and principal sponsor of Pollster.com.

I am, like most in the polling community, shocked by the recent accusations of fraud against Research 2000. Mark Grebner, Michael Weissman, and Jonathan Weissman convincingly demonstrate that something is seriously amiss with the research reported by Research 2000, which may well be due to fraud.

But some of the claims by the critics, such as Nate Silver's post this morning on FiveThirtyEight.com (as well as part of the Grebner et al. analysis), exhibit a common misunderstanding about survey sampling: "random sampling" does not necessarily mean "simple random sampling." I do not know what Research 2000 did (or claimed to do), but very few surveys actually use simple random sampling.

To recapitulate Nate's argument: if you draw a simple random sample of size 360 from a population of 50% Obama voters and 50% McCain voters, the day to day variation in the Obama vote percentage in the sample should be approximately normal, with mean 50% and standard deviation 2.7%. (Nate gets this by simulating 30,000 polls and rounding the results, but most students in introductory statistics would just calculate the square root of 0.5 x 0.5 / 360, which is about 2.6%.) This would give you the blue line in Nate's first graph, reproduced below.

[Figure: Nate Silver's graph comparing the expected distribution of daily poll results under simple random sampling (blue line) with the Research 2000 results (red line).]

However, what happens if the poll is not a simple random sample? Suppose (and this is entirely hypothetical) that you polled off of a registration list composed of 50% Democrats and 50% Republicans (to keep things simple, let's pretend there are no independents). Further, suppose that 90% of the Democrats support Obama and 90% of the Republicans support McCain, so it's still 50/50 for Obama and McCain in the population. Instead of drawing a simple random sample, we draw a "stratified random sample" with 180 Democrats and 180 Republicans each day. That is, we draw a simple random sample of 180 Democrats and a simple random sample of 180 Republicans and combine them. What should the distribution of daily poll results look like?

I should caution that there is a little math in what follows, but nothing hard. The variance (the square of the standard deviation) of each subsample proportion is 0.90 x 0.10 / 180 = 0.0005. The combined sample mean is just the average of these two independent subsamples, so its variance is 0.0005/2, or 0.00025, and the standard deviation is the square root of 0.00025, or approximately 1.6% - not the 2.6% that Nate thought it should be. This distribution is shown in the figure below as a green line, which is a lot closer to the suspicious red line in Nate's graph showing the Research 2000 results.

[Figure: the same comparison, with the distribution under the hypothetical stratified design added as a green line.]
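
The arithmetic above is easy to verify by simulation; here is a minimal sketch comparing the two hypothetical designs over 30,000 simulated polling days, as in Nate's exercise:

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 30_000

# Simple random sample: 360 respondents from a 50/50 Obama-McCain population
srs = rng.binomial(360, 0.5, n_days) / 360

# Stratified sample: 180 Democrats (90% Obama) plus 180 Republicans (10% Obama)
dem_obama = rng.binomial(180, 0.9, n_days)
rep_obama = rng.binomial(180, 0.1, n_days)
stratified = (dem_obama + rep_obama) / 360

print(round(srs.std(), 3))          # about 0.026, the simple-random-sampling spread
print(round(stratified.std(), 3))   # about 0.016, the tighter stratified spread
```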

Does this absolve Research 2000 of fraud? Of course not. There are other factors (such as weighting) that usually increase the variability, so Nate is right that the Research 2000 results look suspicious. But we should be a little more cautious before convicting upon the basis of this sort of evidence.


Murray: Are Nate Silver's Pollster Ratings 'Done Right'?

Topics: AAPOR , AAPOR Transparency Initiative , Fivethirtyeight , Nate Silver , Patrick Murray , Poll Accuracy , Polling Errors , Transparency

Patrick Murray is director of the Monmouth University Polling Institute.

The motto of Nate Silver's website, www.fivethirtyeight.com, is "Politics Done Right." Questions have been raised about whether his latest round of pollster ratings lives up to that claim.

After Mark Blumenthal noted errors and omissions in the data used to arrive at Research 2000's rating, I asked to examine Monmouth University's poll data. I found a number of errors in the 17 poll entries he attributes to us - including six polls that were actually conducted by another pollster before our partnership with the Gannett New Jersey newspapers started, one eligible poll that was omitted, one incorrect candidate margin, and even two incorrect election results that affected the error scores of four polls. [Nate emailed that he will correct these errors in his update later this summer.]

In the case of prolific pollsters, like Research 2000, these errors may not have a major impact on the ratings. But just one or two database errors could significantly affect the ratings of pollsters with relatively limited track records - such as the 157 (out of 262) organizations with fewer than 5 polls to their credit. Some observers have called on Nate to demonstrate transparency in his own methods by releasing that database. Nate has refused to do this (with a somewhat dubious justification), but at least he now has a process for pollsters to verify their own data.

Basic errors in the database are certainly a problem, but the issue that has really generated buzz in the polling community is his new "transparency bonus." This is based on the premise that pollsters who were members of the National Council on Public Polls or had committed to the American Association for Public Opinion Research (AAPOR) Transparency Initiative as of June 1, 2010 exhibit superior polling performance. These pollsters are awarded a very sizable "transparency bonus" in the latest ratings.

Others have remarked on the apparent arbitrariness of this "transparency bonus" cutoff date. Many, if not most, pollsters who signed onto the initiative by June 1, 2010 were either involved in the planning or attended the AAPOR national conference in May. A general call to support the initiative did not go out until June 7.

Nate claims that, regardless of how a pollster made it onto the list, these pollsters are simply better at election forecasting, and he provides the results of a regression analysis as evidence. The problem is that the transparency score misses most researchers' threshold for statistical significance (p<.05). In fact, of the three variables in his equation - transparent, partisan, and Internet polls - only partisan polling shows a significant relationship. Yet his Pollster Introduced Error (PIE) calculation rewards "transparent" polls and penalizes Internet polls, but leaves partisan polls untouched. Moreover, his model explains only 3% of the total variance in pollster raw scores (i.e., polling error).

I decided to run some ANOVA tests on the effect of the transparency variable on pollster raw scores for the full list of pollsters as well as sub-groups at various levels of polling output (e.g. pollsters with more than 10 polls, pollsters with only 1 or 2 polls, etc.). The F values for these tests range from only 1.2 to 3.6 under each condition, and none are significant at p<.05. In other words, there may be more that separates pollsters within the two groups (transparent versus non-transparent) than there is between the two groups.
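
For readers who want to see the mechanics, here is a minimal sketch of this kind of one-way ANOVA; the error scores below are invented stand-ins, since Silver's pollster-level database is not reproduced here:

```python
from scipy import stats

# Hypothetical raw error scores (positive = worse) for the two groups
transparent_scores = [-1.5, 0.8, -0.3, 1.2, -0.9, 0.4, -2.0, 0.6]
other_scores = [0.9, -1.1, 1.8, -0.4, 0.2, 2.1, -0.8, 1.3, 0.5, -1.6]

# With two groups, a one-way ANOVA is equivalent to a two-sample t-test;
# a non-significant p-value means the spread within groups swamps the gap between them
f_stat, p_value = stats.f_oneway(transparent_scores, other_scores)
print(round(f_stat, 2), round(p_value, 3))
```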

I also ran a simple means analysis. The average error among all pollsters is +.54 (positive error is bad, negative is good). Among "transparent" pollsters, the average score is -.63 (se=.23), while among other pollsters it is +.68 (se=.28). A potential difference, to be sure.

I then isolated the more prolific pollsters - the 63 organizations with at least 10 polls. Among this group, the 19 "transparent" pollsters have an average error score of -.32 (se=.23) and the other 44 pollsters average +.03 (se=.17). The difference is now less stark.

On the flip side, organizations with fewer than 10 polls to their credit have an average error score of -1.38 (se=.73) if they are "transparent" - all 8 of them - and a mean of +.83 (se=.28) if they are not. That's a much larger difference. Could it be that the real contributing factor to pollster performance is the number of polls conducted over time?

Consider that 70% of "transparent" pollsters on Nate's list have 10 or more polls to their credit, but only 19% of the "non-transparent" organizations have been as prolific. In effect, "non-transparent" pollsters are penalized for being grouped with a large number of colleagues who have only a handful of polls to their name - i.e., pollsters who are prone to greater error.

To assess the tangible effect of the transparency bonus (or non-transparency penalty) on pollster ratings, I re-ran Nate's PIE calculation using a level playing field for all 262 pollsters on the list to rank order them. [I set the group mean error to +.50, which is approximately the mean error among all pollsters.] Comparing the relative pollster ranking between his and my lists produced some intriguing results. The vast majority of pollster ranks (175) did not change by more than 10 spots on the table. On its face, this first finding raises questions about the meaningfulness of the transparency bonus.

Another 67 pollsters moved between 11 to 40 ranks between the two lists, 11 shifted by 41 to 100 spots, and 9 pollsters gained more than 100 spots in the rankings, solely due to the transparency bonus. Of this last group, only 2 of the 9 had more than 15 polls recorded in the database. This raises the question of whether these pollsters are being judged on their own merits or riding others' coattails, as it were.

Nate says that the main purpose of his project is not to rate pollsters' past performance but to determine probable accuracy going forward. The complexity of his approach boggles the mind - his methodology statement contains about 4,800 words including 18 footnotes. It's all a bit dazzling, but in reality it seems like he's making three left turns to go right.

Other poll aggregators use less elaborate methods - including straightforward means - and have been just as, or even more, accurate with their election models (see here and here). I wonder if, with the addition of this transparency score, Nate has taken one left turn too many.


Yost & Borick: The Silver Standard

Topics: AAPOR Transparency Initiative , Berwood Yost , Chris Borick , Disclosure , Franklin and Marshall College , Muhlenberg College , Nate Silver , Poll Accuracy

This guest pollster contribution comes from Berwood Yost, director of the Floyd Institute for Public Policy at Franklin and Marshall College, and Christopher Borick, director of the Muhlenberg College Polling Institute.

Nate Silver's compilation of performance data for election polling in the United States and his ratings of polling organizations should be applauded for increasing the public's ability to judge the accuracy of the ever-increasing number of pre-election polls. Helping the public determine the relative effectiveness of polls in predicting election outcomes can be compared to Consumer Reports equipping individuals with information about which products meet minimum standards for quality. As with the work of Consumer Reports, Mr. Silver is explicit in his methodology and provides substantial justification for the assumptions he adopts in his calculations. But as is the case in the construction of any measure, there are some reasonable questions that can be raised about what was included in those calculations. One such question has to do with the "affiliation bonus."

Silver's decision to include an "affiliation bonus" for pollsters that are either in the NCPP or have joined AAPOR's Transparency Initiative has significant consequences for his final ratings. Table 1 provides two pollster-introduced error (PIE) estimates for a sub-group of academic polling organizations, one that uses the calculation for all telephone pollsters and the other that uses the calculation for those pollsters who receive the "affiliation bonus." We chose this group because all of the organizations, regardless of their affiliation with NCPP or the AAPOR Transparency initiative, consistently release full descriptions of their methodology and provide detailed breakdowns of their results. The scores highlighted in yellow are those reported for each pollster on Silver's site. As Table 1 shows, the rankings are substantially different depending on whether a firm receives the "affiliation bonus."

[Editor's note: Chris Borick informs us that Muhlenberg College has signed on to the AAPOR Transparency Initiative, but did so after June 1, so it was not classified as a participant in Silver's ratings. Berwood Yost tells us that Franklin and Marshall intends to sign on, but has not done so yet.]

[Table 1: Pollster-introduced error (PIE) estimates for academic polling organizations, calculated with and without the affiliation bonus.]

As part of his rating methods, Mr. Silver chooses to discount the "raw scores" for polls despite noting that those scores are the most "direct measure of a pollster's performance." His primary justification for discounting the "raw scores" is that his project aims "not to evaluate how accurate a pollster has been in the past--but rather, to anticipate how accurate it will be going forward" (taken from Silver's methodological discussion). Those who read his rankings should take care to understand the distinction that Silver is making between past performance and expected future performance. We are not sure why the scores based on past performance are inferior to PIE, and he does not make a sufficiently strong case for the very heavy discount that he applies to those scores in his calculations. It would be valuable to see more evidence about what makes PIE a better indicator of polling performance. The "affiliation bonus" may indeed be correlated with the performance of polls, but is it actually the affiliations that are leading to better performance, or is it some other unmeasured variable that is at work? Silver's calculations show that the "affiliation bonus" explains only three percent of the variance in his regression equation and has a p value greater than .05. One may ask whether that is sufficient evidence to provide such a strong advantage to some pollsters.

In closing we would once again like to applaud Mr. Silver for taking on the important task of applying solid methods to the evaluation of pollster accuracy. The public needs such efforts in order to more effectively sift through the avalanche of polls that greet them every election season. Our intention is simply to note that the scores produced by Silver should be evaluated in terms of both their strengths and limitations.


Lundry: Twitter as Pollster

Topics: Interpreting polls , Measurement , Sampling , Twitter

Alex Lundry is Vice President and Director of Research for TargetPoint Consulting, a conservative political polling, microtargeting, and knowledge management firm. You can connect with him on Twitter where he expresses his opinions with great clarity so as to avoid confounding CMU's sentiment analysis.

Researchers at Carnegie Mellon have shown that unstructured text data pulled from Twitter can in some instances be used as a reliable substitute for opinion polling (link to study PDF). The results are impressive, and though pollsters needn't start looking for another line of work, I think they ignore this study at their peril.

Using very simple tweet selection mechanisms along with measures of each tweet's sentiment ("Obama's awesome" = approve, "Obama sucks" = disapprove; a minimal sketch of this kind of approach follows the list below), these researchers were able to:

  • extract an alternate measure of consumer confidence that was very highly correlated (r=73.1%) with the standard poll derived confidence metric,
  • use this Twitter-derived measure of consumer confidence to accurately forecast the results of the consumer confidence poll, and
  • measure President Obama's job approval rating and correlate it with Gallup's daily tracker at a level of r=72.5%.
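
Here is the minimal sketch promised above of that keyword-plus-sentiment approach; the word lists and tweets are invented, and the study's actual lexicon and day-by-day aggregation are considerably richer:

```python
from collections import Counter

# Invented word lists; a real subjectivity lexicon would be far larger
POSITIVE = {"awesome", "great", "love", "good"}
NEGATIVE = {"sucks", "terrible", "hate", "bad"}

def sentiment_ratio(tweets, topic="obama"):
    """Approve/disapprove word ratio among tweets that mention the topic keyword."""
    counts = Counter()
    for text in tweets:
        words = set(text.lower().split())
        if topic not in words:
            continue                          # single-keyword tweet selection
        counts["pos"] += len(words & POSITIVE)
        counts["neg"] += len(words & NEGATIVE)
    return counts["pos"] / max(counts["neg"], 1)

print(sentiment_ratio(["Obama is awesome", "obama sucks", "I love Obama's speech"]))
```

Note that the third tweet is skipped because the naive single-word match misses "Obama's," exactly the kind of limitation discussed at the end of this post.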

However, the same methodology failed miserably when it came to the 2008 presidential horse race, obtaining a correlation of r=-8% with Obama's level of support in the Gallup tracker.

It seems then that aggregate Twitter sentiment shows great promise as a polling substitute for high-volume and relatively binary opinions and attitudes: are you hot or cold on the economy, do you like or dislike the President? But the multinomial nature of items like a campaign horserace or the health care debate makes it difficult to extract meaningful opinions amid a crush of unstructured data.

Yet this is no reason for pollsters to shrug away these results. There is great predictive power hidden away inside this sort of latent data just waiting for the extraction of opinions, attitudes and trends in voter sentiment. Pollsters would be wise to begin incorporating these data into their work: analyzing Google Trends search data, counting Facebook friends, YouTube views and web traffic, or simply doing more with the rich verbatim data we typically capture in our surveys and focus groups. (And it's not just politics where this is applicable; tweet volume and sentiment have also been shown to be an incredibly accurate predictor of a movie's box office returns).

This study also highlights a debate the polling community must have sooner or later: can the shortcomings of dirty data be overcome by a mix of sheer volume, sound data preparation/manipulation and savvy analysis? In this new era of IVR, online panels, social media and big data, the answer is increasingly pointing to yes - especially when you consider the advantages of speed, cost and access that these non-traditional data collection methods enjoy.

Finally, it's worth taking a moment to consider just how stunningly impressive these results are. What level of precision might there have been with a more sophisticated methodology? Tweets were selected for study based merely upon the presence of a single word - imagine the accuracy if selection allowed for the use of synonyms, alternate spellings or Boolean operators. Moreover, as the researchers themselves point out, there were no geographical restrictions and no consideration of either online idioms or the practice of retweeting.

This is an exciting, important study, and the polling community should be taking it very seriously. It is well worth your time to read the whole thing, and I'm very curious to hear your take on it in the comments section below.


Wolf, Downs, and Ortsey: Who's Your Tea Party? Evidence from Indiana

Topics: Indiana , Interpreting polls , midterm , Tea Party movement

Michael Wolf is an Associate Professor of Political Science at Indiana University. He can be reached at wolfm@ipfw.edu.

Andrew Downs is Director of the Mike Downs Center for Indiana Politics and is an Assistant Professor of Political Science at Indiana University. He can be reached at downsa@ipfw.edu.

Craig Ortsey is a Continuing Lecturer of Political Science at Indiana University. He can be reached at ortseyc@ipfw.edu.

The authors would like to thank Brian Schaffner for his suggestions on an earlier draft of this piece.

Tea Party observers have floated two explanations for the group's emergence since its unexpectedly intense protests last year. The first explanation - embraced by conservative commentators and the movement itself - is that the Tea Party is composed of grassroots citizens upset at the direction of the country and the deficit. Democrats champion a second explanation: that the Tea Party is composed of Republicans upset that President Obama and the Democrats control Washington. If the Tea Party is a movement against Washington politicians no matter their political stripes, then establishment Republicans must be wary of disaffected voters picking off their incumbents in primaries, and President Obama faces a genuine rejection among voters he attracted in 2008. If it is simply Republicans upset at losing the presidency, 2010 looks more like a normal midterm election than an anti-incumbent revolt.

To get a better feel for the political dynamics behind the Tea Party, The Mike Downs Center for Indiana Politics asked registered Indiana voters whether they identified with the Tea Party, their vote intention for the Republican primary, and a series of election-related questions. Our first noteworthy finding is that 36% of registered likely Hoosier voters identified themselves with the Tea Party, while 61% of Republicans did.

Contrary to the "throw the bums out" rhetoric surrounding the movement, however, a plurality of Tea Partiers intended to vote for Dan Coats to be the Republican nominee for US Senate. Coats, a former senator and lobbyist who had homes in North Carolina and Washington, D.C. (but not Indiana) prior to jumping into the race, was recruited by the National Republican Senatorial Committee and was clearly the Washington establishment candidate. The candidates who reached out most aggressively to the Tea Partiers, Bates, Behney, and Stutzman, did relatively better with Tea Partiers than with non-Tea Party identifiers, but they still lagged behind Coats. Between this poll and the election, Stutzman's support surged, but that movement was more likely due to Senator Jim DeMint's Senate Conservatives Fund's late but strong support of his candidacy than due to a grassroots shift of non-Republican Tea Partiers looking his way.

[downs-table.png]

So where were the pitchforks and torches against establishment Washington? Our findings demonstrate that Tea Partiers are overwhelmingly Republican. The blue bars in Figure 1 show the percentage of Indiana Tea Partiers in each partisan category. Four in ten Hoosier Tea Partiers are strong Republicans, and when weak Republicans and independents who lean Republican are added to the strong Republicans, nearly 80 percent of Tea Party identifiers are Republican Party adherents. Less than 10 percent of Tea Partiers are Democrats or independents who lean Democratic. True independents make up less than 13 percent of Tea Partiers.

The second piece of evidence that supports the position that the Tea Party is a Republican phenomenon comes from the red bars of Figure 1. Here the percentage of Indiana Tea Partiers who voted for Obama in 2008 is presented across each category of party identification. Less than seven percent of all Tea Party adherents voted for Obama, and they are largely a handful of disappointed Democrats. The differences between the red and blue bars represent McCain supporters, implying that the great majority of Tea Party independents were McCain voters and even half of the Tea Party Democrats were McCain voters. The genesis of Tea Party identification does not result from a rejection of Obama by his own supporters; rather, it arises more from upset McCain supporters - hardly a broad-based grassroots movement.

[downs-chart1.png]

What explains this pattern of Tea Party identification, which looks as if it may have begun on November 5, 2008 rather than after the stimulus bills or auto bailouts? If the Tea Party were a response to the conditions of the country or frustration with spending, then a negative view of the direction of the country or a concern over the deficit should lead to an even distribution of Tea Party identification across party identification, or perhaps a bell-curve distribution concentrated among independents who identify with the Tea Party. To test these possibilities, we ran a logit model that yielded four significant explanatory variables. Two of these variables are issues associated with the Tea Party: believing that the US is on the "wrong track," and holding that the deficit is the most important issue facing the US. The other two significant variables are longer-term determinants: party identification and voting for John McCain in 2008. A factor analysis shows that party identification, view of national direction, and 2008 presidential vote all hang together as a single factor (the results of the logit model and factor analysis are available upon request). These outcomes imply that it is unlikely that the distribution of those viewing the national direction poorly is separate from Republican identifiers who voted for McCain. However, the salience of the deficit issue may still lead non-Republicans to be more apt to identify with the Tea Party, and that distribution may be concentrated outside of Republicans.
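For readers who want to see the mechanics, a logit model of this kind can be sketched in a few lines of Python. The variable names, the data file, and the coding of party identification below are hypothetical stand-ins, not the authors' actual data or specification.

```python
# A minimal sketch of the kind of logit model described above, using
# hypothetical variable names; the authors' actual data and coding are not shown.
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to contain one row per respondent with:
#   tea_party   - 1 if the respondent identifies with the Tea Party
#   party_id    - 7-point party identification (0 = strong Dem ... 6 = strong Rep)
#   wrong_track - 1 if the respondent says the US is on the "wrong track"
#   deficit_mip - 1 if the deficit is named the most important issue
#   mccain_2008 - 1 if the respondent reports voting for McCain in 2008
df = pd.read_csv("indiana_poll.csv")  # hypothetical file name

model = smf.logit(
    "tea_party ~ party_id + wrong_track + deficit_mip + mccain_2008",
    data=df,
).fit()
print(model.summary())
```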

Figure 2 indicates that this hypothesis is not correct. It presents the predicted probability of identifying with the Tea Party when one views the deficit as the most important issue (blue bars), the probability of Tea Party identification when the respondent believes that the deficit is the most important issue and views the US as being on the wrong track (maroon bars), and these two factors combined with voting for John McCain in 2008 (yellow bars), across each category of party identification. The overall message of this figure is that party identification conditions all of the factors that increase the probability of Tea Party identification. The distribution of deficit hawks' likelihood of identifying with the Tea Party is not a bell-shaped curve centered on independents; in fact, it follows the strength of party identification in a nearly perfect step-by-step progression. When this factor is combined with the view that the country is on the wrong track and (in a second step) with having voted for McCain, it is clear that the robust explanation for Tea Party identification is Republican Party identification rather than a populist reaction to national direction and deficits. The result is even more dramatic when Figure 2 is juxtaposed against Figure 1. The predicted probability of Democrats identifying with the Tea Party given these attitudes looks impressive in Figure 2 (roughly a 0.5 to 0.6 probability when they hold these attitudes and voted for McCain). However, there are almost no Democrats who hold these attitudes and who voted for McCain. Only nine of the 343 Tea Party identifiers are strong Democrats, weak Democrats, or Democratic-leaning independents who hold these attitudes and voted for McCain. In other words, the maximum 0.6 probability of Tea Party identification by Democrats of any stripe given these conditions is deceptively strong. On the other hand, it is very telling for Republicans. Indeed, the variable with the largest marginal influence on Tea Party identification is voting for McCain in 2008, but for Hoosiers this act is intertwined tightly with Republican Party identification and viewing the country as being on the wrong track.

[downs-chart2.png]

Of the two explanations for the Tea Party's rise (a grassroots non-partisan movement upset at Washington policy versus Republican frustration with losing the 2008 election and the Obama administration's policies), our evidence from Indiana supports the latter. The Tea Party in Indiana is a Republican phenomenon whose effects will most likely be on voter mobilization rather than voter choice in next November's elections. While these results are only from one state, there is reason to think that a state with a culture of Midwestern agricultural individualism would be more likely than most states to have a Tea Party movement independent of partisan politics. The fact that it is not bodes ill for the grassroots explanation being correct in other states. The Tea Party is popular because it has provided aggrieved Republicans with a "reset" button unconnected to the past. Rather than voicing their frustrations by placing "Don't Blame Me! I Voted for McCain!" bumper stickers on their cars (which we do not expect to see soon in Indiana or elsewhere), Republicans have found in the Tea Party a convenient vehicle for their grievances that is unconnected to the unpopular end of the Bush era.
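As a footnote to the Figure 2 discussion above, here is a minimal illustration of how such predicted probabilities are computed from a fitted logit. The coefficients below are invented placeholders, not the authors' estimates; only the shape of the calculation is the point.

```python
# Illustrative only: predicted probabilities in the spirit of Figure 2,
# computed from made-up coefficients rather than the authors' estimates.
import numpy as np

# Assumed (hypothetical) coefficients: intercept, party_id (0 = strong Dem
# ... 6 = strong Rep), deficit_mip, wrong_track, mccain_2008
b0, b_pid, b_def, b_track, b_mccain = -3.0, 0.45, 0.8, 0.9, 1.2

def p_tea_party(party_id, deficit_mip, wrong_track, mccain_2008):
    """Logistic transformation of the linear predictor."""
    xb = (b0 + b_pid * party_id + b_def * deficit_mip
          + b_track * wrong_track + b_mccain * mccain_2008)
    return 1.0 / (1.0 + np.exp(-xb))

for pid in range(7):  # strong Democrat (0) through strong Republican (6)
    print(pid,
          round(p_tea_party(pid, 1, 0, 0), 2),   # deficit only (blue bars)
          round(p_tea_party(pid, 1, 1, 0), 2),   # plus wrong track (maroon bars)
          round(p_tea_party(pid, 1, 1, 1), 2))   # plus McCain vote (yellow bars)
```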

Note: Statement on Methodology: This SurveyUSA poll was conducted by telephone using the voice of a professional announcer. Respondent households were selected at random, using a registration based sample (RBS) provided by Aristotle of Washington DC. All respondents heard the questions asked identically. The calls were conducted from April 22-26, 2010. The number of respondents who answered each question and the margin of sampling error for each question are provided. Where necessary, responses were weighted according to the voter registration database. In theory, with the stated sample size, one can say with 95% certainty that the results would not vary by more than the stated margin of sampling error in one direction or the other had the entire universe of respondents been interviewed with complete accuracy. There are other possible sources of error in all surveys that may be more serious than theoretical calculations of sampling error. These include refusals to be interviewed, question wording and question order, weighting by demographic control data, and the manner in which respondents are filtered (such as determining who is a likely voter). It is difficult to quantify the errors that may result from these factors. Fieldwork for this survey was done by SurveyUSA of Clifton, NJ.


Ford: Response to Nate Silver

Topics: modeling , Nate Silver , Politics Home , Robert Ford , UK elections

This morning, Nate Silver responded to yesterday's guest post by Robert Ford on the PoliticsHome UK poll tracking and seat projection model. In this entry, Ford responds on behalf of his team of political scientists that also includes Will Jennings, Mark Pickup and Chris Wlezien.

In a previous post at pollster.com I explained the model we have developed with politicshome. Nate Silver has since posted a lengthy critique of our approach, which I will respond to here on behalf of the team.

First I'd like to clarify a little the background to the post. We were asked by pollster.com to provide an explanation of the differences between the two approaches, and we did so. We provide our projections free of charge to politicshome.com as a way of contributing to the understanding of the current state of play, which in Britain as in the US is too often driven by a focus on individual polls, on spurious margin of error changes and on naive applications of uniform swing. Nate's model is also a valuable contribution to analysis of the British situation and as such we view it as a complement to our work, not as a competitor.

We agree with Nate that naive uniform swing performs poorly. We disagree that this implies abandoning the swing approach entirely, as the evidence from past elections suggests it is relatively straightforward to modify the swing model to improve its accuracy.

Modified swing models such as this have been employed with considerable success to the task of forecasting British election results from exit polls over the last 35 years. We attempt to build on this work, rather than start afresh. We do not consider this stubbornness or traditionalism but rather an appropriate approach given our aims: we wanted to provide a better tool for understanding and interpreting the polls, so we turned to research tools with a strong track record. We do not believe that this approach is necessarily superior to the approach Nate takes, and we agree that science is well served by putting alternative approaches to the problem out in the public domain in as much detail as possible.

We do, however, think there are two important problems with the approach Nate takes. The first is that it is necessarily more subjective than ours: the data needed to construct the matrices Nate uses simply do not exist, and the modeller therefore needs to construct them based on his own judgement. Nate quite rightly deals with this by adopting a scenario based approach to his forecast, so we can see how different assumptions lead to different outcomes. Again, I'd like to emphasise that we don't think this approach is wrong - it may well lead to a better forecast - it is just not our approach. We do feel that when a model involves subjective judgements like this, there should be a lot of clarity about how the decisions are made. Of course, model selection always requires some exercise of judgement, but we prefer an approach which requires fewer decisions, and therefore leaves less subjective judgement to justify. Nate's response to our post has certainly clarified his modelling process a great deal, although we still have a number of unanswered questions. For example: how does his team decide what to put into each cell of the matrix? What might lead them to change the entries? How are the matrices changed for subsets of the data - how does the vote split in Scotland, for example? Where do the votes lost by retiring incumbents go? I'm sure good answers exist for all of these questions; I would just like to learn more about them.

The second problem is that the proportional swing methodology Nate proposes does not have a good intellectual basis. Proportional swing supposes that most voters have a roughly similar propensity to switch votes, so when a party starts with a high level of support it will lose more than when it has a low level of support. As was first pointed out by Iain Maclean in 1973, there is no reason to suppose that all voters have an equal propensity to switch in this way. Many voters may be committed to one party, and may never consider voting for another. If the propensity to switch votes is unrelated to the strategic situation in the seat - in other words, if "floating voters" are equally distributed across seats - then uniform swing is more likely than proportional swing. We would still, however, observe proportional swing if floating voters were disproportionately influenced by local factors, and if these local factors tended to drive them away from the locally dominant party.

Yet in reality, the distribution of British floating voters is fairly uniform across seats. David Voas' analysis of the 2005 British Election Study shows no difference between marginal and safe seats in the proportions of voters who are undecided, who are thinking of changing their votes, and who have changed their minds about whether to vote at all. As Voas notes, in such a situation we would expect a uniform swing if the influences driving voters' decisions are primarily national. Our view is that the influences in British elections generally are national - the television and print media markets operate at a national level, the parties are national operations, the operation of government is national. The most salient issues - unemployment, the recession, immigration - are national issues. The unique new factor in 2010 - the "Clegg Bounce" - was the consequence of a debate aired on national television. There is therefore every reason to suppose that floating voters are being swayed by national factors. And as they are distributed evenly across seats, there is therefore also every reason to suppose that the change in vote will be distributed evenly. We may, of course, be wrong about this, as about every other aspect of our model. But we would welcome a clearer explanation from the fivethirtyeight.com team as to why they think proportional swing should operate in Britain, given the even distribution of floating voters and the dominance of national issues, national parties and a national media.

So our overarching justifications for using models based on uniform swing are that they have a long and strong track record, and a strong intellectual grounding. We apply similar criteria of strong empirical and intellectual grounding when making our adjustments. We adjust the swing in Scotland because Scotland has a uniquely distinct political culture, with a strong devolved Parliament where a different party currently governs, a distinct national media, and a different party system. The empirical evidence from repeated polling also confirms that the pattern of swing is very different there. We haven't made other regional adjustments because both the intellectual case and the evidence base are weaker. I should apologise to Nate for misunderstanding which regional data he used to make his adjustments. The data he is using is fine in terms of recency but the differences in it are not very large and some of the sample sizes for individual regions are quite small.

We adjust the swing in marginal constituencies because we know that the parties concentrate their spending and campaign resources in such seats, and we know from past research that such campaigning efforts make a difference. We also have a good evidence base from a series of recent polls of Labour-held marginal constituencies, all of which have shown around a 2% swing bonus to the Conservatives. I'd like to spend a little time clarifying all the decisions here, as Nate has described this section of the model as its "weakest facet" and considers the choices to be "arbitrary". We apply the swing bonus in Labour-held seats where the party holds 6 to 14 point majorities. There are two reasons for this choice - again, they are intellectual and empirical. The intellectual aim here was to capture the subset of seats where the Conservatives would be concentrating their resources in order to win a majority. Seats requiring very small swings are almost certain to fall, and so are likely to receive fewer campaign resources, which is why we apply the 6 point cut-off: the polling in 2010 has consistently shown a swing from Labour to the Conservatives of well more than 3 percent, implying nearly all of these seats should fall. The 14 point cut-off is chosen because if the Conservatives capture the seats above this point, they are almost certain to have a majority. As achieving a majority is their primary goal, the seats needed to achieve it should receive the most resources. The empirical justification is that the 6 to 14 point range roughly equates to the range of seats that have been polled. The choices are therefore not arbitrary, although they must involve a degree of judgement. We are not wedded to them and will happily adopt more elegant solutions to modelling this issue - we would welcome suggestions on this front. However, the choices we currently make are well grounded both theoretically and empirically.
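As a concrete illustration of the rule described here, the adjustment can be written in a few lines. The seats and the base national swing below are invented; only the 6-to-14 point window and the 2-point bonus come from the text.

```python
# A sketch of the marginality adjustment as described: a 2-point extra swing
# to the Conservatives in Labour-held seats with 6-14 point majorities.
seats = [
    {"name": "Seat A", "holder": "LAB", "lab_majority": 4.0},
    {"name": "Seat B", "holder": "LAB", "lab_majority": 9.5},
    {"name": "Seat C", "holder": "LAB", "lab_majority": 18.0},
    {"name": "Seat D", "holder": "CON", "lab_majority": None},
]

BASE_SWING = 6.0   # assumed national Lab-to-Con swing, in points
BONUS = 2.0        # extra swing in targeted marginals

def con_swing(seat):
    """Return the Lab-to-Con swing applied to a given seat."""
    if (seat["holder"] == "LAB"
            and seat["lab_majority"] is not None
            and 6.0 <= seat["lab_majority"] <= 14.0):
        return BASE_SWING + BONUS
    return BASE_SWING

for s in seats:
    print(s["name"], con_swing(s))
```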

We do not apply such adjustments to seats involving the Liberal Democrats, again for both intellectual and empirical reasons. Intellectually, the Lib Dems have far fewer resources available, and until recently they were focussing most of these on defending seats from the Conservatives, not winning them from Labour. We do not see any strong theoretical reason to expect the recent pickup in Lib Dem fortunes to apply most strongly in Lab-Lib Dem seats. We also have very little polling data on this subject - there has been one poll suggesting the Lib Dems are doing better in Labour held seats than Tory held ones - but it has a relatively small sub-sample of each.

It is also by no means clear that applying a marginality adjustment to Lab-Lib Dem marginals would have a dramatic effect. Current polling suggests around a 7 point rise in Lib Dem support from 2005 and a nine point decline in Labour support. There are only 28 seats where the Lib Dems are close enough to win on the basis of such an 8 point swing from Labour to the Lib Dems. Allowing a 2 point bonus only brings about another ten seats into view. Applying the bonus to Conservative seats would have a somewhat larger effect, although it would be dampened because the Conservatives are also expected to improve their vote, reducing Lib Dem opportunities. However, we would not rule out strong Lib Dem performance in such seats entirely, and our approach allows us to model it effectively. We have not done so yet because we don't think there is enough evidence, but we could certainly explore what the effect would be on our estimates. If time permits, we will do so.

With regard to incumbent effects, I concede that I was not clear about what I meant by robust effects and was too harsh in my assessment of Nate's modelling choice here, though to be fair he had not previously provided details of the source of his estimates. We decided not to add an incumbency adjustment for two reasons. Firstly, the pattern of effects changes quite considerably between elections. A quick regression analysis of the 2001 election, identical to Nate's, shows a negative Labour incumbency effect twice as large as in 2005, and a Conservative effect which is about the same. The Lib Dem effect - which is the largest in Nate's model and the most consequential - is the least robust. A 3 point negative retirement effect becomes a one point positive effect. In 2005, the Lib Dems did much worse when the incumbent MP retired. In 2001, they did slightly better. Secondly, there are strong reasons to expect the incumbent effect to operate very differently in this election. Parliament was convulsed by a massive expenses scandal in the summer of 2009, with many incumbent MPs abusing their privileges to buy property and luxury goods at the taxpayers' expense. This is widely expected to have a significant impact on many races, and has significantly altered both the pattern of retirements and the value of incumbency. Voters may choose to punish the worst offenders, or reject all incumbents as tainted. We simply do not know. To make strong assumptions about how incumbency effects will work in 2010, derived solely from how they worked in 2005, seems imprudent to us given the circumstances.
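The "quick regression analysis" referred to here is, in spirit, a simple OLS of constituency-level vote change on a retirement indicator. The sketch below uses hypothetical column and file names and is not the authors' code.

```python
# A rough sketch of the kind of quick regression described: regressing the
# change in a party's constituency vote share on whether its incumbent MP
# retired. Variable names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: change_ld (change in Lib Dem share since the previous
# election, in points) and ld_retired (1 if the sitting Lib Dem MP retired).
results_2001 = pd.read_csv("results_2001.csv")  # hypothetical file

model = smf.ols("change_ld ~ ld_retired", data=results_2001).fit()
print(model.params)  # the ld_retired coefficient is the estimated retirement effect
```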

Nate's conclusion makes three arguments: that uniform swing models make strong, unfounded assumptions, that uniform swing models have failed badly in some elections, and that uniform swing models are inelegant. I think each is a little unfair. Uniform swing's assumptions are strong, but there has been a lengthy and fruitful academic debate in Britain about their foundations, and there is more intuition and empirical evidence to support them than to support the assumption of proportional swing. Nate is right that a basic, naive uniform swing model performs poorly in elections like 1997, but this is an argument for improvement, not abandonment. In fact, a modified probabilistic uniform swing model, with data-based differential swing adjustments, was employed in 1997 to model the result based on the exit poll, and performed very well. Finally, Nate accuses the uniform swing model of inelegance. I disagree - it is true that the model can predict negative votes, but this is simple to correct for. A negative vote simply suggests the party's achieved vote will be very low. I don't see what is so inelegant about that, and I find much that is elegant in a model that can condense vote changes into a small number of coefficients which can easily be derived from commonly available polling data.

My colleagues and I agree with Nate that models should be constantly analysed, tested and improved. We have attempted to build a model that incorporates thirty years of such analysis, testing and improvement in the realm of BBC exit poll forecasting. Our view is that the technology developed in this context can provide a valuable resource for understanding how current polling will translate into results, and this was the motivation for making a set of projections based upon it available via the politicshome.com website. It is possible that the developments of the past few weeks have rendered this technology obsolete and require a radically new approach. We remain unconvinced, but in the end it is the British voters who will provide the final verdict on the debate, at least for this election cycle.


Ford: Our Model vs. 538

Topics: modeling , Politics Home , UK elections

As noted yesterday, we are now following the PoliticsHome UK poll tracking and seat projection model developed by political scientists Robert Ford, Will Jennings, Mark Pickup and Chris Wlezien. I asked Ford if he could explain how their efforts differ from the model developed by Nate Silver and his colleagues at FiveThirtyEight.com. This is his response.
--Mark Blumenthal

How does our model differ from Nate Silver's recently unveiled model of UK elections? The very brief answer is that our model involves applying a modified version of "uniform swing" - the same change of vote in each seat, with some modifications - while Nate's involves proportional swing, where the change in each seat relates to the balance of party power beforehand. Under Silver's model, we should see a greater swing against Labour where Labour start more strongly, and this effect should increase proportionally with Labour's starting strength.
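A small numerical sketch may help readers unfamiliar with the two approaches. The national and seat-level figures below are invented for illustration, and the proportional formula shown is the textbook version rather than Nate's actual transition-matrix implementation.

```python
# A minimal sketch contrasting uniform and proportional swing on a single
# hypothetical seat. All shares below are invented.
national_2005 = {"CON": 0.33, "LAB": 0.36, "LD": 0.23}
national_now  = {"CON": 0.37, "LAB": 0.27, "LD": 0.30}   # assumed poll average

seat_2005 = {"CON": 0.30, "LAB": 0.48, "LD": 0.16}        # a Labour-held seat

# Uniform swing: add the same national change to every seat.
uniform = {p: seat_2005[p] + (national_now[p] - national_2005[p])
           for p in seat_2005}

# Proportional swing: scale each party's seat share by its national ratio,
# so a party loses (or gains) more where it starts out stronger.
proportional = {p: seat_2005[p] * (national_now[p] / national_2005[p])
                for p in seat_2005}

print("uniform:     ", {p: round(v, 3) for p, v in uniform.items()})
print("proportional:", {p: round(v, 3) for p, v in proportional.items()})
```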

Empirically, there is little support for Nate Silver's conception of proportional swing, as shown in this recent paper by my colleague David Voas.

There is no evidence of larger swings in recent elections (including 1997) where parties start off more strongly. There is some evidence that swings are larger where the parties are competing more closely, but in our view Nate's model is a poor way to capture this dynamic.

We agree with Nate that there is plenty of evidence that a naive application of uniform swing is misleading, however we feel the best approach is to improve on uniform swing rather than abandon it entirely. Two major factors are seldom accounted for in popular applications of uniform swing. Firstly, uniform swing is generally applied deterministically, making no allowance for random variation in swing between seats. Secondly, it is applied too rigidly, making no allowance for systematic deviations identified in the data. We apply a probabilistic model, based upon a formula developed by John Curtice and David Firth for application in the 2005 General Election, where it was employed very successfully to project the result from exit polls. The model allows for a non-normal distribution in swing variations, and calculates a probability of each party winning each seat based on the vote shares expected (from opinion polls or exit polls). The seat totals are simply the sum of the probabilities.
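The following toy simulation conveys the general idea of a probabilistic projection: a win probability for each party in each seat, with seat totals equal to the sum of those probabilities. It is emphatically not the Curtice-Firth formula (which, among other things, allows for non-normal swing variation); the shares, swings and noise level are all invented.

```python
# A toy probabilistic swing projection: apply a national swing plus random
# seat-level variation, then sum win probabilities to get expected seats.
import numpy as np

rng = np.random.default_rng(0)
parties = ["CON", "LAB", "LD"]

# Hypothetical 2005 shares in three example seats (rows) for CON, LAB, LD.
prior = np.array([
    [0.38, 0.40, 0.18],
    [0.30, 0.45, 0.21],
    [0.34, 0.33, 0.29],
])
national_swing = np.array([0.04, -0.09, 0.07])  # assumed changes since 2005
seat_sd = 0.03                                  # assumed seat-level variation

n_sims = 10_000
wins = np.zeros((prior.shape[0], len(parties)))
for _ in range(n_sims):
    noise = rng.normal(0.0, seat_sd, size=prior.shape)
    projected = prior + national_swing + noise
    winners = projected.argmax(axis=1)
    wins[np.arange(prior.shape[0]), winners] += 1

win_prob = wins / n_sims              # probability each party wins each seat
seat_totals = win_prob.sum(axis=0)    # expected seats = sum of probabilities
print(dict(zip(parties, seat_totals.round(2))))
```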

This model also incorporates systematic differences in swing suggested by the polling data. We anticipate stronger Conservative performance in the marginal seats where they are competing directly with Labour by allowing an extra 2 points of swing to them in such seats. We also anticipate a different pattern of party performance in Scotland - which has its own government and a different party system - by incorporating the latest polling data estimates from Scotland, and adjusting the change in the rest of England and Wales to ensure the aggregate change sums to the same national figure. These adjustments are based on differentials which have shown up robustly in several recent polls of marginal constituencies and of Scotland.
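The Scotland adjustment amounts to a simple rebalancing so that the Great Britain-wide change still matches the national polls. A sketch, with invented numbers (including Scotland's assumed share of the electorate):

```python
# A toy sketch of applying a separate Scottish swing while rebalancing the
# swing elsewhere so the GB-wide change still matches the national polls.
gb_change_lab = -9.0        # assumed GB-wide change in Labour share, in points
scotland_change_lab = -4.0  # assumed change from Scotland-only polls
scotland_weight = 0.09      # Scotland's assumed share of the GB electorate

# Solve w_scot * scot + (1 - w_scot) * rest = gb for the rest-of-GB change.
rest_change_lab = (gb_change_lab - scotland_weight * scotland_change_lab) / (1 - scotland_weight)
print(round(rest_change_lab, 2))  # the (larger) change applied outside Scotland
```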

Nate also makes a variety of adjustments of this kind, but his changes are not as well grounded in empirical evidence from the polling data. Firstly, the transition matrix he applies to vote shares is based upon a weak evidence base - while pollsters provide details of respondents' recalled 2005 vote, the transition matrices calculated from this are subject to bias due to respondents' tendency to misremember their votes - in particular remembering voting for the winning party when they did not. This phenomenon is well established, and British pollsters attempt to correct for it in their weighting. However, any model which uses transitions in vote from polling data is likely to overestimate the extent of switching from the current governing party to opposition parties, because many people who say they voted for the governing party last time did not actually vote for them. We suspect this may contribute to Nate's high estimate of change from Labour to the opposition parties.
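For readers unfamiliar with transition matrices, the projection itself is just a matrix product; the difficulty described here lies in estimating the entries from recalled-vote data. The matrix below is invented purely to show the mechanics.

```python
# A minimal sketch of projecting current vote shares from a transition matrix
# of the kind discussed. All entries are invented for illustration.
import numpy as np

parties = ["CON", "LAB", "LD", "OTH"]
shares_2005 = np.array([0.33, 0.36, 0.23, 0.08])

# transition[i, j] = share of party i's 2005 voters now intending to vote j
transition = np.array([
    [0.88, 0.02, 0.07, 0.03],
    [0.12, 0.70, 0.13, 0.05],
    [0.08, 0.05, 0.82, 0.05],
    [0.10, 0.05, 0.10, 0.75],
])

shares_now = shares_2005 @ transition   # current shares implied by the matrix
print(dict(zip(parties, shares_now.round(3))))
```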

Secondly, the changes Nate makes for regional differentials in swing are based on polling data that is two years old and was collected in a very different political environment to the current one - the Conservatives were a long way ahead in the polls while the Lib Dems were far below their current tally. We considered incorporating regional swings based on this data, but rejected the change due to the age of the data. We incorporate changes for Scotland as we have a good evidence base from Scotland specific polling, which is regularly updated.

We do not attempt to model "tactical voting" or the effects of incumbent retirements because we simply do not have good quality, recent data on the pattern or level of such effects. Our own regression analysis of incumbent effects did not reveal robust effects of incumbent retirements in recent elections, so we are rather surprised to learn that Nate has uncovered some. Modelling effects such as these, where the statistical evidence is weak, requires making strong assumptions. We prefer not to make such assumptions, sticking only to effects where the evidence base is very strong.

On top of our votes to seats projection, we also make efforts to develop a robust estimate of current public opinion. Nate freely admits that his public opinion figures are "educated guesses based on recent cross-tabular results". We employ a state space model to estimate current public opinion every few days, while controlling for systematic "house effect" differences between the pollsters and differences in the sample sizes they employ in their polls. The polling data inputted into our model is therefore based on a more systematic aggregation of available public opinion, although to be fair our current estimate of public opinion is quite close to Nate's.
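The sketch below is a greatly simplified stand-in for the kind of aggregation described; it is not the team's state space model. It estimates each pollster's house effect as its average deviation from a rolling cross-poll trend and then takes a sample-size-weighted average of the corrected figures; the file and column names are hypothetical.

```python
# A simplified poll-aggregation sketch with crude house-effect correction.
import pandas as pd

# Assumed columns: date, pollster, n (sample size), con (Conservative share).
polls = pd.read_csv("uk_polls.csv", parse_dates=["date"])  # hypothetical file
polls = polls.sort_values("date")

# Rolling 14-day cross-poll trend, used only as a baseline for house effects.
trend = polls.set_index("date")["con"].rolling("14D").mean()
polls["dev"] = polls["con"].values - trend.values
house = polls.groupby("pollster")["dev"].mean()           # each pollster's house effect
polls["corrected"] = polls["con"] - polls["pollster"].map(house)

# Sample-size-weighted average of corrected figures from the last two weeks.
recent = polls[polls["date"] >= polls["date"].max() - pd.Timedelta("14D")]
estimate = (recent["corrected"] * recent["n"]).sum() / recent["n"].sum()
print(round(estimate, 1))
```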

To sum up, we believe our model has a stronger basis in existing analysis of UK voting patterns, and is based upon techniques that were employed successfully in 2005. Our approach is more sophisticated than other available UK resources, both in terms of its poll aggregation technique and in terms of its seat projection technique. We disagree with Nate's claim that uniform swing models are a low bar to clear - a model based upon a modified uniform swing approach, which employed the probabilistic techniques we use, got the Labour majority in 2005 exactly right based upon exit poll data and early seat declarations. This looks to us like rather a high bar to clear!

Of course, this election is perhaps the most difficult to predict since polling began in Britain, and it may be that uniform swing fails miserably and that proportional swing of the form Nate proposes manifests strongly next Thursday. We prefer to navigate these uncharted waters with tried and tested methods as a guide; Nate suggests a radically new environment requires radically new methods. We will all know for sure in a week!

For those interested in learning more, the model used to forecast the 2005 election based upon exit poll data and early results is detailed here. Our seat projection techniques are based on those used in this model.

Further details of the model are also available on our PoliticsHome.com page.


Anthony Wells Interview: Part 2

Topics: Internet Polls , Interpreting polls , Measurement , Sampling , UK elections

Anthony Wells is the editor of the UK Polling Report and an associate director at YouGov [interests disclosed: YouGov is Pollster.com's parent company]. He spoke with Emily Swanson on Tuesday about polling and the UK elections. Below is part 2 on polling methodology. Part 1 on the state of the race and interpreting polling data is available here.

Could you tell me about some of the different pollsters active in the UK?

We've got about 5 who I'd call established pollsters. Just in the run-up to this campaign there have been a lot of new entrants, but over time there have been about 5 that have been there since the last election. Four of those are telephone pollsters, so they'll all use random digit dialing, and IPSOS-MORI has some degree of quota sampling in terms of who they ask depending on who in the household picks up the phone. The other three I think are pretty much random. The fifth one is ours, YouGov, and we have a panel-based internet methodology, so our sampling basically is quota sampling.

You said [in Part 1] that up until last week it looked like a Conservative blowout and suddenly it doesn't really look like that anymore. Given how quickly things can change, is it difficult for pollsters and analysts to deal with the brevity of the election cycle?

It's not really a problem in that sense - what I always noticed, the difference between US polling and UK polling is that away from election times the main currency of US polling seems to be presidential approval rating. And the generic, would you vote for Republican or Democratic candidates in congressional elections, is pretty much divided. Here, the currency is voting intention, because we always know who the alternative prime minister is going to be. So that question is asked, year in, year out, throughout the whole parliamentary term. So really, the sort of questions that pollsters ask in an election campaign are much the same as the ones we ask outside an election campaign, we just ask it more often.

So it's much like our generic congressional ballot, where they'll ask throughout the entire cycle, 'if the election were held today...'

Yeah. Like that, but we pay it far more attention. We don't really pay much attention to government approval rating, where presidential approval rating seems to be the question that everyone looks at in the US.

It seems like YouGov has really caught on in the UK, and generally speaking there's more acceptance of the idea of internet polling.

Well it was a long time, and basically we kept getting things right. There was extreme distrust to start with, and then we got the 2001 election right, and then got several sort of "mid-term" elections right, and got the 2005 election right as well, and after that point I think we began to be accepted. A stopped clock tells the right time twice a day, but if it keeps telling the right time you have to basically concede that it's working.

It was sort of a hard slog to begin with. I mean now, I talked about the 5 main established pollsters - in the run-up to this campaign, there's been at least 3 or 4 new entrants who are largely online-based ones, basically following in YouGov's footsteps. At least 2 of them were founded by people who used to work for YouGov, and they're trying to go off and do it themselves.

Are a lot of the media outlets reporting on the newer internet polls, or are they waiting until those prove themselves as well?

In terms of newspaper media, they'll mostly report on polls they've commissioned themselves, however shoddy, and then they'll probably more often report the established pollsters if they're going to mention someone else's polling, but they do tend to be very parochial about it and make a big fuss about their own one, and mention other people's at the bottom of page 57. The broadcast media - largely the BBC - because we've got a much more limited pool of pollsters, they largely seem to do it on a case-by-case basis. In the previous election, they really were very sniffy about internet polling, and they mentioned the 4 main phone and face-to-face pollsters at the time and didn't mention internet as much. These days, we seem to be one of the ones they do refer to, while they'll ignore most of the new entrants, so it's all whether they're established or not, not methodology, in terms of the BBC.

Is there anything else you think US audiences should know about UK polling or the UK elections more generally?

Actually there is something that might be worth pointing out as a difference, which is that our figures nearly always exclude don't knows. We percentage them out, and US ones don't. Most companies just sort of ignore them. They just assume they won't vote or they'll vote in exactly the same way. Two of them, ICM and Populus, reallocate 50% of them based on what they voted for last time.

The common parlance is that it's the "Shy Tory adjustment" - they first started doing it after 1992, when the polls got it horribly wrong, and they underestimated the Conservative vote, and one of the reasons amongst others they thought was that people were embarrassed to admit to pollsters that they were actually going to vote Conservative. And they saw, looking at the people who were saying "Don't know," a disproportionately large proportion of those were people who had voted Conservative in 1987. There was also solid evidence based on post-election callback surveys that people who said don't know did tend to vote for the party they had done previously. So they reallocate 50% or so according to their previous vote. But, while people still call it the Shy Tory adjustment sometimes, it doesn't actually help the Conservatives anymore. Now it tends to help Labour - they're actually shy Labour voters now.
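To illustrate the reallocation Wells describes, here is a minimal sketch with invented figures: half of the don't-knows are handed back to the party they recall voting for last time, and the shares are then re-percentaged.

```python
# A sketch of the "Shy Tory" style reallocation: 50% of don't-knows are
# reassigned according to recalled past vote. All figures are invented.
raw = {"CON": 0.36, "LAB": 0.30, "LD": 0.22}           # stated vote intention
dont_know = 0.12                                        # share answering "don't know"
dk_past_vote = {"CON": 0.45, "LAB": 0.35, "LD": 0.20}   # don't-knows' recalled 2005 vote

adjusted = {p: raw[p] + 0.5 * dont_know * dk_past_vote[p] for p in raw}
total = sum(adjusted.values())
adjusted = {p: round(v / total, 3) for p, v in adjusted.items()}  # re-percentage
print(adjusted)
```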

Are there any resources that American audiences should know about if they're interested in UK elections?

The main ones, really, are: the BBC website would probably be the best place to start, and beyond that it tends to be the main newspaper websites, so the Telegraph, the Guardian, the Times, and so on.


Rivlin & Rivlin: Is Trust in Government Really at an All Time Low?

Topics: Pew Research Center , Sheri and Allan Rivlin , Trust

Sheri Rivlin and Allan Rivlin are the Co-Editors of CenteredPolitics.com. Allan Rivlin is a Partner at Hart Research Associates. In 1993 Allan Rivlin was a Special Assistant in the U.S. Department of Health and Human Services.

Some survey findings have more legs than others, and the recent report "Distrust, Discontent, Anger and Partisan Rancor" from the Pew Research Center for the People and the Press seems to be getting more attention than most. Conservative commentators such as the Wall Street Journal's Dan Henninger are using the finding that just 22% trust the government to do what is right "just about always" or "most of the time" to suggest that Obama's policies are completely at odds with the mood of American voters and to predict that the Democrats will lose big in the November election.

Democrats can take heart in the fact that this number was 17% in a CBS poll taken in October 2008, just before the Republicans were swept out of power. Henninger falsely asserts that the poll results are an "historical low" point for the measure. Nonetheless, the fairly low reading is further evidence that the mood of America is more anti-incumbent than anti-Democratic.

Indeed, the chart and accompanying data table Pew compiled from the iPOLL database maintained at the Roper Center for Public Opinion Research at the University of Connecticut are very illuminating. From readings as high as 73% in 1958 and 77% in 1964 we see the long steady decline in the chart through the Vietnam War years to 53% in 1972. There is a sharp fall off after the Watergate scandal to 36% in 1974, and a continued decline through the Ford and Carter years to 25% in 1980. That's about where the number is now.

The number moves upward through the first Reagan term to 47% in 1984, but then falls again to 40% in 1988 and hits a low of 22% (again the current number) as George Bush the elder is running for reelection during an economic downturn. The number stood at 17% in 1994 after the defeat of the Clinton health reform effort, and just before Democrats lost control of Congress for the first time in 40 years, so Democrats cannot afford to ignore the fact that the number is again near its lowest measures.

But the number did rise dramatically during the Clinton years hitting 44% before the 2000 election. The measure spiked to 60% just after September 11, 2001 but then declined through the Bush years to 17% just before Obama was elected.

So what do these results really tell us?

The survey question is really capturing three things at once. 1) The number rises and falls with the economy, which is a key driver of overall satisfaction with government. 2) The number falls in response to a major scandal such as Watergate or Iran-Contra.

And 3) we are not the same as the American public of the 1950s. Belief in institutions - all institutions, from the Catholic Church to large corporations to the military, the political parties and the federal government - is something to be read about in historical novels and seen only in the first season of Mad Men. After hearing the justification for the Iraq invasion, what grown-up in 2010 would say they trust the government "just about always," as 3% do in the current survey? These days the modal choice of conservatives and liberals is "some of the time," the answer chosen by a majority of Democrats and Republicans in the survey -- but counted as "distrust" when the results are summarized.

So how worried should Democrats be based on the results reported in the Pew study? The answer is more than a little, but less than the survey's conservative trumpeters suggest. The truth is the survey tells us the conservative movement has been successful in its decades-long campaign to reduce the trust in government built by FDR's successful response to the Great Depression and World War II. This effort, most emblematically captured in Ronald Reagan's "The government IS the problem" mantra, has had a long-term effect, and combined with a poor economy it means the Democrats are challenged by an anti-incumbent headwind heading into this election.

But in our most recent post we explain why the predictions of doom are misplaced and what we think Democrats should do to turn the tide in our favor.

Cross-posted at Centered-Politics.com.


Anthony Wells Interview: Part 1

Topics: election results , Interpreting polls , UK elections

Anthony Wells is the editor of the UK Polling Report and an associate director at YouGov. He spoke with Emily Swanson on Tuesday about polling and the UK elections. Below is part 1 on the state of the race and interpreting polling data. Tomorrow we will post part 2 on polling methodology.

First of all, could you tell me a little bit about UK Polling Report and about yourself?

UK Polling Report I started around 2005, really just a similar thing to what Mark does. About me, I'm an associate director at YouGov, which is with Polimetrix in the US and in the UK now as well, so it's the parent company for Pollster.

Could you tell me a little bit about what the state of the race is in the UK right now?

Until last week, it was a Conservative lead of about 6 or 7 points, before our first televised election debate between the leaders, and after that, the third party, the Liberal Democrats, just sort of rocketed in support, so they're up to just about 30 percent or so, and realistically it's about neck and neck between them and the Conservatives as the polls bounce back and forth. [Note: The second debate in this race took place earlier today. Wells has posted instant reaction polls here and here.]

So what does that mean for the Liberal Democrats? If they're neck and neck in the polls, could they get the most seats in parliament?

Assuming a uniform swing, then no, they'll still be miles behind. On the latest polls, we had the Liberal Democrats on top, neck and neck with the Conservatives, then Labour, but in terms of seats, it would equal out to Liberal Democrats having the fewest seats, then the Conservatives, then Labour having the most, despite having the fewest votes.

How is it that that could happen?

It's actually different reasons. The Conservative and Labour disparity is largely down to demographics. Labour seats tend to be smaller because the demographic movement in the UK population is people moving from the inner cities, which tend to be Labour seats, out into the suburbs, which tend to be Conservative seats. Boundary distributions normally lag about 10 years behind. So that helps Labour. You also get very low turnout in a lot of Labour seats, and you also get lots of tactical voting against the Conservatives. So they all mean that, on an equal vote [between Conservatives and Labour], Labour do much better.

The Liberal Democrats, they do much worse for different reasons. It's basically because they almost broke through in the early '80s - they were in third place in the 1983 election, only a fraction of a percentage point behind Labour. But they hardly got any seats - about 20 seats - despite winning about a quarter of the vote, because their vote was very evenly spread across the country. Since then, they've become very, very good at targeting - very, very good at focusing their campaign on winner-take-all seats. Which means now they win more seats on a lower percentage of the vote than they had back in 1983. So they've suddenly got this great big grapefruit. They've got 60 or so seats they hold, then maybe 40 or 50 more marginals, and then in the rest of the country their vote is very, very low. In over half the seats they've got under 20 percent, so there's just a huge gap for them to climb over before they start getting a large number of seats.

The Labour vote is very efficiently distributed. The Liberal Democrat vote is very efficiently distributed for a party that has 20%, but if you suddenly get up to 30%, their vote is atrociously distributed.

You talked a little bit about the assumption of a uniform swing. Is that usually how analysts predict the results?

Yes. And right now, we're all being very cautious (laughs) and hedging a bit. Typically, yes, everyone uses uniform swing. All the media, all the newspapers, all the broadcasters will talk about uniform swing. And it's not that bad. Labour, in 1997 in the big landslide, they outperformed it by a lot because there was tactical voting against the Conservatives. And that stayed around a bit in 2001, but apart from that, uniform national swing has been a pretty good predictor. Now you get lots of pundits and pollsters on television saying, "If there's a uniform swing, this would happen," but if one of the parties suddenly has gone from 20% to 33%, overtaking the other two, then we really don't know!

So when you say uniform swing, that's from when?

From the last election.

Right now you're predicting [at UK polling report] a hung parliament. Could you talk a little bit about what that means?

It's no overall majority, so the largest party has fewer than 326 seats. What happens in practice is there will either be some form of coalition, so two parties both taking seats in the cabinet, or there will be some form of informal pact where one party will opt to support another one without taking cabinet seats, but will vote in favor of their broad program and their budget resolutions, and take other legislation on a case by case basis.

What role, if any, do any regional differences in party support play in interpreting poll results?

The main one would be Scotland. In the past there haven't been big regional differences in swing apart from Scotland, which has quite often gone in sort of an opposite direction from the rest of the country. 1992 is sort of the obvious example - the rest of the country swung towards Labour, but actually the Conservatives gained strength in Scotland. Elsewhere in the country there's just no history of big differences, so no one pays it too much attention.

So is Scotland treated differently? I noticed there was maybe at least one pollster who was polling specifically in Scotland.

Quite a lot do - it depends who commissions it. That's probably why more separate Scottish polls exist, because there's more separate Scottish media. There isn't really a separate media for the East Midlands, or separate polls. In Scotland, there are separate newspapers so they do commission separate polls. In terms of projections, most of the time the media just do go for a simple uniform national swing, even though actually factoring in Scotland separately would make it a bit more accurate. The reason is probably part simplicity and not being bothered to do that bit, and partially because there aren't a huge number of marginal seats in Scotland, so while it would make it more accurate it's not going to make a great difference in the headline figure.

Note: Check in tomorrow for part 2 of this interview


Lakoff: The Poll Democrats Need to Know About

Topics: ballot initiatives , Framing , George Lakoff , Interpreting polls

George Lakoff is Goldman Distinguished Professor of Cognitive Science and Linguistics at the University of California at Berkeley, and the author of The Political Mind. He is the author of the California Democracy Act and chair of Californians for Democracy, the campaign organization advocating its passage.

This is a case study of how inadequate polling can lead Democrats to accept and promote a radical Republican view of reality. This paper compares two polls, one excellent and revealing, the other inadequate, misleading, and counterproductive. The issues raised are framing and value-shifting (where voters shift, depending on the wording of questions, between two contradictory political world-views they really hold, but about different issues). It also discusses how polls can reveal the difference between what words are commonly assumed to mean, versus what they really mean to voters -- and how polls can test this.


It is a truism that poll results can depend on framing. For example, the NY Times reported last month on a NYT/CBS Don't-Ask-Don't-Tell poll on whether "homosexuals" or "gay men and lesbians" should be allowed to serve openly in the military. Seventy-nine percent of Democrats said they support permitting gay men and lesbians to serve openly. Fewer Democrats, however - just 43 percent - said they were in favor of allowing homosexuals to serve openly. That's a 36 percent framing shift on the same literal issue, but not surprising since the words evoked very different frames, one about sex and the other about rights. Newsworthy for the NY Times, but hardly earthshaking.

But a recent poll by David Binder, perhaps the premier California pollster, showed a framing shift of deep import for Democrats -- a shift of 69 percent on the same issue, depending on the framing. It was noteworthy not just because of the size of the framing shift on the main question, but because the shift was systematic. Roughly 18 percent of voters showed that their values are not fixed. They think like BOTH liberals and conservatives -- depending on how they understand the issue. With a liberal value-framing, they give liberal answers; with a conservative value-framing, they give conservative answers. What is most striking is that conservatively framed poll questions are all too often written by Democrats thinking they are neutral. The result is a Democratic move to the right for what are thought to be "pragmatic" reasons, but which are actually self-defeating.

The poll was conducted between March 6 and 11, 2010 and sponsored by Californians for Democracy. There were 53 questions, 800 respondents, and a ±3.5% margin of error.

Here is the background.

California is the only state with a legislature run by minority rule. Because it takes a 2/3 vote of both houses to either pass a budget or raise revenue via taxation, 33.4 percent of either house can block the entire legislative process until it gets what it wants. At present 63 percent of both houses are Democrats and 37 percent are far-right Republicans who have taken the Grover Norquist pledge not to raise revenue and to shrink government till it can be drowned in a bathtub. They run the legislature by saying no. This has led to gridlock, huge deficits from lack of revenue, and cuts so massive as to threaten the viability of the state.

Unfortunately, most Californians are unaware of the cause of the crisis, blaming "the legislature," when the cause is only 37 percent of "the legislature," the 37 percent that runs the legislature under minority rule.

I realized last year that the budget crisis was really a democracy crisis, and that a ballot initiative that could be passed by a simple majority could eliminate the 2/3 rules, replacing minority rule with majority rule. The idea was to bring democracy to California. Only two words need to be changed in the state Constitution, with "two-thirds" becoming "a majority" in two paragraphs, one on the budget and the other on revenue. The changes could be described in a 14-word, single-sentence initiative that went to the heart of the matter -- democracy. It is called The California Democracy Act:

All legislative actions on revenue and budget must be determined by a majority vote.

One would think voters would like the idea of democracy -- and a ballot initiative they could actually understand. And they do. David Binder of DBR Research recently conducted a poll showing that likely voters support it by a 73-to-22 percent margin -- a difference of 51 percent!

There were 800 randomly selected likely voters, with a ±3.5 percent margin of error -- and 53 questions. In short, it was a thorough and responsible poll.
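As a quick check, the stated margin of error is what the conventional formula gives for a sample of 800 at 95% confidence:

```python
# For 800 respondents, the conventional 95% margin of sampling error at
# p = 0.5 works out to roughly 3.5 points either way.
import math

n = 800
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(round(100 * moe, 1))  # ~3.5
```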

In California, the Attorney General gets to write the "title and summary" -- the description of the initiative that actually appears on the ballot. At present, the Attorney General is Jerry Brown, who is running for Governor. He had announced that he was against getting rid of the 2/3 rule for taxes, though in favor of a majority for budget alone. The result would make Democrats responsible for the budget, but with no extra money to put in it, they would be presiding over the further decline of the state.

When the Democracy Act came across Brown's desk, he personally penned the following title and summary:

Changes the legislative vote requirement necessary to pass the budget, and to raise taxes from two-thirds to a simple majority. Unknown fiscal impact from lowering the legislative vote requirement for spending and tax increases. In some cases, the content of the annual state budget could change and / or state tax revenues could increase. Fiscal impact would depend on the composition and actions of future legislatures.

Instead of the original initiative text, Brown's wording would appear on the ballot if it qualified, and would have to appear on all petitions. This wording uses the word "taxes" three times, paired with the verbs "raise" and "increase," as well as the conservative phrase for vilifying liberals, "spending and tax increases."

When DBR Research polled voters on both the original initiative text and the Brown title and summary (all respondents saw both questions, but the order was randomized), the results came out as follows:

                                                     Support            Oppose      Difference

Original initiative text                    73%                  22%               +51%

Brown title and summary              38%                  56%             -18%

The Brown wording shifted the result by 69 percent! The largest shift Binder had ever seen.

But this was not mere wording. I had expected a large shift, but the neural theory behind my cognitive linguistics research had made a deeper prediction: Many voters have both conservative and liberal value-systems in their brain circuitry, linking each value-system to different issues. Each value-system, when activated, shuts down the other, and each can be activated by language. The prediction was that this shift was systematic, tied to value-based ideas -- not just a matter of one wording or another.

A second prediction was made from long experience. After a strong attack from the right, a liberal poll advantage on an initiative can be expected to drop by around 10 percent.

Brilliantly, the DBR poll both tested for the systematic effect and simulated the effect of a right-wing attack. The systematic effect was tested by a battery of pro-arguments followed by a battery of con-arguments, each in distinct wording. Right after the con-arguments, the original wording and the attorney general's title and summary were tested again.

                                                     Support            Oppose      Difference

Original initiative text                    62 %                  34 %             +28 %

Brown title and summary              43 %                  52 %              -9 %

                                                                                                            37 % shift

As predicted, in the face of the con-arguments, the 73-to-22 percent advantage for the original initiative dropped to a 62-to-34 percent advantage, a loss of 11 points in support but still a 28-point advantage. The attorney general's wording also lost ground after the pro-arguments, going from 38-to-56 percent before the arguments to 43-to-52 percent after, a 9-point drop in the opposition margin under the attorney general's language, about as expected. The total shift after the arguments, from +28 to -9, is 37 percent.
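For clarity, the "shift" throughout is just the difference between the two margins (support minus oppose), which can be checked directly against the tables:

```python
# The shift is the gap between the margin under the original wording and the
# margin under the Brown title and summary, before and after the arguments.
def margin(support, oppose):
    return support - oppose

shift_before = margin(73, 22) - margin(38, 56)   # +51 - (-18) = 69
shift_after = margin(62, 34) - margin(43, 52)    # +28 - (-9)  = 37
print(shift_before, shift_after)                 # 69 37
```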

The current explanation of the shift is as follows. There are two political value-systems that voters have; call them Pro and Con. (You might think of them as Progressive and Conservative, though no overall views are tested in the poll.) About 40-to-45 percent have a consistently Pro worldview. About 35-to-40 percent have a consistently Con worldview. About 18 percent have BOTH worldviews, and the understanding provided by language can trigger one or the other, resulting in a shift.

Now things get really interesting. The DBR poll found a way to test this explanation. The respondents to the poll were asked if they found the pro- and con-arguments convincing or unconvincing. On the battery of pro-arguments, an average of 57 percent found the pro-arguments convincing and 38 percent found them unconvincing.

On the battery of con-arguments, 57 percent found the con-arguments convincing and 41 percent found them unconvincing. The same high percentage -- 57% on average -- who were convinced by the pro-arguments were also convinced by the con-arguments! As in the shift found in support for the initiative, the wording of the arguments resulted in a shift of about the same magnitude: on the pro- and con-arguments it was 35 percent -- well within the ±3.5% margin of error of the shift on the initiative itself.

                              Convincing    Unconvincing    Difference

Pro-initiative arguments          57%            38%            +19%

Con-initiative arguments          57%            41%            -16%

                                                           35% shift

This result fits the explanation given above: about 40-to-45 percent are consistently Pro and about 35-to-40 percent consistently Con, with about 18 percent having both Pro and Con worldviews -- and shifting, depending on how language leads them to understand the issue. A large majority of voters stay the same, but a value-shift among about 18 percent of voters makes for a huge "public opinion discrepancy" of around 36 percent.
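
The arithmetic behind that discrepancy is worth making explicit. A minimal sketch, taking only the 18 percent figure from the poll:

    # If roughly 18 percent of voters hold both worldviews and move from the
    # Support column to the Oppose column when the framing changes, support
    # falls by 18 points while opposition rises by 18 points at the same time.
    shifters = 18

    swing_in_margin = 2 * shifters
    print(swing_in_margin)   # 36 -- in line with the 35-to-37 point shifts observed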

What is public opinion on the initiative? It depends on what the initiative is taken as saying. Is it about democracy and majority rule or is it about raising taxes? Overall, public opinion is very favorable on one understanding and very unfavorable on the other.

Is there a fact of the matter? Is one understanding more true than the other?

At this point, the DBR Research poll gets even more interesting. When a voter hears "raise taxes", he or she usually understands the phrase as meaning "raise my taxes." In short, there appears to be a difference between what the words say and what the voter taking the poll understands. Technically, plugging a tax loophole previously given to certain corporations can be seen as "raising taxes" since those corporations would now be paying their fair share instead of a previously reduced amount. Charging oil companies for the oil they take out of the ground in California is called an "oil severance tax." But such actions would not be "raising taxes" on any individual.

This raises the question of whether the attorney general's title and summary was misleading. When it said "raise taxes", were most voters misled into thinking it meant raising their taxes?

The DBR poll found a way to test this. It asked the following question:

Some experts on the state budget say that enough money to solve the budget crisis can be raised without raising taxes on those in the lower or middle income brackets. Instead, tax loopholes for corporations can be closed and a fee can be assessed to oil companies for extracting their oil from the land. Do you support or oppose solving the budget crisis by closing tax loopholes on corporations and charging oil companies an extraction fee without raising taxes on lower and middle income Californians?
The response: Support -- 62 % Oppose -- 34%

In short, most Californians, those hurting most in the lower and middle income groups, are not opposed to raising taxes in general. They just think they are already paying fair taxes. What does this mean for the shifts we have seen toward the Attorney General's title and summary, which says that the initiative is about "raising taxes"? It means that most voters are misled by the language into thinking that the initiative is about raising their taxes.

For this reason, I have resubmitted the California Democracy Act, asking Attorney General Brown for a new title and summary, one that does not mislead the voters.

(Since California ballot initiatives must include a "fiscal impact" statement, my suggestion to the Attorney General includes this description: "Fiscal impact is unknown. It will depend on how voters choose the majority of legislators and how those legislators vote." The idea is that that's how things would work if there were democracy and the majority of voters had a voice).

Do most voters really care about democracy? Hardened Democratic political leaders told me they didn't believe it. They thought voters only cared about their pocketbooks. So DBR Research tested this as well. The poll asked voters if they agreed or disagreed, as follows:

In a democracy, a majority of legislators should be able to pass everyday legislation.

Agree -- 71 % Disagree -- 24 %

In a democracy, a minority of legislators should be able to block everyday legislation.

Agree -- 25 % Disagree -- 68 %

In short, voters do care overwhelmingly about democracy.

The DBR Research poll is remarkable, and brilliant in many ways. But to see its true significance, one should compare it to other polls, supposedly on the same issue.
In the spring of 2009, when I first thought of this initiative and started discussing it in public, I was told over and over that polls were taken and that my initiative didn't poll. I heard it first from a state senator, then from a powerful official in the State Democratic Party, then from the political directors of various unions who had spoken with that party official. They were against my initiative on the grounds that it couldn't win, supposedly because it didn't poll. Perhaps the most influential of these polls was one by someone I will call the Other Pollster, taken just after I had submitted the California Democracy Act to the Attorney General.

(Incidentally, I am not identifying the individuals involved because the issue is not about individuals. As we shall see, the other pollster, the party official, and the political directors were acting normally, all too normally.)

Here is what the Other Pollster, in his summary, referred to as the "direct question."

Would you favor or oppose allowing the state legislature to increase taxes by a majority vote rather than the current two-thirds vote requirement?

Favor -- 35 percent Oppose -- 62 percent

Notice the assumptions built into the question: "allowing the state legislature to increase taxes." Again, "increase taxes" will be heard as "raise your taxes," and "allowing" suggests that the legislature wants to, will be able to, and will raise your taxes.

The Other Pollster also asked a slightly different version of this question (emphasis from the summary):

Regarding taxes and government, would you prefer less government and
lower taxes, or SLIGHTLY HIGHER TAXES FOR BETTER GOVERNMENT SERVICES ?

LESS GOVT/LOWER TAXES. ........ 59 % BETTER GOVT/HIGHER TAXES. ..... 41 %

The results are what we would expect.

The Other Pollster was also asked by the party official to see if the California Democracy Act had any serious support.

The question the Other Pollster asked on the poll embedded my initiative language into the linguistic frame, "Some people say ......... Do you agree or disagree with this viewpoint?" It was the only question embedded into this particular linguistic frame.

Notice that this frame presents a contrast between "some people" and "you," introducing a bias against whatever fills the "...". In addition, "some people" indicates a minority opinion, which introduces a second bias. Third, he referred to it as "this viewpoint," distancing it from the person taking the poll (it is only a "viewpoint") -- a third bias.

Here is his question and result:

Some people say that "all state legislative actions on revenue and
budget issues should be determined by a majority vote." Do you agree
or disagree with this viewpoint ?

AGREE. ............ 51 DISAGREE. .......... 43

Even in that triply-biased frame, the original initiative language about majority rule came out ahead by 8 percent, while the language about raising the respondent's taxes came out between 27 and 18 percent behind -- shifts of 35 to 26 percent.

The Other Pollster noted the shift, but concluded:

"the question of simply lowering the two-thirds budget approval threshold to a majority vote, without any conditions, was asked two ways:

• 35% of voters supported, and 62% opposed, the direct question of "allowing the state legislature to increase taxes by a majority vote, rather than the current two-thirds vote requirement."

• 51% of voters agreed, and 43% disagreed, with the "Lakoff" question which read: "All state legislative actions on revenues and budget issues should be determined by a majority vote?"

Neither one of these 2 concepts meets the initial 60% voter support threshold needed to withstand the onslaught from a well-funded opposition campaign.

The difference between the "Lakoff question" and the "direct question" can largely be explained by recognizing that the Lakoff question which read: "all state legislative actions on revenues and budget issues shall be determined by a majority vote" (51% support), did not fully convey the real consequences to voters that the Lakoff language would mean: "allowing the state legislature to increase taxes by a majority vote rather than the current two-thirds vote requirement" (35% support).

On subjects like taxes, it can be dangerous to assume that voters can be moved to vote differently from their true beliefs by using cleverly crafted language."

First, the Other Pollster does not mention the question he actually asked, using the some-people-say frame. Second, he assumes that the "direct question" is the one that does not mention democracy or majority vote, but rather the one that assumes that "the legislature" wants to, would be able to, and would increase the respondent's taxes. This is misleading, not "direct," for reasons discussed above. He calls this the "true belief" of the voters. Third, he suggests that asking about democracy and majority rule is "cleverly crafted language" to "move voters to vote differently from their true beliefs."

If you take the Other Pollster's poll and his description of the results at face value, you might very well think that the California Democracy Act "does not poll" when it, in fact, polls 73 percent on the first pass and 62 percent right after a barrage of right-wing attacks.

Why do the Other Pollster's poll and poll description look that way, and what does that say about the Democratic leadership that commissioned the poll and believes the Other Pollster's description of his results?

What Does All This Mean?

Polls have come to matter, in at least four ways.

First, the issues matter. The issue here is the future of California and whether a minority of ultra conservatives will continue to bankrupt the state government purposely to keep it from meeting desperate public needs. In short, the issue is as serious as any issue in public life. And the question "Does it poll?" becomes literally a matter of life and death for many people, and of impoverishment and suffering for others.

Second, what the Other Pollster calls the "direct questions" and "true beliefs" are the radical conservative ideas about taxes that conservatives have put forth misleadingly year after year. Here Democrats have been so whipped for so long that they accept conservative framings as simply "true beliefs." What happens when those Democrats are confronted with a question about simple democracy and majority rule, rather than the minority rule that they and the majority of citizens have been suffering under? They cave. When such Democrats see a statement that they actually believe in and wish would happen, they see it as only "cleverly crafted language." The Democratic leadership in California has come to believe a false Republican view of reality, to own it and promote it, and to help make it real. Through polls.

Third, it is rare for polls to discuss what DBR called the "33-percent discrepancy group" -- that is, the people who have TWO distinct value systems applied to different ideas (e.g., democracy vs. additional taxes on them) and who shift depending on the ideas expressed in the language of the poll. These voters need to be studied, isolated as a culturally important demographic group, and taken into account in future polls. This may involve admitting that there may not be such a thing as an overall "fixed public opinion" that includes this significantly large group. Polls should be detecting public understanding -- and studying voters with dual value-systems is crucial if the value-shifters are to be identified and understood.

Fourth, the word "taxes" is not neutral or objective. It has been hijacked by the right. By virtue of their communications system, they have changed the framing of the word to mean, according to radical conservative doctrine, "money that individuals have earned without government help that is taken out of their pockets by the government and given to people who haven't earned it and don't deserve it." For many voters, "taxes" has come to be a word defined by the Con ideological worldview, able to activate that worldview in the approximately 18% of voters who switch, depending on language. The last thing Democrats -- or independents -- should be doing is using language that activates a Con worldview and whose effect is to create a shift to the right. It is unfair. In this case it goes against democratic principles. And politically, it is shooting oneself in the foot.

It is for this reason that I have chosen the word "revenue." "Revenue" is a neutral word in that it has no such doctrinal meaning. It is a word that comes from business. To run a business, you need revenue; and the same is true of running a government. It is just false to think that the use of the word "taxes" is neutral or objective. In the poll questions cited, that right-wing doctrinal meaning is sneaked in, misleadingly.

Finally, these results show the effectiveness of the radical conservative communication system operating 24/7, using the same effective framing year after year. It operates on an unconscious level, slowly changing the brains of those engaged (on either side) in the discourse that the conservatives define. Their communication system is so effective, and Democratic leaders have to deal with it so often, that they too can get taken in.

This poll revealed that, in California on this issue, 18% of the likely voters were value-shifters, that is, they seem to have BOTH worldviews. Given that Democrats have 63% of the seats in the legislature at present, that means that the 18 percent has been voting in the Democratic column, either as Democrats or independents. But if they have BOTH worldviews, that means they are susceptible to conservative arguments in conservative language, and could shift, as happened in the case of Scott Brown's election in Massachusetts. Democrats cannot take value-shifters for granted. They have to identify them and convince them using value-based language of their own.

The results of this poll go AGAINST the idea that such voters are "in the middle" and that one can appeal to them by moving to the right. The use of the language of the right can move them to think like conservatives, and hence to vote like conservatives.

I am a cognitive scientist and a linguist, and have been applying what has been learned in those disciplines to our politics. I have been arguing over the past decade and a half that progressives need to build a communication system of their own to (1) express the values they really believe in, (2) communicate the truth, (3) use their own values-based language to show the moral significance of those truths, and (4) avoid communicating conservative beliefs they do not hold, especially by avoiding the language of conservatism. The poll results just discussed reflect the failure of progressives to do so.

Pollsters have an awesome responsibility. I see the DBR Research poll as a model for carrying out that responsibility. And I have chosen to discuss that poll at length because of the general lessons it has to teach.


McDonald: Does Enthusiasm Portend High Turnout in 2010?

Topics: Gallup , Likely Voters , Nate Silver , Turnout

This guest contribution comes from Michael McDonald, an Associate Professor of Government and Politics in the Department of Public and International Affairs at George Mason University and a Non-Resident Senior Fellow at the Brookings Institution.

As Nate Silver notes, a recent USA Today/Gallup poll finds that 62% of registered voters say they are "more enthusiastic than usual about voting" in the upcoming midterm elections.

Nate focuses his attention on differential enthusiasm between Democrats and Republicans. Republicans appear more enthusiastic than Democrats, but enthusiasm among partisans of both stripes is at record levels in Gallup polling for a midterm election. I'd like to focus on a different question: What does this level of enthusiasm potentially tell us about voter participation in the 2010 November elections?

This 62% is indeed the highest level of enthusiasm among registered voters in a midterm election since Gallup began asking the question in October, 1994. The next highest level, 49%, was recorded in a June, 2006 poll, a difference of 13 percentage points.

[Image: 2010-04-08-McDonald_image001.png]

USA Today notes that this is "a level of engagement found during some presidential election years but never before in a midterm." Indeed, this is the case. Looking back at the same question asked in presidential elections since 1996, enthusiasm peaked at 69% in June, 2004 and again at 69% in October, 2008. At a similar point in February, 2008, 63% of registered voters said they were more enthusiastic than usual about voting in that election.

[Image: 2010-04-08-McDonald_image002.png]

The enthusiasm question appears to tap into underlying voting propensities. Voter turnout rates among those eligible to vote have been relatively stable in the 1994, 1998, 2002, and 2006 midterm elections, as has the self-reported enthusiasm measure. In presidential elections, enthusiasm appears to be related to voter participation. Turnout rates have increased from a low point in 1996 to progressively higher levels in 2000, 2004, and 2008, along with the enthusiasm measure.

[Image: 2010-04-08-McDonald-Turnout-Rates.png]

If this high enthusiasm for congressional elections translates into voter turnout rates similar to those of recent presidential elections, it would be exceedingly rare. In the course of U.S. history, midterm turnout rates exceeded presidential turnout rates only at the time of the country's Founding, when Congress was the preeminent branch of government and when presidential elections were occasionally not contested or presidential electors were still occasionally selected by state governments. Over the past century, midterm turnout rates have been on average about 15 percentage points lower than in contemporaneous presidential elections. History tells us that it is unlikely that the 2010 midterm turnout rate will equal recent presidential turnout rates of 60%+ of those eligible to vote.

Still, absent any knowledge about enthusiasm, we might expect turnout rates to increase in 2010. The long-term pattern has been for midterm turnout rates to generally move with presidential turnout rates, and the recent increase in presidential turnout has occurred without a corresponding breakout to the upside in the midterm rates. Looking back to the 1960s, the aggregate election data alone might lead us to expect midterm turnout to rise to near 50% in 2010.
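
As a back-of-the-envelope illustration of that logic, here is a rough sketch using only figures cited in this post; the exact recent presidential turnout value is an assumption standing in for the "60%+" mentioned above, not a measured number.

    # Naive projection: recent presidential turnout minus the historical
    # presidential-midterm gap. Numbers are rough, for illustration only.
    recent_presidential_turnout = 61.0   # assumed stand-in for the "60%+" cited above
    historical_midterm_gap = 15.0        # average gap over the past century (see above)

    naive_2010_midterm = recent_presidential_turnout - historical_midterm_gap
    print(naive_2010_midterm)            # ~46 -- the upward trend in midterm turnout
                                         # is what pushes the expectation toward 50%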

Further tamping down expectations is the fact that the 39% level of enthusiasm in the October, 2000 survey is on par with the 41% in October, 1998 and the 41% in October, 2002, yet the turnout rate in that presidential election was still approximately 15 percentage points higher than in either of those midterm elections. Indeed, the lowest level of enthusiasm, 17%, was registered in the October, 1996 survey. The 1996 presidential turnout rate of 51.7% is a modern low, but it still easily exceeds any recent midterm election.

This disconnect may have something to do with the question wording. The question asked is, "Compared to previous elections, are you more enthusiastic than usual about voting, or less enthusiastic?" Note that the question asks respondents to refer back to previous elections as a comparison point. It may be that respondents are thinking about comparable midterm or presidential elections when answering, rather than a baseline enthusiasm that could be compared across different types of elections.

There is one further caveat to consider. The presidential data show that this enthusiasm may swiftly wane. In 2008, voters' enthusiasm in the primaries faded by summer, dropping from 63% in February to 48% in June, before peaking again at 69% in October as the election neared. The enthusiasm observed at this point may be a product of circumstances that will not be sustained until November. Then again, even if enthusiasm wilts in the summer, it may well perk up again as November draws near.

At this point, the most reasonable conclusion to draw from the totality of the evidence is that turnout in 2010 will most likely exceed the 41.4% of 2006, and if these current conditions hold the turnout rate may come in just shy of 50%.


Lundry: Graphing the Stimulus

Topics: Alex Lundry , Charts , data visualization , Edward Tufte

Alex Lundry is a political pollster, microtargeter, data-miner and data-visualizer. He spends most of his time searching for big ideas hidden inside of big data. He has visualized historical tax receipts, White House visitor logs, ideological estimates of Supreme Court justices (called a "very cool graphic" by the Washington Post), and hundreds of thousands of survey interviews. In 2009, Politics Magazine named him a "Rising Star."

President Obama's recent appointment of Yale professor Edward Tufte to the independent commission charged with tracking stimulus funds underscores the growing importance of data visualization in both public policy and political debate.

Tufte is inarguably the modern era's leading authority on data visualization, the transformation of raw data into graphical form. These visuals - graphs, charts and other types of information graphics - are frequently responsible for stunning revelations and deep insights that might otherwise remain obscured in large and cumbersome spreadsheets or databases.

The federal stimulus is just that - an incomprehensibly enormous $787 billion piece of legislation being distributed across 50 states, 435 congressional districts, 28 federal agencies and over 160,000 individual projects. President Obama's challenge is to convincingly show the American public that their money is being well-spent.

Thanks to a neurological phenomenon called the pictorial superiority effect, the human brain is hardwired to find visualizations more compelling than a spreadsheet, speech or memo. So it's no wonder that Obama has turned to a data visualization guru for the monitoring of his administration's largest legislative accomplishment to date. Meaningful visualizations of stimulus data can make the project more transparent, accountable, and could ultimately even impact the legislation's perceived success.

Transparency, allowing the public to see the who, what, when and where behind stimulus funding, will help alleviate any perceptions of waste, inefficiency, or unfairness. Indeed, the most common criticisms of government spending are that it is unequally or unfairly distributed across communities, that it goes to unworthy projects, or that it simply isn't doing those things it was meant to do: stimulate the economy and create jobs. But states like California have already engaged with design firms to visualize the disbursement of stimulus funds, mapping dollars to projects and locations, in turn increasing voters' investment in the bill as they see its direct benefits to their community.

Data visualization can also make the federal stimulus more accountable, revealing fraud, abuse or even honest mistakes. A case in point: the public outcry over the recent revelation that stimulus funding seemed to go to congressional districts that didn't exist. This seemingly innocuous data entry error quickly became an anti-stimulus talking point, whereas a simple visualization of the data could have revealed the problem well ahead of its entry into the news cycle.

Finally, there is also great political advantage to effective visualizations of the Stimulus Act. Convincing voters of its merit will take more than declarative speeches and number-drenched spreadsheets, and the Obama administration knows this. Its appreciation for the political power of data visualization was on display last month when it released a graph of weekly job losses since December 2007. The bars, color-coded by presidential administration, tell a distinct, if debatable, story about the stimulus' impact. The visualization took the internet by storm as pro-stimulus voters shared, linked, blogged and tweeted the image, and anti-stimulus voters denounced it as infographic propaganda, all the while scrambling to create their own charts telling their side of the story.

These chart wars are only going to become more and more common in political discourse. President Obama understands this acutely - and this was certainly the subtext in appointing Edward Tufte to the stimulus board.


Erikson: Would the Health Care Bill Become More Popular After Passage? The Lesson from Medicare

Topics: Barack Obama , Health care , Lyndon Johnson , Medicare

Robert S. Erikson is a professor of political science at Columbia University.

If the health care reform bill finally passes Congress and is signed into law, what will be the response of public opinion? Would it turn out that support goes up once the public learns the details of the law, as the Democrats claim? Would Obama's image improve following successful passage? Which party would receive the net benefit?

For clues, we can turn to public opinion polls from the 1960s, both before and after passage of Medicare in June 1965. Public opinion polling was far less dense in that era, but the small set of available polls from back then (retrieved via iPOLL) reveals the following.

During the 1965 health care debate, public opinion was ambivalent on how to deliver health care to seniors. Whether a plurality favored President Johnson's public plan or the Republican alternative designed to expand private coverage depended on the exact question wording. But there was considerable popular support for Medicare when presented to the public for an up-or-down vote. In a February 1965 Harris Poll, 62 percent answered affirmatively when asked "Do you favor or oppose President Johnson's program of medical care for the aged under Social Security?"

The lesson for today is that following passage in June 1965, support for Medicare increased further. By December of 1965, the percent who told Harris they "approved" of Medicare rose to a consensus of 82 percent. Ever since, the public's support for Medicare has never been in doubt.

Perhaps even more telling, support for Johnson's handling of health care rose even as his overall popularity began to plunge. In April 1965, when President Johnson was enjoying 67 percent approval in the Gallup Poll, a similar 65 percent told Harris they favored "what [Johnson] has been doing on Medicare under Social Security." After passage, in October 1965, 80 percent of Harris respondents rated Johnson's job as "excellent" or "very good" on "working for Medicare for the aged."

The year 1966 brought a fading of Johnson's political fortune, largely due to declining support for his handling of Vietnam. By August 1966, Johnson's overall approval in the Gallup Poll had sunk to 47 percent. But in the same month, the percent in the Harris Poll who rated Johnson's performance as "excellent" or "very good" on Medicare held firm at 84 percent.

The lesson of 1960s polling can provide some encouragement to today's Democrats. If the analogy holds for today's political scene, a Health Care Reform Law of 2010 will become popular and Obama will be credited with a success in the eyes of public opinion. But like all analogies when applied to today's politics, it must be interpreted with considerable caution. Medicare was considerably more popular at the time of passage than is the current health care bill on the eve of its final vote. And Medicare's opponents at the time of passage were weaker politically than today's Republican leadership, united in opposition.


Enten: But What About the Incumbent's Margin?

Topics: 2010 , Incumbent , Incumbent Rule , Nate Silver , Senate

Harry Joe Enten is a junior at Dartmouth College and will be interning with Pollster.com this spring and summer.

Yesterday, Nate Silver posted a well-thought-out piece on why the 50% incumbent rule no longer applies. I think Nate's post is spot on, but I think he misses a potentially larger point. In his chart, you'll notice something very interesting: no incumbent from '06, '08, or '09 won when trailing by more than 1.5 points in the January-to-June average of polls. I think that points to potentially very large problems for Democrats in the 2010 United States Senate elections. Why? If current polling averages hold through June, the Democrats would be on the verge of losing the United States Senate, according to Silver's findings. What follows is a simple rundown of the top (and some not-so-top) United States Senate races involving seats held by Democrats. I apply Nate's rule of averaging all the polls available (including partisan ones). I supply a two-month average (starting in January, as Nate did) and a six-month average (matching the length of Nate's January-to-June window), when available, to try to catch short- and long-term trends. To be fair, I take only the highest-polling Republican candidates. I don't intend this to be a be-all, end-all, but the results are still amazingly scary for Democrats.
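
To make the averaging concrete, here is a minimal sketch of the windowed calculation described above; the poll numbers and dates below are hypothetical, and the helper function is mine, purely for illustration.

    from datetime import date
    from statistics import mean

    # Hypothetical polls: (end date, Republican %, Democratic incumbent %)
    polls = [
        (date(2009, 10, 15), 48, 43),
        (date(2010, 1, 20), 50, 42),
        (date(2010, 2, 10), 51, 41),
    ]

    def average_gop_lead(polls, since):
        # Mean Republican-minus-Democrat margin over all polls ending on or after `since`.
        margins = [rep - dem for end_date, rep, dem in polls if end_date >= since]
        return mean(margins) if margins else None

    print(average_gop_lead(polls, date(2010, 1, 1)))   # two-month window, as used here
    print(average_gop_lead(polls, date(2009, 9, 1)))   # six-month window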

I find six Democratic incumbents who would most likely lose re-election if the polling averages held through June. One Democratic incumbent does lead, but she is also going to have a difficult time in her fight for re-election.

[Image: 2010-02-26-enten-Dem-incumbent.jpg]

1. Arkansas- Democratic Senator Blanche Lincoln trails Rep. John Boozman by an average of 21.5 points since January, and trails Gilbert Baker by 10 points since January and 5.4 points since September. Not only is Lincoln in trouble, but her trouble seems to be getting worse by the day. Unless the polls turn dramatically (and considering Boozman is the likely Republican candidate), Lincoln is probably a goner.

2. Nevada- Senate Majority Leader Harry Reid is in major trouble. He has trailed potential Republican candidate Danny Tarkanian by an average of 7.8 points since January and 6.9 points since September, and Sue Lowden by 7.8 points since January and 8.2 points since September. Such polling and past history would argue that Reid is dead in the water; however, the emergence of Tea Party candidate Jon Ashjian has thrown the race somewhat into doubt. Reid still trails both candidates, but, with Ashjian in the race, Lowden leads by only 5 points and Tarkanian by only 1 point in the only poll including the Tea Party candidate. Still, Reid's position is precarious at best, and he would almost certainly lose to Lowden if the averages held through June.

3. Colorado- Senator Michael Bennet is not an incumbent in the traditional sense (he was appointed to the post), and appointed Senators Bob Menendez and Roger Wicker were both among the incumbents who performed significantly better than the January-to-June average of polls indicated. Bennet is also facing a primary challenge from Andrew Romanoff. If Bennet makes it out of the primary (an if, though the only poll conducted so far indicated Bennet leads), he trails Republican Jane Norton by 9.5 points since January and 9.3 points since September, Tom Wiens by 2.3 points since January and 2 since September, and Ken Buck by 2 points since January and 0.8 since September. If those leads hold (and they seem to be expanding slightly), Bennet is in major, major trouble, especially against Norton. Romanoff does not do much better; he trails Norton by 6.8 points since January and 7.7 points since September, Tom Wiens by 1.7 points since January and 1.5 points since September, and Ken Buck by 2 points in both the January and September averages. Romanoff seems to be a slight underdog, especially against Norton.

4. Pennsylvania- Republican-turned-Democrat Arlen Specter is in as much trouble as the 2010 New York Mets. He trails Republican challenger Fmr. Rep. Pat Toomey by 8.8 points in an average of the polls since January and by 4 points since September. By Silver's standard, either average would put him on life support come June. Specter is also being challenged in the Democratic primary by Congressman Joe Sestak. Specter currently leads Sestak in that primary by 20 points (a lead that is growing). If Sestak somehow won the primary, he trails Toomey by an average of 12 points and 6.9 points in polls conducted since January and September, respectively. In a Republican year, it would be very difficult for Sestak to come back from that far behind.

5. New York- Senator Kirsten Gillibrand, like Senator Bennet, is an appointed senator in trouble. Her negative net approval ratings indicate a good challenger would have a fair shot. The mostly unheard-of Bruce Blakeman trails Gillibrand by 22 points and 24.7 points in the polling averages since January and September, respectively. Potential candidate Fmr. Governor George Pataki would make it a race. He leads Gillibrand by 5.5 points in the average since January and 1.4 points in the average since September. Pataki leads the other potential Democratic candidate, Harold Ford (who has trailed Gillibrand by 14 points or more in every primary poll), by an even larger 14.8 points since January. If Pataki does get into the race, he would be a very formidable challenger. Of course, even if Pataki does not enter the race, Gillibrand's approval numbers leave her in a vulnerable position.

6. Washington- Senator Patty Murray is not the first senator you think of as in danger. The Cook Political Report, the Rothenberg Political Report, and Larry Sabato's Crystal Ball all rate this race as safely Democratic, but one Republican challenger could make it a race. Republican Dino Rossi leads Murray by 2 points in two recent polls. Rossi nearly won the governor's mansion in 2004, losing in a recount, and he lost by only 6.5 points in 2008 when President Obama carried the state by 17 points, illustrating his appeal as a statewide candidate. If the recent polls hold, Rossi could give Murray one heck of a fight.

7. California- Democrat Barbara Boxer leads her strongest challenger, Republican Tom Campbell, by an average of 5.5 points in polls taken since January. Boxer has the edge in this matchup, but the Cook Political Report and Larry Sabato's Crystal Ball rate the race only Lean Democratic. Keep in mind that Republicans Carly Fiorina and Chuck DeVore poll considerably weaker. As such, Republicans should hope that Campbell wins the nomination if they are looking for the candidate with the best shot at winning. He leads in recent primary polls.

The two- and six-month polling averages indicate that the Republicans are in a position to defeat six Democratic incumbents, and that position seems to have strengthened over the last two months. In two states, New York and Washington, they need to hope they can recruit their strongest potential candidates. If they do and the averages hold, Republicans could easily be up to 47 seats in the United States Senate.

When you combine these races with open Democratic seats, the Democratic majority looks like it could fall.

[Image: 2010-02-26-enten-incumbents.jpg]

1. North Dakota- Republican Governor John Hoeven leads all opponents by at least 21 points, and he is over 50% in all polls conducted since January. He'll win unless a divine miracle happens for the Democrats.

2. Delaware- Congressman At-Large (meaning he represents the entire state) Mike Castle leads Democrat Chris Coons by 22.7 and 20 points in polls conducted since January and September, respectively, and he has been over 50% in every poll ever conducted in this race. Coons is not as dead as the Democrats in North Dakota, but he has a very high hill to climb.

3. Indiana- Republicans Fmr. Senator Dan Coats and Fmr. Representative John Hostettler lead both Democratic Congressmen Brad Ellsworth and Baron Hill by at least 14 points in the only poll conducted since Democratic incumbent Evan Bayh announced he was not running for re-election. We'll have to see if this poll is an aberration, but the Cook Political Report already has this seat leaning Republican.

4. Illinois- Republican Congressman Mark Kirk trails Democrat Alexi Giannoulias by 0.2 points in the polling average since January, but Kirk leads Giannoulias by 0.4 points in the average since September. It could go either way.

In conclusion, I am by no means saying that the Republicans will take back the Senate; however, the polling, in conjunction with past results, indicates that it is not that long a shot. Democratic candidates have been consistently weak over the last six months, and the Republicans seem to have moved into a stronger position in the last two months. Keep in mind that in '06 and '08, Democrats pretty much swept all the hotly contested races (save Tennessee in '06 and Georgia in '08). In those years, as well as in 1994, the party that lost seats (Democrats in 1994 and Republicans in 2006 and 2008) did not win a single seat belonging to the other party.

If the national environment for Democrats does not improve, these polling averages probably will not get that much better for Democrats. And if the averages do not get better, Silver's findings show the Republicans are at least in a position to win 10 seats and take back the United States Senate.



Mokrzycki: Additional details from Wash. Post poll in MA SEN aftermath

Topics: Barack Obama , Harvard School of Public Health , Interpreting polls , Kaiser Family Foundation , Massachusetts , Scott Brown , Washington Post

A survey The Washington Post conducted in Massachusetts last week in the aftermath of the U.S. Senate special election shocker was unusual for a political poll in that it interviewed non-voters as well as voters. (See Post story and full release with topline and other links.) I consulted for the Post on this project – which was fielded in conjunction with the Kaiser Family Foundation and the Harvard School of Public Health – and with permission I’ll take a look here at Massachusetts adults who sat out the special election. I’ll particularly focus on those who said they did vote for president in 2008 – to try to assess evidence of an "enthusiasm gap" in Republican Scott Brown's victory over Democrat Martha Coakley for the seat long held by the late Democrat Edward M. Kennedy, and possible implications for the state's elections in November.

The sample of special election non-voters was small - 242 adults (sampling error plus or minus 8 points) - but it's safe to say they generally differed from voters little if at all on many questions such as the direction of the country and whether Brown should work with or mainly try to block Democrats when he gets to Washington.
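
For context, the textbook simple-random-sampling margin of error for 242 interviews works out to a bit over 6 points; the published plus-or-minus 8 presumably also reflects a design effect, though that is my assumption rather than something stated in the release. A quick sketch:

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
        # 95% margin of error, in percentage points, for a proportion near p.
        return 100 * z * sqrt(design_effect * p * (1 - p) / n)

    print(round(margin_of_error(242), 1))                      # ~6.3 with no design effect
    print(round(margin_of_error(242, design_effect=1.6), 1))   # ~8.0 with an assumed deff of 1.6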

Non-voters also were no different from voters on overall support for proposed health care reform, though they were more likely than voters to think those changes would be good for themselves and their family and for Massachusetts. Those who didn’t vote last week also may have been slightly more likely to favor a bigger role for government; only 37% of them, compared to 47% of voters, said government is doing too many things best left to businesses and individuals.

Of those 242 who did not vote in the special Senate election, 104 said they did vote for president in November 2008. We start getting into pretty big sampling errors with that subgroup, but I feel comfortable concluding that these voters -- whom for shorthand I'll call "occasional" voters -- were predominantly Democratic in their outlook:

  • Seven in 10 occasional voters said they voted for Obama in 2008 and about as many approve of how he's handling his job now (Obama got 62 percent of the Massachusetts vote in 2008, and 61 percent job approval in the Post poll)
  • Nearly half call themselves Democrats (this includes independents who lean Democratic), just one in 10 Republican
  • Fewer than three in 10 said they feel "enthusiastic" or "satisfied" about policies offered by Republicans in Washington, while nearly six in 10 felt that way about the Obama administration's policies
  • Only around two in 10 said that when Brown gets to Washington, he should mainly work to block the Democratic agenda and should stop Democrats on health care reform; nearly all the rest said he should work with Democrats.

On these and other measures, these occasional voters looked more like people who cast ballots for Coakley than Brown supporters. That suggests some people who on the whole might have been inclined to vote Democratic were not sufficiently motivated to turn out last week - evidence supporting the notion of an enthusiasm gap that the Democratic get-out-the-vote operation could not overcome.

The poll had little good news for Democratic Gov. Deval Patrick as he prepares to face the electorate himself. Just 40 percent of those who voted in 2008 but not last week approved of how Patrick is handling his job – a number not significantly different from the figure among those who did vote last week (36 percent). And should Democratic-oriented voters remain less inclined to turn out in November, that obviously could hurt other Democrats on the ballot, for U.S. House and other offices.


Mokrzycki: Are MA Senate Polls Prone to Non-response Bias?

Topics: 2010 , Likely Voters , Martha Coakley , Massachusetts , non-response bias , representativeness , Scott Brown

Mike Mokrzycki is an independent consultant who was the founding director of the Associated Press polling unit. He may be reached at mike@mikemokr.com. His guest contribution is cross-posted from his blog, MJM Survey Musings.

One thing is certain about the polling in the last days before Tuesday's special election in Massachusetts to fill the late Ted Kennedy's U.S. Senate seat: Someone's going to end up being very, very wrong.

Polls completed in the past week and recorded at Pollster.com range from a 14-point lead for Democrat Martha Coakley - just weeks ago considered a shoo-in in heavily Democratic Massachusetts - to a 15-point advantage for Republican Scott Brown, who has become a darling and major fund-raising beneficiary of conservatives nationwide.

I'm not going to do a deep methodological dive into all these polls to try to explain the differences. Pollster.com and Fivethirtyeight.com have done their usual stellar job with that already, including analyzing the extraordinary uncertainty inherent in trying to determine who really will vote in this mid-January special election.

I will try to provide a little perspective as someone on the ground in Massachusetts who also knows a thing or two about polls.

My hypothesis: While Brown supporters clearly are more enthusiastic than Coakley backers, that may serve him relatively better in the pre-election telephone polls than it will Tuesday.

I've lived in Massachusetts on and off since 1980 and I can't ever recall Republicans here as energized as they are now. Sure, they had a 16-year run in the governor's office despite the state's overall leftward tilt. But Bill Weld, elected in 1990, was fairly unusual - socially liberal enough that "Weld Republican" became its own label. Paul Cellucci sure didn't inspire a lot of passion and I can't say Mitt Romney did either, with his eye on the White House all along. Brown seems an agile campaigner but I don't think his personal charisma is what's charging up Republicans here and elsewhere; rather, it's the once almost-unthinkable notion that any Republican might actually win the seat Ted Kennedy held for nearly half a century, especially with such extremely high stakes for policy and politics nationally.

This enthusiasm is abundantly evident in internal data from numerous polls. I'd add a couple anecdotes:  I don't put stock in lawn signs but when you see a voter (like someone on the main street in my town) posting a handmade Scott Brown placard, or an ice cream stand using its roadside sign to advertise "VOTE FOR SCOTT BROWN," it may be an indication something beyond rote partisanship is at work.

This race has been Coakley's to lose, and she's seemingly been doing her best to do that. The most recent example was in a radio interview the other day when she called Curt Schilling - famous for helping pitch the Boston Red Sox to a long-awaited World Series championship in 2004 on an ankle stitched together and visibly bleeding through his sock - a New York Yankees fan, of all things. In little more than the time it used to take a Schilling fastball to reach the plate, his recorded voice was on my phone telling me this faux pas was proof Coakley was out of touch with Massachusetts voters. Silly, weighed against the import of issues such as health care reform? Perhaps. But - last baseball metaphor, I promise - Coakley served up a big fat meatball and I sure don't blame the Brown campaign for hitting it out of the park.

Schilling's was one of countless phone calls we've gotten on this race since before the primaries last month. Many have been "robo-calls" like his (as I write this paragraph I just got one from Brown's daughter), though plenty feature live human beings (like someone from Coakley's phone bank who called as I started writing this post).

At this point it's hard to blame people in Massachusetts for screening incoming calls even more than usual. For years there's been plenty of screening, part of the reason why response rates for all kinds of telephone polls have declined dramatically. (An article in the Winter 2009 issue of the journal Public Opinion Quarterly (subscription required) gives response rates for numerous respected telephone polls it cites, and many of them barely crack 10 percent. A response rate greater than 20 percent now is extraordinarily good.) Response rates are even lower for automated polls, which use a recorded voice for interviews and require respondents to punch in answers on the touchtone keypad.

But - and this is an important "but" - a growing body of research indicates decreasing response rates have not hurt the accuracy of survey estimates. Accuracy holds up as long as there's no systematic difference between those who cooperate and take the survey and those who decline.

I'm thinking the Massachusetts Senate race may be a case where we do see non-response bias in surveys. It comes down to relative enthusiasm for the candidates. It's tough to prove, but I'd venture a guess the dynamic works like this:

  • Republicans are excited Brown might win and thus more likely to answer their phone and listen to political messages - and possibly be invited to take a survey - when the phone is practically ringing off the hook with such calls. I suspect they'd be particularly enthused to participate in a poll and tell the world they're voting for Brown, to help build the sense that he has unstoppable momentum. These folks certainly will vote, but their enthusiasm offers no upside for Brown's election-day numbers compared to the pre-election poll estimates.

  • Democrats may be demoralized and scared after several weeks of Coakley campaign missteps and bad headlines. They may not be all that eager to pick up the phone for political calls. They also might be more skeptical of or angry about polls since they've been such downers for Coakley and President Obama lately, and thus, I would speculate, more likely to take a pass if invited to participate in one. None of that means these folks are less likely to vote, though - by now any sentient Democratic-leaning voter will know Coakley needs all their votes, and what's at stake. They might not be happy about how Coakley has run her campaign but they'll still be motivated to vote by a desire to deny a Republican the chance to do serious harm to Obama's agenda from Ted Kennedy's old seat. Obama is in Massachusetts today to remind them of exactly that (not that they're necessarily all that enthused about him at this point, either).

Of course, truly independent or "swing" voters are another vital factor, and if Brown wins enough of them he could overcome the inherent Democratic advantage in Massachusetts.  But I'd think enthusiasm, or lack of it, would be more of an issue among stronger partisans.

In pollster speak, what this boils down to is "differential non-response," where one candidate's supporters are more likely than the other's to take a survey.  It's suspected to be a big reason why exit polls in recent years have tended to overstate support for Democratic candidates.  In the Massachusetts special Senate election I suspect it's inflating the Republican's poll numbers. Coakley has room to outperform the polls Tuesday even if her natural base is motivated by nothing more than fear of what would happen if her opponent pulls off an historic upset.
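
Here is a minimal sketch of how that kind of differential non-response would skew a poll estimate; the response rates below are invented purely to illustrate the mechanism, not measured values.

    # Assume a dead-even electorate, but Brown supporters answer and participate
    # at a higher rate than demoralized Coakley supporters.
    true_brown_share = 0.50
    true_coakley_share = 0.50

    brown_response_rate = 0.12      # invented for illustration
    coakley_response_rate = 0.09    # invented for illustration

    brown_completes = true_brown_share * brown_response_rate
    coakley_completes = true_coakley_share * coakley_response_rate

    poll_brown_share = brown_completes / (brown_completes + coakley_completes)
    print(round(100 * poll_brown_share, 1))   # ~57.1 -- the poll overstates Brown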


Taylor: Were the Benchmarks Wrong?

Topics: Harris Poll , Internet Polls , Opt-in internet polls , Sampling

Humphrey Taylor is chairman of the Harris Poll at Harris Interactive, which conducts surveys on the internet.

I have read Yeager and Krosnick's recent, well-researched essay on this subject with great interest. It was written in response to my comments (of October 26) on their paper, posted in August 2009, comparing the accuracy of RDD telephone surveys and Internet surveys conducted with probability and non-probability samples.

In their new essay Yeager and Krosnick provide evidence to refute my two criticisms of their original paper.

"Consistency"

My first criticism was that the data they presented, even if completely accurate, did not show that the "RDD telephone data was consistently more accurate than the non-probability surveys." Yeager and Krosnick agree with me that Harris Interactive's data points are closer to the benchmarks on two of the six items they used, by 2.64 and 0.56 percentage points. They argue that the word "consistently" was justified because these differences are small (and they are). So this is really a question of semantics. The Oxford English Dictionary defines "consistently" as "uniformly, with persistent uniformity." If the RDD sample had produced more accurate data on six out of the six variables, that would be consistently more accurate; four out of six is not.

Social Desirability Bias

Yeager and Krosnick agree with me that "Internet surveys are less subject to social desirability bias than are surveys involving live interviewers," and provide some useful references to support this conclusion. However, they argue that "the measures of smoking and drinking we examined were not contaminated by social desirability bias."

Smoking and Drinking

The authors provide several hypotheses, other than social desirability bias, that might explain why Harris Interactive's online surveys found more drinkers and smokers than the benchmark survey and the RDD survey, both of which involved live interviewers. For example, they suggest that "perhaps the people who agreed to participate in the opt-in Harris Interactive Internet surveys generally possessed the studied undesirable attributes at higher rates than did respondents to the RDD sample." This is possible, of course, just as it is possible that Harris Interactive's online respondents are much more likely to be gay or lesbian, and less likely to give money to charity, clean their teeth, believe in God, go to religious services, exercise regularly, abstain from alcohol, and drive under the speed limit. However, this hypothesis sounds very much like the argument used by the tobacco industry for thirty years or more that the correlation between smoking and lung cancer could be because those prone to this disease were more likely to smoke.

Yeager and Krosnick also address the evidence I quoted from the federal government's NHANES survey, which found that, based on blood samples, more people had apparently smoked than admitted to smoking cigarettes when they were interviewed. The authors present several hypotheses to explain this difference, all of which may be true but none of which are proven. It is surely true, as they suggest, that part of the increase is due to people using tobacco in ways other than smoking cigarettes. But they also argue that the data from the blood samples cannot be used as a check on respondents' answers because for most respondents there was a gap of "between two and nine weeks" between the interview and the drawing of the blood sample, and smoking behavior may have changed during this time. If so, this would be a big increase in the number of smokers over a short time, and such a trend, if it continued, would rapidly increase the number of adult smokers, which has not happened.

As I suggested at the beginning, I am impressed by Yeager and Krosnick's research on the literature on this topic. Furthermore, I concede that I have not proved that social desirability bias is the only possible explanation for the differences between our online survey data and the live interviewer surveys on smoking and drinking (including our own). However, Yeager and Krosnick have not proved my hypothesis is wrong and their explanations for these differences are also hypothetical and, I submit, less plausible.

The 7 "secondary demographics"

This was not part of my argument about "were the benchmarks wrong?" but it was in the original paper by Yeager and Krosnick and was referenced again in the authors' reply, so a few comments may be useful. The seven variables were picked by the authors from a long list that they might have used. Had they chosen other variables, the results might have told a different story, but we do not have those data. The average errors involved were modest (3.0 and 1.7, respectively) and the difference between the two samples was small. One of the seven variables was the number of adults in the household, a variable for which Harris normally weights; I am not sure why it was not weighted in this survey. By far the biggest error in the Harris survey was for people in households with incomes between $50,000 and $60,000 (why that particular bracket and not others?). Replies to questions about incomes are notoriously unreliable, and here again social desirability bias may well be at work.

One other thing

At the risk of extending this dialogue, there is one other important point that should be made about the research on which Yeager and Krosnick have based their paper and their conclusions.
They reported that the RDD telephone survey used in these comparisons was very different from the typical telephone surveys used by any of the published polls. It was in the field for six months, non-respondents were offered a $10 incentive to participate, and it achieved a 35.6% response rate. In other words, the sample was presumably much better than the samples used in all the published telephone polls, which do not pay incentives, are usually in the field for only a few days, and achieve much lower response rates. Even if the RDD survey used by the authors had been more accurate than our online poll (which, of course, I dispute) it would say nothing about the accuracy of the RDD telephone polls published in the media.


Usher & Omero: Hey Pollsters, Time to Make a Better Chart!

Topics: Charts , data visualization , Pollsters

Doug Usher is Senior Vice President and Research Director at Widmeyer Communications.

Margie Omero is President and founder of Momentum Analysis LLC and a frequent Pollster.com contributor.

Back when the two of us collaborated on polling presentations (in the mid-to-late 1990s), PowerPoint had more competitors and transparencies were as common as LCD projectors. Even today, it can sometimes take more technical savvy than it should to create a slide that's both legible and informative. But presenting data in an understandable visual way is one of the most important things pollsters do for their clients. While pollster.com usually chooses to simply lead by example on this topic, we thought we'd have some back-to-work fun.

Inspired by this comment from a couple of months past, we've selected a few examples of (frankly) awful charts. Certainly we're not perfect. But it's time to take a stand! Clients pay us to help them use data to build effective strategies - and part of our job is to present graphics that illuminate, not confuse and distract. To paraphrase a famous political philosopher: pollsters of the world, unite - the only thing you have to lose is your outdated and clunky templates.

To this end, below are a few examples of subpar graphics from mainstream polling firms - to give a sense of just how far we have to go as a profession. To protect the guilty, we've obscured references to the pollsters but have kept everything else intact.

This is just a start - do you have some better examples of graphic crimes by pollsters?

EXAMPLE A: Getting too much interpretive analysis out of very little data:

Example A.jpg


EXAMPLE B: Color schemes that add no insight.

Example B.jpg


EXAMPLE C: Do big numbers convey your point more effectively?

Example C.jpg


EXAMPLE D: Are two graphics always better than one?

Example D.jpg


EXAMPLE E: Really?

Example E.jpg


Here are a few "action items" for pollsters to think about as they put together charts for presentations.

  1. What is the point of your chart - and what data are critical to making that point? Try to include exactly the information needed to make your point - not too little and not too much.
  2. Is it accessible to a non-pollster? The goal of an effective presentation is for it to be passed along to (and understood by) many.
  3. Are there extraneous slides/data that can be effectively summarized in a few words? Yes, a picture is worth a thousand words, but in too many presentations hundreds of numbers can be replaced by a few summary points about significant subgroups.
  4. Is every additional color, font, chart type and piece of clip-art required to make an additional insight? If not, then refrain. What once was seen as "plain" is now more likely to be viewed as "crisp" and "concise."
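
As a minimal sketch of these action items in practice - offered purely as an illustration, not an endorsement of any particular tool, and using made-up numbers - a bare-bones Python/matplotlib chart might look like this: one message, one chart type, direct labels, no decoration.

    # A minimal sketch with hypothetical data; the point is restraint, not the tool.
    import matplotlib.pyplot as plt

    labels = ["Approve", "Disapprove", "Unsure"]   # hypothetical toplines
    values = [46, 44, 10]

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.barh(labels, values, color="0.4")              # one neutral color
    for y, v in enumerate(values):
        ax.text(v + 1, y, str(v) + "%", va="center")  # label the bars directly
    ax.set_xlim(0, 60)
    ax.set_xticks([])                                 # drop the redundant axis
    for side in ("top", "right", "bottom"):
        ax.spines[side].set_visible(False)            # no chart junk
    ax.set_title("Job approval among registered voters (hypothetical)")
    fig.tight_layout()
    fig.savefig("approval.png", dpi=150)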

Let's make it a New Year's resolution: simpler, clearer data slides!


Mokrzycki: Cord-cutting Continues at Steady Pace

Topics: CDC , Cell Phones , Economic Issues , Probability samples , Sampling , Young Voters

Mike Mokrzycki is an independent consultant who has studied implications of the growing cell-phone-only population for survey research. He was the founding director of the Associated Press polling unit. He may be reached at mike@mikemokr.com.

Sometimes a study is more intriguing for what it doesn't find than for what it does. That's the case with the latest federal estimates, released this morning, of how many Americans can no longer be reached by landline telephones.

First, what the semiannual update from the Centers for Disease Control did find: steadily worsening news for surveys that exclude cell phones. Americans keep abandoning landline phones at about the same pace as in the last couple of years - in the first half of 2009, 21.1 percent of adults lived in households with no landline, up from 18.4 percent in the second half of 2008. By a slightly different measure - particularly relevant to random digit dial surveys using households as a sampling frame - 22.7 percent of households now have only wireless phones, up 2.5 percentage points from six months earlier. (With sample sizes of 12,447 households and 23,632 adults, sampling error for overall results is generally around plus or minus 1 percentage point.)
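
For readers who want to see roughly where an estimate like that comes from, here is a minimal sketch in Python of the simple-random-sampling margin of error at 95% confidence. It is only an approximation: the NHIS is a complex in-person survey, and design effects typically push its actual sampling error above this back-of-the-envelope figure, toward the roughly one point cited above.

    # Simple-random-sampling margin of error for a proportion (a sketch, not the
    # CDC's design-based variance estimator).
    import math

    def moe(p, n, z=1.96):
        """Half-width of an approximate 95% confidence interval for a proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    print(round(100 * moe(0.211, 23632), 2))  # adults cell-only: about 0.5 points
    print(round(100 * moe(0.227, 12447), 2))  # households cell-only: about 0.7 points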

NHIS200912.jpg

Somewhat surprisingly, though, the CDC's National Health Interview Survey (NHIS) - the ongoing in-person study that is the benchmark for telephone status estimates in the United States - did not find a disproportionate increase in cord-cutting overall or among the poor or unemployed, despite the deep and sustained economic downturn. In the latest NHIS, 33 percent of those households falling below the U.S. Census poverty threshold were cell-only, up 2.1 percentage points from six months earlier; 14 percent of those who were unemployed or gave "something else" as their employment status (but weren't students) were cell-only, up 3 percentage points from the previous report.

"We would have expected that the recession would have led to outsized increases, both in overall rate of wireless substitution and also perhaps among the poor relative to those with higher incomes. We did not see that effect," Stephen J. Blumberg, co-author of the CDC study with Julian Luke, told me this morning.

"It appears that lifestyle issues, such as where you live, who you live with and age are still bigger predictors of cord-cutting," Blumberg said. "Ever since we've been tracking these data, income has not been a strong predictor of being wireless-only. Yes, the poor are more likely to be wireless-only than those with higher income, but that has largely reflected the fact that people who have substituted wireless for landlines are younger, more likely to still be in school, and more likely to be renters than homeowners."

The NHIS not only measures the cell-only population but attempts to gauge what proportion of Americans still have landlines but can't really be reached on them, contributing to non-coverage for survey researchers. The NHIS began tracking cellular telephone trends in 2003 to understand the implications for landline-only federal health surveys and in 2007 also started asking respondents whether in their households "all or almost all calls are received on cell phones, some are received on cell phones and some on regular phones, or very few or none are received on cell phones." Some of these results are eye-opening:

About one in seven U.S. households (14.7 percent) are "cell-mostly." Add that to the cell-only figures and at least 37 percent of households definitely or probably cannot be reached by landline. (The cell-mostly group has been growing at a slower rate than cell-only.)

Landline abandonment is most prevalent among people age 25-29, 63.5 percent of whom live in cell-only (45.8 percent) or cell-mostly (17.7 percent) households. (Fewer people age 18-24 are unreachable by landline, because they're less likely than those 25-29 to live in wireless-only households - probably because some younger people still live with parents who haven't cut the cord.)

Blumberg observed: "Interestingly, we see an increase in cell-phone usage among people living with relatives, people living with children, and older adults. More people in these groups tell us they receive all or most calls on their cell phones, but they haven't given up their landlines in disproportionate numbers."

What does it all mean for surveys that only sample landline phones? Clearly, sample non-coverage is a growing problem, at least as a perception - it's easy to wonder about survey validity if more than a third of the population of interest has little to no chance of being included. True, excluding cell phones didn't appreciably harm presidential vote preference in 2008 pre-election polls. But a deep dive into a phone-status question on the 2008 national exit poll yields cause for concern for anyone interested in not just the overall horserace but understanding why different subgroups behave and think as they do - more on this in an article I wrote with two co-authors for a soon-to-be-published issue of Public Opinion Quarterly (an earlier draft, presented in May at the annual conference of the American Association for Public Opinion Research, is available here). See also extensive Pollster.com coverage of who is abandoning landlines and what it means for the survey profession.


Winston: Drop in Polls Threatens Obama Agenda


David Winston is President of the Winston Group, a strategic planning, communications, and survey research firm. He was formerly Director of Planning for Speaker Newt Gingrich and is presently an election analyst for CBS.

This week, President Obama finds himself facing his first public opinion crisis, as several different national surveys showed his job approval below 50% over the past 10 days. The Marist and Quinnipiac surveys both put his job approval at 46%. CNN had it at 48%, while Ipsos/McClatchy had it at 49%. But it was the Gallup daily tracking poll, which finally dipped below 50%, that pushed Press Secretary Robert Gibbs to an uncalled-for denigration of the respected polling organization, comparing its results to those of a "six-year-old with a crayon."

Why is this important? Gibbs' testy response isn't. But the president's downward spiral certainly has serious implications for both his ability to govern and his ability to enact his policy agenda. Simply put, if a president's job approval is below 50%, a governing majority coalition does not exist, and without a governing majority, controversial policies like health care and cap and trade are relegated to the uphill climb of minority status.

But presidential polling numbers can be worse than simply slipping below that 50% mark. When a President's job approval is under water, meaning more people disapprove than approve of the job he is doing, that's when every alarm bell in the West Wing ought to go off. President Obama is dangerously close to needing a life jacket.

In the CNN survey, 48% of those surveyed approved of his job performance, while 50% disapproved. Ipsos/McClatchy had it at 49-49, and both Marist and Quinnipiac had it at 46-44.

If Obama's numbers continue to slide, his policy agenda is at serious risk. Don't think for one moment that members of the House and Senate don't pay attention to these national polls. They do, especially those who find themselves in competitive races. Equally important, their own internal state or district polls will likely also have a presidential job approval question. Whether Obama is under 50% or under water back home could and, in many cases, will impact their voting behavior in D.C.

It's premature to suggest that it's time for the Obama team to break out the life boats, but contrary to Mr. Gibbs' assertions, numbers do matter. They will determine, in part, whether his legislative agenda succeeds this year and survives the elections next year.


Enten: Polling and the Maine Marriage Vote

Topics: Fivethirtyeight , Gay marriage , Maine Question 1 , Nate Silver , Polling Errors , Pollster.com

Harry Enten is a student at Dartmouth College.

The past two years have shown us that predicting voter support for same-sex marriage ballot measures is no easy task. Pollster.com's aggregate trend estimates, reflecting pre-election polling, incorrectly projected that voters in California and Maine would vote against measures to ban same-sex marriage. Nate Silver, using a regression model that included a state's religiosity, year of the measure, and whether the measure included a ban on civil unions, also incorrectly predicted that Maine's amendment to ban same-sex marriage would fail.

In a post this past Friday, Silver offered a possible explanation: "It's not clear that the results in Maine are comparable to those in other states. Question 1 was the only gay marriage ballot initiative that did not seek to rewrite its state's constitution... there was no particularly good way to model the uncertainty."

While Question 1 was rare in that it did not amend the state constitution, it is not the only anti-same-sex marriage ballot measure to leave the constitution untouched. In 2000, California voters passed Proposition 22 (the California Defense of Marriage Act), an ordinary statute, by a margin of 61%-39%. I was interested to see whether including California's 2000 vote and a variable signifying that it was not a constitutional amendment would have improved Silver's model. To do so, I simply added a dummy variable controlling for whether the measure in question amended the state's constitution or merely altered state law.

The result is a model that would have actually done worse in Maine, with a predicted yes vote for Question 1 of only 33.4% (vs. 43.5% for Silver's initial model), when the actual yes vote was 52.9%. If one were to add a dummy variable for an off-year election to this model, as Silver did "ad hoc" to his, the predicted yes vote would still be only 37.9%.

Still, inspired by Silver's 2008 presidential regression models, which combined polling with states' demographic data, I wanted to find out whether combining polling data with other variables could produce a more accurate prediction of same-sex marriage ballot measures.

I have built a linear regression model based on 25 state gay marriage referenda from 1998 to 2009. The model attempts to predict support for banning same-sex marriage using five variables: projected support for the measure from pre-election polls, a state's religiosity, year of the measure (where 1 is 1998, 2 is 1999, and so on), a dummy variable controlling for whether the measure in question amended the state's constitution or merely altered state law, and a dummy variable controlling for whether the election was off-year.
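
For readers who want to replicate something like this, here is a sketch of how such a model could be fit in Python with statsmodels. It is not Enten's actual code, and the file and column names are hypothetical placeholders for his 25-election dataset.

    # A sketch, not the author's actual code; file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ssm_referenda.csv")  # 25 ballot measures, 1998-2009

    model = smf.ols(
        "actual_yes ~ poll_yes + religiosity + year + amends_constitution + off_year",
        data=df,
    ).fit()

    print(model.rsquared)                                        # share of variation explained
    print((model.fittedvalues - df["actual_yes"]).abs().mean())  # average absolute error
    print(model.pvalues)                                         # which variables are significant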

The results for this model are very encouraging for those of us hoping to add value to polling data and predict future results of same-sex marriage ballot measures. I found that 92.1% of the variation between the different same-sex marriage elections was explained by the model compared with 80.7% for Silver's unaltered model. The average difference between the model's predicted support for an amendment in an election and the actual support for the amendment was 2.69% (compared with Silver's 4.46%). Importantly, this difference was greater than 2.00% in only 4 instances (Michigan 2004, Montana 2004, North Dakota 2004, and South Dakota 2006) and greater than 4.00% in only two (Michigan 2004 and North Dakota 2004).

The polling data are the best predictor of support for same-sex marriage amendments. Indeed, a simple regression in which the poll variable alone predicts the final result explains 86.4% of the variation in support for same-sex marriage referenda across elections.

Despite the polling variable's dominance, the year variable is statistically significant with 95% confidence in the model. That is, we can be 95% sure the effect this variable has on the model did not occur by simple chance. The year variable has a negative coefficient, suggesting that in more recent years polling is less likely to underestimate support for the propositions. This finding supports a study by NYU's Patrick Egan that concluded that any possible "gay Bradley Effect," the theory that some respondents were uncomfortable sharing their opposition to gay marriage with a stranger on the telephone, has subsided in recent years.

The reason for this abatement is unclear, but it may have to do with the fact that the issue of same-sex marriage is no longer heavily used as a wedge issue nationally. Senator McCain mentioned the issue fewer times in 2008 than President Bush did in 2004, and Congress has not voted on the Federal Marriage Amendment since 2006. This explanation would be consistent with Georgetown's Daniel Hopkins' finding that the Bradley Effect for black candidates began to disappear in the mid-1990s, once issues with a racial undertone (such as welfare reform and crime) began to recede from the national debate.

The off-year and religiosity variables are statistically significant with 90% confidence in the model. The coefficient for the off-year variable is positive, implying that polling underestimates support for the "yes" vote in off-year elections. This is not surprising, considering that these elections tend to have lower turnout (and are thus more difficult to poll) and are dominated by older voters, who are more likely to be opposed to same-sex marriage.

The coefficient for the religiosity variable is positive, meaning that, when controlling for the other variables, polls tend to underestimate support for the measures in more religious states. Last year, Mark DiCamillo, director of The Field Poll in California, argued that polling errors for same-sex marriage referenda resulted from late shifts and a boost in turnout among Catholics and regular churchgoers. He speculated that these shifts resulted from "last minute appeals" from religious figures. If DiCamillo is correct, and if gay marriage opponents have used similar tactics elsewhere, we would expect this effect, and thus the polling error, to be larger in more religious states.

The variable controlling for whether the measure in question amended the state's constitution or merely altered state law is not statistically significant. That is, there is a relatively high probability that any effect this variable had on the predictive value of this model occurred only by chance. It is important to point out that the results for this variable should be viewed with caution, because we have only two observations of measures that did not amend a constitution.

Of course, I was also interested in testing whether my model can work prospectively and not merely explain past results. I wanted to investigate whether, unlike the Pollster.com aggregate, it would have accurately predicted the results for California and Maine. To estimate the result for California as I would have prior to the 2008 election, I eliminated all the observations from the 2008 and 2009 elections from my dataset: California 2008, Florida 2008, and Maine 2009. This altered model called for the "yes" side to win in California with 51.9% of the vote, an error of 0.3%. To estimate the result for Maine, I simply eliminated the Maine 2009 observation. This modified model called for the same-sex marriage ban to pass in Maine with 50.6% of the vote, an error of 2.3%.
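
A sketch of that hold-out check, continuing the hypothetical statsmodels example above (the election labels are placeholders, not the dataset's actual codes): refit the model without the 2008-2009 races, then predict the excluded California race from the refit.

    # Hold-out check: drop the 2008-2009 observations, refit, and predict California 2008.
    train = df[~df["election"].isin(["CA 2008", "FL 2008", "ME 2009"])]
    refit = smf.ols(
        "actual_yes ~ poll_yes + religiosity + year + amends_constitution + off_year",
        data=train,
    ).fit()
    print(refit.predict(df[df["election"] == "CA 2008"]))  # compare to the actual Proposition 8 result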

All of these findings support the argument that we can add value to polling data on same-sex marriage amendments when we supplement them with variables such as a state's religiosity and the year of the measure. We should recognize that polling ballot measures is always very difficult due to their confusing language. Polling same-sex marriage measures is especially problematic because of added factors such as a possible same-sex marriage Bradley Effect. My model helps to eliminate some, but by no means all, of the possible errors that result from these problems.

Notes on Data

1. For my model, off-year is defined as any election that did not take place during a presidential election (primary or general) or a midterm general. This includes Missouri 2004, Kansas 2005, Texas 2005, and Maine 2009. Silver's model counts only Kansas 2005, Texas 2005, and Maine 2009 as off-year elections. I used my measure because non-presidential primaries, like traditional off-year elections, are often plagued by low turnout.

2. For both Silver's model and mine, religiosity is measured by the percentage of adults in a state who considered religion an important part of their daily lives in a 2008 Gallup study.

3. Prior studies have found that, due to the confusing nature of ballot questions, voters become increasingly aware of the meaning of a "yes" and a "no" vote on same-sex marriage ballot measures as the election approaches (most likely relying on advertisements), so my polling variable uses only data taken within three weeks of the election. When more than one firm conducted a poll within three weeks of the election and less than a week separated the polls, I used an average of the firms' final polls. For Maine, this rule means I included an average of the final Public Policy Polling and Research 2000 polls in my dataset, but not the Pan Atlantic poll, because it was taken more than a week before Public Policy Polling's final poll was conducted.

While most of the data in my model are easily available, prior polling for same-sex marriage referenda is surprisingly difficult to find. I managed to locate and verify 25 elections with a measure to ban (or, as in Hawaii, allow the state legislature to ban) same-sex marriage and a poll within three weeks of the election. I simply allocated undecideds in proportion to how decided voters were planning to vote: projected vote in favor of the amendment by polls = those planning on voting yes / (those planning on voting yes + those planning on voting no).
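
The allocation rule above is easy to express directly; a small sketch (with illustrative numbers):

    # Allocate undecideds in proportion to decided voters (illustrative numbers).
    def projected_yes(yes_pct, no_pct):
        return yes_pct / (yes_pct + no_pct)

    print(round(100 * projected_yes(48, 44), 1))  # a 48%-44% poll projects to a 52.2% yes vote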

Complete dataset is available here.


Wilson: Toplines and Headlines- Misreading Public Sentiment about the Economy

Topics: CNN , Economy , Interpreting polls , Measurement

David C. Wilson is an assistant professor of political science and international relations at the University of Delaware, who previously served as a Senior Statistical Consultant for The Gallup Organization in Washington, D.C.

In this edition of "Toplines and Headlines" (previous notes can be found at the CPC blog), I examine headlines and data from a recent poll about the economy. The poll was sponsored by CNN and conducted by the Opinion Research Corporation (ORC). The headline from the story on CNN's website read, "CNN Poll: Optimism on economy fading." The headline implies that positive beliefs about the economy are actually on the decline, and that readers should be concerned. Yet, after reading the Topline results provided by CNN, a sophisticated reader of polls might (and probably should) come to the exact opposite conclusion from CNN's.

CNN's polling director, Keating Holland, supports the headline by citing data that suggest "Americans don't see economic conditions getting better any time soon," and notes that 34% of respondents say that things are "going well in the country today" - a 14-point increase from a year ago, BUT a 3-point decrease since November. Holland also cites a 6-point increase, from 33% to 39%, in the percentage of people who say the country is still in a downturn. These Toplines form the initial thrust of the support for the "negative" headline.

Yet, this narrative should be questioned on just a couple of simple survey methodological grounds. The margin of error (MOE) for the poll is plus or minus 3 percentage points, which means the aforementioned decline in the percentage saying things are "going well in the country today" is within the margin of error; thus, while more people see things going well than did a year ago, statistically those numbers have not changed since last month. This counters CNN's headline.
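
To make that reasoning concrete, here is a minimal sketch of the usual check for whether a change between two independent poll waves is statistically distinguishable from noise. The sample size of roughly 1,000 interviews per wave is an assumption for illustration, not a figure taken from the Topline.

    # Margin of error for the difference between two independent proportions (a sketch).
    import math

    def diff_moe(p1, n1, p2, n2, z=1.96):
        return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

    # A drop from 37% to 34%, with an assumed ~1,000 interviews in each wave:
    print(round(100 * diff_moe(0.37, 1000, 0.34, 1000), 1))  # about 4.2 points, larger than the 3-point change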

More questions about the negative headline are raised when one examines the entire trend (p. 7 of the Topline release) since November of 2008; at that time, the percent saying things in the country are going well was 16%. In every poll since that date, except last month, the trend increased. Thus, it's very possible, and quite likely, that the results from November were a random (larger than expected) bump in the trend. In reality, the percentage thinking things are going well is actually continuing to increase rather than decline; another counter to the headline.

Turning to another question ostensibly supporting the headline, the trend showing an increasing percentage of those who say "the country is still in a downturn" is important, but the results from that particular question do not necessarily describe the fading optimism cited in the headline (see p. 7 of the Topline release). In fact, since June, 60% or more of Americans have believed the economy is either "recovering" or has "stabilized and is not getting any worse." This trend may have gone DOWN 6 points since October, but it has been virtually unchanged since June.

In the CNN story, Holland also notes that 43% say the "chances of the recession turning into another Great Depression" are either somewhat or very likely (see p. 7 of the release). Yet majorities in 2009 - 58% in the Dec. poll, 58% in a July-Aug. poll, and 54% in a March poll - believe this is unlikely to happen. Moreover, it's true that the trend is up 5 points from a year ago, but if one examines the entire trend, the Dec. 2008 poll cited by Holland appears to be another blip in the trend (see the Topline data for yourself).

On another question, 84% say the economy is still in a recession, but since May of this year that trend has decreased by 6 points, while the percentage believing the country is "not" in a recession has increased 6 points, from 10% to 16%, over the same period. Thus, while there's broad agreement that the American economy is in a recession, that agreement is actually decreasing, rather than increasing.

Lastly, Holland connects his interpretation of the pessimism about the economy to President Obama. He says "it's clear why Obama is again addressing the economy," noting that "most Americans (40%) continue to say that the economy is the most important issue for them." Yet, one need only examine the trend in this question (p. 2 of the Topline release) to become skeptical of the narrative.

Since March of this year, the percentage saying the economy is the "most important issue facing the country today" has DECREASED by a whopping 23 points, while the percentage saying "the wars in Iraq and Afghanistan" has INCREASED by 10 points. Even health care has seen a significant increase since March (the 3-point decline since Aug. is within the margin of error). Thus, while the economy remains the most important problem, it has been losing steam since March of this year.

When reading the headline and the story in concert with the data, it becomes clear that there is a narrative that CNN, Holland, or both are trying to promote. At some points the story ignores the overall trend, and at others it mentions only the snapshot point estimates, dismissing the trend completely. In other words, the "trends" (i.e., the "fading") that the story and headline emphasize are selective, not comprehensive, and thus in many ways the story offers a biased take on the data.

The point to remember here is that readers of polls - and of headlines - that emphasize trends must consider the starting date of the trend, as well as the other responses not mentioned in the story. Bottom line: while the CNN headline reads that hopes are fading, a sophisticated poll watcher might easily disagree.


Abramowitz: A Note on the Rasmussen Effect

Topics: Automated polls , House Effects , IVR Polls , job approval , Measurement , Rasmussen

Alan I. Abramowitz is the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. He is also a frequent contributor to Larry Sabato's Crystal Ball.

In his recent post, Mark Blumenthal provides an excellent discussion of some of the possible explanations for the differences between the results of Rasmussen polls and the results of other national polls regarding President Obama's approval rating. What needs to be emphasized, however, is that regardless of the explanation for these differences, whether they stem from Rasmussen's use of a likely voter sample, their use of four response options instead of the usual two, or their IVR methodology, the frequency of their polling on this question means that Rasmussen's results have a very disproportionate impact on the overall polling average on the presidential approval question. As of this writing (December 4th), the overall average for net presidential approval (approval - disapproval) on pollster.com is +0.7%. The average without Rasmussen is +7.1%. No other polling organization has nearly this large an impact on the overall average.

A similar impact is seen on the generic ballot question reflecting, again, both the divergence between Rasmussen's results and those of other polls and the frequency of Rasmussen's polling on this question. The overall average Democratic lead on pollster.com is 0.7%. However, with Rasmussen removed that lead jumps to 6.7%. Again, no other polling organization has this large an impact on the overall average.
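
A toy illustration of the arithmetic at work (the numbers below are made up, and Pollster.com's trend estimate is a local regression rather than a simple average, but the same logic applies): a pollster releasing results every day pulls an unweighted average toward its house level far more than firms that poll once or twice a month.

    # Toy example with made-up numbers: frequency, not just divergence, drives the effect.
    daily_tracker = [-4] * 30      # a daily tracker's net approval over one month
    monthly_polls = [7] * 5        # five conventional polls over the same month

    combined = daily_tracker + monthly_polls
    print(sum(combined) / len(combined))            # about -2.4: the tracker dominates
    print(sum(monthly_polls) / len(monthly_polls))  # +7.0 without the tracker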

According to Rasmussen, Republicans currently enjoy a 7 point lead on the generic ballot question among likely voters. Democracy Corps, the only other polling organization currently using a likely voter sample, gives Democrats a 2 point lead on this question. To underscore the significance of this difference, an analysis of the relationship between popular vote share and seat share in the House of Representatives indicates that a 7 point Republican margin of victory in the national popular vote next November would result in a GOP pickup of 62 seats in the House, giving them a majority of 239 to 196 over the Democrats in the new Congress. This would represent an even more dramatic shift in power than the 1994 midterm election that brought Republicans back to power in Congress. In contrast, a 2 point Democratic margin in the national popular vote would be expected to produce a GOP pickup of only 24 seats, leaving Democrats with a comfortable 234 to 201 seat majority.

One of the biggest problems in trying to compare Rasmussen's results with those of most other polls is that Rasmussen is almost alone in using a likely voter sample to measure both presidential approval and the generic ballot. Moreover, Rasmussen has been less than totally open about their method of identifying likely voters at this early stage of the 2010 campaign, making any evaluation of their results even more difficult. However, there is one question on which a more direct comparison of Rasmussen's results with those of other national polls is possible: party identification. Although the way Rasmussen asks the party identification question is somewhat different, reflecting its IVR methodology, Rasmussen's party identification results, like those of almost all other national polls, are based on a sample of adult citizens. Despite this fact, in recent months Rasmussen's results have diverged rather dramatically from those of most other national polls by showing a substantially smaller Democratic advantage in party identification. For example, for the month of November, Rasmussen reported a Democratic advantage of only 3 percentage points, compared with an average for all other national polls of almost 11 percentage points.

Rasmussen's party identification results have only a small impact on the overall average on this question because they only report party identification once a month. However, Rasmussen's disproportionately Republican adult sample does raise questions about many of their other results, including those using likely voter samples, because the likely voters are a subsample of the initial adult sample. If Rasmussen is starting off with a disproportionately Republican sample of adult citizens, then their likely voter sample is almost certain to also include a disproportionate share of Republican identifiers. Of course, there is no way of knowing for certain whether Rasmussen's results are more or less accurate than those of other polling organizations. All we can say with some confidence is that their results are different and that this difference is not just attributable to their use of a likely voter sample.


Young and Amic: Polling on fuzzy issues like healthcare reform- You can't measure what doesn't exist

Topics: health care , Health Care Reform , Question wording

Cliff Young is Senior Vice President at Ipsos Public Affairs. Cliff is head of the Public Sector practice and responsible for the Ipsos McClatchy poll. Aaron Amic is Vice President at Ipsos Public Affairs and is responsible for analytics for the Ipsos McClatchy Poll.

When the definitive history of the 2009 healthcare reform debate is written, one footnote will note how varied, even contradictory, the polls have been. We see this now. Indeed, on any given day, different people can cite different polls and come to very different conclusions. "Americans are in favor of healthcare reform - no, wait, they are against it!"

It goes without saying that, given this uncertainty, cherry-picking of polls has been rife on both the right and the left. Democrats prefer to cite polls on the "public option," which has consistently drawn strong majority support. Republicans, on the other hand, point to polls on general support for healthcare reform - most showing only plurality support.

At a methodological level, pollsters have been grappling with this dilemma as well. The original debate centered on the variability of question wording and its effect on levels of support. The overriding question was: what is the ideal healthcare question, if such a thing even exists?

More recently, the debate has shifted to explaining the differences between generic healthcare questions and more specific ones referring to the "public option." The controversy lies in the differential levels of support: generic questions have shown only plurality support, while specific questions referring to the "public option" show majority support. The consensus explanation is that the healthcare debate is quite distant from people's day-to-day lives and so their answers are "uninformed" - in methodological speak, a classic case of "non-attitudes."

Both lines of reasoning have their merit. However, we believe that they miss the mark because they assume that polling on healthcare reform is analogous to polling on presidential elections. In our opinion, it isn't.

Indeed, in presidential elections, our job as pollsters is made easy because the ballot question is basically fixed after the primaries. Simply put, we know which candidates will be running. This, in turn, all but defines our ballot question for us.

In contrast, issues like healthcare reform are quite fuzzy, as no bill typically exists at the beginning of the process. This makes the construction of a single question impossible, if not simply disingenuous.

Put another way, we have no "true value" to measure against - no concrete bill exists (or at least did not exist until recently). You can't measure what doesn't exist!

The problem is most apparent when looking at generic questions on healthcare[1]. Such questions are broadly worded and lack any concrete anchor. People, consequently, can (and do) read into them what they want, making their meaning variable. To illustrate our point, let's look at Table 1 below.

table_1.png

The above question shows that only a plurality (34%) of Americans support healthcare reform (or at least the proposals in Congress). Simple Conclusion: Americans do not support healthcare reform.

table2.png

However, a simple follow-up question shows that about a quarter (25%) of those who oppose the reform bills actually think the proposals "do not go far enough" (see Table 2 above)! This same 25% is much more likely to be Democratic and more likely to support the public option. People, once again, read into the question what they want.

In contrast, questions which refer to "the public option" and other specific policy measures can introduce greater certainty into the ballot question, helping to establish a clear reference point for people (see Table 3 below). However, once again, such questions are nothing more than hypotheticals, as we do not know a priori which items will (and will not) be included.

table3.png

So what are our takeaways here? What does polling on American healthcare reform teach us about polling on non-electoral policy issues involving the legislative process?

First, polling on healthcare reform is quite different from polling on presidential elections because our "true value" is not fixed. This makes the construction of single questions impossible and misleading. Such issues are, well, fuzzy, and, therefore, only a multiple-indicators approach will tell the entire story - some generic questions, some specific. Here triangulation is key.

Second, generic questions should be used with caution. At the least, they should include a follow-up question in order to determine why people favor or oppose healthcare reform. We only included such a follow-up after struggling to interpret the results.

Are such generic questions valid at all? We think they are but with caveats.

Indeed, before the final bill, such questions seem to be nothing more than a measure of optimism about the reform process, much like "right track, wrong track" questions. Looking forward to a final bill, we do expect that such generic questions will become relevant. Only then will they have a "true value" to be measured against.

Third, questions which reference specifics like the "public option" are hypothetical and have to be understood as such. Indeed, without a final bill, they should be used more for sensitivity analysis than anything predictive-which policy measures garner more support, which ones less so. While such questions say nothing about "general support for healthcare reform," they do help us understand which measures are more (and less) likely to be in the final bill as politicians read polls too.

To this end, we have tracked specific items for most of the healthcare debate. Here we understood that healthcare reform would be fundamentally a debate about the role of government (or lack thereof). All of our items fall along a government intervention continuum. In our experience, polling on "fuzzy" issues places a premium on understanding the underlying value cleavages related to the policy debate at hand. At its essence, healthcare reform is a debate about the proper role of government.

Fourth, from an analytical perspective, the combination of generic and specific (hypothetical) questions makes sense. Together they allow us to be both predictive and diagnostic with our clients, but only when used together.

Fifth, from a media polling perspective, the combination of general and specific ballot questions is much less tidy than a single "up or down" measure and, thus, more complicated to explain. Looking forward to future non-electoral legislative reform debates, we, as an industry, need to do better in explaining these complexities.



[1] Examples of some questions recently fielded:

Ipsos wording: As of right now, do you favor or oppose the healthcare reform proposals presently being discussed?

ABC wording: Overall, given what you know about them, would you say you support or oppose the proposed changes to the health care system being developed by Congress and the Obama administration?

AP-GfK wording: In general, do you support, oppose or neither support nor oppose the health care reform plans being discussed in Congress?

Pew wording: As of right now, do you generally favor or generally oppose the health care proposals being discussed in Congress?

CBS wording: Do you mostly support or mostly oppose the changes to the health care system proposed by Barack Obama, or don't you know enough about them yet to say?


Christie's Pollster on NJ Polls

Topics: Disclosure , Divergent Polls , New Jersey , New Jersey 2009

Adam Geller is the CEO of National Research, Inc. and conducted polling for Chris Christie's campaign in New Jersey this year.

I'd like to contribute a few thoughts on the performance of the public polls during the recently concluded New Jersey Gubernatorial race. On this topic, I bring a unique perspective, as the pollster for the Christie campaign, and I'd like to offer my thoughts not as any type of authority, but rather to contribute to an important professional discussion.

I should mention that, for what it's worth, some observers may have been surprised by the results on November 3rd, but neither Governor-Elect Christie nor his advisers were.

Before the cement hardens and the ink dries on the post-election wrap-up, let me offer the following five thoughts:

  1. The automated polls were more accurate than the live interview public polls, due in part to the methodology of the live interview polls.
    From polls that were in the field for an entire week (Quinnipiac) or even longer (FDU), to polls that oversampled Democrats (Democracy Corps, among several others), to polls that asked about every single name on the ballot (Suffolk), an essential reason for the poor performance of the live interview polls had less to do with the fact that a live person was administering the poll and more to do with methodological issues.
  2. The partisan spread in the polls ought to be reported up front.
    Some public pollsters make it difficult to determine how many Republicans, Democrats and unaffiliated voters they interviewed. Why not just put it into the toplines? Reporters and bloggers should demand this before they report on the results. Not to pick on Quinnipiac, but they had Corzine and Christie winning about the same share of their own partisans, and they had Christie winning Independents by 15 percentage points, and yet they STILL had Christie trailing overall by 5 points. Quinnipiac did not publish their partisan spread, but an astute blogger was able to ascertain that there were, in fact, too many Democrats in the sample. Other polls, notably Democracy Corps, regularly produced samples with too many Democrats (though, in their parlance, some of these were "Independent - Lean Democrat"). That their sample was loaded up with Democrats had the obvious effect on their results. Whether this was intentional or not, I would leave to others to speculate.
  3. In general, RDD methodology is a bad choice in New Jersey, if the goal is predictive accuracy.
    In New Jersey, there are many undeclared voters (commonly but mistakenly referred to as Independents). These undeclared voters identify themselves as Republicans or Democrats - even though they are not registered that way. In our polls, we frequently showed a Democratic registration advantage that matched the actual registration advantage - but when it came to partisan ID, the spread was more like a six-point Democratic advantage. By using a voter list, we knew how a respondent was registered - and by seeing how they ID'ed themselves, we gained insight into the relative behavioral trends of undeclared voters and even registered Democrats who were self-identifying as Independents. Public pollsters who dialed RDD missed this. Partisan identification in New Jersey is not enough, if the goal is to "get it right."
  4. The public polls oversampled NON voters.
    Again, this is a function of RDD versus voter-list dialing. It is easy for someone to tell a pollster they are "very likely" to vote. With no vote history and no other nuanced questions, the poll taker has little choice but to trust the respondent. Pollsters who use voter lists have the benefit of knowing exactly how many general elections a respondent has voted in over the past five years, or when they registered. By asking several types of motivation questions, the pollster can construct turnout models that will have better predictive capacity (a minimal sketch of the idea appears below). The public polls did not seem to do this.

    To this end, we had heard all about the "surge strategy" that the Corzine campaign was going to employ. This refers to targeting "one time Obama voters" and driving them out in force on election day. With voter lists, we were easily able to incorporate some "surge targets" into our sample. After running our turnout models, we saw no evidence that the surge voters would be game changers.
  5. The Daggett effect was overstated in the public polls.
    Conventional wisdom holds that Independent candidates underperform on election day. But the reality is, many analysts could have easily predicted Daggett's collapse, based not on history, but on a simple derivative crosstab: for example, voters who were certain to vote for Daggett AND had a very favorable opinion of him. They could have asked a "blind ballot" in which none of the candidate choices were read. We did these things - and we estimated Daggett's true level of support to be around 6%.
None of this is meant to pick on the "live interview" public pollsters. For the most part, these polls are conducted and analyzed by seasoned research professionals. But in non-Presidential years, RDD methodology can lead to inaccurate results, which can then lead to inaccurate analysis. It is tough to conclude that the automated polls are somehow superior to live interview polls, given the methodological issues I've outlined.
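
On the turnout-modeling point in item 4 above, here is a minimal sketch of the general idea - not National Research's actual model, and the weights and inputs are assumptions for illustration only: combine observed vote history from the voter file with a stated-motivation question into a crude likelihood-to-vote score, rather than taking "very likely" at face value.

    # A sketch only; the 0.7/0.3 weights and the inputs are illustrative assumptions.
    def turnout_score(generals_voted_last5, motivation_1to10):
        history = generals_voted_last5 / 5      # behavior, from the voter file
        stated = motivation_1to10 / 10          # self-report, from the interview
        return 0.7 * history + 0.3 * stated     # weight behavior over talk

    # A respondent who voted in 2 of the last 5 generals but claims a 10:
    print(turnout_score(2, 10))  # 0.58 - the claim alone doesn't make them a likely voter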

What does it mean for next year? At the very least, journalists, bloggers and reporters need to ask more questions about the methodology and construction of the poll sample. They need to understand the partisan spread, and the extent to which it conforms to reality. They need to know how long the survey was in the field. They also need to beware of polls being released that are designed to manipulate opinion rather than measure it. They need to ask whether certain polls are being constructed to reflect what is happening, or whether they are being constructed to reflect what the poll sponsor would LIKE to happen. The public polls add to the dialogue, and given their ever-increasing role, we all ought to be more demanding when reporting their results.


Humphrey Taylor: Social Desirability Bias - How Accurate were the Benchmarks?


Humphrey Taylor is chairman of the Harris Poll at Harris Interactive, which conducts surveys on the internet.

These comments are prompted by the paper "Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Non-Probability Samples" by Yeager, Krosnick, et al., and by Mark Blumenthal's two excellent articles in the National Journal reviewing their paper.

The paper's conclusions were based on a comparison between six "benchmarks" and the findings of the various polls they examined. They assumed that the benchmarks were perfectly accurate, and that any differences between the polls and the benchmarks were "errors." I believe that this is not the case and that some of the benchmarks were inaccurate because of the social desirability bias that is often found in surveys where respondents are interviewed, by telephone or in person, by live interviewers.

Social desirability bias occurs where respondents are not comfortable telling interviewers the truth because they are embarrassed to do so, or where their behavior or attitudes may be seen as unethical, immoral, anti-social or illegal.

Our online surveys have always found substantially more people who tell us they are gay, lesbian or bisexual than our telephone surveys do (by a 3-to-1 margin). Our online surveys also find fewer people who claim to give money to charity, clean their teeth, believe in God, go to religious services, exercise regularly, abstain from alcohol, or drive under the speed limit.

Furthermore, in-person surveys by the Census Bureau report substantially more people claiming to have voted in elections than actually voted. If there is a better explanation than social desirability bias, I haven't heard it.

This conclusion - that surveys with live interviewers underreport "socially undesirable" behavior - is supported by the data used by Yeager et al.

Our online survey, used by Yeager, found more smokers and more people who had had 12 drinks in a lifetime than either the benchmark surveys conducted by government agencies or the RDD sample (and our own telephone surveys). Our online survey found that (to the nearest whole number) 28 percent were smokers, compared to 26 percent in the RDD sample and 22 percent in the benchmark survey. Our online survey found only eight percent who had not had 12 drinks in their lifetime, compared to 15 percent in the RDD sample and 23 percent in the benchmark survey.

Another government study, NHANES, reported that 24.9 percent of adults said they were smokers, but blood tests showed that an additional 4.5 percent had smoked in the previous 24 hours and had not reported it when asked by an interviewer. The resulting NHANES estimate of 29 percent is closer to our estimate of 28 percent than to Knowledge Networks' 26 percent or the RDD sample's 24 percent.

Two of the six benchmarks used by Yeager et al. come from government sources where one would not expect to find any social desirability bias. In both cases, the Harris Interactive data were slightly closer to the benchmark data than were the findings of the RDD telephone survey. Our surveys found 28 percent of adults with passports, compared to 30 percent for the RDD sample and the 23 percent benchmark. Our survey found 92 percent having a driver's license, compared to 93 percent in the RDD sample and the 89 percent benchmark.

In addition to the presence or absence of live interviewers there is one other reason why our online polls may have less social desirability bias than most telephone and in-person surveys. Our panel members have agreed in advance to be surveyed, which suggests that they trust us with confidential information, and are therefore more likely to tell the truth.

All this evidence suggests that the Harris Interactive data used by Yeager et al. are generally more accurate than the RDD sample and that some of the so-called benchmarks probably overstate socially desirable behaviors because they were obtained in surveys with live interviewers.


McDonald: Obama's Job Approval is in the House Effect

Topics: Barack Obama , Charts , job approval , Michael McDonald , Pollster.com

This guest contribution comes from Michael McDonald, an Associate Professor of Government and Politics in the Department of Public and International Affairs at George Mason University and a Non-Resident Senior Fellow at the Brookings Institution.

Saturday Night Live's sketch mocking Obama prompted CNN to run a story stating that the "'SNL' Obama sketch marks end of [Obama's] honeymoon." Actually, SNL is not leading public opinion here. Polling suggests that Obama's honeymoon ended in early August. Since then, Obama's job approval rating has remained essentially flat.

If you are an Obama supporter, you might ask how this is possible, since an Oct. 1-5 AP-GfK survey shows a six-percentage-point increase in support for Obama since their Sept. 3-8 survey. Or, if you oppose Obama, you might point to the slight downward trend in Obama's job approval among all polling firms from early September, clearly evident on Pollster.com.

2009-10-08-McD_All.png


What is going on here is that Pollster.com's trend line behaves fine when there are lots of polls to average together, but it does not work as well when two daily tracking polls are averaged together with more sporadic national polling. The two daily tracking polls - Gallup and Rasmussen - consistently find lower Obama job approval ratings than other polling firms. In addition to these two daily tracking polls, there are approximately bi-monthly internet polls from YouGov/Polimetrix and Zogby that also consistently show lower Obama job approval numbers compared to other polls.

These so-called "house effects," whereby different pollsters consistently report different numbers, are well known. I do not want to get sidetracked into speculation about why these polls have lower numbers, since we really cannot know the true population value for Obama's job approval rating.

What is interesting is what happens when these polls are disaggregated into two types: (1) the tracking and internet polls, and (2) all other polls.
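
For readers who want to reproduce this kind of disaggregation outside the interactive charts, here is a sketch in Python. The file and column names are hypothetical, and a 14-day rolling mean is a crude stand-in for Pollster.com's local-regression trend estimate; the point is only to compare the two groups of polls on the same footing.

    # A sketch with hypothetical column names; not Pollster.com's actual estimator.
    import pandas as pd

    polls = pd.read_csv("obama_approval.csv", parse_dates=["end_date"])

    type1 = {"Gallup", "Rasmussen", "YouGov/Polimetrix", "Zogby"}  # trackers + internet
    polls["group"] = polls["pollster"].isin(type1).map(
        {True: "tracking/internet", False: "all other polls"})

    trend = (polls.sort_values("end_date")
                  .set_index("end_date")
                  .groupby("group")["approve"]
                  .rolling("14D").mean())        # 14-day rolling mean within each group
    print(trend.groupby(level="group").last())   # the two groups' most recent levels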

To examine the first type of polls, let's use Pollster.com's filter tool to include all internet polls and the two daily tracking polls.

2009-10-08-McD-OnlyDailyAndInternet.png

According to this trend estimate, Obama's job approval rating leveled out in early August at about 50 percent, and may have been increasing slightly since then.

To examine the second type of polls, let's use Pollster.com's filter tool to exclude all internet polls and the two daily tracking polls.

2009-10-08-McD_NoDaily-Internet.png

According to this trend estimate, Obama's job approval rating leveled out in early August at about 53 percent.

Seen in this light, Obama's job approval rating has remained steady since early August, and it is here that Obama's honeymoon likely came to an end. Most pollsters took a vacation during August, except those conducting the first type of polls, which show lower Obama job approval than the second type. The bump up in Obama's job approval at the beginning of September is an artifact of the increased number of the second type of polls conducted when Obama delivered his health care speech to Congress. Subsequently, the absence of the second type of polls allows the first type of polls to again dominate the trend line, thereby giving the appearance that Obama's approval is now decreasing from the (non-existent) short-term early-September rally. The different mixes of the first and second types of polls are confounding the trend line and incorrectly coloring perceptions of the direction of Obama's job approval rating. Indeed, if you squint closely at Pollster.com's trend line for all pollsters, you'll see a long-term periodicity that apparently fluctuates along with the mix of the first and second types of polls.

[Editor's Note: So that Professor McDonald's commentary will always match the graphics, we replaced the embedded, interactive version of the charts with screenshots, although you can click the link above each chart to see the most recently updated version with the filtered polls he selected.]



Shapiro: Will Obama's Speech Increase Public Support for Health Care Reform?

Topics: Barack Obama , Brandon Rottinghaus , Health Care Reform

Robert Y. Shapiro is a professor of political science at Columbia University who specializes in public opinion, policymaking, political leadership, and mass media. He is a member of the board of directors of the Roper Center for Public Opinion Research.

The polling and pundit world is now looking to see if President Obama's speech will rally public support for his health care reform plan. In addition to looking at the stream of polls that will now follow, I direct your attention, hot off the presses, to the latest issue of the journal Political Communication. A timely article by Brandon Rottinghaus provides a broader political science view on presidential efforts to influence public opinion. What we know from George Edwards' book, On Deaf Ears: The Limits of the Bully Pulpit (Yale, 2003), is that it is difficult for presidents to succeed at influencing public opinion. However, Rottinghaus's article provides evidence for why Obama correctly chose to take his best shot in a nationally televised speech.

The article uses "a comprehensive data set spanning 1953 to 2001" to examine "several strategic communications tactics through which the presidents might influence temporary opinion movements." Specifically, it finds that "presidential use of nationally televised addresses is the most consistently effective strategy to enhance presidential leadership, but the effect is lessened for later serving presidents." In contrast, other strategies such as those involving domestic travel do not have positive effects, and "televised interactions" - press conferences and the like - tend to have negative effects. While some may not be surprised by these findings, it is good to have empirical evidence to wrestle with.

But getting to the point, how will this now play out for Obama? My sense is that Obama's speech will come out at or above average in impact, though there is a question of what its half-life will be. What I see as most important, however, is not the new polls that we will soon see (if they are not out already). Putting Rottinghaus' article aside, what will count most is not what the public thinks at this moment, but rather the extent to which Democratic leaders unite around Obama's plan (which may well be close to Baucus's); it is this elite consensus that will enable any positive effect of the speech to last or even widen. This assumes that the consensus will be more salient and striking than any continued Republican opposition.

Echoing the famous political scientist V.O. Key, what matters more than the immediate polls is political leadership more broadly. The speech itself is the start of what could be a stronger consensual message than we have seen to date from Democratic and potentially other political leaders. The relevant public opinion research comes from Richard Brody's book on presidential leadership (Assessing the President: The Media, Elite Opinion, and Public Support, Stanford, 1991), John Zaller's seminal book on public opinion (The Nature and Origins of Mass Opinion, Cambridge, 1992), and what Ben Page and I examined (The Rational Public, Chicago, 1992).

Larry Jacobs and I (Politicians Don't Pander, Chicago, 2000) looked at President Clinton's 1993-94 health care reform effort from this perspective. What happened there was that Democratic leaders never supported any Clinton plan, and this, along with the strong Republican leadership opposition, caused the public to become apprehensive and turn against health care reform. This happened much earlier in the legislative process than what is occurring now, as the Clinton plan got to Congress later in Clinton's first term. In contrast, we are at that same juncture now - earlier in Obama's first term but later in the legislative process, as there are now actual bills that have made it through congressional committees. Clinton never made it that far. The Democrats now have a better chance than Clinton did, since at this moment they are poised to unite around a president's plan. But if they don't do that quickly, then it's 1994 all over again. If by all appearances they come together, they can prevent public support from tapering off and very likely increase it.

In the end, Obama may have timed his entry into the fight just right - it's earlier than when Clinton entered the actual legislative fray in 1994 - and this may have been the only way he could have gotten a major health care reform bill through. Given the financial crisis, the stimulus bill, and the two wars, he may well have been stopped in his tracks earlier on - without the health care reform bills making it through multiple committees as they have. He needed to enter the fight when he could rally congressional support in both houses, with drafted legislation in hand and already substantially debated. Of course we will never know, since we can't replay history. For now, the main point is: don't just watch the polls - watch the leaders. The public will not just be responding to Obama but to the extent to which he has liberal, Blue Dog, and any (albeit unlikely) Republican leadership support.


Riehle: Just Don't Do It

Topics: CNN , Instant Reaction Polls , Speech Reaction

Today's Guest Pollster article comes from Thomas Riehle, a Partner of RT Strategies.

Technological capabilities can become temptations to conduct research studies that add nothing to our knowledge of public opinion, just because we can. Get thee behind me, Satan!

For example, it would be no problem, technologically, to display squiggly lines with the moment-by-moment reactions of a panel of viewers to the blathering of the talking heads on news show panels. The Onion demonstrates what a mess that would be, in a parody entitled "New Live Poll Allows Pundits to Pander to Viewers in Real Time."

What would happen if we let the talking heads see whether viewers at home agreed or disagreed with what they were saying, "using the Insta-Poll Tracker on our web site"? The talking heads would become self-conscious about the direction of their own squiggly line and start tailoring their statements...word by word...to make the squiggly line go up.

Insta-polls like September 9th's CNN/Opinion Research Corporation poll of adults who watched President Barack Obama's address to Congress may have a similar effect on poll respondents. Mark Blumenthal correctly points out the age-old problem of such polls--the partisan make-up. Last night, the audience for this address was heavily weighted with Obama supporters rallying to watch their leader, supplemented with a few civic-minded Americans who would watch any Presidential address, regardless of their own partisanship. Of the 427 adults in this study, all of them interviewed September 5-8 in advance of the speech, and all of whom indicated both an intention to watch the speech and a willingness to be re-interviewed after the speech, 18% were Republicans, 45% Democrats. These kinds of post-speech poll samples always skew heavily in favor of the speaker. Pollster.com's report on this poll last night squeezes out what knowledge can be gleaned by comparing the "bump" among this group of speech watchers to the bump registered among similarly situated groups of speech watchers in the past.

The problem with this kind of insta-poll may be exacerbated when the study is designed, as this one was, to compare the pre-speech responses of speech watchers to opinions after the speech. In the pre-speech survey, I would guess that respondents would strive to express their opinions as forthrightly as possible, as most survey respondents do. In the follow-up poll after the speech, however, I am afraid respondents would be like the Onion's self-conscious pundits. They'd be aware that they are about to become as much a part of the story as South Carolina Republican Rep. Joe Wilson, who heckled the President. They'd tailor their answers to make their leader look good. Drawing much of a conclusion from their answers would not be any fairer than judging the entire Republican caucus by the boorishness of a few Members.


Doug Rivers: Second Thoughts About Internet Surveys

Topics: Douglas Rivers , Gary Langer , Internet Polls , Jon Krosnick , Probability samples , Sampling , Weighting

Douglas Rivers is president and CEO of YouGov/Polimetrix and a professor of political science and senior fellow at Stanford University's Hoover Institution. Full disclosure: YouGov/Polimetrix is the owner and principal sponsor of Pollster.com.

I woke up on Tuesday morning to find several emails pointing me to Gary Langer's blog posting, which quoted extensively from a supposedly new paper by Jon Krosnick. These data and results appeared previously in a paper, "Web Survey Methodologies: A Comparison of Survey Accuracy," that Krosnick coauthored with me and presented at AAPOR in 2005. The "new" paper has added some standard error calculations, some late-arriving data, and a new set of weights, but the biggest changes in this version are a different list of authors and different conclusions.

The 2005 study compared estimates from identical questionnaires fielded to a random digit dial (RDD) sample by telephone, an Internet-based probability sample, and a set of opt-in panels. Of these, the Internet probability sample had the smallest average absolute error, followed closely by the RDD telephone survey, while the opt-in Internet panels were around 2% worse. In his presentation of our paper at AAPOR in 2005, Krosnick described the results of all the surveys, both probability and non-probability, as being "broadly similar." My own interpretation of the 2004 data, similar to James Murphy's comment on AAPORnet, was that although the opt-in samples were worse than the two probability samples, the differences were small enough--and the cost advantage large enough--to merit further investigation. Even if it were impossible to eliminate the extra 2% of error from opt-in samples, they could still be a better choice for many purposes than an RDD sample that costs several times as much.

Krosnick now concludes that "Non-probability sample surveys done via the Internet were always less accurate, on average, than probability sample surveys" and, tendentiously, criticizes "some firms that sell such data" who "sometimes say they have developed effective, proprietary methods" to correct selection bias in opt-in panels.

In fact, the data provide little support for Krosnick's argument. The samples from the opt-in panels were, as we noted in 2005, unrepresentative on basic demographics such as race and education because the vendors failed to balance their samples on these variables, while the two probability samples were balanced on race, education, and other demographics. This is not a result of probability sampling, but of non-probabilistic response adjustments. It is too late to re-collect the data, but the solution (invite more minorities and lower educated respondents) doesn't involve rocket science.

Instead, Krosnick tries to fix the problem by weighting, and concludes that weighting doesn't work. A more careful analysis indicates, however, that despite the large sample imbalances in the opt-in samples, weighting appears to remove most or all selection bias in these samples. Because the samples were poorly selected, heavy weighting is needed and this results in estimates with large variances, but no apparent bias. In fact, if we combine the opt-in samples, we can obtain an estimate with equal accuracy to the two probability samples.

First, consider the RDD telephone sample. The data were collected by SRBI, which used advance letters, up to 12 call attempts, $10 incentives for non-respondents, and a field period of almost five months. Nonetheless, the unweighted sample was significantly different from the population on ten of the 19 benchmarks. RDD samples, like this one, consistently underrepresent male, minority, young, and low-education respondents. These biases are reasonably well understood and, for the most part, can be removed by weighting the sample to match Census demographics.

Next, consider the Probability Sample Internet Survey, conducted by Knowledge Networks (KN). The unweighted sample does not exhibit the skews typical of RDD. How is this possible, since the KN panel is also recruited using RDD? Buried in a footnote is an explanation of how KN managed to hit the primary demographic targets more closely than SRBI (which had a much better response rate). The answer is that "The probability of selection was also adjusted to eliminate discrepancies between the full panel and the population in terms of sex, race, age, education, and Census region (as gauged by comparison with the Current Population Survey). Therefore, no additional weighting was needed to correct for unequal probabilities of selection during the recruitment phase of building the panel." That is, the selection probabilities that are supposedly so important to probability sampling were not used because they would have generated an unrepresentative sample!

The opt-in panels, for the most part, were not balanced on race and education. Only one of the opt-in samples, Non-Probability Sample Internet Survey #6, actually used a race quota. Another, the odd Non-Probability Internet Sample #7, claims to have sent invitations proportionally by race and ended up with 46% of the sample white, despite a 51% response rate. (This survey will be excluded from subsequent comparisons.) Non-Probability Sample Internet Survey #1 involved large oversamples of African Americans and Hispanics. I could find no explanation of how Krosnick dealt with the oversamples in the 2009 paper, but the racial composition should either match the benchmarks exactly (if the conventional stratified estimator is used) or be far off (if the data are not weighted). In fact, the proportion of whites and Hispanics is off by 1% to 2%.

The selection of a subsample of panelists for a study is critical to the accuracy of opt-in samples. Regardless of how the panel was recruited, the combination of nonresponse or self-selection at the initial stage and subsequent panel attrition will tend to make the panel unrepresentative. In 2004, we instructed the panel vendors to use their normal procedures to produce a sample representative of U.S. adults. The practice then (and perhaps now for some vendors) was to use a limited set of quotas. If you didn't ask most opt-in panels to use race or education quotas, they wouldn't use them.

Even without correcting these obvious imbalances, the opt-in samples provided what most people would consider usable estimates for most of the measures. For example, the percentage married (unweighted) was between 53.7% and 61.5% (vs. a benchmark of 56.5%). The percentage who worked last week (unweighted) was between 53.6% and 63.1% (vs. a benchmark of 60.8%). The percentage with 3 bedrooms (unweighted) was between 41.2% and 46.1% (vs. a benchmark of 43.4%). The percentage with two vehicles (unweighted) was between 40.1% and 46.9% (vs. a benchmark of 41.5%). Home ownership (unweighted) was between 64.8% and 72.8% (vs. a benchmark of 72.5%). Has one drink on average (unweighted) was between 33.8% and 40.2% (vs. a benchmark of 37.7%). The KN sample and phone samples were better, but the difference was much less than I expected. (Before doing this study, I thought the opt-in samples would all look like Non-Probability Sample Internet Survey #7.)

The 2009 paper attempts to correct these imbalances by weighting, but the weighted results do not show what Krosnick claims. He uses raking (also called "rim weighting") to compute a set of weights that range from .03 to 70, which he then trims at 5. The fact that the raking model wants to weight a cell at 70 is a sign that something has gone wrong and can't be cured by arbitrarily trimming the weight. If there really are cells underrepresented by a factor of 70, then trimming causes severe bias for variables correlated with the weight and not trimming causes the estimates to have large variances. In either case, the effect is to increase the mean absolute error of estimates.
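
For readers who want to see the mechanics, here is a minimal sketch of raking (iterative proportional fitting) with weight trimming on an invented toy sample. The sample skews, the target margins, and the trim point are all made-up stand-ins, not figures from the 2004 study or the 2009 paper; the point is only to show how raking chases the margins and how trimming the largest weights pulls the weighted estimates back off target.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

# Toy opt-in sample, deliberately skewed on race and education (hypothetical numbers).
sample = pd.DataFrame({
    "race": rng.choice(["white", "nonwhite"], size=n, p=[0.95, 0.05]),
    "educ": rng.choice(["no_college", "college"], size=n, p=[0.30, 0.70]),
})

# Hypothetical Census-style target margins.
targets = {
    "race": {"white": 0.70, "nonwhite": 0.30},
    "educ": {"no_college": 0.60, "college": 0.40},
}

def rake(df, targets, iters=50):
    """Iterative proportional fitting: rescale weights to match each margin in turn."""
    w = np.ones(len(df))
    for _ in range(iters):
        for var, margin in targets.items():
            for level, share in margin.items():
                mask = (df[var] == level).to_numpy()
                current = w[mask].sum() / w.sum()
                w[mask] *= share / current
    return w

def weighted_share(df, w, var, level):
    mask = (df[var] == level).to_numpy()
    return w[mask].sum() / w.sum()

w_raked = rake(sample, targets)
w_trimmed = np.minimum(w_raked, 5.0)   # trim the largest weights at 5, as described above

for label, w in [("unweighted", np.ones(n)), ("raked", w_raked), ("raked + trimmed", w_trimmed)]:
    print(f"{label:>15}: nonwhite {weighted_share(sample, w, 'race', 'nonwhite'):.2f}, "
          f"no college {weighted_share(sample, w, 'educ', 'no_college'):.2f}")
```

In this toy run, raking hits the 30% nonwhite and 60% no-college targets exactly, but the badly skewed cells need weights well above 5, so the trimmed version drifts back toward the raw sample - a small-scale illustration of the trade-off described above.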

The fact that the trimmed and untrimmed weights have about the same average absolute error does not mean that weighting is unable to remove self-selection bias from the sample. The mean absolute error is a measure of accuracy. It is driven by two factors: bias (the difference between the expected value of the estimate and what it is trying to estimate) and variance (the variation in an estimate around its expected value from sample to sample). The usual complaint about self-selected samples is that you can never know whether they will be biased or the size of the bias. Inaccuracy due to sampling variation can be reduced by just taking a larger sample. Bias, on the other hand, doesn't decrease when the sample size is increased.

Obviously, unweighted estimates from these opt-in samples will be biased because the vendors ignored race and education when selecting respondents. This wouldn't have been difficult to fix, but it wasn't done. Apparently very large weights are needed to correct demographic imbalances in these samples, but the large weights give estimates with large variances and, hence, a high level of inaccuracy. If one tries to control the variance, as Krosnick does, by trimming the weights, then the variance is reduced at the expense of increased bias. The result, again, is inaccuracy. We are asking the weighting to do too much.

A simple calculation shows that all of Krosnick's results are consistent with the weighting removing all of the bias from the opt-in samples. One way to combat increased variability is to combine the six opt-in samples. Without returning to the original data, a simple expedient is to just average the estimates. Since the samples are independent and of the same size, the average of 6 means or proportions should have a variance about 1/6 as large as the single sample variances. The variance is approximately equal to the square of the mean absolute error which, after weighting, was about 5 for the opt-in samples, implying a variance of about 25. If there is no bias after weighting, then the variance of the average of the estimates should be 25/6 or approximately 4, implying a mean absolute error of about 2%.
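
A quick simulation makes the arithmetic above concrete. It is not the 2004 data: the six "samples" below are just draws of Gaussian noise calibrated so that a single weighted estimate has a mean absolute error of about 5 points, the situation described in the previous paragraphs. Averaging six such unbiased-but-noisy estimates should cut the MAE to roughly 2 points, while a fixed bias would survive the averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 20_000      # Monte Carlo replications
truth = 50.0       # hypothetical benchmark value, in percentage points

# Calibrate the noise so a single estimate has MAE ~ 5 points
# (for roughly normal errors, MAE is about 0.8 times the standard deviation).
sd = 5.0 / 0.8

# Six independent, unbiased but heavily weighted (hence noisy) estimates per replication.
estimates = truth + rng.normal(0.0, sd, size=(reps, 6))

mae_single = np.abs(estimates[:, 0] - truth).mean()
mae_avg6 = np.abs(estimates.mean(axis=1) - truth).mean()
print(f"MAE, single opt-in estimate : {mae_single:.1f}")   # ~5.0
print(f"MAE, average of six         : {mae_avg6:.1f}")     # ~5/sqrt(6), i.e. ~2.0

# If each sample instead carried a fixed 5-point bias, averaging would not help:
biased = truth + 5.0 + rng.normal(0.0, sd, size=(reps, 6))
print(f"MAE, average of six biased  : {np.abs(biased.mean(axis=1) - truth).mean():.1f}")  # stays ~5
```

The simulated numbers only illustrate the bias-versus-variance logic; the real test is the 1.4% figure computed from the actual weighted estimates in the next paragraph.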

How does this prediction pan out? If we average each of the weighted estimates and compute the error for each item using the difference between the average estimate and the benchmark, the mean absolute error for the opt-in samples is 1.4% -- almost identical to the mean absolute error for each of the weighted probability samples. That is, the amount of error reduction that comes from averaging the estimates is about what would be predicted if all of the bias could be removed by weighting. Thus, the combination of these six opt-in samples gives an estimate with about the same accuracy as a fairly expensive probability sample (which also required weighting, though not as much).

There is no reason, however, why you should need six opt-in samples to achieve the same accuracy as a single probability sample of the same size. If the samples were selected appropriately, then we could avoid the need for massive weighting. It is still an open question what variables should be used to select samples from opt-in panels or what the method of selection should be. In the past few years, we have accumulated quite a bit of data on the effectiveness of these methods, so there is no need to focus on a set of poorly selected samples from 2004.

Probability sampling is a great invention, but rhetoric has overtaken reality here. Both of the probability samples in this study had large amounts of nonresponse, so that the real selection probability--i.e., the probability of being selected by the surveyor and the respondent choosing to participate--is not known. Usually a fairly simple nonresponse model is adequate, but the accuracy of the estimates depends on the validity of the model, as it does for non-probability samples. Nonresponse is a form of self-selection. All of us who work with non-probability samples should spend our efforts trying to improve the modeling and methods for dealing with the problem, instead of pretending it doesn't exist.


Reifman: Health Care Age-Group Comparisons

Topics: health care , Health Care Reform

Prof. Alan Reifman teaches social science research methodology at Texas Tech University, and is compiling the results of public opinion polls on the specifics of health care reform at his blog, Health Care Polls.

There's been a lot of discussion of how seniors, who already are on Medicare, appear to be the age group least supportive of President Obama's and the Democrats' plans for enacting health care reform. Seemingly at the center of seniors' concerns is the idea of cutting federal support for a program called Medicare Advantage. According to a Los Angeles Times article:

Although scaling back payments would have no effect on a sizable majority of Medicare users, it would create an opening for opponents to make the blanket allegation that the president wants to cut back on Medicare benefits -- as some Republicans are already starting to say.

Also, of course, seniors were more likely to vote for John McCain in last year's presidential election than were younger voters, who went overwhelmingly for Obama.

The diagram below (which you may click on to enlarge) compares different age groups' attitudes toward health care reform in four recent polls. Compiling these percentages was not as easy as I thought it might be, for a variety of reasons. First, only some pollsters make a public release of cross-tabulations between demographic characteristics and health care-related attitudes (other pollsters reserve such cross-tabs for paid subscribers). Second, age cross-tabs on a common attitude item were not always available. My plan was to use general favor/oppose items toward Obama and the Democrats' reform plan, but such an item was not always available so I had to substitute other types of items, as described below. Third, different pollsters use different cut-points to create their age groups. There's always a youngest age group, for example, but some pollsters bracket it from 18-29 whereas others use 18-34; similar discrepancies exist for other age groups, as well.

[Figure: age-group attitudes toward health care reform in four recent polls]
Having said all this, the pattern of seniors showing the least support for Obama/Democratic reform plans is clear and well replicated. For any given color of bar (purple, light blue, green, or orange; each representing a different pollster and question), the shortest bar is the one for seniors.

One other thing to notice is that two polls, ABC/Washington Post and The Economist/YouGov, only reported on a broad 30-64 middle-age group rather than two groups like the other pollsters; whether the lower and upper halves of the 30-64 age range were combined because they did not differ much in their responses, or the pollsters never broke 30-64 year-olds into smaller subsets, I don't know. For these two polls, I have taken the percentage on the respective attitude measure attributed to 30-64 year-olds and plotted it twice (linked by a light-blue or green horizontal line), where a 30s-40s group and a 50s-60s group would ordinarily go. Now that these "housekeeping" matters are out of the way, here are the question wordings used:

Survey USA (Aug. 19): “Now I am going to tell you more about the health care plan that President Obama supports and please tell me whether you would favor or oppose it. The plan requires that health insurance companies cover people with pre-existing medical conditions. It also requires all but the smallest employers to provide health coverage for their employees, or pay a percentage of their payroll to help fund coverage for the uninsured. Families and individuals with lower- and middle-incomes would receive tax credits to help them afford insurance coverage. Some of the funding for this plan would come from raising taxes on wealthier Americans. Do you favor or oppose this plan?”

ABC/Washington Post (Aug. 13-17): “Reform’s supported by 58 percent of adults under age 30, but 44 percent of 30- to 64-year-olds and just 34 percent of seniors, apparently concerned about its potential impact on Medicare” (this quote comes from an article and does not depict the actual survey item).

Economist/YouGov (Aug. 16-18): “If President Obama and Congress pass a health care reform plan, do you think you personally would receive better or worse care than you receive now?” (% Saying Better).

Kaiser Family Foundation (Aug. 4-11): “Do you think you and your family would be better off or worse off if the president and Congress passed health care reform, or don’t you think it would make much difference?” (% Saying Better).

The four polls above were not the only ones that made some type of age-related comparison. Others did, as well, but their age groupings and/or survey items appeared non-comparable in some way to the four polls whose results I plotted. Two additional polls are as follows:

A Harris Interactive poll used what I think are the most interesting age-group descriptors (shown in Table 2 of the linked document): "Echo Boomers (18-32), Gen. X (33-44), Baby Boomers (45-63), Matures (64+)." Harris plotted the percentage of respondents in each age group who rated Obama's job performance in various issue domains as "fair" or "poor." On health care, higher percentages of Matures (71%) and Gen. Xers (69%) gave Obama these unflattering ratings than did Echo Boomers and Baby Boomers (each 62%). Along with some of the figures from other polls plotted above, this finding from Harris shows a non-linear trend (i.e., support does not decline in perfect progression from the youngest to the oldest voters).

Finally, a Penn, Schoen, & Berland poll released in conjunction with AARP reported only comparisons between respondents younger than 50 and 50-plus. A section of this poll's report entitled "Specific Policy Proposals" (on pages 6-7) is perhaps the most worthy of attention. On most of the items, the younger respondents are more favorably inclined, but on others, there is little or no difference.

(Cross-posted to Health Care Polls)


Reifman: The "Public Option"


Prof. Alan Reifman teaches social science research methodology at Texas Tech University, and has begun compiling the results of public opinion polls on the specifics of health care reform at his new blog, Health Care Polls.

Perhaps the most contentious issue among congressional negotiators and interest groups in Washington, DC (and elsewhere) is the so-called public option. The idea is that the government would create a new health-insurance program (modeled to one degree or another on Medicare, the government insurance program for seniors) that people could join. Proponents argue that, by having it compete with private insurers, the public option would help control costs. Opponents, on the other hand, see the public option as yet another government intrusion into an area they feel should be left to the private market.

Where does the public seem to stand? Not surprisingly, the public option has been widely polled, and we shall focus exclusively on it today. As seen in the diagram below (which you can click on to enlarge), levels of support for the public option vary widely across different polls, despite the relative consistency of question wording (all the survey items refer in some fashion to the public option being a government health-insurance program that would compete with private insurance companies). The predominant trend, I would say, is that a majority of respondents supports a public option, with five of the eight polls showing between 52 and 66 percent in favor.


Still, though, two other polls show support in the mid-40s and one poll (Rasmussen) has support way down at 35%. What to make of this? Let's start with Rasmussen. Whereas Rasmussen's presidential-election polling has tended to be highly accurate (relative to the actual results), other types of polls from this outfit appear to have had a Republican slant. Here are some examples:

*Whereas most polls tended to have George W. Bush's job-approval ratings during the waning months of his administration in the low-30s or even the 20s, Rasmussen consistently had it around 35%.

*Whereas virtually every pollster other than Rasmussen has shown a majority of voters to prefer the Democrats (at this early point) in next year's U.S. House elections, Rasmussen has been showing the Republicans in the lead (albeit with large percentages undecided).

Polling analysts refer to systematic differences in the results (on the same basic issue) between different survey firms (or survey "houses") as house effects. These may stem from different firms' practices regarding question-wording, sample weighting, etc. On health care reform and other issues, it looks to me as though Rasmussen has a substantial house effect.

There's one other aspect of the public-option polling I'd like to point out. As can be seen in the diagram above, I have highlighted in red the words "option" and "offering" in the wording of some of the survey items. It appears that wordings stressing the voluntariness of the public option (i.e., that it is an "option," or something "offered" to the consumer) tend to elicit higher support than wordings that don't highlight voluntariness as much. This is just a hunch. If anyone has other explanations for the large variation in support between the polls, please share them in the comments section.

(Cross-posted to Health Care Polls)


Nyhan: Overstating public incoherence on the deficit


Today's guest pollster contribution comes from Brendan Nyhan, a political scientist and Robert Wood Johnson Foundation Scholar in Health Policy Research at the University of Michigan. This entry is cross-posted at his blog, Brendan-Nyhan.com.

Matthew Yglesias calls the public "ill-informed and hypocritical" based on a New York Times poll that found "Most Americans continue to want the federal government to focus on reducing the budget deficit rather than spending money to stimulate the national economy... [y]et at the same time, most oppose some proposed solution for decreasing it."

The problem, however, is that the available evidence doesn't support Yglesias's conclusion (which is encouraged by the way the poll is framed in the Times). When you look at the raw poll results (PDF), you'll see that the public prefers reducing the deficit to stimulating the economy 58%-35%, but 53% oppose cuts in public services and 56% oppose higher taxes. Those numbers may seem "ill-informed and hypocritical," but the problem is that we're dealing with aggregate data (this is what is known as an ecological inference problem). We can't draw any strong conclusions about the proportion of individual members of the public who have incoherent preferences about deficit reduction without access to the raw data. Ideally, we would break out the members of the public who advocate deficit reduction over stimulus and see how many of them oppose both higher taxes and reduced services. That's the quantity of interest, but it's unfortunately not available to us at this point.
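
As an illustration of the ecological-inference point, here is one purely hypothetical way to allocate 100 respondents that reproduces the Times poll's marginals (58% deficit-first, 53% opposed to service cuts, 56% opposed to tax increases) while not a single deficit-first respondent opposes both remedies. The joint distribution is invented; the real crosstab could look very different, which is exactly why the aggregate numbers alone prove nothing about individual incoherence.

```python
# Hypothetical joint distribution of 100 respondents, invented to match the
# reported marginals; the real individual-level data are not available.
# Columns: (prefers deficit reduction?, opposes service cuts?, opposes tax hikes?, count)
cells = [
    (True,  True,  False, 11),  # deficit-first, would accept higher taxes
    (True,  False, True,  14),  # deficit-first, would accept service cuts
    (True,  False, False, 33),  # deficit-first, would accept either remedy
    (False, True,  True,  42),  # stimulus-first, opposes both remedies
]

total         = sum(n for *_, n in cells)
deficit_first = sum(n for d, c, t, n in cells if d)
oppose_cuts   = sum(n for d, c, t, n in cells if c)
oppose_taxes  = sum(n for d, c, t, n in cells if t)
incoherent    = sum(n for d, c, t, n in cells if d and c and t)

print(f"prefer deficit reduction: {deficit_first / total:.0%}")   # 58%
print(f"oppose service cuts:      {oppose_cuts / total:.0%}")     # 53%
print(f"oppose higher taxes:      {oppose_taxes / total:.0%}")    # 56%
print(f"deficit-first AND oppose both remedies: {incoherent / total:.0%}")  # 0%
```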

Update 7/30 12:12 PM: Yglesias has generously updated his post to note that you "can't infer very much about individual preferences from this aggregate data."


Nyhan: The End of the Obama Honeymoon


Today's guest pollster contribution comes from Brendan Nyhan, a political scientist and Robert Wood Johnson Foundation Scholar in Health Policy Research at the University of Michigan. This entry is cross-posted at his blog, Brendan-Nyhan.com.

Just to briefly elaborate on the point I made last week, here are comparable plots of President Obama's overall job approval and approval of his handling of health care:



As you can see, what's happening on health care is a leading indicator of the end of Obama's honeymoon period. As we return to our normal, highly polarized political climate, most Republicans and Republican-leaning independents will disapprove of a Democratic president's performance in office and his handling of high-salience issues, especially in a bad economy. As a result, Obama's numbers will inevitably decline across the board -- this reality shouldn't be surprising to anyone who works in or reports on politics.

Going forward, we should focus on more important questions. First, how much will Obama's approval numbers decline? Given the state of the economy, it wouldn't be surprising to see him in the low- to mid-40s by the end of the year. Second, what is the distribution of opinion on Obama's handling of health care? Aggregate public opinion on the issue is less relevant than how it's playing in the states of key senators whose votes will determine the fate of the legislation in Congress.


Nyhan: The Collapse of Sarah Palin


Today's guest pollster contribution comes from Brendan Nyhan, a political scientist and Robert Wood Johnson Foundation Scholar in Health Policy Research at the University of Michigan. This entry is cross-posted at his blog, Brendan-Nyhan.com.

The Washington Post is reporting that a new ABC/WP poll shows a major decline in Sarah Palin's favorability ratings. Her favorables have dropped from a peak of 58% after the GOP convention in September to 40% now, while her unfavorables have surged from a low of 28% to 53% now. Her 40/53 favorable-unfavorable ratio puts her into Hillary/Bush/Cheney territory as one of the most polarizing figures in American politics -- quite an achievement for someone who was a complete unknown less than a year ago.

It's almost impossible to imagine Palin getting the GOP nomination in 2012 at this point (though Intrade still puts the probability at 16%). With numbers like that, her general election prospects are dim, and the Post poll shows growing doubts about her among Republicans as well:

Republicans and GOP-leaning independents continue to rank Palin among the top three contenders in the run-up to 2012, however, with 70 percent of Republicans viewing her in a positive light in the new poll. But her support within the GOP has deteriorated from its pre-election levels, including a sharp drop in the number holding "strongly favorable" impressions of her.

And while Palin's most avid following is still among white evangelical Protestants, a core GOP constituency, and conservatives, far fewer in these groups have "strongly favorable" opinions of her than did so last fall.

...Perhaps more vexing for Palin's national political aspirations, however, is that 57 percent of Americans say she does not understand complex issues, while 37 percent think she does, a nine-percentage-point drop from a poll conducted in September just before her debate with now-Vice President Biden. The biggest decline on the question came among Republicans, nearly four in 10 of whom now say she does not understand complex issues. That figure is 70 percent among Democrats and 58 percent among independents.

Her favorability numbers also stack up extremely poorly against the rest of the expected 2012 field, as this graph illustrates:

[Chart: favorable-unfavorable ratings for the expected 2012 GOP field]

The candidates are ordered left to right by their favorable-unfavorable ratio in the most recent poll on Pollingreport.com. As you can see, Palin's numbers are even worse than Newt Gingrich's (!) -- the other highly polarizing candidate -- and she has less room to change her image because so many Americans already have an impression of her. By contrast, Romney, Huckabee, Jindal, and Pawlenty start the race without that sort of baggage and are therefore much more likely to make a serious run for the nomination.

To be sure, it's not impossible to come back from numbers like Palin's. Hillary Clinton overcame numbers that were nearly as bad and almost won the Democratic presidential nomination, but she did so with a great deal of hard work and discipline -- qualities that Palin appears to lack. Runner's World photo spreads, feuds with David Letterman, and useless policy op-eds are not going to turn her image around anytime soon.


Murray: Estimating Turnout in Primary Polling


Patrick Murray is the founding director of the Monmouth University Polling Institute and maintains a blog known as Real Numbers and Other Musings.

There are a couple of pieces of accepted wisdom when it comes to contested primary elections versus general elections: 1) turnout has a bigger impact on the ultimate margin of victory in primaries and 2) primaries are more difficult to poll (see point #1).

The voters who show up for primaries come disproportionately from either end of the ideological spectrum. Even in states with closed primaries (i.e., where one has to pre-register with a party to vote in its primary), there is still a particular art to determining which groups of voters should be included in the likely voter sample.

Voters' likelihood of turning out generally correlates with their ideological inclination. Last year's Democratic presidential nomination contest provides a good illustration of this. Lower-turnout caucus states saw a bigger proportion of higher-educated liberal activists participate in the process. These same voters also showed up in the primary states, but they were joined by a good number of less educated, blue-collar Democrats. Result: Obama basically swept the caucus states, while Hillary Clinton held her own in the primaries. Texas, which held both a primary and a caucus that were won by different candidates, is a stark illustration of this turnout effect.

The same is true for Republican primaries. Lower turnout means a larger proportion of the electorate will be staunchly conservative in their views. As turnout increases, it's moderates who are joining the fray, thus diminishing the conservative voting bloc's overall power. And with the GOP being in its present ideologically-splintered state, small changes in turnout can have a real impact in primaries cast as battles between the party's ideological factions.

To some extent, we saw this play out in New Jersey's recent gubernatorial primary where the two leading candidates were seen as representing different wings of the Republican party. Former mayor Steve Lonegan cast himself as the keeper of the conservative flame, while former U.S. Attorney Chris Christie claimed to adhere to core conservative principles (e.g. anti-abortion), but presented himself as a more centrist option. New Jersey's Republican voters agreed - a plurality of 47% described Christie as politically moderate while a majority of 56% tagged Lonegan as a conservative.

The Monmouth University/Gannett New Jersey Poll released a survey nearly two weeks before the June 2 primary showing Christie with an 18-point lead over Lonegan - 50% to 32%. New Jersey has a semi-open primary - meaning both Republicans and "unaffiliated" voters are permitted to vote (although unaffiliateds have their registration changed to Republican if they do vote). So, technically, about 3.5 million of New Jersey's more than 5 million registered voters were eligible to vote in the recent GOP primary. But in the last two contested gubernatorial primaries, only between 300,000 and 350,000 votes were actually cast.

So, how do you design a sampling frame for that? First, it's worth noting that state voter statistics show that extremely few unaffiliated voters ever show up for a primary - certainly not enough to impact a poll's estimates. So we are left with about one million registered Republicans, of whom still only one-third will vote. That is, of course, IF turnout is typical (more on that below).

Our poll for this primary used a listed sample of registered Republican voters who were known to have voted in recent primaries. It was further screened and weighted to determine the propensity of voting in this particular election (based on a combination of known past voting frequency and self-professed likelihood to vote this year). In the end, our model assumed a turnout of about 300,000 GOP voters, based on turnout in the past two gubernatorial primaries.

However, turnout in other recent GOP gubernatorial primaries in New Jersey has gone as low as 200,000 - that was in 1997, when incumbent Christie Whitman went unchallenged. Turnout in contested U.S. Senate primaries is also generally around the 200,000 level. On the other hand, turnout has been much higher than 300,000 as well. It even surpassed 400,000 as recently as 1981.

The GOP primary saw higher than average turnout in 1993 - another year when a trio of Republicans were vying to take on an unpopular Democratic incumbent. So, it was fair to speculate that Governor Jon Corzine's weak position in the polls would give GOP voters extra incentive to turn out in the expectation of scoring a rare general election win. On the other hand, perhaps the state's Republicans have become so demoralized by their poor standing nationally and 12-year statewide electoral drought that turnout could be lower than the 300,000 used for our poll estimate.

Because we had information on actual primary voting history for each voter in our sample - i.e. rather than needing to rely on notoriously unreliable self-reports - it was possible to re-model the data from two weeks ago with alternative turnout estimates. If the GOP primary turnout model was set well above 430,000 - a 40-year record turnout for a non-presidential race - the Christie margin in our poll grew to 23 points. Alternatively, if the turnout model was pushed down to about 200,000 - a typical U.S. Senate race level - the gap shrank to 13 points. In other words, adjusting the primary poll's turnout estimate from 5% to 12% of eligible voters could swing the results by 10 points!

Why? The analysis showed that "strong" conservatives comprise about half of New Jersey's 200,000 "core" GOP turnout - and this group was largely for Lonegan. But when we widened the turnout estimate, more and more moderates entered the mix. As a result, Chris Christie gained one point on the margin for approximately every 25,000 extra voters who "turned out."
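
A stylized sketch of that re-modeling exercise: rank the list-based sample by turnout propensity, widen the likely-voter cutoff, and recompute the margin as more moderate voters enter the pool. The preference curve below is invented so that the output roughly matches the 13/18/23-point pattern reported in this post; it is not Monmouth's actual model or voter file.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voters = 1_000_000                       # rough registered-Republican universe

# Voters indexed from most to least likely to turn out (0 = hard-core primary voter).
rank = np.arange(n_voters) / n_voters

# Hypothetical preference curve: the high-propensity core is more Lonegan-friendly,
# the lower-propensity periphery more moderate and Christie-friendly. Invented numbers,
# calibrated only so the margins below land near +13 / +17 / +23 points.
p_christie = 0.52 + 0.44 * rank
prefers_christie = rng.random(n_voters) < p_christie

for turnout in (200_000, 300_000, 430_000):
    likely = prefers_christie[:turnout]     # widen the likely-voter cutoff
    margin = 100 * (2 * likely.mean() - 1)  # Christie % minus Lonegan %
    print(f"turnout {turnout:>7,}: Christie margin ~ {margin:+.0f} points")
```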

On primary day, Christie ended up beating Lonegan by a respectable 13 point margin - 55% to 42% - on a 330,000 voter turnout. Based on the model above, if Republicans had been a lot less enthusiastic, Lonegan may have been able to narrow this gap to 8 points. On the other hand, record level turnout would have given Christie a 16 or 17 point win.


Abramowitz: Has there been a Shift in Abortion Attitudes?


Alan I. Abramowitz is the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. He is also a frequent contributor to Larry Sabato's Crystal Ball.

On May 15th, the Gallup Poll reported what they described as a significant shift in Americans' attitudes on the issue of abortion. For the first time since Gallup began asking the question in 1995, more respondents described themselves as "pro-life" than "pro-choice" on the issue of abortion. The proportion of Americans describing themselves as "pro-choice" fell from 50% in May of 2008 to 42% in May of 2009 while the proportion describing themselves as "pro-life" increased from 44% to 51%. To back up this conclusion, Gallup cited a recent Pew Poll that showed a decline from 54% to 46% in the proportion of Americans who wanted abortion legal in all or most cases and an increase from 41% to 44% in the proportion who wanted abortion legal in only a few or no cases.

While the results of these two polls appear to show a shift in public opinion on abortion, Gallup neglected to report an important fact about the Pew results that might have undercut this claim. Pew has asked the same question on at least seven occasions since early 2007 with results ranging from a 45-50 split in February/March of 2007 to a 57-37 split in June of 2008. Taken together, these results show no clear trend. The 2009 results could reflect a real change, or they could just be random noise.

Gallup also made no mention of a CNN poll in late April of this year that showed a 49-44 advantage for the "pro-choice" label over the "pro-life" label. CNN has asked the "pro-life" vs. "pro-choice" question three times since 2007 with results ranging from a 45-50 split in June of 2007 to a 53-44 split in August of 2008 to the recent 49-44 split. Again, no clear trend is evident in these results.

And now a new AP poll appears to show continued stability in public attitudes on the issue of abortion. This poll, conducted between May 28 and June 1, found that 51% of Americans want abortion legal in all or most cases vs. 45% who want abortion illegal in all or most cases. These results can be compared with two polls conducted last year. An NBC/Wall Street Journal Poll in early September found 49% of Americans wanted abortion legal always or most of the time while 49% wanted it illegal with no exceptions or only a few exceptions. And a Washington Post/ABC Poll in August found that 54% of Americans wanted abortion legal in all or most cases while 44% wanted it illegal in all or most cases.

The Washington Post/ABC Poll has actually asked this question 23 times between June of 1996 and August of 2008. In these 23 polls, support for keeping abortion legal in all or most cases has ranged from 49% to 59%. Interestingly, the highest and lowest levels of support for legal abortion were found in two polls conducted only a few months apart in 2001.

The safest conclusion one can draw from these results is that at this point the evidence for a significant shift in public attitudes toward abortion is far from conclusive.


Selzer: Study on Data Quality


[J. Ann Selzer is the president of Selzer & Company and conducts the Des Moines Register's Iowa Poll.]

Can you trust your data when response rates are low? And, in this age of the ubiquitous internet, do we make too much out of its inability to employ random sampling? We asked and answered those questions in a study we conducted a few years ago, commissioned by the Newspaper Association of America. Given recent online discussions of data quality, I revisited this study.

In April and May of 2002, five surveys-asking the same questions-were conducted in the same market. The only difference was the data collection method used to contact and gather responses from participants. This rare look at what role data collection methodology plays in the quality of data yields some fascinating results. Our goal for each study was to draw a sample that matched the market, to complete interviews with at least 800 respondents for each separate study, and to gather demographics to gauge against the Census.

Method of contact. Our five methods of contact were:

  • Traditional random digit dial (RDD) phone (landline sample);

  • Traditional mail;

  • Mail panel, contracting with a leading vendor to send questionnaires to a sample of their database of previously screened individuals who agree to participate in regular surveys, with a small incentive;

  • Internet panel, contracting with a leading vendor to send an e-mail invitation to a web survey to a sample of online users who agree to participate in regular surveys, with a small incentive; and

  • In-paper clip-out survey, with postage paid.

The market. We selected Columbus, Ohio as our market. It was sufficiently large that the panel providers could assure us we would end up with 800 completed surveys, yet it is perceived to be small enough that mid-sized markets would feel the findings would fit their situation.

Analysis. To compare datasets, we devised an intuitive method of analysis. For each of six demographic variables - age, sex, race, children in the household, income, and education - we compared the distribution to the 2000 Census, taking the absolute value of the difference between the data set and the Census. For example, our phone study yielded 39% males and the Census documents 48%, so the absolute value of the difference is nine points. We calculated this score for each segment within each demographic, added the scores, then divided by the number of segments to control for the fact that some demographics have more segments than others (for example, age has six segments, education has three). We then summed the standardized scores for each method; those raw totals let us judge the rank order of methods according to how well each fits the market. Warren Mitofsky improved our approach for this analysis.
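
For concreteness, here is a minimal sketch of that scoring rule using made-up distributions for just two demographics (sex and age); the actual study scored six demographics against the 2000 Census, and Mitofsky's refinement is not reproduced here. The 39%-male phone figure is the example cited above; everything else is illustrative.

```python
# Fit score for one data collection method: for each demographic, average the
# absolute point differences from the Census across its segments, then sum
# those per-demographic averages. Lower totals mean a closer fit to the market.
# The distributions below are illustrative, not the study's actual numbers.

census = {
    "sex": {"male": 48, "female": 52},
    "age": {"18-24": 12, "25-34": 19, "35-44": 21, "45-54": 18, "55-64": 12, "65+": 18},
}

phone_sample = {
    "sex": {"male": 39, "female": 61},   # the nine-point male gap mentioned above
    "age": {"18-24": 8, "25-34": 15, "35-44": 22, "45-54": 20, "55-64": 14, "65+": 21},
}

def fit_score(sample, census):
    total = 0.0
    for demo, targets in census.items():
        diffs = [abs(sample[demo][seg] - pct) for seg, pct in targets.items()]
        total += sum(diffs) / len(diffs)   # average over segments, so demographics
    return total                           # with more segments don't dominate

print(f"fit score (lower is better): {fit_score(phone_sample, census):.1f}")
```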

Problem with the internet panel. I'll just note that both panel vendors were told the nature of the project - that we were doing the same study using different data collection methods to assess the quality of the data. I said we wanted a final respondent pool that matched the market. They would send reminders after two days. Participants would get points toward rewards, including a monthly sweepstakes. The internet panel firm e-mailed 7,291 questionnaires; after 850 completed responses were obtained, they made the survey unavailable to others who had been invited. Because the responses to the first 850 completed surveys were so far out of alignment with the Census, we opted to implement age quotas post hoc, sending additional invitations to the survey so we could systematically substitute respondents from underrepresented age groups for some in the 45-54 age group, which was overrepresented. We reported both sets of findings - those before and after the adjustment.

Results. Unweighted, the RDD phone contact method was best; the in-paper clip-out survey was worst.

[Chart: fit-to-Census scores by data collection method, unweighted]

Weighting just for age and sex improved all data collection methods. Most notable is traditional mail, which comes close to competing with traditional phone contact after weighting for age and sex. The in-paper survey showed the greatest improvement because the respondent pool was strongly skewed by older women. One in four respondents to that survey were women age 65 and older (26%). The median age was 61 (meaning, just to be clear, half were older).

[Chart: fit-to-Census scores by data collection method, weighted for age and sex]

Other data. This study was commissioned by the newspaper industry, so it was natural to look at readership data. Scarborough is to newspapers what Nielsen is to television, and we had their data from the market for comparison. Partly because of the skew toward higher income and especially higher educational attainment in the internet panel, that method produced stronger readership numbers - higher than the Scarborough numbers and higher than any other data collection method. This was one more check on whether a panel can replicate a random sample, and it casts suspicion on whether a panel can ever sufficiently control for all relevant factors to deliver a picture of the actual marketplace.

Concluding thoughts. I have to wonder how this study might change if replicated today. The rapid growth in cell-phone-only households probably changes the game somewhat. Panel providers probably do more sophisticated sampling and weighting than was done in these studies. Our mail panel vendor indicated they typically balance their sample draw, though their database in Columbus, Ohio, was just on the low end of being viable for this study, so we're confident less rather than more pre-screening was done. We did not talk with the online vendor about how they would draw a sample from their database, though we repeatedly said we wanted the final respondent pool to reflect the market. It is our sense that little was done to pre-screen the panel or to send out invitations using replicates to try to keep the sample balanced. Nor did they appear to have judged the dataset against the criteria we requested before forwarding it to us; it did not look like the Columbus market. We specified we did not want weighting on the back end because we wanted to compare the raw data to the Census. Had they weighted across a number of demographics, they certainly could have better matched the Census. And maybe that is their routine now. But I wonder how the readership questions might have turned out, for example. The Census provides excellent benchmarks for some variables, but not all. Without probability sampling, I always wonder whether the attitudes gathered from panels do, in fact, represent the full marketplace.

Epilogue. Of course it would be a good idea to replicate this study given recent changes in cell phone use. The non-profit group that commissioned this study just announced it is laying off half its staff, so they are unlikely to lead this quest.


Rivlin & Rivlin: Public Opinion on Health Care Reform 1993 and 2009. Is this a New Day or just Groundhog Day?


Sheri Rivlin and Allan Rivlin are the Co-Editors of CenteredPolitics.com. Allan Rivlin is a Partner at Hart Research Associates. In 1993 Allan Rivlin was a Special Assistant in the U.S. Department of Health and Human Services.

Remember 1993? Snoop Dogg was on the radio. Grunge ruled the world of fashion, and one of the top movies was "Groundhog Day," in which Bill Murray had to relive the same day over and over until he figured out just what he had to offer the world and finally got it right.

A charismatic young Democrat had just been elected President promising, among other things, to reform a broken health care system. Public opinion seemed to be behind him, but the effort ultimately failed, and a more careful reading of public opinion in those early months of the Clinton Administration reveals some of the fault lines that eventually sank the effort. Not only did reform fail to make it out of either house of Congress, but in the 1994 election voters ratified the decision and punished the Democrats who had supported reform rather than the Republicans who had defeated the plan.

[Slide 1]
Now a new Democrat has taken office promising health care reform. The question becomes: has enough changed in public opinion to offer hope that the outcome will be different this time around? A thorough review of the available polling then and now is less than encouraging for supporters of comprehensive health care reform (a category that includes the authors, who should be understood to be supporters of comprehensive reform, albeit sobered ones).

Where common questions can be found in polls leading up to health reform in 1993 and 2009, the public is currently less attuned to the issue, expresses less dissatisfaction with the status quo, and offers lower levels of support for the general prospect of reform. But an even greater challenge for reformers is the fact that the basic contours of public opinion that undercut the previous effort continue to hold today - perhaps even more so.

Just as in 1993, it would be easy to read current polls as highly encouraging. Many of these measures appear quite strong; it is just that they are not as strong as comparable numbers in surveys taken before the start of the 1993 effort, when many pollsters, including those advising the White House, were fooled into believing they had a clear mandate for major change.

Now: A 2008 Harris Interactive survey finds 29% saying so much is wrong with the current health care system that it needs to be completely rebuilt, and an additional 53% saying that while there are some good aspects, the system needs fundamental changes. That adds up to 82% calling for fundamental change. Just 13% say the system works pretty well and only needs minor changes.

Then: The problem is, these results were typical, and even a little stronger, in the period before the failed effort. As early as 1991, the same pollsters (then Lou Harris and Associates; the "Interactive" in the firm's current name had not yet been coined) used the same question and recorded 42% saying so much was wrong with the current health care system that it needed to be completely rebuilt, and an additional 50% saying that while there were some good aspects, the system needed fundamental changes - for a total of 92% calling for fundamental change. Just 6% said the system worked well and only needed minor changes.

Now: A 2008 Harvard School of Public Health survey found a 55% majority in support of "national health insurance" with 35% opposed. While this is unlikely to be a phrase that this round of reformers will find useful or descriptive of their proposals, the term was in common use in 1993, so it allows for an apples-to-apples comparison.

Then: The same researchers using the same phrase in 1993 found 63% supporting "national health insurance" and just 26% were opposed.

[Slide 2]
Then as now, the real problems facing health care reformers were structural and clearly visible in the polls. As the nation reached near consensus that there was a problem, there was never any such agreement on the specific solution. While many people agreed then, as they do now, that it is wrong that so many Americans are either uninsured or underinsured, the priority then, as now, for most people was on finding ways to lower their own health insurance costs. Then as now, most people had health insurance that they judged to be pretty good.

Then: In 1993 a 77% majority told Martilla and Kiley that they were at least somewhat satisfied with their own health care coverage.

Now: For comparison, 82% expressed a similar level of satisfaction with their own insurance in a 2007 Greenberg Quinlan Rosner Poll.

Then: A 1993 Gallup Poll asked people about their priorities for reform; 38% said they wanted health insurance that included all Americans, a bare majority of 51% wanted to control costs, and 10% volunteered that they wanted reform that did both.

Now: The comparison here is a little less direct, but in 2008 the Harvard School of Public Health and the Kaiser Family Foundation found similar results, with 45% saying they want to make health care insurance more affordable and 22% saying their goal for reform would be to expand insurance to the uninsured.

Then: An NBC News/Wall Street Journal Poll in March 1993 found 66% agreeing with the statement "I would be willing to pay higher taxes so that everyone can have health insurance." Just 30% were opposed. A Martilla and Kiley poll found a similar result, but in a clear sign of the problems that would emerge, among their 65% willing to pay higher taxes, just 25% said in a follow-up question that they would be willing to pay as much as $50 more a month, 40% said they would pay $30, and a 62% majority was only willing to go as high as $10 more per month in order to give coverage to everyone.

Now: In the most recent NBC News/Wall Street Journal Poll, conducted February 26 to March 1, 2009, the public is split, with just 49% agreeing with the statement "I would be willing to pay higher taxes so that everyone can have health insurance" and nearly as many, 45%, disagreeing.

[Slide 3]
Does all of this mean that the Obama plan is doomed before it has even begun? Of course not, but putting the apparently positive numbers from many of today's poll questions in the context of the even more positive numbers from polls taken before the previous failed effort should serve to underscore the difficulty of the challenge ahead.

It is clear that the new team will benefit from lessons learned in the earlier health care reform effort. Reflecting a hard-won understanding that most Americans are fairly satisfied with their current coverage, the first words out of any Administration spokesperson, including President Obama, on the subject of health care reform are that if you like what you have now, you will be able to keep it. Also reflecting the priorities expressed in public opinion polls today (and back then), far greater emphasis is now being placed on cost containment than on extending coverage.

The real question will of course come in the details of the proposal. If Obama can come up with a plan that extends coverage to more Americans without a major increase in the burdens it places on the individuals and businesses who pay for it, then it will be difficult for those who want to see this effort fail to generate much public opposition. Naturally this is a tall order, but we would not want to be among the legions of commentators who have had to swallow their doubts that Barack Obama can achieve the difficult.

The only thing we will predict is that there will be a lot of articles written looking at statistics like some of the ones mentioned here (in fact they are likely to grow stronger as the heat is turned up on the issue) to make the case that this time around the public strongly supports reform. We hope this little bit of context will help keep these articles in perspective.

The authors wish to thank Julia Kurnik for Research Assistance and Robert Blendon of the Harvard School of Public Health for invaluable assistance. Would anyone try to write this article without first calling Bob Blendon?


Gould: Greenberg versus Penn, Continued

Topics: Dispatches from the War Room , Mark Penn , Philip Gould , Pollsters , Stan Greenberg

[This Guest Pollster contribution comes from Philip Gould, who served as a polling and strategy adviser to the British Labour Party for general elections held from 1987 until 2005.

Editor's note: Gould was a central figure in the dispute between pollsters Stan Greenberg and Mark Penn that we have covered this week, as he was responsible for managing the services that each provided to the Labour Party. He submitted his comments to Pollster.com in an effort to help clarify and resolve some of the issues raised here this week.

Since I emphasized the question of whether Penn delivered complete marginals and cross-tabulations, I want to promote the following paragraphs that come toward the end of Gould's memo:

After a poll Stan normally presented a filled-in questionnaire and a full banner book containing complete cross tabs.

Mark had a different approach. Following a poll he quickly made available a full and extensive polling report. This went immediately to the whole campaign. This was not an inconsiderable document. I have one in front of me now: it is 18 pages long; it contains historic voting and favourability data; it closely examines 12 targeting groups ranging from rural lower-class Conservatives to union households; it uses seven different batteries to examine campaign issues. It analyses responses to the news and key policy areas. And of course it contains numerous message batteries: in all, well over 100 questions were asked and recorded. All of these were analysed by voting preference, and sometimes by demographic categories.

These reports were extensive and useful documents, far in excess of a normal filled-in campaign questionnaire. They did not constitute a full banner book and did not contain 'full marginals' in the manner favoured by Stan Greenberg, but what Penn did supply was both exhaustive and useful, and certainly met the regular needs of the campaign. As one senior campaign official with responsibility for polling in 2005 has said, Mark Penn 'could quite fairly argue that the memos were intended for an audience that had no time or interest in delving into every corner of the data. I don't think that in any way illegitimises the findings or his advice.' On a personal note, Mark Penn invariably supplied any additional cross-tab or targeting data that I required, and I presume the same is true of others. Two pollsters, two approaches.

Gould's piece covers far more ground than this narrow excerpt.  It is well worth reading in full. 

-- Mark Blumenthal]


I am aware that intercession in the Greenberg/Penn polling war can precipitate what has probably never happened before: uniting Stan and Mark in the face of a common enemy (i.e. me). But with all the risks it entails I will press on. From the start I must declare an interest: I suspect I am one of the very few people around who can claim that they like and respect both Greenberg and Penn (I can already feel them starting to unite against me!). I worked with Stan for well over ten years and believe him to be an outstanding pollster and strategist. I worked with Mark for a much shorter time, and came to greatly appreciate his skills too, different from Stan's certainly, but considerable for all that. It is in that spirit that I write this piece.

There are so many issues here, of methodology, strategy, personality and of course memory, that getting to the truth of what actually happened in the UK election campaign of 2005 is probably impossible, but I will try at least to clear away some of the fog. Not by focusing on the smaller, though I accept crucial, disagreements between the two pollsters, but by trying to paint a bigger picture, and using where possible contemporary sources: notes written at the time, my rather sketchy diary, and in particular a lecture I gave at the LSE on the campaign in 2006, which pretty accurately sums up what I believe about the campaign.


For me the starting point was a letter faxed to me by Stan in 1992 asking me to fly to Little Rock to observe the Clinton campaign, and to debrief on the negative campaigning that the Conservatives had used to win the 1992 election in the UK.

I left immediately and in a way it saved my political life, taking me from a failing and dismal Labour project, to a world of political confidence and optimism. I wrote about this later: 'I still vividly recall arriving in Little Rock in 1992 still stunned by Labour's awful defeat in that same year, feeling the late summer heat as I left the airport, and arriving at the campaign headquarters and seeing a whole new world of possibility emerge. I remember the kindness I received; embarrassed by our failure in Britain but being told that defeat was a step on the road to victory, a badge of honour not of blame. Above all I remember the incredible energy and pulsating life of the campaign, its extraordinary confidence, and the way it had simply revolutionised the way campaigns had been run.'

I learned much in that campaign, and brought most of it back to the UK, including Stan Greenberg, who became our pollster in 1994 when Tony Blair became leader of the Labour Party.

For ten years Stan was part of the team, but by 2004 there were signs of dissatisfaction on both sides. You can clearly sense in Stan's book a growing sense of disenchantment with the New Labour project, provoked in part by the Iraq war but deeper than that. Equally there was in Downing Street a sense that it was time for new voices, and new ideas, in the face of mounting political difficulty. This was not my view but it was the view of some, and in the summer of 2004 Mark Penn was commissioned to conduct a series of polls, to see if new insights could be gained. I was not told of this, principally, I am sure, because of my closeness to Stan (we used to be business partners as well as colleagues). Finally, in mid-September, I was told of Penn's involvement, and informed that Downing Street wanted to use Mark but keep Stan involved. I was mandated to manage both relationships, which was not to prove an easy task.

In December I told Stan of Mark's involvement, and it was a pretty grim meeting which caused me some distress. In his book Stan claims that I was afraid to meet with him alone to inform him, which is quite untrue: I did tell him, was the first to do so, and did so with just the two of us present. By then I had started to have meetings with Mark Penn, and so began the period of dual pollsters. This situation continued through 2005 until the election in May. Stan hated it and was clearly resentful and unhappy - but he kept going, and made the best of it. In the campaign itself Stan felt more unhappy still, with Mark Penn the dominant pollster. I certainly do not claim that I handled Stan perfectly in this period; I was too irascible and sometimes angry. I regret that, especially when it sometimes rubbed off on Sam Weston, his excellent assistant on the campaign. It was tough dealing with two formidable pollsters, but I do not agree with Stan that it would have been better if he had left the campaign in 2004, as he suggests in his recent posting. In the first place there was the issue of loyalty to Stan, who had helped us so well over so many years. Secondly, I felt that in this campaign, fighting as it was the headwinds of public hostility on Iraq and other issues, more voices were better than fewer.

Greenberg and Penn are very different pollsters, with very different approaches, and crucially with very different value sets, but both have significant contributions to make. Stan puts methodological exactitude first: he is the Volvo of pollsters, highly engineered and meticulously thorough. He is strategically astute, but follows the data carefully because he has so much respect for it. His politics are modernising but rooted hard in fairness-based populism: his favourite dividing line will always be based on a contrast between the many, not the few, and his emotional heartland will always remain hardworking families. He is a natural iconoclast, always challenging, often doubting, which leads him sometimes to put flexibility ahead of consistency. Mark also uses and understands data well but leans to strategy ahead of data, and he can be strategically brilliant. He prefers consistency to flexibility, believing that a strategic position once adopted should be held unless there is compelling evidence to the contrary. His instinct is to stick rather than to shift. Mark's politics are far less populist than Stan's, favouring aspiration over fairness as a guiding concept.

This then was the background to Stan's book, and to Mark's response to it. If that is the context, what then of the issues? I will take them in turn.

1 Mark is wrong to say that I knew of his work from the outset; I did not. I first discovered Penn's involvement in the Labour Party election campaign on September 13th.

I repeat verbatim my diary entry for that day:

'Sally (Morgan) said to me: look we have been using Mark Penn for a polling project with Tony and we want you manage that relationship, and they were worried about Stan. I was very shocked by this, that behind my back I think they had conducted at least two and possibly three polls, and had a complete operation going since the July period. They did not want to involve me because they did not want to hurt Stan'.

This account will be supported by many who worked at Downing Street and on the campaign, and is unarguably the truth of what happened.

2 I disagree with Stan's characterisation of the campaign as over-rigid and too inflexible. In my 2006 LSE lecture I explain why I believed that in this campaign consistency was at a premium, not least because of our determination to avoid the fate of the Kerry campaign. I wrote then:

'In truth Senator Kerry was trapped by ambivalence, not certain about the war, and he found it hard to appear certain about anything else. He had very talented advisers but he did not have a political project. The flexibility of the war room, essential twelve years ago, was inadequate as a compass in the rough and treacherous waters of a nation at war. We needed battleships, not light cruisers. In Britain we watched and we learned. This time the view of the campaign at almost every level, and certainly my absolute conviction, was that in a time of uncertainty, turbulence, and electoral sullenness the first imperative was absolute strategic clarity, robustness, and constancy. That is why we were so determined to show courage and consistency under fire. To hold our nerve and certainty despite all the usual noises off. Above all to be strategic not tactical. That is why we were so determined to make the economy not just the campaign message but the campaign anchor, repeated endlessly until it broke through. Why Tony Blair and Gordon Brown campaigned together to hammer home the economic message. Why our message was simple and potent - Forward, not back - and why it was repeated endlessly from the start of the campaign until the finish. The vicissitudes of terror, war and insecurity made robust confident clarity essential, but the need to engage had not disappeared, nor had the need to listen and respond as the public vented anger and concern. In our campaign strength and connection had to learn to co-exist. This was the paradox of the 2005 campaign. We had to be confident and strong in our message and leadership, but sensitive and responsive in our relations with the electorate. And these two apparently conflicting imperatives had to be implemented simultaneously, strong and responsive at one and the same time.'

This was the campaign we hoped to build and I believe we did: consistent, but also flexible, constant but also responsive. In my judgment it was this balance of strength and responsiveness that took us through to victory. And our polling played a big part in this.

3 I do not agree with Mark Penn that the distribution of polling information was restricted, and nor do I agree with Stan that the polling was rigged. The very nature of the campaign, the need for a plurality of polling sources, and the immediate and rapid distribution of polling information made both positions impossible. Once again I wrote in 2006:

'The third element of connection was a new approach to polling which I called 'wrap-around polling': diagnostic, intuitive, responsive, multi-faceted and pluralist, but also systematic and rigorous. We wanted polling not just to tell us what had happened, but to alert us to what might happen. To be a kind of early warning system to anticipate where an uneasy and dissatisfied public might flare up in protest or anger. Effectively research became radar for the election. To do this we used a multitude of polling instruments: strategic message polling; standard daily tracking; internet panels of various sizes; marginal polling; daily focus groups. And this polling was not kept tight to a small group of insiders but distributed widely and openly throughout the campaign. Basically if anyone wanted to see the polling they could. The old days, when a pollster was effectively a doctor dispensing tough medicine to uninformed politicians, are gone. We are all polling experts now, or at least equal partners in the polling process. This approach was effective. For example the Prime Minister had always said that the issue of immigration should be left until the public turned against Michael Howard, the Conservative leader, when he went too far on the issue, which he inevitably did. Night after night we tracked the public response on the issue, seeing it move from wholehearted support of Howard, to a gradual bemusement that it was all they appeared to talk about, to a kind of contempt that this was the only issue they had and that they seemed to be exploiting race for reasons of political advantage. At this point we pounced and the Prime Minister made a powerful speech arguing for a balanced approach to immigration, shredding Tory policies and assumptions. That was the end of immigration as an issue in that campaign.'

This was the culture of polling in the campaign, as pluralist as possible, and as open as possible. Every poll that Mark did, Stan got, and so did everyone else. In this context I simply cannot agree with Stan that the polling was rigged, because there was so much of it, and it was so varied in methodology and type. Mark Penn's polls had so many questions, so varied, and with so much outside input into them, that 'rigging' them seems impossible to me, and too strong a word to use. Equally it is not true to suggest, as Mark claims, that 'Stan was out of the loop', and the suggestion that he could not be trusted with 'highly sensitive questions' is completely false. Everything the campaign got, Stan got, and the campaign got everything; in its distribution of polling information this was probably the most open campaign the Labour Party had ever conducted.

4 I am confident that our strategy was right, and feel that Stan is unfair to it. At the core was a relentless focus on the economy, which dominated all else. That was the absolute bedrock of our campaign, everything else secondary to it. We did focus on women voters, and in particular younger female voters, and this focus worked, as the evidence shows. In the Ipsos-Mori exit poll we were shown to be level among men, but leading by 6 points among women. Our focus on women gave us consistency, and gained us the vital votes we needed to win, as well as being, in my judgement, the right and progressive thing to have done. As for older voters, it was not they who deserted us but younger voters, as Stan acknowledges in his book.

5 Mark is wrong to say that Stan worked for Gordon Brown, and Mark for the Prime Minister. Both pollsters worked for the Labour Party, and in the campaign all information was shared. We are a party system, not a presidential one.

6 As for forecasting landslides: both pollsters got close to doing so, but at different times in the campaign. At the end Stan was probably more optimistic than Mark; at the campaign start the reverse was true. Stan's marginal polling was exemplary and accurate, but it was Mark's last major poll that was accurate to within one point.

7 On Iraq, I did not 'delete' messages as the book claims, certainly not as a consequence of polling by Mark Penn of which I had no knowledge. To give you an example, in one mid-summer poll that Stan mentions there are several Iraq batteries, and many references to Iraq as an issue. In any event it was not within my power to 'delete' anything; I was just one member of a team which collectively supervised questionnaires.

8 Finally, one of the most contentious points of all: did Mark Penn make available the 'agendas, marginals and cross-tabs as requested and without reservation'? This is a grey area, but I will try to clear it up.

After a poll Stan normally presented a filled-in questionnaire and a full banner book containing complete cross-tabs.

Mark had a different approach. Following a poll he quickly made available a full and extensive polling report. This went immediately to the whole campaign. This was not an inconsiderable document. I have one in front of me now: it is 18 pages long; it contains historic voting and favourability data; it closely examines 12 targeting groups ranging from rural lower-class Conservatives to union households; it uses seven different batteries to examine campaign issues. It analyses responses to the news and key policy areas. And of course it contains numerous message batteries: in all, well over 100 questions were asked and recorded. All of these were analysed by voting preference, and sometimes by demographic categories.

These reports were extensive and useful documents, far in excess of a normal filled-in campaign questionnaire. They did not constitute a full banner book and did not contain 'full marginals' in the manner favoured by Stan Greenberg, but what Penn did supply was both exhaustive and useful, and certainly met the regular needs of the campaign. As one senior campaign official with responsibility for polling in 2005 has said, Mark Penn 'could quite fairly argue that the memos were intended for an audience that had no time or interest in delving into every corner of the data. I don't think that in any way illegitimises the findings or his advice.' On a personal note, Mark Penn invariably supplied any additional cross-tab or targeting data that I required, and I presume the same is true of others. Two pollsters, two approaches.

These are the big points; there are many small ones, but this is not the time or the place for them.

My overall view is clear: the strategy was right, the balance between consistency and flexibility was right, the polling was open and freely available, and the campaign was a success, conducted in the war-room spirit, but of 2004, not 1992. Penn made a major contribution to that success, and deserves credit for that. Mark came into a new and difficult situation and helped give the campaign consistency and strategic clarity. Whatever people say about Mark in other campaigns, and at other times, in this campaign he got most things right and did so with grace and good humour. I very much enjoyed working with him. Stan of course hated the whole process, and I understand why. But he showed great courage in keeping going and got much right, particularly in his marginal polling. He played a part not just in one election but three, and he should be proud of that.

In all this lies the truth, but it is not clear-cut, nor certain. Finding the truth is never easy, and in any event it is always multi-faceted and complex, especially in a tense, hard-fought election campaign such as this was.

This has been a long piece but unless you understand the circumstances of that campaign you have no chance at all of understanding why Stan wrote that book, and why Mark responded as he did.

In the end there is another, deeper truth. The battle between these pollsters may be intriguing to us, but in the great scheme of things it is politicians and leaders who decide, who make successful campaigns, and who build great political projects. One of the most powerful recurring themes of this book is how Stan laments the fact that Tony Blair so often ignored his advice and just went his own way. That was true of Stan, but it was also true of me and of Mark Penn. Tony Blair listened to pollsters and advisers but in the end went his own way on his own terms. He marched to his own drum. That may have led to bumps along the way, but that is why he was then, and still is, a great political leader. He listened, but he led. When the final histories are written, it is not Stan's or Mark's or my view that will matter, but the actions and decisions of the politicians entrusted with the responsibility of leadership. We should all have the humility to recognise who the real heroes are in the world of politics that we all love so much.


Dispatches: Greenberg's Rejoinder to Penn

Topics: Dispatches from the War Room , Mark Penn , Pollsters , Stan Greenberg

This guest pollster contribution from Stan Greenberg is part of Pollster.com's week-long series on his new book, Dispatches from the War Room and responds to comments from Mark Penn in Mark Blumenthal's post earlier today.  Greenberg is chairman and CEO of Greenberg Quinlan Rosner.

To avoid this discussion descending into an ugly mud-wrestling match between two squabbling pollsters, I will only take up issues where the "facts" are indisputable and where we learn something about Tony Blair and political leadership and about differing approaches to polling and strategy.

What this exchange reveals even more clearly than the book itself are the limits of building a strategy from a coterie of target groups, rather than from the leader's vision or party's mission for the times. It underscores the need for frankness about what is holding voters back and the need to challenge leaders with blunt truths. It underscores the need for transparency and methodological rigor.

Penn's basic argument is straightforward. He took over the campaign's polling in July 2004, about nine months before the election, when Blair was at a low point, working under Philip Gould, Blair's long-time adviser for research and media. Greenberg was pushed out and was in no position to judge the character of Penn's work, as he was "not in the loop." Seems straightforward enough.

When I first learned in December of Penn's involvement and in January of our dividing the polling, I was convinced that Gould had played just such a role, and I wrote about it. I was wrong. Philip was hurt by the accusation that he had concealed Penn's involvement and wrote me with detailed diary entries showing that he only learned of it in September and resisted Penn's involvement until the end of the year, when he decided to "make the best of it."

Penn's premature rush to anoint himself as Blair's pollster obscures Blair's effort to examine competing solutions to the problems he faced. In May, Blair had reached a low point in the polls, dragged down by Iraq, the "hyping" of pre-war intelligence and Abu Ghraib. He was very despondent, seriously considering not running again, and consulted widely, including with President Clinton and Senator Clinton, who urged him to run and to use Penn.

Penn offered his own path back for Blair, aided by huge surveys and "clustering work" that coughed up "school gate mums" as a key target. Because Labour got its highest marks on the economy, his message started there, but Penn's emphasis was on policies that appealed to the groups that could grow Blair's coalition. Penn's imprint was immediately evident in Blair's September conference speech, when he spoke of the need for "more choice for mums at home and at work." Blair's policy offer was grounded in this clustering and coalition building.

At the very same time, we were commissioned by Philip to do a special research project, and I reported in July with a very different approach to the problem - centered on New Labour's central mission. For the first time in a long time, respondents shifted to Labour on hearing of Blair's commitment to "a better life for hardworking families," though only when Blair expressed his own frustration with the state of public service reform and offered some learning by showing independence from Bush on climate change. Iraq was the elephant in the room. Finding a way to acknowledge it, even indirectly, allowed people to come back to Blair's project.

In the September party conference speech, Blair was eloquent about "hardworking families," but just could not get himself to be reflective on Iraq - perhaps with Penn's support. That was the learning voters needed if they were to come back.

I respect Blair for rejecting my advice and deciding to go with Penn who did not push him to address the Iraq question and who offered a way to make electoral gains. The mistake was not firing me and leaving both of us in the campaign.

In fact, I have all of Penn's memos - about a two-inch pile on my desk at the moment, available for inspection by Mr. Blumenthal. Philip's note to me confirms he shared all of them during the course of the campaign, as did many of my friends "in the loop."

The whole concept of "in the loop" betrays a lack of transparency and openness in Penn's approach to campaigns - painfully evident in the Blair campaign, perhaps a precursor to Hillary Clinton's presidential run two years later.

Pollsters as a rule share the results for all their questions and hypotheses, even the ones that didn't pan out. In the Blair campaign, Penn provided a memo with large tables including only the questions he wanted to report; he did not provide a standard book of demographic cross-tabulations. Read Penn's words carefully: "The campaign received all of the agendas, marginals, as requested without reservation." In short, he provided breakouts only when asked, in effect keeping his own client and campaign team "out of the loop."

The surveys were methodologically sloppy and included biased tests, though it is important to underscore here that Philip Gould came to value Penn's research and rejects my characterization of it in the book.

Specifically:

1) Penn failed to incorporate professional learning from Britain. Penn's national polling - not some errant tracking program - showed Labour with landslide leads of 8 or 9 points for the entire six weeks before the election was called. Penn discovered just 27 days before the election what every pollster in Britain knows: you have to weight to offset the "shy Tories" - Conservatives reluctant to be interviewed. In an instant, the Tories gained 6 points in Penn's polls. (A sketch of this kind of weighting appears after point 4 below.)

2) Penn's fixed targeting let real targets slip away. With Penn focused on "mums," the campaign regularly rolled out initiatives on breast cancer screening and childhood obesity. But the voters in the key marginal seats were older; among those most likely to return to Labour, two-thirds had no children at home and found this campaign irrelevant.

3) Penn exaggerated the reliability of findings. Penn conducted a valuable weekly open-ended Internet panel of undecided voters. When the sample dropped to 100, the reporting of the sample size dropped out as well, which produced a testy email exchange that restored it. Still, Penn reported this as a "Survey of Undecided Swing Voters" and reported the full percentage results over 18 pages, including results for men and women, with about 50 cases each. (The sample-size arithmetic behind that concern appears after point 4 below.)

4) Penn created biased tests. Two weeks before the election, Penn declared that "our policy approach remains stronger than the Tories," but the Labour statement was more than twice as long, with more rhetorical flourishes and covering a much broader range of policies with greater specificity (which I'm happy to share). Even with this biased test, the Conservatives' statement ran 6 points ahead of their vote. An unbiased test might have revealed potential Tory gains.

To inform the decision of whether to close positively or negatively, Penn constructed a sensible experiment in which half the respondents were read positive statements about Labour's progress and half were read attacks on the Conservatives' record and plans, and then respondents were asked to vote again. But this was not meant to be a fair test. The negative statements were 50 percent longer by word count and helped foreclose an uplifting close.
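
A note on the weighting mentioned in point 1: British pollsters commonly offset "shy Tory" non-response by weighting recalled past vote to the actual result of the previous election. The Python sketch below is a minimal, hypothetical illustration of that general idea only; every share and every name in it (raw_sample, target, vote_2005_by_recall, headline) is invented for illustration and is not the 2005 campaign's data, nor necessarily the adjustment Penn applied.

# Illustrative past-vote weighting to offset "shy Tory" non-response.
# All figures are invented; they are not campaign data.

raw_sample = {            # unweighted share of respondents by recalled 2001 vote
    "Labour": 0.46, "Conservative": 0.27, "LibDem": 0.20, "Other": 0.07,
}
target = {                # illustrative weighting target (roughly the previous election result)
    "Labour": 0.42, "Conservative": 0.33, "LibDem": 0.19, "Other": 0.06,
}
vote_2005_by_recall = {   # hypothetical current vote intention within each recall group
    "Labour":       {"Labour": 0.72, "Conservative": 0.08, "LibDem": 0.20},
    "Conservative": {"Labour": 0.07, "Conservative": 0.82, "LibDem": 0.11},
    "LibDem":       {"Labour": 0.14, "Conservative": 0.12, "LibDem": 0.74},
    "Other":        {"Labour": 0.20, "Conservative": 0.25, "LibDem": 0.25},
}

def headline(group_shares):
    """Aggregate current vote intention using the supplied recall-group shares."""
    totals = {"Labour": 0.0, "Conservative": 0.0, "LibDem": 0.0}
    for group, share in group_shares.items():
        for party, p in vote_2005_by_recall[group].items():
            totals[party] += share * p
    return {party: round(100 * v, 1) for party, v in totals.items()}

unweighted = headline(raw_sample)   # too many Labour recallers in the raw sample inflates the lead
weighted = headline(target)         # recalled vote weighted back to the previous result
print("unweighted:", unweighted)
print("weighted:  ", weighted)
print("Labour lead shrinks by",
      round((unweighted["Labour"] - unweighted["Conservative"])
            - (weighted["Labour"] - weighted["Conservative"]), 1), "points")

With these invented numbers the headline Labour lead drops from roughly nine points to roughly two once the recalled vote is forced to match the previous result, which is the same direction of correction Greenberg describes.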
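
A note on the sample sizes mentioned in point 3: under the textbook simple-random-sample formula (an approximation; an opt-in internet panel has additional sources of error that this ignores), percentage results based on roughly 50 respondents carry a 95 percent margin of error of about plus or minus 14 points. A quick Python check, with the function name my own:

import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 100, 50):
    print(f"n={n:4d}: +/-{100 * margin_of_error(n):.1f} points")
# n=1000: +/-3.1 points
# n= 100: +/-9.8 points
# n=  50: +/-13.9 points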

Penn describes the 2005 third term as "historic," but in the campaign everyone was disappointed with the result, what the media called Labour's "drastically reduced majority," produced by a disengaged electorate and historically low turnout. Many factors contributed to the result, but among them was Penn's research, not to mention having two polling teams with different theories on how to win.


Dispatches: Greenberg's Response to Schaffner and Moore

Topics: Dispatches from the War Room , Pollsters , Stan Greenberg

This guest pollster contribution from Stan Greenberg is part of Pollster.com's week-long series on his new book, Dispatches from the War Room. Greenberg is chairman and CEO of Greenberg Quinlan Rosner.

Brian Schaffner focuses on the role of pollsters in identifying groups and thus empowering them -- making their opinions relevant to political leaders. I am very conscious of that role and, as you correctly point out, I put the spotlight on Macomb County's "Reagan Democrats" and, after this election, moved the spotlight next door to upscale suburban Oakland County.

Which groups "matter" in my work is not determined by some blind search of the data for interesting and distinctive groups.


In the period when the wall between my academic and political lives was starting to crumble, I was very taken by E. E. Schattschneider's argument that whoever decides what the fight is about likely wins. Successful political leaders and campaigns control the subject, define the choice and choose the fight. Drawing that line decides which issues are important and, critically, who gets engaged and who loses interest. In 1992 Clinton made the election about change and "the economy, stupid," and President Bush failed to make it about trust and experience. This year, Obama made it about change and Hillary Clinton tried unsuccessfully to make it about experience, but when she shifted to the economy and the middle class, she put the spotlight on white working-class voters, who rallied to her.

"Reagan Democrats" derived from the political project that tried to put the middle class back at the center of a renewed Democratic Party -- but the groups emerged from the project. In the book, I argue for the strength of these five leaders because they made politics purposeful.

Related to this point is David Moore's important discussion of the "intensity" of beliefs and the ability of leaders to get people to change their views on an issue and follow them. Whether a leader touches people, understands the times and poses a choice that affects their lives shapes both which issues get highlighted and how intense the reactions are.

I fully agree that mapping intensity will give you a much better view of public thinking and how issues are likely to break. But what is interesting about my Jerusalem example is that people held intense views (which I measured and monitored closely) when they rejected the idea of dividing Jerusalem, but shifted their views nonetheless once the public debate forced them to think about all the possibilities. This is a life-and-death, emotional issue, and voters followed it very closely, but Ehud Barak, like earlier Israeli leaders, was able to move the deliberation to a longer-term framework for preserving a Jewish state.

Focusing on intensity will help pollsters know which opinions really matter and are difficult to move, and I did a lot of simulation in my polls to see how dynamic opinions are. But I'm still in awe of how much opinions shifted on such a central issue in such a short period, and I am still learning from that fact.


Dispatches: Greenberg's Response to Franklin (Part 2)

Topics: Dispatches from the War Room , Pollsters , Stan Greenberg

This guest pollster contribution from Stan Greenberg is part of Pollster.com's week-long series on his new book, Dispatches from the War Room. Greenberg is chairman and CEO of Greenberg Quinlan Rosner.

Charles Franklin rightly begins his comments by putting up my quote on page 58 that "the endgame in presidential campaigns brings out all sorts of irrationalities, starting with the media polls. Many are criminally bad." One of the problems in writing a book and a memoir is living with your words and thoughts, particularly when they are as unnuanced as those.


In retrospect, I might have been more nuanced. First, I made the comment in the context of the Clinton presidential campaign, when the statement was clearly true, as described in the book. Second, it reflects my experience during the final weeks of campaigns in Britain, Israel and Latin America, even very recently. But because of sites like Pollster.com, there is more transparency and exposure of shoddy methods, and despite strong budget pressures, the national media organizations in the US produced very credible polling programs in this last election. But as recently as 2004, there were stark examples of volatile polls without political weighting conducted by Gallup and aired on CNN, along with commentary on how fickle the voters were. The challenge will be what happens with media polls as there is more upheaval in the industry and a need for more costly multi-modal methodologies and greater use of IVR.

This is a very different matter at the state and congressional level, and in lower-turnout elections and primaries. The media polls, as well as polls conducted by universities and institutes, are often out of line with the campaign surveys, as they are less likely to screen or filter for likely voters, factor in historic turnout patterns, and consider exit polls, as well as the CPS. That one in four state polls in 2008 was conducted in a single day suggests we are dealing with a genuine issue.

I Amen Franklin's Amen. The biggest problem is the reporting, not the polls themselves. It is the "outlier" poll -- not the boring average -- that gets the headlines. But it is even worse in the war rooms I'm writing about, which are poised to explode in the closing week of the campaign. It is the errant poll, not the average, that sets off the sparks in the war room and gets the attention of the candidate.


 
