Pollster.com

Mark Blumenthal

Morning Update: McMahon Gaining on Blumenthal


We have seen some hopeful polls for Democrats in recent days, but the last 24 hours brings results that will cheer Republicans and restore Democratic heartburn, especially in Connecticut where a new Quinnipiac University poll out this morning shows a "very close" race between Democrat Richard Blumenthal and Republican Linda McMahon.

The Quinnipiac poll shows Blumenthal's margin over McMahon narrowing to just three points (49% to 46%), a closer margin than in their previous poll earlier in the month (51% to 45%). Rasmussen Reports also released a new Connecticut poll yesterday showing Blumenthal ahead by just five points (50% to 45%), slightly closer than the 9-point margin they found earlier in September (53% to 44%). The new surveys narrow Blumenthal's lead on our trend estimate to just four percentage points (49.8% to 45.3%), shifting the race to "lean Democrat" status.

The Blumenthal campaign will likely quarrel with these numbers, as they preemptively shared results of an internal poll yesterday with other media outlets, purportedly showing their candidate with a double-digit lead. But while the levels of support measured by the Quinnipiac and Rasmussen surveys may or may not be right, the trend evident in their results is unmistakable: McMahon has narrowed the gap significantly since winning the Republican primary in August.

2010-09-28-Blumenthal-CTSen.png

Elsewhere, two new polls in Pennsylvania produced results consistent with previous data. In the Senate race, a new Muhlenberg College/Morning Call poll shows Republican Pat Toomey leading Democrat Joe Sestak by 7 points (46% to 39%), while a new automated survey by the Republican firm Magellan Data and Mapping puts Toomey ahead by 8 (49% to 41%). That makes 18 public polls in a row since July showing Toomey with a nominal lead. Our more sensitive trend line shows that while voters have been growing increasingly decided, the roughly 7-8 point margin between Toomey and Sestak has not changed since August.

2010-09-28-Blumenthal-PASenSens.png

In the Pennsylvania governor's race, the Muhlenberg College poll shows Republican Tom Corbett leading Democrat Dan Onorato by 11 points (48% to 37%), while the Magellan poll has Corbett up by 12 (50% to 38%). Corbett's margin in the Muhlenberg result is one of the narrower margins reported in recent weeks. Our trend estimate gives Corbett a roughly 12-point advantage (50.3% to 38.7%).

In Delaware, Rasmussen Reports' latest poll shows Democrat Chris Coons leading Republican Christine O'Donnell by 9 points (49% to 40%), a slightly narrower margin than the CNN/Time poll found a week ago (55% to 39%).

The Rasmussen poll also found 5% support for Mike Castle, the Republican Congressman who lost the Senate primary to O'Donnell earlier this month. Castle is said to be considering a write-in candidacy. Rasmussen's approach was to omit reference to Castle in the first part of the question, but offer him as an option in the second. If the answer categories followed Rasmussen's typical format, their respondents would have heard something like this:

If the 2010 election for United States Senate were held today, would you vote for Republican Christine O'Donnell or Democrat Chris Coons?
If you are for O'Donnell, press 1
If you are for Coons, press 2
If you are for Mike Castle, press 3
If you are for someone else, press 4
If you are not sure, press 5

In this case, if the Rasmussen system allows respondents to answer immediately (without waiting to hear all the choices), many would have chosen O'Donnell or Coons before hearing that Castle was an option. Measuring support for a write-in candidacy is difficult, especially when it is still hypothetical. This sort of question will tend to measure the floor of a write-in candidate's support. So don't be surprised if other Delaware polls in the near future offer Castle as a more explicit option and show more potential support.

[Cross-posted to the Huffington Post]


Polls Lift Boxer And Brown In California

Topics: 2010 , Barbara Boxer , California , Carly Fiorina , Florida , Jerry Brown , Meg Whitman , Washington

Are the Democrats experiencing a rebound on the Pacific Coast? Three new surveys, two in California and one in Washington State, indicate small gains for Democrats since mid-August. More specifically, two new California polls confirm that Democratic Senator Barbara Boxer is maintaining a narrow lead over Republican challenger Carly Fiorina.

A new Field poll of California released this morning shows Boxer leading Fiorina by six percentage points (47% to 41%), while a new automated SurveyUSA poll yields fewer undecided voters but gives Boxer the same six-point advantage (49% to 43%). A handful of automated surveys in late August and early September suggested a tighter race, including two earlier polls from SurveyUSA that gave Fiorina a slight edge, but the last five surveys conducted since mid-September all show Boxer with nominal leads.

Our standard trend estimate, based on all available public polls, now shows Boxer leading by a roughly three-point margin (47.5% to 44.3%). Our more sensitive trend line, shown below, illustrates the tightening in late August. Since the current estimate from that line gives greater weight to more recent polls, it gives Boxer a slightly larger lead (47.8% to 43.2%).
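(For readers curious about the mechanics: our published trend lines come from a loess-style local regression fit to all the polls, but the basic idea behind "sensitivity" can be sketched in a few lines of Python. The half-life parameter and the poll figures below are invented for illustration and are not our actual model or data.)

    def trend_estimate(polls, half_life_days):
        """Weighted average of poll results, down-weighting older polls.

        polls: list of (age_in_days, candidate_pct) pairs -- hypothetical data.
        A short half-life acts like our 'more sensitive' line (recent polls
        dominate); a long half-life behaves more like the standard estimate.
        """
        num = den = 0.0
        for age, pct in polls:
            weight = 0.5 ** (age / half_life_days)   # exponential decay with age
            num += weight * pct
            den += weight
        return num / den

    # Hypothetical Boxer shares from polls taken 30, 20, 10, 5 and 1 days ago.
    boxer = [(30, 45.0), (20, 46.0), (10, 47.0), (5, 48.0), (1, 49.0)]

    print(round(trend_estimate(boxer, half_life_days=60), 1))   # ~47.2, "standard"
    print(round(trend_estimate(boxer, half_life_days=7), 1))    # ~48.1, "sensitive"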

2010-09-24-Blumenthal-CaSen.png

The two new California polls also suggest a reversal of the previous trend favoring Republican Meg Whitman. The Field poll results released yesterday show Whitman tied with Democrat Jerry Brown (at 41% each), while SurveyUSA gives Brown a small but statistically insignificant advantage (46% to 43%). So while six surveys had shown Whitman with nominal leads in August and early September, the five most recent polls show either a tie or a slight Brown edge.

Our standard estimate shows a slight Whitman lead (45.9% to 44.4%), but our more sensitive trend line puts Brown ahead by a single percentage point (45.5% to 44.1%). Either way, the California governor's race is currently the closest in the nation.

2010-09-24-Blumenthal-CaGov.png

SurveyUSA also released new data in Washington this week confirming a similar rebound by Democratic Senator Patty Murray against Republican challenger Dino Rossi. The new automated poll gives Murray a two-point edge (50% to 48%), a marked improvement from their last survey in mid-August, which showed Rossi leading by seven. Our trend estimate now shows Murray ahead by roughly five points (50.8% to 45.8%), as all four surveys released in the last two weeks show Murray at least nominally ahead. Rossi continues to do slightly better on automated surveys by SurveyUSA and Rasmussen than on live-interviewer polls conducted recently by CNN/Time and the Elway Poll.

2010-09-24-Blumenthal-WaSen.png

Elsewhere, Democrats also received encouraging news in Florida, where a new Mason-Dixon poll shows Democrat Alex Sink holding a seven percentage point lead (47% to 40%) over Republican Rick Scott. That margin is better than our trend estimate, which has Sink ahead by just over three points (49.6% to 46.4%), but five of the six polls conducted in September have shown Sink with nominal leads.

Late update: A new Mason-Dixon poll published this morning by the Las Vegas Review-Journal shows Democratic Senator Harry Reid and Republican challenger Sharron Angle tied at 43% each. We have now seen eight remarkably consistent surveys in Nevada this month reporting margins ranging from a 3-point Reid edge to a 2-point Angle advantage. Our trend estimate gives Reid a margin of half a percentage point (45.6% to 45.0%) -- currently the closest Senate contest in the nation.

Cross-posted to the Huffington Post


Morning Update: Two (Make That Three) Puzzling Polls in NY

Topics: New York , Senate

Today's big polling news comes from New York, where two new surveys show much closer results in the races for Senate and Governor than previous polling indicated, and from four new surveys by CNN and Time in statewide contests elsewhere. But the New York results diverge most sharply from other recent polling and are thus likely to get political tongues wagging today. Let's take a closer look.

The two newest polls on New York's Senate contest out this morning come from Quinnipiac University and automated pollster SurveyUSA. Quinnipiac finds Democratic Senator Kirsten Gillibrand leading Republican challenger Joe DioGuardi by just six percentage points (48% to 42%). SurveyUSA shows an even narrower contest, with Gillibrand up by just one point (45% to 44%). Both results represent a sharp break from previous polls, which typically had Gillibrand leading by double-digit margins. Our standard trend estimate, which still considers data from previous surveys, now shows Gillibrand ahead by just under eight percentage points (47.9% to 40.1%).

2010-09-23-Blumenthal-NYSen.png

Both surveys also indicate a closer race for New York Governor than previous polls. Quinnipiac has Democrat Andrew Cuomo leading Republican Carl Paladino by six points (49% to 43%), while SurveyUSA puts Cuomo ahead by nine (49% to 40%). Our trend lines show the race narrowing to a 49.0% to 37.8% Cuomo lead.

******
UPDATE:
Just after we published the HuffPost version of this story, the Siena Research Institute released a new survey showing very different results from those of Quinnipiac and SurveyUSA. Their survey of all registered voters shows Gillibrand leading DioGuardi by 26 points (57% to 31%), and Cuomo leading Paladino by 33 (57% to 24%). While a likely voter screen would probably have produced closer margins, the differences between the Siena, Quinnipiac and SurveyUSA polls are still enormous and not easily explained.

Our updated trend estimates now show Gillibrand leading by nearly ten points (47.7% to 38.1%) and Cuomo leading by 13.5 (53.2% to 39.7%).
*****

Why the change? The most important factor is probably last week's primary elections in New York, which resolved hard-fought Republican contests for both offices. Previously divided partisans often rally to their party's nominee following a tough primary -- remember the way Barack Obama received an almost immediate boost in polls from Democrats once his battle with Hillary Clinton came to an end in 2008. So some of the change may represent a consolidation of support among Republicans. For example, DioGuardi now receives the support of 88% of Republicans on the Quinnipiac survey and 74% of Republicans on SurveyUSA's poll.

Probably just as important, both polls also represent a shift to likely voter screens. Quinnipiac's previous New York surveys have been among all registered voters, and this poll is SurveyUSA's first in New York for the 2010 cycle. Only Rasmussen had previously applied any sort of likely voter screen to the New York results -- and they also showed both races closer than other pollsters, though not quite as close as these two new surveys.

Also, while the pattern is not consistent, Quinnipiac and SurveyUSA have produced results in recent weeks that are much more favorable to Republicans than likely voter surveys by other pollsters -- in the races for Senate in Ohio and Governor in Pennsylvania for Quinnipiac, and for Senate in North Carolina for SurveyUSA. Nate Silver notes a similar pattern for SurveyUSA in House races.

The impact of likely voter screening on poll results, especially this year, is evident in the four new polls released yesterday by CNN and Time, and in three more they released last week. While their new results largely confirm what we have seen from other recent surveys, CNN and Time are unusual in that they release results both for likely voters and for their larger samples of all registered voters. As the table below shows, the difference on the Democrat-minus-Republican margin is often quite large -- as much as 8 or 9 percentage points in Wisconsin, Delaware, Colorado and Ohio.

2010-09-23-Blumenthal-CNNTimeLVvsRV.png

It is important to remember that not all likely voter screens are created equal, as different pollsters often use very different methods to model or screen for what they all describe as "likely voters." Worse, only a handful of pollsters disclose the details of their process. This is an aspect of this year's polling that we will continue to watch closely.

[Cross-posted to the Huffington Post]


Moving Day

Topics: Huffington Post , Pollster.com

Well, our much anticipated moving day is upon us.

Some of you may have missed the news when it happened, and some of you may have forgotten, but we joined forces with the Huffington Post this past July (and answered some common questions about the acquisition here). Sometime later tonight or tomorrow, if all goes well, we will flip a virtual switch and begin "redirecting" traffic from Pollster.com to Pollster's new home on the Huffington Post.

A lot of very talented HuffPost developers have worked very, very hard over the last few weeks to move all of the features, content and data you have come to depend on here at Pollster to HuffPost. Our primary aim during this first wave of our relaunch has been to move everything without "breaking" anything. Thanks to the superhuman efforts of the HuffPost tech team, we think you will be satisfied that while the web address will be different, everything you like about Pollster will make the trip with us.

Once we have relocated, we will begin adding some exciting new features that take Pollster.com to the next level, including quite a bit that will debut in the next few weeks. So we hope you come along and stay tuned.

Meanwhile a few more specific notes about the move:

We have managed to move every entry -- every chart, every map, every blog post, every Poll Update -- to Huffington Post. That includes our collection of charts from 2006 (which, for a variety of technical reasons, I feared we might not be able to move). Needless to say, we will continue to update all active charts with new data. And your bookmarks to our existing pages should continue to work. We will simply redirect you to the new home for each page.

  • Our classic format poll maps will be active and functioning and will help you scan and navigate to chart pages. These are actually already active on Pollster now for races for Senate and Governor. If you're glad to see them back, don't worry, they will remain in place on HuffPost.

  • Once we move, you will also see that our charts feature prominently in a new HuffPost feature called Dashboard. We think you will find Dashboard engaging and useful -- it will include more than just polling data -- but if you prefer our classic poll maps and charts, again, don't worry, those will be there too and easily accessible via our new Pollster page.

  • While we have made copies of all blog posts, the original reader comments left on those posts will remain in place on their original Pollster.com locations. The HuffPost version of each entry will include a special link to take you back to the comments left on the original Pollster.com version.

  • All of our RSS feeds will continue to operate without interruption. All of our feeds will continue to provide the full post and not excerpts. Author-specific feeds will require a different link, although all will be active immediately.

Now all that said, despite the best of intentions and a lot of hard work, a few things -- such as a complete index to archived blog posts -- may not be in place immediately. We will work to move anything left behind over the next week or two and will try to keep you updated on any such issues as they arise.

I welcome your comments, suggestions, problem reports or complaints -- just email me. If we have managed to "break" something you care about, please let me know. I can't promise I'll have time to respond personally to every message, but I'll definitely read them all.

A special note on comments and the Pollster.com commenter community: Admittedly, given Huffington Post's far bigger audience, the posts from me and from other contributors that also appear on HuffPost's front page will draw far more comments than our posts here. And no, we will not be migrating the Typekey user logins to Huffington Post, although you can log in and comment there using an existing Facebook, Twitter, LinkedIn, Google or Yahoo account or create an account on HuffPost.

For those concerned about the changes to the comments section, let me highlight two things. First, over the last month, Huffington Post has implemented a new "Community Pundits" feature that, as HuffPost's social news editor Adam Clark Estes explained to WebNewser earlier this week, aims to highlight the most "insightful, informative, and engaging commentary" on any feature from across the ideological spectrum. Such comments are displayed in a prominent Community Pundits box at the top of the comments section of each post.

Moreover, those who leave such comments consistently can earn a Community Pundit badge, which comes with privileges: "Besides having their comments highlighted in the Highlights tab and the Community Pundits box," Estes said, "we also allow our Pundits to leave longer comments."

Better yet, Estes and the Social News team have pledged to create a special Community Pundit badge specific to the Pollster section that will identify and highlight comments that are consistently insightful, informative and on topic, which is to say relevant to our focus on political polls and survey research. We have not yet begun working on this feature, so we would welcome your input and suggestions for it.

Now all that said, you should know that some entries -- especially the Outliers feature and the many "Poll Update" entries that Emily posts constantly -- will appear only on the new Pollster page and not elsewhere on Huffington Post. We're hoping that the Pollster corner of HuffPost will attract its own unique community of readers, so we encourage those of you who comment frequently to come along and try it out. We hope that the existing community can move along with the charts and the blog archive.

If you have questions about Huffington Post's comments and moderation policies, please see this FAQ page.


Morning Update: New WV & WI Polls Brighten GOP Prospects

Topics: 2010 , Senate , West Virginia , wisconsin

While the evidence rests mostly on new automated polls in two states, Republican hopes of gaining control of the U.S. Senate brightened yesterday with results pointing to tougher than expected battles shaping up for the Democrats in Wisconsin and West Virginia. The new polls move Wisconsin to our "lean Republican" category and add West Virginia to a list of toss-ups that also includes Illinois, Nevada and California. Republicans can win control of the Senate by sweeping all four.

Within a few hours of my update yesterday, which highlighted a new Rasmussen survey in West Virginia showing Democrat Joe Manchin leading Republican John Raese by seven percentage points (50% to 43%), Public Policy Polling (PPP) released another automated survey there showing the Democrat trailing by 3 (43% to 46%). Whether you prefer our trend estimate or a simple average of the two surveys, the bottom line is the same: On the basis of these two recent polls, the race merits "toss-up" status.

In Wisconsin, a new PPP survey paints a picture that even the survey's sponsor, Daily Kos, characterized as "uber-ugly" for the Democrats. It shows Democratic Senator Russ Feingold trailing Republican Ron Johnson by eleven points (52% to 41%), a slightly larger margin than a Rasmussen automated survey measured a week ago (51% to 44%). Our trend estimate splits the difference between these two results, the only two public polls released in Wisconsin so far in September, pushing the state into our "lean Republican" classification.

Democrats pushed back yesterday, sharing with TPM results of an internal poll conducted before last week's primary showing "Feingold ahead, by 48%-41% among all voters and 47%-43% among those definite to vote."

Incidentally, one reader took me to task last week, appropriately, for not noting PPP's status as a firm that polls for local Democratic candidates (though they have not disclosed doing work for candidates for U.S. Senate and Governor). That said, their results in West Virginia and Wisconsin tend to counter the notion that the Democratic firm produces results biased toward the Democrats.

A batch of new automated surveys released yesterday by Rasmussen Reports and their subsidiary Pulse Opinion Research (for Fox News) generally confirm other polling in the Senate races in California, Ohio, Pennsylvania, Nevada and New York.

The new Fox/Pulse survey in Nevada has Republican Sharron Angle up by a single, non-significant percentage point (46% to 45%), generally confirming what other recent polls suggest is a slight tightening in the race. Our standard trend estimate, which gives greater weight to the surveys conducted earlier in the month, shows Reid leading by a single percentage point (46.3% to 45.3%). Our more sensitive estimate (shown below), which gives greater weight to the most recent surveys, has it dead even (44.9% to 44.9%).

2010-09-22-Blumenthal-NVSenSensitive.png

In Alaska, Rasmussen was first out of the box with a poll testing a three-way race with incumbent Senator Lisa Murkowski running as a write-in candidate. They show Republican nominee Joe Miller with 42%, Murkowski with 27% and Democrat Scott McAdams with 25% of likely voters. While the Rasmussen release did not include the specific language of their vote preference question, they did provide this curious description:

Polling for write-in campaigns is always challenging, so results should be interpreted with caution. For this survey, Rasmussen Reports asked respondents about a choice between Miller and McAdams without mentioning Murkowski. That is the choice voters will see when they enter the voting booth. However, when response options were offered to survey respondents, Murkowski's name was mentioned.

They only provided results for a three-way contest, so this reference must be to the structure of their question. Presumably, they first mentioned that Miller and McAdams were the names on the ballot, then offered Miller, McAdams and Murkowski as choices. For more on how pollsters will measure vote preference in Alaska, see my Monday update.

California's race for Governor provided yesterday's ray of hope for Democrats: a new PPP poll showed Democrat Jerry Brown leading Republican Meg Whitman by five points (47% to 42%), while a new Fox/Pulse survey has the race dead even (at 45% each). Those results are a slight improvement over five other surveys conducted in late August and early September by Rasmussen, Pulse, SurveyUSA and CNN/Time.

Our standard trend estimate, which gives greater weight to the earlier surveys, shows Whitman leading by just under three points (47.1% to 44.2%). Our more sensitive estimate, which gives greater weight to this week's polls, puts Whitman ahead by slightly less than two (47.0% to 45.1%). Either way, the polling puts the California Governor's race in our toss-up category.

And this just in: Quinnipiac University released two new polls early this morning, including an eyebrow-raising result in the New York Governor's race, where they show Democrat Andrew Cuomo leading Republican Carl Paladino by just six percentage points (49% to 43%). Previous surveys conducted over the summer had shown Cuomo leading Paladino by 30 or more percentage points.

In Pennsylvania's Senate race Quinnipiac shows Republican Pat Toomey leading Democrat Joe Sestak by seven percentage points (50% to 43%), roughly the same margin as our previous trend estimate.

Cross-posted at the Huffington Post


Morning Update: Manchin Holding a Narrow Lead (Updated)

Topics: 2010 , Joe Manchin , John Raese , West Virginia

The smattering of new statewide polls released over the last few days yields no significant new trends, although a new poll on the West Virginia Senate race shows the Democrat, Joe Manchin, maintaining a narrow but consistent lead over Republican candidate John Raese in September.

The new survey, from automated pollster Rasmussen Reports, gives Manchin a seven-point lead (50% to 43%). Rasmussen is the only pollster to release results in West Virginia since July, but their last three polls conducted over the last four weeks show Manchin leading by 6, 5 and 7 percentage points respectively, for an average of 50% Manchin, 44% Raese. While those margins are far closer than what Rasmussen and other pollsters measured earlier in the summer, voter preferences in West Virginia appear to have stabilized, at least for now, leaving Manchin with a modest lead.**

Other recent polls of note:

Two new polls released over the weekend in Pennsylvania confirm the single-digit lead that Republican Pat Toomey has held over Democrat Joe Sestak since July. Both the live-interview Wilkes Barre Times Leader poll and a PoliticsPA/Municipoll automated survey yield much larger numbers of undecided voters than other recent surveys, but the effect on our overall trend estimate is minimal. Our trend estimate now shows Toomey leading Sestak by eight points (46.7% to 38.7%). All eleven public polls released in August and September have shown him leading by margins ranging from 2 to 11 percentage points.

2010-09-21-Blumenthal-PASen.png

This past Friday, Rasmussen Reports released another automated poll on the Massachusetts governor's race that shows Democrat Deval Patrick running just three points ahead of Republican challenger Charlie Baker (45% to 42%), with independent Tim Cahill falling to just 8%. Massachusetts is another state where Rasmussen has produced most of the recent polling -- three of the four surveys released in August and September. Rasmussen's surveys of likely voters have shown a steady decline in support for independent Cahill, from 23% in April to 8% on the current survey, although a poll of all registered voters conducted in late August by the State House News and KRC/Communications Research showed Cahill winning 18% of the vote and a slightly larger Patrick lead over Baker (34% to 28%).

**The recent polling in the West Virginia Senate presents a scenario that our classic polling chart does not handle well. With fewer than seven polls available, our standard practice is to draw a linear trend line (or, in plain English, a straight line) through the data points. In this case, the straight line tries to reconcile two polls conducted in July that showed Manchin leading by more than 20 points with the three more recent Rasmussen polls showing consistently narrower margins. The result is that the trend lines converge on an estimate of the margin that is closer than any of the last three surveys. Since the last three Rasmussen surveys show no discernible trend, we've opted to report on the average of those surveys rather than our chart's trend estimate.
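(For the technically inclined, here is a rough sketch of that footnote's arithmetic in Python. The dates are approximate and the margins are the rounded figures described above, not the actual poll data, but they show why the fitted straight line lands below the recent polls while the simple average does not.)

    import numpy as np

    # Approximate days into the cycle and Manchin-minus-Raese margins (points):
    # two July polls with leads above 20 points, then the three recent
    # Rasmussen surveys at 6, 5 and 7 points.
    days    = np.array([0, 14, 56, 63, 70])
    margins = np.array([22, 21, 6, 5, 7])

    # Straight-line (linear) trend, evaluated at the date of the newest poll.
    slope, intercept = np.polyfit(days, margins, 1)
    print(round(slope * days[-1] + intercept, 1))   # about +4.5, closer than any recent poll

    # Simple average of the last three surveys -- what we report instead.
    print(round(margins[-3:].mean(), 1))            # +6.0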

Update (9/21): Earlier today, PPP released a new survey of West Virginia that shows Republican John Raese leading Democrat Joe Manchin, 46% to 43%. Our trend estimate now splits the difference between the two most recent polls, leaving the race essentially even (Manchin 47.1%, Raese 46.3%).


Murkowski's Write-in Campaign: How Will Pollsters Measure It?

Topics: Alaska Senate Race , Lisa Murkowski , murkowski , senate , Senate races , Senator Murkowski

The most consequential polling development over the weekend involves the announcement by U.S. Senator Lisa Murkowski that she will re-enter Alaska's Senate race as a write-in candidate after losing the Republican primary. At issue is not some new poll, but rather how pollsters will go about measuring support for a write-in candidate.

The short answer is that it will not be easy, especially since successful write-in candidacies are rare and Murkowski's bid has little precedent. Media pollsters, who often feel obliged to report voter preferences as a single set of numbers, may feel especially challenged, but the best measurement of the Alaska Senate race may require asking at least two of the following questions:

  • A totally open-ended question that offers no candidate names and instead tests the ability of respondents to remember the name of the candidate they're voting for.

  • A closed-ended question that closely mimics the ballot, asking voters to choose between Republican Joe Miller, Democrat Scott McAdams and "another candidate" (perhaps the choice to "write in another candidate"). Live interviewers could then probe for and record the name of the "other" candidate.

  • A closed-ended question offering a choice between voting for Republican Miller, Democrat McAdams "or writing in Lisa Murkowski?"

Those who ask two or more of these questions will do so with the understanding that no single question will get it exactly right and that Murkowski's true support at any point in time lies between the extremes. Not prompting with Murkowski's name will likely understate her potential support, either because some respondents will not know about her write-in candidacy or because omitting it implies it can be dismissed.

Prompting that Murkowski is running as a write-in risks overstating that support, either because such a mention gives her candidacy special emphasis or because some truly undecided respondents gravitate toward independent candidates on survey questions. But asking a completely open-ended question will tend to overstate the undecided percentage, because some respondents will have trouble remembering the candidate names and because some will be reluctant to share their decision without more of a push. So again, the truth will fall somewhere in between.

As many of our readers have speculated in comments, this situation creates special challenges for pollsters that use an automated, recorded-voice methodology rather than live interviewers. It may be technically possible to ask an open-ended question, record respondents' spoken answers and then have live humans code them, but doing so would be a costly departure from the automated pollsters' standard procedures. The purely open-ended question is far better suited to a live-interviewer survey.

In other U.S. Senate polling news since our last update:

In Wisconsin, automated pollster Rasmussen Reports released a new survey showing incumbent Democratic Senator Russ Feingold trailing challenger Ron Johnson by seven percentage points (44% to 51%). Their previous survey in late August had the race closer -- 47% for Johnson and 46% for Feingold.

We generally try to view any new poll in the larger context of other surveys by other pollsters -- since any one survey is subject to random error and pollster "house effects" -- but in this case, Rasmussen has been the only pollster active in Wisconsin since July.

In Ohio, Quinnipiac University released a new survey on Friday showing Republican Rob Portman leading Democrat Lee Fisher by an astounding 20 points (55% to 35%), although polls from three other pollsters conducted over roughly the same period show Portman leading by closer margins of between 8 and 11 percentage points.


Races For Governor: Republicans Poised for Big Gains


On the basis of current polling, Republicans stand to gain roughly a dozen governorships, and possibly more. Right now, 26 of the nation's governors are Democrats and 24 are Republicans. Our trend estimates based on public polls in the 37 states holding elections for governor this year show Republicans on the verge of gaining at least 11 seats.

Our focus this week has been largely on the U.S. Senate and particularly the outcome of the Delaware primary, which has boosted the prospects of that state's Democratic candidate and, with it, the odds that Democrats will maintain their Senate majority (despite significant losses). But the larger ongoing story this year is about a gale-force wind blowing in the Republican direction, and nothing demonstrates that trend as clearly as polling in the governors' races.

The contests for governor have more potential volatility because more states (37) hold their gubernatorial elections this year and because so many of those (24) involve open seats. "It is always easier," writes Jennifer Duffy of the Cook Political Report (gated), "for the opposing party to win an open contest than it is to defeat a sitting governor." This higher than usual vacancy rate gives Republicans a better chance of capitalizing on a favorable political environment.

You can see that impact in the following table, which shows our current polling trend estimates in states now represented by Democratic governors. Polling in 13 states shows the Republican candidate leading, and 11 of those contests are open seats. The only incumbent Democrats currently trailing are Iowa's Chet Culver and Ohio's Ted Strickland.

2010-09-17-Blumenthal-GovDems2.png

Two more incumbent Democrats are in potential jeopardy. In Maryland, Democrat Martin O'Malley leads by a "toss-up" margin of less than three points (46.0% to 43.4%). In Massachusetts, Democrat Deval Patrick leads Republican Charlie Baker by roughly five points (40.1% to 34.7%), but Patrick's margin has narrowed over the summer as independent Tim Cahill's support has trended down.

Meanwhile, Republicans are running comfortably ahead in most of the states currently represented by a Republican governor. All five of the seats either trending Democratic or in the toss-up category are open.

2010-09-17-Blumenthal-GovReps2.png

Only Connecticut and Hawaii look like probable Democratic pick-ups based on current polling. Hawaii holds its primary elections tomorrow, and the two candidates competing for the Democratic nomination -- Neil Abercrombie and Mufi Hannemann -- both hold comfortable leads over likely Republican nominee Duke Aiona.

Our trend estimates do show Democrats with nominal advantages in Minnesota, Florida and Rhode Island, but all three margins are close enough to merit a "toss-up" designation.

Add it all up, and we show Republicans on the verge of flipping 13 states from blue to red, and Democrats on the verge of flipping two states from red to blue, for a net Republican gain of 11 seats. One small consolation for Democrats: Of the four contests currently close enough to merit our "toss-up" designation, three are currently represented by Republican governors.

But a caution: These statistics are all based on constantly evolving polling "snapshots" which reflect preferences "if the election were held today." In some states, the number of polls is small and their reliability may be questionable. Moreover, the efforts of some campaigns to communicate via paid advertising are just getting underway in many states.

On that score, it's worth noting that the Cook Political Report, which considers more than just polling in its assessments, still rates as toss-ups several of the states where we show Republicans on the verge of a pick-up (including Illinois, Iowa, New Mexico, Ohio and Oregon). It also still rates Maine as lean Democrat. So public polls alone may not tell the full story in some states.

Still, a quick glance at the many statewide polls available -- including nearly 50 released in September -- makes it very clear that Republicans stand to make major gains in races for governor in 2010.

[Cross-posted to the Huffington Post]


Morning Update: O'Donnell's Win Puts Coons Ahead

Topics: 2010 , Chris Coons , Christine O'Donnell , Fox News , PPP , Public Policy polling , Rasmussen

While pollsters released a flurry of new surveys yesterday in the most competitive Senate races, the surprise result in the Delaware Republican primary had a much bigger impact on the GOP's chances of taking control of the Senate this year.

Specifically, Christine O'Donnell's upset of Republican Congressman Mike Castle flips Delaware from a seat that looked comfortably in the Republican column this year, to one that now looks comfortably (if tentatively) Democratic. Four polls conducted since July all showed Castle leading Democrat Chris Coons by double-digit margins, while the most recent Rasmussen poll had Coons leading O'Donnell by 11 (47% to 36%).

Public Policy Polling (PPP) fielded a general election survey in Delaware over the weekend that they plan to release today, though they teased results yesterday that imply an even bigger Coons lead. They reported that Coons "polls 26 points better" against O'Donnell than against Castle, that O'Donnell's personal rating is 29% favorable, 50% unfavorable, and that only 31% of Delaware's voters think she is "fit to hold office."

While we are on the topic, congratulations to PPP for going where all other pollsters feared to tread and producing an accurate forecast of the O'Donnell surprise in Delaware.

As of this hour, the outcome of the New Hampshire Republican primary remains in doubt, with Kelly Ayotte leading Ovide Lamontagne by just under a thousand votes. A victory by Lamontagne would also cheer Democrats, as most polls show Democrat Paul Hodes faring better against Lamontagne than against Ayotte. Our current trend estimate based on all public polls shows Hodes trailing Ayotte by 9 points (38.5% to 47.5%), but leading Lamontagne by just over 4 points (41.8% to 37.3%).

Meanwhile, yesterday brought 11 new polls in 7 of the most competitive states. Seven of those surveys came from either Rasmussen Reports or from a set of new Fox News tracking polls that -- as I reported yesterday -- use the same field service and essentially the same methodology as the automated Rasmussen polls.

The new surveys included three on the Nevada race between Democratic Senator Harry Reid and Republican challenger Sharron Angle. The two automated polls from Rasmussen/Fox show the race slightly closer (a tie and a one-point Angle edge) than the live-interviewer survey from Ipsos (which puts Reid up by two). Our trend estimate now shows Reid leading Angle by about a point and a half (46.9% to 45.5%).

2010-09-15-Blumenthal-NevadaChart.png

We also saw three new polls in Ohio -- all automated -- from Rasmussen, Fox and SurveyUSA. All three show Republican Rob Portman leading Democrat Lee Fisher by a comfortable margin. Our trend estimate puts Portman ahead by eight points (47.4% to 39.4%) and gaining support since the summer.

2010-09-15-Blumenthal-OhioSenate.png

Finally, the new Fox News survey in Florida is the second since the primary to show Republican Marco Rubio hitting a new high of 43% and Republican-turned-independent Charlie Crist below 30%. Rubio's lead over Crist on our trend estimate is now nine points (40.1% to 31.1%) with Democrat Kendrick Meek still trailing (19.1%).


What Do Fox/Pulse and Rasmussen Have in Common?

Topics: Fox News , Pulse Opinion Research , Rasmussen

Earlier today, Fox News released five new polls measuring voter preferences in the Senate races in Florida, Nevada, Pennsylvania, Ohio and California. The Fox News story says the polls are conducted by Pulse Opinion Research. We will tackle the results in another article, but for now political junkies may be wondering, what is Pulse Opinion Research?

The answer (as reported earlier today by Political Wire) is that Pulse is a "field service" spun off of Rasmussen Reports that conducts their well-known automated, recorded-voice surveys. It also conducts polls for other clients including, as of today, Fox News. While the questions asked on specific surveys may differ, the underlying methodology used by Fox/Pulse and Rasmussen is essentially identical.

Earlier this year, Rasmussen launched a new website for Pulse that, as Scott Rasmussen explained to Tim Mak of the Frum Forum, allows anyone to "go to the [Pulse] website, type in their credit card number, and run any poll that they wanted, with any language that they want... In effect, you will be able to do your own poll, and Rasmussen will provide the platform to ensure that the polling includes a representative national sample." According to the Pulse website, basic election surveys start at $1,500 for a sample of 500 state or local respondents.

Scott Rasmussen confirms, via email, that surveys conducted by Pulse for Fox News and for Rasmussen Reports are essentially equivalent in terms of their calling, sampling and weighting procedures:

Pulse Opinion Research does all the field work and processing for Rasmussen Reports polling. They do the same for other clients using the system that I developed over many years. So, in practical terms, polling done by Pulse for any client, including Fox News, will be processed in exactly the same manner. In a Rasmussen Reports poll, Rasmussen Reports provides the questions to Pulse. In a Fox News poll, Fox News provides the questions for their own surveys.

Both will use the same targets for weighting, including weights applied for partisan identification:

The process for selecting Likely Voter targets is based upon partisan trends identified nationally (and reported monthly). In an oversimplified example, if the national trends move one point in favor of the Democrats, the targets for state samples will do the same. As Election Day draws near, the targets are also based upon specific results from all polling done in that state. In competitive states, Pulse can draw upon a large number of interviews to help estimate the partisan mix.

For Election 2010, the net impact is that the samples are typically a few points more favorable to the Republicans than they were in Election 2008. Also, most of the time, the number of unaffiliated voters is a bit lower than in 2008. The samples also show a lower share of minority voters and younger voters.
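(To make that "oversimplified example" concrete, here is a hypothetical sketch; the party-identification targets below are invented, since Pulse does not publish its actual state targets.)

    # Hypothetical likely-voter party-ID targets for one state (percentages).
    state_targets = {"Republican": 36.0, "Democrat": 38.0, "Unaffiliated": 26.0}

    def shift_targets(targets, dem_shift_points):
        """Move a state's weighting targets with the national partisan trend.

        Per the description above, a one-point national move toward the
        Democrats moves the state targets the same way. Taking the offsetting
        point from Republicans is our assumption; the real procedure is not
        disclosed.
        """
        adjusted = dict(targets)
        adjusted["Democrat"] += dem_shift_points
        adjusted["Republican"] -= dem_shift_points
        return adjusted

    print(shift_targets(state_targets, 1.0))
    # {'Republican': 35.0, 'Democrat': 39.0, 'Unaffiliated': 26.0}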

One positive aspect of the new Fox News/Pulse surveys is that Fox is making demographic cross-tabulations freely available (example here) that Rasmussen Reports keeps behind a subscription wall. And Fox is going a step further, adding weighted sample sizes for each subgroup (something Rasmussen does not currently make available even to subscribers). So if you want to see the demographic composition, you can use the weighted counts to calculate the percentages.
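(That arithmetic is straightforward; as a quick illustration, with made-up weighted counts for a single state sample rather than figures from any actual Fox/Pulse release:)

    # Hypothetical weighted counts by age group -- not from any actual release.
    weighted_counts = {"18-34": 180, "35-49": 260, "50-64": 310, "65+": 250}

    total = sum(weighted_counts.values())
    composition = {group: round(100.0 * count / total, 1)
                   for group, count in weighted_counts.items()}

    print(composition)
    # {'18-34': 18.0, '35-49': 26.0, '50-64': 31.0, '65+': 25.0}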

On the other hand, this development may well double the number of polls conducted with the Rasmussen methodology in some races going forward. For example, the Fox/Pulse surveys were conducted on Saturday, September 11 and included samples in Nevada and Ohio. Today, Rasmussen Reports released two additional surveys conducted in Nevada and Ohio on Monday, September 13. Rasmussen, again via email, confirms that his "Rasmussen Reports polling schedule is entirely independent of anything Fox or Pulse does." He adds:

Our plans were laid out long ago, with the only variable being which races remain the closest as Election Day approaches. For example, we don't expect to poll Connecticut as often as California. But, if the CT race gets closer (as possibly suggested by Quinnipiac), we will poll it more frequently. Same thought process holds true for West Virginia.

As it is, the Rasmussen surveys have grown far more numerous and dominant this election cycle than in 2008. Pollster.com has already tracked 237 Rasmussen Reports surveys on the 2010 elections for U.S. Senate, almost double the number at this point for U.S. Senate races in 2008 (120). While the total number of surveys fielded by all pollsters has also increased, Rasmussen's share of these polls has grown significantly, from 35% of all Senate polls at this point in 2008 to 49% so far this cycle.

2010-09-14-Blumenthal-PollsCounts.png

Rasmussen is the only pollster active in about a half dozen less competitive contests and has fielded three out of four polls in states that have been only marginally competitive, like Indiana and Delaware.

The growing predominance of Rasmussen's surveys so far this cycle has consequences for everyone who follows and tracks polling data, including our efforts to track and chart polls at Pollster and Huffington Post. This is another story that we will focus on in the weeks ahead.

[Cross-posted to the Huffington Post]


Morning Update: Good News for Murray in Washington?

Topics: 2010 , Dino Rossi , Patty Murray , Washington

Today's new Senate survey of interest comes from Washington, where the state's Elway Poll shows Democratic Senator Patty Murray leading Republican challenger Dino Rossi by a nine point margin (50% to 41%). The result is much better news for Murray than three other surveys conducted in late August and nudges Murray ahead of Rossi on our trend estimate by 3.7 percentage points (49.4% to 45.7%).

Does the new poll mean that Murray has gained ground in recent weeks, following a post-August 17 primary "bump" for Rossi (as our chart implies)? Not necessarily. What may be going on is a combination of timing and the wide variation among pollsters that we have seen elsewhere this year: The most recent polls conducted using live interviewers show Murray doing better than those using an automated, recorded-voice methodology.

2010-09-14-Blumenthal-WaPollsDefault.png

Specifically, two automated polls conducted in late August by SurveyUSA and Rasmussen Reports show Rossi leading by 7 and 3 points respectively, while live interview polls conducted by the Democratic Senatorial Campaign Committee (DSCC) and by Elway show Murray leading by 5 and 9 points.

2010-09-14-Blumenthal-RecentWaPolls.png

Less obvious from the table is that the variation in the recent polling is far greater for challenger Rossi's support (an 11-point range, between 41% and 52%) than for incumbent Murray's (a 5-point range, between 45% and 50%). That pattern is similar to what we saw in last year's New Jersey governor's race, where surveys showed much less variation in support for incumbent Jon Corzine than for challenger Chris Christie, but where Christie's number was consistently higher on automated surveys. In New Jersey, the automated polls were closer to the final result.

In this case, the new Elway poll puts far more voters in the "other" and undecided categories (9% total) than the recent automated surveys (3%). That's a typical pattern, and hints that a harder push of the undecided may work against a Democratic incumbent like Murray, at least for now.

We will have to wait and see whether these pollster "house effects" persist into October, although it is also possible that the two automated surveys in late August were an anomaly. Automated surveys earlier in the summer by Rasmussen and Public Policy Polling (PPP) showed Murray leading by margins of 2 to 4 points, more consistent with the 2.2 percentage point Murray margin we get (48.7% to 46.5%) when we use our chart's "smoothing" tool to pay less attention to recent variation and plot a smoother line.

2010-09-14-Blumenthal-WAPollsLowSens.png

Either way, the Murray-Rossi race is shaping up to be one of the most competitive in the nation, so we will be watching it closely.

And this just in: Just as I'm about to post this update, my email inbox tells me that Quinnipiac University has released a new poll on the Connecticut Senate race showing Democrat Richard Blumenthal (no relation) leading Republican Linda McMahon by just six points. That margin is slightly closer than other recent polls in Connecticut.

[Cross-posted to the Huffington Post].


Morning Update: PPP Has Castle-O'Donnell Primary 'Too Close To Call'

Topics: Chris Coons , Christine O'Donnell , Delaware , Mike Castle , PPP

Of the weekend's new polls, the most talked about involves not next month's general election, but rather this Tuesday's Republican Senate primary in Delaware. The new survey from automated pollster Public Policy Polling (PPP) shows "a real possibility of a major upset," with Tea Party conservative Christine O'Donnell holding a three-point advantage (47% to 44%) over Congressman Mike Castle that falls within the poll's margin of error. PPP says the race is now "too close to call."

That result is stunning because Castle, a moderate Republican and popular former Governor, has led likely Democratic nominee Chris Coons by double-digit margins all year. His entry into the race last year was seen as a boon for the Republicans because Castle, as the Washington Post's Chris Cillizza reported, "is widely regarded as the only GOP candidate who can win the seat" in next month's general election.

While polling on an O'Donnell-Coons match-up has been relatively sparse, an August survey by Daily Kos and PPP and an early September Rasmussen poll both show the Democrat leading O'Donnell by margins of 7 and 11 points, respectively.

Even though O'Donnell was the Republican Senate nominee in 2008, most handicappers gave her little chance against Castle. Yet aided by at least $250,000 in Tea Party Express television advertising and endorsements from Sarah Palin, Republican Sen. Jim DeMint, and the National Rifle Association, her campaign has made a significant dent in Castle's popularity among Republicans. PPP reports that Castle's standing among Delaware Republicans has fallen from 60% favorable and 25% unfavorable a month ago to net negative (43% favorable, 47% unfavorable) now.

The only other public poll on the primary was an internal Tea Party Express survey of 300 Delaware Republicans shared with Hotline OnCall that showed Castle leading two weeks ago by six percentage points (44% to 38%).

PPP's new survey also measured general election preferences, which they promise to release later this week, but they hint that Democrat Coons has benefited from the contested primary: "He would start out with a large advantage over O'Donnell in a general election match up," writes PPP's Tom Jensen, "and is polling closer to Castle than he was when PPP polled Delaware last month."

The weekend's other notable Senate poll, conducted by the Las Vegas Review-Journal and Mason-Dixon Polling & Research, shows Democrat Harry Reid with the same 2 percentage point margin over Republican Sharron Angle (46% to 44%) as they found two weeks ago. While Reid's margin falls within the margin of error of both surveys, nine of eleven polls released since July have shown him with similar single-digit margins. Reid's very narrow advantage is likely real, at least for now.
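(A quick back-of-the-envelope note on that "margin of error" caveat: the sample size below is hypothetical, since Mason-Dixon's is not quoted here, but it shows why a 2-point gap is not significant in any single poll even though the same small edge recurring across many polls probably is.)

    import math

    def moe_for_lead(p1, p2, n, z=1.96):
        """Approximate 95% margin of error for the gap between two candidates.

        Standard formula for the difference of two shares from one sample:
        z * sqrt((p1 + p2 - (p1 - p2)**2) / n).
        """
        return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

    # Hypothetical sample of 625 likely voters: Reid 46%, Angle 44%.
    moe = moe_for_lead(0.46, 0.44, 625)
    print(f"lead = 2.0 points, MOE on the lead = {100 * moe:.1f} points")   # about 7.4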

[Cross-posted to the Huffington Post].


Crist Falling in Florida

Topics: 2010 , florida , Illinois , senate , West Virginia

New Senate polls released yesterday confirm the current standings in four states, but a new independent poll in Florida shows a bigger than average lead for Republican Marco Rubio and a continuing decline for Republican-turned-independent Charlie Crist.

The automated Voter Survey Service poll in Florida shows Republican Marco Rubio with 43% of the vote and a double-digit lead over independent Charlie Crist (29%) and Democrat Kendrick Meek (23%). It confirms the decline in Crist's support shown in other surveys as support for Rubio and Meek began to rise following the August primary.

Rubio's position is enviable, since he now receives 70% of the Republican vote (on the two most recent surveys), while Crist and Meek continue to divide the Democratic vote. His lead over Crist has grown to six points on our trend estimate (38.7% to 32.6%), enough to classify the race as leaning Republican. In a pattern we noted yesterday, Rubio does better on the automated Voter Survey Service poll (43%) than on other recent surveys done with live interviewers.


But Crist's decline makes voter preferences in this race especially volatile. The last two polls have shown Crist ahead of Meek by 10 and 6 points, respectively. How many of Crist's supporters will stick with him if polls in the next few weeks show Meek tied with or slightly ahead of Crist?

Elsewhere, Rasmussen Reports released new automated survey results for four states: Arizona, Illinois, Missouri and West Virginia. The polls show no consistent trend compared to Rasmussen's previous surveys in the same states a month ago, with non-significant variation in all but Arizona.

In Arizona, a state in which only Rasmussen has released public surveys since April, they show an eight point net improvement for John McCain's Democratic challenger Rodney Glassman since August, although McCain still leads comfortably (51% to 37%).

Rasmussen shows a slight drift for Democrat Alexi Giannoulias in Illinois, from dead even in August to a four-point deficit against Republican Mark Kirk (37% to 41%), although the change is not statistically significant. Our trend estimate, based on all recent public polls, shows this contest to be a virtual tie (38.7% Giannoulias to 38.6% Kirk) -- the closest in the nation as of this morning.

Rasmussen's West Virginia poll shows Democrat Joe Manchin continuing to lead Republican John Raese narrowly, by roughly the same margin (50% to 45%) that Rasmussen measured a month ago. As with Arizona, Rasmussen is the only public pollster to release results there since a Repass and Partners poll showed Manchin leading by 22 points (54% to 32%) in early August.


Poll Update: Divergent Kentucky Polls Suggest A Pattern?

Topics: 2010 , kentucky

Two new polls released yesterday on the Kentucky Senate race, by CNN/Time and Rasmussen Reports, help illustrate two intriguing patterns we are watching this year: bigger than usual differences between results among registered voters and likely voters, and consistent gaps between polls that use live interviewers and those that use automated methods.

The new CNN/Time polls released for Kentucky, Florida and California yesterday reported results for all self-described registered voters, while other recent polls have started to narrow their samples to those most likely to vote in this year's mid-term elections. The split represents a divergence in philosophy among pollsters: Some have less faith in the ability of polls to identify the likely electorate before October, while others apply simple likely voter "screens" a year or more before the election.

This year, national results reported by the Pew Research Center and Gallup have shown bigger than usual gaps between Republicans and Democrats on enthusiasm and especially interest paid to the campaign (many pollsters use the latter measure as part of an index used to select likely voters). Moreover, two new national polls this week by ABC News/Washington Post and NBC/Wall Street Journal showed Republicans running much better among likely voters on the generic U.S. House ballot.

The "likely voters" identified by pollsters are typically a few points more Republican, as turnout is typically higher among Republican leaning demographic groups, those who are older, better educated and white. But, again, this year's gap -- at least in early national surveys -- appears to bigger than usual.

We are also seeing another pattern emerge that is mostly unique to 2010: In several states, pollsters using automated methods, particularly Rasmussen Reports, SurveyUSA and Public Policy Polling (PPP), are reporting results consistently more favorable to Republican candidates than those using live interviewers.

Until recently, the two differences were confounded and hard to disentangle, as the automated polls were usually the only ones that also screened for likely voters. But as more pollsters shift to likely voter screens, we are beginning to see differences that are more clearly about the survey "mode."

The Kentucky Senate race is a prime example. Yesterday's CNN/Time live-interviewer survey of registered voters shows a dead-heat tie (46% to 46%) between Republican Rand Paul and Democrat Jack Conway. Yet surveys of likely voters conducted in August by Rasmussen and SurveyUSA have shown Paul leading by larger margins (roughly 10 points on average), with the surveys of likely voters conducted with live interviewers by CN2 Politics/Braun Research and Reuters/Ipsos falling somewhere in between (Paul leading by 5 points on average).

2010-09-09-Blumenthal-KYSenPolls.png

We do not see these patterns everywhere. For example, CNN and Time also released new surveys yesterday on the races for Senate and Governor in California and Florida which were generally more consistent with other recent surveys that used automated methods or likely voter screens.

But we are seeing the Kentucky pattern elsewhere and will certainly have more to say about it over the next nine weeks. For today, however, we offer this advice: Remember New Jersey.

Another highlight from yesterday's polls:

Although the CNN/Time survey on the Florida Senate race shows Democrat Kendrick Meek running a distant third, his 24% of the vote is the most he has received since Charlie Crist announced his intention to run as an independent.

Yes, the registered voter sample used for the survey may be slightly more Democratic, but the results among Democrats explain much of the difference: Meek leads Crist in that subgroup on the CNN/Time poll (54% to 36%). When I averaged results among Democrats in polls conducted before the August primary, Crist actually ran slightly ahead of Meek in that subgroup (43% to 37%).

Coming up tomorrow morning: A look at the races for Governor.


Poll Update: Senate Remains In Play

Topics: 2010 , Senate

With less than nine weeks remaining until Election Day, control of the U.S. Senate remains in play, as Republicans hold meaningful leads in five states currently held by Democrats, with six more Democratic seats remaining in our "toss-up" category. Since our last update two weeks ago, new polls have nudged our polling averages in a slightly more Republican direction in the more competitive states, particularly Florida, Kentucky, California and Washington.

Remember that to win an absolute majority in the Senate, the Republicans need to gain at least 10 seats (although, as several Pollster and HuffPost commenters have pointed out, a gain of 9 seats would leave the Democratic majority dependent on the vote of the not-always-reliable Joe Lieberman).

Currently, Republican candidates hold strong double-digit leads in four states now represented by Democrats: North Dakota, Arkansas, Indiana and Delaware. The Delaware margin assumes that Mike Castle wins next week's Republican primary. Democratic hopes there will brighten considerably should Republican Tea Party candidate Christine O'Donnell prevail, as two recent polls show she would trail Democrat Chris Coons.

2010-09-08-Blumenthal-SenateDemSeats.png

Six seats currently held by Democrats remain in our toss-up category:

  • In Colorado, our most recent trend estimate shows Republican Ken Buck with a slim 3.3-point advantage (46.1% to 42.8%) over Democratic Senator Michael Bennet, although the two most recent polls point in opposite directions: The most recent Rasmussen Reports tracker gives Buck a four-point lead, while a survey conducted by a bipartisan team of campaign pollsters gives Bennet a 3-point advantage.

  • In Washington state, two recent automated surveys by SurveyUSA and Rasmussen show Republican challenger Dino Rossi narrowly but not significantly ahead of Democratic Senator Patty Murray. Rossi's 1.9 point edge (49.7% to 47.8%) on our trend estimate is slightly improved, but leaves Washington very much in the toss-up category.

  • California has also seen two new automated surveys in the last week from Rasmussen and SurveyUSA both showing Republican challenger Carly Fiorina deadlocked with Democratic Senator Barbara Boxer. Our trend estimate now shows Boxer with an advantage of less than one percentage point (46.8% to 46.2%), an edge that has narrowed roughly two points over the last two weeks.

  • All of the recent public polling in Wisconsin comes from Rasmussen Reports, which has shown a deadlocked race between Senator Russ Feingold and his Republican challenger Ron Johnson. Johnson's less than one-point margin on our trend estimate (47.3% to 46.0%) mirrors those results.

  • In Illinois, a new live-interviewer survey by the Chicago Tribune confirms the results of the most recent Rasmussen automated survey. Both show Democrat Alexi Giannoulias and Republican Mark Kirk tied. Our trend estimate gives Kirk a two-point edge (39.7% to 37.7%).

  • In Nevada, two recent surveys by Mason-Dixon and Rasmussen both show Democrat Harry Reid with non-significant leads of 3 and 1 percentage points respectively. Our trend estimate gives Reid a 3.4 point advantage (48.6% to 45.2%), mostly because Reid has led nominally on 8 of 10 surveys conducted since July.

Of the seats currently held by Republicans, only Florida remains in our toss-up category, and there our trend estimate shows Republican Marco Rubio with a 3.1-point advantage over Republican-turned-independent Charlie Crist (37.1% to 34.0%), with Democrat Kendrick Meek running a distant third (16.6%).

2010-09-08-Blumenthal-SenateRepSeats.png

Two weeks ago, our trend estimate put Kentucky in the toss-up category, but two recent polls by SurveyUSA and Kentucky cable news channel CN2 show Republican Rand Paul leading Democrat Jack Conway by margins of 15 and 5 points respectively. Our trend estimate now shows Paul leading by 5.3 points (44.9% to 39.6%), enough to shift Kentucky to lean Republican.

All tallied, we currently show 48 seats leaning or currently held by Democrats (including the two independents that caucus with the Democrats), and 45 seats leaning or currently held by Republicans. Thus, control of the U.S. Senate rests on the outcome of the seven contests now in the toss-up category: Colorado, Washington, California, Wisconsin, Illinois, Nevada and Florida.
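For those who want to check the seat math, here is a minimal sketch in Python, using only the tallies above; the thresholds follow from the tie-breaking rules discussed earlier.

# Seat arithmetic for Senate control, based on the tallies above:
# 48 seats held by or leaning to the Democratic caucus, 45 held by or
# leaning to the Republicans, and 7 toss-ups (CO, WA, CA, WI, IL, NV, FL).
rep_safe_or_lean = 45
tossups = 7

# An outright Republican majority requires 51 seats, since a 50-50 Senate
# stays under Democratic control via Vice President Biden's tie-breaking vote.
print(f"Toss-ups Republicans must sweep for 51 seats: {51 - rep_safe_or_lean} of {tossups}")

# At 50 Republican seats, the Democratic majority would hinge on Joe
# Lieberman remaining in the caucus.
print(f"Toss-ups Republicans must win to reach 50: {50 - rep_safe_or_lean} of {tossups}")

In other words, Republicans would have to sweep six of the seven toss-ups for outright control, and five of seven just to put the majority in Lieberman's hands.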


Political Scientists Forecast Big Losses For Democrats

Topics: 2010 , Alan Abramowitz , Alfred Cuzan , American Political Science Association , Charles Bundrick , Charles Tien , Christopher Wlezien , Gary Jacobson , Jim Campbell , Joe Bafumi , Michael Lewis Beck , Robert Erikson

With the midterm elections now just nine weeks away, a group of political scientists gathered for a conference in Washington D.C. this weekend forecast significant losses for the Democrats. Three of the five forecasts predicted that Republicans will gain majority control of the House of Representatives.

The annual meeting of the American Political Science Association (APSA), which featured nearly 5,000 participants and close to 900 panel and roundtable sessions, was about far more than election forecasting. Those most interested in the 2010 campaigns, however, gravitated to a Saturday session in which five political scientists presented the latest results from their forecasting models, some of which have been in development for 30 years or more.

Democrats currently hold a 256 to 179 seat advantage, so Republicans need to win at least 39 seats to gain majority control. Three of the models, two of which draw on national polls measuring whether voters plan to support the Democrat or Republican candidate in their district, point to Republicans picking up between 49 and 52 seats in the House, more than enough to win majority control. Specifically:

  • Alan Abramowitz of Emory University forecast a Republican gain of 49 seats, based on current polling showing Republicans with a roughly five percentage-point lead on the generic House ballot.
  • Joe Bafumi of Dartmouth College presented his forecast of a 50-seat Republican gain, based on a model and paper co-authored with Robert Erikson of Columbia University and Chris Wlezien of Temple University (and summarized last month in the Huffington Post). Their model also rests heavily on national polling on U.S. House vote preferences.
  • James Campbell of SUNY Buffalo predicted a gain of 50 to 52 seats for the Republicans, using a model that combines assessments of the number of "seats in peril" by the Cook Political Report with the recent job approval rating of President Barack Obama.

Two more models offered a less pessimistic outlook for the Democrats:

  • Alfred Cuzan forecast a Republican gain of 27 to 30 seats based on a model, developed with University of West Florida colleague Charles Bundrick, that relies mostly on measures of economic growth and inflation rather than voter preference polling.
  • Michael Lewis-Beck of the University of Iowa predicted a Republican gain of just 22 seats. He collaborated with Charles Tien of CUNY Hunter College on a more than 30-year-old "referendum" model, updated with measurements taken earlier this year. Their model was the only one to exclude measurements of the current seat division between Democrats and Republicans.

Why so much variation in the forecasts? Another speaker, Gary Jacobson of the University of California San Diego, pointed out that the number of previous elections used by forecasters (typically between 16 and 32) is "not a very big number," while a great many "plausible" predictive measures exist. Moreover, the national polling numbers used by the modelers are often "really, really noisy."

Jacobson also noted the differences between the "fundamentalist" models of Cuzan/Bundrick and Lewis-Beck/Tien, which assume that views of the economy and the Obama administration drive voting, and the others that use vote preference questions which, as he put it, "add in the information that's already the product of these fundamentals" as well as "the other stuff that's going on" with voter preferences.

Lewis-Beck argued that "the best models are based on theory ... things that we know [or] that we're pretty certain we know," which in this case means the belief that "people vote about the main direction of the economy, and they vote about big macro political issues," especially in midterm elections.

At least one of the academics noted the apparent gap between what the fundamentals alone predict and what the polls are picking up. "Republicans are polling a lot better than they should be," Bob Erikson argued, "by [the] fundamentals."

[Cross-posted to the Huffington Post].


Newsweek Poll 'Cooked?' Please

Topics: Generic House Vote , Newsweek , Party Weighting , Todd Eberly , Weighting

Is the latest Newsweek poll "fishy?" As we reported yesterday, their latest sample of registered voters split evenly on the question of whether they plan to vote for a Democrat or a Republican for Congress this year. Over the weekend, Todd Eberly, an assistant professor of political science at St. Mary's College of Maryland, argued that the poll seemed "fishy" and "cooked." Jim Geraghty gave Eberly's post a plug on National Review Online and, as a result, commenters on my report on Pollster.com have been howling with outrage that we gave any credence to a "dishonestly weighted" poll.

As I noted yesterday, the Newsweek poll did produce a result on the more positive end of the bell curve for Democrats. Make no mistake: A simple average of recent polls (including Newsweek) shows a roughly five-point Republican advantage on the so-called generic House ballot -- a result that points to Republicans winning 50 or more seats and with it, control of the House. Moreover, the trend is moving in the Republican direction. So no one should interpret anything that follows as evidence that "all is well" for the Democrats.

2010-08-31-Blumenthal-PollsterGeneric.png

If Eberly had confined his criticism to Newsweek's headline and story, which focused only on the Newsweek poll and thus concluded that Democrats "may not be headed for a bloodbath," I would be sympathetic. But Eberly goes much farther and alleges that the data are "fishy," that "someone at Newsweek cooked the books and hoped we wouldn't notice."

On that score Eberly has his math -- and the facts -- flat wrong.

The crux of his argument -- the evidence that he oddly alleges the Newsweek pollsters hoped we wouldn't notice -- appears at the very top of the "complete poll results" document produced by Newsweek's polling firm, Princeton Survey Research Associates (PSRA; interests disclosed: PSRA CEO Evans Witt is a neighbor and friend). Because they provide results for the entire survey tabulated by party identification, PSRA also discloses the unweighted sample sizes for all the party subgroups, Democrats (280 registered voters), Republicans (284) and independents (247) as well as the total of all registered voters interviewed (856).

Eberly finds that partisan mix inconsistent with the results that Newsweek reports for the generic ballot. "[I]t is mathematically impossible," he writes, "for Democrats and Republicans to be tied at 45%" given that party breakdown.

Well of course it is. The party breakdown is unweighted. PSRA also discloses, on the same front page of their questionnaire, that their data are "weighted so that sample demographics match Census Current Population Survey parameters for gender, age, education, race, region, and population density."

Now in fairness, PSRA's report does not explicitly say that the subgroup sample sizes are unweighted -- an omission which often leads to this sort of confusion -- but they do provide weighted results for party identification at the end of their report. Among registered voters the weighted result is 32% Republican, 35% Democrat, 29% independent and the rest volunteering that they have no party (5%), are a member of another party (1%) or are unsure (3%).

"Now it's possible," Eberly concludes, "that after weighting for gender, age, education, race, region, and population density the partisan ID of the sample would change."

Yes. It's also likely. Any national pollster will tell you that weighting a sample of adults to match census statistics will typically make the sample a few points more Democratic. The four-point shift seen on this survey is slightly bigger than usual, but that's the way random variation works.

Newsweek does not weight its polls by party. They weight their adult samples demographically and then, in this case, report on the results among registered voters. Most national media pollsters use the same procedure. A simple Google search on "weighting by party ID" will quickly yield a full explanation and more.
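For readers unfamiliar with the procedure, here is a toy sketch of how demographic weighting can shift party identification. The numbers are invented for illustration; they are not Newsweek's or PSRA's data.

# A toy illustration of demographic weighting -- the numbers are made up
# and are NOT Newsweek's or PSRA's data.
sample_share = {"18-44": 0.40, "45+": 0.60}   # hypothetical sample composition
census_share = {"18-44": 0.48, "45+": 0.52}   # hypothetical census targets

# Each group's weight is its population share divided by its sample share.
weights = {g: census_share[g] / sample_share[g] for g in sample_share}

# Hypothetical party identification within each age group.
pct_democrat = {"18-44": 0.40, "45+": 0.30}

unweighted = sum(sample_share[g] * pct_democrat[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * pct_democrat[g] for g in sample_share)
print(f"Percent Democrat, unweighted: {unweighted:.1%}; weighted: {weighted:.1%}")
# Because the correction gives more weight to the (more Democratic) younger
# group, the weighted party ID comes out about a point more Democratic.

The same logic, applied across gender, age, education, race, region and population density, is how an unweighted sample with slightly more Republicans than Democrats can produce a weighted party breakdown of 35% Democrat to 32% Republican.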

But Eberly is having none of that. His smoking gun? When he enters Newsweek's generic ballot results by party into a spreadsheet and plugs in the "reported [party] breakdown" (36% Democrat, 32% Republican), he can't reproduce the 45%-to-45% tie they report on the generic House ballot. By his calculations, "the Republicans still lead by 47.4% to 42.6% -- [so] the poll is pure nonsense."

Professor Eberly? Did you notice that the unweighted sample sizes of Democrats (280), Republicans (284) and independents (247) add to just 811, not the 856 registered voters that Newsweek reported?* Did you wonder why? Did it occur to you that your tabulations omitted results for 45 interviews conducted among registered voters whose answers were "other," no party or unsure, and that the omission might explain why your calculations don't match what Newsweek reported?

Apparently not.

Now there's nothing unusual about Newsweek's omission. Few public pollsters report results for subgroups of less than 100 interviews, and for good reason: A subgroup of 45 interviews would yield a margin of error of at least +/- 15%. But I asked PSRA to make an exception in this case and they kindly disclosed that the 45 other/none/unsure respondents support the Democratic candidate in their district rather than the Republican by a 40% to 29% margin. Put those numbers into a spreadsheet along with the rest of the results-by-party, apply the weighted party composition reported at the end of the questionnaire (36% Democrat, 32% Republican, 27% independent, 5% other/none/unsure), and I get a result on the generic ballot of 45.8% Democrat, 44.6% Republican. The slight difference from the 45% to 45% reported by Newsweek is likely due to the rounded numbers plugged into the spreadsheet.
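To see how the reported topline follows from the pieces, here is a sketch of that spreadsheet exercise in Python. The party shares and the other/none/unsure result are the figures PSRA disclosed; the Democratic, Republican and independent support figures are placeholders of my own, since the full crosstab is not reproduced here.

# The weighted topline is just a weighted average of the by-party results.
# Party shares and the "Other" result are PSRA's; the Dem, Rep and
# independent support figures below are HYPOTHETICAL placeholders.
party_share = {"Dem": 0.36, "Rep": 0.32, "Ind": 0.27, "Other": 0.05}
support = {
    "Dem":   {"D": 0.88, "R": 0.06},   # hypothetical
    "Rep":   {"D": 0.05, "R": 0.91},   # hypothetical
    "Ind":   {"D": 0.38, "R": 0.42},   # hypothetical
    "Other": {"D": 0.40, "R": 0.29},   # the figure PSRA disclosed
}
dem = sum(party_share[g] * support[g]["D"] for g in party_share)
rep = sum(party_share[g] * support[g]["R"] for g in party_share)
print(f"Generic ballot: Democrat {dem:.1%}, Republican {rep:.1%}")
# Omitting the 5% other/none/unsure group -- as Eberly's spreadsheet did --
# changes the answer; with Newsweek's actual crosstab, that omission was
# enough to turn the reported tie into an apparent Republican lead.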

2010-08-31-Blumenthal-NewsweekByParty.png

Eberly calls on Newsweek "to release fully the effects of it's weighting." I have no idea what he means, but readers should know that Newsweek discloses more about its weighted and unweighted party identification results than most pollsters. Can you point to any Rasmussen poll of registered or likely voters, for example, that discloses either its unweighted or weighted party identification breakdown?

Now again, the results of this Newsweek poll are arguably on the optimistic end of the bell curve for Democrats, but given the reported +/- 4% margin of error, the 45%-to-45% result does not differ significantly from our 45.6% Republican to 41.1% Democrat trend estimate (as of this writing) based on all available public polls.

The charge that Newsweek and PSRA intentionally "cooked the books and hoped we wouldn't notice" is nonsense. Eberly owes them a retraction and an apology.

*Hat tip to Pollster reader John, who did notice the discrepancy.

[Cross-posted to the Huffington Post]


Gallup vs. Newsweek on the Generic

Topics: Gallup , Generic House Vote , Newsweek

Two national polls released today and over the weekend report very different results leading to very different conclusions:

On Friday, under the headline "Democrats May Not Be Headed for Midterm Bloodbath," Newsweek reported results from a new national poll of registered voters showing Americans evenly split (45% to 45%) on the question of whether they would vote for the Democratic or Republican candidate for Congress in their district.

This afternoon, Gallup released another national survey of registered voters, also conducted last week, showing Republicans with an "unprecedented 10-point lead" (51% to 41%), the largest Republican advantage Gallup has measured in its nearly sixty years of tracking the so-called "generic ballot."

So what's going on?

Much of the gaping difference between the two polls is probably explained by the usual random variation that affects all polls. Use your mouse to poke around our interactive chart (posted below), and you will soon discover that the latest Gallup survey result is more favorable for the Republicans than most, that the Newsweek poll is similarly more favorable for the Democrats, and that both fall within the typical range of variation, amounting to +/- three or four points from the trend line. Our overall trend estimate based on all of the available polls gives Republicans a 5.2 percentage-point advantage (46.8% to 41.6%).

We could obsess further over the consistent differences ("house effects") among pollsters, but what is far more important is that the averages show a GOP lead that has been widening all summer. That trend is consistent with the historical pattern identified here on Friday by political scientists Joe Bafumi, Bob Erikson and Chris Wlezien: the "electorate's tendency in past midterm cycles to gravitate further toward the 'out' party over the election year."

Moreover, you see the same trend even if we drop all Newsweek and Gallup polls, plus all of the Internet-based surveys and automated surveys (including Rasmussen), and focus only on the remaining live-interviewer telephone surveys, as in the chart below. The margin for the Republicans is virtually identical (46.6% to 41.4%).

2010-08-30-Blumenthal-GenericLiveOnlysml.png

So while the "unprecedented 10-point lead" reported by Gallup probably exaggerates the Republican lead, any result showing a net Republican advantage on the so-called generic ballot is bad news for Democrats. Bafumi and his colleagues estimated their 50-seat gain for the Republicans assuming a two-point advantage for Republicans on the generic ballot, which they project will widen to a six-point lead by November. If the Republican lead on the generic ballot is already that wide (or close), their projection for the Democrats would worsen.

[Cross-posted to the Huffington Post].


Meek Will Likely Gain In Florida, But How Much?

Topics: 2010 , Charlie Crist , Florida , Kendrick Meek , Marco Rubio

So what's next in the Florida Senate race? Can Democratic nominee Kendrick Meek convince Florida Democrats to abandon Republican-turned-independent Governor Charlie Crist? And does Crist have a path to victory?

The current polling snapshot can help us understand the challenges that each face, but perhaps more than in any other Senate race, the horse race polling numbers here are potentially volatile and subject to change. This race is definitely one to watch.

The tabulations that pollsters have produced by party are, for now, the most important. I averaged the vote-by-party results reported for the general election by four pollsters, Ipsos Public Affairs, Mason-Dixon Polling & Research, Public Policy Polling (PPP) and Quinnipiac University (my tabulations do not include results from the Rasmussen poll conducted last night and released earlier today, mostly because they did not provide complete results by party for non-subscribers, but the numbers they reported are generally consistent with those below).

2010-08-26-Blumenthal-FLSenatebyparty.png

The by-party numbers show that Meek faces a huge challenge: Crist leads Meek narrowly among Democrats (42% to 37%) and wins a greater share of the vote among Democrats (42%) than among Republicans (20%). Meanwhile, Meek trails Rubio among independents by 22 points (9% to 31%).

The numbers also demonstrate the difficulty Crist will have growing his current support (and keep in mind that Crist trailed Rubio narrowly overall on three of the four surveys). Self-identified independents are a relatively small portion of the likely Florida electorate. In the four polls I looked at, 18% of the voters, on average, identified as independent, and Crist is already winning 42% of their support. Thus, even if he can somehow boost his support among independents to 60%, it would add just 3 percentage points to his overall total.
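Here is that back-of-the-envelope arithmetic spelled out; the shares are the averages cited above.

# How much would a surge among independents add to Crist's overall total?
independent_share = 0.18    # independents' average share of the electorate
crist_now = 0.42            # Crist's current support among independents
crist_surge = 0.60          # a hypothetical jump to 60%

gain = independent_share * (crist_surge - crist_now)
print(f"Added to Crist's overall vote share: {gain:.1%}")   # about 3 points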

Meanwhile, Meek's obvious strategy is to win over Democrats, fast, and his campaign is wasting no time touting Meek as "the only real Democrat" and reminding reporters of the many conservative stands Crist took until days before abandoning the Republicans earlier this year. That strategy also works for Marco Rubio, who joined Meek in pounding Crist this week for not saying whom he plans to support for majority leader.

All of this, as Politico's Martin and Burns put it, "leaves Crist in the position of having to perform Houdini-like marvels of contortion to find a large enough space in the political middle to keep his independent bid on track."

Not surprisingly, both the Meek and Rubio campaigns agree that "political gravity" will work in Meek's favor. But can Meek really rally from a distant third to challenge Rubio? In a public memo, Meek campaign manager Abe Dyk argues that he can:

With Republicans coalescing around a Tea Party candidate, and Democrats with Kendrick, the math does not exist to elect Charlie Crist. With an expected turnout of 43% Democrats and 40% Republicans, Kendrick needs to win 75% of registered Democrats and just 17% of the registered Independent vote to secure 35% of the vote total. 35%-40% is all that is needed to win in a three-way race.

That math strikes me as a bit optimistic, first in assuming that Democratic voters will outnumber Republicans in Florida this year,** and second in assuming that a candidate can win with less than 40% of the vote while also assuming that Crist's support among Democrats will collapse. But that's mostly quibbling: To run even with Rubio, Meek will need to close the gap among independents and win a percentage of Democrats that is at least as high as Rubio's percentage among Republicans. So whether Meek's goal among Democrats is 75% or 80%, it's a tall order. Can Meek really double his support among Democrats between now and November?
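For the record, the memo's arithmetic does add up, provided you grant its assumptions. Here is a quick check, treating the remaining 17% of turnout as independents and giving Meek essentially nothing among Republicans, as the memo implies:

# Checking the Meek campaign memo's arithmetic under its own assumptions.
dem_turnout, rep_turnout = 0.43, 0.40
ind_turnout = 1 - dem_turnout - rep_turnout      # 17% of the electorate

meek_among_dems = 0.75
meek_among_inds = 0.17
meek_among_reps = 0.0                            # assumed negligible

meek_total = (dem_turnout * meek_among_dems
              + ind_turnout * meek_among_inds
              + rep_turnout * meek_among_reps)
print(f"Meek's share under the memo's assumptions: {meek_total:.1%}")  # ~35%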

To get a better handle on that question I asked two of the Florida pollsters to tabulate their results among a crucial subgroup: The self-identified Democrats in their surveys that support Crist. The resulting subgroups of "Crist Democrats" are relatively small -- just 106 interviews on the Ipsos survey and 147 on the Quinnipiac poll, yielding margins of error of +/- 10% and +/- 8% respectively -- but the results are largely consistent. They help explain Crist's current appeal among Democrats, but also why he will have trouble maintaining that support.

For example, the Crist Democrats overwhelmingly approve of Barack Obama's performance as president (79% on the Ipsos survey and 80% on Quinnipiac), but not surprisingly, they are even more approving of Crist as governor (90% on the Ipsos survey and 88% on Quinnipiac).

There are also hopeful signs for Meek: Quinnipiac finds that nearly half of the Crist Democrats (45%) say they haven't heard enough about Meek to rate him, and only 20% report an unfavorable rating. Quinnipiac finds that nearly half of the Crist Democrats are self-described liberals (46%), and Ipsos finds 36% "strongly" identify with the Democratic party. Quinnipiac finds that nearly a quarter (23%) are African American.

So collectively these results suggest that Meek has much room to grow, and that "political gravity" is poised to work in his favor. On the other hand, they also suggest that some Democrats will stick with Crist no matter what. What is Crist's floor of support among Democrats? We will have to wait and see.

One thing is certain: Crist's independent candidacy will make voter decisions more complicated than in other races and, for that reason, potentially far more volatile. Voter preferences could shift, and fast, at any point this fall (including the final week). As such, this is a race worth watching.

**Those who want to go deeper into the wonky weeds should know that Dyk's memo references actual party registration, while poll respondents may sometimes report something different. More specifically, while Mason-Dixon asks explicitly about party registration, Quinnipiac and Ipsos use a more traditional party identification question that asks respondents what they "consider" themselves and PPP asks respondents simply whether they "are" Democrats, Republicans or independents. So the numbers I'm reporting are probably slightly different than what Dyk is using.

[Cross-posted to the Huffington Post].


Why Were Polls Off in Florida?

Topics: 2010 , Bill McCollum , Florida , Poll Accuracy , Rick Scott

Here's an easy bet to win today in Washington (or anywhere else where true political junkies gather): Where did polling miss the mark most yesterday, in Florida's Republican primary for Governor or Florida's Democratic primary for Senate?

Judging by the tweets I've seen (and my own snap judgment), most of you may be thinking the polls were most off in the Governor's race, where most of the final polls showed Bill McCollum leading. If so, you'd be wrong. The three polls fielded in the last week on the Democratic Senate contest understated Kendrick Meek's margin by an average of 11 percentage points. The three final-week polls on the Republican Governor's race underestimated Rick Scott's margin by an average of just 5 points (the mean absolute error was 7.7 points; all of these numbers are based on the unofficial count with all precincts reporting).
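For those curious how such error figures are computed, here is a sketch; the poll and result numbers below are invented placeholders, not the actual Florida returns, so only the method is meant to carry over.

# Signed and absolute error on the margin, for a set of polls vs. the result.
# The numbers below are HYPOTHETICAL placeholders, not the actual Florida data.
def margin_errors(polls, actual):
    # polls and actual are (winner_pct, runner_up_pct) pairs
    actual_margin = actual[0] - actual[1]
    errors = [(p[0] - p[1]) - actual_margin for p in polls]
    mean_signed = sum(errors) / len(errors)
    mean_absolute = sum(abs(e) for e in errors) / len(errors)
    return mean_signed, mean_absolute

polls = [(39, 45), (43, 39), (40, 42)]   # hypothetical final-week polls
actual = (46, 43)                        # hypothetical final result
signed, absolute = margin_errors(polls, actual)
print(f"Mean signed error: {signed:+.1f} points; mean absolute error: {absolute:.1f} points")
# A negative signed error means the polls, on average, understated the
# winner's margin.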

Thus, we have another example of the pre-election pollster's paradox: The errors that get noticed are those that are just wrong enough to give everyone the wrong impression about the likely winner.

But let's focus on the Republican primary for Governor, for now, since theories are flying about why some polls missed Scott's looming victory. I asked our Pollster.com colleague, University of Wisconsin Professor Charles Franklin, to run one of his patented "bullseye" polling error charts. The chart below displays each poll as a dot, with the vertical axis representing Scott's percentage, the horizontal axis representing McCollum's percentage, and the center of the bullseye representing the actual result.

In this case, two pollsters -- Public Policy Polling (PPP) and Sunshine State News/VSS/Susquehanna Polling and Research -- land in the center ring of the bullseye. Their final surveys yielded the smallest undecided percentages and were also the only two to show Scott ahead.

Pollsters have long debated how to handle the undecided category in measuring poll error. Should we allocate the undecided among the candidates and, if so, how? This chart sidesteps that debate, although keep in mind that polls that get the margin exactly right will fall in the lower left quadrant along an unplotted diagonal line from the center of the bullseye to the lower left corner.

So, aside from finding fewer undecided voters, why did the PPP and Susquehanna polls get closer to the final result?

One theory, floated by our intern Harry Enten (see the 9:01 p.m. entry in last night's live blog), is that the polls by PPP and Susquehanna used an automated, recorded-voice methodology and drew their random samples from the official list of registered voters. Harry argues that both methods provide a more accurate identification of truly likely voters: The registered voter list because it identifies actual voters (and can make use of their actual history of past voting) and the automated method because voters are theoretically more willing to provide honest answers about their vote intent to a machine rather than a live interviewer.

I have long speculated that automated surveys are better at selecting truly likely voters in especially low-turnout races, on the theory that they can identify likely voters more accurately and are better able to interview a narrower slice of the electorate at reasonable cost. In this case, however, PPP's Tom Jensen speculates that their survey was closer to the final result because they "used a loose screen" in selecting likely voters and thus "picked up more non-typical voters who went for Scott." Yesterday's Republican turnout (1.28 million voters) was slightly larger than the turnout in the August Republican primary for Senate in 2004 (1.16 million) and much larger than the Republican gubernatorial primary in 2006 (0.96 million).

Another intriguing theory concerns the surprising 10% of the vote received by Mike McCalister, a third Republican candidate for Governor. Politico's Jon Martin observed, via Twitter, that McCalister's showing was perhaps "the biggest surprise of the night" because he aired no television ads, did not appear in the debates and got little media attention. Republican media consultant Mike Murphy suggests, also via Twitter, that some voters may have been confused by the similarity in the names: "McCollum + McCalister = McConfused."

What complicates this issue further is that only one pollster -- Mason-Dixon -- offered McCalister as a choice on their vote question, and they showed him getting only 4% to 45% for McCollum. PPP and Quinnipiac University offered only Scott and McCollum as choices (although Quinnipiac recorded 4% who named "someone else" as their choice). So what may have happened is that some Republicans headed to the polls intending to vote for McCollum, but chose a similar-looking name by mistake.

Whatever the explanation, we should always remember that polling in primary elections is more prone to mishaps than polling in general elections. Learning that first lesson, however, is the easy part. Anticipating which polls will be right, and even explaining why they differ after the fact, is much harder.


J.D. Hayworth's Poll Bluster: Anything To It?

Topics: 2010 , Arizona , J.D. Hayworth , John McCain , Primary elections

If nothing else, I have to give arch-conservative, former Congressman J.D. Hayworth credit for doggedly insisting, despite all available evidence, that he is "poised to pull one of the greatest upsets in political history" in Tuesday's Arizona Senate primary. After all, the most recent automated Rasmussen poll showed him trailing Sen. John McCain by twenty points (54% to 34%), and every public poll conducted in this race has shown McCain leading, including eight fielded since March that had McCain ahead by margins of between 5 and 45 percentage points.

This morning, Hayworth even offered NBC's Chuck Todd a theory for why the polls might be wrong:

Here's the limitation of public opinion polls. They cannot accurately gauge the turnout. Conservatives are motivated to go cast a vote for me and retire John McCain. Also on the ballot, the governor's race on the Republican side is devoid of any suspense. Several candidates dropped out. Governor Brewer has a clear march back to the nomination [and] that will suppress the moderate turnout.


Well, maybe. We'll know the answer soon enough.

But whatever the outcome, Hayworth does have a point about one thing: Pollsters often have a hard time identifying true likely voters in low turnout primary elections. That's one reason why primary polls tend to produce bigger errors compared to the actual results than general election polls.

Also, the most recent survey in this race, the Rasmussen result I cited above, is now five weeks old. No other public polls have been conducted since July.

So as tempting as those Hayworth-as-Iraqi-Information-Minister jokes may be, we should probably hold off snickering until all the votes are counted.


Senate in Play...Barely

Topics: 2010 , senate

Is the Democratic Senate majority in peril? A lot of political observers have been asking that question in recent weeks, and for good reason. As of today, polls show Republican candidates running clearly ahead in 4-5 Senate seats currently held by Democrats, with contests in another six Democratic Senate seats falling into our "toss-up" category -- relatively close races where the leader's margin is far from secure.

Control of the Senate will largely depend on the outcome of the toss-up races. The Republicans have a path to majority control, but it will require sweeping virtually all of the close races.

Today we begin what will soon be a regular daily feature on HuffPost's Pollster in which we review the day's polls and monitor their impact on our polling averages and trends. We will be watching races for Senate, U.S. House and Governor, but for today I want to begin with an overview of the races for the U.S. Senate.

Let's first take a step back and consider the classic "horse race" poll question that we plot on our charts and use to assess where each race stands. The question usually asks voters to make a choice as if "the election were held today" and prompts with both the names and party affiliations of each candidate.

When voters are familiar with the competing candidates -- as they usually are a few days before the election -- the standard horse race question has proven to be an accurate and reliable measure of their preferences. However, when voters know some candidates but not all, the predictive accuracy of the vote question starts to erode, and the horserace results can be misleading. That scenario still exists in many Senate races, so there is still potential for shifting between now and November.

Currently, the Democratic Senate Caucus has 59 members: 57 elected as Democrats plus two independents (Bernie Sanders and Joe Lieberman). In order to be assured a majority, the Republicans would need to gain ten seats, since Vice President Biden would vote with the Democrats to break a 50-50 tie.

So based on the available public polling, how many seats would the Republicans gain "if the election were held today?" Let's use the Pollster.com trend estimates (based on all available polling data) to consider voter preferences in the competitive races in Senate seats currently held by Democrats.

2010-08-24-Blumenthal-SenateDemSeats.png

Republican candidates currently hold huge leads in four states currently represented by the Democrats (North Dakota, Arkansas, Indiana and Delaware), and in a fifth (Pennsylvania) Pat Toomey leads by slightly more than five percentage points (45.7% to 40.6%). Using the categorization system we applied in 2008, we classify all five of these states as strongly or leaning Republican.

What should concern Democrats even more, however, is that polling yields closer "toss-up" margins in another six states currently represented by Democratic Senators -- Colorado, Wisconsin, Washington, Illinois, California and Nevada. To gain control of the Senate, Republicans would need to win five of those six and also prevail in two similarly close contests in states currently represented by Republicans (Florida and Kentucky).

2010-08-24-Blumenthal-SenateRepSeats.png

The "toss-up" label above may overstate the degree uncertainty "if the election were held today" in some of the contests. In Nevada, for example, four of the five polls conducted in the last month show Harry Reid with small, nominal leads of 1 to 4 percentage points. Recent polling in California shows a similar pattern favoring Democrat Barbara Boxer.

We will soon unveil a more granular system of classifying each race. But for now, our older classification scheme helps put these results into the context of recent history. The lesson is that a lead of two or three percentage points in late August can be fleeting.

In late August 2006, it certainly looked like Democrats faced an "uphill fight" to win control of the Senate. At the time, Republican candidates led by low single-digit margins on most polls conducted in Missouri and Virginia (the latter conducted just after George Allen's infamous "Macaca" moment), yet Democrats Claire McCaskill and Jim Webb went on to gain support during the campaign and win their respective races. Democratic Senate candidates also gained significantly on their Republican opponents during the fall campaign in New Jersey, Ohio, Rhode Island and Washington.

Four years ago, our similarly constructed assessments of the Senate polling as of early September added up to 50 seats held by Republicans or at least leaning that way, 46 seats held by or leaning to the Democrats, and four toss-ups. Yet the Democrats ultimately gained enough support over the course of the fall campaign to win the 51 seats necessary to gain control of the Senate.

Compare the 50-to-46 Republican advantage at this point in 2006 to the current standings: Right now, we show 48 seats currently held by Democrats or at least leaning that way in this year's elections (including Lieberman and Sanders), 45 seats held by or at least leaning Republican, and seven seats in the toss-up column.

None of this argues that Republicans will see gains over the next two months comparable to what Democrats experienced in the fall of 2006, only that the possibility exists. For now a Republican takeover does not look probable, but we certainly cannot rule it out.

[Cross-posted to the Huffington Post]


Kendrick Meek Hitting His Targets

Topics: 2010 , Florida , Ipsos , Jeff Greene , Kendrick Meek , Mason-Dixon

It is not unusual to see highly contradictory poll results in statewide primary elections, but it's rare when we can find easy explanations for those differences. In the case of next week's Democratic primary for Senate in Florida, however, those differences are becoming increasingly clear.

Last week, I shared my hunch that the handful of polls pointing to a close outcome in the race were likely understating the support that Rep. Kendrick Meek would eventually receive in his race against self-funded billionaire Jeff Greene. Three new surveys released last weekend -- by Mason-Dixon Polling and Research, Sunshine State News/VSS and the Meek campaign itself -- all show Meek now leading by margins of between 8 and 15 percentage points. The poll by Ipsos Public Affairs, on the other hand, shows Greene maintaining an 8-point lead (40% to 32%) among a small subsample of 237 Florida Democrats.

2010-08-17-Blumenthal-RecentFLpolls.png

The most likely explanation for the difference involves turnout or, more precisely, the challenge of sampling the likely electorate for the Democratic primary. The last two August Democratic primaries held in Florida attracted just over 800,000 of the state's 5.4 million registered Democrats. At that size, the Democratic primary electorate would represent less than 6 percent of Florida's more than 14 million adults. Measuring a target like that is tough for any survey.

To get close, the surveys conducted by Sunshine State News and the Meek campaign used official voter lists to select and dial voters with some prior history of casting ballots in Democratic primaries. The Mason-Dixon survey began with a random-digit sample of all adults in Florida, but then screened for registered voters who say they "vote regularly in state elections" and that they are likely to vote in next week's primary.

Ipsos did something very different. Like Mason-Dixon, they began with a random-digit-dial sample of adults and screened for a total sample of 602 registered voters. But to get their sample of Democrats, they screened only on self-reported party identification, selecting the 43% of their full sample of registered voters that identifies or leans Democratic. As such, their sample represents a population of nearly 5 million of Florida's 11 million registered voters. Again, based on past history, actual turnout is likely to be less than 1 million.

In fairness, Ipsos was doing what a lot of media pollsters do. Their survey was focused mostly on the general election and they appear to have included the primary voter question almost as an afterthought. Nevertheless, the looser likely voter screen they used helps explain why their Democratic primary subgroup is so much friendlier to Greene than the samples drawn by the other pollsters. It probably includes many voters who rarely vote in Democratic primaries and have less knowledge of or affinity for Meek, whose campaign has been touting endorsements from mainline Democrats like Bill Clinton.

On a related issue, Mason-Dixon's president, Brad Coker, kindly shared a cross-tabulation of the results of their poll by the race, education and ideology subgroups I wrote about last week. It turns out that the Mason-Dixon sample includes far more white, college educated liberals (32%) than an earlier survey by Quinnipiac University (14%), though the gap in those numbers may be due to differences in the questions about education or ideology. Nevertheless, the tabulation of results shows a pattern closer to my hunch about where the race seemed headed last week: Meek runs ahead among African-Americans by a margin of 59% to 9% and wins college-educated white liberals by almost two to one (54% to 23%).

2010-08-17-Blumenthal-MasonDixonGroups.png

Again, as I wrote last week, I'd expect Meek's share of the African American vote to exceed 80%, so given the margins among other voters, Meek appears headed for a comfortable win next Tuesday.

[Cross-posted to the Huffington Post].


Palin Sinking? Or Over Reading the Horse Race?

Topics: Clarus , Sampling Error , Sarah Palin

I was on vacation last week, but nearly interrupted it when I saw the press release from D.C. public relations firm Clarus, touting the results of its new survey. "PALIN SUPPORT FOR GOP NOMINATION SINKS," the headline blared, followed by this lead paragraph:

A new nationwide survey of Republican voters finds that support for former Alaska Gov. Sarah Palin to win the GOP's 2012 presidential nomination has fallen by one-third since March, sliding from 18 points to 12 points. Palin is now running in fourth place for the nomination behind former Massachusetts Gov. Mitt Romney, former Arkansas Gov. Mike Huckabee, and former House Speaker Newt Gingrich.

The release has several lessons to teach us about how to best interpret horse race polling. First, the headline struck me as overly dramatic, especially when I checked the methodology. The survey, conducted from July 26-27, interviewed just 374 "registered voters nationwide who self-identified as Republicans or as Independents who lean Republican," yielding a reported margin of sampling error of +/- 5%. The March survey interviewed 415 Republicans or Republican leaners, so the margin of error would have been roughly the same.

It's not hard to do the math on that. Eighteen percentage points minus five (or 13%) is less than 12 percentage points plus five (17%). So I assumed, at first glance, that the much heralded drop in Palin's support was not statistically significant.

Problem is, the margin of error is a little more complicated than my quick arithmetic. While the references at the bottom of news articles and press releases rarely explain it, the margin of error gets smaller as a given result gets closer to zero or one hundred percent (explained in more detail here). In this case the sampling error probably shrinks just enough to make 18% and 12% "significantly" different had the two questions asked in March and July been identical (and I say "probably" because without knowing how severely Clarus weighted their samples, I can't calculate the precise margins of error).
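For the curious, here is the arithmetic, assuming simple random sampling and ignoring any design effect from weighting (which, as noted, I cannot calculate):

import math

# 95% margin of error for a proportion under simple random sampling.
def moe(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"March (18% of 415): +/-{moe(0.18, 415):.1%}")   # about +/- 3.7 points
print(f"July (12% of 374):  +/-{moe(0.12, 374):.1%}")   # about +/- 3.3 points

# Margin of error on the 6-point difference between two independent samples.
se_diff = math.sqrt(0.18 * 0.82 / 415 + 0.12 * 0.88 / 374)
print(f"Difference: +/-{1.96 * se_diff:.1%}")            # about +/- 5 points
# So the 18%-to-12% drop sits right at the edge of statistical significance --
# and that is before considering the question changes discussed below.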

And that brings me to the second lesson: The margin of error tells us nothing about what happens when the pollster changes the question, which Clarus did here in two important ways. First, in March Clarus asked Republicans to choose among seven potential candidates: Sarah Palin, Mitt Romney, Mike Huckabee, Newt Gingrich, Jeb Bush, John Thune, and Mitch Daniels. Last month, they presented nine choices. They dropped Bush (who received 8%) and added Lamar Alexander, Haley Barbour and Tim Pawlenty (who received a combined 8%). So was the apparent change a real decline in Palin's support, or were Barbour, Pawlenty and Alexander collectively more attractive to some potential Palin supporters than Jeb Bush?

Equally important, Clarus changed the root question. In March, they asked Republicans which candidate they "would now most likely favor." On the more recent survey, they asked which they would favor "if you had to vote today." Is it a coincidence that the undecided percentage grew by five points (from 10% to 15%) when respondents were pressed on how they would vote "today"? I think not.

We might also consider the results of other polls. CNN, which has asked an identical Republican preference question three times this year, shows Palin with exactly the same support a week ago (18%) as in March (18%). A new PPP survey released just this afternoon finds essentially the same result.

Also, a dozen or so national pollsters have been asking national samples of adults or registered voters to rate Palin (favorably or unfavorably), and our chart shows no consistent pattern in their measurements over the course of 2010 (click on the individual black and red dots on the interactive chart below to see the trends of individual pollsters).

And finally, there is a lesson about the value of this sort of horse race question, especially when asked at this stage of the contest. What they tell us about the Republican nomination race shaping up for 2012 is that none of the potential candidates -- not Palin, Romney, Gingrich nor Huckabee -- is the sort of dominant front-runner likely to begin with a huge advantage based on early name recognition or support. The same was true at this point four years ago, when polls showed Rudy Giuliani as the "front runner" in early trial heat questions. Those early "leads" turned out to be meaningless as the real races in the early primary states got underway.

Over the weekend, Kevin Madden, the former press secretary to Mitt Romney during the 2008 campaign, tweeted that "2012 horserace polls are like pre-season football: Fun to watch for a few minutes until you realize they don't matter." That's about right.

[Cross-posted at the Huffington Post].


Understating Meek's Vote?

Topics: 2010 , Catalist , Florida , Jeff Greene , Kendrick Meek , Quinnipiac University Poll

Back in April, I asked rhetorically why Florida Senate candidate Kendrick Meek had earned so little respect from analysts and pundits. I was thinking mostly of the one-on-one general election contest then looming between Democrat Meek and likely Republican nominee Marco Rubio.

That was then. Within a few weeks, Republican Governor Charlie Crist launched his independent Senate candidacy and shortly after that, billionaire real estate investor Jeff Greene jumped into the Democratic primary. Since then, Greene has pumped at least $8 million of his own funds into television advertising and has rapidly gained support in public polls. By late June, the Cook Political Report (gated) was reporting that unnamed "Democratic strategists are beginning to come to terms with the idea that Greene may well win the primary."

2010-08-11-Blumenthal-FLDemPrimary.png

The three most recent public polls now show everything from a ten-point Greene lead (Quinnipiac University) to a narrow but not quite statistically significant four-point Meek advantage (Mason-Dixon). A third poll, conducted last week for the Meek campaign, shows Meek (at 36%) in a "statistical dead heat" against Greene (35%). "That Meek's campaign is releasing a poll showing him essentially running even with Greene," the Washington Post's Chris Cillizza concluded last week, "is a testament to the dire straits that the Democratic congressman has found himself in."

While a general election contest is another story -- both Meek and Greene currently run far behind both Crist and Rubio -- I can't help feeling that those unnamed "strategists" are still underestimating Meek's base and potential in the August 24 primary.

To get a better sense of Meek's true base of support, I start by asking two questions: (1) What percentage of the primary electorate will be African-American and (2) what percentage will be white college-educated and liberal?

The African American percentage is the easier to estimate since the Voting Rights Act requires that Florida and other southern states maintain and report registration and turnout statistics by race. I obtained the following estimates from Catalist, the Democratic Party-affiliated voter database vendor. Based on the voters currently registered to vote (i.e., excluding any that have been purged over the last ten years because they have died or moved), they tell me that African Americans are:

  • 27% of the 5.4 million registered Democrats in Florida
  • 25% of the 828,401 who voted in the Democratic primary in August 2008
  • 21% of the 837,192 who voted in the Democratic primary in August 2006
  • 24% of the 1.98 million who voted in any Florida Democratic primary since 2000 held in August or September
  • 20% of the 1.7 million who voted in the Democratic presidential preference primary in January 2008

And in case you are wondering, yes there was a significant boost in African-American registration just prior to the 2008 general election. Catalist also reports that African-Americans were 41% of the 329,121 voters that registered after the 2008 presidential primary and cast a ballot in November 2008.

With those numbers in mind, let's consider the African American composition of the recent public polls.

2010-08-10-FLracialcomposition.png

Mason-Dixon (21%) matches the black turnout from 2006, while PPP (19%) is slightly lower and Quinnipiac (16%) much lower. All three fall short of the African American percentage that Catalist reports for the August 2008 primary (25% -- and remember, Florida's controversial, non-binding presidential preference primary was held earlier, in January 2008). Of course, the composition in this year's primary is unknown, and what sort of turnout Kendrick Meek's candidacy will help produce among Florida's African American Democrats remains an open question.

But even more important than composition is Meek's vote share. All three polls show Meek under 50% among black voters, which probably speaks to his still less-than-universal name recognition even as recently as two to three weeks ago.

Here's a wager: On August 24, Meek will get at least 80% and probably closer to 90% of Florida's African-American vote. If I'm right, it means that all three polls are likely understating Meek's overall vote percentage by at least 8 to 10 percentage points.

The same thing happened over and over during the 2008 primaries, as pre-election polls significantly underestimated Barack Obama's eventual margins in states with large African-American populations like South Carolina, North Carolina and Georgia.

Now let's consider my second question: What percentage of the likely electorate is white, liberal and college educated? I ask because college-educated liberals were a bedrock of Barack Obama's support in the 2008 primaries and typically the most supportive subgroup of African-American candidates in Democratic primaries. Unfortunately, only polls can tell us the size of this subgroup and of the independent surveys, only the Quinnipiac University poll asked respondents to report both their years of education and ideology.

The pollsters at Quinnipiac University tell me that white, college-educated liberals were just 14% of their late July sample, but as the table below shows, their support for Kendrick Meek (42%) was comparable to African-Americans (39%) and more than twice that of all other voters (16%).

2010-08-10-Blumenthal-RaceIdeologyEduc1.png

Here's my hunch: Meek will do slightly better among both groups of non-black voters than indicated in the Quinnipiac poll. That's not exactly a bold prediction given that Quinnipiac found significantly less support for Meek overall than all of the other public polls fielded since June, including the two more recent polls by Mason-Dixon and the Meek campaign.

If Meek wins 85% of the African-American vote, and if that vote is 21% of the turnout, then Meek needs roughly 55% of white, college-educated liberals and 35% of everyone else to get to 48% of the vote overall (which should be enough to win, if Maurice Ferre and other minor candidates take 5% of the vote). If you assume that undecided voters in the non-black subgroups will either not vote or "break" along the same lines as those in the Quinnipiac poll who have already decided, Meek is pretty close to those targets already.
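Here is that arithmetic worked out, assuming (as in the Quinnipiac sample) that white, college-educated liberals are about 14% of the turnout:

# Meek's overall share under the targets above. African Americans at 21% of
# turnout (the 2006 figure); white, college-educated liberals assumed to be
# 14% of turnout, as in the Quinnipiac sample; everyone else is the remainder.
black_share, wcl_share = 0.21, 0.14
other_share = 1 - black_share - wcl_share              # 65%

meek_black, meek_wcl, meek_other = 0.85, 0.55, 0.35    # the targets discussed above

meek_total = (black_share * meek_black
              + wcl_share * meek_wcl
              + other_share * meek_other)
print(f"Meek's overall share: {meek_total:.1%}")        # roughly 48%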

Yes, that is a hunch contingent on a number of "ifs." But if Meek wins the primary handily, remember where you heard it first.

[Crossposted to the Huffington Post] - The original version of this post incorrectly referenced the primary as occurring next week.


Daily Kos Hires PPP and a Pollster-To-Be-Named-Later

Topics: Daily Kos , PPP , Research2000

Markos Moulitsas, who first fired his former polling partner Research 2000 in June and subsequently filed suit alleging that polling conducted by that firm was fraudulent, announced this afternoon that his DailyKos website will soon resume polling with two new partners: Public Policy Polling (PPP) for "horserace" polling in statewide contests and another pollster to be named later for national surveys. The first survey, to be fielded in Delaware, will be released next week.

Moulitsas was also quick to tweet what will amount to a new standard in polling disclosure:

And while we won't be able to do it next week, both pollsters have agreed to RELEASE ALL RAW DATA. We just have to figure out the logistics.

Access to raw data will mean that anyone with basic statistical software will be able to use the data to run their own tabulations or analysis. While many national media pollsters provide such raw data to academic archives like the Roper Center for Public Opinion Research, and the Pew Research Center provides it on its website, those resources are usually available many months after the results are released.

Tom Jensen, PPP's polling director, provided this reaction for The Huffington Post:

We're very excited for the opportunity, especially because Daily Kos has shown such a strong interest in surveying 'under polled' races over the years. We're looking forward to getting data out there in states like Delaware, where we're kicking off, that don't usually see a lot of public polling. We're also glad in a time when a few bad apples have cast a shade over the polling industry to let people see that we're really doing our work. We really appreciate Markos' commitment to transparency and are happy to partner with him on that

UPDATE: Markos posts more details to Daily Kos, including news that they "hope to be back up to pre-scandal polling frequencies by September" as well as this comment:

I'm so excited about all of this I can barely contain myself. While the R2K mess has been a nightmare, it has opened up new possibilities -- the ability to work with some of the most accomplished pollsters in the biz, and break new ground by providing unparalleled transparency.

His post also notes that PPP was one of only two pollsters from a short list of those he considers most accurate that was available to do "horserace" polling. He explains that automated pollster SurveyUSA "couldn't do horserace polling because of exclusivity contracts with other media organizations." Could that be a clue to the identity of the yet-to-be-announced pollster that will conduct non-horserace "weekly State of the Nation national polling" for DailyKos?

[Cross posted at the Huffington Post].


Polling Repeal of the Bush Tax Cuts

Topics: Bush tax cuts , Fox News , Pew Research Center , Rasmussen , Taxes

"The people speak: Keep the Bush Tax Cuts." So reads the headline in today's New York Post.

Really?

I tend to flinch at declarations about the feelings of "the people," especially when based on survey questions that assume most Americans have pre-existing opinions on the subjects being probed. Just last week the Pew Research Center provided another reminder that many Americans struggle to identify political figures, foreign leaders or facts central to public policy debates. They find, for example, that only 28% can identify John Roberts as the Chief Justice of the U.S. Supreme Court and only 34% know that the TARP bailout of banks and financial institutions was enacted when George W. Bush was president.

As such, one of my first rules of poll interpretation is to remember that many Americans pay scant attention to policy debates, so their answers typically amount to reactions to the ideas and language presented rather than expressions of pre-existing opinion.

With that in mind, let's go back to the lead sentence of the New York Post article that inspired the headline:

Americans by a wide majority want to extend former President George W. Bush's tax cuts, and more than half believe that letting them expire will further hurt the country's shaky economy, according to a survey released yesterday.

The survey is an automated, recorded-voice telephone poll of 1,000 "likely voters" conducted August 1-2, 2010 by Rasmussen Reports. Here are the results of the two questions highlighted by the Post article:

Should the Bush Administration tax cuts be extended or should the tax cuts end this year?

54% Tax cuts should be extended
30% Tax cuts should end this year
16% Not sure

Suppose you had a choice between extending the Bush Administration tax cuts for all Americans or extending the Bush Administration tax cuts for everyone except the wealthy. Which would you prefer?

48% Extending the Bush Administration tax cuts for all Americans
40% Extending the Bush tax cuts for everyone except the wealthy
12% Not sure

These questions follow two others that ask respondents how closely they have been following news reports about "the tax cuts implemented during the Bush years" and whether an expiration of the "Bush Administration tax cuts" will help or hurt the economy. Not until the fourth question do respondents hear any mention that those tax cuts might benefit "the wealthy," and even then the question does not define what "wealthy" means. You might wonder how many respondents even heard the words "except the wealthy" in that last question before pressing "1" or "2" on their touch-tone phones.

Now compare the Rasmussen results to two similar questions asked in recent weeks (via the Polling Report). First, a survey of 900 registered voters conducted July 27-28, 2010 by Fox News and Opinion Dynamics:

As you may know, a series of tax cuts that were passed at the beginning of former President George W. Bush's term are set to expire this year. If you were president, would you continue the tax cuts for everyone, continue the tax cuts for everyone except families earning more than $250,000 dollars a year, or allow the tax cuts to expire and let taxes go back up to their previous level?

44% Continue for everyone
36% Continue for those under $250,000
14% Allow to expire
6% Unsure

Next, consider a similar question posed by a Pew Research/National Journal Congressional Connection poll of 1,004 adults conducted July 22-25, 2010:

Which comes closer to your view about the tax cuts passed when George W. Bush was president? All of the tax cuts should remain in place. Tax cuts for the wealthy should be repealed, while others stay in place. All of the tax cuts should be repealed.

30% Keep all the tax cuts
27% Repeal the tax cuts for the wealthy
31% Repeal all the tax cuts
12% Unsure

So the Rasmussen survey shows that more likely voters prefer extending the Bush tax cuts for everyone rather than letting them expire for the wealthy, while the Fox and Pew Research surveys show plurality or majority support for letting the Bush tax cuts expire either entirely or for the wealthy only.

What's going on here? First, Americans like having their taxes cut. The more general the question, and the more it implies that everyone gets a tax cut, the more positive the response. Second, all three polls show that Republicans are more enthusiastic than Democrats about keeping the Bush tax cuts in place, and in all three roughly a third of Americans react favorably to the notion of leaving all of the cuts in place, regardless of question wording and format. The partisan skew in the results also tells us that the Rasmussen survey, which samples only "likely voters" (using an undisclosed definition), likely produces a more Republican-leaning sample, especially in the current environment in which Republicans are far more enthusiastic about voting than Democrats.

Finally, we ought to be especially cautious about that final Rasmussen question, as it provides no clear answer category for those who want to say "neither" (i.e. those who want all of the Bush tax cuts to expire).

What these seemingly contradictory results imply, however, is that a large number of Americans are hazy on the details of whose taxes were cut when George W. Bush was president, whether those cuts were intended to be temporary or permanent, what impact they have had on the deficit, and the terms of the current debate. As such, their reactions to poll questions on the subject may vary widely depending on the language used and the options offered.

If you want to produce a poll finding that supports your side of the tax-cut debate, you probably can, but the voice of "the people" may not be as clear as some headlines make it out to be.

[Cross posted at the Huffington Post].


Gallup 'Surge' Epilogue

Topics: 2010 , Gallup , Generic House Vote

For the last 10 days, we've been watching the bouncing ball that is the Gallup weekly tracking of the generic House ballot -- the question that asks voters if they are supporting "the Democratic Party's candidate or the Republican Party's candidate" in their district. It bounced up for the Democrats in Gallup's tracking two weeks ago, and appeared to remain up last week, but I wrote two posts arguing that the apparent "jump" was most likely random noise, especially since other tracking polls did not show a similar pattern.

Well, sure enough, the latest weekly update from Gallup out yesterday shows the numbers bouncing back in the Republican direction. Republicans now have a five-point advantage (48% to 43%), roughly the opposite of the lead indicated for Democrats for the last two weeks.

Having devoted nearly 1,400 words to this subject already, I'll keep this short: The week-to-week variation in the chart above is mostly random noise. In fact, if any real changes in vote preferences are afoot, we can't distinguish them from the random variation built into each poll. That variation, by the way, is what the "margin of error" is all about. The results above are basically a picture of 46%, plus or minus 3%.

I write this not to criticize Gallup: Their results are bouncy in comparison to some other polls because they do not weight their results by party identification, so random variation within the predictable range is inevitable.

That said, the reason we plot results from many different pollsters on one chart, as we have done at Pollster.com for the last four years, is to try to put new poll results into the larger context of all other public polls. Our national generic House ballot chart can be tricky, because some polls that report frequently -- especially the Rasmussen Reports automated survey -- have large "house effects" that make their results consistently different from other surveys. Sometimes results from one pollster can "fool" the chart.

However, what our chart distills from all of the available public data on the generic ballot is a slight trend in the Republican direction over the last month or so. You can see that trend even if I set our "smoothing tool" to its least sensitive setting (to minimize the impact of individual polls or pollsters):

You see the same trend even if you drop both the Gallup and Rasmussen tracking:

These are relatively small changes -- just a few percentage points movement at most -- but the changes are mostly consistent across polling organizations, which gives me more confidence that they are real than any brief "jump" in an individual pollster's results.

[Cross posted at the Huffington Post.]


Democratic Surge? Part II

Topics: 2010 , Gallup , Generic House Vote

Last week, I argued that a reported “jump” for Democrats in Gallup’s weekly tracking of the national generic U.S. House ballot was most likely a statistical blip. I boldly predicted that “more data” this week would “likely settle the issue.” That latter assertion turned out to be wrong, as the issue isn’t settled, but I’m still not convinced that we’re seeing a real shift in voter preferences nationally.

Let’s review: Generally speaking, the generic House ballot is a poll question that asks registered or likely voters whether they would support "the Democratic Party's candidate or the Republican Party's candidate" in their congressional district if the election were held today. Since March, Gallup has released a weekly result based on roughly 1,600 interviews of registered voters that has averaged a 46% to 46% dead heat, but mostly varied within the expected margin-of-error range of plus or minus 3%.
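For readers curious about where that plus-or-minus 3% comes from, here is a minimal sketch of the textbook margin-of-error calculation for a simple random sample of about 1,600 interviews. (Gallup's published figure is a bit larger than the textbook value because it allows for the design effects of weighting; the code below is illustrative, not Gallup's actual computation.)

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a weekly sample of roughly 1,600 registered voters
print(round(100 * margin_of_error(0.5, 1600), 1))  # ~2.4 points; Gallup reports +/- 3 after weighting
```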

Last week’s Gallup result showed Democrats with a six-point lead (49% to 43%), a result that I argued was likely the sort of random statistical “blip” we should expect from time to time with this sort of tracking survey. This week, Gallup reported that the Democratic margin is a slightly narrower four points (48% to 44%), but Gallup’s analysis noted that it “marks the second straight week in which Democrats have held an edge of at least four percentage points” and “the first time either party has held an advantage of that size for two consecutive weeks” in Gallup’s tracking.

So this week’s data doesn’t resolve things. As Charlie Cook writes today, these results mark “one of those periods of uncertainty” where those of us who watch polls closely are unsure whether the results “signal a key turning point in public opinion…just a hiccup, a passing blip…[or] an outlier poll, a statistical anomaly that is the political equivalent of a false positive medical test.”

I’m still dubious that we are seeing a real change in voter preferences. First, while this week’s Gallup numbers do tend to confirm last week’s upward turn, they are also statistically consistent with the 46%-to-46% result that Gallup has shown on average since March. Combining the samples for the last two weeks might yield a statistically significant difference from the average, although it would be close.
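Here is a back-of-the-envelope version of that calculation: a sketch that treats the long-run 46% average as a fixed benchmark and uses the effective sample size implied by Gallup's reported plus-or-minus 3 margin (both simplifying assumptions, not Gallup's method).

```python
import math

Z95 = 1.96

def effective_n(reported_moe, p=0.5):
    """Sample size implied by a reported margin of error (this folds in design effects from weighting)."""
    return (Z95 ** 2) * p * (1 - p) / reported_moe ** 2

def z_vs_benchmark(p_hat, n, p0):
    """z-statistic for an observed proportion p_hat (effective sample size n) against a fixed benchmark p0."""
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

n_week = effective_n(0.03)  # roughly 1,070 effective interviews per week
print(round(z_vs_benchmark(0.48, n_week, 0.46), 2))        # ~1.3: this week alone is within the noise
pooled = (0.49 + 0.48) / 2  # two weeks at 49% and 48% Democratic, pooled
print(round(z_vs_benchmark(pooled, 2 * n_week, 0.46), 2))  # ~2.3: only modestly past the 1.96 cutoff
```

The pooled figure edges past the conventional 1.96 threshold, but rounding in the published numbers and the fact that we are checking many weekly readings make it a closer call than the raw z-score suggests.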

Second, none of the other pollsters that fielded national media surveys in recent weeks confirms a “jump” in the Democratic direction since the passage of financial reform legislation on July 16. While these surveys have “house effects” that produce different results over time, we can look at trends for individual pollsters. Several (including Ipsos, Rasmussen, CNN/ORC and Zogby) show nominal shifts in the Republican direction as compared to their average results earlier this year, and one (YouGov/Polimetrix) shows no change. Only Gallup shows a shift to the Democrats.

Third, Charlie Cook took the next logical step and informally “canvassed several pollsters who see large quantities of data from around the country.” These are the campaign consultants that have been doing benchmark and tracking surveys for their clients over the summer. “None,” he writes, “seems to have detected any shifts in the past two weeks.”

Charlie also makes the point that “no defining event has taken place,” including passage of the banking bill, “that would have triggered a significant shift in this year’s race,” and says the pollsters he talked to are also “at a loss in figuring out what would have triggered a change.” Count me as similarly puzzled.

That said, I am not arguing that we ignore the Gallup data. Aside from its well-deserved reputation, Gallup is the only organization polling on the generic ballot in recent weeks that interviews Americans on both landline and cell phones. It would be surprising for the cellphone interviewing to make that big a difference, but we can’t rule that possibility out.

So, as my friend Charlie Cook counsels, we need to “sit tight” and wait for more data.

[Cross-posted at Huffington Post]


Gallup Generic: Dem Surge or 46 +/- 3?

Topics: Gallup , Generic House Vote , Sampling Error

On Monday, Gallup released the latest update in its weekly tracking of “generic” ballot preferences for the 2010 Congressional elections. The generic ballot asks voters if they would vote for “the Democratic Party’s candidate or the Republican Party’s candidate” in their congressional district, if the election were held today. According to the analysis by Gallup’s Lydia Saad, this week’s update showed “the first statistically significant lead” for the Democrats (49% to 43%) since Gallup began weekly tracking in March, so naturally the headline read: “Democrats Jump Into Six-Point Lead.”

Analysts and pundits wasted no time offering possible explanations for the “jump.” Saad’s lead sentence juxtaposed the news of the Democrats pulling ahead with passage of “a major financial reform bill touted as reining in Wall Street.” Elsewhere, Kevin Drum, Tom Schaller and Andrew Sullivan offered alternative theories, although all also cautioned that the pattern could be an “outlier” or “blip.”

Let me play the cautious pollster for a moment and make the case for “blip.”

Yes, this week’s reported six-point lead for the Democrats is statistically significant, but the bigger issue is whether it is significantly different from Gallup’s reading in previous weeks. Remember, all polls have random variation built in that we usually think about as the “margin of error.” Random up-and-down variation within that range is to be expected.

The chart below plots the percentage of respondents each week who tell Gallup they are voting for a Democrat (the blue dots) plus a vertical (blue) line for each poll that indicates the range associated with the reported +/- 3 margin of sampling error.

I have also added a black line showing the average Democratic vote (45.6%) over the full 20 weeks of Gallup tracking. The total lack of a trend is hypothetical, since we do not know for certain that the “true” support for Democrats has been an absolutely flat line since March. I’m plotting that line, however, in order to ask a question: Has this week’s poll, or any poll in the series for that matter, produced a result inconsistent with the average? In other words, does any result fall outside the range of 45.6%, plus or minus 3%? The answer is just one — this week’s — and just barely. This week’s result (49% Democrat) minus three (46%) is just four tenths of a percentage point greater than the average (45.6%). Keep in mind that each week’s result, and the reported margin of error, have both been rounded to the nearest whole number, so it’s possible that if we had all data calculated to one decimal, we might reach a different conclusion.
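If you want to run the same check yourself, a minimal sketch looks like this (the 20 weekly percentages are not reproduced here, so the function is shown without Gallup's actual data):

```python
def flag_outliers(weekly_results, moe=3.0):
    """Return the weekly readings that sit more than the margin of error away from the series average."""
    avg = sum(weekly_results) / len(weekly_results)
    return [(week, pct) for week, pct in enumerate(weekly_results, start=1)
            if abs(pct - avg) > moe]

# With an average of 45.6 and a +/-3 margin, only a reading above 48.6 or below 42.6 gets flagged,
# which is why this week's 49% is the lone -- and marginal -- exception.
```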

And keep something else in mind about the margin of error: It represents a probability. We can expect results to fall beyond the margin of error 5% of the time, or for one measurement out of twenty (that’s the idea behind the line in Gallup’s methodological blurb: “one can say with 95% confidence that the maximum margin of sampling error is ±3 percentage points”).

Guess what? Gallup has released exactly 20 results so far in its weekly tracking series, and exactly one — this week’s — has fallen outside the margin of error around the average of all the polls combined (and then by just 0.4 percentage points).
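In fact, if each of 20 readings has a 5% chance of landing outside the margin of error purely by chance (treating the weekly samples as independent, which is a simplification), exactly one outlier is the single most likely outcome:

```python
from math import comb

def prob_k_outliers(k, n=20, p=0.05):
    """Binomial probability that exactly k of n readings land outside the 95% margin of error by chance."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(round(prob_k_outliers(0), 2))  # ~0.36: no outliers at all
print(round(prob_k_outliers(1), 2))  # ~0.38: exactly one outlier, just as Gallup's series shows
```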

Now let’s look at the same chart for the percentage voting Republican, as compared to a flat-line average of 45.9% Republican across all twenty weeks of Gallup’s tracking. In this case, this week’s result (43%) plus three (46%) captures the average for all 20 weeks (45.9%) by just one tenth of a percentage point. However, two polls conducted six and eight weeks ago (each showing Republican preference at 49%) fall just outside the range.

Thus, the case for a true “jump” in Democratic performance on the generic House ballot is weak. If we add the context of other recent polls, it gets weaker still. Their results scatter around a dead-heat margin in ways that are more or less consistent with their typical house effects on the generic ballot.

As always, more data next week will likely settle the issue, but I wouldn’t be surprised to see the next move in Gallup’s weekly tracking go in the Republican direction, not because of real-world events but rather due to what statisticians call reversion to the mean.

P.S.: Jay Cost has similar thoughts on Gallup’s “Bouncing Ball.”


Divergent Polls: Deficits vs. Spend for Jobs

Topics: Budget , Deficits , Divergent Polls , Measurement , Spending

If you have been following coverage of polling on jobs and the deficit this week, you may be a little confused. “The public now sees reducing the budget deficit as a higher priority than increasing government spending to help the economy recover,” the Pew Research Center told us on Monday. But just today, the headline of the Quinnipiac University poll announces that “American Voters Want Jobs Over Deficit Reduction 2-1.” What gives?

I gathered results from media polls conducted in July that asked respondents to choose in some way between creating jobs and cutting the deficit and created the following table:

Pew Research/National Journal (7/15-18, n=1,003 adults): If you were setting priorities for the government these days, would you place a higher priority on [rotate] reducing the budget deficit OR spending more to help the economy recover?

40% spend to help economy
51% reducing deficit
9% don't know

CBS News (7/9-12, n=966 adults): Which comes closer to your own view? The federal government should spend money to create jobs, even if it means increasing the budget deficit, OR the federal government should NOT spend money to create jobs and should instead focus on reducing the budget deficit.

46% spend to create jobs
47% reducing deficit
7% unsure

Zogby Interactive (7/9-11, n=2,055 likely voters, online opt-in panel): Do you agree or disagree with the following statement: Right now, federal spending targeted to create and maintain employment is a more important concern than the federal deficit.

53% agree (spend on jobs)
42% disagree
5% don't know

Quinnipiac University (7/13-19, n=2,181 registered voters): What do you think is more important - reducing the federal budget deficit or reducing unemployment?

64% unemployment
30% deficit
6% don't know

Bloomberg/Selzer & Co. (7/9-12, n=1,004 adults): The U.S. currently has a huge budget deficit and a high unemployment rate. Which should take priority: reducing the budget deficit or reducing the unemployment rate?

70% unemployment
28% deficit
2% unsure


Let’s start with the last two questions in the table, asked by the Quinnipiac University and Bloomberg/Selzer polls, which show the most lopsided support for a focus on jobs and unemployment. If you look closely, these questions are very different from the others in the table: They ask respondents to prioritize between unemployment and the deficit as issues, but they do not introduce the idea of increasing government spending in order to reduce unemployment. The latter choice is closer to the policy argument playing out in Washington, but the Quinnipiac and Bloomberg results are still valuable. While Americans are concerned about the growing deficit, they worry about unemployment much more.

Note that while these numbers differ by party, even Republicans are overwhelmingly convinced that unemployment is a bigger issue than the deficit.

When pollsters introduce the idea of spending government money in order to create jobs or benefit the economy, the results narrow considerably, although not consistently. But notice the difference: The CBS News question produces a nearly even split (46% to 47% on jobs vs. the deficit) when it asks about the government spending money “to create jobs,” while the Pew Research/National Journal polls shows a greater preference for reducing the deficit (40% to 51%) versus spending “to help the economy recover.”

Not surprisingly, these results show the usual partisan polarization. Republicans overwhelmingly prefer deficit reduction, while almost as many Democrats prefer an emphasis on jobs and the economy. Independents, as they often do, divide by roughly the same margins as all adults.

My guess is that the more specific emphasis on “jobs” explains the modest difference between the CBS News and Pew/National Journal results. Other results from the Pew Research poll offer a possible explanation: Large majorities believe that the “federal government’s economic policies” since 2008 have done a “great deal” or a “fair amount” to help “large banks and financial institutions” (74%) and “large corporations” (80%), but only 27% say those policies have done a great deal or fair amount for “middle class people.”

These results tell us that most Americans believe the economic policies their government has pursued have helped some sectors of the economy recover while leaving the middle class and unemployed behind. As such, we shouldn’t be surprised that slightly more react favorably to the notion of spending “to create jobs” than to spending “to help the economy recover.”

Still, it is not surprising that these sorts of forced-choice questions produce inconsistent results based on minor wording differences. My guess is that most Americans see unemployment and the deficit as complementary problems, and that only a few see the conflict that many economists do between cutting deficits and stimulating job growth. So it can be confusing for many Americans when pollsters ask them to choose. I’d wager that many want to ask, “why can’t we do both?”

PS: I have focused less on the Zogby Interactive question here for two reasons. First, their online sampling methodology has produced consistently less accurate results in pre-election horse race polling since 2004, even when compared to other opt-in, online panels. Second, this particular question is presented in an agree-disagree format, which probably creates what pollsters call “acquiescence bias” favoring the agree (spend-to-create-jobs) response.

Thanks to Ann Selzer for providing results by party for the Bloomberg survey, and a hat tip to the Polling Report for its compilation of questions on budget deficits and the economy.

[This entry is crossposted at the Huffington Post].


On the Politico/Penn DC Elites Poll

Topics: DC Elites , Internet Polls , Mark Penn , Politico

On Monday, Politico published two new surveys, conducted by pollster Mark Penn, that compare the views of ordinary Americans to “elites in Washington.” The story concluded that D.C.’s elites “have a strikingly divergent outlook from the rest of the nation”:

Obama is far more popular while Palin, the former Alaska governor, is considerably less so. To the vast majority of D.C. elites, the tea party movement is a fad. The rest of the nation is less certain, however, with many viewing it as a potentially viable third party in the future.

The survey also reveals to a surprising degree how those involved in the policymaking and the political process tend to have a much rosier view of the economy than does the rest of the nation -- and, in some cases, dramatically different impressions of leading officeholders, political forces and priorities for governing.

But do D.C.’s elites have different views because of their proximity to power? Or are those differences inherent in the demographics used by this survey to define the “D.C. elite”? Let’s take a closer look.

That label “elite” can mean a lot of things. In this case, it means more than just members of Congress, their staffs and senior political appointees in the executive branch. Rather, this poll intended to measure the larger D.C. political milieu, the upper income portion of D.C.’s “governing class.” Here is the description from Monday’s story:

To qualify as a Washington elite for the poll, respondents must live within the D.C. metro area, earn more than $75,000 per year, have at least a college degree and be involved in the political process or work on key political issues or policy decisions.

Now this point may seem like nit-picking, but there is a difference between the thousand or so individuals who wield real power and influence in D.C. and the much larger group — probably numbering in the hundreds of thousands — who live in the region, have a college degree, earn more than $75,000 a year and describe themselves as somehow “involved in” politics or policy. That larger group is no doubt far easier to survey, and it may well provide a decent surrogate for the attitudes and worldview of the smaller and more powerful few, but it is different.

Next, consider that the “strikingly divergent outlook” of D.C.’s political elites as measured in this survey may owe as much to their socioeconomic status and partisanship (as defined in this survey) as to their proximity to Washington policymaking. A quick check of the cross-tabs for the Penn/Politico general population sample, for example, shows that better-educated and higher-income adults nationwide tend to be more optimistic about the economy, feel more insulated from the effects of the economic downturn and are more convinced that the Tea Party “is a fad” (to name three).

Also, the 227 respondents identified as D.C. elites give Democrats a two-to-one advantage (51% to 26%) on party identification. That is probably an accurate reflection of D.C.’s upper middle class political milieu — which is certainly different from the nation as a whole — but it also helps explain some of the observed differences in attitudes toward the Tea Party, political leaders and issue priorities. Again, the cross-tabs show that among all adults sampled nationwide in the Penn/Politico survey, Democrats were more likely than Republicans to say the nation is headed in the right direction (51% vs 7%), to consider the Tea Party a “fad” (39% vs. 17%) or to rate President Obama favorably (84% vs 16%).

I wonder how different the “D.C. Elites” would look compared to “elites” nationwide with comparable demographics and partisanship (i.e. with college degrees and incomes over $75,000, weighted to show a 2:1 Democratic advantage)? Maybe socioeconomic elites in Washington are not all that different from similarly situated elites nationwide.

Finally, an important postscript: Both surveys were conducted “online.” In this case, I won’t condemn Penn and Politico for conducting an online survey (though many of my pollster colleagues would), mostly because polling a “rare” population like “D.C. elites” would be prohibitively expensive using more conventional methods. But I wish Politico would have at least offered a sentence or two to describe the methodology and acknowledge that the “science” of online surveys remains a subject of debate among pollsters.

Let me try to compress that debate to a few paragraphs. Unlike most conventional telephone polls, which begin with a random sample of telephone numbers or registered voters, online polls begin with non-random “panels” of Americans who agree to complete surveys online. They are typically recruited using banner advertisements on web sites and usually receive some form of token financial compensation for each survey they complete. Online pollsters then use various methods (usually statistical weighting) to try to transform the completed interviews into a representative sample of a larger population.
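To make the weighting idea concrete, here is a minimal sketch of post-stratification on a single variable. Real online pollsters typically rake across several demographics at once, and the panel counts and population shares below are invented for illustration:

```python
def poststratify(sample_counts, population_shares):
    """Weight for each category = population share divided by that category's share of the sample."""
    total = sum(sample_counts.values())
    return {cat: population_shares[cat] / (count / total) for cat, count in sample_counts.items()}

# Hypothetical panel that skews older than the adult population
weights = poststratify({"18-34": 150, "35-64": 550, "65+": 300},
                       {"18-34": 0.30, "35-64": 0.50, "65+": 0.20})
print({k: round(v, 2) for k, v in weights.items()})  # under-represented groups get weights above 1.0
```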

How well do the adjustments work? The few independent efforts to assess the accuracy of online polling against known benchmarks tell us that online polls are less accurate, although the degree of error probably depends on the application and can be hard to predict. Some argue that online panels should never be used to estimate “population values,” while others consider the observed differences in accuracy small relative to the reductions in survey cost (for more details, see my two columns on this subject written last year).

(Past interests disclosed: My website, Pollster.com, was owned and sponsored by an Internet polling company, YouGov/Polimetrix, until two weeks ago, when it was acquired by the Huffington Post).

Generalizations aside, the Politico articles offered no real description of how the poll was conducted, so I emailed Mark Penn to ask for more detail. He tells me they used the e-Rewards market research panel. They weighted the general population sample by gender, age, education and race to match Census estimates (“within 2 percent”). However, they did not weight the D.C. elite sample beyond screening for the “key criteria listed of college education or higher, 75k of income and selected occupation levels.”

All of this leaves me with two final questions: How many college educated, upper-income D.C. policy and political wonks “earn e-Rewards Currency just for sharing [their] opinions?” And if the D.C. elite that are part of the e-Rewards panel have characteristics or opinions that differ from those that are not, how would we know?

[Cross-posted at Huffington Post]


Dueling Obama-Palin Polls

Topics: 2012 , Barack Obama , PPP , Sarah Palin , Time/SRBI

Two new polls released yesterday asked about a hypothetical presidential contest between Sarah Palin and Barack Obama with very different results: The Time/SRBI poll (article, SRBI analysis & results) shows Obama with a massive, 21-point advantage (55% to 34% with 11% unsure or not voting), while a survey by Public Policy Polling (PPP) shows a dead heat (46% to 46% with 9% undecided). What gives? And which poll, if any, should we trust?

The two surveys were conducted by telephone just a few days apart (July 9-12 for PPP and July 12-13 for Time/SRBI) and sought to sample registered voters nationwide, but beyond those characteristics, they were very different surveys.

Time/SRBI uses live interviewers. PPP uses an automated, recorded-voice method that asks respondents to answer questions by pressing buttons on their touch-tone phones.

Their sampling methods are also very different: Time/SRBI used a method that selects telephone area codes and exchanges and randomizes the final digits of each number to theoretically reach a random sample of all working telephone numbers. In this case, they drew two samples, one of landline phones and one of cell phones, dialed each separately and combined the two with weighting. They attempted to select a random person in each household and ultimately questioned 1,003 adults (of whom 50 were interviewed by cellphone), although they asked the presidential vote question only of the 87% who said they are registered to vote.
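As a rough illustration of the random-digit-dial idea -- a toy sketch, not SRBI's actual sampling procedure -- sampled area-code-and-exchange prefixes are paired with randomized final digits, which is what gives unlisted numbers the same chance of selection as listed ones:

```python
import random

def rdd_sample(prefixes, n):
    """Toy random-digit-dial sample: append four random digits to sampled area-code/exchange prefixes."""
    return ["{}-{:04d}".format(random.choice(prefixes), random.randrange(10000)) for _ in range(n)]

# Hypothetical working prefixes; a real frame would be built from telephone-exchange databases.
print(rdd_sample(["212-555", "415-555"], 5))
```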

PPP draws random samples of households from a list of all registered voters compiled by Aristotle International using the public lists gathered by voter registrars nationwide. Phone numbers are either provided by voters when they register or obtained by matching addresses to published phone directories, so some undisclosed percentage of the sampled voters lacks a phone number. PPP then interviews whoever answers the phone and asks respondents to “please hang up” if they are not registered to vote. They ultimately interviewed 667 registered voters.

The trade-off: Time/SRBI theoretically covers every registered voter, while PPP misses the undisclosed percentage of voters who lack listed phone numbers or who live in cell-phone-only households (federal regulations prohibit pollsters from using an “autodialer” to dial cell phone numbers). On the other hand, PPP’s identification of truly registered voters may be more accurate; self-reports tend to exaggerate the number of registered voters.

The PPP survey featured 17 questions, including demographics. The Time/SRBI survey asked 29 questions, not including demographics. Did the difference in length and mode (interviewer or no interviewer), produce a different response rate? Neither organization released a response rate, so we do not know.

Do all of these characteristics add up to different kinds of people interviewed? One big clue comes from the results for party identification: PPP’s respondents identified themselves as 39% Democrat, 34% Republican and 27% independent or other. Time/SRBI provided me with their party ID results for registered voters: On the initial question, 33% say they “usually think of” themselves as Democrats, 23% as Republicans, 30% as independents, 12% as “something else” and 2% were not sure. When the uncertain were pushed to lean one way or the other, a total of 47% identified with or leaned Democratic and 30% identified with or leaned Republican.

So the PPP sample has a closer partisan balance than the Time/SRBI sample, although we should keep in mind that the two surveys also asked slightly different party identification questions.

But wait, there’s more: The vote question also differs in an important way. Time/SRBI identifies the party of each candidate, while PPP omits party labels:

Time/SRBI: If the presidential election were held today and the candidates were Barack Obama, the Democrat, and Sarah Palin, the Republican, and you had to choose, for whom would you vote?

PPP: If the candidates for President next time were Sarah Palin and Barack Obama, who would you vote for?

Finally, the two polls asked their Obama-vs-Palin question in slightly different contexts. Time/SRBI asked their question following a set of probes of Obama’s performance as president, a question about whether Obama or George W. Bush “is the better president” and immediately following a job rating of first lady Michelle Obama. PPP asked their Obama-Palin question after a job rating of Obama, favorable ratings of each of the Republicans and immediately after a Huckabee-vs-Obama question.

The main point here is that these polls are very different in ways that go far beyond live interviewers versus automated polling. Their methods are dissimilar across the board.

So given these differences, which poll should we trust? My answer is neither. Or both.

First the case for both: When an attitude or preference is weak, small differences in methodology and question wording can make a big difference. Needless to say, asking about a hypothetical 2012 match-up more than two years before the election qualifies as weak. In such cases, it makes sense to look at a wide variety of polls in order to get a sense of the range of potential results.

The case for “neither” is a lot stronger in this instance, for the same reasons. Seven years ago while addressing the American Association for Public Opinion Research (AAPOR), my new boss Arianna Huffington offered a quip about a similar presidential vote preference question asked nearly four years before the 2004 election:

This is really about as meaningful as phrasing a question in the following way, which I will suggest you try one day, “If the world were to stop spinning, and all life were placed in a state of suspended animation, who would you like to see in the Oval Office when you thawed out?”

Yes, I’m guilty of sucking up a bit with that reference, but she’s right. How many ordinary voters have thought deeply about a contest between Obama and Palin? How many were forming an opinion on the spot when interviewed, only after hearing the question posed over the telephone?

My best advice to anyone trying to understand Sarah Palin’s potential is to put aside these two measures and focus instead on questions about opinions that are closer to real, such as Palin’s favorability rating (as asked in both the PPP survey and the results released by Gallup earlier today). Ordinary people do have genuine, pre-existing opinions about Barack Obama and Sarah Palin. Polls are on the most solid ground when they measure those perceptions separately, rather than asking about a still-hypothetical contest that has so far been of interest mostly to political junkies.


SurveyUSA Polls Cell Phone Only Voters

Topics: Automated polls , Cell Phones , IVR Polls , Jay Leve , Pew Research Center , SurveyUSA

Will this be the year that "cell phone only" voters wreak havoc on the results of pre-election polls? And does the cell-phone-only problem doom pollsters that depend on automated, recorded-voice methodologies? Two recent polls from SurveyUSA suggest the answers are not as obvious as some may think.

Let's start with the first question. SurveyUSA, a company that has been conducting recorded-voice surveys for local television news stations for nearly twenty years, has recently released two statewide surveys based on dual samples of both landline and mobile phones. In both cases including cell-phone-only voters interviewed over their cell phones did not make much difference in the results. Their recent Washington poll, for example, shows Democratic Senator Patty Murray leading by a not-statistically-significant four-point margin (37% to 33%) over challenger Dino Rossi in a combined sample of landline and mobile phones. Murray's lead would have been a virtually identical five-point margin (39% to 34%) had they interviewed by landline phone only.

Similarly, a North Carolina survey released just yesterday shows Republican Richard Burr leading Democrat Elaine Marshall by ten points (46% to 36%) in the combined sample interviewed over both landline and cell phones. Burr would also have led by a 10-point margin (47% to 37%) had they interviewed all respondents via landline phones only.

These are just two surveys, of course. A more comprehensive assessment of national data gathered by the Pew Research Center earlier this year found that, "weighted estimates" from a large landline sample "tend to slightly underestimate support for Democratic candidates when compared with estimates from dual frame landline and cell samples in polling for the midterm congressional elections this year." But if that slight understatement is real, it may not produce many "significant" differences, either statistically or substantively, in individual statewide surveys.

What is more interesting here, however, is that an automated pollster managed to conduct a "dual frame" survey at all. The underlying story gets us closer to an answer on the second question: the impact of cell phones on automated surveys.

Some background: Pollsters have a harder time interviewing Americans on their cell phones because of a provision in the Telephone Consumer Protection Act of 1991 (TCPA) that restricts unsolicited calls to mobile phones. As explained by the Marketing Research Association:

The TCPA forbids calling a cell phone using any automated telephone dialing system (autodialer) without prior express consent. This rule applies to all uses of autodialers and predictive dialers, including survey and opinion research.

Virtually all pollsters use some form of "autodialer" to place calls to landline respondents, so virtually all are affected by the TCPA's restrictions on calls to cell phones. With the exception of CBS News (the only operation I know of where interviewers still hand-dial each number), pollsters use some form of computerized interviewing system that dials the phone so interviewers don't have to. Some also use "predictive dialers" that place calls and connect the respondent to an interviewer only once a live person answers (the process behind that annoying pause familiar to anyone who has answered a call from a telemarketer). Finally, all recorded-voice pollsters use an "automated dialing system" for their complete process, though they could theoretically begin with a live interviewer and then hand off the process, with the respondent's consent, to an automated interview.

So when live-interviewer pollsters want to interview respondents on their cell phone, their interviewers need to place the calls manually. Their process becomes less efficient and more expensive, but they do not face a total barrier.

Pollsters that use a recorded-voice methodology face a much bigger problem. Yet somehow, SurveyUSA managed to interview voters in North Carolina and Washington over their cell phones. How did they do it? They used live interviewers:

Cellphone numbers were dialed one at a time, cellphone respondents were interviewed by call center employees. Landline respondents heard the recorded voice of a SurveyUSA professional announcer.

In North Carolina, SurveyUSA used more expensive live interviewers to conduct 404 out of 1,000 interviews, although only 250 of those were in cell-phone-only households (see their methodology statement for more details).

So while this approach amounts to a technical solution to the challenge of reaching cell-phone-only households, it creates a huge challenge to the underlying business model of automated pollsters like SurveyUSA. Consider the chart below, prepared by SurveyUSA CEO Jay Leve for a presentation last year. It suggests that in this case, their costs were somewhere between triple and quadruple what they would have been had they done all interviews using a recorded voice methodology.

2010-07-14-leve-costs.jpg
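A little arithmetic shows why the blended design is so expensive. The per-interview cost ratio below is a hypothetical round number chosen to land in the triple-to-quadruple range described above, not a figure taken from Leve's chart:

```python
def cost_multiplier(n_live, n_recorded, live_cost_ratio):
    """Cost of a mixed-mode poll relative to doing every interview by recorded voice."""
    all_recorded = n_live + n_recorded            # baseline: every interview at unit (recorded-voice) cost
    mixed = n_recorded + n_live * live_cost_ratio
    return mixed / all_recorded

# North Carolina: 404 live interviews and 596 recorded ones; if a live interview costs roughly
# 7 times a recorded one (hypothetical), the project costs about 3.4 times the all-recorded baseline.
print(round(cost_multiplier(404, 596, 7), 1))
```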

Any other lessons here?

First, this issue provides another demonstration of why all automated surveys are not created equal. In this case, SurveyUSA is actually doing more of a "mixed mode" poll that combines both recorded-voice and live interviewers.

Second, all "dual mode" surveys based on combining landline and cell phone samples are not created equal either. Pollsters have to decide whether to use the cell phone samples to reach just the "cell phone only" households, or whether to also include the "cell phone mostlys" as well. And either way, they need to decide how to weight the combined samples, often without reliable estimates of the percentage of cell-phone-only households at the state level (see the SurveyUSA release and the Pew Research report for more detail).

Third, as is true for many aspects of poll methodology, pollsters could do a better job disclosing the procedures and methods they use to interview Americans over their cell phones and combine those results with interviews conducted via landline phones. CBS News, for example, tells us only that the numbers for their just released survey "were dialed from random digit dial samples of both standard land-line and cell phones." The release for the NBC News/Wall Street Journal poll tells us that their sample of 1,000 adults included "200 reached by cell phone," but nothing more. There are exceptions, of course -- most notably the Pew Research Center -- but they are few and far between.

[Cross-posted at the Huffington Post].


Questions and Answers about the Huffington Acquisition

Topics: Arianna Huffington , Douglas Rivers , Huffington Post , Pollster.com

I want to try to answer some of the questions many of you have been asking about our acquisition by the Huffington Post, but I have to start with a personal story.

Seven years ago, I attended my first conference of the American Association for Public Opinion Research (AAPOR). I knew AAPOR well, and had long wanted to attend the annual conference, but until 2003 I had never been willing or able to devote the time and money necessary. One of the reasons I finally went was that I had been kicking around the idea -- "pipe dream" was probably more accurate at the time -- of starting a blog about polling. So I decided that attending the AAPOR conference would be a good way to get up to speed on the current methodologies and controversies.

As it happened, AAPOR had planned something bold for 2003: They invited the country's most prominent polling critic, Arianna Huffington, to be their plenary speaker. In 1998, Huffington had launched a "crusade" -- the Partnership for a Poll-Free America -- that urged her followers to hang up on pollsters in order to stop polls "at their source" because they "are polluting our political environment." In a 1998 column headlined "hang it up," she described herself as the "sworn enemy" of pollsters gathering at that year's AAPOR Conference, and hoped that "if all goes well" her crusade would spell the end of such meetings altogether.

What actually transpired was a fascinating and somewhat surprising discussion captured by the conflicting media accounts at the time. AP's Will Lester wrote that although the pollsters were "prepared for the worst, they got charmed instead," as Huffington "set aside her apocalyptic view of the polling profession" and focused instead on points of agreement. An account in Businessweek took a different tack, noting that by evening's end, "Huffington was on the defensive, dodging accusations that she had her facts wrong and protesting that she had been misunderstood." As an eyewitness, I can testify that both accounts were accurate. Either way, it was easily the best attended and most provocative event at any AAPOR Conference in my memory (I rescued the full transcript of the plenary session, once posted to AAPOR's web site, from the Internet Archive).

But let me stop there. If anyone had told me during that conference that (a) I would find time to start the MysteryPollster blog a year later, (b) that the blog would be a success, (c) that it would ultimately lead to a day job publishing Pollster.com, (d) that Pollster.com would win AAPOR's prestigious Innovator's Award, (e) that Pollster would eventually be sold to Arianna Huffington and (f) that I'd be truly excited by that prospect, well...let's just say that even after seven years the events of the last week or two have been a bit surreal.

So with that in mind, let's review some of the questions that friends and readers have been asking over the last 24 hours:

1) So what about Huffington's "Poll Free America" Crusade? First, for the record, I have never been a fan of the crusade: not in 2003, not when Arianna renewed it in early 2008 and not now. Even if the intended victims were, as she explained in 2003, only those polls "that are about the political questions of the day," a truly effective campaign to get Americans to hang up on surveys would also ensnare surveys that track consumer confidence, the costs of government programs, the incidence of illness and disease and the health needs of all Americans.

But that said, I can also tell you that Arianna Huffington has given her unqualified support to our longstanding mission to "aggregate polls, point out the limitations of them and demand more transparency," as she told the New York Times. She has also given us the editorial independence to disagree if we deem it appropriate, as I did in the previous paragraph. I also understand that there was a larger point she has been making all along that is in sync with our mission (something I noted in a column last year). As I said in the press release that went out today, I have long believed that to use polling data effectively, consumers need to understand its power as well as its limitations. That's what we have always been about, and that's the mission that Huffington Post has unambiguously endorsed.

So what does Arianna have to say about the apparent contradiction between her anti-polling crusade and buying a site named Pollster.com? Or, as Huffington Post commenter Marlyn, who said she "took Arianna's pledge to never participate in polls" asked last night, "What am I to do now?"

I asked Arianna how she would answer Marlyn's question. Here, via email, is her answer:

I've been a longtime critic of the accuracy of polls and how they're misused by the media, which continue to treat poll results as if Moses just brought them down from the mountaintop. That's why we launched the "Say No to Pollsters" campaign on HuffPost in 2008. And it's why I wanted to work with Mark and Pollster. Since it's clear that polls and polling are not going to go away - indeed, if anything, the media have only gotten more addicted to political coverage dominated by polling - we need to make sure that polls are as accurate as possible and that they are put in the proper larger context. So, though we come at it from different perspectives, Mark and I -- and the rest of the HuffPost team - share the same goal: we are committed to pulling the curtain back on how polls are conducted, and, in the process, make polls more transparent, help the public better understand how polls are created, and clarify polls' place in our political conversation.

2) Aren't you worried about Huffington's partisanship? Or to quote Pollster commenter IWMPB, who is troubled by our move and the greater "partisan division" of the news that it appears to herald: "So much for objectivity...whether or not it's true, perception is reality."

There is no question, given the comments here, in my inbox and elsewhere, that many of you are concerned by the perceived partisan slant at Huffington Post. I have no doubt that these perceptions are the biggest risk we are taking with this move, and represent a huge change from the inside-the-beltway prestige of The National Journal. The questions so many of you are asking are fair. My only hope is that those who have come to value Pollster will judge us on the basis of the work we do going forward and not pre-conceived notions about what this move may or may not mean in the future.

That said, concerns about my objectivity were equally valid when I started blogging six years ago while still actively polling on behalf of Democratic candidates and after more than 20 years as a pollster and campaign staffer for Democrats. Such concerns were equally valid three years ago, when we launched Pollster.com, a venture owned and backed by an Internet research company. I would never claim to be without bias, but I have worked hard from day one to be thorough, accurate and fair. If Pollster has a reputation for straight-shooting commentary and non-partisan poll aggregation, it is because we never took our eyes off those goals.

That's why we have regular contributors who have worked for both Republicans (Kristen Soltis, Steve Lombardo, Bob Moran) and Democrats (myself and Margie Omero) as well as those from academia (Charles Franklin, Brendan Nyhan, Brian Schaffner). That is also why all of these individuals have assured me that they will continue to contribute once we launch our new virtual home at Huffington Post.

And one trivial question that keeps coming up:

3) Did this sale make you rich? Sadly...no. Pollster and its assets were purchased from YouGov/Polimetrix, not me, though I am privileged to have a stable new job in the news media doing something I love.

And unfortunately, despite the sale, Pollster.com resulted in a net loss for our former owners. Doug Rivers, the CEO of YouGov/Polimetrix, helped us launch Pollster.com with the hope of doing a service to the survey profession and making a profit. We succeeded, arguably, at the former but not the latter.

Which reminds me that, before ending this post, I need to offer thanks to two important sets of people.

First, to Doug Rivers, who first invested in Pollster.com despite strong advice that a business model would be elusive and who continued to support us long after it was clear he would never see a dime of profit. All along he kept his promise of total editorial independence, never once reaching out to complain if we wrote or linked to something critical of his business. Thanks also to the technology staff at YouGov/Polimetrix who helped keep our site up and running even though that task was far down their daily to-do lists.

Second, thanks to all of my valued friends from National Journal and Atlantic Media (too many to name, but they know who they are), but most of all to Kevin Friedl, Tom Madigan and Deron Lee, who for two years took my typo-ridden copy and molded it into weekly columns we could all be proud of. I will miss your skilled editing more than you know.

We are going to be moving tomorrow, so time will be limited, but if there are more questions -- and I'm sure there will be -- I will try to answer them in the comments below.


Huffington Post Acquires Pollster.com

Topics: Arianna Huffington , Huffington Post , Pollster.com

Yes, it's true. As reported this afternoon by the New York Times' Jeremy Peters, Pollster.com has been acquired by The Huffington Post:

The Huffington Post is venturing into the wonky but increasingly popular territory of opinion poll analysis, purchasing Pollster.com, a widely respected aggregator of poll data that has been a major draw for the website of the National Journal.

The purchase is something of a coup for The Huffington Post, which has been making a more aggressive push into political journalism ahead of the midterm elections in November.

"It's going to beef up our political coverage," said Arianna Huffington, the website's editor in chief and founder. "Polling, whether we like it or not, is a big part of how we communicate about politics. And with this, we'll be able to do it in a deeper way. We'll be able to both aggregate polls, point out the limitations of them, and demand more transparency."

I will have much more to add later, but for now let me just say how excited we are to be joining forces with Huffington Post, as the change will ultimately super-charge everything we do. If you are a fan of Pollster.com, I assure you that what you like will stay the same, including our mission, editorial voice and commitment to providing a forum for better understanding poll results, survey methods and the polling controversies of the day. What will improve is the overall quality of our site, the power of our interactive charting tools and our efforts to promote transparency and disclosure of polling methods.


A Lesson in Caveat Emptor

Topics: Daily Kos , Frank Newport , Gary Langer , Markos Moulitsas , Research2000

What is the most important lesson to be learned from the emerging Daily Kos - Research 2000 polling scandal? Two prominent pollsters, Gallup Poll Editor-in-Chief Frank Newport and ABC News Polling Director Gary Langer, both chimed in last week with a similar conclusion: More disclosure is good, but poll sponsors need to do a better job checking and verifying what they publish.

First up was Newport, who is also serving this year as president of the American Association for Public Opinion Research:

I would emphasize the ultimate responsibility which rests with the entity commissioning or releasing poll data, just as a newspaper or broadcast outlet has the ultimate responsibility for what it releases or publishes. The current controversy revolves around a client-contractor relationship between Daily Kos and Research 2000. It is unclear what procedures Daily Kos may or may not have used to verify and check the data it received from the survey research firm it employed (Research 2000) before publishing it. (Daily Kos ultimately, it says, fired the research firm). Nevertheless, in general, a news or web outlet has an obligation to check and verify what it puts out. This is often easier said than done, of course. A number of publications have been burned in recent times when outside contractors or freelance writers have not followed standard journalistic procedures.

Langer, as always, was a bit more direct:

Disclosure, then, as necessary as it is, does not in and of itself assure data quality. That takes another step: The need for those who fund and then promote or disseminate these data first to dig deeply into the bona fides of the product.

Indeed to my mind the delivery of methodological details, including original datasets, should be an initial and ongoing requirement of any polling provider, not a demand only when controversy arises. That points to a more basic lesson of this story: the principle of caveat emptor.

Polling is a complex undertaking that can be produced in many ways - some highly valid and reliable, some less so, some not in the least. Anyone buying it needs to take the trouble to ascertain precisely how it's being carried out - in sampling, questionnaire design, respondent selection, interviewing, quality control, weighting and more - and to assess the appropriateness of these methods for the intended use of the research.

Just so you don't miss the point, Langer published his comments under the headline, "Running With Scissors." Snark aside, I can't quibble with the larger argument. The first and most important step in assuring data quality and integrity rests with the organizations that sponsor it. I suspect that Markos Moulitsas agrees, despite Langer's implied argument that only the "adults" of the media world can be trusted to publish surveys.

Markos has taken his share of lumps in this controversy, but we should also give him credit for taking a stand in a way that will ultimately force every ugly detail into the public domain.** Important lessons have and will be learned that have nothing to do with politics and everything to do with how to be better data consumers, whether we are paying to conduct the polls or just reading about them.

I am glad that both Newport and Langer also underscored their longstanding commitments to better disclosure, including AAPOR's emerging Transparency Initiative. Better disclosure is a critical tool for the rest of us who do not fund polling. New media brands are emerging even faster than new polling technologies, and very few of those organizations have in-house experts who can assess things like sampling, questionnaire design, respondent selection, and the rest.

Those of us who are part of "The Crowd" can do our part to help others make sense of polling methodology, but only if the designs are transparent and the underlying data available.

**A semi-related post-script: In my post on Saturday I noted "the apparent lack of a written contract" between Daily Kos and Research 2000, based on the statement in the Daily Kos complaint that they entered into an agreement "reached orally" to conduct national polls in 2009. Markos Moulitsas subsequently emailed to say that while there was no formal "boilerplate" contract, "we hashed out our agreement via email." To be clear, a legally binding contract between two parties does not require a written document.


On Removing Research 2000 Polls From Our Charts

Topics: Charts , Daily Kos , Del Ali , Markos Moulitsas , pollster.com , Research2000

When Markos ("Kos") Moulitsas published the analysis last week that convinced him that the polls produced by Research 2000 were "likely bunk" and announced plans to sue his former pollster for fraud, he also made an unusual request:

I ask that all poll tracking sites remove any Research 2000 polls commissioned by us from their databases.

Given the still unexplained patterns in the results uncovered by Grebner, Weissman and Weissman, and the even more troubling response late last week by Research 2000 President Del Ali (discussed here), we have chosen to honor Kos' request, at least as it pertains to the active charts on Pollster.com that we continue to update (such as favorable ratings and vote preference questions for upcoming elections). As of this writing, we have removed the Daily Kos/Research 2000 results from the national Obama favorable rating and national right direction/wrong track charts. The rest should be removed from active charts by close of business today.

We have left in place, at least for now, Research 2000 poll results in active charts sponsored by other organizations, although we will also remove those if they so request. We may also revisit this decision as further developments warrant.

Finally, we will leave in place the results from prior elections as, for better or worse, we consider our final estimates (and the results upon which they were based) to be part of the public record. That said, we will likely follow Brendan Nyhan's lead and add a footnote about the controversy to our charts from 2008 and 2009 that include Daily Kos/Research 2000 data.


Daily Kos & Research 2000: A Troubling Story

Topics: Daily Kos , Del Ali , Fraud , Markos Moulitsas , Research2000

The battle between Daily Kos and pollster Research 2000 went from ugly to surreal last week, as the website and its founder, Markos ("Kos") Moulitsas, filed suit and pollster Del Ali fired back with a lengthy, frequently rambling reply to TPM's Justin Elliott. The Washington Post's Greg Sargent points out that the coming lawsuit could provide an "unprecedented look at the inside of a professional polling operation." I would argue that it already has, although "professional" is not necessarily the adjective I would choose.

Consider what we have learned in just the past few days. From Elliot's reporting, for example, we know that the Olney, Maryland address listed on the Research 2000 website is a post office box and the company "does not appear as incorporated on the state business records database." Ali told Elliott "he incorporated with 'self-proprietorship' in 2000."

That profile fits the description of his business that Ali gave the Baltimore Daily Record in 2006 (h/t Harry Enten). The article described Research 2000 as having just "three part time employees" and being "quite a bit smaller" than Mason-Dixon Polling & Research Inc., where Ali had worked until starting his own business in late 1998. "The actual legwork" of his business, the article said, "is farmed out to professional call banks." Ali also claimed that his firm did considerable non-media work:

Ali said diversification - working with interest groups as well as media - is important for business, and that there is a misconception surrounding polling contracts with large news agencies. We don't make a great deal of money, Ali said. If I were to depend on making ends meet with media polls, then I'd be broke.

Running a mom-and-pop polling business is not incriminating, in and of itself. Many research companies, including my former business, are small shops that depend entirely on third-party call centers to conduct live telephone interviews. But it's important to consider the Research 2000 profile in terms of the sheer volume of work it claims it did for Daily Kos, especially given that Ali told me, as recently as four weeks ago, that his work for Kos and progressive PACs was "less than 15% of our overall business." If that is true, the volume of surveys that Research 2000 farmed out to call centers over the last two years was extraordinary. Several very large call centers would have been involved. Why have we heard nothing from them?

From Yahoo Politics' John Cook we learn that court records show Ali "has been sued numerous times in his home state of Maryland for nonpayment of debt and has been hit with several tax liens," including a $2,360 lien just two months ago. Cook also notes that Ali and his company were sued eight years ago for $5,692 for non-payment to "polling and research company" RT Nielson Company (now known as NSON). Their website confirms that NSON "specializes in telephone data collection" and provides these services "to many market and opinion research consulting companies."

Maryland Court records also show a judgment against Del Ali for $5,714.09 from a suit filed by Zogby International in 2001 (document obtained via search here). So we do have documentation to show that Research 2000 was doing business with research companies and survey call centers, albeit eight to ten years ago.

The Daily Kos complaint, published by Greg Sargent, provides new information on the financial side of the polling partnership from the Daily Kos perspective. Shortly after the 2008 elections, for example, Daily Kos entered into an agreement "reached orally" to conduct 150 polls for the website over the following year, including a weekly national survey and various statewide polls to be conducted "as requested." Kos agreed to make an initial payment in late 2008 and two "lump sum" payments in 2009. The complaint implies (though does not state explicitly) that the parties agreed to either a total amount or a set cost per poll (or both).

The complaint goes on to explain that Kos agreed to advance the second lump sum payment to May 2009 -- right around tax time -- in exchange for an additional 59 polls "to be performed free of charge." Ali requested the advance and offered the free polls in exchange, according to the filing, "claiming it would provide 'immense' help for cash flow reasons."

What the Daily Kos complaint omits is any discussion of the dollar amounts involved. Just how much did they pay for hundreds of thousands of interviews conducted over the last two years? What was the typical cost per interview (especially when we include those 59 free surveys)? The answers to those questions alone will tell us whether Research 2000 could have plausibly conducted live telephone interviewing on such a large scale. As both Patrick Ruffini and Nate Silver have speculated, Ali appears to have been charging absurdly low prices given the likely budget of a site like Daily Kos and the realities of the costs of farming out live interviewing.

Moreover, the financial arrangement described in the complaint -- pre-negotiated lump sums for hundreds of surveys with no written contract -- is also extraordinary. Telephone interviewing costs vary considerably depending on the number of interviews, the length of the questionnaire, the incidence of the target population (how many non-registrants or non-voters need to be screened out) and several other factors. I know of no call center that would agree to field a survey without an advance bid based on precise specification of all of these variables. Given this potential variability and the relatively low profit margins typically involved, pollsters, call centers and their clients are usually careful about nailing down the specifications in advance. The idea that a pollster would propose conducting 59 free polls as a means of obtaining, as Nate Silver puts it, a short-term loan with an "alarmingly high interest rate," is simply unheard of.

While the story told by the Kos filing was strange, the controversy grew even more surreal after Ali "lashed out" at Moulitsas and others in a rambling 1,100-word statement sent via email to TPM's Justin Elliott on Thursday. Ali claims in his statement that the Daily Kos complaint contains "many lies and fabrications," that "every charge against my company and myself are pure lies, plain and simple," and that Kos still owes him a "six figure payment."

Ali promises to "expose" the alleged mistruths "in litigation, not in the media" and says calls by the National Council on Public Polls (NCPP) and others to "just release the data and explain your methodology" indicate a bias toward Kos "and a disregard for the legal process."

Hardly. I am not a lawyer, but I find it difficult to believe that the release of exculpatory evidence now would in any way prejudice Ali's ultimate defense in court. If the surveys were genuine, then raw data files exist somewhere, at least for the most recent surveys. If the cross-tabulations published on Daily Kos are genuine, then statistical software exists somewhere that can replicate them -- including the strange matching odd or even pattern observed by Grebner, Weissman and Weissman. This is not the stuff of advanced statistical analysis: Either the data and processes exist and can be replicated, or they do not and cannot.

Moreover, if Research 2000 actually conducted the literally hundreds of thousands of live interviews behind the results published on Daily Kos since January 2009 (I count well over 200,000 reported for their national surveys and U.S. Senate surveys alone), a wealth of documentation and eyewitness accounts should be readily available that would be easily understood by mere statistical mortals: call center invoices, testimony from interviewers, supervisors and the employees who prepared cross-tabulations. That sort of evidence helped send a call center owner to jail in an unrelated Connecticut case in 2006. That sort of evidence could also help vindicate Ali and Research 2000 right now -- but only if it exists.

By far the most troubling part of Ali's response comes in these two sentences (left in their original form including typographical errors):

Regardless though. to you so-called polling experts, each sub grouping, gender, race, party ID, etc must equal the top line number or come pretty darn close. Yes we weight heavily and I will, using te margin of error adjust the top line and when adjusted under my discretion as both a pollster and social scientist, therefore all sub groups must be adjusted as well.

"Top line" in this context means the results for the full sample rather than a subgroup, but it is still unclear exactly which "top line numbers" Ali is referring to. If he means the results of attitude questions -- vote preference horse-race numbers, favorable ratings, issue questions or possibly even the party identification question -- he comes close to admitting a practice that every pollster I know would consider deceptive and unethical. "Scientific" political surveys are supposed to provide objective measurements of attitudes and preferences. As such, pollsters and social scientists never have the "discretion" to simply "adjust" the substantive results of their surveys, within the margin of error or otherwise. As a pollster friend put it in an email he sent me a few minutes after reading Ali's statement: "That's not polling. It's Jeanne Dixon polling."

Pollsters and social scientists do often adjust their top line demographic results, and some will weight on attitude measurements like party identification, to correct for non-response bias (though party weighting continues to be a subject of considerable debate in the industry). In either case, however, the adjustment needs to be grounded in prior empirical evidence -- U.S. census demographic estimates or, perhaps, previous surveys of the same population -- and not merely the whim of the researcher.
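To make the distinction concrete, here is a minimal sketch of what a legitimate, evidence-grounded adjustment looks like: simple post-stratification weighting to an external benchmark. The sample, the benchmark shares and the single weighting variable are all hypothetical, and real surveys typically weight on several variables at once (often via raking), but the logic is the same.

from collections import Counter

# Minimal sketch of post-stratification weighting on one variable (hypothetical data).
# Each respondent in a cell gets weight = (benchmark share of cell) / (sample share of cell),
# so the weighted sample matches an external benchmark such as Census estimates.
sample = ["18-29"] * 10 + ["30-64"] * 60 + ["65+"] * 30          # 100 hypothetical respondents
benchmark_share = {"18-29": 0.22, "30-64": 0.55, "65+": 0.23}    # hypothetical Census-style targets

counts = Counter(sample)
n = len(sample)
weights = {cell: benchmark_share[cell] / (counts[cell] / n) for cell in counts}

for cell in benchmark_share:
    print(f"{cell}: sample share {counts[cell]/n:.2f}, target {benchmark_share[cell]:.2f}, "
          f"weight {weights[cell]:.2f}")
# The targets come from prior empirical evidence (here, a stand-in for Census figures),
# not from the researcher's discretion about what the topline "should" be.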

Because of the apparent lack of a written contract,** the Daily Kos complaint relies in part on the concept of an "implied warranty," the idea grounded in common law that transactions involve certain inherent understandings between a buyer and seller. Most reasonable people would agree that a political poll should be an objective measurement based on survey data that has been "adjusted" only as necessary to correct statistical bias. If Del Ali believes a pollster has the discretion to "adjust" results arbitrarily within the margin of error, he has been selling something very different than the rest of us have been (figuratively) buying.

Greg Sargent was right. The legal process of discovery, if this case gets that far, will provide truly full disclosure. But what we have learned so far is already very troubling.

[Typos corrected]

**Update (7/6): Markos Moulitsas emails to say that while there was no formal "boilerplate" contract, "we hashed out our agreement via email." To be clear, a legally binding contract between two parties does not require a written document.


The DailyKos Research 2000 Controversy: 'How Bad for Pollsters?'

Topics: Daily Kos , Disclosure , Jonathan Weissman , Mark Grebner , Markos Moulitsas , Michael Weissman , Research2000 , Walter Mebane

The allegations of fraud leveled by Daily Kos founder Markos (Kos) Moulitsas and the analysis of Mark Grebner, Michael Weissman and Jonathan Weissman are compelling and troubling. As Doug Rivers wrote here earlier today, they demonstrate that "something is seriously amiss" in the Research 2000 data. All of us that care about polling data need to consider the larger issues raised by their analysis and their allegations.

The most urgent question a lot of non-statisticians have been asking is: How damning is the evidence? The short answer is that some of the patterns uncovered by Grebner, Weissman and Weissman have no obvious explanation consistent with what passes for standard survey practice (even given the generous mix of art and science at work in pre-election polling). They demand a more complete explanation.

Of the patterns uncovered by Grebner, et al., the easiest to describe to non-statisticians -- and for my money the most inexplicable -- involves the strange matching pairs of odd or even numbers. They examined the many cross-tabulations of results among men and among women posted to Daily Kos. If the result for any given answer category among men (such as the percentage favorable) was an even number, the result among women was also an even number. If the result among men was an odd number, the result among women was also an odd number. They found that strange consistency of odd or even numbers in 776 of 778 pairs of results that they examined.

Put simply, there is virtually no possibility that this pattern occurred by chance. Your odds of winning $27 million in the Powerball lottery tonight are vastly greater. Some automated process created the pattern. What that process was, we do not know.
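For readers who want to see the arithmetic behind that claim, here is a back-of-the-envelope sketch. It assumes, purely for illustration, that under honest independent sampling the odd/even parity of the men's and women's figures would match about half the time; the Grebner, Weissman and Weissman write-up treats the probabilities more carefully.

from math import comb

# Chance of at least 776 parity matches in 778 pairs if each pair matched
# independently with probability 1/2 (an illustrative null hypothesis, not
# the exact model in the original analysis).
n, k = 778, 776
prob = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
print(f"P(at least {k} of {n} matches by chance) ~ {prob:.2g}")
# Roughly 2e-229 -- compare with Powerball odds on the order of 1 in 200 million.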

While there are many true statisticians who design samples and analyze survey data, very few do the kind of forensic data analysis that Grebner, Weissman and Weissman have presented. One true expert in this field, and one who is universally respected, is University of Michigan Professor Walter Mebane (Disclosure: Mebane was my independent study advisor at Michigan 25 years ago). I emailed him last night for his reaction.

Mebane says he finds the evidence presented "convincing," though whether the polls are "fraudulent" as Kos claims "is unclear...Could be some kind of smoothing algorithm is being used, either smoothing over time or toward some prior distribution."

When I asked about the specific patterns reported by Grebner, et al., he replied:

None of these imply that no new data informed the numbers reported for each poll, but if there were new data for each poll the data seems to have been combined with some other information---which is not necessarily bad practice depending on the goal of the polling---and then jittered.

In other words, again, the strange patterns in the Research 2000 data suggest they were produced by some sort of weighting or statistical process, though it is unclear exactly what that process was.

As such, I want to echo the statement issued this morning by the National Council on Public Polls calling for "full disclosure of all relevant information" about the Research 2000 polls in question:

"Releasing this information will allow everyone to make a judgment based on the facts," [NCPP President Evans] Witt added. "Failure to release information leaves allegations unanswered and unanswerable."

In the absence of that disclosure, and unless and until the parties have their day in court, it is also important that we give the Grebner, Weissman and Weissman analysis the respect it deserves and subject it to a thorough "peer review" online. It is all too easy to use a blog to lob sensational accusations at suspicious characters, especially when those accusations are grounded in subjects that are "all but impossible for a lay-person to be able to investigate" unless "you have a degree in statistics" (to quote our colleagues at The Hotline earlier today).

The courts have discovery and cross-examination; academic journals have a slow process of anonymous review. Online, we provide such review through reader comments and deeper analysis posted by "peers" that critique work in something much closer to real time. Examples I've seen already include the comments earlier today by Doug Rivers and the blog post by David Shor. Grebner, et al. have made a compelling case, but it is vital that we kick the tires on their work before leaping to conclusions. Remember, the truly "full disclosure" that a lawsuit's discovery process will certainly provide may take months or even years to occur.

We will all have more to say on this subject in the days ahead, but for the moment, I want to echo a point Josh Marshall made yesterday. Research 2000 was not the creation of Daily Kos, nor was it the product of a business model built on ignoring the mainstream media and disseminating data over the internet. "They've been around for some time," Marshall wrote yesterday, "and had developed a pretty solid reputation." Their clients included local television stations plus the following daily newspapers (according to the Research 2000 web site): The Bergen Record, The Raleigh News & Observer, The Concord Monitor, The Manchester Journal Inquirer, The New London Day, The Reno-Gazette, The Fort Lauderdale Sun-Sentinel, The Spokesman-Review, and The St. Louis Post-Dispatch.

A colleague asked me yesterday about the "upshot of this situation, how bad is it going to be for the [polling] industry?" The answer depends on where the evidence leads us, of course, but the early implications are ominous. The polling industry cannot simply continue on a business-as-usual course. We must push for complete disclosure as a matter of routine and we need to develop better objective standards for what qualifies as a trustworthy poll.

PS: The Atlantic Wire's Max Fisher has a thorough summary of the first wave of online commentary on the DailyKos/Research 2000 controversy. I'd also recommend the short-but-sweet commentary from Washington Post pollster Jon Cohen:

However this dispute turns out, there's a new, blazing light on the rampant confusion about the right ways to judge poll quality. Saving the longer discussion, one thing is clear: to assess quality, one needs to know the facts. At this point, too little is currently known about the Daily Kos/Research 2000 poll to make definitive statements. (Research 2000 has a record of releasing more information about their polling than some other prolific providers.)


Daily Kos: We Were 'Defrauded' by Research 2000

Topics: AAPOR , AAPOR Transparency Initiative , Daily Kos , Disclosure , Markos Moulitsas , Research2000

Daily Kos founder Markos Moulitsas today rocked the polling world by posting an analysis that he says shows "quite convincingly" that the national surveys conducted for his website by Research 2000 since early 2009 were "largely bunk." Just three weeks ago, Moulitsas fired Research 2000 on the basis of low accuracy scores tabulated by Nate Silver. Today, on the basis of the work of "statistics wizards" Mark Grebner, Michael Weissman, and Jonathan Weissman, Moulitsas announced that Daily Kos "will be filing suit" against its former pollster "within the next day or two."

The core of his extraordinary explanation is worth reading in full:

We contracted with Research 2000 to conduct polling and to provide us with the results of their surveys. Based on the report of the statisticians, it's clear that we did not get what we paid for. We were defrauded by Research 2000, and while we don't know if some or all of the data was fabricated or manipulated beyond recognition, we know we can't trust it. Meanwhile, Research 2000 has refused to offer any explanation. Early in this process, I asked for and they offered to provide us with their raw data for independent analysis -- which could potentially exculpate them. That was two weeks ago, and despite repeated promises to provide us that data, Research 2000 ultimately refused to do so. At one point, they claimed they couldn't deliver them because their computers were down and they had to work out of a Kinkos office. Research 2000 was delivered a copy of the report early Monday morning, and though they quickly responded and promised a full response, once again the authors of the report heard nothing more.

While the investigation didn't look at all of Research 2000 polling conducted for us, fact is I no longer have any confidence in any of it, and neither should anyone else. I ask that all poll tracking sites remove any Research 2000 polls commissioned by us from their databases. I hereby renounce any post we've written based exclusively on Research 2000 polling.

Separately, Charles Franklin will soon post some thoughts on the evidence presented by Grebner, Weissman and Weissman, but for the moment let's consider the issues of disclosure raised.

First, there is some further history. Two years ago, when the American Association for Public Opinion Research (AAPOR) launched an investigation into the polling miscues during the 2008 presidential primary elections, it asked 21 firms to provide response rate information. Research 2000 could not calculate specific response rates for the calls they made. The estimates they provided were "about 1 complete of every eight attempts" in New Hampshire and "about" 1 in 9 in Wisconsin. They were also unable to provide "a full set of dispositions" (a tally of how many calls resulted in completed interviews, no answers, refusals to be interviewed, and so on).
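To illustrate why that "full set of dispositions" matters, here is a minimal sketch of a simplified, AAPOR-style response rate calculation built from call dispositions. The counts are invented and the formula is a stripped-down version of AAPOR's Response Rate 1 (the full standard has more categories and eligibility-adjusted variants); the point is simply that a crude "one complete per eight attempts" figure cannot be translated into such a rate without the dispositions.

# Hypothetical call dispositions for a single survey (all counts invented).
dispositions = {
    "complete":            600,   # completed interviews
    "partial":              50,   # partial interviews
    "refusal":             900,   # refusals and break-offs
    "non_contact":        1400,   # no answer, answering machine, busy, etc.
    "other_eligible":      100,   # other non-interviews with known-eligible households
    "unknown_eligibility": 2000,  # numbers never resolved as households or not
}

# Simplified Response Rate 1: completes divided by all known-eligible cases
# plus cases of unknown eligibility.
rr1 = dispositions["complete"] / sum(dispositions.values())
print(f"RR1 (simplified) = {rr1:.1%}")   # 600 / 5,050 ~= 11.9%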

Their incomplete answer did not result in a formal censure, since Research 2000 claimed they were sharing whatever information they had. Another half dozen or so pollsters also said they had not kept the accounting necessary to enable such calculations. But whether a failure of disclosure or quality control, the inability to provide such a basic metric speaks to the importance of demanding greater disclosure as a matter of routine. That's why AAPOR's Transparency Initiative is so important. If you have not yet read my column from last month on this subject, I hope you will. It is highly relevant to this story (h/t to Nirml for making the same point on Twitter).

Second, the most damning information about Research 2000 in today's announcement for the layperson -- and certainly the most troubling to me -- is their apparent reluctance to share raw data with their own client. AAPOR's Transparency Initiative will not mandate the release of raw, respondent-level data, but it is worth considering that some of the most respected media pollsters already make their raw data available to scholars by routinely depositing it in the Roper Center archives. The Pew Research Center makes its raw data available to the general public through its own web site.

As this story broke, NBC's Chuck Todd noted the "stain" that irresponsible pollsters are leaving on the community of credible pollsters. Many are asking what the polling profession can do. Whatever one might conclude about Research 2000, there are two clear answers: Full support for and participation in AAPOR's Transparency Initiative and a greater willingness to deposit raw data to the Roper Archives.


Lie to Me 'Outliers'

Topics: Outliers Feature

Lymari Morales puts support for Elena Kagan in context; Chris Bowers has more.

Frank Newport finds an increasingly pro-gun environment as a backdrop for today's Supreme Court decision.

Benjamin Page and Lawrence Jacobs critique the America Speaks deliberative town hall meetings; John Sides summarizes, Andrew Gelman has more.

Chris Bowers adds context to Gallup's finding of growing conservative self-identification.

Jim Geraghty notes that even Rasmussen shows improving numbers for Obamacare (though they're "still pretty lousy").

Tom Schaller gives Scott Brown a positive political check-up.

Daniel Indiviglio plots the relationship between Congressional approval and House seats gained or lost.

Tom Jensen teases a close result in Wisconsin.

Bob Groves reports on the work of Census enumerators and declares Census non-response follow-up 99.6% complete.

Andrew Sullivan shares the chart from an Economist/YouGov poll showing very few Americans closely following the World Cup.

Lie to Me viewers were no better at spotting deception (via Lundry).


Democratic 'Pollster' Doubles as Tea Party Candidate?

Topics: Alan Grayson , Glen Bolger , Nathan Gonzales , Pollsters , Public Opinion Strategies , Victoria Torres

Well, here's a story you don't read every day: A pollster for Florida Democratic Congressman Alan Grayson is running as a Florida Tea Party candidate for the Florida legislature. Roll Call's Nathan Gonzales has the full truth-is-stranger-than-fiction story:

On Friday, Victoria Torres, 44, of Orlando qualified to run as a Tea Party candidate in state House district 51 in the last hours of the qualifying period.

A call to Torres was returned by Nick Egoroff, communications director for the Florida Tea Party, who described Torres as a "quasi-paralegal assistant who works in a law office." But apparently, Torres is also a pollster.

According to records from the Florida Department of State office, Torres incorporated Public Opinion Strategies Inc. in December 2008. In the first quarter of this year, Grayson's campaign made two payments to her firm, totaling $11,000, for polling and survey expenses.

The whole story is a bit murky, but the gist of the conspiracy theory is that Democrats like Grayson want to help the organization known as the Florida Tea Party field candidates in the fall elections, rather than in Republican primaries, in order to divide the Republican vote. You can judge for yourself by reading the full piece, but Gonzales has gathered evidence of a lot of odd coincidences.

As for candidate Torres' work as a pollster, Gonzales got a Grayson spokesman to confirm that the House candidate is the same person who did polling for Grayson as a "side business," but could find no further evidence that her company had done polling for anyone else. He also reports that Torres was one of three Grayson pollsters, adding, "the use of multiple pollsters simultaneously in the same cycle is highly uncommon for a Congressional candidate."

Yes it is.

Also, as Gonzales explains, "Public Opinion Strategies" is also the name of one of the "largest and best known" DC-based Republican polling firms, led by Glen Bolger, Bill McInturff, Neil Newhouse and a long list of partners. His article includes what for us is easily the quote of the day:

"We definitely do not poll for Democrats, nor do we have an office in Orlando," said Glen Bolger of the Virginia-based POS. "However, we do wish Congressman Grayson the worst of luck in November."

It's a strange story that's definitely worth reading in full.


Column on NCPP/AAPOR Effect & Answering Silver

Topics: Accuracy , David Shor , Fivethirtyeight , Nate Silver , Poll Accuracy

My column for this week follows up on last week's topic from a different angle: Nate Silver's intriguing finding that, as a group, pollsters that are members of the National Council on Public Polls (NCPP) or that endorsed the worthy Transparency Initiative of the American Association for Public Opinion Research (AAPOR) appear to be more accurate in forecasting election outcomes than other pollsters. While I'd like to see more evidence on this issue, it is definitely a topic worth further exploration. I hope you click through and read it all.

And yes, this has been the fourth or fifth item from me on Silver's ratings in a week, with two more from guest contributors, so it's time for me to move on to other subjects. However, since Nate responded yesterday, I want to clarify two things about my post on Friday:

First, he quarrels with my characterization of his effort to rate polls "from so many different types of elections spanning so many years into a single scoring and ranking system" as an "Everest-like challenge:"

Building the pollster ratings was not an Everest-like challenge. It was more like scaling some minor peak in the Adirondacks: certainly a hike intended for experienced climbers, but nothing all that prohibitive. I'd guess that, from start to finish, the pollster ratings required something like 100 hours of work. That's a pretty major project, but it's not as massive as our Presidential forecasting engine, or PECOTA, or the Soccer Power Index, all of which took literally months to develop, or even something like the neighborhoods project I worked on with New York magazine, which took a team of about ten of us several weeks to put together. Nor is anything about the pollster ratings especially proprietary. For the most part, we're using data sources that are publicly available to anyone with an Internet connection, and which are either free or cheap. And then we're applying some relatively basic, B.A.-level regression analysis. Every step is explained very thoroughly.

The point of my admittedly imperfect Everest metaphor was not that Silver has attempted something that requires a massive investment of time, money or physical endurance, but rather that the underlying idea is ambitious: using a series of regression models to combine polls from 10 years and a wide variety of elections, from local primaries to national presidential general elections, fielded as far back as three weeks before each election, with statistical controls to level the playing field so that all pollsters are treated fairly.

I am not an expert in statistical modeling, but when I ask those who are, they keep telling me the same things: Nate's scoring system is based on about four different regression models (only one of which he has shared), and he does not provide either standard errors of the scores (so we can better understand the level of precision) or the results of sensitivity testing (to see whether the results change a little or a lot when he varies the assumptions slightly). If there is "nothing especially proprietary" about the models, then I don't understand the reluctance to share these details.

Second, I will concede that my headline on Friday's post -- "Rating Pollster Accuracy: How Useful?" -- was an attempt to be both pithy and polite that may have implied too broad a dismissal of the notion of rating pollster accuracy. I do see value in such efforts, as I tried to explain up front, especially as a means of assessing polling methods generally and new technologies in particular. SurveyUSA, for example, has invested much effort into their own pollster scorecards over the years as a means of demonstrating the accuracy of their automated survey methodology in forecasting election outcomes. That sort of analysis is highly "useful."

And I also agree, as Berwood Yost and Chris Borick wrote in their guest contribution last week, that individual pollster ratings offer the promise of "helping the public determine the relative effectiveness of polls in predicting election outcomes [that] can be compared to Consumer Reports." The reason why I have found past efforts to score individual pollsters not very useful toward that end is that it's difficult to disentangle pollster-specific accuracy from the loud noise of random sampling error, especially when we have only a handful of polls to score. And as I wrote in December 2008, very small changes in the assumptions made for scoring accuracy in 2008 produced big differences in the resulting rankings. Except for identifying the occasional clunker, efforts to rate the most prolific pollsters usually produce little or no statistically meaningful differentiation. So in that sense, they have not proven very "useful."

That said, as Nate argues, the challenges may be surmountable. I'm confident that other smart statisticians will produce competing ways of assessing pollster performance, and when they do, we will link to and discuss them. David Shor's effort, posted just last night, is an example with great promise.

Update: John Sides weighs in on the usefulness of pollster ratings.


Rating Pollster Accuracy: How Useful?

Topics: Accuracy , Brendan Nyhan , Courtney Kennedy , Fivethirtyeight , Nate Silver

I have been posting quite a bit lately on the subject of the transparency of Nate Silver's recently updated pollster ratings, so it was heartening to see his announcement yesterday that FiveThirtyEight has established a new process to allow pollsters to review their own polls in his database. That is a very positive step and we applaud him for it.

I haven't yet expressed much of an opinion on the ratings themselves or their methodology, and have hesitated to do so because I know some will see criticism from this corner as self-serving. Our site competes with FiveThirtyEight in some ways, and in unveiling these new ratings, Nate emphasized that "rating pollsters is at the core of FiveThirtyEight's mission, and forms the backbone of our forecasting models."

Pollster and FiveThirtyEight serve a similar mission, though we approach it differently: Helping those who follow political polls make sense of the sometimes conflicting or surprising results they produce. We are, in a sense, both participating in a similar conversation, a conversation in which, every day, someone asks some variant of the question, "Can I Trust This Poll?"

For Nate Silver and FiveThirtyEight, the answer to that question often flows from their ratings of pollster accuracy. During the 2008 campaign season, Nate leaned heavily on earlier versions of his ratings in posts that urged readers to pay less attention to some polls and more to others, with characterizations running the gamut from "pretty awful" or "distinctly poor" to the kind of pollster "I'd want with me on a desert island." He also built those ratings into his forecasting models, explaining to New York Magazine that other sites that average polls (among them RealClearPolitics and Pollster.com) "have the right idea, but they're not doing it quite the right way." The right way, as the article explained, was to average so that "the polls that were more accurate [would] count for more, while the bad polls would be discounted."

For better or worse, FiveThirtyEight's prominence makes these ratings central to our conversation about how to interpret and aggregate polls, and I have some serious concerns about the way these ratings are calculated and presented. Some commentary from our perspective is in order.

What's Good

Let's start with what's good about the ratings.

First, most pollsters see value in broadly assessing poll accuracy. As the Pew Research Center's Scott Keeter has written (in a soon to be published chapter), "election polls provide a unique and highly visible validation of the accuracy of survey research," a "final exam" for pollsters that "rolls around every two or four years." And, while Keeter has used accuracy measurements to assess methodology, others have used accuracy scores to tout their organizations' successes, even if their claims sometimes depend on cherry-picked methods of scoring, cherry-picked polls or even a single poll. So Silver deserves credit for taking on the unforgiving task of scoring individual pollsters.

Second, by gathering pre-election poll results across many different types of elections over more than ten years, Silver has also created a very useful resource to help understand the strengths and weaknesses of pre-election polling. One of the most powerful examples is the table, reproduced below, that he included in his methodology review. It shows that poll errors are typically smallest for national presidential elections and get bigger (in ascending order) for polls on state-level presidential, senate, governor, and primary elections.

2010-06-17-silver-election-error.png

Third, I like the idea of trying to broaden the scoring of poll accuracy beyond the final poll conducted by each organization before an election. He includes all polls with a "median date" (at least halfway completed) within 21 days of the election. As he writes, we have seen some notable examples in recent years of pollsters whose numbers "bounce around a lot before 'magically' falling in line with the broad consensus of other pollsters." If we just score "the last poll," we create incentives for ethically challenged pollsters to try to game the scorecards.

Of course, Silver's solution creates a big new challenge of its own: How to score the accuracy of polls taken as many as three weeks before an election while not penalizing pollsters that are more active in races like primary elections that are more prone to huge late swings in vote preference. A pollster might provide a spot-on measurement of a late breaking trend in a series of tracking polls, but only their final poll would be deemed "accurate."

Fourth, for better or worse, Silver has already done a service by significantly raising the profile of the Transparency Initiative of the American Association for Public Opinion Research (AAPOR). Much more on that subject below.

Finally, you simply have to give Nate credit for the sheer chutzpah necessary to take on the Everest-like challenge of combining polls from so many different types of elections spanning so many years into a single scoring and ranking system. It's a daunting task.

A Reality Check

While the goals are laudable, I want to suggest a number of reasons to take the resulting scores, and especially the rankings of pollsters using those scores, with huge grains of salt.

First, as Silver himself warns, scoring the accuracy of pre-election polls has limited utility. They tell you something about whether pollsters "accurately [forecast] election outcomes, when they release polls into the public domain in the period immediately prior to an election." As such:

The ratings may not tell you very much about how accurate a pollster is when probing non-electoral public policy questions, in which case things like proper question wording and ordering become much more important. The ratings may not tell you very much about how accurate a pollster is far in advance an election, when definitions of things like "likely voters" are much more ambiguous. And they may not tell you very much about how accurate the pollsters are when acting as internal pollsters on behalf of campaigns.

I would add at least one more: Given the importance of likely voter models in determining the accuracy of pre-election polls, these ratings also tell you little about a pollster's ability to begin with a truly representative sample of all adults.

Second, even if you take the scores at face value, the final scores that Silver reports vary little from pollster to pollster. They provide little real differentiation among most of the pollsters on the list. What is the range of uncertainty, or if you will, the "margin of error" associated with the various scores? Silver told Markos Moulitsas that "the absolute difference in the pollster ratings is not very great. Most of the time, there is no difference at all."

Also, in response to my question on this subject, he advised that while "estimating the errors on the PIE [pollster-introduced error] terms is not quite as straightforward as it might seem," he assumes a margin of error "on the order of +/- .4" at a 95% confidence level. He adds:

We can say with a fair amount of confidence that the pollsters at the top dozen or so positions in the chart are skilled, and the bottom dozen or so are unskilled i.e. "bad". Beyond that, I don't think people should be sweating every detail down to the tenth-of-a-point level.

That information implies, as our commenter jme put it yesterday, that "his model is really only useful for classifying pollsters into three groups: Probably good, probably bad and everyone else." And that assumes that this confidence is based on an actual computation of standard errors for the PIE scores. Commenter Cato has doubts.
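To see how little differentiation a margin of error of that size allows, consider a small hypothetical illustration. The scores below are invented; only the +/- .4 figure comes from Silver's reply, and the comparison uses the usual rough rule of combining two independent margins of error.

from itertools import combinations

# Hypothetical PIE-style scores (lower = "better"). Two scores are treated as
# distinguishable only if they differ by more than the combined margin of error,
# roughly sqrt(0.4**2 + 0.4**2) ~= 0.57 under an independence assumption.
scores = {"Firm A": -0.30, "Firm B": -0.10, "Firm C": 0.05, "Firm D": 0.20, "Firm E": 0.60}
combined_moe = (0.4 ** 2 + 0.4 ** 2) ** 0.5

for (a, sa), (b, sb) in combinations(scores.items(), 2):
    verdict = "distinguishable" if abs(sa - sb) > combined_moe else "statistical tie"
    print(f"{a} vs {b}: difference {abs(sa - sb):.2f} -> {verdict}")
# With these invented numbers, only two of the ten comparisons separate;
# the rest are statistical ties.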

But aside from the mechanics, if all we can conclude is that Pollster A produces polls that are, on average, a point or two less variable than Pollster B, do these accuracy scores help us understand why, to pick a recent example, one poll shows a candidate leading by 21 points and another shows him leading by 8 points?

Third, even if you take the PIE scores at face value, I would quarrel with the notion that they reflect pollster "skill." This complaint has come up repeatedly in my conversations with survey methodologists over the last two weeks. For example, Courtney Kennedy, a senior methodologist for Abt SRBI, tells me via email that she finds the concept of skill "odd" in this context:

Pollsters demonstrate their "skill" through a set of design decisions (e.g., sample design, weighting) that, for the most part, are quantifiable and could theoretically be included in the model. He seems to use "skill" to refer to the net effect of all the variables that he doesn't have easy access to.

Brendan Nyhan, the University of Michigan academic who frequently cross-posts to this site, makes a similar point via email:

It's not necessarily true that the dummy variable for each firm (i.e. the "raw score") actually "reflects the pollster's skill" as Silver states. These estimates instead capture the expected difference in accuracy of that firm's polls controlling for other factors -- a difference that could be the result of a variety of factors other than skill. For instance, if certain pollsters tend to poll in races with well-known incumbents that are easier to poll, this could affect the expected accuracy of their polls even after adjusting for other factors. Without random assignment of pollsters to campaigns, it's important to be cautious in interpreting regression coefficients.

Fourth, there are good reasons to take the scores at something less than face value. They reflect the end product of a whole host of assumptions that Silver has made about how to measure error, and how to level the playing field and control for factors -- like type of election and timing -- that may give some pollsters an advantage. Small changes in those assumptions could alter the scores and rankings. For example, he could have used different measures of error (that make different assumptions about how to treat undecided voters), looked at different time intervals (Why 21 days? Why not 10? Or 30?), gathered polls for a different set of years or made different decisions about the functional form of his regression models and procedures. My point here is not to question the decisions he made, but to underscore that different decisions would likely produce different rankings.
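To make that last point concrete, here is a small hypothetical example showing how just one of those choices, the error measure, can score the same poll quite differently. The poll and result are invented; the two measures below (error on the candidate margin versus error after allocating undecideds proportionally) are common variants in the accuracy literature, not necessarily the ones Silver used.

# Hypothetical poll: 45% Smith, 40% Jones, 15% undecided. Actual result: 55% Smith, 45% Jones.
poll = {"smith": 45.0, "jones": 40.0}
result = {"smith": 55.0, "jones": 45.0}

# Measure 1: absolute error on the candidate margin, undecideds ignored.
margin_error = abs((poll["smith"] - poll["jones"]) - (result["smith"] - result["jones"]))

# Measure 2: allocate undecideds proportionally, then average the per-candidate absolute error.
decided = poll["smith"] + poll["jones"]
allocated = {name: share / decided * 100.0 for name, share in poll.items()}
allocation_error = sum(abs(allocated[name] - result[name]) for name in poll) / len(poll)

print(f"Margin-based error:            {margin_error:.1f} points")      # 5.0
print(f"Proportional-allocation error: {allocation_error:.1f} points")  # about 2.1

The same poll looks more than twice as far off under one reasonable measure as under another; multiply differences like that across hundreds of polls and a handful of modeling choices, and rankings can shuffle considerably.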

Fifth, and most important, anyone who relies on Silver's PIE scores needs to understand the implications of his "regressing" the scores to "different means," a complex process that essentially gives bonus points to pollsters that are members of the National Council on Public Polls (NCPP) or that publicly endorsed AAPOR's Transparency Initiative prior to June 1, 2010. These bonus points, as you will see, do not level the playing field among pollsters. They do just the opposite.

In his methodological discussion, Silver explains that he combined NCPP membership and endorsement of the AAPOR initiative into a single variable and found, with "approximately" 95% confidence, "that the [accuracy] scores of polling firms which have made a public commitment to disclosure and transparency hold up better over time." In other words, the pollsters he flagged with an NCPP/AAPOR label appeared to be more accurate than the rest.

His PIE scores include a complex regressing-to-the-mean procedure that aims to minimize raw error scores that are randomly very low or very high for pollsters with relatively few polls in his database. And -- a very important point -- he says that the "principle purpose" of these scores is to weight pollsters higher or lower as part of FiveThirtyEight's electoral forecasting system.

So he has opted to adjust the PIE scores so that NCPP/AAPOR pollsters get more points for accuracy and others get less. The adjustment effectively reduces the PIE error scores by as much as half a point for pollsters in the NCPP/AAPOR category, and pollsters with the fewest polls in his database get the biggest boost in their scores. He applies a similarly sized, analogous penalty to three firms that conduct surveys over the internet. He explains that his rationale is "not to evaluate how accurate a pollster has been in the past -- but rather, to anticipate how accurate it will be going forward."

Read that last sentence again, because it's important. He has adjusted the PIE scores that he uses to rank "pollster performance" not only on their individual performance looking back, but also on his prediction on how they will perform going forward.
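Since Silver has not published the full formula, here is a deliberately simplified sketch, built entirely on my own assumptions, of what regressing scores toward group-specific means can do. The blending rule, the pseudo-count of 30 and every number below are invented for illustration; the point is only to show how a lower group mean functions as a bonus, and why pollsters with few polls in the database are affected most.

def shrunken_score(raw_score, n_polls, group_mean, prior_weight=30.0):
    """Pull a raw error score toward its group mean; fewer polls means a stronger pull.

    prior_weight acts like a pseudo-count. This is an illustrative rule,
    not Silver's actual procedure.
    """
    w = n_polls / (n_polls + prior_weight)
    return w * raw_score + (1.0 - w) * group_mean

# Hypothetical pollsters: (label, raw error score, polls in database, group mean).
# Lower scores are "better." The NCPP/AAPOR group gets a lower (better) mean
# than everyone else, mirroring the bonus/penalty effect described above.
examples = [
    ("Firm A (NCPP/AAPOR, 10 polls)",  1.00,  10, -0.20),
    ("Firm B (NCPP/AAPOR, 200 polls)", 1.00, 200, -0.20),
    ("Firm C (neither, 10 polls)",     1.00,  10,  0.30),
]
for label, raw, n, mean in examples:
    print(f"{label}: raw {raw:+.2f} -> adjusted {shrunken_score(raw, n, mean):+.2f}")
# Identical raw scores diverge: the small NCPP/AAPOR firm lands at +0.10, the small
# non-member at +0.48, while the high-volume firm barely moves (+0.84).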

Regular readers will know that I am an active AAPOR member and strong booster of the initiative and of efforts to improve pollster disclosure generally. I believe that transparency may tell us something, indirectly, about survey quality. So I am intrigued by Silver's findings concerning the NCPP/AAPOR pollsters as a group, but I'm not a fan of the bonus/penalty point system he built into the ratings of individual pollsters. Let me show you why.

The following is a screen-shot of the table Silver provides that ranks all 262 pollsters, showing just the top 30. Keep in mind this is what his readers get to when they click on the "Pollster Ratings" tab displayed prominently at the top of FiveThirtyEight.com:

2010-06-17-538ratings-screenshot.png

The NCPP/AAPOR pollsters are denoted with a blue star. They dominate the top of the list, accounting for 23 of the top 30 pollsters.

But what would have happened had Silver awarded no bonus points? We don't know for certain, because he provided no PIE scores calculated any other way, but we did our best to replicate Silver's scoring method while recalculating the PIE score without any bonus or penalty points (regressing the scores to the single mean of 0.12). That table appears below.**

[I want to be clear that the following chart was not produced or endorsed by Nate Silver or FiveThirtyEight.com. We produced it for demonstration purposes only, although we tried to replicate his calculations as closely as we could. Also note that the "Flat PIE" scores do not reflect Pollster.com's assessment or ranking of pollster accuracy, and no one should cite them as such].

2010-06-17-flatPIE.png

The top 30 look a lot different once we remove the bonus and penalty points. The number of NCPP/AAPOR designated pollsters in the top 30 drops from 23 to 7 (although the 7 that remain all fall within the top 13, something that may help explain the underlying NCPP/AAPOR effect that Silver reports). Those bumped from the top 30 often move far down the list. You can download our spreadsheet to see all the details, but nine pollsters awarded NCPP/AAPOR bonus points drop in the rankings by 100 or more places.

[In a guest post earlier today on Pollster.com, Monmouth University pollster Patrick Murray describes a very similar analysis he did using the same data. Murray regressed the PIE scores to a different single mean (0.50), yet describes a very similar shift in the rankings].

Now I want to make clear that I do not question Silver's motives in regressing to different means. I am certain he genuinely believes the NCPP/AAPOR adjustment will improve the accuracy of his election forecasts. If the adjustment only affected those forecasts -- his poll averages -- I probably would not comment. But it does more than that. His adjustments appear to significantly and dramatically alter rankings prominently promoted as "pollster ratings," ratings that are already having an impact on the reputations and livelihoods of individual pollsters.

That's a problem.

And it adjusts those ratings in a way that's not justified by his finding. Joining NCPP or endorsing the AAPOR initiative may be statistically related to other aspects of pollster philosophy or practice that made them more accurate in the past, but no one -- not even Nate Silver -- believes that a mere commitment made a few weeks ago to greater future transparency caused pollsters to be more accurate over the last ten years.

Yet in adjusting his scores as he does, Silver is increasing the accuracy ratings of some firms and penalizing others on those grounds, in a way that is also contrary to AAPOR's intentions. On May 14, when AAPOR's Peter Miller presented the initial list of organizations that had endorsed the transparency initiative, he specifically warned his audience that many organizations would soon be added to the list because "I have not been able to make contact with everyone" while others faced contractual prohibitions Miller believed could be changed over time. As such, he offered this explicit warning: "Don't make any inferences about blanks up here, [about] names you don't see on this list."***

And one more thought: If you look back at both tables above, you will notice that Strategic Vision, LLC -- whose name Silver strikes out and marks with a black "x" because he concludes that its polling "was probably fake" -- cracks the top 30 "most accurate" pollsters (of 262) on both lists.

If a pollster can reach the 80th or 90th percentile for accuracy with made-up data, imagine how "accurate" a pollster can be by simply taking other pollsters' results into account when tweaking their likely voter model or weighting real data. As such, how useful are such ratings for assessing whether pollsters are really starting with representative samples of adults?

My bottom line: This sort of pollster rating and ranking is interesting, but it is of very limited utility in sorting out "good" pollsters from "bad."

**Silver has not, as far as I can tell, published the mean he would regress PIE to had he chosen to regress to a single mean. I arrived at 0.12 based on an explanation he provided to Doug Rivers of YouGov/Polimetrix (who is also the owner of Pollster.com) that Rivers subsequently shared with me: "the [group mean] figures are calibrated very slightly differently than the STATA output in order to ensure that the average adjscore -- weighted by the number of polls each firm has conducted -- is exactly zero." A "flat mean" of 0.12 creates a weighted average adjscore of zero. I emailed Silver this morning asking if he could confirm. As of this writing he has not responded.

***In the interests of truly full transparency, I should disclose that I suggested to Nate that he look at pollster accuracy among pollsters that had endorsed the AAPOR Transparency Initiative before he posted his ratings. He had originally found the apparent effect looking only at members of NCPP, and he sent an email to Jay Leve (of SurveyUSA), Gary Langer (polling director of ABC News) and me on June 1 to share the results and ask some additional questions, including: "Are there any variables similar to NCPP membership that I should consider instead, such as AAPOR membership?" AAPOR membership is problematic, since AAPOR is an organization of individuals and not firms, so I suggested he look at the Transparency Initiative list. In his first email, Silver also mentioned that, "the ratings for NCPP members will be regressed to a different mean than those for non-NCPP members." I will confess that at the time I had no idea what that meant, but in fairness, I certainly could have raised an objection then and did not.


AAPOR Adds Transparency Initiative Endorsements

Topics: AAPOR , AAPOR Transparency Initiative , Disclosure , Magellan Data & Mapping , Peter Miller , Quinnipiac University Poll

And speaking of AAPOR's Transparency Initiative, the organization announced via email yesterday the names of 11 more survey organizations that recently pledged their support for the evolving program:

  • Elon University Poll
  • The Elway Poll
  • Magellan Data and Mapping Strategies
  • Monmouth University Polling Unit
  • Muhlenberg College Institute of Public Opinion
  • NORC
  • Public Policy Institute of California
  • Quinnipiac University Poll
  • University of Arkansas at Little Rock Survey Research Center
  • University of Wisconsin Survey Center
  • Western New England College Polling Institute

Among the new names, the two most notable for regular readers are probably Quinnipiac University, the pollster active in many important races in 2010, and Magellan Data and Mapping Strategies, a relatively new firm that has released mostly automated pre-election polls in recent months. When newcomers like Magellan choose to endorse the Transparency Initiative, it's apparent that the "carrot" that past AAPOR President Peter Miller hopes to offer participating pollsters as an incentive is beginning to work.

The new names bring the total number of participants up to 44. When the initiative is launched in about a year, participating pollsters will routinely release essential facts about their methodology and deposit their information in a public data archive. AAPOR has also posted an update on their work-in-progress on the initiative. I wrote about it in more detail here.


Rasmussen Profile in Washington Post

Topics: AAPOR Transparency Initiative , Automated polls , Disclosure , IVR Polls , Jason Horowitz , Rasmussen , Scott Rasmussen , Washington Post

Today's Washington Post Style Section features a lengthy Jason Horowitz profile of Scott Rasmussen, the pollster whose automated surveys have "become a driving force in American politics." Horowitz visited Rasmussen's New Jersey office -- he leads with the "fun fact" that Rasmussen "works above a paranormal bookstore crowded with Ouija boards and psychics on the Jersey Shore" -- and talked to a wide array of pollsters about Rasmussen, including Scott Keeter, Jay Leve, Doug Rivers, Mark Penn, Ed Goeas and yours truly. It's today's must-read for polling junkies.

It's also apparent from the piece that Rasmussen won't be joining AAPOR's Transparency Initiative any time soon:

Rasmussen said he didn't take the criticism personally, but he grew visibly annoyed when asked why he didn't make his data -- especially the percentage of people who responded to his firm's calls -- more transparent.

"If I really believed for a moment that if we played by the rules of AAPOR or somebody else they would embrace us as part of the club, we would probably do that," he said, his voice taking on an edge. "But, number one, we don't care about being part of the club."

With due respect, AAPOR's goal in promoting transparency is not about getting anyone to join a club (and yes, interests disclosed, I'm an AAPOR member) or even about following certain methodological "rules"; it's about whether your work can "stand by the light of day," as ABC's Gary Langer put it recently.

And speaking of methodological rules, I want to add a little context to Horowitz' quote from me:

"The firm manages to violate nearly everything I was taught what a good survey should do," said Mark Blumenthal, a pollster at the National Journal and a founder of Pollster.com. He put Rasmussen in the category of pollsters whose aim, first and foremost, is "to get their results talked about on cable news."

The quotation is consistent with an argument I made last summer in a piece titled "Can I Trust This Poll," which explained how pollsters like Rasmussen are challenging the rules I was taught:

A new breed of pollsters has come to the fore, however, that routinely breaks some or all of these rules. None exemplifies the trend better than Scott Rasmussen and the surveys he publishes at RasmussenReports.com. Now I want to be clear: I single out Rasmussen Reports here not to condemn their methods but to make a point about the current state of "best practices" of the polling profession, especially as perceived by those who follow and depend on survey data.

[...]

If you had described Rasmussen's methods to me at the dawn of my career, I probably would have dismissed it the way my friend Michael Traugott, a University of Michigan professor and former AAPOR president, did nine years ago. "Until there is more information about their methods and a longer track record to evaluate their results," he wrote, "we shouldn't confuse the work they do with scientific surveys, and it shouldn't be called polling."

But that was then.

In the piece, I go on to review the findings of Traugott and AAPOR's report on primary polling in 2008, as well as Nate Silver's work that year, both of which found automated polling to be at least as accurate as more conventional surveys in predicting 2008 election outcomes.

The spirit of "that was then" is also evident in quotations at the end of the Horowitz profile that remind us that automated polling depends on people's willingness to answer landline telephones and is barred by federal law from calling respondents on their cell phones:

"When you were growing up, you screamed, 'I got it, I got it,' and raced your sister to the telephone," said Jay Leve, who runs SurveyUSA, a Rasmussen competitor who uses similar automated technology. "Today, nobody wants to get the phone."

Leve thinks telephone polling, and the whole concept of "barging in" on a voter, is kaput. Instead, polls will soon appear in small windows on computer or television screens and respondents will reply at their leisure. For Doug Rivers, the U.S. chief executive of YouGov, a U.K.-based online polling company that is building a vast panel of online survey takers, debating the merits of Rasmussen's method struck him as "a little odd given we're in 2010."

Again, I'm doing the full profile little justice -- please go read it all.


Oil Spill Pie Charts Suck 'Outliers'

Topics: Outliers Feature

John Harwood sees little impact from the oil spill on Obama approval (with an assist from Charles Franklin).

Joel Benenson frames climate change legislation as a political winner (via Smith)

Pete Brodnitz had a good night last week.

Mike Huckabee asks pollsters to include him on 2012 trial heat questions.

Marco Rubio is not worried about polls.

PPP invites you to vote on where they poll this weekend.

Lymari Morales counts the many ways Gallup posts updates of Obama job approval.

Bob Groves reports that the Census non-response follow-up is "about 93% complete... somewhat ahead of schedule and certainly under-budget."

Junk charts says the BP oil spill brings out the worst in pie charts.


Transparency and Pollster Ratings: Update

Topics: Clifford Young , Disclosure , Gary Langer , Joel David Bloom , Nate Silver , poll accuracy , Taegan Goddard

[Update: On Friday night, I linked to my column for this week, which appeared earlier than usual. It covers the controversy over Nate Silver's pollster ratings and an exchange last week between Silver, Political Wire's Taegan Goddard and Research 2000's Del Ali over the transparency of the FiveThirtyEight pollster ratings. In linking to the column, I also posted additional details on the polls that Ali claimed Silver had missed and promised more on the subject of transparency that I did not have a chance to include in the column. That discussion follows below.]

Although my column discusses the transparency of the database Nate Silver created to rate pollster accuracy, it does not address transparency with regard to the details of the statistical models used to generate the ratings.

When Taegan Goddard challenged the transparency of the ratings, Silver shot back that the transparency is "here in an article that contains 4,807 words and 18 footnotes," and explains "literally every detail of how the pollster ratings are calculated."

Granted, Nate goes into great detail describing how his rating system works, but several pollsters and academics I talked to last week wanted to see more details of the model and the statistical output in order to better evaluate whether the ratings perform as advertised.

For example, Joel David Bloom, a survey researcher at the University at Albany who has done a similar regression analysis of pollster accuracy, said he "would need to see the full regression table" for Silver's initial model that produces the "raw scores," a table that would include the standard error and level of significance for each coefficient (or score). He also said he "would like to see the results of statistical tests showing whether the addition of large blocks of variables (e.g., all the pollster variables, or all the election-specific variables) added significantly to the model's explanatory power."

Similarly, Clifford Young, pollster and senior vice president at IPSOS Public Affairs, said that in order to evaluate Silver's scores, he would "need to see the fit of the model and whether the model violates or respects the underlying assumptions of the model," and more specifically, "what's the equation, what are all the variables, are they significant or aren't they significant."
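For readers who want a concrete picture of what Bloom and Young are asking for, here is a minimal sketch in Python -- emphatically not Silver's actual model; the variables and data are hypothetical -- showing a full regression table (coefficients, standard errors, significance) and a test of whether a block of pollster variables adds explanatory power:

    # A minimal sketch, NOT Silver's model: hypothetical poll-level data, a
    # "restricted" model with election-specific variables only, and a "full"
    # model that adds a block of pollster dummies.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    polls = pd.DataFrame({
        "abs_error": np.abs(rng.normal(0, 3, n)),              # |poll margin - actual margin|
        "days_to_election": rng.integers(1, 22, n),             # within a 21-day window
        "log_sample_size": np.log(rng.integers(400, 1500, n)),
        "pollster": rng.choice(["Firm A", "Firm B", "Firm C", "Firm D"], n),
    })

    restricted = smf.ols("abs_error ~ days_to_election + log_sample_size",
                         data=polls).fit()
    full = smf.ols("abs_error ~ days_to_election + log_sample_size + C(pollster)",
                   data=polls).fit()

    # The "full regression table": coefficients, standard errors, p-values
    print(full.summary())

    # Does the block of pollster variables significantly improve the fit?
    f_stat, p_value, df_diff = full.compare_f_test(restricted)
    print(f"Pollster block: F = {f_stat:.2f}, p = {p_value:.3f}")

Output along these lines is what would let an outside analyst judge whether pollster "scores" are estimated precisely enough to be meaningful.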

I should stress that no one quoted above doubts Silver's motives or questions the integrity of his work. They are, however, trying to understand and assess his methods.

I emailed Silver and asked both about estimates of the statistical uncertainty associated with his error scores and about his decision not to provide more complete statistical output. On the "margin of error" of the accuracy scores, he wrote:

Estimating the errors on the PIE [pollster-introduced error] terms is not quite as straightforward as it might seem, but the standard errors generally seem to be on the order of +/- .2, so the 95% confidence intervals would be on the order of +/- .4. We can say with a fair amount of confidence that the pollsters at the top dozen or so positions in the chart are skilled, and the bottom dozen or so are unskilled i.e. "bad". Beyond that, I don't think people should be sweating every detail down to the tenth-of-a-point level.
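For those checking the arithmetic, the jump from a standard error of about 0.2 to a 95 percent interval of about 0.4 assumes a roughly normal sampling distribution (my assumption in restating it, not a detail Silver provided):

    # Rough arithmetic behind the quote above, assuming approximate normality
    # of the PIE estimates (my assumption, not Silver's stated method).
    se = 0.2                        # reported standard error of a PIE score
    half_width = 1.96 * se          # 95% confidence interval half-width
    print(f"95% CI: +/- {half_width:.2f}")   # about +/- 0.4 points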

In a future post, I'm hoping to discuss the ratings themselves and whether it is appropriate to interpret differences in the scores as indicative of "skill" (short version: I'm dubious). Today's post, however, is about transparency. Here is what Silver had to say about not providing full statistical output:

Keep in mind that we're a commercial site with a fairly wide audience. I don't know that we're going to be in the habit of publishing our raw regression output. If people really want to pick things apart, I'd be much more inclined to appoint a couple of people to vet or referee the model like a Bob Erikson. I'm sure that there are things that can be improved and we have a history of treating everything that we do as an ongoing work-in-progress. With that said, a lot of the reason that we're able to turn out the volume of academic-quality work that we do is probably because (ironically) we're not in academia, and that allows us to avoid a certain amount of debates over methodological esoterica, in which my view very little value tends to be added.

To be clear, no one I talked to is urging FiveThirtyEight to start regularly publishing raw regression output. Even in this case, I can understand why Silver would not want to clutter up his already lengthy discussion with the output of a model featuring literally hundreds of independent variables. However, a link to an appendix in the form of a PDF file would have added no clutter.

I'm also not sure I understand why this particular scoring system requires a hand-picked referee or vetting committee. We are not talking about issues of national security or executive privilege.

That said, the pollster ratings are not the fodder of a typical blog post. Many in the worlds of journalism and polling are taking these ratings very seriously. They have already played a major role in getting one pollster fired. Soon these ratings will appear under the imprimatur of the New York Times. So with due respect, these ratings deserve a higher degree of transparency than FiveThirtyEight's typical work.

Perhaps Silver sees his models as proprietary and prefers to shield the details from the prying eyes of potential competitors (like, say, us). Such an urge would be understandable but, as Taegan Goddard pointed out last week, also ironic. Silver's scoring system gives bonus accuracy points to pollsters "that have made a public commitment to disclosure and transparency" through membership in the National Council on Public Polls (NCPP) or through commitment to the Transparency Initiative launched this month by the American Association for Public Opinion Research (AAPOR), because, he says, his data show that those firms produce more accurate results.

The irony is that Silver's reluctance to share details of his models may stem from some of the same instincts that have made many pollsters, including AAPOR members, reluctant to disclose more about their methods or even to support the Transparency Initiative itself. Those instincts are exactly what AAPOR's leadership hopes the Initiative will change.

Last month, AAPOR's annual conference included a plenary session that discussed the Initiative (I was one of six speakers on the panel). The very last audience comment came from a pollster who said he conducts surveys for a small midwestern newspaper. "I do not see what the issue is," he said, referring to the reluctance of his colleagues to disclose more about their work, "other than the mere fact that maybe we're just so afraid that our work will be scrutinized." He recalled an episode in which he had been ready to disclose methodological data to someone who had emailed a request but was stopped by the newspaper's editors, who were fearful "that somebody would find something to be critical of and embarrass the newspaper."

Gary Langer, the director of polling at ABC News, replied to the comment. His response is a good place to conclude this post:

You're either going to be criticized for your disclosure or you're going to be criticized for not disclosing, so you might as well be on the right side of it and be criticized for disclosure. Our work, if we do it with integrity and care, will and can stand the light of day, and we speak well of ourselves, of our own work and of our own efforts by undertaking the disclosure we are discussing tonight.


Transparency and Pollster Ratings

Topics: Del Ali , Disclosure , Nate Silver , Poll Accuracy , Research2000 , Taegan Goddard

My column for next week has been posted a little earlier than usual. It covers the controversy over Nate Silver's pollster ratings and a bloggy exchange over the last day or two between Silver, Political Wire's Taegan Goddard and Research 2000's Del Ali over the transparency of the FiveThirtyEight pollster ratings. I have a few important footnotes and another aspect of transparency to review, but real life intrudes. So please click through and read it all, but come back to this post later tonight for an update.

***

I'm going to update this post in two parts. First, I want to add some footnotes to the column, which covers the questions that have been raised about the database of past polls that Nate Silver created and used to score pollsters. The second part will discuss transparency regarding additional aspects of Nate's model and scoring.

I want to emphasize that nothing I learned this week leads me to believe that Silver has intentionally misled anyone or done anything sinister. I have questions about the design and interpretation of the models used to score pollsters, and I wish he would be more transparent about the data and mechanics, but these are issues of substance. I'm not questioning his motives.

So on the footnotes: Earlier today, Del Ali of Research 2000 sent us a list of 12 of his poll results that he claimed Silver should have included in his database, plus 2 more that he said were included with errors. Later in the morning he sent one more omitted result. We did our best to review that list and confirm the information provided. Here is what we found.

First, the two polls included in Silver's database with errors:

  • 2008-FL President (10/20-10/22) - Error (+3 Obama not +4)
  • 2008-ME President (10/13-10/15) - Error (+17 Obama not +15)

These are both relatively small errors, and we noticed that the apparent mistake on the Maine poll was also present in the DailyKos summary of the poll published at the time.

There were four more polls in the omitted category that were either conducted more than 21 days before the election (Hawaii and the Florida House race) or outside the range of races that Silver said he included (he did not include any gubernatorial primaries before 2010). [Correction: We initially overlooked that the NY-23 special election was also omitted intentionally, under Silver's criterion of excluding races "where a candidate who had a tangible chance of winning the election drops out of it prematurely."]

  • 2010-HI-01 Special Election House (4/11-4/14)
  • 2006-FL-16 House (10/11-10/13)
  • 2002-IL Dem Primary Governor (3/11-3/13)
  • 2009-NY-23 Special (10/26-28)

Some may quarrel with Silver's decisions about the range of dates he sets as a cut-off, and I'm hoping to write more about that aspect of his scoring system. But as long as Silver applied his stated rules consistently, these examples do not qualify as erroneous omissions.

That leaves nine more Research 2000 polls that appear to be genuine omissions, in the sense that they meet Silver's criteria but were not included in the database:

  • 2000-IN President (10/28-10/30)
  • 2000-NC President (10/28-10/30)
  • 2000-NC Governor (10/28-10/30)
  • 2002-IN-02 House (10/27-10/29)
  • 2004-IA-03 (10/25-10/27)
  • 2004-NV Senate (10/19-10/21)
  • 2008-ID Senate (10/21-10/22)
  • 2008-ID-01 (10/21-10/22)
  • 2008-FL-18 (10/20-10/22)
  • 2009-NY-23 Special (10/26-28) [struck per the correction above; this poll was omitted intentionally]

Do these omissions indicate sloppiness? We were able to find the NY-23 special election results on Pollster.com and elsewhere, the 2004 Nevada Senate and 2002 Indiana House polls on the Polling Report, and the Iowa 3rd CD poll from 2004 with a Google search at KCCI.com. So those examples should have been included but were not.

However, we could not find the 2000 North Carolina poll anywhere except the subscriber-only archives of The Hotline (although, oddly, with different field dates: 10/30-31). The Hotline database is not among Silver's listed resources.

We also checked the three results (from two polls) missing for 2008 and found they were also missing from the compilations published by our site, RealClearPolitics and the Polling Report during the campaign (though we did find mention of the Idaho poll on Research2000.com). We could not find the Indiana presidential result from 2000 anywhere.

The point of all of this is that only a small number of these examples qualify as mistakes attributable to Silver's team; most of the other oversights were also made by his sources. And even if we correct all of the errors and include all of the inside-the-21-day-window omissions, the average error for Research 2000 changes hardly at all (as summarized in the column [and leaving out NY-23 does not change the average error]). These examples still represent imperfections in the data that should be corrected, and we can assume that more exist for the other pollsters; as argued in the column, I'm all for greater transparency. But if you are looking for evidence of something "sinister," it just isn't there.
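To make that last point concrete, here is a toy illustration (with made-up error values, not Research 2000's actual figures) of why folding a handful of additional polls into a much larger set barely moves a pollster's average error:

    # Toy illustration with made-up numbers, not the actual Research 2000 data:
    # adding roughly a dozen polls to a set of a couple hundred barely moves the mean.
    import numpy as np

    rng = np.random.default_rng(1)
    existing_errors = np.abs(rng.normal(0, 4, 200))   # hypothetical per-poll errors already in the database
    omitted_errors = np.abs(rng.normal(0, 4, 12))     # the handful of corrected or omitted polls

    before = existing_errors.mean()
    after = np.concatenate([existing_errors, omitted_errors]).mean()
    print(f"average error before: {before:.2f}, after: {after:.2f}")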

We created a spreadsheet that includes both the original list of Research 2000 polls included in the FiveThirtyEight database and a second tab that includes the corrections and appropriate omissions. It is certainly possible that our spreadsheet contains errors of its own, so in the spirit of transparency, we've made it available for download. Feel free to email us with corrections.

[I corrected a few typos and cleaned up one mangled sentence in the original post above -- Part II of the update coming over the weekend.]

Update (6/14): Since I did not finish the promised update until Monday afternoon, I posted it as a separate entry. Please click through for more.


 
