Pollster.com

January 4, 2009 - January 10, 2009

 

Calm Before the Inaugural "Outliers"

Topics: Outliers Feature

Carl Bialik has everything you've ever wanted to know about estimating crowd size (blog post too).

Kathy Frankovic considers polling lessons from 2008 and challenges for 2009.

David Hill ponders the still gloomy "wrong track" numbers.

Mark Mellman names the four conditions necessary to deliver bold policy change.

Nate Silver questions a contradictory Alaska poll.

Kevin Drum revisits his election projection poll (via Sides).   

John Sides reviews new research exploring links between early life experiences and voter turnout.

Henry Farrell sees the fingerprints of political science in the 2008 campaigns.

PPP will be polling Missouri this weekend.

Patrick Egan and Kenneth Sherrill (pdf) author a new report on California's Prop 8 (sponsored by the National Gay and Lesbian Task Force; via Coates via Sullivan).

R (the software that powers our charts) gets a New York Times profile (via Gelman with equal time for SAS).

FlowingData reproduces an example from the webcomic xkcd, by artist Randall Munroe, on how graphs lead to a decline in love:


[image: xkcd comic reproduced by FlowingData]


Bowers Vs. 538 Vs. Pollster

Topics: Chris Bowers, Fivethirtyeight, Poll Aggregation, Pollster.com

Chris Bowers posted a two-part series this week that compares the accuracy of his simple poll averaging ("simple mean of all non-campaign funded, telephone polls that were conducted entirely within the final eight days of a campaign") to that of the final pre-election estimates provided by this site and Fivethirtyeight.com.

Chris crunches the error on the margin in a variety of ways, but the bottom line is that there is very little difference among the methods. These are his conclusions:

  • 538 and Pollster.com even, I'm further back: Pollster was equal to 538 when all campaigns are included (the "1 or more" line) and with all campaigns except the outliers (the "2 or more" line). Kind of funny that not adjusting any of the polls, and adjusting all of the polls, results in the same rate of error. To no one's surprise, my method was much better among more highly polled campaigns, but still about 10% behind the other two once poll averaging (2 polls or more) comes into play. I make no pretense about my method needing polls in order to work.
  • Anti-conventional wisdom : 538 had the edge among higher-polled campaigns, which means Pollster.com was superior among lower-polled campaigns. This goes against conventional wisdom. Many thought Silver's demographic regression gave him an edge among less-polled campaigns, but that Pollster's method only worked well in heavily polled environments. Turns out the opposite was true, and I'm not sure why. Maybe Silver's demographic regressions don't work, but his poll weighting does. Or something.
  • Still very close : While I was a little behind, the difference between the methods is minimal. I'm a little disappointed, but clearly anyone can come very close to both 538 and Pollster.com in terms of prediction accuracy with virtually no effort. Just add up the polls and average them. It is about 90% as good as the best methods around, and anyone can do it.

You can see the full post for details, but his calculations are in line with what we found in our own quick (and as yet unblogged) look at the same data. We simply saw no meaningful differences when comparing the final, state-level estimates on Pollster to those on Fivethirtyeight.

Keep in mind that we designed our estimates, derived from the trend lines plotted on our charts, to provide the best possible representation of the underlying poll data -- nothing more and nothing less. So the accuracy of our estimates tells us that the poll data alone, once aggregated at the end of the campaign, provided remarkably accurate predictions of state-level election outcomes. The fact that the more complex models used at FiveThirtyEight were equally accurate raises the question: In terms of predictive accuracy, what value did FiveThirtyEight's extra steps (weighting pollsters by their past performance and the various adjustments based on other data and regression models) provide?
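For readers who want to try this at home, here is a minimal sketch (in Python) of the arithmetic behind a "simple mean" poll average and the error-on-the-margin metric used to score it. The poll figures and election result below are hypothetical, purely for illustration; this is one reasonable reading of the calculation, not Bowers' actual code or data.

```python
# Minimal sketch of the "simple poll average" approach described above:
# average the margin (Dem minus Rep) across a state's final-week polls,
# then score the estimate by its absolute error on the margin versus the
# actual result.  All numbers below are hypothetical.

def average_margin(polls):
    """polls: list of (dem_pct, rep_pct) tuples from final-week surveys."""
    margins = [dem - rep for dem, rep in polls]
    return sum(margins) / len(margins)

def error_on_margin(estimated_margin, actual_margin):
    """Absolute difference between the estimated and actual margins."""
    return abs(estimated_margin - actual_margin)

# Hypothetical example: three final-week polls and the certified result.
final_week_polls = [(51, 45), (49, 46), (52, 44)]
actual = 53 - 45  # actual Dem-minus-Rep margin, in points

estimate = average_margin(final_week_polls)
print(f"Poll-average margin: {estimate:+.1f}")                           # +5.7
print(f"Error on the margin: {error_on_margin(estimate, actual):.1f}")   # 2.3
```

Scoring Pollster's or FiveThirtyEight's final numbers the same way is simply a matter of swapping their published state estimates in for the simple average.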


News Flash: Obama Using Polling Data

Topics: AAPOR, Barack Obama, Pollsters, Public Opinion Quarterly

Not that this should come as a surprise to anyone, but Bloomberg News (via First Read) is reporting that President-elect Obama's political advisors have conducted focus group and survey research to help sell their economic stimulus plan:

President-elect Barack Obama's top political aides are adapting their campaign tactics to selling policy, using data from polls and focus groups to shape the debate over a stimulus plan that may cost at least $775 billion.

David Axelrod, Obama's chief political adviser, along with campaign media adviser Jim Margolis, are encouraging lawmakers to use the word "recovery" instead of recession and "investment" instead of "infrastructure." Those recommendations came from focus-group research indicating that such framing would make the package more appealing to voters.

[...]

"Not unlike news organizations, we poll public attitudes about where the economy is," Robert Gibbs, Obama's choice for White House press secretary, said in an interview yesterday. "We're not polling to see what should be in an economic-recovery plan."

There is nothing at all unusual about presidents commissioning surveys and focus groups to guide their political messaging. Pollsters have conducted private, internal surveys, paid for by campaign or party committees, on behalf of every president since Nixon. National pollsters also provided presidents Kennedy and Johnson with results "piggybacked" onto polls paid for by other clients.

Since presidents leave extensive records, both in the White House paper trail (preserved in their presidential libraries) and, since the Carter administration, in Federal Election Commission (FEC) records of party expenditures, a fairly well developed academic literature exists on how modern presidents have used political polling. The pages of Public Opinion Quarterly (POQ), the academic journal of the American Association for Public Opinion Research (AAPOR), are particularly rich with this sort of analysis (which, unfortunately, was all trapped behind the POQ/AAPOR subscription wall. Update: Trapped no longer. Thanks to the folks at POQ for graciously opening up the links I used below to non-subscribers!).

For example, in 1995, POQ published such an analysis by Lawrence Jacobs and Robert Shapiro. They interviewed former officials and mined the archives of Presidents Kennedy, Johnson and Nixon and traced the "institutional development" of presidential polling in their administrations (abstract, PDF). Kennedy and Johnson spent little money on polling because their pollsters "piggybacked" presidential polls on surveys paid for by other clients. Nixon's administration devoted more money and staff to polling: "Nixon's [polls] were more detailed, extensive, and sophisticated than his predecessor's."

Three years later, in 1998, POQ published an article by Diane Heith (PDF) that examined the way presidents Nixon, Carter, Ford and Reagan used polling data within the White House. She turned to the memoranda, notes and pollster reports she found in presidential archives and did a quantitative analysis of the polling-related paper trail in each administration. She found "consistent patterns and timing of polling usage" across all four administrations.

[chart: timing of White House polling usage across the four administrations, from Heith's POQ article]

Her timing chart showed a consistent pattern:

The four administrations' polling organizations each exhibit a bell curve of increasing and decreasing usage over time. The curves peak with high levels of White House polling usage in year two of the first term and exhibit a significant decline in staff polling usage as the reelection campaign approaches. It is interesting that this pattern of early usage followed by declining usage correlates with Light's (1991) conception of "the problem of policy cycles": "If the first two years are for learning, they are also the most important focus for agenda activity" (Light 1991, p. 41). Polling usage, like agenda activity, peaks in the second year of the administration.

In a subsequent article published by POQ in 2002 (abstract, PDF), Shoon Kathleen Murray and Peter Howard mined FEC reports for the amounts of money that presidents Carter, Ford, Reagan, Bush (41) and Clinton paid to pollsters "through their respective party organizations." In contrast to Heith, they found "significant variation" in how much different administrations used private polls. Reagan and Clinton polled "heavily from the start of their administrations," while Carter and Bush polled "only lightly during the first 3 years in office."

And finally, in 2007 POQ published yet another follow-up by Kathryn Dunn Tenpas and James A. McCann (abstract, full text) that took a somewhat different approach to the complicated task of identifying polling expenditures within the FEC reports and modeling that spending over time between 1977 and 2002. They found that presidents "do not vary significantly in the average amount spent per month on polls" and that such spending on internal polling increases throughout the term and especially "during the most intense months of a presidential reelection campaign."

So, while the academics disagree somewhat on the timing of polling during the course of each presidency, there is no question that presidents have been conducting internal polling for decades. Obama's use of polling is nothing new.


US: Gaza Conflict (Gallup-1/6-7)


Gallup Poll
1/6-7/09; 2,049 adults, 3% margin of error
Mode: IVR

National

Half-sample:

Thinking about the current fighting between the Israelis and Palestinians in Gaza, do you think the Bush administration should be doing more, is doing the right amount, or should be doing less to help resolve the conflict?

    33% Should be doing more
    30% Doing the right amount
    22% Should be doing less

Still thinking about the current fighting in the Middle East, do you think Barack Obama should announce a firm public position on the conflict now, or wait until he takes office on January 20th to take a firm public position on the issue?

    19% Should announce position now
    75% Should wait to announce position

(source)


NJ: 09 Governor (FDickinson-1/2-7)


Fairleigh Dickinson University / PublicMind
1/2-7/09; 831 registered voters, 3.5% margin of error
Mode: Live Telephone Interviews

New Jersey 2009 Governor

Jon Corzine (D-i) 40, Chris Christie (R) 33
Corzine 46, Steve Lonegan (R) 28
Corzine 43, Rick Merkt (R) 23

(source)


MN: Senate Recount (SurveyUSA-1/09)


SurveyUSA / KSTP-TV
1/2009; Minnesota adults
Mode: IVR

Minnesota

49% disagree with Coleman's decision to contest the recount; 42% agree.

56% say the recount was fair to both candidates; 31% say it was unfair to Coleman, 3% say it was unfair to Franken.

44% say Coleman should concede; 31% say there should be another election, 8% say there should be another recount.

Favorable / Unfavorable
Coleman: 38 / 44
Franken: 37 / 45

(source)


US: State of Economy (Allstate-Politico-12/27-29)


Allstate / Politico / Garin-Hart-Yang (D)
12/27-29/08; 1,007 registered voters, 3.1% margin of error
Mode: Live Telephone Interviews

National

President Bush recently announced a plan that would shift seventeen billion dollars from the Wall Street bailout fund to American automakers. This plan would provide automakers with a government loan to help the companies avoid bankruptcy. In return for this loan, automakers will be required to significantly restructure their operations and become profitable. Do you strongly approve, somewhat approve, somewhat disapprove, or strongly disapprove of this plan?

    16% Strongly approve
    35% Somewhat approve
    21% Somewhat disapprove
    25% Strongly disapprove

The Obama administration is considering a proposal to create jobs and strengthen the economy. It would cut taxes for the middle class, make major investments in the country's infrastructure, reform health care to make it more affordable and accessible to all Americans, and reduce America's dependence on foreign oil. Do you strongly favor, somewhat favor, somewhat oppose, or strongly oppose this proposal?

    55% Strongly favor
    24% Somewhat favor
    8% Somewhat oppose
    9% Strongly oppose

(story, results)


CA: 2010 Senate (DailyKos-1/5-7)


DailyKos.com (D) / Research 2000
1/5-7/09; 600 likely voters, 4% margin of error
Mode: Live Telephone Interviews

California
2010 Senate: Boxer (D-i) 49, Schwarzenegger (R) 40


NH: 2010 Senate (ARG-12/27-29)


American Research Group
12/27-29/08 (released 1/8/09); 569 registered voters; 4.5% margin of error
Mode: Live Telephone Interviews

New Hampshire 2010 Senate
Judd Gregg (R-i) 54%, Carol Shea-Porter (D) 35%
Gregg 47%, Paul Hodes (D) 40%


The Pleasure of "I Told You So"

Topics: Barney Frank, Gary Langer, I told you so

"Told ya so."

That's how ABC News polling director Gary Langer begins a blog post explaining that ABC News polls predicted today's report from the International Council of Shopping Centers of a 2.2 percent decline in holiday sales. He points to results reported in November signaling "a Dismal Retail Season," which showed 51% saying "they'll spend less this year than last on holiday gifts, matching the sharpest consumer retreat in polls dating back 23 years." The results from their December survey were worse. He concludes with this appropriate point:

As with politics, there's been far greater detail in our economic polls, especially in our mid-December survey's extensive look at the roots and directions of the public's economic anxiety. But as good data help us understand the contours of public opinion, so they anticipate the results of those attitudes. It's why the vast bulk of survey research isn't carried out by news organizations seeking to report public views, but by corporations seeking to understand how such attitudes will impact their bottom line.

But I link mostly because of the "Told Ya" headline. Langer does concede that the phrase is not "terribly polite," but his use of it gives me the chance to blog this unforgettable Barney Frank quote from Jeffrey Toobin's profile in this week's New Yorker:

"There are three lies politicians tell," [Frank] told the real-estate group. "The first is 'We ran against each other but are still good friends.' That's never true. The second is 'I like campaigning.' Anyone who tells you they like campaigning is either a liar or a sociopath. Then, there's 'I hate to say I told you so.' " He went on, "Everybody likes to say 'I told you so.' I have found personally that it is one of the few pleasures that improves with age. I can say 'I told you so' without taking a pill before, during, or after I do it."


NY: 2010 Senate (Rasmussen-1/6)


Rasmussen Reports
1/6/09; 500 likely voters, 4.5% margin of error
Mode: IVR

New York State
2010 Senate: Caroline Kennedy (D) 51%, Peter King (R) 33%


Re: Race Over

Topics: Andrew Gelman, Barack Obama, Cornell Belcher, John Sides

Maybe my post yesterday on Marc Ambinder's review of race and the Obama campaign was a day early.

As soon as I returned to the keyboard after publishing the item -- which emphasizes details provided by Obama pollster Cornell Belcher on how the campaign dealt with "racial aversion" -- my RSS reader produced links to a new Carl Bialik item on the role race played in Obama's victory. It summarized another article by two political scientists, Stephen Ansolabehere and Charles Stewart, who used exit poll data to argue that “Obama won because of race — because of his particular appeal among black voters, because of the changing political allegiances of Hispanics, and because he did not provoke a backlash among white voters.”

Crucial to their argument is that Obama barely gained among white voters compared to Sen. John Kerry in 2004; Obama won 43% of white votes, compared to 41% for Kerry. That slight gain didn’t tilt the election to Obama; instead it took blacks’ and Latinos’ rising share of the electorate, coupled with Obama’s big win among both groups — far bigger than Kerry’s. (Obama won 95% of black votes and 67% of Latino votes, compared to 88% and 53%, respectively, for Kerry.)

“Had the racial composition of the electorate stayed the same in 2008 as it was in 2004, and had whites remained as supportive of Republicans as they were in 2004, Obama would still have won the popular vote, albeit by a much smaller margin,” Ansolabehere and Stewart wrote. “But, had Blacks and Hispanics voted Democratic in 2008 at the rates they had in 2004 while whites cast 43 percent of their vote for Obama, McCain would have won.”
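As a rough, back-of-the-envelope illustration of the decomposition behind those counterfactuals, the sketch below computes a candidate's overall share as the sum over groups of (the group's share of the electorate) times (the candidate's support within that group). The within-group support figures come from the passage above; the electorate-composition shares are approximate national exit poll estimates that I am treating as assumptions, and Asian and other voters are omitted, so the output is illustrative rather than a reproduction of Ansolabehere and Stewart's calculation.

```python
# Rough sketch of the group-decomposition arithmetic behind the counterfactuals:
# overall support = sum over groups of (group's share of the electorate) x
# (candidate's support within that group).  Within-group support is from the
# quoted figures; the electorate shares are approximate exit poll estimates
# (assumptions), and other groups are omitted.

GROUPS = ["white", "black", "latino"]

# Approximate share of the electorate by group (assumed, from exit poll estimates).
share_2004 = {"white": 0.77, "black": 0.11, "latino": 0.08}
share_2008 = {"white": 0.74, "black": 0.13, "latino": 0.09}

# Democratic support within each group, from the figures quoted above.
support_kerry = {"white": 0.41, "black": 0.88, "latino": 0.53}
support_obama = {"white": 0.43, "black": 0.95, "latino": 0.67}

def dem_share(shares, support):
    """Democratic share contributed by the three groups (others omitted)."""
    return sum(shares[g] * support[g] for g in GROUPS)

print(f"2004 electorate, 2004 support: {dem_share(share_2004, support_kerry):.1%}")
print(f"2008 electorate, 2008 support: {dem_share(share_2008, support_obama):.1%}")
# The two counterfactuals described in the quote:
print(f"2004 electorate, 2008 support: {dem_share(share_2004, support_obama):.1%}")
print(f"2008 electorate, 2004 support: {dem_share(share_2008, support_kerry):.1%}")
```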

A few hours later, Andrew Gelman chimed in with more details from his analysis of county-level voting patterns:

You can also slice up the vote swing geographically, by counties in different regions of the country, and you find that Obama did close to uniformly better than Kerry nearly everywhere, except for Republican-leaning poor counties in the South (where Obama pretty much stayed even with Kerry). The geographic patterns are striking (see graph at the end of this post).

Race matters, yes, but we're still seeing a national swing.

He added:

I think Ansolabehere and Snyder are right on the money when they write, "the results of the 2008 election challenge much of what has been conventionally thought about race and politics in America. Barack Obama has accomplished an astonishing political move [by] disproportionately energizing nonwhite voters and converting erstwhile Republican supporters within the minority community without alienating white voters."

My summary: as Carl said, the election outcome is multidimensional. Because Steve and Charles were writing a short article, they very properly focused on a single feature of the election--race. I'd say that the #1 feature of the election was a bad economy that produced a national swing toward the Democrats in general and Obama in particular. But once you want to break this down by demographics, I agree that ethnicity is the biggest factor.

Next, John Sides linked to all of the above and took issue with my post on two points. First, he interprets my post as an argument that political scientists forecast an Obama landslide on the basis of political fundamentals. "Neither the fundamentals nor the existence of racial prejudice," he writes, "should have led a sensible analyst to predict a landslide. Most political scientists certainly didn’t. I railed against the perception that this race should have been a landslide here."

The use of the term "landslide" was Ambinder's. I have to admit, I'm not sure what would constitute a "landslide" in the context of last year's election, as defined by Belcher, the unnamed "Obama advisors" he differed with or anyone else. Was Obama's 7.2% margin in the popular vote or his 365 to 173 margin in the electoral college a "landslide?" I'm not sure, although I read the Ambinder passage as implying skepticism from Belcher "in the fall" that Obama's margin would be as wide as it ultimately proved to be.

Second, Sides hears me arguing, perhaps inadvertently, that “political scientists will think they’re right no matter what happens.” Well, no, I didn't mean to imply that all political scientists tend to declare themselves right regardless of the outcome (or even that some do), only that had the outcome been different, another set of scholars somewhere would have been ready to declare (rightly), "I told you so."

Most electoral outcomes are "multi-dimensional." Rarely, if ever, are they about just one thing. And I will grant that the machinations of the campaign -- rallies, paid advertising, field organizing, etc. -- tend to be less consequential in presidential general elections, where voters get massive, direct exposure to the candidates through televised debates and a year's worth of news coverage, than in almost all other types of elections. However, I think the tendency within political science to use presidential general election forecasting models to dismiss the notion of "campaign effects" is overdone. That point aside (or perhaps on that point as well), Sides, Gelman and I mostly agree.

Finally, in the midst of writing this post, I stumbled on yet more from Obama pollster Cornell Belcher via Marc Ambinder. The latter posts a 14-page analysis (PDF) by the former produced for the Democratic National Committee based on data collected in two post election surveys. The memo includes data on "the surge among new voters of color" and much more.


AK: 2010 Senate (Dittman-12/20)


Alaska Standard / Dittman Research (R)
12/5-20/08 (released 1/7/09); 505 adults, 4.4% margin of error
Mode: Live Telephone Interviews

Alaska 2010 Senate
Republican Primary: Murkowski (R-i) 57, Palin (R) 33

(via Hotline - subscription required)


Hotline Publishing IVR Results


I must admit, despite the fact that my National Journal colleagues publish The Hotline just one floor down from my office, I missed this brief announcement (subscription required) on Tuesday appended to results from a recent survey from Public Policy Polling (PPP):

Traditionally, the Hotline has only published live-telephone interview surveys while excluding interactive voice response (IVR) polls, despite the increased media coverage of many of these so-called "robo-polls." In our constant effort to remain tuned to industry developments, and to determine if such distinctions are fair and valid, the Hotline will begin running selected numbers from IVR polls during the upcoming cycle. Specifically, head-to-head matchups, favorability ratings and approval ratings from IVR outfits will appear on an interim basis in the Hotline's Latest Edition through the '10 midterms. This data -- from firms such as InsiderAdvantage, Public Policy Polling, Rasmussen Reports and SurveyUSA -- will be published alongside live-telephone data, but will be clearly labeled as IVR results.

For those who are unfamiliar, The Hotline has been a DC institution for more than 20 years, serving up a daily political news summary chock full of polling data since the days when the preferred mode of delivery was the fax machine. They have long refused to publish surveys that used an automated methodology rather than live interviewers, so in our small world, their decision to publish IVR results, even if only on an "interim" basis, is important and, in my view at least, a welcome step.


Ambinder: Race Over?

Topics: Barack Obama, Cornell Belcher

Not to be missed: My colleague Marc Ambinder has an article in the latest issue of The Atlantic on how the Obama campaign "worked methodically to woo white voters without alienating black ones--and vice versa."   Ambinder, whose sources in Obamaland were strong, draws heavily on conversations with Cornell Belcher, "a top Obama pollster who had conducted some of the campaign's earliest research on race."

The short version is that Obama gained sufficient credibility and support among African-American voters to allow his campaign to focus later in the campaign on wooing uncertain white voters, many of whom demonstrated what Belcher describes as "racial aversion." The eventual lopsided margin among black voters, Ambinder writes,

also highlights what Obama did not have to do: he did not have to pander to black leaders; he did not have to target specific messages at the black community with the attendant risk of exacerbating economic tension between blacks and whites. He did not have to bring up race. And that was key, because Belcher's polling confirmed that culturally anxious whites were willing to vote for a black candidate so long as they did not meditate on the candidate's blackness. Obama was able to credential himself as an African American without engaging in overt racial politics. Or, rather, the black community credentialed Obama without his resorting to racial politicking, something that white Democratic candidates had to do.

The article -- well worth reading in full -- has more details on how Belcher measured "racial aversion" and on the conclusions he drew from the data.

Ambinder also reports on Belcher's candor, back in September, about how race might limit Obama's support:

In the fall, when some Obama advisers began predicting a landslide, Belcher would have none of it. "No one with any real post-civil-rights understanding of our national political contours could with a straight face predicate a Democratic national landslide," he told me in September.

It's worth contrasting that statement with the post-election assessments of many political scientists. They found Obama's ultimate margin "not surprising" since it roughly matched what statistical models based mostly on "fundamental factors" (such as perceptions of the economy and the Bush administration) had predicted (for more details, see comments by Larry Bartels in the Brookings post-election roundtable or the concise summary, with ample links, by John Sides).

But Belcher's words of caution remind me of a comment from one of John Sides' readers in reaction to Sides' argument that "the fundamentals" mattered more than the campaigns:

[T]he fact that Obama--as a black man--was able to pull within the margin of a usual victory speaks to the ability of his campaign skills. Again, coming from a sociological perspective, race is so incredibly salient in so many aspects of our lives as Americans, it is astounding that so many Americans were willing to put those sentiments aside and vote for a black man. I suspect that this is what you might mean by "It may be that the campaign helped move voters in line with the outcome that the fundamentals predict" -- but I think that understates how amazing Obama's accomplishment to be the first African American President really is.

See the same link for Sides' reply.

I can't help thinking that if the election had turned out differently, we might have heard a chorus of "I told you so's" from a different set of political scientists reminding us of the lessons of 30 or 40 years of academic opinion research on how racial attitudes shape political preferences. It didn't happen that way, and Ambinder's piece helps explain why.


US: Kennedy, Burris (USAToday-1/5)


USA Today / Gallup
1/5/09; 1,000 adults, 3% margin of error
Mode: Live Telephone Interviews

National

As you may know, the governor of New York will need to appoint someone to replace Hillary Clinton in the U.S. senate once she becomes secretary of state in January. Caroline Kennedy, the daughter of former president John F. Kennedy, has been mentioned as a possible replacement. Would you like to see Caroline Kennedy appointed to this seat or would you rather see someone else get the appointment?

    45% Would like to see Kennedy appointed
    36% Would rather see someone else appointed

Which of the following would you like to see the state of Illinois do to fill the open senate seat...?

    16% Allow Roland Burris to serve until 2010 when the next election is scheduled
    23% Keep the seat open until the situation with Blagojevich is resolved and allow him or a new governor to appoint a senator to serve until the 2010 election
    52% Hold a special election as soon as possible to fill the seat

How do you think the senate should handle the situation when Roland Burris arrives to fill the open Illinois senate seat?

    27% Allow Burris to fill the seat
    51% Block Burris from filling the seat

(source: Kennedy, Burris)


From 2002 to 2008


Ever since Election Night this past November, I have been ruminating about a story that I've been meaning to post. Its connection to pollsters and poll methodology is indirect, but seems especially relevant right now as we formally turn the page from the incredible election year 2008 to a new administration and its travails in 2009. So before we get back into our usual routine, I want to share it.

In May 2002, I sat down to draft a questionnaire for a new client, a wealthy businessman who was then considering a challenge to an incumbent U.S. Senator two years hence. It was just six years ago, back in the days before my MysteryPollster blog, back when conducting such surveys for Democratic candidates was my full-time occupation. And, as it turned out, this client would be my last "major" race (a designation that, in my mind, falls somewhere between a contest with an uncertain outcome and one with some national significance).

My client's eventual campaign struck some as a microcosm of conventional (and thus flawed) politics. One of the consultants for one of our opponents called our campaign "an exercise in the technology of politics...They do polls and see what people want to hear and then they put ads on the air and run them again and again. There's nothing new about what they're doing." There was truth in this critique, and the ultimate demise of this campaign played a not insignificant role in my evolution from consultant to blogger.

But it was something that happened at the very beginning of that campaign that sticks in my memory now. On that day in May, I started with my usual process when drafting a benchmark "message testing" survey. I took all of my notes from initial discussions with the client, suggestions from other consultants and information gleaned from news clippings and the Internet and synthesized it into a single document listing all of the "messages" -- candidate profiles and arguments about them pro and con -- we hoped to test on the benchmark survey.

I found my "notes for questionnaire" for that project on my hard drive yesterday, still there after moving with me through at least three different computers over the last six years. My first cut consisted of roughly 700 words. About half involved the incumbent Republican, about a third concerned our client. I had also typed a two-sentence profile of the potential primary opponent, a former Democratic Senator who had been turned out of office a few years before.

Finally, the document also included the names -- just the names -- of three potential candidates that most everyone considered extreme long shots. The paucity of detail on these three spoke to their status in this particular Senate race. Anyone looking over my notes right now, however, would immediately notice one now familiar name among the also-rans:

"Barack Obama."

The survey we fielded a few weeks later showed that just 4% of likely general election voters had a favorable impression of Obama, while 5% rated him unfavorably. Nearly four out of five (79%) had never heard of him. He did a little better among likely Democratic primary voters -- 11% favorable, 6% unfavorable -- but won just 6% of the vote in a five-candidate primary matchup that also featured former Senator Carol Moseley Braun. Obama rose to 12% when we omitted Braun, but still ran far behind state Comptroller Dan Hynes (with 34%).

Of course, my client, Chicago businessman Blair Hull, started with even less recognition and support (2% of the vote, to be precise), but his candidacy and its ultimate demise was another story altogether. The consultant who dismissed our campaign as "nothing new" was a truly forward looking guy named David Axelrod. But I digress.

It is still hard to believe that a State Senator who seemed like the longest of long shots for the U.S. Senate so recently is about to be sworn in as the 44th President of the United States. I look back at my notes from six years ago, ponder all that has happened since and just shake my head in wonder.

This story teaches many lessons, of course, but the most relevant to readers of this site is that no poll or statistical model could have predicted Obama's ascent. Yes, we could see from the "internals" of that first survey that Obama had the potential to be a formidable contender in the 2004 Senate primary, especially after both Braun and incumbent Peter Fitzgerald announced that they would not run. But no survey or model in 2002 could have predicted Obama's 52.8% majority in the seven-candidate 2004 Senate primary or the nearly 70% of the vote he won in the general election, much less all that followed in his bid for the White House in 2008. I'm certainly a believer in opinion surveys and statistical models, but they have their limits.

The Obama story from 2004 provides no direct parallel to the ongoing arguments about the "electability" of potential Senate candidates in Illinois, New York or elsewhere, but I do see a warning against leaping to conclusions about a candidate's prospects based only on early favorable ratings or horse-race numbers. "Electability" is an appropriate topic whenever a party selects its nominee (or appoints someone to serve out the term of a departing legislator), but quantifying electability through polling, or through predictive models derived from it, is a shaky enterprise at best. Obama's rise -- in the face of early data that led many to question his electoral potential in both 2004 and 2008 -- is a testament to such challenges (see also this pertinent example from September 2007).

This story also says something bigger about the potential for "change" within the American electoral system. Typically the day-to-day business of politics is mundane and static. We fight the same fights over and over, and little seems to change. Politics seems to be mostly about compromise, mostly "the art of the do-able." Yet every once in a while, usually in the context of a presidential election, some out-of-the-blue candidacy reshapes our perception of what is possible. It shows us that sometimes, if we're lucky, we get to see history being made in the midst of mere electoral politics.

I look back to my own experience in the 2004 Senate election and realize that there was a silver lining in an unsatisfying campaign that amounted to, admittedly, little more than "an exercise in the technology of politics": it offered me the opportunity to witness (and, yes, literally chart) one of the most meteoric success stories in American political history from the very beginning. And from this perch at Pollster.com, I got the chance to keep that ringside seat, charting and writing about the most exciting campaign of my lifetime.

So I want to thank Pollster.com's readers for coming along with us for the ride, and especially those who have stayed now that things have calmed down a bit. We hope you will stick around and let your friends know as we continue to track public opinion in 2009. You will be seeing a transition in Pollster.com over the next month or so, as we move the 2008 data off the front page and begin to feature charts and data that track the performance of the new administration and the concerns of American voters. But as we do, let's not lose sight of an idea we have tried to stress here from our first post: Using survey data well requires that we understand its limitations.

[Typos fixed and one badly constructed sentence revised].


NY: 2010 Sen (PPP-1/3-4)


Public Policy Polling (D)
1/3-4/09; 700 registered voters, 3.7% margin of error
Mode: IVR

New York State 2010 Senate

Tested: Attorney Gen. Andrew Cuomo (D), Caroline Kennedy (D), Rep. Peter King (R)

Cuomo 48, King 29
Kennedy 46, King 44

(source)


NY: Kennedy as Sen (PPP-1/3-4)


Public Policy Polling (D)
1/3-4/09; 700 registered voters, 3.7% margin of error
Mode: IVR

New York State

If the choices were Andrew Cuomo and Caroline Kennedy, who would you prefer Governor Paterson appoint to replace Hillary Clinton in the US Senate?

    58% Cuomo
    27% Kennedy

How has your opinion of Caroline Kennedy changed since she started publicly campaigning for appointment to Hillary Clinton's Senate seat?

    23% More Favorable
    44% Less Favorable

(source)


While I Was Away "Outliers"

Topics: Outliers Feature

Some odds and ends missed while I was away. Will be back in the swing once I clear out my email...

The Pew Internet & American Life Project releases a survey on voter engagement after the election (via Amy Sullivan).

Jennifer Agiesta finds new optimism among Democrats.

Brendan Nyhan sees no sign of the Obama honeymoon ending.

Gary Langer digs into how veterans voted in 2008.

Tom Jensen shares a tongue-in-cheek anecdote on response rates.

Chris Bowers considers public opinion among the Palestinians (PS: Get well soon).

One more:  John Sides shares an analysis of voter turnout among Virgos.


US: Bush Approval Avg (Gallup-2001-2008)


Gallup Poll
2001-2008
Mode: Live Telephone Interviews

National

"Because of these ups and downs, Bush's 49% approval average for his presidency will rank him in the middle of the pack (7th of 11) of post-World War II presidents. His average to-date of 49.4% is similar to Richard Nixon's 49.1% but slightly better than Harry Truman's and Jimmy Carter's historical lows below 46%."

(source)


 
