Well, our much anticipated moving day is upon us.
Some of you may have missed the news when it happened, and some of you may have forgotten, but we joined forces with the Huffington Post this past July (and answered some common questions about the acquisition here). Sometime later tonight or tomorrow, if all goes well, we will flip a virtual switch and begin "redirecting" traffic from Pollster.com to Pollster's new home on the Huffington Post.
A lot of very talented HuffPost developers have worked very, very hard over the last few weeks to move all of the features, content and data you have come to depend on here at Pollster to HuffPost. Our primary aim during this first wave of our relaunch has been to move everything without "breaking" anything. Thanks to the superhuman efforts of the HuffPost tech team, we think you will be satisfied that, while the web address will be different, everything you like about Pollster will make the trip with us.
Once we have relocated, we will begin adding some exciting new features that take Pollster.com to the next level, including quite a bit that will debut in the next few weeks. So we hope you come along and stay tuned.
Meanwhile, a few more specific notes about the move:
We have managed to move every entry -- every chart, every map, every blog post, every Poll Update -- to Huffington Post. That includes our collection of charts from 2006 (which for a variety of technical reasons, I feared we might not be able to move). Needless to say, we will continue to update all active charts with new data. And your bookmarks to our existing pages should continue to work. We will simply redirect you to the new home for each page.
Our classic-format poll maps will remain active and functioning and will help you scan results and navigate to chart pages. They are actually already active on Pollster now for Senate and Governor races. If you're glad to see them back, don't worry: they will remain in place on HuffPost.
Once we move, you will also see that our charts feature prominently in a new HuffPost feature called Dashboard. We think you will find Dashboard engaging and useful -- it will include more than just polling data -- but if you prefer our classic poll maps and charts, again, don't worry, those will be there too and easily accessible via our new Pollster page.
While we have made copies of all blog posts, the original reader comments left on those posts will remain in place on their original Pollster.com locations. The HuffPost version of each entry will include a special link to take you back to the comments left on the original Pollster.com version.
All of our RSS feeds will continue to operate without interruption, and all will continue to provide the full post rather than excerpts. Author-specific feeds will require a different link, although all will be active immediately.
Now all that said, despite the best of intentions and a lot of hard work, a few things -- such as a complete index to archived blog posts -- may not be in place immediately. We will work to move anything left behind over the next week or two and will try to keep you updated on any such issues as they arise.
I welcome your comments, suggestions, problem reports or complaints -- just email me. If we have managed to "break" something you care about, please let me know. I can't promise I'll have time to respond personally to every message, but I'll definitely read them all.
A special note on comments and the Pollster.com commenter community: Admittedly, given Huffington Post's far bigger audience, the posts from me and from other contributors that also appear on HuffPost's front page will draw far more comments than our posts here. And no, we will not be migrating the Typekey user logins to Huffington Post, although you can log in and comment there using an existing Facebook, Twitter, LinkedIn, Google or Yahoo account or create an account on HuffPost.
For those concerned about the changes to the comments section, let me highlight two things. First, over the last month, Huffington Post has implemented a new "Community Pundits" feature that, as HuffPost's social news editor Adam Clark Estes explained to WebNewser earlier this week, aims to highlight the most "insightful, informative, and engaging commentary" on any feature from across the ideological spectrum. Such comments are featured in a prominent Community Pundit box at the top of the comments section of each post.
Moreover, those who leave such comments consistently can earn a Community Pundit badge, which comes with privileges: "Besides having their comments highlighted in the Highlights tab and the Community Pundits box," Estes said, "we also allow our Pundits to leave longer comments."
Better yet, Estes and the Social News team have pledged to create a special Community Pundit badge specific to the Pollster section that will identify and highlight comments that are consistently insightful, informative and on topic, which is to say relevant to our focus on political polls and survey research. We have not yet begun working on this feature, so we would welcome your input and suggestions for it.
Now all that said, you should know that some entries -- especially the Outliers feature and the many "Poll Update" entries that Emily posts constantly -- will appear only on the new Pollster page and not elsewhere on Huffington Post. We're hoping that the Pollster corner of HuffPost will attract its own unique community of readers, so we encourage those of you who comment frequently to come along and try it out. We hope that the existing community can move along with the charts and the blog archive.
If you have questions about Huffington Post's comments and moderation policies, please see this FAQ page.
Emily Swanson | September 8, 2010
As we gear up for a busy election season, Pollster and Huffington Post are hiring a new polling intern to help us out with gathering and entering polling data.
We're seeking a three-month unpaid intern to work in the Huffington Post office in Washington DC. Primary responsibilities will include:
- Entering polling data into our database and publishing tables and charts to Huffington Post/Pollster
- Entering and publishing poll update blog entries into our content management system
Strong attention to detail is a must. Experience using basic HTML, entering content into a CMS (especially Movable Type), and experience with statistical analysis including the R programming language are appreciated but not required.
If you're interested, please send a resume and brief statement of interest/availability to firstname.lastname@example.org
Emily Swanson | August 13, 2010
We're having problems with some users signing in and posting comments this morning. We're working on correcting the issue, but if you've experienced any problems logging in, please email us with a description of what happened: Were you able to sign in to Typekey at all? If you were able to, what happened, and what error messages did you receive? Are you able to sign in from a different computer or a different browser? Apologies if we're unable to respond individually, but we'll do our best to keep everyone informed of the status of the problem.
Emily Swanson | August 9, 2010
A quick note to our commenters -- we will be doing a scheduled server upgrade tonight from about 5-6:30 ET. During that time, you should still be able to access all the content you are familiar with but comments will be turned off. Any comments posted during this time may not appear after the server maintenance.
Update 6:25 PM: Things should be back up and running now.
By Emily Swanson | August 9, 2010 4:52 PM
I want to try to answer some of the questions many of you have been asking about our acquisition by the Huffington Post, but I have to start with a personal story.
Seven years ago, I attended my first conference of the American Association for Public Opinion Research (AAPOR). I knew AAPOR well, and had long wanted to attend the annual conference, but until 2003 I had never been willing or able to devote the time and money necessary. One of the reasons I finally went was that I had been kicking around the idea -- "pipe dream" was probably more accurate at the time -- of starting a blog about polling. So I decided that attending the AAPOR conference would be a good way to get up to speed on the current methodologies and controversies.
As it happened, AAPOR had planned something bold for 2003: They invited the country's most prominent polling critic, Arianna Huffington, to be their plenary speaker. In 1998, Huffington had launched a "crusade" -- the Partnership for a Poll-Free America -- that urged her followers to hang up on pollsters in order to stop polls "at their source" because they "are polluting our political environment." In a 1998 column headlined "hang it up," she described herself as the "sworn enemy" of pollsters gathering at that year's AAPOR Conference, and hoped that "if all goes well" her crusade would spell the end of such meetings altogether.
What actually transpired was a fascinating and somewhat surprising discussion captured by the conflicting media accounts at the time. AP's Will Lester wrote that although the pollsters were "prepared for the worst, they got charmed instead," as Huffington "set aside her apocalyptic view of the polling profession" and focused instead on points of agreement. An account in Businessweek took a different tack, noting that by evening's end, "Huffington was on the defensive, dodging accusations that she had her facts wrong and protesting that she had been misunderstood." As an eyewitness, I can testify that both accounts were accurate. Either way, it was easily the best attended and most provocative event at any AAPOR conference in my memory (I rescued the full transcript of the plenary session, once posted to AAPOR's web site, from the Internet Archive).
But let me stop there. If anyone had told me during that conference that (a) I would find time to start the MysteryPollster blog a year later, (b) that the blog would be a success, (c) that it would ultimately lead to a day job publishing Pollster.com, (d) that Pollster.com would win AAPOR's prestigious Innovator's Award, (e) that Pollster would eventually be sold to Arianna Huffington and (f) that I'd be truly excited by that prospect, well...let's just say that even after seven years the events of the last week or two have been a bit surreal.
So with that in mind, let's review some of the questions that friends and readers have been asking over the last 24 hours:
1) So what about Huffington's "Poll Free America" Crusade? First, for the record, I have never been a fan of the crusade: not in 2003, not when Arianna renewed it in early 2008 and not now. Even if the intended victims were, as she explained in 2003, only those polls "that are about the political questions of the day," a truly effective campaign to get Americans to hang up on surveys would also ensnare surveys that track consumer confidence, the costs of government programs, the incidence of illness and disease and the health needs of all Americans.
But that said, I can also tell you that Arianna Huffington has given her unqualified support to our longstanding mission to "aggregate polls, point out the limitations of them and demand more transparency," as she told the New York Times. She has also given us the editorial independence to disagree if we deem it appropriate, as I did in the previous paragraph. I also understand that there was a larger point she has been making all along that is in sync with our mission (something I noted in a column last year). As I said in the press release that went out today, I have long believed that to use polling data effectively, consumers need to understand its power as well as its limitations. That's what we have always been about, and that's the mission that Huffington Post has unambiguously endorsed.
So what does Arianna have to say about the apparent contradiction between her anti-polling crusade and buying a site named Pollster.com? Or, as Huffington Post commenter Marlyn, who said she "took Arianna's pledge to never participate in polls," asked last night, "What am I to do now?"
I asked Arianna how she would answer Marlyn's question. Here, via email, is her answer:
I've been a longtime critic of the accuracy of polls and how they're misused by the media, which continue to treat poll results as if Moses just brought them down from the mountaintop. That's why we launched the "Say No to Pollsters" campaign on HuffPost in 2008. And it's why I wanted to work with Mark and Pollster. Since it's clear that polls and polling are not going to go away - indeed, if anything, the media have only gotten more addicted to political coverage dominated by polling - we need to make sure that polls are as accurate as possible and that they are put in the proper larger context. So, though we come at it from different perspectives, Mark and I -- and the rest of the HuffPost team - share the same goal: we are committed to pulling the curtain back on how polls are conducted, and, in the process, make polls more transparent, help the public better understand how polls are created, and clarify polls' place in our political conversation.
2) Aren't you worried about Huffington's partisanship? Or to quote Pollster commenter IWMPB, who is troubled by our move and the greater "partisan division" of the news that it appears to herald: "So much for objectivity...whether or not it's true, perception is reality."
There is no question, given the comments here, in my inbox and elsewhere, that many of you are concerned by the perceived partisan slant at Huffington Post. I have no doubt that these perceptions are the biggest risk we are taking with this move, and represent a huge change from the inside-the-beltway prestige of The National Journal. The questions so many of you are asking are fair. My only hope is that those who have come to value Pollster will judge us on the basis of the work we do going forward and not pre-conceived notions about what this move may or may not mean in the future.
That said, concerns about my objectivity were equally valid when I started blogging six years ago while still actively polling on behalf of Democratic candidates and after more than 20 years as a pollster and campaign staffer for Democrats. Such concerns were equally valid three years ago, when we launched Pollster.com, a venture owned and backed by an Internet research company. I would never claim to be without bias, but I have worked hard from day one to be thorough, accurate and fair. If Pollster has a reputation for straight-shooting commentary and non-partisan poll aggregation, it is because we never took our eyes off those goals.
That's why we have regular contributors who have worked for both Republicans (Kristen Soltis, Steve Lombardo, Bob Moran) and Democrats (myself and Margie Omero) as well as those from academia (Charles Franklin, Brendan Nyhan, Brian Schaffner). That is also why all of these individuals have assured me that they will continue to contribute once we launch our new virtual home at Huffington Post.
And one trivial question that keeps coming up:
3) Did this sale make you rich? Sadly...no. Pollster and its assets were purchased from YouGov/Polimetrix, not me, though I am privileged to have a stable new job in the news media doing something I love.
And unfortunately, despite the sale, Pollster.com resulted in a net loss for our former owners. Doug Rivers, the CEO of YouGov/Polimetrix, helped us launch Pollster.com with the hope of doing a service to the survey profession and making a profit. We succeeded, arguably, at the former but not the latter.
Which reminds me that, before ending this post, I need to offer thanks to two important sets of people.
First, to Doug Rivers, who first invested in Pollster.com despite strong advice that a business model would be elusive and who continued to support us long after it was clear he would never see a dime of profit. All along he kept his promise of total editorial independence, never once reaching out to complain if we wrote or linked to something critical of his business. Thanks also to the technology staff at YouGov/Polimetrix who helped keep our site up and running even though that task was far down their daily to-do lists.
Second, thanks to all of my valued friends from National Journal and Atlantic Media (too many to name, but they know who they are) but most of all to Kevin Friedl, Tom Madigan and Deron Lee, who for two years took my typo-ridden copy and molded it into weekly columns we could all be proud of. I will miss your skilled editing more than you know.
We are going to be moving tomorrow, so time will be limited, but if there are more questions -- and I'm sure there will be -- I will try to answer them in the comments below.
Yes, it's true. As reported this afternoon by the New York Times' Jeremy Peters, Pollster.com has been acquired by The Huffington Post:
The Huffington Post is venturing into the wonky but increasingly popular territory of opinion poll analysis, purchasing Pollster.com, a widely respected aggregator of poll data that has been a major draw for the website of the National Journal.
The purchase is something of a coup for The Huffington Post, which has been making a more aggressive push into political journalism ahead of the midterm elections in November.
"It's going to beef up our political coverage," said Arianna Huffington, the website's editor in chief and founder. "Polling, whether we like it or not, is a big part of how we communicate about politics. And with this, we'll be able to do it in a deeper way. We'll be able to both aggregate polls, point out the limitations of them, and demand more transparency."
I will have much more to add later, but for now let me just say how excited we are to be joining forces with Huffington Post, as the change will ultimately super-charge everything we do. If you are a fan of Pollster.com, I assure you that what you like will stay the same, including our mission, editorial voice and commitment to providing a forum for better understanding poll results, survey methods and the polling controversies of the day. What will improve is the overall quality of our site, the power of our interactive charting tools and our ability to promote transparency and disclosure of polling methods.
When Markos ("Kos") Moulitsas published the analysis last week that convinced him that the polls produced by Research 2000 were "likely bunk" and announced plans to sue his former pollster for fraud, he also made an unusual request:
I ask that all poll tracking sites remove any Research 2000 polls commissioned by us from their databases.
Given the still unexplained patterns in the results uncovered by Grebner, Weissman and Weissman, and the even more troubling response late last week by Research 2000 President Del Ali (discussed here), we have chosen to honor Kos' request, at least as it pertains to the active charts on Pollster.com that we continue to update (such as favorable ratings and vote preference questions for upcoming elections). As of this writing, we have removed the Daily Kos/Research 2000 results from the national Obama favorable rating and national right direction/wrong track charts. The rest should be removed from active charts by close of business today.
We have left in place, at least for now, Research 2000 poll results in active charts sponsored by other organizations, although we will also remove those if they so request. We may also revisit this decision as further developments warrant.
Finally, we will leave in place the results from prior elections as, for better or worse, we consider our final estimates (and the results upon which they were based) to be part of the public record. That said, we will likely follow Brendan Nyhan's lead and add a footnote about the controversy to our charts from 2008 and 2009 that include Daily Kos/Research 2000 data.
Last week, Andrew Sullivan wrote of his sense that people divide into two classes with respect to sleep and general exhaustion: "those with kids under ten and the rest of us." As the father of a 5- and a 7-year-old, I can both confirm that observation and extend it a little: Having kids under 10 probably also introduces a similar divide in coping with a multi-day snow-in.
For those without kids, getting snowed in may offer a respite, a chance to catch up on work, reading or some other long deferred project. Being confined to your house with two small children is a different experience entirely. Fifteen minutes of uninterrupted time is a luxury I have not known in many days. So apologies for the slower pace of blogging this week.
As I write, as the view from my window shows, we are experiencing another blast of snow, featuring white-out conditions and heavy winds. So far, we have avoided interruptions in power and bandwidth, but our luck may run out given today's 40-mile-an-hour gusts. So apologies in advance if I drop off the grid altogether. Ditto for Emily, who is also working from home to post poll and chart updates. Hopefully, we will all be back to normal soon.
Although we're still nine months away from the 2010 elections, DC elites are already handicapping the outcome.
In order to get a better sense of what official and political Washington is thinking about the 2010 elections, StrategyOne appended a question on our recent Beltway Barometer survey. The full (crosstab) results can be found here.
So what are the DC insiders thinking?
First, both Democrats and Republicans think Democrats will lose seats in the House in 2010. Only 7% of Democratic elites think Nancy Pelosi will maintain (5%) or increase (2%) her seat margin in the House. The question Washington elites are pondering is not IF Democrats will lose seats in the House, but HOW MANY seats they will lose. This HOW MANY question is where DC elites differ.
Elite Republicans in Washington generally expect to see their party gain between 20 and 39 seats in November. 56% of elite Republicans expect Democrats to lose between 20 and 39 seats, and 25% of elite Republicans expect Democrats to lose 40 or more seats and with this their majority. It is interesting to note here that Charlie Cook's current forecast (Democrats lose 20-30 seats) tracks closely to majority elite Republican sentiment. It is also interesting to compare this survey data to National Journal's Congressional Insiders Survey.
At this stage elite Democrats in Washington expect to keep the losses under 20 seats. Specifically, 62% believe Democrats will lose fewer than 20 seats and 28% believe Democrats will lose between 20 and 39 seats. Only 2% of elite Beltway Democrats currently think that Democrats will lose the House in November. This sentiment may change if Coakley loses today.
StrategyOne will be tracking elite DC opinion on the 2010 elections and reporting this data first to our clients and then to the media (and pollster.com readers).
The warning signs are certainly present for Democrats at this point. I noted in my December 16th article (What Does Bart Gordon's Retirement Tell Us?) that the Moore, Tanner, Baird and Gordon retirements certainly appear to be warning signs reminiscent of past tough election cycles. Vic Snyder's (AR-2) retirement announcement last week only seems to add to this fact pattern.
Although wild card events could easily change the arc of the 2010 election season, the trend is certainly beginning to suggest a Republican wave.
Robert Moran, StrategyOne
I just want to wish our readers a happy New Year on behalf of everyone at
Pollster.com. We appreciate your continuing support and look forward to
serving you again in 2010. See you Monday!
Emily and I are both taking some time off this week, so I will be posting links to whatever polls are released in a daily 'outliers' entry that will appear in the middle column on the front page. We'll be back to our usual schedule on 1/4.
We hope you are enjoying whatever holidays you are celebrating. Thank you for your continuing support; we look forward to updating you on all the polls in 2010!
Reader DG emails with a reaction to my column and accompanying post on Monday about Strategic Vision LLC:
I am a huge fan of your work and deeply appreciative of all the effort you and your staff have put into making pollster.com one of the best political sites on the Internet.
I do have to confess, though, to being deeply disturbed by the debacle with Strategic Vision. The fact is that there have been problems with the shop for years, yet little attention was paid, even while respectable bloggers (such as electoral-vote.com) made the call in 2004 to stop reporting SV's numbers as they were consistently, and suspiciously, pro-GOP. SV appears to me to be a very bald-faced effort to gratuitously influence national and local debates through nefarious means, and could have seriously damaged the reputation pollsters have worked so hard to build over the preceding decades. Even worse, Strategic Vision was enabled by people who damn well should have known better, like yourself.
Your site is a one-stop shop for journalists, pundits, Administration officials, etc. and anything that gets reported by you is magnified because of that. Moreover, these people do not have the time or training to effectively evaluate polls. As such, you have a responsibility to ensure methodological rigor is adhered by the pollsters whose results you report, and you must begin to call out anything from consistently being an over-the-top outlier to having an uncommonly large (such as Kaiser) or uncommonly small (Fox) party ID spread. I am not even saying to stop reporting polls like Kaiser or Fox, simply make it clear that there are methodological hang-ups with the data that your readership should be aware of. Your "general philosophy" of reporting results as long as the pollster "purports" to adhere to methodological basics is at best lazy, at worst, dangerous. Like it or not, websites such as yours have become such powerful aggregators of information that you must impose some kind of control to limit the ability of the mendacious and malicious from having an undue influence. You must be a Wikipedia, not a Google.
I agree with DG's general argument: Sites like ours need to do more to help readers evaluate individual pollsters and their methods. That was the spirit of the three-part series I wrote in August titled "Can I Trust This Poll?" and the reason why I want to use our site to actively promote better methodological disclosure by pollsters.
That said, I'll cop to "lazy" in just one respect: On Monday, I gave short shrift to our "general philosophy." It combines two goals: (1) making all poll results available and (2) providing an informed and critical context -- through interactive charts and commentary -- for understanding those results. The best examples are the tools we built into our interactive charts (the "filter" tool and the ability to click on any point and "connect the dots" for that pollster) to make it easy to compare the results of any individual pollster to the overall trend. We have also devoted considerable time to commentary on pollster house effects, both generally and for specific pollsters (like Rasmussen).
I'll also take issue with the idea that we "damn well should have known better" with respect to Strategic Vision. The evidence that they were a "consistently over-the-top outlier" relative to other pollsters is weak. This was Charles Franklin's take three years ago:
I tracked 1486 statewide polls of the 2004 presidential race, of which Strategic Vision did 196. The Strategic Vision polls' average error overstated the Bush margin by 1.2%. The 1290 non-Strategic Vision polls overstated KERRY's margin by 1.3%. Further, the variability of the errors was a bit smaller for Strategic Vision than for all the other polls combined.
Try the connect-the-dots tool on the 2008 Obama-McCain charts for Pennsylvania, Florida, Georgia and Wisconsin (the states where Strategic Vision released five or more "polls"), and make your own judgments for 2008.
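For readers curious about the arithmetic behind Franklin's house-effect check, it can be sketched in a few lines of Python. This is only an illustration, not Pollster's actual code; the function name and the poll margins below are made-up numbers.

```python
def average_signed_error(poll_margins, actual_margin):
    """Mean of (poll margin - actual margin), in percentage points.

    A positive value means the polls, on average, overstated the leading
    candidate's margin; a value near zero suggests no systematic lean.
    """
    errors = [m - actual_margin for m in poll_margins]
    return sum(errors) / len(errors)

# Hypothetical example: one pollster's final margins (candidate A minus
# candidate B, in points) in a state, versus the actual election margin.
pollster_margins = [5.0, 6.0, 3.0, 6.0]
actual = 3.5

print(average_signed_error(pollster_margins, actual))  # 1.5
```

Because overstatements and understatements cancel in a signed average, a pollster can look unbiased on this measure while still being noisy, which is why Franklin also looked at the variability of the errors.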
But again, I tend to agree with DG's central thrust. We can do better. I am particularly intrigued by DG's comment about being "a Wikipedia, not a Google." What Wikipedia is about, for better or worse, is "crowdsourcing." A few weeks ago, the Wall Street Journal described crowdsourcing as the idea that "there is wisdom in aggregating independent contributions from multitudes of Web users." How might a site like ours help individuals collaborate on efforts to evaluate pollsters? If you have thoughts or suggestions on any of this, we would love to hear them.
We are excited to share some minor changes involving our RSS feeds, Twitter and email updates that we hope will make life a little easier for our regular readers.
RSS - As you may know, we have a variety of RSS feeds set up -- all available here -- that allow you to read Pollster.com blog entries using RSS readers like Google Reader or FeedDemon. We upgraded the process we use to produce these feeds in a way that is mostly invisible, except that the feeds should update a little more quickly than before. The new feeds do use a new URL, so if you are reading this entry via one of our previous RSS feeds, it would be a good idea to resubscribe now using one of the links below, as we will shut off the old ones in a few weeks.
Twitter - Since many are starting to use Twitter in lieu of RSS, we have also set up some new automated Twitter feeds that feed a headline and a link for every new blog post on Pollster.com. Details on the specific options below, followed by links to each feed.
Email Updates - If neither Twitter nor RSS are your thing, starting today, you can also sign up for a daily email update that will deliver links to our most recent posts. You can subscribe using the links in the table below right now or via our RSS/Twitter page.
Feed Options - We have set up these RSS, Twitter and email feeds for a variety of different categories of blog content. "All content" gets you everything posted to Pollster.com. "Poll updates" includes only the brief posts on the latest polls. "Analysis" gets you everything but the poll updates from all of our contributors.
We also have automated RSS and email alerts specific to individual authors. The four Twitter accounts listed here for yours truly, Charles Franklin, Steve Lombardo and Kristen Soltis are personal accounts that can and will include commentary beyond what you see posted here.
Follow us on RSS, automated Twitter, or email alerts:
Blog Author RSS, Email alerts, and personal Twitter:
If you have any questions, comments or complaints about these feeds, please drop us an email or leave a comment below.
It's a change long overdue, but as some of you may have noticed this morning, the "analysis" posts on our blog now feature topic tags (such as the "Pollster.com" tag on this entry, below my name above this paragraph). Other posts will feature a list of the topics they discuss. Click a tag link to see a list of previous posts on that subject.
Since this entry's tag is not particularly interesting, you might want to try clicking a few of these: likely voters, automated polls, cell phones. We have also "back-tagged" all of the analytical blog posts I have written, and most of those from other authors, going back through all of 2008. Hopefully, these topic tags will make it easier to find what you're looking for on Pollster.com.
Please note that we have not yet applied tags to any of the "poll update" posts. The main reason is to avoid forcing someone interested in analysis pieces on a particular pollster (Rasmussen, for example) to wade through a very long list of poll updates to find what they're looking for.
As with any such change, this one may introduce some bugs we need to iron out. If you stumble on any, please don't hesitate to email us and describe the problem. And a big "thank you" to Emily, whose hard work made the new feature possible.
Here's a quick apology for my lack of blogging today and (in advance) for tomorrow too. I was traveling for much of today to Baton Rouge, Louisiana, where tomorrow I'm participating all day in the John Breaux Symposium at the Reilly Center for Media & Public Affairs at Louisiana State University.
If you happen to be in Baton Rouge and have some free time, tomorrow's discussion should be terrific. The topic is "Redefining Public Opinion Polling in an Age of Segmented Markets and Personalized Communication." In addition to yours truly, the panelists include our own Charles Franklin, Charlie Cook of the Cook Political Report, Scott Keeter of the Pew Research Center, Anna Greenberg of Greenberg Quinlan Rosner and Susan Herbst of Georgia Tech.
Meanwhile, two quick "outliers":
Louis Jacobson of PolitiFact.com did a fact check on Glenn Beck's citation of a result from the IBD/TIPP poll of physicians that I discussed on Pollster and in a column last month. Jacobson's piece includes considerable new reporting on the issue -- including the full text of the questions asked in the survey. It's worth a click.
Finally, we overlooked a new Rasmussen poll in New Jersey today. I just added the trial heat question to our chart; we should have the usual poll update post in the morning.
In the spirit of transparency, we need to provide full disclosure of a mistake we made and an apology to our readers and to Rasmussen Reports.
In the year since the election, we have worked to create a new collection of charts that track all available surveys not just for election trial heat results but also for a series of national measures, including presidential job approval ratings, favorable ratings, the national "generic" congressional ballot, the classic "right direction wrong track" question and a handful of measures of perceptions of the economy.
We started putting up new charts after the 2008 election knowing that some would "work" -- we would find enough reasonably comparable data from a variety of sources to make for a robust trend line -- and some would not. It was also probably inevitable that we would make a mistake or two along the way.
Well, as I discovered this past week, we did. For a handful of charts, we have been republishing some extraneous data from behind the gated subscriber pages on RasmussenReports.com. The affected charts are two that track perceptions of the economy (excellent/good/fair/poor and getting better/getting worse), our Obama favorable rating chart and the three charts that track the Obama job rating by party (Democrat, Republican and independent). Rasmussen does provide some results from these questions (usually just from one answer category) on their free-to-all "By The Numbers" page. For the economic charts, which represent the bulk of the data we misused, we were filling in results from the subscriber tabs for data omitted on the "By The Numbers" page. Compounding the error, as noted last week, in one instance we were including numbers on our Obama favorable rating chart that were actually mislabeled job approval rating results.
I could tell a long story about a small error that cascaded, but it all boils down to a lack of clear communication by me. As such I deserve and will take full blame. So there is no confusion in the future, our policy henceforth is iron-clad: We will not republish a single number in our charts unless it has already been published or released into the public domain by the pollster or sponsor.
Knowing that Josh Tucker has raised some good questions about the whole notion of gated, subscriber-only crosstabs, I want to make clear that no one at Rasmussen complained to us about this issue. We discovered it ourselves and subsequently reached out to apologize, an apology I repeat publicly today. Except for the erroneously labeled data, which we have already taken down, Scott Rasmussen has kindly granted us permission to leave the remaining data in place.
In correcting our error, however, it is now clear that two of our charts -- those tracking current and retrospective assessments of the economy -- will no longer "work" as intended. Virtually all of the data going forward would be coming from the Gallup Daily tracking and, as such, our chart would add no real value to those that Gallup publishes itself (here and here). We may rework our charts using only monthly values at some point in the future, but if we do, those charts will be based on monthly releases from other organizations, or from Gallup or Rasmussen should they ever opt to put monthly summaries into the public domain.
You may have noticed that the PoliticsHome box -- the one that included links to the top news stories of the day -- has disappeared from the site. Fear not, PoliticsHome fans, the box on our site is just on a temporary hiatus. PoliticsHome US, which is run by a different organization, recently launched a redesigned site that had the unfortunate side effect of "breaking" the sidebar box on Pollster (it was stuck on September 7).
The box should return soon. Meanwhile, those who have grown to enjoy their collection of "top stories right now" and the day's "must reads" can go directly to the Politics Home site.
And if you have come to use and depend on the PoliticsHome box on Pollster.com, we would appreciate it if you would email us or leave a comment. The more readers we hear from, the sooner I can get the box repaired and back in place.
Pollster.com quietly turned three years old this week. We launched with this post and a less pretty set of charts on September 1, 2006. Since that time, according to Sitemeter, we've served up over 80 million page views during over 28 million visits. We were honored to win the Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research (AAPOR) two years ago, receive praise for "excellent reporting" of pre-election polls in 2008 from statistical visualization guru Edward Tufte and, just last month, be named one of the 50 Best Websites of 2009 by Time.com (along with the likes of Google, Facebook and Twitter). Though exhausting at times, it has been a truly rewarding adventure, and we look forward to celebrating many more birthdays in the years ahead.
But as we pause and reflect on the last three years, I want to take a moment to thank those who have helped make Pollster.com a reality: Doug Rivers, who originally conceived of Pollster.com and continues to provide financial and technical support through our principal sponsor YouGov/Polimetrix; our partners at the National Journal Group; Charles Franklin, who has been a valued partner in this effort from day one; our growing list of contributors; the many talented individuals who helped develop our website, database and charts (though I'll single out Jeff Lewis, Seth Hill, Ben Schaffer and Quentin Fountain for their extraordinary contributions); and finally, Eric Dienstfrey and his successor Emily Swanson, the true heroes who worked the hardest to bring you an accurate and up-to-date Pollster.com every day.
And, of course, we owe the biggest thank you to all of you who visit, read and link regularly. We would not be here but for your support.
Coincidentally, Will Urquhart at SumOfChange.com just posted a well-produced video of the complete Netroots Nation panel that Charles Franklin and I participated in last month along with DailyKos contributing editor Greg Dworkin (DemFromCT), Charlie Cook of The Cook Political Report, and Nate Silver of FiveThirtyEight.com.
If you have time to watch just one presentation, I highly recommend the one by Charles Franklin that begins at about 19:55. Among other things, Charles provides the best review I've seen yet of the philosophy that guides the way we construct our charts and analyze polling data at Pollster.com.
My presentation begins at about 52:00 and is the made-for-TV-movie version (if you will) of the three-part series I posted last month entitled "Can I Trust This Poll?"
Some very good news: Time has named Pollster.com to its list of the 50 Best Websites of 2009! What makes this honor especially huge is that Time's list is not limited to political or blog sites but rather features a much broader range of sites that "make your online life more efficient -- or just more fun." This year's list includes names like Google, Facebook, Twitter, Flickr, Skype, YouTube, Amazon, and Wikipedia, so we are in truly amazing company.
We are also very gratified that Time specifically recognized interactive charting features that we have worked so hard on:
Pollster also aggregates [polling] data, but it has a Web interface that allows you to remix it on the fly. Is there a poll you don't trust? Throw it out! Want a different smoothing algorithm? Change it! How much difference does it even make? Magnify the X and Y axes with a mouse-click and find out.
Thank you, Time!
Now that I'm back from a week's vacation, Emily is taking two well-earned days off today and tomorrow, and I will be filling in, adding new polls to our charts and posting poll updates. As such, those updates will be a bit slower than usual for the next 48 hours. Apologies in advance for that.
Another piece of housekeeping and one of Eric Dienstfrey's final contributions to Pollster.com. We have produced a new chart that includes only polls that report the Obama job rating among all adults. The original Obama job rating chart that includes all surveys remains in place; this new chart adds a new way of tracking the trends.
We have discussed some of the challenges posed to our charts on measures like the Obama job performance rating by pollsters whose results show big "house effects" (consistent differences when compared to other pollsters). Our philosophy has always been to try to include all polls that claim to produce representative samples -- even those based on more controversial methods such as automated polls or those that survey respondents over the internet using pre-recruited panels -- to make it possible to use our interactive chart features to compare and contrast different surveys.
The problem is that when big house effects occur, the trend lines can sometimes display phantom trends, particularly when polls with consistently different results appear more frequently than others. This issue crops up most often in the "nose" of the trend line, which moves around more than the rest of the line as we add new polls to our database. The Rasmussen Reports surveys appear to be a big contributor in this respect, mostly because they are far more numerous. However, if you click on Obama job ratings that run consistently higher or lower than other polls, you will also see pollsters with similar house effects that poll less often.
Chart With All Surveys:
We offer the new all-adult-sample-only charts as one means of reducing the potential for "phantom" trends, though we have other potential improvements in the works. Please let us know what you think.
PS: A week or so ago we also broke out party identification into two charts: one based on results among all adults, the other among surveys of registered or likely voters.
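For the curious, the kind of sample-population filtering behind the new all-adults chart can be sketched in a few lines. This is purely illustrative -- the data structure and field names below are invented for the example, not Pollster's actual code:

```python
# Hypothetical poll records; the "population" field marks the sample type.
polls = [
    {"pollster": "Gallup", "population": "adults", "approve": 63},
    {"pollster": "Rasmussen", "population": "likely voters", "approve": 54},
    {"pollster": "CBS/NYT", "population": "adults", "approve": 62},
]

# Keep only surveys of all adults, as the new chart does.
adults_only = [p for p in polls if p["population"] == "adults"]
print([p["pollster"] for p in adults_only])  # → ['Gallup', 'CBS/NYT']
```

The original all-surveys chart simply skips this filter, so both views can be built from the same underlying poll table.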
Regular readers have probably noticed a new name appearing on the "poll update" entries on Pollster.com. Emily Swanson, a recent graduate of the University of Wisconsin-Madison, has joined the Pollster.com team and will be posting and updating our charts and tables regularly from here on out. Welcome Emily!
Unfortunately, Emily's appearance means that we are saying farewell to Eric Dienstfrey after nearly three years of relentless hard work and dedicated service. As announced a few months ago, Eric has been accepted to the Graduate Program in Film Studies at, coincidentally, the University of Wisconsin-Madison's Department of Communication Arts. So he is moving on to bigger and better things.
Sadly, today is officially Eric's last day at Pollster.com. I exaggerate not one bit when I say that the site as you know it would not exist but for his skill and tenacity. We will miss him, but wish him the best of luck in all of his future endeavors.
Sad news from Gallup editor-in-chief Frank Newport:
Alec Gallup, one of the polling world's most committed practitioners and dedicated supporters of the value of polling and all around good guys passed away last night. Alec was one of two sons of Dr. George Gallup and was the long time Chairman of the Gallup Poll. Alec lived in Princeton, New Jersey. Anyone who has worked at or with the Gallup Organization over the years and who came into contact with Alec recognized what a truly unique individual he was. He literally devoted all of his life to polling -- spanning his childhood days when he worked with his father as poll "ballots" came in via train to be tabulated at Gallup headquarters up to as recently as a week or two ago, when, even in declining health, he would call up and make suggestions about what poll questions Gallup should be asking in the current political environment. Polling has never had a greater champion, and those who knew Alec personally have never had a greater friend. Everyone who knew Alec will miss him immensely.
Alec Gallup was interviewed about his father and the early days of polling nine years ago for a PBS documentary. You can read a transcript here (via Mike Mokrzycki).
Kristen Soltis, a regular contributor here at Pollster.com, summarized her views on how the Republican Party can win back younger voters for the Huffington Post. Her bottom line:
In order to begin that effort, the GOP needs to have a positive message and vision that focuses on outcomes that matter to young voters. Right now, a lot of what Republicans are talking about is "less taxes" and "smaller government." But young voters are less convinced than older generations that the government tends to be inefficient and wasteful.
Among other issues, she also confronts the lack of diversity that was the subject of a widely read summary from Gallup this week showing that 89% of Republican identifiers are white (or more specifically, non-Hispanic white) and 63% are white conservatives. Soltis:
Longer term, the Republican Party has to confront the issue of diversity. If the Republican Party retains a brand as the party tailor-made for conservative older white males, it will not survive for long. Consider the fact that younger voters represent a more ethnically diverse cohort than other generations. The issue of winning the youth vote is more and more inextricably linked to winning support among Hispanics and African-Americans.
There's much more, and it's worth clicking through for a full read.
"Sometimes the magic works," said Chief Daniel George in the 1970 classic flim Little Big Man, "and sometimes it doesn't." The same can be said about the loess regression trend lines we plot in our charts.
When we plot pre-election poll results from various pollsters on the same charts, the trend lines usually have the helpful characteristic of minimizing the impact of outlier results and pollsters with consistent "house effects" on the overall estimate. In other words, if one of five or ten pollsters produces a consistently different result, their results do not typically skew the overall average significantly so long as the timing of the various polls is more or less random.
But for some of the national measures we have been plotting recently -- especially Obama's job and favorable ratings and the question about whether Americans perceive things to be "headed in the right direction" or "off on the wrong track" -- a few pollsters that do daily or weekly tracking are producing results with large house effects. Unfortunately that combination, along with the more sporadic timing of other national surveys, is producing the appearance of trends on some charts that are not really trends.
Last night, for example, Andrew Sullivan linked to two charts that appear to show trends in recent weeks: An uptick in the unfavorable rating for Obama and an increase in the percentage saying that things are off on the wrong track. In both cases, unfortunately, the apparent trends are an artifact of timing and house effects.
Let me explain, starting with the right direction/wrong track chart, which follows. (I am using screen shots rather than our live-embedded version here to preserve the look of the chart at the time of this writing -- follow the link to the live chart to use the filter tools yourself):
What Sullivan noticed was the recent uptick in the red line (wrong track) and downturn in the black line (right direction) at the far right (or "nose") of the trend. Now look what happens when we use our filter tool to remove from the trend the two pollsters -- Rasmussen Reports and DailyKos/Research2000 -- whose weekly tracking results provide nearly half (41 of 96) of the polls plotted in this chart so far during 2009. The recent trend disappears, producing an essentially flat line since mid-April:
So removing just two pollsters -- and particularly the two that contributed all four of the polls released in the last two weeks -- eliminates the apparent trend. One problem we have is that these two pollsters release weekly tracks, while the others poll more sporadically. Worse, virtually all of the national pollsters released surveys just before the Obama administration reached its 100th day in office, and we have experienced something of a poll drought since.
But wait. Perhaps those two weekly tracks are catching a more recent trend that we might miss if we rely (for the moment) on the other national tracking surveys that have not produced more surveys in the last few weeks.
To check, let's use the filter tool to select only the surveys from Rasmussen and DailyKos/Research 2000. And just to be safe, I will also turn up the smoothing setting to be especially sensitive to any recent trend:
The trend is almost exactly the same as the version with these pollsters removed, but you can also see that the gap between wrong track and right direction is larger on the second chart of just Rasmussen and Research 2000 (11 points) than on the previous chart excluding those two (4 points), with virtually all of the "house effect" coming from the Rasmussen survey.
So when we look at only the weekly trackers or only the other polls separately, we see flat lines over the last few weeks. When we put them together, we see a recent upward movement on "wrong track." Why? Because, when combined, the weekly trackers drive the "nose" of the trend line, and the trackers -- especially the Rasmussen track -- produce consistently different results. So as the Rasmussen results gain more influence in the trend line, they tend to drive the red line up and the black line down.
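The timing-plus-house-effect artifact described above is easy to reproduce with simulated data. The sketch below uses a crude tricube-weighted local mean as a stand-in for our loess trend lines, and the numbers are invented: the true "wrong track" level is flat at 50 all year, pollster A carries a +8 house effect and polls weekly, while unbiased pollster B stops polling after day 60. The middle of the line blends both pollsters, but the "nose" snaps to A's level once A supplies all the recent data -- an apparent trend with no real change:

```python
def local_mean(xs, ys, x0, bandwidth=14.0):
    """Tricube-weighted local mean (a crude stand-in for loess)."""
    num = den = 0.0
    for x, y in zip(xs, ys):
        u = abs(x - x0) / bandwidth
        if u < 1.0:
            w = (1.0 - u ** 3) ** 3
            num += w * y
            den += w
    return num / den if den else float("nan")

# Invented data: the true "wrong track" level is flat at 50 all year.
days_a = list(range(0, 120, 7))  # pollster A: weekly, +8 house effect
days_b = list(range(0, 61, 10))  # pollster B: unbiased, stops at day 60
xs = days_a + days_b
ys = [58.0] * len(days_a) + [50.0] * len(days_b)

mid = local_mean(xs, ys, 50)    # both pollsters contribute here
nose = local_mean(xs, ys, 115)  # only the biased weekly tracker remains
print(round(mid, 1), round(nose, 1))  # → 54.8 58.0
```

Nothing moved in the simulated "opinion" between day 50 and day 115; the apparent 3-point rise at the nose comes entirely from which pollster happens to be supplying the most recent data.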
Now let's repeat the exercise with the Obama favorable rating. First, the standard chart showing all surveys. The recent apparent trend is the sharp upward movement on the red "unfavorable" line:
In this case, the Rasmussen and Daily Kos/Research2000 results are six of the seven surveys conducted in the month of May (the new Gallup result was added this morning, after Sullivan's initial post). If we use our filter tool to remove the weekly trackers, the apparent recent change smooths out, reflecting the more gradual increase in Obama's unfavorable rating since the inauguration:
Again, are the trackers picking up a more recent trend that the other national surveys are missing? Here is what the chart looks like if we include only the Rasmussen and DailyKos/Research2000 polls. Here, we see virtually no trend since late March:
The last chart above also clearly shows the enormous house effect separating (in this case) Rasmussen and DailyKos/Research 2000 surveys, with Rasmussen producing consistently lower favorable and higher unfavorable ratings for Obama.
We have discussed the "why" of house effects, especially the consistent differences in the Rasmussen tracking, in previous posts. This case involves something a little more troubling for us: The way house effects and timing have combined to produce misleading "trends" that are more artifact than real. That is something we need to address in a systematic way.
Update: At the suggestion of a reader, Andrew Sullivan removed only the Rasmussen surveys, with results similar to what I obtained above.
Hopefully, you have already noticed something a little different about our site today. As of this morning, a National Journal banner now sits atop our site, signifying a newly expanded partnership with the National Journal Group, publishers of the National Journal, CongressDaily, The Hotline, The Almanac of American Politics, and NationalJournal.com. While we have partnered since January 2008 -- most visibly through my weekly column on NationalJournal.com -- this new arrangement involves an even closer business relationship.
Some of you may have experienced a few bumps last night as we made the changes, but everything should be working now. If you are experiencing any unusual problems with the site, please let us know.
And as long as we are doing a bit of housekeeping, I also want to take this chance to ask for your input on both what you like about Pollster and the things you dislike or wish we would improve. I have a long list of things I would like to fix, but would appreciate your "qualitative" guidance in setting priorities. So if you can, please take a moment to leave a comment below or email me with your thoughts about the things you would most like to see upgraded or improved.
And thank you all for your continuing support!
This is something of a bittersweet post. Eric Dienstfrey, my relentlessly hard working number two here at Pollster.com, will be moving on to bigger and better things in the fall. He has been accepted into the Graduate Program in Film Studies at the University of Wisconsin-Madison's Department of Communication Arts. Congratulations Eric!
This news means that we have a job opening and big shoes to fill. This is a full-time, entry-level position in Washington DC with health care benefits, and we anticipate hiring in mid to late June. Applicants should have excellent proofreading skills, strong attention to detail and an abiding interest in political polling. While not required, the ideal applicant would also bring some previous knowledge of or experience in website development/administration (especially with Movable Type), statistical analysis (especially with the R programming language) or database development (especially with Python/SQL).
If you are interested and would like more details on this unique opportunity, please email me and attach a resume.
Update: We have filled the opening. Many thanks to all that applied.
A quick update and apology for the slow pace of analysis posts over the last few days. This week is spring break for my children, and I have been trying to combine light blogging with a family visit (and rediscovering the challenge of finding uninterrupted time in the company of precocious 4- and 6-year-olds). I'll be back to full speed next week.
Let me start with the "my bad" portion of this entry: Three weeks ago, our colleague David Moore sent me an early draft of the "Dubious Polls Awards" commentary he co-authored with George Bishop. Moore asked for my comments, but in an oversight that speaks to my own poor management of an overflowing email inbox, I set the message aside without reading the attachment and soon forgot about it. He ultimately posted a summary here earlier today, with the more detailed version posted on stinkyjournalism.org. Had I read the draft, I would have given David feedback consistent with what follows. I apologize to David and our readers for that oversight, but I want to take this opportunity to air the issue publicly and allow readers to react and comment.
I make no apologies, on the other hand, for giving David Moore (and by extension, George Bishop) the opportunity to blog here at Pollster. As I noted earlier this week, David brings to this endeavor a long career in the field of survey research, as an author, an academic and a former managing editor of the Gallup Poll. George Bishop is one of the most respected academic survey researchers, and though his perspective is sometimes at odds with others in the field, his work is something any serious pollster should know (particularly his book, The Illusion of Public Opinion). If Moore and Bishop are willing to act as provocateurs and criticize the most respected voices in the field, fine. They have the expertise to do so with authority, and constructive criticism has always been part of our mission.
I also believe that blogging works best when edited least. Holding back posts for review and revision kills the spontaneity and give-and-take that make blogging work. As Andrew Sullivan has written, readers are his best editors. "E-mail seemed to unleash their inner beast. They were more brutal than any editor, more persnickety than any copy editor."
The only rule I have tried to set for contributors here at Pollster is to follow my tone: Avoid name calling and gratuitous snark and, above all, be fair.
My problem with the Dubious Polls summary -- and the feedback I should have given Moore and Bishop -- is that it offers far too much snark and name calling with just a smattering of the smart context that they are well equipped to provide (and do a better job providing in the longer version on stinkyjournalism.org). It is also, in places, less than fair.
Consider, for example, their "top award, earning five crossed fingers:"
[It goes to] all the major media polls for their prediction of Giuliani as the early Republican frontrunner. Collectively this group, beginning more than one year prior to the first statewide electoral contest in Iowa, relentlessly, and without regard for any semblance of political reality, portrayed Rudy Giuliani as the dominant Republican candidate in a fictitious national primary.
It is certainly true that most public pollsters reported results showing Giuliani leading, consistently, throughout most of 2007, on questions that asked Republican identifiers nationally to state their preference for the Republican nomination. And it is also true that far too many journalists and pundits (and some pollsters) looked at these early results, showing Giuliani with the support of just 30% of Republicans nationally, and wrongly assumed or predicted that the former New York mayor had some sort of lock on the Republican nomination.
If Moore and Bishop had argued in their summary that we should have paid more attention to polls in New Hampshire and Iowa than nationally, or that pollsters should have done more to caution poll consumers against reading too much into those early Giuliani leads, I would agree (and did, here and here, back in August 2007). I also agree that any "predictions" of a Giuliani triumph based on those 2007 horse race polls alone ignored many political realities, including the fact that we do not hold a single-day, national presidential primary.
But is it fair to characterize as a "prediction" every horse race result released by the ten organizations Moore and Bishop list? Is it fair to use the phrase "crossed fingers" -- words that imply deliberate dishonesty -- to depict the release of those results? It feels unfair to me.
Obviously, this site is not my exclusive domain. Our goal for Pollster.com is to present a wide variety of poll and survey related content from many different authors, and we do not expect every front page contributor to agree or speak with one voice. On the question of tone, however, I want to hear from you. Please read over the Dubious Polls piece here, and the longer version on stinkyjournalism.org. What advice would you offer -- to David Moore or to me -- for future contributions?
Please feel free to comment below or email me directly. I will try to post excerpts from email in a future post (please stipulate if you prefer that your comments remain totally off the record).
As implied by the post earlier today on our New York charts, we are slowly working through the process of adding to our site all sorts of new poll charts and tables at the state level. In addition to New York, we also put up charts for two states -- New Jersey and Virginia -- with races for Governor in 2009 and one, Ohio, with a newly open Senate race in 2010. We will be adding many more, albeit gradually, in the coming weeks.
We are limited, of course, by the availability of public poll data. Some states are polled more often than others. In other states (like Virginia), pollsters have held off testing general election match-ups, or will hold off until contested primaries are resolved. Our aim is to post all available horse race results for all contests in 2009 and 2010 for Senate and Governor and, ultimately, for the U.S. House of Representatives as they become available.
One new feature we hope to keep consistent across states is to include tracking charts for the favorable and job ratings of each state's governor and two senators, as well as the statewide job approval rating of President Obama.
To help you find charts, Pollster.com features comprehensive index pages that list all charts and tables for all states. Each index page has a consistent URL that uses the state's two-letter postal abbreviation (e.g. www.pollster.com/polls/nj/). To make it easier to navigate to those state index pages, we will be adding two tools later today:
- The pull-down menu for "The Polls" on our masthead will have a choice labeled "Find All Polls" that will take you to the page displaying an all grey map. Clicking on a state will take you to that state's index page.
- We will also modify the text links in the sidebar box that appears at the top of our right column throughout the site so that it includes links to the index pages for all 50 states, DC and the national index page.
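Given the URL convention described above, building a state index URL from a postal abbreviation takes one line. The helper name and the http:// scheme here are my own additions for the example:

```python
def state_index_url(postal_code: str) -> str:
    """Return a state's chart index URL, e.g. NJ -> .../polls/nj/."""
    return f"http://www.pollster.com/polls/{postal_code.lower()}/"

print(state_index_url("NJ"))  # → http://www.pollster.com/polls/nj/
```

The same pattern covers all 50 states plus DC, which is what makes a single all-grey clickable map practical as a navigation tool.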
If you have already glanced at our front page today, you know that we have introduced three new charts using the same Flash software that displayed pre-election polling results last year. All three are based on results from national surveys and are accompanied by tables that include links to the underlying source data:
- Barack Obama's favorable rating - displays results back as far as January 2007, when most national polling organizations first started asking Americans to rate him.
- Obama's job approval rating - currently based on questions about how Obama is handling "his presidential transition," this chart will evolve into one that tracks questions about how he handles his "job as president" once pollsters switch to that language after inauguration day.
- Right direction, wrong track - tracks answers to the question, "do you think things in this country are generally going in the right direction or are they seriously off on the wrong track," as asked by a dozen or so pollsters since Labor Day, 2008. Later this year, we hope to add more data going back further in time.
These three are just the beginning. We are also planning to add many more national measures over the course of the next few months and, of course, election tracking graphs for 2009 and 2010 races as data becomes available. Our menus and sidebar links should update within the next few days to allow easier navigation to the new charts.
Again, the charts use the same Flash display software that we used during the fall campaign (static non-Flash graphic versions are displayed for those without a Flash-capable browser). Pointing your mouse at any individual data point on the chart will pop up information about that poll (pollster, survey dates, sample size, etc.). Clicking on that point will connect the dots to other results from the same organization. Options accessed through the tools menu allow you to filter out polls by any organization or by the mode of the survey, vary the sensitivity of the trend line, change the axis ranges and embed your chart, customized as you prefer, on your own blog or web page. We produced a video back in September that demonstrates most of these features.
One thing you will notice immediately is that some of these charts show more distinct "house effects" than the horse race results we typically plot. The favorable rating in particular shows big differences, owing to the sometimes very different ways that pollsters ask Americans to rate their general impressions of political leaders. Notice, for example, the way the Rasmussen surveys produce a greater unfavorable percentage for Obama and the way the CBS/New York Times wording produces lower percentages for both the favorable and unfavorable categories. I wrote about some of these differences, particularly as they affect the CBS/New York Times results, in a column back in July, along with a sidebar post that included the text of the favorable rating question asked by each national pollster.
By playing with the "filter" feature in the charts, you can get a sense of the degree to which removing any pollster, or combination of pollsters, affects our overall estimate. What you will find is that the loess regression line is largely resistant to "house effects," even major ones. Remove the frequently updating Rasmussen automated tracking, for example, and the overall estimate changes from 71.5%-17.8% (favorable-unfavorable) with all polls included to 72.5%-16.0% without.
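That robustness is easy to demonstrate with a toy smoother. The sketch below uses a simple tricube-weighted local average (a deliberate simplification of the loess estimator, not Pollster's actual code) on synthetic data from three hypothetical pollsters, one of which carries a built-in 4-point house effect:

```python
import random

def local_estimate(points, x0, bandwidth):
    """Tricube-weighted local average at x0 -- a rough stand-in for the
    loess trend estimate plotted on a Pollster-style chart."""
    num = den = 0.0
    for x, y in points:
        u = abs(x - x0) / bandwidth
        if u < 1:
            w = (1 - u ** 3) ** 3  # tricube kernel weight
            num += w * y
            den += w
    return num / den

random.seed(1)  # deterministic synthetic data
# Hypothetical daily favorable ratings from three made-up pollsters;
# pollster "C" carries a built-in +4-point house effect.
polls = [(name, day, 70 + effect + random.gauss(0, 1.5))
         for day in range(90)
         for name, effect in [("A", 0.0), ("B", 0.0), ("C", 4.0)]]

all_points = [(day, pct) for _, day, pct in polls]
no_c = [(day, pct) for name, day, pct in polls if name != "C"]

with_c = local_estimate(all_points, 89, 30)  # estimate on the last day
without_c = local_estimate(no_c, 89, 30)
# Removing C moves the estimate by only about a third of its 4-point
# house effect, because C contributes only a third of the polls.
print(round(with_c, 1), round(without_c, 1))
```

The point of the toy: any one pollster's house effect is diluted in proportion to its share of the data, which is why even removing a frequent tracker shifts the trend estimate by only a point or so.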
Chris Bowers posted a two-part series this week that compares the final estimate accuracy of his simple poll averaging ("simple mean of all non-campaign funded, telephone polls that were conducted entirely within the final eight days of a campaign") to the final pre-election estimates provided by this site and Fivethirtyeight.com.
Chris crunches the error on the margin in a variety of ways, but the bottom line is that there is very little difference among the methods. These are his conclusions:
- 538 and Pollster.com even, I'm further back: Pollster was equal to 538 when all campaigns are included (the "1 or more" line) and with all campaigns except the outliers (the "2 or more" line). Kind of funny that not adjusting any of the polls, and adjusting all of the polls, results in the same rate of error. To no one's surprise, my method was much better among more highly polled campaigns, but still about 10% behind the other two once poll averaging (2 polls or more) comes into play. I make no pretense about my method needing polls in order to work.
- Anti-conventional wisdom : 538 had the edge among higher-polled campaigns, which means Pollster.com was superior among lower-polled campaigns. This goes against conventional wisdom. Many thought Silver's demographic regression gave him an edge among less-polled campaigns, but that Pollster's method only worked well in heavily polled environments. Turns out the opposite was true, and I'm not sure why. Maybe Silver's demographic regressions don't work, but his poll weighting does. Or something.
- Still very close : While I was a little behind, the difference between the methods is minimal. I'm a little disappointed, but clearly anyone can come very close to both 538 and Pollster.com in terms of prediction accuracy with virtually no effort. Just add up the polls and average them. It is about 90% as good as the best methods around, and anyone can do it.
You can see the full post for details, but his calculations are in line with what we found in our own quick (and as yet unblogged) look at the same data. We simply saw no meaningful differences when comparing the final, state-level estimates on Pollster to Fivethirtyeight.
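For the curious, Bowers' baseline is trivial to reproduce. The sketch below (with invented poll numbers, not the actual 2008 data) averages the Democratic-minus-Republican margin of every poll conducted entirely within the final eight days:

```python
from datetime import date

def simple_average(polls, election_day, window_days=8):
    """Chris Bowers' baseline: the plain mean of the margin in every poll
    conducted entirely within the final `window_days` days before the
    election. Each poll is (start_date, end_date, dem_pct, rep_pct)."""
    margins = [dem - rep
               for start, end, dem, rep in polls
               if (election_day - start).days <= window_days
               and end <= election_day]
    return sum(margins) / len(margins)

# Hypothetical final-week polls for one state (invented numbers).
polls = [
    (date(2008, 10, 28), date(2008, 10, 30), 51, 44),
    (date(2008, 10, 30), date(2008, 11, 1), 50, 45),
    (date(2008, 11, 1), date(2008, 11, 3), 52, 44),
    (date(2008, 10, 20), date(2008, 10, 22), 48, 46),  # too early -- excluded
]
print(simple_average(polls, date(2008, 11, 4)))  # → 6.666666666666667
```

That "virtually no effort" quality is exactly Bowers' point: the whole method is one filter and one mean.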
Keep in mind that we designed our estimates, derived from the trend lines plotted on our charts, to provide the best possible representation of the underlying poll data -- nothing more and nothing less. So the accuracy of our estimates tells us that the poll data alone, once aggregated at the end of the campaign, provided remarkably accurate predictions of state-level election outcomes. The fact that the more complex models used at FiveThirtyEight were equally accurate raises the question: In terms of predictive accuracy, what value did Fivethirtyeight's extra steps (weighting by past polls performance and the various adjustments based on other data and regression models) provide?
Over these last two years, we are thankful to have had the opportunity to share the most exciting, compelling election of our lifetimes with you from this unique vantage point. The last year in particular has been long and sometimes grueling, so my family and I are looking forward to taking the next week or so off. Eric will be checking in from time to time, but I will be pretty much off the radar until the new year.
It will be a new year full of new polls and new challenges, and we are looking forward to bringing you new charts and analysis to follow it all.
Until then, from all of us here at Pollster.com, we wish you a Merry Christmas, a Happy Hanukkah or joy in whatever way you celebrate the holiday season. And if I don't get the chance to say it online, a happy New Year too.
As you may have noticed, we have inserted a new map at the top of our main page. The classic maps for President, Senate, Governor and House races are still there, just use the "map chooser" pull-down menu to see them. We will be live blogging tonight, starting a little after 5:00 Eastern Time.
Here is a quick description of the numbers in the map and where we get them.
First, we will be monitoring the five major television networks and the Associated Press and tracking the calls that each makes in the race for President. Once one network projects a state for a candidate, we will color the state light red or light blue for McCain or Obama. When all six organizations have made their projections, we will change the map to a darker shade of red or blue. States where the polls have closed, but where it is still too early or too close to call, will be colored yellow (also the color we will use in the extraordinarily unlikely event that any of the networks make a conflicting call).
To see which networks have made their projections, just point your cursor at the state to see an expanded "tool-tip" displaying that information. The tool-tip for each state will also display two important columns of data:
Pollster Trend -- The far right column will display our most recent trend estimate based on the pre-election polls. And just like our standard map, you can click on the state to display our chart for that state. See our map FAQ for more information on how we compute our trend estimates.
Est Result -- Shortly after the polls close, we hope to display the "estimated result" of the vote shares in each state culled from the network exit poll tabulations posted online (by CBS, CNN, Fox and NBC). These tabulations show the exit poll results by demographic and other subgroups (age, race, party, etc.). We will extrapolate the underlying vote estimates used to weight each table and display these in the Estimated Result column on the tool-tip.
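The arithmetic behind that extrapolation is simple: because each cross-tab is weighted to the analysts' current estimate of the overall vote, the share-weighted sum of the subgroup results reproduces that overall estimate. A sketch with invented numbers (not from any real exit poll):

```python
# Hypothetical exit poll cross-tab for one state: each row is
# (share of electorate, Obama %, McCain %) for an age group.
by_age = [
    (0.18, 66, 32),  # 18-29
    (0.29, 52, 46),  # 30-44
    (0.37, 50, 49),  # 45-64
    (0.16, 45, 53),  # 65+
]

# The overall vote estimate used to weight the table is the
# share-weighted sum of the subgroup percentages.
obama = sum(share * pct for share, pct, _ in by_age)
mccain = sum(share * pct for share, _, pct in by_age)
print(round(obama, 1), round(mccain, 1))  # → 52.7 45.7
```

In practice we work in the other direction, reading the published tabulations and recovering the underlying estimate they imply, but the identity is the same.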
During the course of election day and evening, the people who run the exit poll and projection operation have various estimates of the outcome in each state, estimates that gradually improve as they obtain first exit poll interviews, later the actual vote cast in random samples of precincts, and ultimately the actual vote count. When the polls close, and at two or more additional times during the night, the analysts will re-weight the tabulations based on more current and accurate estimates.
Important disclaimer: These estimates are most likely not undiluted "exit poll" results. At poll closing, the exit poll tabulations that appear online are most likely weighted to a "composite estimate" that averages the results of exit poll interviews with the averages of pre-election polls (not at all unlike the trend estimates we post here at Pollster.com). Also, as we learned during the primaries, the weighting of the cross-tabulations frequently falls far behind the up-to-the-minute estimates that network "decision desk" analysts use to call the race.
Note that we have added separate labels for the individual congressional districts of Nebraska and Maine, since these states allocate electoral votes partially by district and may split their electoral votes. Unfortunately, we were only able to obtain public polls for Nebraska-02, so that is the only district label that will click through to a chart.
First, a very personal thank you. I was surprised and deeply gratified by the response to my post a week ago about the death of my father-in-law, both in the comments and by email. I apologize for not responding to every note personally -- I am hoping to do so after the election. My wife's family has, for most of the last week, been practicing the Jewish ritual of Shiv'ah, and I have frankly struggled to balance my obligations to family and those to this site during the final, incredibly busy week for which we have prepared for the better part of two years. So your kind words have been a great comfort.
More important, those who left comments should know that, without realizing it, they paid their own virtual visit to the home of the Burstin family and thus did what Jews consider a great "mitzvah" (a good deed commanded by God). On Tuesday night, following the funeral, I shared my post with my wife and my brother-in-law, who had been, up until then, understandably preoccupied with other matters. They immediately scrolled down to read the comments and were visibly moved by the outpouring of kindness shown by so many strangers who never knew their father. So please accept my thanks on their behalf as well (the most appropriate place to make contributions in Frank Burstin's memory would be the United States Holocaust Memorial Museum).
Second, and more generally: We quietly passed the milestone of a million page views in a day about a week ago and have served over 1.2 million pages on five of the last six days. During October, we had over 23 million page views and 1.9 million absolute unique visitors. I find that level of traffic truly mind-boggling, and it is a big reason why I have been so committed to working and posting over the last week. Thank you for your confidence.
We realize that most of you are experiencing a once-every-four-years obsession with polls and polling data, so we have no illusions about where the traffic will head after Wednesday. But we will still be around, and we have plans for aggregating, charting and analyzing public opinion more broadly as we move into a new presidential administration next year. We hope you come back and check in on us from time to time.
Meanwhile, a few "housekeeping notes." We apologize for the slowdown many of you experienced this morning, which seemed to peak about noon Eastern time. In reaction to an unusually heavy load of traffic on our servers, our IT support staff made some changes to the way our computers are configured, which appears to have eliminated most of the slowdown.
Some of you also emailed to report a minor glitch in the charts that made the trend line appear to turn back on itself slightly in a few instances, visible only when you focused on just the last month or two of the trend line. Our Flash developer quickly smashed the bug, and we uploaded a new version of the chart program that should solve the problem (though you may need to clear your browser cache and reload the page). If you are still seeing the problem, or any other glitch, please drop us an email.
Also, we quietly added a feature last week that some of you will find helpful over these last 24 hours. The main "Polls" pages for the races for President, Senate, Governor and U.S. House now feature tables showing the current trend estimates and classifications for all races, including all 107 House races for which we have data (some of which were added too late to be included on our House map). An undocumented tip: you can easily copy and paste those tables into a spreadsheet for sorting or further manipulation.
Stay tuned for more tomorrow: We will be live blogging about the results tomorrow night and will have a special, expanded election night map with results and network calls.
And finally, a request I hope offends no one. I'm going to add a "donate" button to our front page and sidebar so that, if you have enjoyed this site and would be willing, you can make a contribution to help us build a better site for the future and help me give an end-of-cycle bonus to Eric and the others who have worked very hard over the last two years to bring you the charts and data every day. (And no, donations are not tax deductible.)
I have restored the comments function I temporarily disabled earlier this afternoon. Those who continue to post abusive or profane commentary will be banned without warning, as our time allows. If you are in doubt, please read my post from earlier today.
If the tone reverts to the out-of-control ugliness I have seen in recent weeks, we may shut the comments off altogether for the remaining days before the election.
So why aren't the comments working this afternoon? I temporarily disabled them. Why?
Let me explain. We have always considered it important to maintain a largely unmoderated comments section that allows for dissenting views and debate over the topics raised by each post. Under the right circumstances a community of commenters forms that will help maintain a mostly civil forum for the expression of dissent and add great value to what we post here.
In recent days, I have seen some very impressive examples of our comments section functioning exactly as it should. Last Friday, I posted a lengthy entry that discussed likely voter models. It generated many comments. Some dissented from my argument or questioned some aspect of it, some added thoughts or theories of their own. And while some disagreed with each other, the comments that I read were generally civil, respectful and connected to the topic at hand.
Monday I posted a very personal note on the passing of my father-in-law. The many comments that followed were moving and beautiful. The outpouring restored my faith in the idea of an open, mostly unmoderated comments section (and thank you, thank you to all who posted so many kind words -- it meant a great deal to my family).
And then there are the comments left on our "poll update" posts that have degenerated into something altogether different. And that is partly my fault.
We do have a basic comments policy that requires, simply, that commenters "keep the dialogue civil." It also warns that "comments that we consider abusive, profane, hateful or racially, ethnically or otherwise objectionable" are subject to deletion, and that readers who post such comments are subject, at our discretion, to being banned from commenting on this site.
Until this past summer, I forwarded every comment left on this site to my email inbox and made it a policy to read (or at least skim) every comment. Occasionally, someone would post an abusive or overtly profane comment, and I would delete it. A small handful of commenters were so brazen about ignoring the rules that we banned them.
Unfortunately, when the volume of comments started to exceed a hundred or more every day, I could no longer keep up, and to be honest, at that point things started to get out of control. We are now getting more than a thousand comments a day -- in the last week, we received more than 10,000. At that pace, given our modest resources, it is simply impossible to read every comment, much less try to monitor or police them.
And unfortunately, the abuse, insults and profanity have grown to an embarrassing level. Two days ago, I received an email from the father of a 2nd grader. He wanted to know if we offered a "kid friendly" version of Pollster:
[My child's] school wants to share the site with the rest of the students. The only problem is that some of your visitors can be quite cantankerous with one another in the comments sections. Is there any way to disable those on our end? Any ideas or suggestions on how the school can use your site in a way that is appropriate for young kids?
No, we do not have a way to offer a child-friendly version of Pollster, but I do not understand why the adults who use this site and comment on it cannot find a way to act like adults. This is not a locker room and not a night club. We have a simple policy, and the adults who comment here ought to find a way to follow it or leave. The alternative is that we disable comments altogether, just as we have this afternoon.
Yesterday, a very frequent commenter posted a comment that certainly offended me, and several other readers who emailed in protest. It said, in reference to Barack Obama, "the American Public doesn't want a Jew-hating Socialist running the economy." Now while I find that comment extremely uncivil and offensive, some might see it as a contrary opinion. So I posted a comment of my own asking the commenter to explain how that remark qualifies as remotely civil and intelligent and why I should not consider it a violation of our comment policy.
He ignored my question and instead posted a series of comments this morning including this charming response to another reader on another subject: "You STUPID liberal f*ck."
I asked a second time for some explanation. I heard none. He refused to answer my question and told another reader that he has nothing to explain. As such, he is free to take his comments elsewhere. As of today, he is banned and no longer welcome to comment on Pollster.
Now, I understand that a lot of obnoxious, offensive, petty name-calling has been going on in our comments section for months, and that this particular commenter's behavior is just par for the course. I recognize that I bear some responsibility for letting it get out of control, but while I want to clean it up, I have no interest in who said what first or why. Since it seems to be hard to get the attention of some of you, I have shut off our comments for the afternoon. I will turn comments back on in a few hours, but before I do, I want to make a few things clear:
1) If you can't say it on broadcast television, please don't post it here. Is that so hard? If you can't act like an adult when you comment, please take it somewhere else. I have only banned one commenter today, but there are obviously many others who have gotten into the habit of ugly, profane rants directed at other readers. These need to stop. Today. Those who ignore this plea when the comments come back on may find themselves locked out.
2) Don't pick a screen name that is, itself, profane or abusive of other commenters. Doing so is grounds for being banned.
3) Banned users are banned permanently. They are not permitted to return under a new screen-name. Where possible, we will take action against those who violate this rule, including contacting webmasters or postmasters at the ISPs or businesses where the comments originate.
Now, I would like to promise that Eric or I will spend every moment of the next five days carefully monitoring the comments and enforcing these rules, but that will obviously be impossible. So I want to make a plea to the readers who care about our comments section and want to keep it open: Be a community. Help us convince the others to clean up their act. If someone says something offensive when the comments come back on, please try to convince them to apologize and stop. If some continue to flout these rules when the comments come back, then please email us to nominate those who deserve to be banned. However -- and note this well -- please follow these guidelines (borrowed from the DailyKos policies for their "Hide Ratings"):
- Do not request that we ban someone for expressing contrary opinions, so long as they do so in a civilized fashion.
- Do not request that we ban someone you are actively having a fight with.
- Please understand that we won't have time to respond personally to email received this week or to resolve disputes; we decide what we consider offensive, and all decisions are final.
We have precious little time over the next 5 days and I have no sense of humor about the continuing abuse of this site and its readers. Shutting off comments altogether remains a real option, so please help us out.
For those who have tried to post comments this morning and received the following message:
Comment Submission Error
Your comment submission failed for the following reasons:
You are not allowed to add comments.
You have not been banned. The problem, which we are looking into, appears to have blocked all comments since a little after 10:00 a.m. Eastern Time.
We apologize for the inconvenience and will update with more information when we have it.
The bug has been solved and our comments feature is back and running.
I'll have more details in tomorrow morning's update, but the most obvious news tonight is that the electoral vote count displayed at the top of our maps has finally caught up with the shift in the national trend: Obama now holds a 296 to 163 advantage over John McCain, as both Colorado and Florida shift to "lean Obama."
So for at least some, this may be the ideal time to offer you an added feature: You can embed a version of our small map like the one below on your own blog or website.
Just select and copy all of the code in the box below and paste it into your blog or web site (using the HTML mode):
Thanks to our partners at Slate.com, you now have an easy way to check the latest Pollster.com trend estimates using Apple's iPhone.
Here are the details from Slate:
Today Slate introduces Poll Tracker '08, an application that delivers comprehensive up-to-the-minute data about the presidential election to your iPhone, iPhone 3G, or iPod touch. Using data from Pollster.com, the Poll Tracker '08 delivers the latest McCain and Obama polling numbers for every state, graphs historical polling trends, and charts voting patterns in previous elections. Poll Tracker '08 allows you to sort states by how contested they are, how fresh their poll data is, or how heavily they lean to McCain or Obama.
You can download Poll Tracker '08 on the iPhone App Store. It costs just 99 cents, a small price to pay for satisfying your craving for data anytime, anywhere. Get it on the App Store.
Yes, we have U.S. House race data! Actually, we have posted charts for House races for some time, but on Friday we finally put up our revised House map (it's accessible via the "map chooser" pull down from any of the large maps on the site) and added that page to our main menu.
Readers should know that public polling data for U.S. House races tends to be scarce. In 2006, we scoured various sources for polls in House races and ultimately found polls in just 94 of the 435 districts. This time, we created a map with labels for the 111 districts rated as competitive or potentially competitive by our colleagues at the Cook Political Report. So far, we have logged poll results for 60 districts, with another dozen or so on the way this week.
A note of explanation about the map and House scoreboard: Where we have poll data, we classify the race based on the polling data using the same criteria as for the statewide races. Where we have no polls at all, we assume no change in party status.
We assume there are U.S. House surveys out in the public domain that we know nothing about. So if you know of a poll in a U.S. House race that's not listed here, please email us (questions at pollster dot com) with the details. Thank you!
I totally neglected to link to this -- apologies for that: Charles Franklin and I are joining Nate Silver of FiveThirtyEight.com for a live chat on WashingtonPost.com. The chat starts right now (noon Eastern Time) but should be available for review afterwards.
A little over two years ago, we launched Pollster.com with a mission of providing a complete compilation of poll results, expert analysis and graphical tools to help readers make sense of polling data. Today, after two long years of development, our commitment to interactive graphical tools takes a quantum leap.
At a moment when the political world is swimming in a flood of polling data, we are pleased to announce a new, fully interactive Flash chart application that will plot all of the poll charts here on Pollster.com. The new charts allow you to:
- Select or limit the polls used to draw trend lines and calculate polling estimates with the "filter" tool. If you don't like a particular pollster, just un-click and take them out (yes...really).
- Toggle between the display of the default trend line and alternatives that are more or less sensitive using the "smoothing" tool -- these are essentially the same as the "steady blue" and "ready red" trend lines often used by Charles Franklin.
- Hold your mouse over any data point to display details about that poll.
- Click the mouse on any data point to "connect the dots" between all polls fielded by that pollster.
- Modify the date range (x-axis) and percentage range (y-axis) by clicking on either axis directly or with forms found on the "tools" menu.
- Select the candidates you want to see displayed on the chart with the "choices" tool.
- Toggle the display of data points, trend lines and grid lines on or off with the "plot" tools.
- Copy the code necessary to bookmark your customized chart or share it via email with the "URL" tool.
- Get the code necessary to place a small version of the customized chart on your own blog or web site with the "Embed" tool. [We missed a bug in the embed function that prevented embedded charts from displaying customized filters. Apologies -- we should have this cleaned up soon. Update: We believe we've squashed the embed bug. If you experience problems with the embed tool or anything else, please email us with details at questions at pollster dot com.]
As of this posting, we have converted the presidential charts for Ohio, Colorado, Virginia, Florida, Michigan, Pennsylvania, New Hampshire, Nevada and Minnesota, along with the National Trend, to the new format, and we should have most of the remaining presidential charts converted over the course of the day. Although we hope you will dive right in and try these new features, we have also created a quick video guided tour of some of the most important ones.
We are really excited about the possibilities these new tools create for poll junkies to explore and discover the wealth of data now available, and we hope you are too. If you like what you see, we hope you will share the news with your friends. If you have a blog or diary, we would very much appreciate a link to this entry -- and please try out the embedding feature to see how it works. I'll be back later with some extra tips on how to use the charts. Meanwhile, please don't hesitate to email us with your questions or reactions.
Where credit is due: A lot of very talented people worked exceptionally hard to make these new charts possible, but most deserving of thanks are Quentin Fountain and Technorganix (for Flash design and development), Jeff Lewis and Seth Hill (for development of the underlying database and statistical architecture), Charles Franklin (for the design of the original charts and the regression trend lines and smoothing routines) and, last but not least, Eric Dienstfrey, who has entered virtually every piece of the data now displayed on Pollster.com and is doing the nitty gritty work of implementing these charts and keeping Pollster operating every day. Update: I nearly forgot to thank those of you -- and you know who you are -- who helped beta test these charts over the past week.
Sorry to all for the slow pace of updates today. We have had an Internet service interruption that's kept us offline for much of the afternoon. We seem to be back online...for the moment at least.
If you are a regular reader, you no doubt noticed the new PoliticsHome.com panel that now appears in the right column on every page on Pollster.com.
PoliticsHome.com launched in April 2008 in the United Kingdom and has quickly established itself as a leading resource for constant updates on British politics. This past week, PoliticsHome kicked off Campaign08, a new site devoted to U.S. presidential election news.
Here is how it works: PoliticsHome.com is modeled on the financial news services used by traders, providing links and headlines -- "everything you need to know, each minute" -- on a single screen that updates constantly throughout the day. They have a team of political journalists in bureaus in London and Washington who constantly review political news, publishing a full news digest each morning at 6 a.m. Eastern time, with live updates until midnight.
They are launching the new site in association with Pollster.com. We will provide PoliticsHome with regular polling updates, and they have created a miniature version of the Campaign08 site that now runs in the right column on every page on Pollster.com. The "Latest Developments" box gets the same minute-by-minute updates as their main site. So now, you can visit Pollster.com for both the latest poll results and a quick, constantly updating digest of all the breaking political news across the web.
PoliticsHome.com will also be introducing some data collection efforts in the coming weeks that will be of interest to Pollster.com readers, so stay tuned.
First, a quick update on yesterday's post about the Obama campaign briefing: James Barnes of the National Journal has a write-up of the strategy spelled out by campaign manager David Plouffe yesterday that includes extended verbatim excerpts from the briefing.
In the briefing, Plouffe emphasized that when it comes to polling "all we care about is these 18 states." I had not specified those 18 states, but they are: Alaska, Colorado, Florida, Georgia, Iowa, Indiana, Michigan, Missouri, Montana, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Ohio, Pennsylvania, Wisconsin, and Virginia.
Second, on Tuesday, I participated in a panel on polling put on by the National Journal, along with Hotline editor Amy Walter and pollsters Ed Reilly, who conducts the Diageo Hotline poll, and Geoff Garin, who worked for Hillary Clinton earlier this year. Video excerpts from that panel are now available at this link.
Finally, in an hour or so, I will be going offline and heading over to Invesco Field for the evening. I've been experimenting with Twitter this week (under the handle of, what else, MysteryPollster), and will probably post some comments there tonight (with the caveat that any such "tweets" are likely to be a bit off our usual focus on polling and surveys).
I will be taking a break this week, although I filed a National Journal column for the week, which should appear in a day or two. Meanwhile, Eric will continue with poll updates and our other contributors should be active this week. See you next week!
In the first two installments of this online dialogue, I asked a question we have heard from readers about why we choose the results for "likely voters" (LVs) over "registered voters" (RVs) when pollsters release both. Charles answered and explained our rationale for our "fixed rule" for these situations (this is the gist):
That rule for election horse races is "take the sample that is most likely to vote" as determined by the pollster that conducted the survey. If the pollster was content to just survey adults, then so be it. That was their call. If they were content with registered voters, again use that. But if they offer more than one result, use the one that is intended to best represent the electorate. That is likely voters, when available.
Despite my own doubts, I'm convinced by the rule for this reason: I can't come up with a better one. Yes, we could arbitrarily choose RVs over LVs until some specified date, but that would still leave us plotting numbers from pollsters that only release LV samples. And on which date would we suddenly start using the LV numbers? After the conventions? After October 1? What makes sense to me about our rule is that in almost all cases (see the prior posts for examples) it defers to the judgment of the pollster.
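The fixed rule amounts to a simple preference ordering over whatever populations a pollster reports. A sketch (labels follow the conventional adults/RV/LV abbreviations; the numbers are invented for illustration):

```python
def preferred_sample(results):
    """Pollster's 'fixed rule' for horse-race charts, as described above:
    when a pollster reports several populations, take the one intended
    to best represent the electorate. `results` maps population labels
    ("A" = adults, "RV" = registered, "LV" = likely) to (dem, rep) pairs."""
    for population in ("LV", "RV", "A"):  # most to least likely to vote
        if population in results:
            return population, results[population]
    raise ValueError("no usable sample")

# A hypothetical release reporting all three populations: the LV
# numbers win, because that is the pollster's model of the electorate.
poll = {"A": (46, 42), "RV": (47, 43), "LV": (49, 44)}
print(preferred_sample(poll))  # → ('LV', (49, 44))
```

A release with only RV numbers would, by the same rule, chart the RV result: the rule never overrides the pollster's own choice of population.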
Several readers posed good questions in the comments on the last post. Let me tackle a few. Amit ("Systematic Error") asked about how likely voters are constructed and whether we might be able to plot results by "a family of LV screens (say, LV_soft, LV_medium, LV_hard)" and allow readers to judge the effect.
I wrote quite a bit back in 2004 about how likely voter screens are created, and a shorter version focusing on the Gallup model two weeks ago. One big obstacle to Amit's suggestion is that few pollsters provide enough information about how they model likely voters (and how that modeling changes over the course of the election cycle) to allow for such a categorization.
"Independent" raised a related issue:
Looking at the plot, it appears that Likely Voters show the highest variability as a function of time, while Registered Voters show the least. Is there some reason why LVs should be more volatile than RVs? If not, shouldn't one suspect that the higher variability of the LV votes is an artifact of the LV screening process?
The best explanation comes from a 2004 analysis (subs. req.) in Public Opinion Quarterly by Robert Erikson, Costas Panagopoulos and Christopher Wlezien. They found that the classic 7-question Gallup model "exaggerates" reported volatility in ways that are "not due to actual voter shifts in preference but rather to changes in the composition of Gallup's likely voter pool." I also summarized their findings in a blog post four years ago.
Finally, let me toss one new question back to Charles that many readers have raised in recent weeks. The two daily tracking surveys -- the Gallup Daily and the Rasmussen Reports automated survey -- contribute disproportionately to our national chart. For example, we have logged 51 national surveys since July 1, and more than half of those points on the chart (27) are either Gallup Daily or Rasmussen tracking surveys. Are we giving too much weight to the trackers? And what would the trend look like if we removed those surveys?
Mark started this conversation with "Why we choose polls to plot: Part I," asking how we decide to handle likely voter vs. registered voter vs. adult samples in our horse race estimates. This was especially driven home by the Washington Post/ABC poll reporting quite different results for adult, RV and LV subsamples, but it is a good problem in general. So let's review the bidding.
The first rule for Pollster is that we don't cherry pick. We make every effort to include every poll, even if it sometimes hurts. So even when we see a poll way out of line with other polls and what we "know" has to be true, we keep that poll in our data and in our trend estimates. There are two reasons. First, once you start cherry picking you never know when to stop. Second, we designed our trend estimator to be pretty resistant to the effect of any one poll (though when there are few polls this can't always be true). That rule has served us pretty well. Whatever else may be wrong with Pollster, we are never guilty of including just the polls (or pollsters) we like.
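The resistance Charles describes can be illustrated with a toy example. The sketch below is not Pollster's actual trend estimator -- it simply shows the general principle that a smoother built on an outlier-resistant statistic (a running median) barely moves when one poll lands far from its neighbors, while a running mean shifts noticeably.

```python
# Toy illustration (NOT Pollster's actual trend estimator): a running
# median resists a single outlying poll better than a running mean does.
from statistics import mean, median

def running(stat, series, window=5):
    """Apply `stat` over a centered window of up to `window` points."""
    half = window // 2
    return [stat(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

polls = [48, 49, 48, 50, 49, 48, 49]          # margin estimates, in time order
with_outlier = polls[:3] + [60] + polls[4:]   # one poll way out of line

# The mean trend gets pulled toward the outlier; the median trend does not.
print(running(mean, with_outlier))
print(running(median, with_outlier))
```

At the outlier's position the windowed mean jumps by two points while the windowed median is unchanged, which is why a single wayward poll leaves a median-style trend essentially where it was.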
But what do we do when one poll gives more than one answer? The ABC/WP poll is a great example, with results for all three subgroups: adults, registered voters and likely voters. Which to use? And what to do that remains consistent with our prime directive: never cherry pick?
Part of the answer is to have a rule for inclusion and stick to it stubbornly. (I hear Mark sighing that you can do too much of this stubborn thing.) But again the ABC/WP example is a good one. Their RV result was more in line with other recent polls while their LV result showed the race a good deal closer. If we didn't have a firm, fixed rule we'd be sorely tempted to take the result that was "right" because it agreed with other data. This would build a bias into our data that would underestimate the actual variation in polling, because we'd systematically pick results closer to other polls. Even worse would be picking the number that was "right" because it agreed with our personal political preferences. But that problem doesn't arise so long as we have a fixed rule for which populations to include in cases of multiple results. Which is what we have.
That rule for election horse races is "take the sample that is most likely to vote" as determined by the pollster that conducted the survey. If the pollster was content to just survey adults, then so be it. That was their call. If they were content with registered voters, again use that. But if they offer more than one result, use the one that is intended to best represent the electorate. That is likely voters, when available.
We know there are a variety of problems with likely voter screens, evidence that who is a likely voter can change over the campaign, and the problem of new voters. But the pollster "solves" these problems to the best of their professional judgement when they design the sample and when they calculate results. If a pollster doesn't "believe" their LV results, then it is a strange professional judgement to report them anyway. If they think that RV results "better" represent the electorate than their LV results, they need to reconsider why they are defining LV as they do. Our decision rule says "trust the pollster" to make the best call their professional skills can make. It might not be the one we would make, but that's why the pollster is getting the big bucks. And our rule puts responsibility squarely on the pollster's shoulders as well, which is where it should be. (By the way, calling the pollster and asking which result they think is best is both impractical for every poll AND suffers from the same problems we would introduce if we chose which results to use.)
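The fixed rule is mechanical enough to express as a short function. The sketch below is only illustrative -- the population labels and the dict-of-results shape are assumptions for this example, not Pollster's actual data schema, and the vote numbers are invented -- but it captures the preference order: likely voters if reported, then registered voters, then adults.

```python
# Sketch of the "take the sample most likely to vote" rule.
# Population labels and poll structure are illustrative, not Pollster's schema.

PREFERENCE = ["LV", "RV", "A"]  # likely voters > registered voters > adults

def select_result(poll_results):
    """Given a dict mapping population label -> vote estimate,
    return the (label, estimate) pair the fixed rule would plot."""
    for pop in PREFERENCE:
        if pop in poll_results:
            return pop, poll_results[pop]
    raise ValueError("poll reported no recognized population")

# A release with all three subsamples (invented numbers): the rule takes LV.
print(select_result({"A": 50, "RV": 49, "LV": 47}))  # -> ('LV', 47)

# A pollster that only releases registered voters: the rule defers to that call.
print(select_result({"RV": 48}))                     # -> ('RV', 48)
```

Because the rule never looks at the vote estimates themselves, only at which populations the pollster chose to report, it cannot be tempted toward the number that "agrees" with other polls.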
But still, doesn't this ignore data? Yes it does. Back in the old days, I included multiple results from any poll that reported more than one vote estimate. If a pollster gave adult, RV and LV results, then that poll appeared three times in the data, once for each population. But as I worked with these data, I decided that was a mistake. First, it was confusing because there would be multiple results for a poll -- three dots instead of one in the graph. It also gave more influence to pollsters who reported for more than one population compared to those who only reported LV or RV. Finally, not that many polls report more than one number. Yes, sometimes some pollsters do, but the vast majority decide what population to represent and then report that result. End of story. So by trying to include multiple populations from a single poll, we were letting a small minority of cases create considerable confusion with little gain.
The one gain that IS possible is the ability to compare, within a single survey, the effect of likelihood of voting. The ABC/WP poll is a very positive example of this. By giving us all three results, they let us see what the effect of their turnout model is on the vote estimate. Those who only report LV results hide from us what the consequences might be of making the LV screen a bit looser or a bit tighter. So despite our decision rule, I applaud the Post/ABC folks for providing more data. That can never be bad. But so few pollsters do it that we can't exploit such comparisons in our trend data. There just aren't enough cases.
What would be ideal is to compare adult, RV and LV subsamples by every pollster, then gauge the effect of each group on the vote. But since few do this, we end up having to compare LV samples by one pollster with RV samples by another and adult samples by others. That gets us some idea of the effect of sample selection, but it also conflates differences between survey organizations with differences in the likely voter screens. Still, it is the best we can do with the data we have.
So let's take a look at what difference the sample makes. The chart below shows the trend estimate using all the polls, and the LV, RV and adult samples separately. We currently have 109 LV samples, 136 RV and 37 adult. There are some visible differences. The RV (blue) trend is generally more favorable to Obama than is the LV (red) trend, though they mostly agreed in June-July. But the differences are not large. All three sub-population trend estimates fall within the 68% confidence interval around the overall trend estimate (gray line). There is good reason to think that likely voters are usually a bit more Republican than are registered or adult samples. The data are consistent with that, amounting to differences that are large enough to notice, if not to statistically distinguish with confidence.

Perhaps more useful is to notice the scatter of points and how blue and red points intermingle. While there are some differences on average, the spread of both RV and LV samples (and adult) is pretty large. The differences in samples make detectable differences, but the points do not belong to different regions of the plot. They largely overlap, and we shouldn't exaggerate their differences.
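The group comparison behind that chart can be sketched simply: tag each poll with the population it sampled, then summarize the margin within each group. The margins below are invented for illustration (the real comparison uses the 109/136/37 samples in Pollster's database), but the bookkeeping is the same.

```python
# Sketch of comparing subsamples by population. The margins are invented
# for illustration, not Pollster's actual data.
from collections import defaultdict
from statistics import mean

# (population, Obama-minus-McCain margin) pairs, one per poll
polls = [
    ("LV", 1), ("LV", 2), ("LV", -1), ("LV", 3),
    ("RV", 4), ("RV", 3), ("RV", 5), ("RV", 2),
    ("A",  4), ("A",  6),
]

by_pop = defaultdict(list)
for pop, margin in polls:
    by_pop[pop].append(margin)

for pop in ("LV", "RV", "A"):
    print(pop, round(mean(by_pop[pop]), 2))
```

In this invented data the RV average runs a bit more favorable to Obama than the LV average, mirroring the pattern in the chart -- but, as the post notes, with real polls the spread within each group is large and the groups overlap heavily, so single-group averages should not be over-read.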
There is a valid empirical question still open. Do LV samples more accurately predict election outcomes than do RV samples? And when in the election cycle does that benefit kick in, if ever? That is a good question that research might answer. The answer might lead me to change my decision rule for which results to include. But if RV should outperform LV samples, then the polling community has a lot of explaining to do about why they use LV samples at all. Until LV samples are proven worse than RV (or adult) samples, I'll stick to the fixed, firm, stubbornly-clung-to rule we have. And if we should ever change, I'll want to stick stubbornly to that one. The worst thing we could do is to have to make up our minds every day about which results to include and which not, based on which results we "like."
[In Part III of this thread, Mark Blumenthal responds to some of the comments below and poses a new question.]