Pollster.com

Christie's Pollster on NJ Polls

Topics: Disclosure, Divergent Polls, New Jersey, New Jersey 2009

Adam Geller is the CEO of National Research, Inc. and conducted polling for Chris Christie's campaign in New Jersey this year.

I'd like to contribute a few thoughts on the performance of the public polls during the recently concluded New Jersey gubernatorial race. As the pollster for the Christie campaign, I bring a unique perspective on this topic, and I offer these thoughts not as any type of authority, but as a contribution to an important professional discussion.

I should mention, for what it's worth, that while some observers may have been surprised by the results on November 3rd, neither Governor-elect Christie nor his advisers were.

Before the cement hardens and the ink dries on the post-election wrap-up, let me offer the following five thoughts:

  1. The automated polls were more accurate than the live interview public polls, due in part to the methodological choices of the live interview polls.
    From polls that were in the field for an entire week (Quinnipiac) or even longer (FDU), to polls that oversampled Democrats (Democracy Corps, among several others), to polls that read every single name on the ballot (Suffolk), the poor performance of the live interview polls had less to do with the fact that a live person was administering the poll and more to do with methodological choices.
  2. The partisan spread in the polls ought to be reported up front.
    Some public pollsters make it difficult to determine how many Republicans, Democrats and unaffiliated voters they interviewed. Why not just put it into the toplines? Reporters and bloggers should demand this before they report on the results. Not to pick on Quinnipiac, but they had Corzine and Christie winning about the same share of their own partisans, and they had Christie winning Independents by 15 percentage points, and yet they STILL had Christie trailing overall by 5 points. Quinnipiac did not publish their partisan spread, but an astute blogger was able to ascertain that there were, in fact, too many Democrats in the sample (a worked example of how the partisan mix drives the topline appears after this list). Other polls, notably Democracy Corps, regularly produced samples with too many Democrats (though, in their parlance, some of these were "Independent - Lean Democrat"). That their samples were loaded up with Democrats had the obvious effect on their results. Whether this was intentional or not, I would leave to others to speculate.
  3. In general, RDD methodology is a bad choice in New Jersey, if the goal is predictive accuracy.
    New Jersey has many undeclared voters (commonly but mistakenly referred to as Independents). These undeclared voters identify themselves as Republicans or Democrats, even though they are not registered that way. Our polls frequently showed a Democrat registration advantage that matched the actual registration advantage, but when it came to partisan ID, the spread was more like a six-point Democrat advantage. By using a voter list, we knew how a respondent was registered, and by seeing how they ID'ed themselves, we gained insight into the relative behavioral trends of undeclared voters and even of registered Democrats who were self-identifying as Independents. Public pollsters who dialed RDD missed this. Partisan identification in New Jersey is not enough if the goal is to "get it right."
  4. The public polls oversampled NON-voters.
    Again, this is a function of RDD versus voter-list dialing. It is easy for someone to tell a pollster they are "very likely" to vote. With no vote history and no other nuanced questions, the poll taker has little choice but to trust the respondent. Pollsters who use voter lists have the benefit of knowing exactly how many general elections a respondent has voted in over the past five years, or when they registered. By asking several types of motivation questions, the pollster can construct turnout models with better predictive capacity (a sketch of this kind of turnout scoring appears after this list). The public polls did not seem to do this.

    To this end, we had heard all about the "surge strategy" that the Corzine campaign was going to employ. This refers to targeting "one time Obama voters" and driving them out in force on election day. With voter lists, we were easily able to incorporate some "surge targets" into our sample. After running our turnout models, we saw no evidence that the surge voters would be game changers.
  5. The Daggett effect was overstated in the public polls.
    Conventional wisdom holds that Independent candidates underperform on election day. But the reality is that many analysts could have easily predicted Daggett's collapse, based not on history but on a simple derivative crosstab: for example, voters who were certain to vote for Daggett AND had a very favorable opinion of him. They could also have asked a "blind ballot," where none of the candidate choices is read. We did these things, and we estimated Daggett's true level of support to be around 6% (a sketch of the crosstab appears after this list).
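
To illustrate the arithmetic behind point 2 (and the D+6 self-ID spread mentioned in point 3), here is a minimal sketch in Python, using hypothetical within-party splits of my own choosing rather than any pollster's actual crosstabs, of how identical within-group preferences produce very different toplines depending on the partisan mix of the sample.

    # Hypothetical within-group candidate shares (illustrative only):
    # each party breaks roughly 90/8 for its own candidate, and
    # Christie wins Independents by 15 points.
    shares = {
        "Dem": {"christie": 0.08, "corzine": 0.90},
        "Rep": {"christie": 0.90, "corzine": 0.08},
        "Ind": {"christie": 0.55, "corzine": 0.40},
    }

    def topline(mix):
        """Overall candidate shares implied by a partisan mix (fractions summing to 1)."""
        christie = sum(mix[p] * shares[p]["christie"] for p in mix)
        corzine = sum(mix[p] * shares[p]["corzine"] for p in mix)
        return christie, corzine

    # A Democrat-heavy sample versus a mix closer to a D+6 self-ID spread.
    for label, mix in [("Dem-heavy sample", {"Dem": 0.43, "Rep": 0.27, "Ind": 0.30}),
                       ("D+6 sample",       {"Dem": 0.36, "Rep": 0.30, "Ind": 0.34})]:
        c, z = topline(mix)
        print(f"{label}: Christie {c:.0%}, Corzine {z:.0%}, margin {c - z:+.0%}")

With the same within-group preferences, the Democrat-heavy sample shows Christie trailing badly while the D+6 sample shows a dead heat, which is exactly why the partisan spread belongs in the toplines.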
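
The turnout modeling in point 4 can likewise be sketched in a few lines. This is a toy likely-voter score that assumes a voter-file record with a count of past general elections and a self-reported motivation question; the field names and weights here are hypothetical, not our actual model.

    # Toy turnout score: weight demonstrated vote history more heavily than
    # what respondents say about themselves. All weights are illustrative.
    def turnout_score(generals_voted, years_registered, stated_motivation):
        """Return a rough 0-1 likelihood-of-voting score.

        generals_voted:    general elections voted in over the past five years (0-5)
        years_registered:  years since registration (new registrants have less history)
        stated_motivation: self-reported 1-10 interest in voting this year
        """
        opportunities = min(5, max(1, years_registered))
        history = min(generals_voted / opportunities, 1.0)
        return 0.7 * history + 0.3 * (stated_motivation / 10.0)

    sample = [
        {"id": 1, "voted": 5, "yrs": 6, "motiv": 9},   # habitual voter
        {"id": 2, "voted": 0, "yrs": 6, "motiv": 10},  # says "very likely," never votes
        {"id": 3, "voted": 1, "yrs": 1, "motiv": 7},   # new registrant ("surge" target)
    ]
    likely = [r["id"] for r in sample
              if turnout_score(r["voted"], r["yrs"], r["motiv"]) >= 0.5]
    print(likely)  # respondent 2 drops out despite claiming to be "very likely"

An RDD poll that relies on the self-report alone has no way to drop respondent 2; a voter-list poll does.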
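
And the "derivative crosstab" in point 5 is nothing more than filtering on two answers at once. A sketch, with invented respondent records:

    # Estimate an independent candidate's firm support: count only respondents
    # who are BOTH certain to vote for him AND view him very favorably.
    # Records are invented for illustration.
    respondents = [
        {"ballot": "Daggett",  "certain": True,  "daggett_fav": "very favorable"},
        {"ballot": "Daggett",  "certain": False, "daggett_fav": "somewhat favorable"},
        {"ballot": "Christie", "certain": True,  "daggett_fav": "unfavorable"},
        # ... the full sample would go here ...
    ]

    firm = [r for r in respondents
            if r["ballot"] == "Daggett"
            and r["certain"]
            and r["daggett_fav"] == "very favorable"]
    print(f"Firm Daggett support: {len(firm) / len(respondents):.0%}")

Run against a full sample, a filter like this separates firm support from the soft support that collapses on election day.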
None of this is meant to pick on the "live interview" public pollsters. For the most part, these polls are conducted and analyzed by seasoned research professionals. But in non-Presidential years, RDD methodology can lead to inaccurate results, which can then lead to inaccurate analysis. And given the methodological issues I've outlined, it is tough to conclude that the automated polls are somehow inherently superior to live interview polls.

What does it mean for next year? At the very least, journalists, bloggers and reporters need to ask more questions about the methodology and construction of poll samples. They need to understand the partisan spread, and the extent to which it conforms to reality. They need to know how long the survey was in the field. They also need to beware of polls designed to manipulate opinion rather than measure it, and to ask whether a poll is constructed to reflect what is happening, or what the poll sponsor would LIKE to happen. The public polls add to the dialogue, and given their ever-increasing role, we all ought to be more demanding when reporting their results.

 

Comments
sfcpoll:

Speaking of transparency, are you willing to provide the party identification breaks from your final survey before the election? And what portion of the voter list are you able to match to phone numbers? How do you deal with cell-phone-only voters?

Obviously your job is to get a candidate elected, not to produce public polls. But it seems a bit unfair to complain about transparency when comparing your polls to other polls when you don't show such transparency in kind.

____________________

Jessie123:

SFCPoll – Seems like you are missing the point. The guy is talking about PUBLIC pollsters. He seems to be a PRIVATE pollster who works on PRIVATE strategy. I think the point he was trying to make is that PUBLIC pollsters ought to be PUBLIC with their methods. Is that such a bad thing? Now, if a campaign decided to release poll results, then by Geller's logic the campaign should release its method statement too.

____________________

PatrickM:

I think this is a worthwhile discussion -- and one I have had with Adam in the past (I am one of the NJ pollsters he is being critical of). One problem with the argument, however, is that his Democratic counterpart, Joel Benenson, reportedly had Corzine up by 6 points. Both used RBS sample frames. And since neither released his data, we can't really judge the veracity of Adam's claim.

I do agree, though, with the premise that public pollsters need to be more transparent about their sample composition and likely voter screens (and that the media should get answers to these questions before they regurgitate poll numbers).

I also believe that this race was pretty close up until the end, but that the "incumbent rule" should have been in effect for allocating undecideds.

____________________

DSchwartz:

I agree with Patrick Murray that there is value in discussing the pros and cons of using RDD vs. RBS. In so doing, it would be helpful to have information from Mr. Geller on his final few polls prior to the election. While there may be differences in expected disclosure between public polls and private polls, Mr. Geller's comments place his results in the public domain.

Mr. Geller says that Quinnipiac was in the field for a week and that this helps to explain Quinnipiac's poor performance. The problem is that Quinnipiac got it right: it had Christie by 2 points, and he won by 4 points.

Mr. Geller notes that Quinnipiac didn't publish its partisan spread. We ask many demographic questions that we don't put in the press release due to space limitations. However, if a reporter asks for it, we will provide it. I don't recall seeing Mr. Geller's partisan distribution, let alone any poll numbers, in a public release. How do we know Mr. Geller got it right if he never publicly released any information about his polls prior to the election?

Mr. Geller alleges that Quinnipiac had too many Democrats in its sample. It was the same methodology that produced "too many" Democrats that also nailed Christie's margin of victory.

____________________

DanGreen:

DSchwartz speaks of space limitations as the reason why Quinnipiac doesn’t publish “many demographic questions.” That’s a poor excuse. Public pollsters ought to have an obligation to be transparent and describe the make-up of their sample. It is not good enough to only provide this information “if a reporter asks for it.”

____________________

Jessie123:

Dan Green makes a good point. Dr. Schwartz, since your two final polls had such different results, can you please let us know what the partisan make-up was in your October 20-26 poll, as well as in your October 27 - November 1 poll? I may not be a reporter, but I think this is a reasonable request, given the subject matter of this website.
