Mark Blumenthal | May 7, 2009
Topics: Andrew Baumann , Democracy Corps , Jon McHenry , Party Identification , Party Weighting , Resurgent Republic , Stan Greenberg
Here is another follow-up on Monday's back-and-forth between Democratic pollster Stan Greenberg, on behalf of the organization known as Democracy Corps, and Republican pollster Whit Ayres, on behalf of the new group Resurgent Republic. Greenberg helped found Democracy Corps ten years ago as a means to provide "free public opinion research and strategic advice" to those on the Democratic side, while the newly launched Resurgent Republic explicitly aims to provide a similar service for Republicans. Resurgent Republic released its first survey last week; Greenberg had some harsh criticism for it, and Ayres quickly responded.
In his critique, Greenberg asked Resurgent Republic to "explain what about your methodology produces" results for party identification that he considered at odds with other national polls. In my summary of the spat, I asked both pollsters to publicly disclose (a) whether they weight their results by party identification or anything like it and (b), if so, how those weights are determined.
Both have now responded and both say they do sometimes weight on a "case by case basis" to rolling averages of their measurements of party ID or (in the case of Democracy Corps) something close to it.
Jon McHenry, a partner at Ayres, McHenry and Associates, the firm that conducted the first survey for Resurgent Republic, says that in tracking surveys conducted for its clients, they "typically do weight according to the rolling average of all the interviews conducted in the race that year." This approach is similar to the "dynamic weighting" applied by Rasmussen Reports.
Andrew Baumann, senior associate at Greenberg Quinlan Rosner Research, the firm that conducts the Democracy Corps surveys, explains that after weighting on demographics, they check whether they consider the weighted sample "markedly tilted politically or ideologically" on a wide range of questions. If so, they weight on a three-month rolling average of the "recalled presidential vote," or sometimes the recalled congressional ballot.
In other words, every Democracy Corps questionnaire now asks respondents whether they cast a ballot in November 2008 and, if so, which candidate they voted for. They weight on recalled vote questions because these theoretically measure "behavior rather than attitude for most people - thus potentially, less influenced by political events of the moment," though they concede that respondents often report past voting inaccurately. Their argument is that while a recalled vote question is not perfect, in theory at least, respondents should report the same 2008 vote choice in April that they would have reported in February. So if a survey differs on that measure, they can be more confident that the difference is random error and not a real change.
On their most recent surveys, Resurgent Republic did not weight by party, while Democracy Corps did weight on recalled presidential vote to match the weighted-only-by-demographics average obtained on their last three surveys.
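The "dynamic weighting" both firms describe - adjusting a new sample toward a rolling average of their own prior measurements of party ID - can be sketched in a few lines of Python. This is an illustration of the general idea only; the party labels, shares, and function names below are my own assumptions, not either firm's actual procedure or numbers.

```python
from collections import Counter

def rolling_party_targets(prior_surveys):
    """Average party-ID shares across prior surveys of the same universe.

    prior_surveys: list of dicts mapping party label -> share (each sums to 1.0).
    Returns the rolling-average share for each party label.
    """
    totals = Counter()
    for survey in prior_surveys:
        totals.update(survey)
    n = len(prior_surveys)
    return {party: share / n for party, share in totals.items()}

def party_weights(sample_shares, target_shares):
    """Per-respondent weight for each party group: target share / sample share."""
    return {party: target_shares[party] / sample_shares[party]
            for party in sample_shares}

# Illustrative numbers only: three earlier surveys, then a new sample
# that tilts more Republican than the rolling average.
priors = [
    {"Dem": 0.35, "Rep": 0.28, "Ind": 0.37},
    {"Dem": 0.34, "Rep": 0.29, "Ind": 0.37},
    {"Dem": 0.36, "Rep": 0.27, "Ind": 0.37},
]
targets = rolling_party_targets(priors)
weights = party_weights({"Dem": 0.31, "Rep": 0.33, "Ind": 0.36}, targets)
# Democrats in the new sample get a weight above 1 and Republicans below 1,
# pulling the weighted sample back toward the rolling average.
```

The point of weighting to a rolling average of one's own measurements, rather than to a fixed target, is that genuine drift in party ID over the campaign still shows up in the targets, just smoothed.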
What should we make of this? The practices of both Resurgent Republic and Democracy Corps are different from most national media polls (though not all). Some traditional media pollsters will consider these practices nothing short of anathema. On the other hand, this sort of dynamic party weighting is much more common among pollsters that conduct internal surveys for campaigns, particularly for weekly or nightly tracking programs. As Jon McHenry points out, they struggle for methods that capture the modest trends in party identification that sometimes occur during a campaign "in a way that doesn't freak out the campaign if you get a night that's pretty different than other nights."
Also, the procedures the pollsters describe mostly involve weighting to their own (comparable) measures of party, but not always. Note that Ayres and McHenry opted against weighting their first survey for Resurgent Republic by party ID, partly because they lacked previous comparable measurements of their own, but partly because they considered their results "within the range of party numbers seen this year (and as it turns out, close to the 28 R/35 D result in Gallup's latest release)" (link added).
More tomorrow on the perils of comparing party ID across different pollsters.
Meanwhile, this information has implications for our new chart of party identification. For reasons that are hopefully obvious, our intent is to include only results from pollsters that weight by demographics and not by party ID. So we will definitely exclude Resurgent Republic from the party ID chart, and I strongly "lean" to excluding Democracy Corps as well. We also need to follow up with the other public pollsters to be sure we are up to date on their procedures.
Any comment from our knowledgeable readers?
The complete verbatim responses from both McHenry and Baumann follow after the jump.
From Jon McHenry, partner at Ayres, McHenry and Associates, on behalf of Resurgent Republic:
Our approach to party weighting is really to evaluate it on a case by case basis. In tracking, we typically do weight according to the rolling average of all the interviews conducted in the race that year. So if a state or congressional district trends one way or the other, we hope to capture it, but in a way that doesn't freak out the campaign if you get a night that's pretty different than other nights.
In this instance, the survey was conducted with quotas for race, gender, and state, using RDD for both landline and cell phone samples. We found a party balance of 29 [percent Republican] and 33 [percent Democrat], staying true to the way we asked it, rather than including independents who lean when asked. With our results falling within the range of party numbers seen this year (and as it turns out, close to the 28 R/35 D result in Gallup's latest release), weighting to an average party ID of the polls wouldn't have changed much in the substantive questions.
All things considered, I'd prefer not to weight, especially without our own data to reference as we do with tracking surveys. I'm not saying we won't start weighting for party ID a few surveys into this project, with several thousand more interviews behind us. (For the ABC/WaPo poll with 21 percent Republican, I might have considered it, just as we would have for a survey that produced a Republican number in the mid 30s.)
From Andrew Baumann, senior associate at Greenberg Quinlan Rosner Research, on behalf of Democracy Corps:
It is not our practice to weight on party identification, but we do use political weights. Let us describe the approach of Greenberg Quinlan Rosner on surveys for Democracy Corps.
We first weight on demographics. This has always included region, gender, race, age and education, which we weight to a standard that we set at the beginning of the cycle based largely on projected census data for the coming election and turnout patterns from previous like elections. On occasion, we weight on demographics such as marital status, union membership or church attendance if those values fall significantly outside of our usual range.
Only after demographic weighting is completed do we determine whether the sample is markedly tilted politically or ideologically - compared to surveys over the last few months, depending on how many surveys we have conducted. When we look at the partisan and ideological landscape, we examine party identification, presidential vote recall (for 2008, this was recall of the 2004 Kerry vs. Bush vote and now it is recall of the 2008 presidential vote), congressional vote recall (for 2008, this was the named 2006 congressional ballot and now, the 2008 ballot) and ideology. At least as important are the thermometer scores for the Democratic Party, Republican Party, the NRA, pro-life groups and gay marriage. For each of these, we create a "norm" which equals the average value of the variable, after demographic but before political weights, over the last 3-5 surveys - including the current survey. By using this evolving "norm" for assessment, we try to determine whether the survey needs adjustment - political weighting.
We prefer not to weight politically and most of our surveys at the end of the last election had no political weighting. Still, many of our surveys do require a political adjustment. For that, we use the "norm" for recalled presidential vote (the average recalled over the last few months) - not the actual vote. In most cases, we move half way to the norm. Where sample bias is still very strong, we occasionally move to the norm.
We use presidential vote recall because it is a behavior rather than attitude for most people - thus potentially, less influenced by political events of the moment. We understand that many voters will adjust their recall to reflect reactions to the president or defeated opponent or to align with their party identification. But probably for most, they are reporting accurately. We fully understand that it encompasses some of the same currents as party identification. In any case, we are weighting to the recalled vote over the last few months - not the actual vote.
On some occasions in 2008 we also weighted on the "norm" of congressional vote recall, if the survey still fell well outside the partisan and ideological parameters. We were reluctant to weight on congressional recall, as respondents are less likely to remember their real behavior and responses move with swings in party identification.
Since the 2008 elections we have had to modify these standards slightly. This year we are including cell phones in our sample so now we include phone use as one of the demographics on which we weight. In our first four national surveys (conducted between November and February), we did not see any reason to apply political weights.
We also decided in March to begin conducting our surveys with presidential year voters - rather than off-year "likely voters." We made that change so that we can report on the critical group of voters - those who voted in 2008 but are now not likely to vote in 2010 (drop-off voters). This required us to adjust our demographic targets (which are now based on 2008 results) and also reset our "norms."
But since we did not yet have a "norm" for recalled vote in our March surveys, we weighted the 2008 presidential vote recall to the actual results. With our April survey, we had three surveys in the new universe, which was enough to establish new "norms." We weighted the 2008 presidential vote recall to its "norm."
We should note that for our last two surveys the political weights we applied made our sample LESS Democratic, though those unweighted and more Democratic results now become part of the calculation of the "norm," perhaps reflecting the current period that we are in.
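Baumann's adjustment rule - usually move the recalled-vote share halfway toward the rolling "norm," but move all the way when the sample bias is still very strong - can be sketched as follows. The threshold at which a gap counts as "very strong," and all names and numbers here, are my own illustrative assumptions, not Greenberg Quinlan Rosner's actual procedure.

```python
def political_weight_target(sample_share, norm_share, severe_gap=0.06):
    """Target share for recalled presidential vote after political weighting.

    sample_share: share recalling a given 2008 vote in the current,
        demographically weighted sample.
    norm_share: the rolling "norm" for that share over the last few surveys.

    Per Baumann's description: in most cases move halfway to the norm;
    where sample bias is still very strong, move all the way. The
    severe_gap cutoff (6 points) is a made-up illustration.
    """
    gap = norm_share - sample_share
    if abs(gap) >= severe_gap:
        return norm_share              # bias is severe: move fully to the norm
    return sample_share + gap / 2.0    # typical case: move halfway to the norm

# Example: the norm says 53 percent recall voting for the winner, but the
# current sample shows only 49 percent. The 4-point gap is below the
# (assumed) severe threshold, so the target moves halfway, to 51 percent.
```

Note that because the current, unadjusted survey also feeds into the next "norm" (as Baumann describes), an unusual sample still nudges future targets even after being weighted back toward the average.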