Pollster.com

October 29, 2006 - November 4, 2006

 

About That US House Scorecard

Topics: 2006, The 2006 Race

I want to take a close look at our House of Representatives summary scorecard, partly to address some of its shortcomings, but mostly to try to get an overall sense of what the available public polling is telling us about the likely outcome. This post may get a bit long and a little esoteric, but there are ultimately two big takeaway messages: The first is that we still see a remarkably large number of races -- 25 to 50 depending on which polls you trust -- where public polling data is inconclusive. The second is that if we assume that the pure "toss-up" races split about evenly between the parties, the Democrats stand to gain 30 to 35 seats on Tuesday.

Many of you have posted comments or sent email asking why we took the approach we did to classifying House ratings. We did consider many different approaches. None were perfect. We ultimately chose the simplest - replicating the approach used for the statewide races - because creating something more complicated required far more time and computer programming than we had available. In this post I want to look at what some alternative approaches would tell us about who is ahead and who is behind.

First, let's review how our scorecard classifications work. We take the average of the last five random sample polls in each district and then classify each race based on the statistical significance of the leader's margin. We classify races where a candidate leads by at least one standard error as "leaning," and races where the lead is at least two standard errors as "strong." The rest we classify as toss-ups, meaning that the surveys provide no conclusive evidence about which candidate is ahead. If no polling is available, we assume no change in party and assign the district a "strong" status for the incumbent party.

That last step is important for House races, because we can find no public poll data for 351 of the 435 districts. However, very few of those missing districts are considered even potentially competitive by the various respected handicappers. We currently itemize seven theoretically competitive seats as "no-poll" in the scoreboard (because the Cook Political Report listed these among the seats with the "potential" to become competitive), but Cook considers five of the seven incumbents in these districts "likely" to be reelected (i.e., "not considered competitive at this point").
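For the curious, here is a minimal sketch of that classification logic in Python. The input format and the simplified standard-error calculation are illustrative assumptions made for this post, not the actual code behind our scorecard:

    import math
    from statistics import mean

    def classify_race(polls, incumbent_party, n_avg=5):
        # polls: list of (dem_pct, rep_pct, sample_size), most recent first.
        # With no polling at all, assume no change in party and score the
        # district "strong" for the incumbent party.
        if not polls:
            return "strong", incumbent_party
        recent = polls[:n_avg]
        margin = mean(d - r for d, r, _ in recent)
        # Rough standard error of the average margin: each poll's margin is
        # treated as having variance 4 * 50 * 50 / n (i.e., assuming p = 0.5).
        se = math.sqrt(sum(10000.0 / n for _, _, n in recent)) / len(recent)
        leader = "Dem" if margin > 0 else "Rep"
        if abs(margin) >= 2 * se:
            return "strong", leader
        if abs(margin) >= se:
            return "lean", leader
        return "toss-up", None

Under these assumptions, a district with five 400-person polls averaging a 4-point Democratic lead rates "lean Democrat" but not "strong": the standard error of the average margin works out to about 2.2 points, so the lead clears one standard error but not two.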

So far, so good. But one big problem, as many of you have pointed out, is that polls in House districts are far less numerous than those in statewide contests. As such, a lot of those "last 5 poll" averages include some pretty stale results. While we have logged more than 250 new House polls since October 1, there are still only 32 districts with five polls or more to average. Applying the "last 5 polls" filter still leaves 37 polls from September - and 25 polls from the summer months - contributing to the averages that we use to classify districts.

In some cases, those stale results can give a very distorted impression of where the race stands today. Consider Pennsylvania-07, the district currently represented by Republican Curt Weldon. We currently rate that district a toss-up, based on the average of five polls that includes two from September and one from March. Weldon trailed by an average of seven points in the two polls conducted in October - enough to shift the district to "strong" Democrat status.

So I put all of our House data into a big spreadsheet and did some "what-if" analysis. The first question I asked was: What would happen if we applied a filter so that only polls released since October 1 could be included in our averages? Here is the result:

[Image: 11-4 House all.jpg]

As the table shows, the net impact on the scoreboard is not dramatic but improves the lot of the Democrats: The count of seats at least leaning Democratic grows from 221 to 222, while the count at least leaning Republican drops from 187 to 184. The number of toss-up seats grows from 27 to 29, and all but two of those toss-up seats (Georgia-12 and Indiana-07) are currently held by Republicans.

The net changes on the scoreboard obscure a bit more reshuffling at the district level. For those keeping track: Four districts (Florida-16, New Hampshire-02, Ohio-02 and Pennsylvania-07) move from toss-up to Democrat, but three (Indiana-07, Iowa-01 and New York-20) shift from leaning or better Democrat to toss-up. Three more seats (Arizona-05, Kentucky-02 and California-50) move from Republican to toss-up based on unfavorable trends since September.

The table also reminds us of the relatively small number of surveys available in many of these districts. The good news is that the average number of polls per district drops only slightly (from 3.4 to 2.9) when we count only the October polls. The bad news is that more than half of the competitive districts have been polled two or fewer times (40) or not at all (12) in October.

While we're at it, there are a few more good "what if" questions we can ask....

[11/5 (11:45): Picking up where we left off yesterday...]

What about partisan polls? As I have noted previously, the House data includes quite a few internal campaign polls, roughly one of every four in our database, and polls from Democratic campaigns outnumber those from Republicans by more than four-to-one (85 to 20). Since October 1, one-in-five House polls have come from partisans, and again those polls have been released mostly by Democrats (42 to 10).

Do all these Democratic polls tilt our scoreboard in favor of the Democrats? Yes, but only slightly. If we focus on the averages filtered to include only polls released since October 1, removing the partisan polls leaves the number of Democratic seats unchanged at 222 and shifts a net two seats to the Republicans (from 184 to 186). The absence of favorable internal polls makes three potential Democratic pickups seem less likely (Florida-13, Nebraska-03, and Ohio-01), but also leaves three other Republican incumbents looking more vulnerable (New York-19, North Carolina-08 and New York-20).

[Image: 11-5 Scenario 2-3.jpg]

What about the Majority Watch automated polls? Their two waves of October surveys account for roughly a third of the House district polls released in the last month, and as the table shows, removing them from the averages does reduce the Democratic advantage on the scorecard. Keeping our October-only filter on, the number of Democratic seats drops from 222 to 215 with the Majority Watch surveys also removed, while the number of Republican seats increases from 186 to 192.

What is driving the change? Removing the Majority Watch surveys changes our classification of 15 seats. In seven of these districts, Majority Watch conducted the only public polls released in October, and all seven were seats held by Republicans and classified as toss-ups or likely Democratic pick-up using their data. So without any polling data available, our model assumes "no change" and shifts all seven seats to the Republican column. In another eight seats, the absence of the Majority Watch surveys tips the balance in the averages just enough to shift our classification - 6 seats move toward the Republicans and 2 seats move to the Democrats.

Those changes raise an important question: How do the Majority Watch results differ from those of other pollsters in districts where we have other sources of data available? I count 40 districts in which public polls were released in October by both Majority Watch and other pollsters. So I went back to my big spreadsheet and averaged the averages for those 40 districts two ways: once including only the Majority Watch surveys, and once including only the results from other pollsters.

[Image: 11-5 MW vs interviewer.jpg]

The results are a bit different. The Majority Watch surveys indicate a 3.3 point lead for the Democratic candidates in those districts (49.1% to 45.8%) compared to a 1.0 point lead from other pollsters (44.6% to 43.6%). But notice that the percentage going to undecided or third party candidates is more than twice as large on the traditional telephone surveys (11.8%) as on the Majority Watch automated surveys (5.1%). So we have two potential explanations for the difference: One is that the automated surveys reach different kinds of voters (who tend to be more opinionated and less Democratic in their preferences). Another is that both types of surveys reach the same mix of voters, but that the absence of a live interviewer better simulates the "secret ballot" and entices more uncertain voters to express their true preference for the Democratic candidates. Which theory explains the difference here? Take your pick.

Another question: What if we remove both the partisan and automated surveys? Unfortunately, at that point, this particular "model" essentially blows up because we have no polls to look at in 39 of the competitive districts. Since more than two thirds of the October "no-poll" districts (28 of 39) are currently held by Republicans, removing these polls shifts the scorecard in the Republican direction. Adding back the pre-October data nets us only five additional districts, but makes virtually no change in the scorecard numbers.

[Image: 11-5 Scenario 4.jpg]

Still, even if we look only at the smaller number of districts with traditional live interviewer surveys conducted by independent pollsters, we still see Democrats leading by statistically meaningful margins in nine Republican districts. Moreover, these same surveys show Democrats with significant leads in 11 districts currently held by Democrats and indicate "toss-up" races in another 20 seats now held by Republicans.

[11/5 - 4:30 p.m. - Back again. And finally...]

One more thought about the last paragraph. Those 20 "toss-up" races exclude 9 districts with no traditional polls released during October that are currently rated either "toss-up" or "lean Democrat" by the Cook Political Report.

But let me try to sum this up, following the same formula I used in discussing these results for the Slate Election Scorecard earlier in the week. The math is easier given one important finding: Not a single Democratic candidate in a district now held by a Democrat is currently trailing, regardless of the combination of polls examined. So the text and the table that follow focus on potential Democratic pickups.

[Image: 11-5 districts.jpg]

  • Eight seats currently held by Republicans show a Democrat leading by a statistically meaningful margin regardless of what combination of polls we look at: Arizona-8, Colorado-7, Indiana-2, Indiana-8, North Carolina-11, New Mexico-1, Ohio-18, and Pennsylvania-10.
  • One seat deserves its own category: The one and only poll in the Texas-22 district formerly represented by Rep. Tom DeLay shows Democrat Nick Lampson leading. However, a complicated ballot (Republican Shelley Sekula-Gibbs is a write-in candidate) makes this result tenuous.
  • Nine more Republican seats look to be in statistically meaningful jeopardy, but only when we count the automated Majority Watch surveys (either because those are the only surveys available or because they tip the balance making the Democrat's lead statistically meaningful): Florida-16, Iowa-1, New Hampshire-2, New York-24, New York-25, New York-26, New York-29, Ohio-15, Pennsylvania-6 and Pennsylvania-7.
  • Three more Democrats would show significant leads if we include the internal surveys released by partisan pollsters: Florida-13, Nebraska-3, and Ohio-1.

To sum up: If you trust the automated Majority Watch surveys and assume a pickup in Texas-22, then Democrats are leading in exactly the 18 seats they need to win a majority. If you trust all polls (including those released by partisans on both sides), then they currently lead in enough districts to pick up 21 seats. And they are not currently trailing in any.

But even more important: Polls have been conducted in October in another 29 seats where the averages indicate a statistical tossup. Only two of these seats are currently represented by Democrats. How well the Democrats ultimately do depends on how many of these still-too-close-to-call races they ultimately win. If they split evenly, then Democrats are looking at a gain of between 29 and 34 seats depending on which polls you trust.

But wait -- we need to remember one very important caveat. Even if we exclude the pre-October surveys, we are still looking at something of a time-lapse "snapshot" of voter preference. If Republicans have made late gains nationally over the last week (and at least two new national surveys out today suggest that they have), then these results may overstate the likely Democratic gains. As usual, we will need to wait to see the actual results to know for certain.

UPDATE: On that last note, be sure to see the post by Charles Franklin on late trends in the generic Congressional ballot.


Joe Lenski Interview: Part 1

Topics: 2006, Exit Polls, The 2006 Race

Joe Lenski is the co-founder and executive vice-president of Edison Media Research. Under his supervision, and in partnership with Mitofsky International, the company of his late partner Warren Mitofsky, Edison Media Research currently conducts all exit polls and election projections for the six major news organizations -- ABC, CBS, CNN, Fox, NBC and the Associated Press. He spoke with Mark Blumenthal last week about plans for the exit polls and network projections this year.

It's really almost surreal for me -- and I think for all of us -- to think about an Election Day and the topic of exit polls without the presence of your mentor and former business partner, the late Warren Mitofsky. A few days after his passing in September, I wrote a post that recalled a phone call I made about 16 years ago when I was young and foolish and how astonished I was in retrospect that he took the call, and that he was patient and kind in answering what was really a very naive question. And you sent me an email a few days later, and I wondered if you could share with our readers the thoughts you shared with me.

It's true, Warren did have this real enthusiasm for being around young people and teaching young people and listening to their questions and answering their questions. In sorting out his affairs after he passed, I looked at his calendar and he was involved with just about every university in the area that's doing some sort of polling. He was on the board of the Marist Poll, he was teaching a course on exit polling at Columbia University, he was helping Seton Hall establish their sports poll, he was scheduled to do a lecture at American University in DC, and so the 27-year-old Mark Blumenthal that called him 16 years ago wasn't an oddity. Twenty-somethings all over the place had been learning from him in the classroom or in New York AAPOR workshops, or making the same types of calls you made and getting answers from him over the phone. I heard a lot about that at his memorial service and I saw a lot of tributes similar to the one you wrote mentioning very similar stories.

Well, let's get to the business at hand. I'd like, in the limited time we have, for you to briefly give our readers some sort of sense of how this whole operation works. I think most political junkies understand that television networks conduct exit polls on Election Day and project winners at the end of the night. I don't think they have a sense for how complex this whole operation is. Could you give us a brief explanation of how it works?

Sure. First, there is a group called the National Election Pool [NEP], and just so everyone understands who that group is, that is the pool of the five television networks, ABC, NBC, CBS, CNN, FOX, and the Associated Press -- so it's the networks and the Associated Press who have formed this pool. We at Edison Research and Mitofsky International have a contract with those six members and we provide them with exit polling, sample precinct vote counts, and election projection information on Election Day and election night. The news organizations have the editorial control: they choose the races to cover, they choose the size of the samples, they choose the candidates to cover, they write the questions that are asked. We at Edison Research and Mitofsky International implement that -- we have a system in place where this year we'll have over a thousand exit poll interviewers around the country at more than a thousand polling locations. We will have more than two thousand sample precinct vote count reporters at more than two thousand locations around the country. We'll be gathering that information during the day, distributing it to the six members and several dozen other news organizations that subscribe to our service, and we will also be providing our analysis and projections of the winners of those races at poll closing and after poll closing as actual votes come in. The news networks and the Associated Press reserve the right to make their own projections based on our data and any other data they may collect, and they have their own decision teams in place to review any projections we send them. But basically the sources of the data they will be using on election night are the exit polls and the sample precinct vote counts our interviewers and reporters collect, and the county vote returns that are collected by the Associated Press and fed through our system into our computations and out to the members and subscribers.

What sort of system or algorithm will you be using to project which party wins control of the House of Representatives?

We at Edison-Mitofsky are not going to project House seats. The individual news organizations are going to make projections seat by seat. What we are going to provide is an estimate of the national vote by party in the House races, but there are a bunch of complications in taking that and applying it at a seat-by-seat level. It's a lot like the Electoral College. We know the popular vote doesn't necessarily translate into Electoral College votes. Similarly, because of gerrymandering, we know that the popular vote for the House does not translate directly into House seats either.

But in addition there are other complications. One is that there are 55 House districts where one party or the other has not nominated a candidate. And this year, because of the added Democratic activism, there are only 10 districts where Republicans are running unopposed but 45 districts where Democrats are running unopposed. So there are 45 districts where the Democrats are going to get 100 percent of the vote for House. And so those districts are going to account for 4, 5, 6 points of Democratic advantage, solely from uncontested races.

So I think all those factors could contribute to Democrats having a sizeable lead in the popular vote for the House and in the exit poll estimate of the popular vote for the House, but that might not necessarily translate into a Democratic majority in seats in the House or a Democratic majority in seats that is as large as the popular vote that they are going to receive.

So again, early exit poll estimates or even later exit poll estimates may show a significant Democratic lead in terms of the Democratic vote for the House that may not translate into House seats, but that doesn't mean the exit poll is wrong. It just means the exit poll is measuring something different. It's measuring the number of votes by party; it's not necessarily measuring the number of seats per party.

[Editor's Note: For a detailed discussion of the relationship between the national vote for Congress and seat gain or loss, see this post by Pollster.com's Charles Franklin].

So the consortium members will have that data available to them on Election Night and may use that as part of their decision matrix to essentially call the race for the House. Is that right?

Again, this is an editorial decision the news organizations themselves will make. To predict the number of seats for the House, you really have to look at those 40, 50, 60 competitive seats, district by district, and make estimates on each one.

One of the things -- one of the misperceptions, I think, of the exit poll projection system you have -- is that the mid-day estimates based on the exit polls would often leak, people would see them, and I think the misperception was that you'd see a candidate leading by two or three or four percentage points and people would assume that those numbers meant that that candidate would win. What can you tell us about the margin of error, if you will, for those exit poll estimates? If you look at them at the end of the day just before the polls close, how much of a margin would a candidate need to have before you consider it statistically meaningful enough to call the election?

Well, that varies based on the size of the precinct sample and the number of interviews that are taking place in each state, and also the correlations with past vote -- the higher the correlations, the lower the calculated standard error. One of the interesting things in your question is that everywhere the data leaked in 2004, it was only the estimates that leaked -- never our computational status, which tells whether the race is "too close to call," or has what we call "leading status," or what we call "call status." All of those races then -- and there were four presidential states where Kerry had a one or two or three point lead in the exit poll that ended up going for Bush -- none of those were ever outside the "too close to call" status when we were distributing that to our members. So all the news organizations that had paid for the data and were looking at the data knew those races were too close to call, even if it was 51-48 in the exit poll. Those were well within the standard errors that we calculate before we assign even a "leading" or "call" status in the race. Everything that was leaked on the web -- none of that had the standard errors or the computation statuses that we assigned to each of those races based on the margin determined by the calculated standard errors.

And just briefly, what level of statistical confidence do you require before you give a state "call status," which is the recommendation to your NEP consortium members that you are ready to call a winner?

Again that varies depending on the circumstances. The rough rule of thumb is three standard errors, which would be 99.5% confidence.
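[Editor's Note: For readers who want to see how such rules translate into practice, here is a minimal sketch, in Python, of the general logic Lenski describes - a lead measured in standard errors mapped to a status. Apart from the roughly three-standard-error rule of thumb he mentions, the threshold values below are illustrative assumptions, not the actual Edison-Mitofsky decision code.]

    def computation_status(margin, se, leading_threshold=2.0, call_threshold=3.0):
        # margin: estimated lead in percentage points; se: its standard error.
        # The ~3-SE "call" rule comes from the interview above; the "leading"
        # threshold is an assumed value, for illustration only.
        z = abs(margin) / se
        if z >= call_threshold:
            return "call"
        if z >= leading_threshold:
            return "leading"
        return "too close to call"

    # A 51-48 exit poll estimate with a 2.5-point standard error is only
    # 1.2 standard errors -- "too close to call," as Lenski notes above.
    print(computation_status(margin=3.0, se=2.5))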

Blumenthal's interview with Joe Lenski continues tomorrow with a discussion of the problems the exit poll experienced in 2004 and what will be done differently this year.


Weekend Media

Topics: 2006, The 2006 Race

Just a note that I did two interviews on Thursday that will air over the weekend.  One was for a story that should run on the Saturday broadcast of the CBS Evening News (unless the LSU vs. Tennessee game runs late).  Either way, it will be posted sometime tomorrow at cbsnews.com.  

I was also interviewed by Bob Garfield for the NPR program "On the Media."  The segment is a follow-up to an NPR interview of Karl Rove that I posted on last weekend.   I am told that the interview will air on local NPR stations over the weekend, although streaming and MP3 audio of the interview is now available for download on the On the Media web site.  Local air times for On the Media can be found here.


House 06: National forces estimate

Topics: 2006, The 2006 Race

[Image: HouseNationalForces1103small.png]

I estimated the net national forces in the Senate races last night. Here is the same estimation procedure applied to the House. This is based on 86 House races with a total of 380 polls. This is much less dense than the Senate data, but the results are surprisingly stable. Unlike the Senate, however, the data do not extend back in time very far, so the starting point here is June 1, 2006. The size of the effects here is also NOT comparable to the Senate forces, since both are estimated independently and the zero point is arbitrary in both cases. Relative movement is meaningful, so it is fine to say that since June 1 the net national forces in the House (for these 86 races) have risen about 6 percentage points.

As with the Senate, this is a good explanation for why so many House seats held by Republicans are now competitive. UNLIKE the Senate, these effects appear to have continued to grow recently. Even a much rougher fit still produces the upward rise at the end. This also is consistent with the growth in the Democratic advantage on the generic ballot, though the details of the dynamics are somewhat different.

The evidence then is favorable to larger than anticipated Democratic gains in the House, but smaller gains in the Senate, at least as of November 3. Four days to go.

Note: This entry is cross-posted at Political Arithmetik.


House 06: Generic Ballot

Topics: 2006, The 2006 Race

[Image: gb20061031small.png]

The generic ballot measure of the House vote has surged and has not stopped rising since September 22. The surge began the week in which the National Intelligence Estimate (NIE) appeared, followed by Bob Woodward's book, State of Denial. A week later the Foley scandal broke, adding to the move that began a week earlier.

The week or so before the NIE was published there was a small trend in the Republican direction, which was remarked on in political news, but this very modest movement was abruptly reversed. I would not have thought the NIE or Woodward revelations would have had much effect on mass public opinion, but the timing here is pretty convincing that these did in fact play a role. I speculate that they undermined the growth in approval of the administration on terrorism that built in late August and early September, though convincing data for this link is missing. Certainly no such inference is needed with regard to the impact of the subsequent Foley scandal.

The generic ballot is, of course, only a rough indicator of election outcomes (see here, but also see the forecasting efforts of Bafumi, Erikson and Wlezien here and Alan Abramowitz here). I also think the current upturn is a political equivalent of "irrational exuberance" in the sense that the run-up in the polls seems likely to seriously overstate the actual vote margin. The current 17-point Democratic margin would be enormous, and even applying the "Charlie Cook Correction" of subtracting 5 points would still imply a 56-44 Democratic triumph. It may happen, but the generic ballot has virtually always overstated the Democratic lead, and this overstatement seems to get worse as the polling margin increases.

For a bit of perspective, the figure below plots the generic ballot since 1994. The conclusion is clear-- the poll measure has not been anywhere near current levels in the past 12 years. The practical result of this remains to be seen, but if Democrats fail to capitalize on this opinion advantage there will be some interesting research to understand why the seat gains fail to respond to this advantage in vote intent (well, in generic vote intent, which isn't the same thing.)

[Image: gb1994to20061031small.png]

Note: This entry is cross-posted at Political Arithmetik.


Gov 06: State of play

Topics: 2006, The 2006 Race

[Image: TNRGov1029small.png]

Here is a recap of what are (or at least once were!) the competitive governors' races (AK and ID lack enough data for the analysis and are omitted). The graph is ordered from the strongest Republican in the lower left corner to the strongest Democrat in the upper right.

Recent action that may affect election day is visible in NV, where Republican U.S. Rep. Jim Gibbons is facing allegations of sexual assault. The race had looked strong for Gibbons but has now narrowed, with Gibbons leading by under 5 points. In Maryland a small but consistent lead for Democratic challenger Martin O'Malley has all but vanished, leaving Gov. Robert Ehrlich a chance to hold on to the office. The reverse has happened in Minnesota, where Republican Gov. Tim Pawlenty has lost the small lead he held over Democratic challenger Mike Hatch, with the race now a dead heat. In Iowa, Democratic fortunes have improved to a small lead, as have those of endangered Oregon Democratic Gov. Ted Kulongoski. In Wisconsin, incumbent Dem. Gov. Jim Doyle has persistently held on to a small but relatively steady lead, making challenger U.S. Rep. Mark Green's chances look longer than many (including me) expected.

No other races give any indication of shifts likely to threaten current leaders. The bottom line should be a considerable gain for Democrats. Our Pollster.com scoreboard shows 28 Dem, 20 Rep, with 2 races too close for an assignment. This would give the Democrats a majority of governorships for the first time since 1994, with potential advantages going into the 2008 presidential contest.

Note: This entry is cross-posted at Political Arithmetik.


Bush Approval: 8 polls, trend at 37.5%

Topics: George Bush

[Image: BushApproval20050620061031small.png]

Over the past 10 days there have been eight new presidential approval polls. (Sorry I've been tied up with Pollster.com and haven't updated as often as usual. I hope to make up for that a little.) The net effect of this new polling is to indicate that the approval drop we saw recently has stabilized, but at a low approval trend of 37.5%. That is about where the President stood at the beginning of the summer -- better than the all-time lows of May but well below the recent maximum near 41%.

The polls fall above and below that trend, as you can see below. No outliers, though CBS/NYT is relatively low. Bottom line-- low approval for a President at midterm. If a picture is worth 1000 words, let's assume I've got 8,000 words below and let me move on to more posts!

[Image: FourPanelApproval20061031Asmall.png]

[Image: FourPanelApproval20061031Bsmall.png]

Note: This entry is cross-posted at Political Arithmetik.


Updates - TN Moves to Lean Republican

Topics: 2006, The 2006 Race

Our most recent update changed some of our designations.  Good news for Republicans in Tennessee: Rasmussen's latest shows Republican Corker leading Democrat Ford by eight points (53% to 45%) and moves Tennessee to "lean" Republican status.  As the chart shows (though you may need to click it and choose the "since October 15" view to see the recent trend), the new result confirms similar findings earlier in the week from Reuters/Zogby and CNN/ORC.

On the other hand, hopeful news for Democrats in Arizona, where a new Arizona Daily Star poll helps narrow our last five poll margin enough to move the state from strong to "lean" Republican. 

Finally, a new survey in Idaho's 1st Congressional District -- only the third released there to date -- shifts our designation of that district from lean Republican to toss-up.

I will have more on our House classifications -- lots more -- late today.


Weekend Interviewing?

Topics: Sampling

Slate's Daniel Engber takes a look at an issue right up our alley:  Are weekend interviews problematic, and do surveys conducted on Friday and Saturday evenings skew toward Democrats?

Like many pollsters, I have an opinion on this issue grounded less in hard evidence than in theory.  Over the years, the companies I have worked for have avoided weekend-only surveys, on the theory, articulated by Engber, that "younger people are more likely to be out on Friday and Saturday nights, which would make them less likely to be included in the sample."

Remarkably though, little hard evidence exists to support the claim that weekend interviewing skews Democratic, and one key study indicates it makes little difference:

One of the best studies of this question was conducted by two polling experts at ABC News. Gary Langer and Daniel Merkle looked at the data from ABC's tracking polls for the last three presidential elections. They compared results from people reached on Sunday through Thursday with those reached on Friday and Saturday and found no difference. Among the Sunday-to-Thursday people polled in 2004, 49 percent supported Bush and 46 percent supported Kerry. Polls of the stay-at-home, Friday-to-Saturday crowd produced similar numbers—48 and 46.

Engber's "Explainer" piece does a nice job summarizing the steps quality surveys take to interview those who are hard to reach.  It's worth reading in full.


Sen 06: Four Critical Races

Topics: 2006, The 2006 Race

[Image: TNRSenate1102small.png]

There have been some important changes in the Senate polling over the past week. Tennessee now appears to have turned against Democratic Rep. Harold Ford, while Virginia has moved away from Republican Sen. George Allen to a clear tossup. From now on, when people use the term "Tossup" they should show the plot of the Missouri race, which lacks trends, bumps, wiggles or hints of what is to come. But the big news of today is the move in Montana, where Democrats were ready to claim (and many Republicans to concede) Sen. Conrad Burns' seat. President Bush visited and apparently money is now being devoted to this new "firewall" seat. A Burns win would require a Dem sweep of VA, TN and MO to manage a Senate majority. That is obviously a much higher burden than the "2 of 3" wins in these states required with MT in the Dem bag. It is worth noting that Burns is still behind in the trend estimate for MT, but clearly the level of competition has risen, and the odds of a Democratic Senate have shrunk. To make matters worse, in Maryland Democrat Ben Cardin still leads Republican Lt. Gov. Michael Steele, but that lead has been shrinking steadily, and while the normally Democratic state would be expected to go Democratic, the trend here and in the Governor's race (see here) suggests that the Maryland race cannot be assumed to be over. The good news for Democrats (other than in VA) is that New Jersey Sen. Robert Menendez appears to have recovered his lead over Republican Thomas Kean Jr.

So as it now stands, the Dems need 3 of the 4 seats in MT, VA, MO and TN, while holding MD. That may be a tall order, and it makes it likely we won't know control of the Senate until the MT vote is in, in the wee hours of Mountain Standard Time. Stay up! It will be fun.

Note: This entry is cross-posted at Political Arithmetik.


Unbelievable

Topics: 2006, The 2006 Race

Here is an item published by Roll Call on Wednesday, which we almost missed, about two Zogby polls in New York's 25th District that two media outlets refused to run:

The Post-Standard newspaper in Syracuse and WSYR-TV had asked Zogby to conduct a second poll of the race after the pollster acknowledged that his firm had improperly weighted the results of a survey last week. In that case, Zogby polled the 25th district but then weighted the data using voter registration information from the more-Republican 24th district.

Zogby promised the two media outlets that he would do a new poll from scratch, but when the results of that survey came in both declined to run them. Jim Tortora, the news director of WSYR-TV, wrote on the station's Web site that after consulting with outside polling experts, he was concerned that Zogby had conducted the second poll using the same larger sample of 5,000 likely voters as he had on the first survey.

"With respect to Mr. Zogby, we felt the questions raised ... left us with only one choice: We had to pull the poll," Tortora wrote.

Used the same sample?  Here is the explanation from WSYR's Tortora about their analysis of the second poll:

This time, the Post Standard arranged an independent expert from the University of Connecticut's Department of Public Policy to review the findings of the second Zogby poll. Late Tuesday, we discovered that some of the same people who were called for the first poll, were called again. Zogby confirmed they did indeed use the same larger sample of 5000 likely voters, to come up with this "new" poll sample of 502 likely voters. Our independent expert felt this raised a red flag...an unknown variable. The concern? How would you react if you were called twice in about a week, to answer the same questions? Would you answer differently? The same? Would you even take the call? 

And as the Syracuse Post-Standard reported, "27 people responded to both polls."  If that were not enough, even after all of this came to light, Tortora reports:

Mr. Zogby firmly stands by his findings. He insists his methodology is sound, and was prepared to join us live at 5:30pm to explain his findings and back-up his results.  He points out pollsters often disagree about each other's methods. 

In its story on the controversy, the Syracuse Post-Standard spoke to a number of other "national polling consultants," and none supported Zogby's sample recycling.

"I think it's sort of a rookie mistake if you're including people a second time from a database," said Cliff Zukin, past president of the American Association for Public Opinion Research, an industry group based in Lenexa, Kan.

A bad practice

Zukin, a professor of public policy and political science at Rutgers University, spoke before he was told who conducted the poll.

He said it's considered a bad practice to call the same people twice for "random" polls.

"The problem is the first interview activates them," Zukin said. "They follow the news differently. So the people become different from a random citizen.

"If you didn't purge those people from the database," he said, "then that is a significant methodological problem. It gives you a problem to make any inference from these data."

Yes, we often disagree, but there are limits to what can be waved off as a mere difference of opinion.  If Mr. Zogby or any other pollster wants to explain and defend the practice of reusing sample, our Guest Pollster's Corner is wide open.


Sen 06: National Forces Estimate

Topics: 2006, The 2006 Race

[Image: SenateNationalForces1102small.png]

This is certainly a good year for Democrats, but how good? And what are the national forces at work? I can estimate a summary of national forces to answer these questions.

I estimate a model that pools ALL Senate race polls, then iteratively fits a local regression (my usual trend estimator here) while simultaneously extracting a race-specific effect. This procedure has the effect of removing the difference between PA (with a strong Dem lead) and AZ (with a substantial Republican advantage) and likewise for all the states, effectively centering them at zero. The trend estimate that results then will move up if across most states the trend has been up, while if pro-Dem and pro-Rep movements equal one another, the national trend will be zero. There is no fixed metric for this national force, so it is convenient to pick a zero point for identification, in this case January 1, 2006.
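For those who want the mechanics: the estimation amounts to a backfitting loop that alternates between the trend fit and the race-specific effects. Below is a minimal sketch in Python, with the lowess routine from statsmodels standing in for my local regression. It is a reconstruction from the description above, not the code I actually ran:

    import numpy as np
    import statsmodels.api as sm

    def national_forces(dates, margins, races, n_iter=20, frac=0.3):
        # dates: days since Jan 1, 2006; margins: Dem minus Rep, in points;
        # races: a race label for each poll. Alternate between (a) fitting a
        # smooth trend to race-centered margins and (b) re-estimating each
        # race's effect as its mean residual from the trend.
        dates, margins = np.asarray(dates, float), np.asarray(margins, float)
        races = np.asarray(races)
        effects = {r: margins[races == r].mean() for r in np.unique(races)}
        for _ in range(n_iter):
            centered = margins - np.array([effects[r] for r in races])
            trend = sm.nonparametric.lowess(centered, dates, frac=frac,
                                            return_sorted=False)
            residuals = margins - trend
            effects = {r: residuals[races == r].mean() for r in np.unique(races)}
        # The level is arbitrary, so anchor the trend at zero on January 1.
        return trend - trend[np.argmin(dates)], effects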

The estimator finds that the Democratic margin has grown by 5 points across all races due to this national force. Where Republicans have enjoyed increased support, they have had to do it in the face of this opposing wind, while Democrats who would have been trailing by 5 points if January conditions still prevailed will now have a "wind-assisted" tossup race.

The dynamics of this national force have been generally increasing all year, but with significant partial reversals at times. From a June high of about 4 points, this force shrank to 2 points in August, then surged to 5 points by September 1. A brief improvement for Republicans took place in early September. At the time Republicans claimed to see new movement in their favor, and these data lend some support for that claim. However, that trend was sharply reversed after September 24 with the first publication and subsequent release of the National Intelligence Estimate, followed by Bob Woodward's book, State of Denial. This was followed a week later by the Foley scandal, and once more the Democratic advantage increased, to about 6 percentage points. In the last two weeks of October there was a brief move in a Republican direction, then back to favor the Democrats. As of November 2, however, the national forces have again moved in a Republican direction, this time somewhat more strongly. While it is tempting to explain this as a result of Sen. Kerry's verbal difficulties, the downturn started before the joke-gone-wrong, so perhaps the Senator does not deserve all the credit for the 1.5-point decline since mid-October. As it stands, the estimate is only a little under 5 points. However, as a national force, common to all races, this decline of even 1.5 points is enough to be crucial for either party in Virginia and Missouri. If it moves more, it could affect the Tennessee or Montana races as well (and conceivably Maryland).

For my money, these are sensible estimates of the magnitude of national forces at work in this election. A gain of 5 points in the margin turns a 50-45 race into a 47.5-47.5 tie. Estimates much bigger than this would seem too large to be plausible, as they would imply too many races becoming competitive or taking on Democratic leads.

The method I use here does not lend itself to the usual confidence interval estimates. But some sense of the variability of the estimator can be seen below. The estimation errors, indicated by the gray dots, are estimates of where the trend would be IF the series had stopped on the day represented by the dot. This method is sensitive to the most recent observations, and while the fit is quite stable when there is abundant data on both sides of a point of interest, it is often a poor predictor of what will come next. The deviations of the gray dots around the line show when the trend would have gone up more, or down more, than the blue trend estimator finally settled on. The errors are worse near points of change in direction, which makes sense. While the variability is not trivial, and indicates considerable uncertainty near changes of trend, the area covered by the gray dots is still relatively small compared to the size of the effect being estimated. The practical implication is that we have to be cautious in suggesting that the current trend will continue, because a change in direction is not well predicted by the model. That said, we can be reasonably confident that the trend estimator would not be radically different if we add more observations. (Which we won't do, after November 7.)

[Image: SenateNationalForcesAndError1101small.png]

Note: This entry is cross-posted at Political Arithmetik.


MT Senate: Moves to Toss-up

Topics: 2006, The 2006 Race

Two pieces of housekeeping: First, our latest update of the charts and scoreboards moves the Montana Senate race to the toss-up category. The new Reuters/Zogby poll showing Democrat Tester up by just a single percentage point confirms a recent Rasmussen poll showing Tester ahead by just three points. These two new polls pull Tester's lead over Republican Senator Conrad Burns down to 3.2% on the last five public polls, just enough to move Montana to the toss-up category.

Second, those who watch this site closely have noticed the lag between updates of our "most recent polls" box and the charts. Until today, our charts and tables have updated infrequently, sometimes only once a day. The reason is largely technical (and not worth attempting to explain), but from now until Election Day we are committed to far more frequent updates - hopefully at least three updates a day. Also, by popular demand, we will try to post updates like this one on the blog when our categorization of a race changes.


Yes, We Have House Charts!

Topics: 2006, The 2006 Race

A few quick updates on the poll data we track on races for the House of Representatives:

First, as of last night, we now have charts available for all 84 House districts for which we currently have polling data. Clicking on a link for any House district on our House map and national summary table now takes you directly to the chart for that race, just like the links on our Senate and Governor maps. This latest update means that any of our 84 House district charts** can be embedded on your blog or website using the new "Embed Chart" feature (see yesterday's post for details).

Second, an apology for the slightly slower pace of blog posts over the last 48 hours or so as we worked to get these new upgrades and features up and running. I have also spent a lot of time over the last few days crunching my "big-spreadsheet-o'House" and will have a more in-depth review of the available polling data later today. For those who cannot wait, you can find the abridged version in our Slate House Election Scorecard updates on Tuesday and Wednesday.

Finally, a quick update on a bit of anecdotal evidence I discussed last Saturday. There is one source of polling largely out of public view - the internal polls conducted by the campaigns and party committees. Some of these get released, but typically only when they show good news for that particular campaign. So one indirect measure of where things stand is which side is releasing more of its internal polling, and by that measure the Democrats are a lot more confident: Since Labor Day, Democrats have released 54 internal polls for House candidates logged into our Pollster.com database; Republicans have released only 13. And that confidence has not abated in the last two weeks. Since October 15, Democrats have released 21 internal polls, Republicans only 2.

**Unfortunately, many of the House races have only a handful of polls. As of this morning, roughly half of the districts in our database have three or fewer polls, and that will make for a very sparse looking chart. Keep in mind that the trend line represents the average of the last 5 (or fewer) polls at any given point in time. So for the first few polls in the series, the lines may draw in ways that seem a little confusing.


Jacob Eisenstein: Using Kalman Filtering to Project the Senate

Topics: 2006, The 2006 Race

Today's Guest Pollster Corner contribution comes from Jacob Eisenstein. While not technically a pollster -- Eisenstein is a PhD candidate in computer science at MIT -- he recently posted an intriguing U.S. Senate projection (and some familiar-looking charts) based on a statistical technique called "Kalman filtering" that he applied to the Senate polls. He explains the technique and its benefits in the post below.

Polls are inexact measurements, and they become irrelevant quickly as events overtake them. But the good news about polls is that we're always getting new ones. Because polls are inexact, we can't just throw out all our old polling data and accept the latest poll results. Instead, we check to see how well our poll coheres with what we already believe; if a poll result is too surprising, we take it with a grain of salt, and reserve judgment until more data is available.

This can be difficult for the casual political observer. Fortunately, there are statistical techniques that allow this type of "intuitive" analysis to be quantified. One specific technique, the Kalman Filter, gives the best possible estimate of the true state of an election, based on all prior polling data. It does this by weighing recent polls more heavily than old ones, and by subtracting out polling biases. In addition, the Kalman Filter gives a more realistic margin-of-error that reflects not only the sample sizes of the polls, but also how recent those polls are, and how many different polling results are available.

The Kalman Filter assumes that there are two sources of randomness in polling: the true level of support for a candidate, which changes on a day-to-day basis by some unknown amount; and the error in polling, which is also unknown. If the true level of support for a candidate never changed, we could just average together all available polls. If the polls never had errors, we could simply take the most recent poll and throw out the rest. But in real life, both sources of randomness must be accounted for. The Kalman Filter provides a way to do this.
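To make this concrete, here is a minimal one-dimensional version of the filter applied to a single candidate's poll series, written in Python. The parameter values are illustrative assumptions, and this stripped-down sketch leaves out the pollster bias terms discussed below:

    def kalman_poll_filter(polls, q=0.05, x0=50.0, p0=100.0):
        # polls: (observed_pct, poll_variance) pairs in time order.
        # q is the assumed variance of day-to-day movement in true support;
        # x0 and p0 form a deliberately vague prior. All values here are
        # illustrative, not fitted.
        x, p = x0, p0
        for z, r in polls:
            p = p + q              # predict: true support drifts between polls
            k = p / (p + r)        # gain: trust precise polls and weak priors more
            x = x + k * (z - x)    # update toward the new measurement
            p = (1.0 - k) * p      # posterior variance shrinks with each poll
        return x, p                # estimate plus a realistic margin of error

    # Three ~600-person polls, each with sampling variance of about 4:
    estimate, variance = kalman_poll_filter([(48.0, 4.0), (51.0, 4.0), (50.0, 4.0)])

Notice how the gain does the "intuitive" weighting described above: a surprising poll moves the estimate only part of the way toward itself, and the returned variance shrinks as more, and more recent, polls accumulate.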

Pollsters are happy to tell you about margin-of-error, which is a measure of the variance of a poll; this reflects the fact that you can't poll everybody, so your sample might be too small. What pollsters don't like to talk about is the other source of error: bias. Bias occurs when a polling sample is not representative of the population as a whole. For example, maybe Republicans just aren't home when the pollsters like to call -- then that poll contains bias error that will favor the Democratic candidates.

We can detect bias when a poll is different from other polls in a consistent way. After repeated runs of the hypothetical biased poll that I just described, careful observers will notice that it rates Democratic candidates more highly than other polls do, and they'll take this into account when considering new results from this poll. My model considers bias as a third source of randomness; it models the bias of each pollster, and subtracts it out when considering their poll results.

The Kalman Filter can be mathematically proven to be the optimal way to combine noisy data, but only under a set of assumptions that are rarely true (these assumptions are listed at my own site). However, the Kalman Filter is used in many engineering applications in the physical world -- for example, the inertial guidance of rockets -- and is generally robust to violations of these assumptions. In the specific case of politics, I think the biggest weakness of this method is that elections are fundamentally different from polls, and my model does not account for the difference between who gets polled and who actually shows up to vote. I think this can be accounted for, but only by looking at the results of past elections.


Using the Generic Ballot to Forecast the 2006 House and Senate Elections

Topics: 2006, The 2006 Race

[Today's Guest Pollster's entry comes from Alan I. Abramowitz, the Alben W. Barkley Professor of Political Science at Emory University in Atlanta, Georgia. He is also a frequent contributor to the blog Donkey Rising.]

In order to predict the outcome of the 2006 House elections, I create a model incorporating both national political conditions and candidate behavior. Pre-election Gallup Poll data on the generic ballot and presidential approval are used to measure national political conditions and data on open seats and challenger quality are used to measure the behavior of congressional candidates. The model is tested with data on U.S. House elections between 1946 and 2004. A simpler model based only on national political conditions is tested with data on U.S. Senate elections from the same period. Based on the estimates for the models, I forecast the 2006 House and Senate election results.

The dependent variable in the House forecasting model is the change in the percentage of Republican seats in the House of Representatives. The model includes six independent variables. The percentage of Republican seats in the previous Congress is included to measure the level of exposure of Republicans compared with Democrats in each election - the larger the percentage of Republican seats in the previous Congress, the greater the potential for Republican losses. A variable for Republican vs. Democratic midterm elections is included to capture the effect of anti-presidential-party voting in midterm elections. Net presidential approval (approval - disapproval) in early September is included to measure public satisfaction with the performance of the incumbent president, and the difference between the Republican and Democratic percentage of the generic ballot in early September is included to measure the overall national political climate. The actions of congressional candidates are measured by two variables: the difference between the percentages of Republican and Democratic open seats and the difference between the percentages of Republican and Democratic quality challengers, defined in terms of elected office-holding experience.

The model does a very good job of explaining the outcomes of past House elections - all of the independent variables except the percentage of Republican seats in the previous Congress have statistically significant effects, and the model explains 87% of the variation in House seat swings since World War II. Even after controlling for presidential approval and the actions of strategic politicians, the generic ballot variable has a substantial impact on the outcomes of House elections: a 10-point advantage in the generic ballot produces a swing of approximately nine seats in the House with all other independent variables held constant.
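This specification translates directly into an ordinary least squares regression. The sketch below (Python with statsmodels, using hypothetical file and variable names for a dataset coded as described above) reproduces only the structure of the model; the estimated coefficients themselves are those reported in Table 1:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per election year, 1946-2004, with columns coded as in the
    # text. The file name and column names here are hypothetical.
    house = pd.read_csv("house_seat_change.csv")
    model = smf.ols(
        "rep_seat_change ~ prior_rep_seat_pct + midterm_party "
        "+ net_approval + generic_ballot_margin "
        "+ open_seat_diff + quality_challenger_diff",
        data=house,
    ).fit()
    print(model.summary())  # the text reports an R-squared of .87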

[Table 1 image: abram_T1sml.jpg]

House Forecast
We can use the results in Table 1 to predict the outcome of the 2006 House elections. Based on a net approval rating for President Bush of -17, a Democratic advantage of 12 points in the generic ballot, a Democratic advantage of 2% in open seats, and a Democratic advantage of 3% in challenger quality, the model predicts a Democratic gain of 29 seats in the House of Representatives.

Senate Seat Change Model
The dependent variable in the Senate model is the change in the number of Republican Senate seats. The independent variables are the number of Republican seats at stake in the election (a measure of exposure), a variable for Republican vs. Democratic midterm elections, net presidential approval in early September, and the difference between the Republican and Democratic percentage of the generic ballot in early September. Variables measuring candidate behavior are not included in the Senate model because data on challenger quality are not available for Senate elections, and the relative number of Republican and Democratic open seats had no impact on the outcomes of Senate elections when added to the model.

[Table 2 image: abram_T2sml.jpg]

The results in Table 2 show that the Senate forecasting model is not as accurate as the House forecasting model, explaining only 65% of the variance in the outcomes of Senate elections since World War II. This is not surprising, since the model does not include any variables measuring candidate behavior. Moreover, Senate seat swings are probably influenced more by chance because there are far fewer contests in each election and a larger percentage of these contests are competitive.

Despite the limitations of the Senate model, however, the results indicate that three of the four independent variables have significant effects. In the Senate model, in contrast to the House model, seat exposure is the single strongest predictor of outcomes. This is consistent with the results of previous models of Senate election outcomes such as Abramowitz and Segal (1986). According to the results in Table 2, for every additional seat that the Republican Party has to defend in a Senate election, it loses an additional 0.8 seats.

While the effects of the presidential approval variable are not quite significant at the .05 level, the generic ballot variable does have a statistically significant, and substantively important, impact on the outcomes of Senate elections despite the fact that the question asks about voting in House elections. The results in Table 2 indicate that an advantage of 10 points in the generic ballot produces a swing of about two seats in the Senate with all other independent variables held constant.

Senate Forecast
We can use the results in Table 2 to predict the outcome of the 2006 Senate elections. Democrats need a gain of six seats to take control of the Senate. Based on a net approval rating for President Bush of -17 and a Democratic advantage of 12 points in the generic ballot, the model predicts a Democratic gain of 2.5 seats in the 2006 Senate elections. The main reason the predicted Democratic gain is relatively small is that only 15 Republican seats are being contested this year.

Conclusions
Both national conditions and the behavior of candidates influence the outcomes of U.S. House elections. President Bush's low approval ratings and especially the large advantage that Democrats currently enjoy in the generic ballot suggest that Democrats are very likely to regain control of the House of Representatives in November. Democratic gains are also likely in the Senate but it will be difficult for Democrats to pick up the six seats that they need to take control of the upper chamber because only 15 of the 33 seats up for election in 2006 are currently held by Republicans.


Majority Watch Mashup

Topics: 2006, IVR, IVR Polls, The 2006 Race

Picking up on the post from earlier tonight, the new Majority Watch surveys released today provide another strong indicator of recent trends, in this case regarding the race for the U.S. House.  The partnership of RT Strategies and Constituent Dynamics released 41 new automated surveys conducted in the most competitive House districts. 

Since they conducted identical surveys roughly two weeks ago in 30 of the 41 districts, we have an opportunity for an apples-to-apples comparison involving roughly 30,000 interviews in each wave. The table below shows the results from both waves in each of those 30 districts. The bottom-line average indicates that, overall, the Democratic margin in these districts increased slightly during October, from +1.9 to +2.7 percentage points.

[Image: 10-30%20market%20watch.jpg - table of Majority Watch results from both waves]
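For readers who want to replicate the bottom-line number, the calculation is just the mean Democratic margin across the matched districts in each wave. A minimal sketch, assuming per-district results stored as (Dem %, Rep %) pairs; the numbers below are invented placeholders, not the actual Majority Watch figures:

```python
# Wave-over-wave comparison: average the Democratic margin (Dem % minus
# Rep %) across the same districts in each wave. The numbers below are
# invented placeholders, not the actual Majority Watch results.

def average_margin(districts):
    """Mean Democratic margin, in percentage points, across districts."""
    return sum(dem - rep for dem, rep in districts) / len(districts)

wave_1 = [(48, 46), (44, 45), (50, 47)]   # (Dem %, Rep %) per district
wave_2 = [(49, 46), (45, 44), (50, 46)]   # same districts, two weeks later

print(average_margin(wave_1), average_margin(wave_2))
```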

Whatever one may think of the automated approach, the Majority Watch surveys used the same methodology and sampling procedures for both waves. And as with the similar "mashup" of polls in the most competitive Senate races in the previous post, these also show no signs of an abating wave.

Interests disclosed: Constituent Dynamics provided Pollster.com with technical assistance in the creation of our national maps and summary tables.


Slate 13 Update

Topics: 2006 , Slate Scorecard , The 2006 Race

Charlie Cook writes tonight:  "With the election just eight days away, there are no signs that this wave is abating."   Some supporting evidence:  The overall average Democratic margin in the Slate 13 -- the 13 most competitive Senate races we have been tracking on the Slate Election Scorecard -- has increased for the sixth straight week (from +3.7 to +4.1 percentage points over the last week). 

[Image: 10-30%20slate%2013.jpg - Slate 13 average Democratic margin, week by week]

Again, the value in looking at this overall "mash-up" is that it combines a very large number of surveys, including at least 35 new statewide surveys in the 13 states released in the last week. In any one state, the average might be a little lower or a little higher due to "house effects" or other variation in recent surveys. By rolling up the results of many surveys, we should minimize the noise. And that approach shows no end to the slow Democratic trend in Senate races since mid-September.
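The noise-reduction point is just the usual sampling arithmetic: the margin of error of an estimate shrinks with the square root of the total number of interviews. A rough sketch (the sample sizes are hypothetical, and pooling across surveys ignores house effects and design effects, so treat this as an upper bound on the precision gain):

```python
# Why rolling up many surveys reduces noise: the standard error of a
# proportion shrinks with the square root of the total sample size.
# Sample sizes below are hypothetical.

import math

def moe(p, n):
    """Approximate 95% margin of error for a proportion p at sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# One hypothetical statewide poll of 600 vs. 35 pooled surveys of that size.
print(round(moe(0.5, 600) * 100, 1))        # ~4.0 points
print(round(moe(0.5, 35 * 600) * 100, 1))   # ~0.7 points
```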

PS: The Slate Election Scorecard update for tonight focuses on the Senate race in New Jersey, where two new polls moved that state back to "lean" Democrat status.


MD/TN Calls: More From TPMMuckraker

Topics: Push "Polls"

One last update (for tonight at least) on the "Push Poll" story we have been following today. TPMMuckraker's Justin Rood reports tonight on an interview with Zeke Smith of Common Sense Ohio, "the man responsible" for the calls into Maryland and Tennessee, and similar efforts in Montana and Ohio. In the interview, Smith confirmed that "his group uses a firm called ccAdvertising to make his calls," and offered a defense of their tactics. Surprise, surprise: They don't consider it "push polling."

He defended his group's questions ("Do you want your taxes raised?"). "Push polls" are used to spread negative information about a candidate, and are rarely used to collect respondents' answers.

The questions used "accurate characterizations," Smith said, and insisted his group was legitimately engaged in "data collection."

"There are a fair number of things that are unpleasant to talk about," Smith said. "But that doesn't make [our questions] any less accurate."

Listen to the recording of one of the Tennessee calls provided by Tom Wood of the Nashville Post and come to your own conclusions about the accuracy of their characterizations.

And then consider that we are dealing with a new variant of high-tech push polling. Like the push "polls" of old, there are no samples involved. They contact as many households as possible (how many calls do you think they had to make to reach two lines in Tom Wood's residence?). They use the guise of a public opinion poll to lure voters into listening to the sort of distorted negative "messages" that the benefiting campaigns would never publicly embrace. And then they add a new twist: as long as they cover it with the fig leaf of "data collection," all is legitimate.

Nonsense. This effort has nothing to do with research. It is about mass communication conducted under the false guise of a survey. Just listen to Gabriel Joseph of ccAdvertising talk about his services (as quoted by Daniel Schulman in Mother Jones):

"When you make 3 ½ million phone calls a day, we generally talk to more people than watch television, listen to the radio, or read the newspaper combined." He paused, then added quietly, "If someone writes something that I don't like, I can make their life—I can make them understand a few things if I choose."


Push Polls in MD/TN: FreeEats.com?

Topics: Push "Polls"

We are getting more information about those push poll calls I posted on last night, the ones first brought to light by TalkingPointsMemo. While we do not know for certain who is responsible for the calls, a trail of circumstantial evidence points to one likely suspect.

Exhibit A: One of the aspects of the calls that struck me as odd was that all the questions required yes or no answers, including the vote question. Again, from Pollster reader ST:

It was all yes/no questions. I assumed that yes or no was all the machine could process, because everything was asked in the form of a yes no question -- even who you were going to vote for. In the candidate preference part at the beginning and end you were asked would you vote for Corker (yes/no) and then would you vote for Ford (yes/no).

Real pollsters typically offer more response options than just yes or no, especially on vote preference. Why the odd format? It turns out that respondents did not answer questions by pressing the buttons on their touch-tone phones (the method used by IVR pollsters); rather, they answered using speech recognition software that can recognize a human voice saying "yes" or "no." I emailed ST and some other commenters who reported getting the calls. So far, one has responded to confirm that the call they received asked them to speak the words "yes" or "no."

Exhibit B: Listen to this sample "political survey" available on the web site maintained by ccAdvertising, a.k.a. FreeEats.com, a.k.a. ElectionResearch.com. All of the questions are asked in a yes/no format that utilizes, as the ccAdvertising About page tells you, their "patented (patents pending) Interactive Voice Response - Speech Recognition (IVRSR)." Listen to the entire "political survey," and immediately after the question asking if "you are undecided" in the race for New York Assembly, you hear the voice of candidate Charlie Fisher -- presumably a FreeEats client -- making a pitch for his election:

Hi, this is Charlie Fisher. I'll work hard to support your interests if you elect me as Assemblyman. I hope you'll vote for me on September 10.

The ccAdvertising site has many similar examples.

Exhibit C: Back to the description of ccAdvertising's services it helpfully provides on its website.

ccAdvertising utilizes its patented (patents pending) Interactive Voice Response - Speech Recognition (IVRSR) method to ensure that our political, public policy and service organization clients have their messages reach the households they have targeted, usually based on location or anticipated household demographics [emphasis added].

Further down the page they point out that ccAdvertising "also engages in the distribution of market data research obtained in our public surveys." So these surveys serve a dual purpose. They "collect data" and they deliver "messages."

The problem, from my perspective at least, is that this message delivery capacity amounts to what the Marketing Research Association refers to as "Selling Under the Guise of Research" (or SUGGing): "a misuse of the survey process [that] compromises legitimate marketing and opinion research surveys conducted by professionals" and "also causes distrust among the public and affects the reliability of all public opinion research."

Exhibit D: "Tales of a Push Pollster," an article by Daniel Schulman published just last week in Mother Jones. Schulman's article - definitely worth reading in full - is a profile of ccAdvertising/FreeEats. Here are some particularly relevant excerpts:

Today, FreeEats does mostly political work. In November 2002, the company issued a press release claiming to have played a role in the "Republican force that swept America on November 5," noting that "no fewer than six winning candidates and one hot ballot referendum were influenced" by its efforts....

Business has certainly been booming for FreeEats, which has deployed its technology on behalf of conservative candidates and causes ranging from the National Rifle Association and the anti-immigration Minutemen to Tom DeLay, who paid the firm $24,101 for telemarketing work between November 2005 and February 2006. DeLay's ally Grover Norquist, head of Americans for Tax Reform, has hired FreeEats to push his antitax agenda, including an unsuccessful effort to prevent a tax increase in Colorado.

FreeEats has also become the go-to firm for conservative groups fighting to restrict gay marriage and abortion, both issues that are dear to the company's chairman, Donald P. Hodel - a longtime Washington insider who served in two Cabinet posts (secretary of the interior and of energy) during the Reagan administration, then went on to become president of both the Christian Coalition and Focus on the Family. (Mother Jones' calls to Hodel's home in Silverthorne, Colorado, went unanswered.) In 2004, FreeEats was commissioned by the Defense of Marriage Coalition to promote a referendum banning gay marriage in Oregon. During the company's telephone surveys, Oregon residents reported being told: "In Massachusetts, where court-ordered same-sex marriage is legal, they are now preparing materials to teach the gay lifestyle to children, beginning in kindergarten." The referendum passed by a 14-point margin.

Now obviously, we do not know for certain who is behind the push poll calls reported in Maryland and Tennessee. But based on all the above, we can nominate a fairly obvious chief suspect.

UPDATE (1:40 p.m.): The Nashville Post has a story (via DailyKos diarist Rook) on "Common Sense Ohio," the group that has apparently sponsored the calls in Tennessee. Bigger news is that the story includes an audio tape of the call, so you can listen to it yourself. Try playing that mp3 side by side with the demo from ccAdvertising. The sound quality of the Tennessee call is poor, and in my experience answering machine digital recording tends to distort the timbre of voices. But it sounds to me like the announcer on the push poll could be the same voice as the announcer on the ccAdvertising demos. What do you think?

One big question about the recording: Was it captured by an answering machine or voice mail, or did the person who recorded it edit out their answers? If it was the former, then we have pretty conclusive proof of the intent of the push pollster, which is ultimately what defines push polling. If the questions continued in the absence of verbal answers, then the "pollsters" did not care one iota about collecting data. Their primary interest was communicating a message, even if it meant leaving unanswered "questions" on someone's voice mail. [Tom Wood, who recorded the call for the Nashville Post, explains that he did in fact answer each question. See Update III below.]

UPDATE II (2:18 p.m.): Pollster reader ST emails to say he believes the Nashville Post recording was edited to remove the respondent's answers:

[The audio tape] is exactly what I heard. Some of the push language I didn't hear because I answered no to some topics. But when I answered yes, I got the Corker talking points. Notice that it says that the poll was PAID for by Common Sense Ohio.

Based on my experience, I think that someone edited out all their answers to that recording and it is NOT from an answering machine. Also, I can tell you they answered "yes" to each question. When I was called, I only answered "yes" to the question about taxes. To the abortion, gun and immigration questions, I answered "no." I got the pro-Corker push language "only" when I answered "yes." The call just skipped to the next issue when I answered "no."

UPDATE III (5:48 p.m.): After I contacted the Nashville Post seeking clarification of how the phone call was recorded, Tom Wood, the person who taped the call, posted the following explanation in our comments section. He answers my question definitively. Yes, he says, he edited the audio to remove the sound of his answers to each question:

I recorded that audio for my colleague Ken's story, and yes, I did edit out my responses.

I had received the call on my downstairs line Saturday night, and I played along in order to hear what the questions would be. But on the question about whether I believe foreign terrorists ought to be allowed to live and work in the U.S., I responded with the first words that came to mind: "Fuck you!" The system then told me that the call would terminate unless I gave a yes or no answer. I finished the rest of the poll by giving answers as though I were a Ford supporter.

After that experience, when the call came in on my upstairs line yesterday, I grabbed my recorder and decided to answer as though I were a Corker voter. My responses prompted the system to give me the talking points on each subject, which my pro-Ford responses had not elicited.

Rather than try to explain that my responses were given for tactical reasons, I thought it would be simpler just to edit them. Sorry to have caused confusion.

Tom Wood
Nashvillepost.com

Another reader, "Fisch," left a comment on last night's post that confirms that the calls required the respondent to provide some sort of answer:

I was literally dumbstruck by the question, and while I struggled for a response, the recording went on to the next question: whether I was in favor of striking the words "under God" from the Pledge of Allegiance. Again, I was made mute by the sheer audacity of the questions. Then the recording announced the poll would end even if I did not answer all the questions. All I could say was "Good!" I was stunned by the call; I couldn't even remember the nature of the second question until I read your post.

Finally, fans of the movie All the President's Men may appreciate the comment left below by Pollster reader Mark Patten.  After receiving one of the calls he looked up the number for FreeEats.com and gave them a call:

I received one of the push-poll surveys in the Maryland Cardin-Steele race. I called the Herndon, VA number for FreeEats and asked the woman who answered if this company was responsible for the robotic polling in the Maryland Cardin-Steele race. She answered yes, then became flustered, talked to someone nearby, then transferred me to another person. This was a young sounding man with a high voice and southern accent. Neither would reveal their name. I told him the number that I received the poll from (703-961-8297). He said that based on that number they were not responsible for this poll, but seemed evasive. He assured me that I would be on the no-call list, which was something I did not even bring up or know to ask for. Herndon, VA is indeed an area covered by the 703-961-xxxx number from which I received the poll.

Note: The number Patten says he received the call from is the number the push call left on his caller ID. Another commenter, Gail Powers, reports seeing the same number on her caller ID. The number, 703-961-8269, returns a constant busy signal. Google provides a listing matching that number to an employee of a financial services firm in Virginia, but that listing appears to be long out of date: I called the firm, and they had no record of any employee with that name.


Real Push Polls in Maryland?

Topics: Push "Polls"

I wrote about allegations of push polling back in August. More often than not, the targets of these allegations turn out to be the internal message testing surveys conducted by campaigns rather than true "push polls" -- calls conducted under the guise of a survey intended only to spread negative information. Tonight, DK, the weekend blogger at Josh Marshall's TalkingPointsMemo.com, has been reporting (here and mostly here) on automated calls received this weekend in Maryland (and elsewhere) that certainly sound like the real thing.

Here is one example:

After asking you who you're going to vote for, it asks "do you want your own taxes raised or lowered?" Then it tells you that Cardin has voted to raise your taxes and will do so again. It follows with "do you believe the words 'under God' should be in the pledge of allegiance?" It tells you Cardin voted to remove them, which I assume is false. Then it goes straight to the gutter and asks "do you support medical research experiments on unborn babies?" Of course, it then tells you Cardin is for this. It finishes by asking again who you're going to vote for.

I am curious whether the recipients remember being asked any demographic questions, any attitudinal measures like ideology or party identification, any favorable ratings of the candidates, or questions geared toward determining whether the respondent intends to vote, is following news about the campaign or has voted in the past. If the "poll" asked none of these questions, but only the "questions" described in the quotation above, then given the timing, it is almost certainly the sort of fraudulent "push poll" dirty trick worthy of the name.

I'll pass along further reports as I find them.

UPDATE (10/30, 6:58 a.m.): Reader ST passes along this report from Tennessee:

I have some experience in electoral politics and with legitimate polling, so I tried to pay attention as the call progressed.

I was hit with the Tennessee version Sunday night around 6 p.m. EDT. First of all, they are interactive robo calls asking a series of yes/no questions.

You are first asked if you want to participate. Then you are asked if you would vote for Bob Corker. Same question for Harold Ford.

Next you are asked a series of yes/no issue questions. You get pushed if you answer a certain way. The first question was, roughly, do you want to keep your tax burden as low as possible. I answered yes to this one and was bombarded with a series of statements about how Corker advocates making the Bush tax cuts permanent, and how Ford wants to raise everyone’s taxes. Standard push technique.

The call then moved to other topics, asking if you would describe yourself as pro-life, asking if you support the NRA's strong defense of the Second Amendment, asking if you think there is a problem with illegal immigration in the U.S.

If you answered no, the call moved on to the next topic; if you answered yes, you got bombarded with pro-Corker talking points.

At the end the call again asked if you would vote for Corker, and if you would vote for Ford.

Then the call identified itself as coming from "Common Sense Tennessee" and gave a Web site commonsensetennessee.com. It also said it was associated with Common Sense Ohio and identified its treasurer as John Lind.

No demographic information was asked. It was all yes/no questions. I assumed that yes or no was all the machine could process, because everything was asked in the form of a yes no question -- even who you were going to vote for. In the candidate preference part at the beginning and end you were asked would you vote for Corker (yes/no) and then would you vote for Ford (yes/no).

Again, this call appears to fit the classic definition of "push polling," which is a fraud -- an effort to communicate a message under the guise of a poll -- not a poll at all. Real tracking surveys conducted a week before an election typically ask demographic items and attitudes used to classify voters, such as party identification, and usually track candidate favorability or job ratings. Real automated surveys are capable of handling questions with more categories than just "yes" and "no."
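To make the contrast concrete, the branching behavior readers describe can be sketched as a trivial script: a "yes" on an issue question triggers the pre-recorded talking points, while a "no" skips to the next topic. This is an illustration of the reported call flow only; the questions are paraphrased from the reports above, and none of this is code from any actual system:

```python
# Illustration of the branching call flow readers describe: a "yes" on an
# issue question triggers the pre-recorded talking points; a "no" skips
# to the next topic. Paraphrased questions; not code from any real system.

ISSUE_SCRIPT = [
    ("Do you want to keep your tax burden as low as possible?",
     "[pre-recorded pro-Corker tax talking points]"),
    ("Would you describe yourself as pro-life?",
     "[pre-recorded pro-Corker abortion talking points]"),
]

def run_call(answers):
    """Play each issue question; 'yes' answers get the push message."""
    for (question, push_message), answer in zip(ISSUE_SCRIPT, answers):
        print(question)
        if answer == "yes":
            print(push_message)   # message delivery, not measurement

run_call(["yes", "no"])
```

The point of the sketch is that nothing in this flow depends on recording the answers; the "questions" exist only to route respondents to messages.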


McDonald: 5 Myths About Turning Out the Vote

Topics: 2006 , The 2006 Race

Professor Michael P. McDonald, a nationally renowned authority on voter turnout (and an occasional commenter on Pollster.com), had a timely op-ed piece published in today's Washington Post reviewing the academic evidence that debunks "5 Myths About Turning Out the Vote." It's well worth reading in full.

McDonald covered a topic on a lot of minds lately (mine included), the Republicans' vaunted "72-Hour Campaign:"

Republicans supposedly have a super-sophisticated last-minute get-out-the-vote effort that identifies voters who'll be pivotal in electing their candidates. Studies of a campaign's personal contact with voters through phone calls, door-to-door solicitation and the like find that it does have some positive effect on turnout. But people vote for many reasons other than meeting a campaign worker, such as the issues, the closeness of the election and the candidates' likeability. Further, these studies focus on get-out-the-vote drives in low-turnout elections, when contacts from other campaigns and outside groups are minimal. We don't know what the effects of mobilization drives are in highly competitive races in which people are bombarded by media stories, television ads and direct mail.

Also, in 2002 and 2004, the 72-Hour Campaign benefited from a political environment and national mood largely favorable to Republicans. Not so this time. We will soon see whether Republicans can work the same magic in a climate like that of 2006.

Again, McDonald's piece is a good summary of academic findings that all political junkies should know. Go read it all.


Mellman: Another Measure of Stability

Topics: 2006 , The 2006 Race

[Democratic pollster Mark Mellman posted a comment here on Friday in response to the final installment of my three-part series on the national data on the race to control Congress. That post was structured around a metaphor Mellman has used to characterize the Democrats' chances on November 7:

There's a big anti-Republican wave out there. But that wave will crash up against a very stable political structure, so we won't be sure of the exact scope of Democratic gains until election night. We really don't yet know which is ultimately more important -- the size of the wave or the stability of the structure.

Since not all readers browse the comments, I am promoting Mellman's remarks as a contribution to our Guest Pollster Corner section].

When I talk about stability I have a couple of other factors in mind in addition to incumbency advantages. As I noted in my original Hill article last March....

One measure of political instability: the number of Republicans holding seats that vote Democratic for president and vice versa. When big political waves hit, that is precisely where much of the action is. In the two prior presidential elections, Bush (the father) or Reagan had won 30 of the 34 seats Democratic incumbents lost in 1994. Similarly, two-thirds of the Republican incumbents who lost in 1982 were running in districts presidential Democrats had won just previously.

Today, though, there are fewer mismatched seats than at any point in recent history. Going into 1994, 53 Democrats held seats won by Bush in 1992. Today just 18 Republicans hold seats won by Kerry. So, while forces in the political environment push strongly in a Democratic direction, they are acting on a relatively stable structure: Hence the test.


 
