October 1, 2006 - October 7, 2006
Last night, Josh Marshall linked to a new national automated poll from SurveyUSA asking whether House Speaker Dennis Hastert should resign:
Based on what you know right now, do you think Speaker of the House Dennis Hastert should remain in his position as Speaker of the House? Do you think he should resign as Speaker of the House but remain a member of Congress? Or do you think he should completely resign from Congress?
27% Remain Speaker
20% Resign leadership
43% Resign from Congress
10% Not sure
MP readers may want to note that the results above, from a one-night sampling of 1,000 adults conducted Thursday night, were actually the second of a two-night tracking poll. From the SurveyUSA release:
Though Thursday night's polling data is not good news for Hastert, the data is an improvement from SurveyUSA interviews conducted 24 hours prior, on Wednesday night. Then, 49% of Americans said Hastert should resign from Congress, 17% said he should remain as Speaker, and 23% said he should resign his Leadership post but remain a member of Congress. Though the day-to-day movement is small, and some of it is within the survey's 3.2% margin of sampling error, the movement is consistent across the board and therefore worthy of comment.
There are inherent limitations to surveys with short field periods; however, when a news story is changing hour-by-hour, nightly tracking studies can provide a valuable "freeze-frame" snapshot of what Americans were thinking at a moment in time.
As part of their "interactive crosstabs" for this poll, SurveyUSA provides a time series chart that allows users to plot trends for each of the key subgroups (via the pull-down menu that appears in the upper left corner of the data table).
Now I have no idea whether SurveyUSA intends to continue tracking this question going forward. They are obviously a lot busier now than when they tracked the response to Hurricane Katrina for 24 days in September 2005. But if these results intrigue you, it's probably worth checking the SurveyUSA Breaking News page for further updates.
Update: Rasmussen Reports, the other big automated pollster, also conducted a survey on whether Hastert should resign. Their results offer a lesson on the challenges of writing this sort of question:
Should Dennis Hastert Resign from His Position as House Speaker?
36% Yes
37% Not sure
Do you have a favorable or unfavorable opinion of Dennis Hastert?
10% Very Favorable
14% Somewhat favorable
19% Somewhat unfavorable
16% Very unfavorable
41% Not sure
The number who say Hastert should resign as speaker is much higher on the Rasmussen survey (36%) than on the SurveyUSA poll (20%), but SurveyUSA reports a much higher number (63%) who say Hastert should resign either as speaker or from Congress. Offering three choices rather than two appears to make a big difference. And the fact that 41% on the Rasmussen survey say they do not know Hastert well enough to rate him helps explain why the question format and language make such a big difference.
The timing also differed: The Rasmussen survey was conducted over the last two nights (Thursday and Friday) while SurveyUSA tracked on Wednesday and Thursday nights.
Our update to the Slate Election Scorecard yesterday focused on six new statewide polls from the USAToday/Gallup partnership. Their new poll in Rhode Island helped push our assessment of that state into the "lean" Democrat category. As always, we encourage you to read it all.
Separately, Gallup's Frank Newport posted an analysis on the Gallup site (free to non-subscribers through Wednesday) showing the results of an open-ended question asked on all six surveys of those supporting one of the candidates for Senate: "Why would you say you are voting for this candidate?" As is standard survey practice, the interviewers recorded responses verbatim, and Gallup later coded those responses into categories. Newport's analysis shows the results of those questions for each state, broken out by voters for each candidate.
But then it gets more interesting. I'll let Newport explain:
There are wide varieties of ways in which these data can be approached, and as a result a wide variety of conclusions that can be reached. Gallup Poll editors developed several important points about the reasons behind voters' choices for the Senate.
But in this situation, we thought some readers might have different approaches or insights, and that as a result it would be of interest to allow readers to suggest their own thoughts on these data, and to share those with us here at The Gallup Poll by responding to Talk to the Editors.
We will read all suggested responses, and post the most telling and insightful here on galluppoll.com next Wednesday.
And speaking of Frank Newport, his contribution to Guest Pollster's Corner last week provoked some interesting questions. Newport responded in the comments section yesterday. It's worth a click.
I'm between planes so no time for a full post, but the generic ballot that was trending slightly down two weeks ago is now at least flat and perhaps very slightly rising. Too soon to tell what the real effect of the Foley affair will be, but certainly the trend has shifted from down to flat. I'll post more at the next delayed flight (which could be soon).
Note: This entry is cross-posted at Political Arithmetik.
More post-NIE, post-Woodward, post-Foley polls are out, suggesting that approval of President Bush has stopped its recent rise and has begun to turn down. Polls from Time, AP and Greenberg find small declines since the previous poll. A Pew poll finds no change. A GWU-Battleground poll is useless for comparison because the previous poll is from February.
The Time poll, taken 10/3-4/06 finds approval at 36%, disapproval at 57%. The AP poll from 10/2-4/06 has approval at 38%, disapproval at 59%. Pew's survey from 9/21-10/4/06 gets approval at 37%, disapproval at 53%. Greenberg/Democracy Corps was in the field 10/1-3/06 and has approval at 43%, disapproval at 53%. The GWU-Battleground poll is a bit stale, conducted 9/24-27/06. It found approval at 45% with disapproval at 53%.
The net effect of these polls on my estimate of approval is a drop of 0.4 points from the peak on September 20 to a current estimate of 40.2%. That peak, however, is also a revised estimate. Without the newer polling, the approval trend had reached 42.0%. The revised estimate of the peak is only 40.6%, so recent polling has called the previous high into question. More data will be needed before we can precisely estimate either the September high point or the precise date when the turn in support took place.
While the AP, Pew and Time are generally below the trend estimate, their effect on the trend estimate is balanced by the typically high approval values from the Greenberg/Democracy Corps and GWU-Battleground polls. Likewise the "apples-to-apples" comparison of the plot for each polling organization increases the evidence that there is now a downturn in approval. Obviously we can't yet estimate how large or how sustained this turn may be.
The Pew poll is interesting because it was in the field before the Foley news broke, having collected 777 cases, and then collected an additional 726 cases after the scandal became known. Comparisons of "before" and "after" show an identical 13 point Democratic lead in the generic ballot question, suggesting no immediate impact of the Foley news on vote intentions.
Of course, as with so many Washington scandals, the story is as much about the aftermath of the revelations as the acts themselves. The "what did they know and when did they know it" drama now playing out around Speaker Dennis Hastert seems likely to keep the story in the news for at least a few more days. One doesn't need polls to know this has been a bad week for the Republican party. How much that affects polling, and how long such an effect endures, remains to be seen.
For President Bush, the key question is whether he can resume his aggressive campaign efforts, stay on his message of national security and terrorism, and find anyone listening to his pitch at the moment. The challenge for Republicans is to change the subject. The challenge for Democrats is to take advantage of the October surprise effectively and to use Foley to advance a broader critique of the Republican congress. Stay tuned!
Note: This entry is cross-posted at Political Arithmetik.
"This is as close to pollster hell as it gets." Or so wrote GOP pollster Bill Cullo last night on Crosstabs.org. While this may be an especially hellish week for Republican pollsters, Cullo makes a very valid point. "These days, any national voter survey more than 72 hours old is largely obsolete," he writes. "All indications are that the whole Foley situation has eviscerated any hint of a Republican reversal that may or may not have been underway." But what evidence do we have of that evisceration? Let's take a look.
The best evidence for the Republican "reversal" was a consistent 2-3 point improvement in the Bush job approval rating from August to September in national surveys of adults. As the table below shows, the five national surveys conducted entirely after the Foley resignation all indicate a 2-3 percentage point movement back in the opposite direction:
I usually hesitate to make too much of the often variable three-day rolling averages from automated pollster Rasmussen Reports, but one finding this week looks particularly ominous for the Republicans. Bush's "strongly disapprove" rating today sits at 46% after rising steadily from 39% a week ago (according to data on Rasmussen's premium site). It had averaged 39-40% during September and only registered as high as 46% on one other day this year (May 23, ten days after the all-time low point on Charles Franklin's chart).
On the other hand, measurements of the generic congressional vote did not show a consistent change one way or another on the national surveys reported so far this week (including a Pew Research Center survey that happened to be in the field just as the Foley story broke):
Averaging across the four surveys shows no change. Why a small shift on the job rating but no change on the generic ballot? While the differences in population (adults vs. registered and likely voters) may have played a role, I see no obvious explanation. Still, readers should note that the two biggest shifts to the Democrats occurred on the registered voter samples done entirely over the last three nights (Time & AP/IPSOS).
Cullo's point is worth remembering: The surveys above may be obsolete again by this time next week. Note, however, that the Foley story has focused much more directly on the Republican leadership in the House over the last few days. That turn in the story may explain the reportedly ominous internal GOP surveys discussed in this Fox News story (via Josh Marshall).
We will no doubt have more national surveys to report soon that should clarify things. Stay tuned.
Update: The Rasmussen Report update this morning showed the percentage expressing "strong disapproval" for George Bush dropping from 46% to 44%.
In his more graphical analysis of the trend in the Bush job rating, Charles Franklin included one more survey I missed: The two national surveys from the Democratic-aligned Democracy Corps conducted 9/17-19 and 10/1-3. They show the Bush job rating dropping one point among likely voters (from 44% to 43%). They show no change on the traditional generic House question. Democrats led on both surveys by 10 points (51% to 41%).
Tonight's Slate Election Scorecard update focuses on the slew of new statewide surveys released earlier today by the Reuters/Zogby polling partnership. MP readers may especially appreciate the discussion of the wide variation in polls in New Jersey. Read it all.
Our Slate Senate Scorecard update for tonight focuses on a new Rasmussen poll in Connecticut that shows Joe Lieberman leading Democratic nominee Ned Lamont by ten points (50% to 40%).
Tracking the Connecticut Senate race is especially challenging because the most active pollsters in the state have shown consistent differences in their results -- at least until today. See the chart below (courtesy of Charles Franklin), which shows Lieberman's margin over Lamont (Lieberman's percentage minus Lamont's percentage):
Both the Rasmussen automated surveys and the conventional, live interviewer phone polls conducted by Quinnipiac University showed Lieberman's margins narrowing since July but holding fairly steady over the last month. However, until the survey released today, the Rasmussen surveys have consistently shown a closer margin than the Quinnipiac Polls. This pattern is similar to the one we described yesterday in Tennessee, where Democrat Harold Ford is running stronger on the Rasmussen surveys than on conventional telephone interview polls conducted by Mason-Dixon.
In this case it is harder to use the survey mode (live interviewer vs. automated) to explain the differences, because the house effects are inconsistent by mode. Another live interview pollster (American Research Group) has also shown a consistently closer race, while automated pollster SurveyUSA reported Lieberman ahead by 13 points in early September.
Today's result, however, brings the Rasmussen and Quinnipiac polls into agreement, at least for the moment. The most recent Quinnipiac poll, released last week, also showed Lieberman leading by 10 points. So is the latest turn in the Rasmussen trend line the sign of new Lieberman momentum, a convergence in the poll results or just an outlier result? Only time, and more surveys, will tell for sure.
Instapundit Glenn Reynolds asked a good question yesterday:
THE LATEST POLL shows a Ford-Corker dead heat.
Hmm. Just yesterday we had one with Ford up by 5; not long before that there was one with Corker up by 5. Is it just me, or is this more variation than we usually see? Are voter sentiments that volatile (or superficial)? Or is there something about this race that makes minor differences in polling methodology more important? Or is this normal?
At the moment at least, I agree with the answer he received later from Michael Barone that the poll numbers in Tennessee do not appear unusually volatile. Barone pointed out that the results of nearly all the Tennessee polls this year appear to fall within sampling error of the grand average. That point is worth expanding on, but it is also worth noting that the averages conceal some important differences among the various Tennessee surveys.
First, let's talk about random sampling error. If we assume that all of the polls in Tennessee used the same mode of interview (they did not), that they were based on random samples of potential voters (the Internet polls were not), that they had very high rates of response and coverage (none did), that they defined likely voters in exactly the same way (hardly), that they all asked the vote question in an identical way (close, but not quite) and that the preferences of voters have not changed over the course of the campaign (no again), then the results for the various polls should vary randomly like a bell curve.
Do the appropriate math, and if we assume that all had a sample size of roughly 500-650 voters (most did), then we would expect these hypothetically random samples to produce results that fall within +/- 4% of the "true" result 95% of the time. Five percent (or one in twenty) should fall outside that range by chance alone. That is the standard "margin of error" that most polls report (which captures only the variation due to random sampling). But remembering the bell curve, most of the polls should cluster near the average. For example, roughly two thirds of those samples should fall within +/- 2% of the "true" value.
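For readers who want to do that math themselves, here is a minimal sketch of the standard formula behind those numbers. The function name and the illustrative sample sizes are my own; the calculation is the textbook normal-approximation margin of error for a proportion, not any particular pollster's in-house method.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a confidence interval for a proportion p
    estimated from a simple random sample of size n.
    z=1.96 gives the conventional 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling near 44% in a sample of roughly 575 voters
# (the middle of the 500-650 range discussed above):
moe_95 = margin_of_error(0.44, 575)          # about 0.041, i.e. +/- 4 points
moe_68 = margin_of_error(0.44, 575, z=1.0)   # about 0.021, i.e. +/- 2 points

# For comparison, the "worst case" (p = 0.5) with n = 1,000 adults
# works out to about 3.1 points, close to the 3.2% SurveyUSA cites.
moe_1000 = margin_of_error(0.5, 1000)
```

Note that the margin shrinks only with the square root of the sample size, which is why halving a national sample does not double the error.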
Now, let's look at all of the polls reported in Tennessee in the last month, including the non-random sample Zogby Internet polls:
As it happens, the average of these seven polls works out to a dead-even 44% tie, which helps simplify the math. In this example, only 1 of the 14 (7%) results falls outside the range of 40% to 48% (that is, 44% +/- 4%). And only 3 of 14 (21%) fall outside the range of 42% to 46% (or 44% +/- 2%). So as Michael Barone noted, the variation is mostly what we would expect by random sampling error alone. Considering all the departures from random sampling implied above, that level of consistency is quite surprising.
These results may seem more varied than in previous years partly because the sample sizes are considerably smaller than the national samples of (typically) 800 to 1,000 likely voters that we obsessed over during the 2004 presidential race.
The convergence of the averages over the last month (or even over the course of the entire campaign, as Barone noted) glosses over both important differences among the pollsters and some real trends that the Tennessee polls have revealed. Charles Franklin helped me prepare the following chart, which shows how the various polls tracked the Ford margin (that is, Ford's percentage minus Corker's percentage). The chart draws a line to connect the dots for each pollster that has conducted more than one survey. The light blue dots are for pollsters that have done just one Tennessee survey to date.
The chart shows a fairly consistent pattern in the trends reported by the various telephone polls, both those done using traditional methods (particularly Mason-Dixon) and the automated pollster (Rasmussen). Franklin plotted a "local trend" line (in grey) that estimates the combined trend picked up by the telephone polls (both traditional and automated). The line "fits" the points well: It indicates that Ford fell slightly behind over the summer, but surged from August to September (as he began airing television advertising).
As Barone noticed, the five automated surveys conducted since July (including one by SurveyUSA) have been slightly and consistently more favorable to Ford than the three conventional surveys (two by Mason-Dixon and one by Middle Tennessee State University). But the differences are not large.
The one partisan pollster - the Democratic firm Benenson Strategy Group - released two surveys that showed the same trend but were a few points more favorable to Democrat Ford than the public polls. This partisan house effect among pollsters of both parties for surveys released into the public domain is not uncommon.
But now consider the green line, the one representing the non-random sample surveys of Zogby Interactive. It tells a completely different story: The first three surveys were far more favorable to Democrat Ford during the summer than the other polls, and Zogby has shown Ford falling behind over the last two months while the other pollsters have shown Ford's margins rising sharply.
This picture has two big lessons. The first is that for all their "random error" and other deviations from random sampling, telephone polls continue to provide a decent and reasonably consistent measure of trends over the course of the campaign. The second is that in Tennessee, as in other states we have examined so far, the Zogby Internet surveys are just not like the others.
UPDATE: Mickey Kaus picks up on Barone's observation that the automated polls have been a bit more favorable to the Democrats in Tennessee and speculates about a potentially hidden Democratic vote:
Maybe a new and different kind of PC error is at work--call it Red State Solidarity Error. Voters in Tennessee don't want to admit in front of their conservative, patriotic fellow citizens that they've lost confidence in Bush and the GOPs in the middle of a war on terror and that they're going to vote for the black Democrat. They're embarrassed to tell it to a human pollster. But talking to a robot--or voting by secret ballot--is a different story. A machine isn't going to call them "weak."
Reynolds updates his original post with a link to Kaus and asks whether the same pattern exists elsewhere.
Another good question, although for now our answer is incomplete. We did a similar "pollster compare" graphic on the Virginia Senate race over the weekend. The pattern of automated surveys showing a slightly more favorable result for the Democrats was similar from July to early September, but the pattern has disappeared over the last few weeks as the surveys have converged. In Virginia, the most recent Mason-Dixon survey has been the most favorable to Democrat Jim Webb.
While we will definitely take a closer look at this question in other states in the coming days and weeks, it is worth remembering that most of the "conventional surveys" in Tennessee and Virginia were done by one firm (Mason-Dixon), while most of the automated surveys to date in Tennessee have been done by Rasmussen. As such, the differences may result from differences in methodology other than the mode of interview among these firms (such as how they sample and select likely voters or whether they weight by party as Rasmussen does).
Our Slate Election Scorecard update for tonight looks at a slew of new polls that largely confirm the status of many races but make one key change: The last five poll average now puts Virginia into toss-up status. Read it all.
It must just be Murphy's Law of blogging, but the most link-worthy news always seems to spring up during days when I am unable to post. I will try to catch up with more details on some of these items, but as all are of interest, here is a brief run-down on poll related news:
- The Majority Watch project conducted a one-night automated survey in the 16th District of Florida, from which Republican Mark Foley resigned on Friday (via Kaus). The difficulty of polling this contest is that Foley's name will remain on the ballot, and votes cast in his name will count toward replacement candidate Joe Negron. So Majority Watch used two separate samples to ask two questions: One replicating the choice as it will appear on the ballot (Foley vs. Democrat Tim Mahoney), and one explaining that a vote for Foley "will count as a vote for the new Republican nominee." Mahoney led by seven points (50% to 43%) on the first version, but by just three (49% to 46%) on the second.
While the impact of the unfolding Foley scandal is obviously topic A this morning, readers should remember that the Majority Watch surveys use an automated methodology so new that even its creators describe it as a "work in progress." Given the presumably rapidly evolving opinions of Florida 16 voters and the usual risks of conducting one-night surveys, we strongly recommend taking this particular result with a larger than usual grain of salt.
- The weekend also brought a slew of new polls in competitive statewide races from the Mason-Dixon organization. Links are in our "recent polls" update box to the right and will be included in the next update of our charts within a few hours.
- The Pew Research Center website posted an interview with Joe Lenski of Edison Media Research conducted by Pew's President Andrew Kohut. Lenski was the partner of the late Warren Mitofsky and will now take responsibility for leading exit polling next month for the news media consortium that includes ABC, CBS, CNN, Fox, NBC and the Associated Press. Kohut's interview with Lenski discusses the "steps that he and his colleagues have taken to avoid problems associated with the 2004 poll." It's worth reading in full.
- Finally, Professor Michael D. Cobb of North Carolina State University sent word of a new survey conducted along with NC State colleague William A. Boettcher III. According to their release, the survey provides evidence that Americans are skeptical of the links made by President Bush between Iraq and "the broader campaign against terrorism" and "appear unwilling to pay the future human and material costs of the war." Cobb is an occasional commenter here. Another frequent commenter here, DemfromCt, also posted to DailyKos excerpts from a lengthy email exchange with Cobb and Boettcher.
Editor's Note: This post inaugurates a new feature on Pollster, our "Guest Pollster's Corner." We hope this new forum will provide opportunity for professional pollsters of all stripes -- media and campaign; Democrat, Republican and non-partisan -- to occasionally share their own thoughts on the art and science of political polling. We are honored to receive our first contribution from Frank Newport, the Editor in Chief of the granddaddy of them all, the Gallup Poll.
The average turnout level among the voting age population in midterm elections is typically well below 50%, significantly lower than in presidential election years. This means by definition that the actual group of voters who turn out and vote on Election Day is a relatively small subset of the large pool of all eligible voters. If there is no difference in the voting intentions between these two groups, then reports of pre-election generic ballot results based on registered voters are all that is needed. If the pool of those who have the highest probability of voting is significantly different from those who are less likely to vote, however, then the effort to identify likely voters in pre-election polls becomes critical to accurately predicting and understanding the outcome.
Gallup's polling history indicates there is a high probability of a significant difference in the voting intentions of the large pool of registered voters and the smaller subset of likely voters in lower turnout midterm elections.
In 1994, Gallup's final generic ballot showed a dead heat between the Republicans and Democrats among all registered voters, but a 7-point lead for the Republicans among likely voters. According to estimates of the national two-party vote for that election, the Republicans had a nearly 7-point advantage in all votes cast for Congress that year (52.4%-45.5%). (In the penultimate Gallup poll in late October 1994, the likely voter "gap" showed a 10-point Republican advantage, while the registered voter gap in the same poll showed a 3-point Democratic lead, representing a 13-point difference in the gap between the two groups.)
In 2002, Gallup's final generic ballot among registered voters -- in the poll conducted Oct 31-Nov 3, 2002 -- showed a 5-point Democratic edge, 49%-44%. Among likely voters it was 51% to 45% Republican, for a difference in the gap between registered and likely voters of 11 points. The final national House vote in 2002 was 50.5% for the Republicans vs. 45.9% for the Democrats, a 5-point Republican advantage.
In both of these years, the distinction between the vote intentions of all registered voters and likely voters was significant. The likely voter estimate was more predictive of the real world outcome.
Gallup's first use of the likely voter model in 2006 -- in the USA Today/Gallup Poll conducted Sept 16-17 -- provided an early suggestion that the standard pattern of turnout by party will continue this midterm cycle. Among the pool of all registered voters in the sample, Democrats led Republicans by a 9-point gap, 51% to 42%. Among the pool of those identified as likely voters, the ballot was tied at 48% to 48%.
This can change during the course of the election between now and Nov. 7. Likely voter estimates are more volatile than estimates based on larger samples of registered voters or all national adults. The gap between registered voter and likely voter estimates often fluctuates in September and October, particularly in response to the high-intensity campaigning likely to occur over the next month. Still, the mid-September Gallup results suggest that the historical turnout advantage Republicans have enjoyed in mid-term elections appears to be operative again this year -- at least as of this point.
Editor in Chief, The Gallup Poll
The maps and tables display an average of recent polls conducted in each state.
The table displays the status of each race based on our analysis of the leading candidate's margin. We rate races as "leaning" to one candidate or another if the lead is statistically meaningful (at least one standard error). If that lead is strongly significant (at least two standard errors), we rate the race as "strongly" Democrat or Republican.
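For the statistically curious, the rating rule above can be sketched as follows. This is an illustrative simplification, not our production code: it treats the average as if it were a single poll of size n, and the function names are invented for this example. The standard error of a lead uses the multinomial approximation for two proportions measured in the same sample.

```python
import math

def lead_standard_error(p1, p2, n):
    """Standard error of the lead (p1 - p2) when both proportions
    come from the same sample of size n (multinomial approximation):
    Var(p1 - p2) = [p1(1-p1) + p2(1-p2) + 2*p1*p2] / n."""
    lead = p1 - p2
    return math.sqrt((p1 + p2 - lead ** 2) / n)

def rate_race(p1, p2, n):
    """Classify a race: 'strong' if the lead is at least two standard
    errors, 'lean' if at least one, otherwise 'tossup'."""
    lead = abs(p1 - p2)
    se = lead_standard_error(p1, p2, n)
    if lead >= 2 * se:
        return "strong"
    if lead >= se:
        return "lean"
    return "tossup"

# Examples: a 50%-40% lead with n=600 clears two standard errors;
# a 45%-44% result with the same sample does not clear even one.
rate_race(0.50, 0.40, 600)   # "strong"
rate_race(0.45, 0.44, 600)   # "tossup"
```

Note that the standard error of a lead is larger than the margin of error of either candidate's share alone, because the two shares move in opposite directions within the same sample.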
The map colors update automatically to reflect any changes in status. Dark blue and dark red represent races that we rate "strongly" Democratic or Republican respectively. Lighter shades indicate a lean status. States colored yellow are those we classify as "tossups" -- races in which neither candidate shows a significant lead over the last five polls. We use shades of green to indicate states where an independent or third party candidate has a significant lead. The U.S. House map also displays districts with no available polls in grey (see this post for more information about the U.S. House scorecard).
Please note that averages are typically based on five polls, unless fewer have been released. The summary table indicates the number of polls used to calculate the average for each race. Clicking on any state in either the map or table will take you to our chart and complete source data for each race.
The averages that appear here are based on the most recent surveys in each state based on random probability sampling. The averages listed in the tables include telephone polls conducted using an automated methodology rather than live interviews, but exclude surveys based on non-random Internet panels. As such, the averages listed in the summary tables may differ slightly from the "all polls" averages currently displayed on our charts, which include the Internet polls.
Why a five-poll average? Results for pre-election polls often vary due to random sampling error as well as differences in methodology (question wording, sampling, the survey mode, or the way pollsters define likely voters). While averaging is an imperfect solution, we believe a five-poll average provides a more reliable snapshot of data available for each race than focusing on only the single latest poll.
Finally, our software displays a letter "i" for both independent and third party candidates when displayed by our database. We are aware that many such candidates (such as Joe Lieberman in Connecticut) are not independents per se, but are running as members of a third party. Our use of the "i" is a programming shortcut that we hope to eliminate at some point. We have not yet included independent and third party candidates in most races because their results are reported inconsistently or not at all by pollsters.