Eric Dienstfrey | May 3, 2008
Zogby Tracking (release)
Obama 43, Clinton 42
Obama 46, Clinton 37
Obama 49, Clinton 44
Clinton 47, Obama 40
Note: InsiderAdvantage's 4/29 survey showed African Americans at 25% of likely Democratic primary voters in North Carolina, while their 5/1 survey shows it at 33%.
Favorable / Unfavorable
McCain: 52 / 44
Clinton: 46 / 52
Obama: 49 / 48
Time for another update looking at the "sensitivity" of our trend lines for North Carolina. As Professor Franklin is tied up with his day job this morning, I will be your guide. But let's go immediately to the chart that Franklin just generated.
The solid red and blue lines represent our standard trend estimates of support for Barack Obama and Hillary Clinton, respectively, in North Carolina. The trend lines are based not on simple averaging but on local regression. We deliberately set the standard estimate to be conservative, in that it takes a good bit of evidence of "real" change (i.e., more than one or two contrary polls) before the trend will show a sharp turn. As we have noted before, with lots of polls, this more conservative estimator has an excellent track record of finding real turning points of opinion while not chasing outliers.
In this case, however, the standard line appears to be too conservative. The last nine polls have produced results below the solid trend line for Obama and all but one above the trend line for Clinton. So in the graph above, we have also included trend lines based on a more sensitive estimator -- the dotted lines -- for both candidates. The sensitive estimator uses the same local regression methodology as our standard approach, but sets the degree of smoothing to about half that of the standard estimator. It should detect short-term change more quickly, but it will also sometimes chase phantom changes due to flukes of a few polls that happen to be too high or too low.
In this case, for the moment at least, the sensitive estimator (which shows Obama leading by a 7.7 point margin or 50.3 to 42.6) fits the most recent data better than the standard estimate (which shows Obama leading by 13.7, or 52.7 to 39.0).
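For readers curious about the mechanics, here is a minimal sketch of the kind of local-regression trend estimate described above. The tricube-weighted local linear fit and the specific span values are illustrative assumptions, not the actual code behind our charts; halving the span (`frac`) roughly corresponds to moving from the standard to the "sensitive" estimator.

```python
import numpy as np

def loess_point(x0, x, y, frac):
    """Estimate the trend at x0 with a tricube-weighted local linear
    regression over the nearest frac * n observations (simplified LOESS)."""
    n = len(x)
    k = max(3, int(np.ceil(frac * n)))
    d = np.abs(x - x0)
    nearest = np.argsort(d)[:k]          # indices of the k closest polls
    h = d[nearest].max()                 # neighborhood radius
    w = (1 - (d[nearest] / h) ** 3) ** 3 if h > 0 else np.ones(k)
    # weighted least squares fit of intercept + slope on the neighborhood
    X = np.column_stack([np.ones(k), x[nearest]])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[nearest])
    return beta[0] + beta[1] * x0

def trend(x, y, frac):
    """Trend line evaluated at each polling date; a smaller frac gives
    a more sensitive (but more outlier-prone) estimate."""
    return np.array([loess_point(x0, x, y, frac) for x0 in x])
```

With a large span, a run of contrary polls barely bends the line; with the span cut in half, the same polls pull the trend much faster -- which is exactly the trade-off between the solid and dotted lines above.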
Update: In the hour since Charles generated this chart, we have added new polls from Rasmussen Reports and ARG that change the numbers for the standard estimates on our North Carolina chart slightly from what appears in the graph above. We'll try to post an update of the more sensitive estimate trend later this afternoon.
American Research Group
Clinton 53, Obama 44
Obama 52, Clinton 41
Obama 49, Clinton 40
Clinton 52, Obama 45 (n=689)
Obama 48, McCain 47... Clinton 48, McCain 45
More information on the Downs Center/SurveyUSA methodology.
Obama 50, Clinton 34
Obama 42, Clinton 42
Obama 51, Clinton 44
441 registered Democrats and those who lean Democratic
Berwood Yost is Director of The Floyd Institute's Center for Opinion Research at Franklin & Marshall College. Kirk Miller is B.F. Fackenthal Professor of Biology and Senior Research Fellow at The Floyd Institute's Center for Opinion Research.
The 2008 Democratic presidential primary on April 22 put Pennsylvania in the national spotlight for a long six weeks. Members of the media followed the candidates into the Keystone State intending to learn more about its people and its politics. Not far behind the media came the pollsters--some media even brought their own pollsters. Pennsylvania voters were besieged by pollsters in unprecedented numbers. There were 39 publicly released surveys, which included more than 30,000 interviews with the state's voters, during only the last three weeks of the campaign. This is a tremendous increase in polling activity compared to the 26 polls released in the final three weeks of the 2004 presidential campaign in Pennsylvania or the 15 released during the final three weeks of the 2006 Senate campaign.
Taken together, the pollsters who pestered Pennsylvanians did an adequate job of predicting the final outcome: 36 of the 39 polls in April predicted a Clinton victory and the three outliers were all conducted by the same polling organization. We agree with Charles Franklin's assessment that the aggregate performance of the Pennsylvania pollsters was good. Figure 1 is a frequency distribution of the predictive accuracy of the 39 public polls released in Pennsylvania. It shows that there was a slight bias in the polling estimates toward Barack Obama (meaning the polls in Pennsylvania underestimated Hillary Clinton's margin of victory), but that this bias was small and, according to the exit polls, not surprising because late deciding voters moved in larger proportions toward Clinton.
Figure 1 Frequency Distribution of Predictive Accuracy
Some individual pollsters fared much better than others in the accuracy of their estimates. Figure 2 shows the predictive accuracy and corresponding confidence interval for each of the 39 polls conducted between April 1 and 22 in Pennsylvania, arranged by the number of days prior to the primary the survey was completed. Those pollsters who produced a biased estimate, meaning the confidence interval for their estimate did not overlap zero, are labeled in Figure 2. Three of the four polls conducted by Public Policy Polling (PPP) were biased, and all were biased toward Obama. Two of the three polls conducted by American Research Group (ARG) were biased, and one of SurveyUSA's three polls showed bias. One ARG poll showed that Clinton and Obama were tied; the other, seven days later, showed Senator Clinton ahead by 20 points. The SurveyUSA poll that missed also showed Senator Clinton ahead by 20 points. The measure of predictive accuracy we used shows that the pollsters' final estimates were mostly in line with the final election results.
Figure 2 Predictive Accuracy of Individual Polls by Date of Poll
The misses identified in Figure 2 are not related to sample size. Four of the surveys that missed had four of the eight largest samples; the other two that missed had sample sizes only slightly below the median. There is a relationship in these analyses, as one would expect, between sample size and the width of the confidence intervals, but there is no relationship between sample size, width of the confidence interval, and the likelihood that a survey was biased. Without further examination of the pollsters' methodological choices, we cannot say which choices matter most in producing unbiased polls. Some might conclude that pollsters who use interactive voice response (IVR) technology to collect data are more prone to bias, because two of the three pollsters who produced biased estimates use IVR, but not all IVR pollsters produced biased results.
Another interesting question we tried to answer is whether the polls converged on the end result as election day approached. Depending on the method used, the answer is a qualified, "slightly." Figure 3 shows the predictive accuracy of each poll as a function of days before the Pennsylvania primary. The trend line fitted to the figure is produced by a LOWESS iterative locally weighted least squares regression. The red dots identify the six biased polls noted earlier. The curve indicates that the polls began to converge until about two weeks prior to the election, that they remained relatively constant for about a one-week period, and then began to converge again over the final days of the campaign. If the six biased polls are removed from the analysis, the convergence is not dramatically improved.
Figure 3 Predictive Accuracy of Individual Polls by Date of Poll with Fitted Regression Line
Measuring Predictive Accuracy
We used the measure of predictive accuracy developed by Martin, Traugott and Kennedy (2005), "A Review and Proposal for a New Measure of Poll Accuracy," Public Opinion Quarterly 69(3): 342-369. Their method compares the ratio of the estimated percent of voters voting for each candidate to the ratio of the final vote tally for each. The natural log of this odds ratio (ln odds) is used because of its favorable statistical properties and the ease of calculating confidence intervals for each estimate. The confidence interval for a poll that reasonably predicts the final outcome of the primary election will overlap zero. Senator Clinton's votes or projected votes were the numerators in all the ratios we calculated, so negative values for ln odds represent an overestimate in favor of Senator Obama and positive values represent an overestimate in favor of Senator Clinton. According to this measure, a poll is biased if its confidence interval does not overlap zero. The polling results used in this analysis were taken from Pollster.com.
By Guest Pollster | May 1, 2008
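As a rough illustration of this measure, the sketch below computes the ln-odds accuracy and an approximate confidence interval for a single poll. The counts-based standard error (the square root of 1/c + 1/o, treating the official-vote terms as negligible because the election counts are enormous) is a standard large-sample approximation for a log odds ratio, not necessarily the exact formula Martin, Traugott and Kennedy use, and the poll and vote figures in the usage line are hypothetical.

```python
import math

def mtk_accuracy(poll_clinton_pct, poll_obama_pct, n,
                 vote_clinton_pct, vote_obama_pct, z=1.96):
    """Ln-odds predictive accuracy with Clinton in the numerator:
    0 = perfect prediction, negative = overestimate of Obama,
    positive = overestimate of Clinton. Returns (accuracy, (low, high))."""
    a = math.log((poll_clinton_pct / poll_obama_pct) /
                 (vote_clinton_pct / vote_obama_pct))
    # approximate respondent counts behind each poll percentage
    c = n * poll_clinton_pct / 100.0
    o = n * poll_obama_pct / 100.0
    se = math.sqrt(1.0 / c + 1.0 / o)  # official-vote terms are negligible
    return a, (a - z * se, a + z * se)

# hypothetical poll: Clinton 52, Obama 45 among n=689, vs. a 55-45 final split
accuracy, (low, high) = mtk_accuracy(52, 45, 689, 55, 45)
```

Under this measure, the hypothetical poll's ln odds is slightly negative (it underestimated Clinton's margin), but because its confidence interval straddles zero it would not be classified as biased.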
Douglas Usher is the Senior Vice President of Widmeyer Communications and formerly Vice President at the Democratic polling firm, The Mellman Group.
"The survey research and marketing industries need to recognize that the Internet and cellphones, not landlines, are likely to be the wave of the future." So says Humphrey Taylor, chairman of the Harris Poll.
I met Humphrey Taylor once - in 1999. He pitched Harris Online services to the Democratic polling firm where I worked, and his team said that telephone surveys in politics would likely be replaced by web surveys after that election cycle.
Was he right about political polling? Hardly - in fact, he couldn't have been more wrong. Let me make this as clear as possible: no professional political pollster on either side of the aisle has ever used web-based surveys for quantitative research in their campaign practice.
And as any pollster.com reader understands - and all serious consumers of political polling know - you can count on one hand the number of public pollsters using online methodology for political polls. Even John Zogby, who claims that his firm has "since the mid-90's... utilized the Internet as a means of providing the public with instant access to the day's best public opinion research," has like most pollsters used telephone polling this cycle.
Internet polling is a growing industry. I use it all the time for my clients - indeed, it rules many aspects of consumer research. So, why the disconnect for politics?
Because quantitative political research for nearly all levels of American politics hits the "sour spot" of internet research.
Let me explain.
Internet-based research is perfectly suited for certain types of public opinion research, and those types describe most of the public opinion research for which clients pay money -- hence, the internet has become a valuable research tool.
And qualitative public opinion research is well-suited for the internet (finally, the end of notoriously unreliable mall-intercepts!)
However, quantitative political public opinion research -- polling -- hits the internet's "sour spot" because it requires reaching a narrow population for which pollsters do not have well-defined web contact information.
How well do you think Harris Interactive's national panel maps on to likely voters in New York's 26th Congressional District? If you were polling Indiana's primary, would you feel comfortable that the list of e-mails that you bought from a vendor actually contained properly registered voters in the state with past primary vote history?
Some internet survey vendors claim that they have representative general election statewide panels. This may be true - but how many times can you go back to that panel before you exhaust it? Pollsters in competitive races will track data for 30 days or more - well beyond the capacity of internet vendors in even the largest state.
It's not because political pollsters are "old-fashioned" that they don't conduct web-based quantitative research - it's because there is no reliable way to reach their candidates' electorates online in a way that meets even a modest level of methodological rigor.
None of this is to discount concerns about telephone polling: ever-lower response rates, plus caller-ID and cell-phone-only households, make reaching people on the phone more difficult than ever.
But, for political polling, internet-based research has not proven to be the panacea once (and continually) promised.
UPDATE: Humphrey Taylor responds.
943 likely voters
Clinton 48, Obama 38
Pew Research Center
1,502 adults, 1,323 registered voters
651 registered Democrats and those who lean Democratic
Favorable / Unfavorable
McCain: 51 / 45
Clinton: 44 / 54
Obama: 49 / 48
McCain 44, Obama 43... Clinton 49, McCain 41
McCain 43, Obama 42... Clinton 48, McCain 38
Obama 47, McCain 38... Clinton 51, McCain 37
n=400 LV, 4/29
Clinton 46, Obama 41
Clinton 44, Obama 42
n=400 LV, 4/28-29
Obama 49, Clinton 42
My NationalJournal.com column, with more on the performance of exit polls in Pennsylvania, is now posted online.
Obama 46, Clinton 43
Obama 46, McCain 43... Clinton 45, McCain 44
Obama 46, Clinton 38
Obama 45, McCain 45... Clinton 48, McCain 43
Gen House: Dem 50, Rep 32
We'll be doing some minor back-end work on our site tonight at around 8:00 Eastern, which may affect your ability to leave comments for about 10 to 15 minutes. Our apologies in advance.
Amidst the personal craziness last week, I neglected to link to two columns from network pollsters that provide some valuable data from the exit polls on the Obama-Clinton race tabulated by race, education and income. Interest in this issue peaked last week after Barack Obama said the following after his loss in the Pennsylvania primary:
I have to say if you look at and I know my staff has talked about this: If you look at the numbers, in fact, our problem has less to do with white working class voters. In fact, the problem is that, to the extent there is a problem, is that the older voters are very loyal to Senator Clinton.
ABC's polling director Gary Langer combined data from exit polls to look at support for the two candidates among white voters by age and income. "Age clearly is a factor," he concludes, "but it’s equally clear that socioeconomic status, as measured by the education and income alike, is independently a factor, and a big one."
Langer's column has tables with all the data. To make the patterns easier to see, I created two charts using only the percentage supporting Obama.
Here is Langer's analysis:
Look just at seniors, for instance: Across all primaries to date, among less well-off white seniors (those with less than $50,000 in household incomes), Clinton has beaten Obama by 70-22 percent. Among white seniors with more than $100,000 in household incomes, by contrast, Obama’s actually run ahead, by 50-45 percent.
Put another way, Obama’s support from high-income white seniors has been 28 points higher than it’s been among working-class white seniors. That isn’t just a senior problem. [...]
The relationship is weakest in Obama’s best age group, under 30s, but it’s still there. He’s won under-30 whites in $100,000+ households by 65-33 percent; he’s won young whites in under-$50,000 households by a much closer 53-42 percent.
The results are similar by education – Obama does 21 points better with white seniors who’ve earned college degrees than with those who haven’t. College-educated white seniors have favored Clinton by just 8 points, 50-42 percent; those without degrees have backed her by a whopping 48 points, 69-21 percent.
Kathy Frankovic, polling director at CBS News, looks at the same exit polling data (or presumably the same -- she explained that she combined exit polls "weighted to total votes...excluding Florida and Michigan") and adds a little more granularity for the youngest voters:
Among white voters with a college degree, Obama and Clinton have run almost even so far this year - 49 percent for Obama, 47 percent for Clinton. The results are very different by age within this group - those under 45 have given Obama a lead, and those over 45 have chosen Clinton. This does seem to support Obama’s claim that older, better-educated Democratic voters are staying with what they know, keeping on “track.”
White voters without a college degree, however, vote differently. This year, they have voted for Clinton over Obama by almost two-to-one - 61 percent to 33 percent. And the age of the voter matters less. Clinton leads decisively with just about all age groups of these voters - as long as they are over 30. She even edged Obama, 48 percent to 47 percent, among non-degreed voters under 30, but over 24 years old. Only the white non-college graduates younger than 25 have favored Obama so far this primary season. They voted for him 59 percent to 38 percent.
Frankovic's column also draws on an innovative survey released last week conducted among college students in Pennsylvania in partnership with the website Uwire (another survey I neglected to link to last week). College students have always been notoriously difficult to survey, and the ubiquity of cell phones among students has made it even worse. In this case, CBS sampled and interviewed students online using email lists of all students, presumably obtained directly from the universities. The full results from CBS include more methodological details.
For those of us who have been following trends in the Obama-Clinton contest by race, education and income, these two columns from Langer and Frankovic are invaluable. Both are worth reading in full. Also, be on the lookout for analysis of this data and more by my colleague Ron Brownstein in National Journal on Friday.
Two weeks ago, we linked to a survey result for the Louisiana Senate race released by Rasmussen Reports. The survey purported to show Sen. Mary Landrieu with a 16-point lead over challenger John Kennedy. Apparently, according to this report that appeared on April 20 in the New Orleans Times-Picayune, that initial result was in error:
According to one news account of a new poll by Rasmussen Reports, Sen. Mary Landrieu, D-La., had a 39 percent to 55 percent lead over her Republican challenger, state Treasurer John Kennedy. According to another account, that same poll had Kennedy ahead by a statistically insignificant 46 percent to 47 percent. Who was right? As it turns out, no one. The first poll results showing Landrieu ahead were posted on the Rasmussen Web site and then pulled after the firm realized it had confused the results with polling done in the Virginia Senate race, which showed Democrat Mark Warner ahead by that same 39 percent to 55 percent margin. Rasmussen later posted the 46 percent to 47 percent results, and then quickly removed that from its Web site. A company spokesman confirmed that the first results were wrong, but could not explain what happened with the second posted results. All he would say is that Rasmussen doesn't have any current polling data in the Louisiana Senate race. For Landrieu campaign staff, the sudden fall from 16 points ahead to one point behind, followed by a "never mind" from Rasmussen, was softened by the release last week of another poll, this one by Southern Media & Opinion. It showed Landrieu running ahead of Kennedy 38 percent to 50 percent.
Needless to say, after learning about the Rasmussen error this afternoon, we immediately removed the erroneous result from our chart of the Louisiana Senate race. Apologies for not doing so sooner.
900 RV, 400 Dem, 4/28-29
Clinton 44, Obama 41
McCain 46, Obama 43... Clinton 45, McCain 44
Favorable / Unfavorable
McCain: 50 / 47
Clinton: 44 / 54
Obama: 49 / 48
n=720 RV, 4/24-28
Obama 56, McCain 32... Clinton 52, McCain 38
Zogby Interactive (online)
n=7,653 likely voters, 4/25-28
w/ Ralph Nader and Bob Barr
Obama 45, McCain 42, Barr 3, Nader 1
McCain 44, Clinton 34, Barr 4, Nader 3
n=555 likely voters, 4/26-28
Clinton 63, Obama 27
Late last week, a North Carolina musician named David LaMotte received a survey call from Garin-Hart-Yang, the firm of Clinton pollster Geoff Garin. The call, as he reported to HuffingtonPost blogger and DailyKos diarist Paul Loeb, "started out normal enough" but soon "turned to long Hillary-praising and Barack bashing policy statements" with response options that asked him to evaluate each statement. At the end of the call, they asked, "now based on everything we've discussed, who would you vote for?" LaMotte used his telephone answering machine to record the latter half of the call; the result is the transcript that Loeb posted at DailyKos and, later, the streaming audio posted by Loeb, Politico's Ben Smith and ABC's Jake Tapper.
Not surprisingly, much of the commentary about this call focuses on whether the Garin survey meets the classic definition of a "push poll." It does not, at least as far as I can tell.
The call in question was long, included dozens of questions that seemed "normal enough" to LaMotte and, as he confirmed to me via email, concluded with a set of demographic items that LaMotte deleted from the audio recording in order to protect his own privacy. This call has none of the hallmarks of the classic, so-called "push poll" intended only to spread a negative message under the false guise of a survey.
It was, rather, a "message testing" survey, albeit one that tested a highly negative and -- to many -- objectionable message. It was not measuring "public opinion" as it exists now but rather voter reactions to a series of positive statements about Hillary Clinton and negative attacks directed at Barack Obama. Garin asked respondents to react to each statement, and subsequently asked a second vote question ("Now based on everything we've discussed, who would you vote for?"), in order to identify the most effective attack and the voters most likely to be swayed by it.
Like it or not, this sort of testing is common in most campaigns, and almost none of the results ever see the light of day. Full disclosure: As a campaign pollster, I helped design hundreds of surveys with similar tests of messages. (I have written previously about the differences between message testing and "push polls"; see also the commentary by Roll Call's Stu Rothenberg and the recent statement on "push polls" and message testing by the American Association for Public Opinion Research (AAPOR).)
Of course, simply labeling this survey as "message testing" does not absolve the pollster of ethical constraints. The pollster still has an obligation to tell the truth and treat respondents with fairness and respect. Did this survey do that? LaMotte's audio has the interviewer reading five statements that he describes as "criticisms that opponents might make about Barack Obama." After each of the statements below, the interviewer asks "if they would give you very major doubts, some doubts or no real doubts about supporting Obama."
At a time when we need leaders who are clear, strong and decisive, Obama has been inconsistent, saying he would remove all troops, but then indicating that he might not, and pledging to renegotiate NAFTA, but then sending signals that he would not actually do so as president.
He supported George W. Bush's 2005 energy bill, which paid six billion dollars in subsidies to the oil and gas industry, nine billion dollars in subsidies to the coal industry and twelve billion dollars in subsidies to the nuclear power industry. It was called 'a piñata of perks' and 'the best energy bill corporations could buy.'
He leads the committee with oversight on Afghanistan but failed to hold a single committee meeting or hearing on Al Qaeda in Afghanistan or anything else.
He sided with the credit card companies voting against the bill that would cap interest rates at 30 percent.
While he talks about universal health care he has failed to make the hard choices that would truly get us to universal coverage and lower health care costs for all. His plan would leave 15 million Americans uninsured.
Let's stipulate up-front that the Obama camp vigorously contests these arguments, with some support from journalists. While by no means a complete listing, here are links to reports that provide more context on the NAFTA, energy, credit card, health care and Afghanistan issues. Readers are encouraged to add more in the comments, if warranted.
However, here is the non-rhetorical question that interests me most: How much do these statements differ from those included in Clinton mailers on NAFTA, the energy bill, the credit card bill, health care or Hillary Clinton's statements on the stump about the Afghanistan oversight committee? And if they are essentially the same, why would testing these assertions in the context of a survey be any more or less objectionable than making the same assertions in a debate, a speech, a television ad or a campaign mailer?
[The original version of this post included some extraneous verbiage in the third paragraph that I've cleaned up]
n=727 likely voters, 4/26-28
Obama 49, Clinton 44
Howey Politics/Gauge Market Research
n=600 likely voters, 4/23-24
Clinton 46, Obama 46
(Thanks to reader DW)
Public Policy Polling (D)
n=1,388 likely voters, 4/26-27
Clinton 50, Obama 42
Favorable / Unfavorable
McCain: 49 / 47
Clinton: 45 / 54
Obama: 52 / 45
n=774 likely voters, 4/28
Obama 51, Clinton 37
Favorable / Unfavorable
McCain: 51 / 46
Clinton: 45 / 53
Obama: 51 / 46
Public Policy Polling (D)
Obama 51, Clinton 39
April 26-27, n=1,121 LV, margin of sampling error 2.9%
Obama 46, McCain 44... Clinton 50, McCain 41
Robert Novak's column last week led with this reference to the Pennsylvania exit poll results:
When Pennsylvania exit polls came out late Tuesday afternoon showing a lead of 3.6 points for Hillary Clinton over Barack Obama, Democratic leaders who desperately wanted her to end her candidacy were not cheered. They were sure that this puny lead overstated Obama's strength, as exit polls nearly always have in diverse states with large urban populations. How is it possible, then, that Clinton, given up for dead by her party's establishment, won Pennsylvania in a 10-point landslide? The answer is the dreaded "Bradley effect."
Prominent Democrats only whisper when they compare Obama's experience, the first African American with a serious chance to be president, with what happened to Los Angeles Mayor Tom Bradley a quarter-century ago. In 1982, exit polls showed Bradley, who was black, ahead in the race for governor of California, but he ultimately lost to Republican George Deukmejian. Pollster John Zogby (who predicted Clinton's double-digit win Tuesday) said what practicing Democrats would not: "I think voters face to face are not willing to say they would oppose an African American candidate."
Unfortunately, Novak confounds two issues, and Zogby's contribution confuses things further. The "Bradley effect" (also called the "Bradley/Wilder effect," the latter based on the 1989 election of Doug Wilder in Virginia by narrower margins than indicated by pre-election polls) pertained less to exit polls than to pre-election telephone surveys. The underlying theory was that white respondents were sometimes unwilling to reveal their preference for the white candidate in a bi-racial contest when they felt some "social discomfort" in doing so. That is, respondents would be less likely to reveal their true preference in a telephone interview if they believed the interviewer supported a different candidate. The most important evidence was an observed race-of-interviewer effect: Support for Doug Wilder in one 1989 survey (pdf) was eight points higher when the interviewer was black than when the interviewer was white.
The problem with extending this idea to the 2008 exit polls is that -- contrary to the apparent assumptions of both Bob Novak and John Zogby -- exit polls do not involve a "face to face" interview. Rather, the exit poll interviewer's task is to randomly select and recruit respondents, hand them a paper questionnaire, a pencil and a clipboard and allow the respondents to privately fill out the questionnaire and deposit it into a large "ballot box."
The more likely explanation for the consistent Obama skew in the exit polls this year is less about "voters not willing to say they would oppose an African American candidate" than about the relative youth of the interviewers, and the well-established problem that typically younger exit poll interviewers have in winning cooperation from older respondents. Here is a summary I wrote two years ago about information included in the official, post-election report on the 2004 exit polls:
The [National Election Pool] NEP exit polls depended heavily on younger and college age interviewers. More than a third (36%) were age 18-24 and more than half (52%) were under 35 years of age (p. 43-44). These younger interviewers had a much harder time completing interviews: The completion rate among 18-24 year olds was 50% compared to 61% among those 60 or older. The college age interviewers also reported having a harder time interviewing voters...The percentage of interviewers who said "the voters at your location" were "very cooperative" was 69% among interviewers over 55 but only 27% among those age 18 to 24 -- see p. 44 of the Edison/Mitofsky report.
Given the huge differences by age in both pre-election and exit polls -- Obama wins those under 30 while Clinton dominates among those over 60 -- an age-related selection bias is not surprising. And the issue may not be about simply getting the age mix right in the exit poll. The issue may also be related to the "social discomfort" theories behind the Bradley-Wilder effect.
Respondents may be making judgments about the exit poll interviewers based on their appearance (age, gender and race) that influence whether they agree to participate or avoid the interviewer altogether. Similarly, while exit poll interviewers are supposed to be carefully counting exiting voters and sticking rigidly to instructions that they select every fourth voter (or whatever interval they are assigned), anecdotal evidence suggests that those with less experience often deviate from the procedure and "take who they can get." So less experienced, overburdened interviewers are probably making judgments about which respondents (based on their age, gender and race) might be most likely to cooperate.
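To see how differential cooperation produces a skew, here is a back-of-the-envelope simulation. All of the age shares, candidate splits and cooperation rates below are invented for illustration; the point is only that when the groups that cooperate more with interviewers also lean toward one candidate, the completed interviews skew toward that candidate even though every voter answers honestly.

```python
def poll_vs_actual(groups):
    """Obama's share among completed interviews vs. among all voters."""
    completed = sum(g["share"] * g["coop"] for g in groups.values())
    polled = sum(g["share"] * g["coop"] * g["obama"]
                 for g in groups.values()) / completed
    actual = sum(g["share"] * g["obama"] for g in groups.values())
    return polled, actual

# Hypothetical electorate: younger voters lean Obama and cooperate more
# with (younger) interviewers; older voters lean Clinton and cooperate less.
groups = {
    "under_30": {"share": 0.15, "obama": 0.60, "coop": 0.60},
    "30_to_59": {"share": 0.55, "obama": 0.50, "coop": 0.55},
    "over_60":  {"share": 0.30, "obama": 0.35, "coop": 0.45},
}
polled, actual = poll_vs_actual(groups)
```

With these invented numbers the raw exit-poll tally comes out higher for Obama than the true electorate-wide share; no "social discomfort" about answering is needed, only uneven response rates across age groups.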
Again, quoting from my own summary two years ago:
"It's not that younger interviewers aren't good," as Kathy Frankovic puts it (slide #30), "it's that different kinds of voters perceive them differently." Put all the evidence together, we have considerable support for the idea "that Bush voters were predisposed to steer around college-age interviewers" (Lindeman, p. 14) or, put another way, that "when the interviewer has a hard time, they may be tempted to gravitate to people like them" (Frankovic, slide #30).
It is not at all surprising that this same mix of issues -- younger interviewers who have trouble winning cooperation with older respondents and a huge age differential in the results -- produces a consistent skew to Obama in the context of 2008.
American Research Group
Obama 52, Clinton 42
Clinton 52, Obama 43
The Gallup Daily