August 10, 2008 - August 16, 2008


POLL: Rasmussen Maine (8/12)

Rasmussen Reports
8/12/08; 500 LV, 4.5%
Mode: IVR

Obama 53, McCain 39
(July: Obama 49, McCain 41)

Collins (R-i) 55, Allen (D) 40
(July: Collins 53, Allen 43)

POLL: Ivan Moore Alaska (8/9-12)

Ivan Moore Research
8/9-12/08; 501 RV, 4.4%
Mode: Live Telephone Interviews
(Anchorage Press)


Begich (D) 56, Stevens (R-i) 39, Bird (i) 2
GOP Primary: Stevens 63, Cuddy 20, Vickers 7

House At-Large:
Berkowitz (D) 51, Young (R-i) 41, Wright (i) 4
Parnell (R) 46, Berkowitz 42, Wright 3
GOP Primary: Young 46, Parnell 40, LeDoux 7

POLL: Economist National (8/11-13)

Economist/YouGov
8/11-13/08; 1,000 Adults, 4%
Mode: Internet

Obama 41, McCain 40
(8/6: Obama 42, McCain 39)

POLL: Daily Tracking (8/12-14)

Rasmussen Reports
8/12-14/08; 3,000 LV, 2%
Mode: IVR

Obama 47, McCain 45

Gallup Poll
8/12-14/08; 2,690 RV, 2%
Mode: Live Telephone Interviews

Obama 44, McCain 44

POLL: Zogby National (8/12-14)

Zogby Interactive
8/12-14/08; 3,339 LV, 1.7%
Mode: Internet

Obama 44, McCain 42
Obama 43, McCain 40, Barr 6, Nader 2

POLL: Rasmussen North Carolina (8/13)

Rasmussen Reports
8/13/08; 500 LV, 4.5%
Mode: IVR

North Carolina
McCain 50, Obama 44
(July: McCain 48, Obama 45)

Perdue (D) 51, McCrory (R) 45
(June: Perdue 47, McCrory 46)

POLL: Rocky Mountain News Colorado (8/11-13)

Rocky Mountain News/CBS News 4/
Public Opinion Strategies (R)/
RBI Strategy & Research (D)
8/11-13/08; 500 LV, 4.5%
Mode: Live Telephone Interviews

McCain 44, Obama 41, Barr 3, Nader 2
Sen: Udall (D) 44, Schaffer (R) 38, Moore (i) 5, Kinsey (G) 2

How We Choose Polls to Plot: Part IV

[This is Part IV of the recent discussion between Mark Blumenthal and Charles Franklin called "How We Choose Polls to Plot." For previous posts in the discussion, see Parts I, II, and III.]

"What happens if you leave out 'x'?" is probably the single most-asked question at Pollster.com. Everyone has a favorite pollster to hate, and wonders whether, if only that one were removed, the results would be closer to the truth. It is a really good question because it goes to the heart of the robustness of our trend estimates and the role of one (or a couple) of pollsters in shaping the conventional wisdom about what "the polls show". The former issue is statistical; the latter goes to how shared understandings are constructed. If our estimators are highly sensitive to any one pollster, then we have a statistical problem. If one pollster unduly influences shared perceptions, then we had better hope they are "right".

Today's question from Mark (and many readers) is what role the tracking polls play in our estimates. This is an issue Mark and I debated quite a lot during the winter when Gallup and Rasmussen began their daily tracking polls. Because they produce so many numbers, including all their data runs the risk that these two dominate our trend estimate to an unacceptable degree. But do they exert that much influence? That is the question.

And just to be contrarian, take note of the opposite problem: data are valuable. You should never want to ignore information. In that sense excluding data from prolific sources is a mistake unless the data are biased in some uncorrectable way.

The first decision we reached in January was that we would include only INDEPENDENT samples from the tracking polls. This was an easy call. Rolling samples are great for daily updates, but with a three-day track, Thursday's poll isn't independent of Wednesday's because both contain Tuesday's and Wednesday's interviews. In that sense, there isn't as much new information as it seems. So we take only the independent results: Mon-Tues-Wed, Thur-Fri-Sat, Sun-Mon-Tues, and so on for a three-day tracker. This limits us to independent data collections and cuts down on the number of entries in our data that come from any single tracking poll.
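That selection rule is easy to mechanize. Here is a minimal sketch (the dates and the three-day window are illustrative, not taken from our database):

```python
from datetime import date, timedelta

def independent_windows(start, end, window_days=3):
    """Yield non-overlapping (first_day, last_day) field periods for a
    rolling tracker, so consecutive entries share no interview days."""
    d = start
    while d + timedelta(days=window_days - 1) <= end:
        yield d, d + timedelta(days=window_days - 1)
        d += timedelta(days=window_days)  # skip past the whole window

# A three-day tracker fielded 8/11 through 8/19 yields three
# independent samples: 8/11-13, 8/14-16, 8/17-19.
windows = list(independent_windows(date(2008, 8, 11), date(2008, 8, 19)))
```

The overlapping releases in between (8/12-14, 8/13-15, ...) are skipped precisely because they share interview days with a window already taken.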

Despite this, we get a lot of data in the national trend from two primary sources: Rasmussen accounts for 63 of the 286 data points in our national trend data, and Gallup's tracker provides 41 more. (We keep Gallup's USA Today polls separate from the tracker.) A third source, The Economist/YouGov internet poll, accounts for 24 data points. (Full disclosure: YouGov/Polimetrix supports Pollster.com and our work here.) The next most prolific pollster is Zogby, with only 12. So let's take a look at the influence of these top three pollsters in terms of data. Together they account for 128 of 286 data points, or 45% of our national data.

Let's begin by recognizing that every data point MUST have some influence on our trend estimator. If it didn't, the trend would not be responding to the data! So in that simple sense, the Rasmussen, Gallup and YouGov data must play some role in determining the value of our trend estimate. But that isn't really the issue that concerns people. The question is whether these three pollsters DISTORT the trends we would otherwise estimate from all other sources. It would be fine if Rasmussen or Gallup or YouGov had a huge influence on our estimate so long as their trends were exactly in line with everyone else's. The concern arises when one of these is possibly both influential AND out of line with the rest of the world.

We need to look at three things: the overall trend with all pollsters included, the trend for a single pollster alone, and the trend we'd estimate if we excluded that pollster. If a pollster is different from the others, that's a concern, but if they don't substantially change the trend estimate, we aren't that worried. If they are different AND shift the trend, then we have to worry.
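That three-way comparison is straightforward to sketch. Our actual trend estimator is a local regression; the toy smoother and invented data below are simplified stand-ins, not our production code:

```python
import numpy as np

def local_trend(days, margins, bandwidth=14.0):
    """Crude local linear regression (loess-style tricube weights):
    a simplified stand-in for the site's trend estimator."""
    days = np.asarray(days, dtype=float)
    margins = np.asarray(margins, dtype=float)
    grid = np.unique(days)
    fit = []
    for g in grid:
        w = np.clip(1.0 - (np.abs(days - g) / bandwidth) ** 3, 0.0, None) ** 3
        sw = np.sqrt(w)  # weighted least squares via sqrt-weight scaling
        X = np.column_stack([np.ones_like(days), days - g])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], margins * sw, rcond=None)
        fit.append(beta[0])  # intercept = smoothed value at day g
    return grid, np.array(fit)

# Toy data: pollster "A" polls on even days and runs ~2 points low.
days    = np.arange(1, 11, dtype=float)
margins = np.array([4, 2, 4, 2, 5, 3, 4, 2, 5, 3], dtype=float)
from_A  = (days % 2 == 0)

_, trend_all     = local_trend(days, margins)
_, trend_without = local_trend(days[~from_A], margins[~from_A])
# Dropping A raises the level of the trend; the shape changes far less.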

So let's look at the data. The chart below plots the overall trend (the blue line), the trend for each of the three most prolific pollsters (solid red), and the trend estimate if we exclude that pollster (dashed red line). A fourth plot shows what happens if we exclude all three prolific pollsters and rely only on the 28 different pollsters who've done 12 or fewer polls each (dashed blue).


Over all our polls, we estimate an Obama advantage over McCain of 3.4 points (as of early morning on 8/14). If we exclude Gallup, the trend estimate is 3.2. If we exclude Rasmussen, the estimate is 4.5. If we exclude YouGov the estimate is 3.3. And if we omit all three (and 45% of our data) the trend estimate is 5.1. So it DOES matter which of these we include. By as little as 0.1 points or as much as 1.7 points.

The most striking thing to me about these figures is that all three tracking polls trend a bit below the overall trend, which is why omitting them all produces the biggest change in the current trend estimate. Gallup is only a bit below trend, YouGov a bit more in May but less recently. Rasmussen stands out as the most consistently below trend, with convergence only in June for a while.

At first glance, the most troubling thing about Rasmussen is that his trend turns down much more sharply after late June than either other frequent pollster (both Gallup and YouGov see flat or rising Obama margins over that time). The dashed line without Rasmussen looks flat or possibly rising slightly, while including Rasmussen with all the others produces a modest downward slope recently. So is Rasmussen driving our current trend's downward movement? This is especially relevant given the upward moves by Gallup and YouGov.

The bottom right panel of the figure offers some reassurance. While Rasmussen does look different from Gallup or YouGov, when we take all three tracking polls out, the dashed blue line in the bottom right figure trends slightly down, approximately in parallel with the overall trend estimate using all the polls. To be sure, omitting the tracking polls does produce a higher current trend estimate: 5.1 vs 3.4 for all polls. Clearly the tracking polls are showing a lower margin and that is reflected here. But from my point of view, the happy news is that the trend with or without the three trackers moves in pretty much the same way over the year. Granted some minor differences, both curves move up and down at about the same time and the gap between the solid and dashed blue lines is roughly equal over time. This suggests that the effects of the three trackers may be to lower the estimated Obama margin over McCain, but they don't distort the dynamics of the race. When trends are up or trends are down, they are reflected in both the with and without tracker estimates.

It is reassuring that both the Gallup and YouGov trackers have very little influence on the overall trend estimate. Including or excluding either of these polls has very little effect on the trend estimate.

A final point is what this says about the validity of the polls. If Gallup and YouGov are flat or slightly up, and Rasmussen is sharply down, how are we to know which is "right"? The data here say a bit of both are right. Gallup and YouGov do somewhat better jobs tracking the overall trend than does Rasmussen. But the recent decline in Obama support, even though modest, is not captured by Gallup or YouGov. Rasmussen clearly overstates the decline (compared to other polling) but the consensus of the 158 polls NOT from these three sources is that there has been a little downturn in Obama's lead since late June.

It is easy to exaggerate how large these differences are, especially in light of the intrinsically hard problem of knowing what "the truth" is at any moment. The chart below compares the trend estimates we would get from dropping each of the 31 different pollsters in our data. Two things stand out. Dropping any single pollster has very little effect on the trend estimate, with one exception. Omitting Rasmussen, who is both the most prolific pollster and the one with considerably more variation than others, does make a noticeable difference in the trend estimate. But the reassuring element of this graph is that even the line omitting Rasmussen still falls within the 95% confidence interval around our overall trend estimate. While there was a time in March when the "without Rasmussen" line moved just outside the 95% confidence interval, this is the exception rather than the rule. Most of the time, including now, the trend without Rasmussen is NOT significantly different from the trend over all pollsters (or the trend omitting any individual pollster).


So what do we conclude from this exercise? I'd say that any individual pollster can have important effects on our trend estimate under the right circumstances. Concentrating a lot of unusual polls in a short time span can shift our estimates. But I am encouraged that while there are important differences in the Gallup, Rasmussen and YouGov trends, none of them seems to outright dominate our trend estimates. Even Rasmussen's effects look less important when we see what all the non-tracking polls are showing. We might worry about what the right level of support is, but the shape of the trends looks pretty robust no matter who is included or excluded. While there are differences of as much as 1.7 points in the estimated margin, it is worth taking a deep breath and appreciating the margin of error in these and all other estimates of candidate support right now. The current confidence interval covers a range from +1.1 to +5.2. That 4.1-point range looks pretty large compared to a 1.7-point difference among estimators. Meanwhile, individual polls range over a MUCH wider spread -- at least 10 points and often more. The trend estimate manages to narrow that range of uncertainty by more than 50%. A good achievement, but not one that is precise to tenths of a percentage point, nor one that is immune to some effects of individual pollsters.
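The back-of-the-envelope comparison in that last paragraph can be checked directly (the numbers are the ones cited above):

```python
# Figures cited in the paragraph above.
ci_low, ci_high = 1.1, 5.2    # 95% CI on the current trend estimate
estimator_gap = 1.7           # largest difference among the estimators
poll_spread = 10.0            # individual polls span at least ~10 points

ci_width = ci_high - ci_low           # 4.1 points
assert ci_width > estimator_gap       # the CI dwarfs estimator differences

reduction = 1 - ci_width / poll_spread
assert reduction > 0.5                # trend narrows uncertainty by >50%
```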

POLL: Rasmussen Colorado, Minnesota (8/13)

Rasmussen Reports
Mode: IVR

Colorado (8/13/08; 500 LV, 4.5%)
McCain 49, Obama 48
(July: Obama 50, McCain 47)

Sen: Udall (D) 50, Schaffer (R)
(July: Udall 49, Schaffer 46)

Minnesota (8/13/08; 500 LV, 4.5%)
Obama 49, McCain 45
(July: Obama 52, McCain 39)

Sen: Coleman (R-i) 49, Franken (D) 46
(July: Franken 49, Coleman 46)

POLL: UT Texas (7/18-30)

University of Texas
7/18-30/08; 668 RV, 3.8%
Mode: Live Telephone Interviews / Internet

McCain 43, Obama 33, Barr 5, Nader 2
Sen: Cornyn (R-i) 44, Noriega (D) 31

POLL: Cole Hargrave Snodgrass New Jersey (7/30-31)

Cole Hargrave Snodgrass & Associates (R)/
Club for Growth
7/30-31/08; 400 RV, 4.9%
Mode: Live Telephone Interviews

New Jersey
Sen: Zimmer (R) 36, Lautenberg (D-i) 35, Scheurer (L) 2, Lobman (S) 1, Brooks (i) 1, Carter (i) 1

POLL: IBD/TIPP National (8/4-9)

Investor's Business Daily/TIPP
8/4-9/08; 925 RV, 3%
Mode: Live Telephone Interviews

Obama 43, McCain 38 (July: Obama 40, McCain 37)

POLL: Strategic Vision Wisconsin (8/8-10)

Strategic Vision (R)
8/8-10/08; 800 LV, 3%
Mode: Live Telephone Interviews

Obama 47, McCain 42

POLL: SurveyUSA Washington (8/11-12)

SurveyUSA
8/11-12/08; 718 LV, 3.7%
Mode: IVR

Obama 51, McCain 44
(July: Obama 55, McCain 39)

Gregoire (D-i) 50, Rossi (R) 48
(July: Gregoire 49, Rossi 46)

POLL: Rasmussen Nevada (8/11)

Rasmussen Reports
8/11/08; 500 LV, 4.5%
Mode: IVR

McCain 48, Obama 45 (July: McCain 45, Obama 42)

POLL: Rasmussen Virginia (8/12)

Rasmussen Reports
8/12/08; 500 LV, 4.5%
Mode: IVR

McCain 48, Obama 47
(July: McCain 48, Obama 47)

Warner (D) 61, Gilmore (R) 35
(July: Warner 59, Gilmore 36)

Polling Registered vs. Likely Voters: 2004

As Pollster.com readers have no doubt noticed, there has been much discussion in the posts and the comments here about the merits of polling registered voters (RV) versus likely voters (LV). Mark and Charles have been debating this point in their most recent exchanges about whether it is better to include LV or RV results in the Pollster.com poll averages. Charles's last post on this topic raised the following questions:

"There is a valid empirical question still open. Do LV samples more accurately predict election outcomes than do RV samples?"

Ideally, I'd have time to go back over 30 or more years of polling to weigh in on this question. Instead, I thought I'd go back to 2004 and get a sense of how well RV versus LV samples predicted the final outcome. To do this, I used the results from the final national surveys conducted by eight major survey organizations. For each of these eight polls (nearly all of which were conducted during the last three days of October), I tracked down the Bush margin among both RVs and among LVs. The figure below demonstrates the difference in the Bush margin for the LV subset relative to the RV sample from the same survey.


For most polls, LV screens increased Bush's margin, including three surveys (Gallup, Pew, and Newsweek) where Bush did 4 points better among LVs than he did among RVs. But using an LV screen did not always help Bush. In three polls (CBS/New York Times, Los Angeles Times, and Fox News), his margin remained the same, and in the Time poll (which was conducted about a week earlier than the other surveys) Bush actually did 2 points worse among LVs.

Of course, this doesn't really tell us which method was more accurate in predicting the general election outcome, just which candidate benefited more from the LV screens. To answer which was more accurate, we can plot each poll's Bush margin among both RVs and LVs to see which came closest to the 2.4% margin that Bush won in the popular vote. This information is presented in the figure below, which includes a dot for each survey along with red lines indicating the actual Bush margin.


Presumably, the best place to be in this plot is where the red lines meet. That would mean that both your RV and LV margins came closest to predicting the eventual outcomes. But, if you are going to be closer to one line over the other, you'd rather be close to the vertical line than the horizontal line. This means that the polling organization's LV screen helped them improve their final prediction over just looking at RVs. If the opposite is true (an organization is closer to the horizontal line than they are to the vertical line), their LV screen actually reduced their predictive accuracy.

The CBS/New York Times poll predicted a 3-point Bush margin for both its RV and LV samples, meaning it was just 6/10ths of a point off regardless of whether it employed its LV screen. Four organizations (Pew, Gallup, ABC/Washington Post, and Time) increased the accuracy of their predictions by employing LV screens, coming closer to the vertical line than to the horizontal line. Gallup's LV screen appears to have been the most successful, since it brought them closest to the actual result (predicting a 2-point victory for Bush even though their RV sample showed a 2-point advantage for Kerry).

On average, the RV samples for these eight polls predicted a 0.875-point Bush advantage, while the LV samples predicted a 2.25-point advantage for Bush, remarkably close to the actual result. Of course, this is just one election, but it does appear as though likely voters did a better job of predicting the result in 2004 than registered voters. On the other hand, this analysis reinforces some other concerns about LV screens, the most important of which is that some LV screens created as much as a 4-point difference in an organization's predictions while in three cases LV screens produced no difference at all. It is also important to note that these are LV screens employed at the end of a campaign, not in the middle of the summer, when it is presumably more difficult to distinguish LVs. Ultimately, the debate over LV screens is an important one, and the 2008 campaign may very well provide the biggest challenge yet to pollsters trying to model likely voters.
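Using only the averages just cited, the accuracy comparison reduces to a few lines (2.4 is Bush's actual popular-vote margin):

```python
actual = 2.4                  # Bush's 2004 popular-vote margin, in points
rv_avg, lv_avg = 0.875, 2.25  # average predicted margins across eight polls

rv_error = abs(actual - rv_avg)   # 1.525 points off
lv_error = abs(actual - lv_avg)   # 0.150 points off
assert lv_error < rv_error        # LV samples came closer in 2004
```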

Taylor: What Motivates Voters?

Today's guest pollster contribution comes from Humphrey Taylor, who has served as chairman of The Harris Poll, a service of Harris Interactive, since 1994.

When I started work in market research, I spent the first month learning to interview people face to face, in central Scotland. It was a great experience. At the first house, a woman answered the door and, as I nervously explained that I wanted to interview her, she shut the door in my face. My supervisor, a cheerful, middle-aged woman, rang the bell again and, all smiles and self-confidence, easily completed the interview. The same thing happened at the next house, and my self-confidence hit rock bottom.

At the third house, to my enormous relief, I actually managed to complete the interview with an elderly Scottish lady. But as I thanked her, she said, with a twinkle in her eye, "Och, surely you don't believe all the things the folks tell you, do you?" This may have been one of the most valuable comments anyone has ever made to me about survey research, and I have remembered it many times over my working life.

The sad truth is that all too often we researchers naively accept what respondents tell us without questioning if it is really "true." The following exercise explains part of the problem. Ask someone which candidate or party they prefer and they will usually give you an answer (and, by the way, it will probably be true). But then ask "Why?" and the conversation will probably go something like this:

    "Because of his/her/their policies."
    "Which policies specifically?"
    "His... um... economic (or Iraq, health care, etc.) policies."
    "What are his economic policies?"

Push harder and you will probably find that this voter really doesn't know what the candidate's economic (or most other) policy proposals are. But this does not stop people from having a strong preference for one candidate over another, or from believing that their candidate would handle the issue mentioned better than the opponent.

One model of how voters choose candidates is that they are like juries. They listen to the candidates and carefully consider their policy proposals before deciding which way to vote. Unfortunately this theory is almost never true.

There are many reasons why it is so difficult to understand people's motives. One is that most people don't understand themselves and often rationalize their attitudes and behavior. Sometimes they surely deceive themselves and sometimes they knowingly bend the truth or tell outright lies. There is a growing body of literature that documents the unreliability of replies given to interviewers where there is a "socially desirable" answer. Large numbers of people lie in telephone and in-person surveys about whether they believe in God, go to church, give money to charity, clean their teeth regularly, drive over the speed limit or drink alcohol. Many people who do not vote claim that they do. And the number of people who say they voted for a sitting president tends to go up when he is very popular and down when his ratings fall.

Another problem is that people give inaccurate answers not because they are lying but because their memory is imperfect. And many people's honest predictions of their own future behavior are notoriously inaccurate.

When it comes to voting, there are many factors which influence voters' preferences, most of which they are often unaware of. While the candidates' positions on the issues (or voters' perceptions, which may be inaccurate, of their positions) are an important factor, there are several more powerful ones.

Voters often explain their votes based on the candidates' track record. Obviously these are important but voters' perceptions of politicians' track records vary greatly depending on their political and ideological views. Some voters think President Bush should have been impeached. Others think history will show him to have been a great president. While perceptions of politicians' record often influence voter preferences, the reverse is also true -- that voter preferences have a huge impact on perceptions of their track records.

So what other factors have a big impact on voting behavior? One is family, friends and people they work with. Most people vote the same way as most of the people they like and socialize with.

A candidate's voice, looks, style and rhetoric are all enormously important. Franklin Roosevelt, the only president to be elected four times, was the perfect candidate -- good looking, with a beautiful voice, a commanding presence and a wonderful way with words. But if you had asked people why they voted for him they would probably have referred to his policies or his track record and what they thought he would do.

One of the reasons Ronald Reagan was so successful as a politician was that he, and his pollster Dick Wirthlin, understood that "values" were more important than "issues." Reagan mastered the art of persuading people he shared their values, so that many people who did not support his positions on some key issues voted for him anyway. In addition, he and his aide Michael Deaver were masters of the photo-op, casting Reagan in great settings that made him look very presidential. But voters would not tell you this influenced their votes.

There are many other factors that influence candidate preferences and voting behavior that are rarely mentioned by voters. Political advertising has a huge impact, but few people believe, or tell surveyors, that they are influenced by advertising. The media voters are exposed to matter a lot. Those who read the editorials in the Wall Street Journal or The New York Times have their opinions shaped, or reinforced, by what they read. And those who watch Fox News get a very different world view than those who watch other television stations. As dictators and media moguls know well, the media's ability to shape public attitudes is very powerful.

So next time a voter tells you his vote is determined by the candidates' positions on the issues, take it with a large grain of salt.

POLL: Rasmussen Kansas (8/11)

Rasmussen Reports
8/11/08; 500 LV, 4.5%
Mode: IVR

McCain 55, Obama 41 (July: McCain 58, Obama 35)

Sen: Roberts (R-i) 56, Slattery (D) 37
(July: Roberts 61, Slattery 33)

POLL: Harris National (8/1-7)

Harris Interactive
8/1-7/08; 2,488 RV
Mode: Internet

Obama 47, McCain 38, Nader 3, Barr 2
(July: Obama 44, McCain 35, Nader 2, Barr 2)

How We Choose Polls to Plot: Part III


In the first two installments of this online dialogue, I asked a question we have heard from readers about why we choose the results for "likely voters" (LVs) over "registered voters" (RVs) when pollsters release both. Charles answered and explained our rationale for our "fixed rule" for these situations (this is the gist):

That rule for election horse races is "take the sample that is most likely to vote" as determined by the pollster that conducted the survey. If the pollster was content to just survey adults, then so be it. That was their call. If they were content with registered voters, again use that. But if they offer more than one result, use the one that is intended to best represent the electorate. That is likely voters, when available.

Despite my own doubts, I'm convinced by the rule for this reason: I can't come up with a better one. Yes, we could arbitrarily choose RVs over LVs until some specified date, but that would still leave us plotting LV numbers from pollsters that release only LV samples. And on which date do we suddenly start using the LV numbers? After the conventions? After October 1? What makes sense to me about our rule is that in almost all cases (see the prior posts for examples) it defers to the judgment of the pollster.

Several readers posed good questions in the comments on the last post. Let me tackle a few. Amit ("Systematic Error") asked about how likely voters are constructed and whether we might be able to plot results by "a family of LV screens (say, LV_soft, LV_medium, LV_hard)" and allow readers to judge the effect.

I wrote quite a bit back in 2004 about how likely voter screens are created, and a shorter version focusing on the Gallup model two weeks ago. One big obstacle to Amit's suggestion is that few pollsters provide enough information about how they model likely voters (and how that modeling changes over the course of the election cycle) to allow for such a categorization.

"Independent" raised a related issue:

Looking at the plot, it appears that Likely Voters show the highest variability as a function of time, while Registered Voters show the least. Is there some reason why LVs should be more volatile than RVs? If not, shouldn't one suspect that the higher variability of the LV votes is an artifact of the LV screening process?

The best explanation comes from a 2004 analysis (subs. req.) in Public Opinion Quarterly by Robert Erikson, Costas Panagopoulos and Christopher Wlezien. They found that the classic 7-question Gallup model "exaggerates" reported volatility in ways that are "not due to actual voter shifts in preference but rather to changes in the composition of Gallup's likely voter pool." I also summarized their findings in a blog post four years ago.

Finally, let me toss one new question back to Charles that many readers have raised in recent weeks. The two daily tracking surveys -- the Gallup Daily and the Rasmussen Reports automated survey -- contribute disproportionately to our national chart. For example, we have logged 51 national surveys since July 1, and more than half of those points on the chart (27) are either Gallup Daily or Rasmussen tracking surveys. Are we giving too much weight to the trackers? And what would the trend look like if we removed those surveys?

POLL: InsiderAdvantage Virginia (8/12)

InsiderAdvantage/Poll Position
8/12/08; 416 LV, 5%
Mode: IVR

McCain 43, Obama 43

Obama's Dog Days of Summer?

The first two weeks of August have not been good for Barack Obama. As we said last week, McCain's "Celebrity" ad--along with his campaign's subsequent attacks on Obama--blunted any momentum Obama may have gotten from his overseas trip and kept this thing close. Given that all of the internals (right direction/wrong track, generic congressional ballot and party ID) and the Illinois senator's huge "intensity gap" support an Obama/Democrat blowout, it is astounding that he is underperforming as much as he is. The electorate, of course, is still in flux, as a large segment of voters is still undecided (or switchable); it is clear, however, that Barack Obama has not come nearly far enough to close the sale on this election.

It is worth repeating what we said last week: the McCain attacks on Obama are working. And when you look at how long it took Obama's team to respond to the "Celebrity" spot (nearly two weeks) you can't help but be reminded of John Kerry in the summer of 2004. Voters don't just look at the candidates and their issue positions, they look at how the candidates run their campaigns and make decisions, and Obama has looked awfully soft in his response to this charge. Combine this with his tepid statements on the Russia-Georgia crisis and you have the makings of a legitimate campaign swoon. Plain and simple, the McCain team has been winning the earned media battle for the last two weeks.

This is a difficult election to classify because there is no incumbent president or vice president in the race, but it might be helpful to look at past elections to give us some guidance. While there are multiple ways to categorize elections, in every presidential election the two sides try to make the election hinge on some mix of referendum and personality (including the policies the candidate stands for). The winning campaign is the one that better succeeds in establishing its frame and making the case for it.

Some elections are more of a referendum and some are more about personality/issues. For example, 1980 and 1992 were clearly referendum elections. In both cases the electorate decided that things were bad and the alternative was acceptable (Reagan in '80 and Clinton in '92). In each case the referendum on the current administration worked for the challenger. In 2004 the direction of the country was poor but voters decided that the alternative (Kerry) was not acceptable. Kerry's referendum on Bush failed.

This year is clearly a referendum on Bush and the direction of the country and, as we have said before, if Obama is viewed as acceptable to a majority of voters he will win this election. Whichever side does a better job of framing the debate will win. Right now McCain is doing a good job of framing the debate as "this guy (Obama) isn't ready to lead." Obama's basic change thematic might be enough on its own because things are seen as so bad, but to improve his "referendum" position he needs to do a better job of tying Bush and McCain together.

So, at its core, this election is about Obama's ability to make this a referendum and him the acceptable alternative to the current course. Therefore, a McCain strategy to make Obama unacceptable is his only winning course of action. Any other strategy would be political malpractice. Contrast ads work. Anyone who says McCain has gone too negative too early has never been involved in a political campaign. In 2004, the Bush team started running attack ads against Kerry in March. Of course, that year there was a Democratic nominee much sooner but it shows that it makes sense to start defining your opponent in July and August.

Gaps Galore

According to a July Wall Street Journal poll there is both a "generation gap" and an "intensity gap" in the 2008 Presidential race. We have seen this in our own polling and in polling by other media outlets, as well. In this particular WSJ survey Obama leads McCain among 18-34 year olds by 24 points (55% to 31%). Among those 65 years of age and older McCain led Obama by 10 points (51% to 41%). There is also an enthusiasm or intensity gap between Obama's and McCain's vote with almost half (44%) of Obama voters saying they are enthusiastic about their candidate and only 14% of McCain voters saying the same. Inevitably, then, we have some questions that will be answered come November:

  1. What percentage of the 2008 electorate will be 18-29 year olds? If their raw vote totals are up but their share of the electorate remains the same then the impact is less. According to the VNS and NES exit polls, in 2000 and 2004 they represented approximately 17% of the vote. Yes, the raw vote total for 18-29 year olds increased significantly in 2004 but so did other age cohorts. If the 2008 youth vote share increases (to say 20%) and Obama improves upon the Kerry vote among this cohort (54%) then he will be tough to beat. If, however, the percentage of 18-29 year olds remains at about 17% and Obama does only marginally better than Kerry did with this group (let's say he wins that share of the electorate with around 56-58%) then I don't see the youth vote having as much of an impact. (An aside: according to Pollster.com contributor Charles Franklin, who uses more reliable Census CPS turnout data, the young did actually increase their share of the electorate, but not by an impactful margin. As he says, "Perhaps we will indeed see another rise, as we did in 2004. But unless something truly unprecedented occurs, no one can win on the young alone.")
  2. How much of the intensity/enthusiasm gap is due to Obama's overwhelming lead among 18-34 year olds? There is no doubt that the enthusiasm level among McCain's core vote needs to improve (and Obama's lead here is an important ingredient for driving likely voters), but I am not sure that the enthusiasm difference isn't being artificially inflated by the youth vote (a cohort that historically doesn't vote in overwhelming numbers). Again, time will tell.
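
The back-of-the-envelope arithmetic in question 1 can be made concrete. The sketch below is mine, not the author's; it assumes a strict two-way race (the Republican share is simply the remainder) and uses the 2004 baseline (17% share, 54% for Kerry) and the hypothetical 2008 surge scenario (20% share, 58% for Obama) quoted above:

```python
def cohort_margin(share, dem_pct):
    """Net margin (in points of the total electorate) a cohort contributes,
    assuming a two-way race so the GOP share is 100 - dem_pct."""
    return share * (dem_pct - (100 - dem_pct)) / 100

# 2004 baseline: 18-29 year olds were ~17% of voters and went ~54% for Kerry
base = cohort_margin(17, 54)    # ~1.4 points
# Surge scenario: share rises to 20% and Obama wins 58% of the cohort
surge = cohort_margin(20, 58)   # ~3.2 points
print(round(base, 2), round(surge, 2))
```

The roughly two extra national points in the surge scenario show why both the share and the support level have to move for the youth vote to be decisive.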

National Horserace Observations

As the below chart indicates, the race remains close. From a macro level, Obama was poised to blow this race open in mid- to late-June when most polls had him up by double-digits. Since that time we have seen the gap close. Also, take a look at the numbers in March when the Reverend Wright story broke. Yes, Obama was still engaged in a primary battle and Clinton voters were not likely in the fold yet, but clearly that news story was a staggering blow and it showed up in his head-to-head numbers with McCain.

[Chart: national horserace trend through August 13]

POLL: Pew National (7/31 - 8/10)

Pew Research Center
7/31-8/10/08; 2,414 RV, 2.5%
Mode: Live Telephone Interviews

Obama 46, McCain 43 (July: Obama 47, McCain 42)

POLL: Franklin & Marshall Pennsylvania (8/4-10)

Franklin & Marshall College
8/4-10/08; Likely Voters
Mode: Live Telephone Interviews

Obama 46, McCain 41

POLL: SurveyUSA North Carolina (8/9-11)

SurveyUSA
8/9-11/08; 655 LV, 3.9%
Mode: IVR

North Carolina
McCain 49, Obama 45
(July: McCain 50, Obama 45)

Sen: Dole (R-i) 46, Hagan (D) 41, Cole (L) 7
(July: Dole 54, Hagan 42)

Gov: Perdue (D) 47, McCrory (R) 44, Munger (L) 5
(July: Perdue 47, McCrory 46, Munger 3)

POLL: InsiderAdvantage Florida (8/11)

InsiderAdvantage/Poll Position
8/11/08; 418 RV, 5%
Mode: IVR

McCain 48, Obama 44, Barr 2

On Filled-in Questionnaires and the Clinton Pollsters

Topics: Filled-In Questionnaire , Geoff Garin , Hillary Clinton , Mark Penn

I want to add one thought to the chorus of commentary on Josh Green's Atlantic Monthly article on the Hillary Clinton campaign, based on a remarkable collection of email and memoranda he obtained from sources within the campaign. It concerns the first sentence in an April 25 email from newly installed pollster Geoff Garin to the Clinton high command:

Attached is the filled in questionnaire from the North Carolina survey.

Those ten words probably seem utterly mundane to the ordinary reader, even to the ordinary campaign consultant. Pollsters share results with their clients. It's a basic part of the job. Notice also that Garin sent his email at 7:25 a.m. on a Friday morning. The timing and content imply that he was sharing the most critical "top line" results of a tracking survey that had finished calling the night before.** Thus, this email shows us Garin passing along results as soon as he had them, for review by other decision makers. Further analysis and internal discussion no doubt followed.

What makes Garin's ordinary act so remarkable is that Mark Penn, the original Clinton pollster and "chief strategist," rarely delivered a "filled-in questionnaire" to the Clinton campaign's senior decision makers. I know this because I heard the story a few months ago from a Clinton staffer with first-hand knowledge of what Penn provided to the campaign (who agreed to share the story on condition of anonymity). My source said that Penn would routinely brief strategy sessions without providing the complete results of the poll in advance. Instead, he would present whatever results best made his case (as exemplified by the smattering of numbers that appear in the Penn memoranda that accompany Green's article).

Perhaps Hillary and Bill Clinton received the full data, but the senior staff and consultants did not. Amazingly, I am told, Penn also initially refused to share the full cross-tabular reports (the reams of tables, like this one, showing results to every question by every subgroup of interest), even though sharing them is standard practice among campaign pollsters. It was not until relatively late in the campaign, at the insistence of then-campaign manager Patti Solis Doyle, that Penn relented, sharing a hard copy of the cross-tabs on condition that Solis Doyle keep it locked up in a file cabinet in her office.

One can understand the temptation that a "chief strategist" might have to control the flow of data. If you are convinced you have the right strategy, and you make the final decision, why give others a tool to question your judgment?

The problem with that approach should be obvious. It poisons the environment within which functional campaigns privately hash out disagreements and reach consensus about strategy. The pollster's job in this process is to put the data on the table, to provide analysis and guidance about that data, but also to let other senior staffers examine and question it. When the pollster wears two hats -- pollster and "chief strategist" -- greater conflict, questioning of motives and campaign "dysfunction" are inevitable.

**One reason I'm confident that this email followed within hours after completion of calling is that one of the respondents later blogged about his experience (discussed here). The respondent reported having been called a night or two before Garin sent his email.

POLL: Gallup Open Ended National (8/7-10)

Gallup Poll
8/7-10/08; 903 RV, 4%
Mode: Live Telephone Interviews*

Obama 45, McCain 38, Barr 1, Nader 1, Clinton 1

* "The question, part of an Aug. 7-10 Gallup Poll, allowed respondents to name any candidate or political party, without prompting of specific names from Gallup interviewers. This is a different approach than Gallup takes in its Daily tracking polling and USA Today/Gallup polls, in which voters are asked whether they would vote for Barack Obama or John McCain for president if the election were held today."

POLL: Hays Alaska (8/6-7)

Hays Research Group (D)
8/6-7/08; 400 Adults, 4.9%
Mode: Live Telephone Interviews

Obama 45, McCain 40, Nader 2

POLL: SurveyUSA Kentucky (8/9-11)

SurveyUSA
8/9-11/08; 636 LV, 3.9%
Mode: IVR

McCain 55, Obama 37 (June: McCain 53, Obama 41)
Sen: McConnell (R-i) 52, Lunsford (D) 40 (June: McConnell 50, Lunsford 46)

POLL: Mellman Georgia (8/6-10)

The Mellman Group (D)/
Jim Martin
8/6-10/08; 600 LV, 4%
Mode: Live Telephone Interviews

Sen: Chambliss (R-i) 42, Martin (D) 36, Buckley (L) 3

POLL: Quinnipiac New Jersey (8/4-10)

Quinnipiac University
8/4-10/08; 1,468 LV, 2.6%
Mode: Live Telephone Interviews

New Jersey
Obama 51, McCain 41
Sen: Lautenberg (D-i) 48, Zimmer (R) 41

How We Choose Polls to Plot: Part II

Topics: ABC/Washington Post , Charts , Likely Voters , Pollster.com

Mark started this conversation with "How We Choose Polls to Plot: Part I," asking how we decide to handle likely voter vs. registered voter vs. adult samples in our horse race estimates. This was especially driven home by the Washington Post/ABC poll reporting quite different results for its adult, registered voter (RV) and likely voter (LV) subsamples, but it is a good problem in general. So let's review the bidding.

The first rule for Pollster is that we don't cherry pick. We make every effort to include every poll, even if it sometimes hurts. So even when we see a poll way out of line with other polls and what we "know" has to be true, we keep that poll in our data and in our trend estimates.   There are two reasons. First, once you start cherry picking you never know when to stop. Second, we designed our trend estimator to be pretty resistant to the effect of any one poll (though when there are few polls this can't always be true.)  That rule has served us pretty well. Whatever else may be wrong with Pollster, we are never guilty of including just the polls (or pollsters) we like.

But what do we do when one poll gives more than one answer? The ABC/WP poll is a great example, with results for all three subgroups: adults, registered  voters and likely voters. Which to use? And what to do that remains consistent with our prime directive: never cherry pick?

Part of the answer is to have a rule for inclusion and stick to it stubbornly. (I hear Mark sighing that you can do too much of this stubborn thing.)  But again the ABC/WP example is a good one. Their RV result was more in line with other recent polls while their LV result showed the race a good deal closer.  If we didn't have a firm, fixed, rule we'd be sorely tempted to take the result that was "right" because it agreed with other data. This would build in a bias in our data that would underestimate the actual variation in polling because we'd systematically pick results closer to other polls. Even worse would be picking the number that was "right" because it agreed with our personal political preferences.  But that problem doesn't arise so long as we have a fixed rule for what populations to include in cases of multiple results. Which is what we have.

That rule for election horse races is "take the sample that is most likely to vote" as determined by the pollster that conducted the survey. If the pollster was content to just survey adults, then so be it. That was their call. If they were content with registered voters, again use that. But if they offer more than one result, use the one that is intended to best represent the electorate. That is likely voters, when available.
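That decision rule is simple enough to express in a few lines. This is a hypothetical helper, not Pollster's actual code; the population codes are my shorthand, and the illustrative numbers are the July ABC/WP results (Obama 50-42 among registered voters, 49-46 among likely voters) discussed in Part I:

```python
# Preference order: likely voters over registered voters over all adults.
PREFERENCE = ["LV", "RV", "A"]

def result_to_plot(results):
    """Pick the most restrictive population a poll reports.

    results: dict mapping population code ("LV", "RV", "A") to a
    (dem, rep) tuple of vote percentages."""
    for pop in PREFERENCE:
        if pop in results:
            return pop, results[pop]
    raise ValueError("no recognized population in poll")

# The July ABC/WP poll reported two populations; the rule keeps the LV numbers.
abc_wp = {"RV": (50, 42), "LV": (49, 46)}
print(result_to_plot(abc_wp))
```

A poll reporting only registered voters falls through to its RV numbers, so the rule never discards a poll outright; it only chooses among multiple results from the same survey.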

We know there are a variety of problems with likely voter screens, evidence that who is a likely voter can change over the campaign and the problem of new voters. But the pollster "solves" these problems to the best of their professional judgement when they design the sample and when they calculate results. If a pollster doesn't "believe" their LV results, then it is a strange professional judgement to report them anyway. If they think that RV results "better" represent the electorate than their LV results, they need to reconsider why they are defining LV as they do. Our decision rule says "trust the pollster" to make the best call their professional skills can make. It might not be the one we would make, but that's why the pollster is getting the big bucks. And our rule puts responsibility squarely on the pollster's shoulders as well, which is where it should be. (By the way, calling the pollster and asking which result they think is best is both impractical for every poll, AND suffers from the same problems we would introduce if we chose which results to use.)

But still, doesn't this ignore data? Yes it does. Back in the old days, I included multiple results from any poll that reported more than one vote estimate. If a pollster gave adult, RV and LV results, then that poll appeared three times in the data, once for each population. But as I worked with these data, I decided that was a mistake. First, it was confusing because there would be multiple results for a poll -- three dots instead of one in the graph. That also would give more influence to pollsters who reported for more than one population compared to those pollsters who only reported LV or RV. Finally, not that many polls report more than one number. Some pollsters sometimes do, but the vast majority decide what population to represent and then report that result. End of story. So by trying to include multiple populations from a single poll, we were letting a small minority of cases create considerable confusion with little gain.

The one gain that IS possible is the ability to compare, within a single survey, the effect of likelihood of voting. The ABC/WP poll is a very positive example of this. By giving us all three results, they let us see what the effect of their turnout model is on the vote estimate. Those who only report LV results hide from us what the consequences might be of making the LV screen a bit looser or a bit tighter. So despite our decision rule, I applaud the Post/ABC folks for providing more data. That can never be bad. But so few pollsters do it that we can't exploit such comparisons in our trend data. There just aren't enough cases.

What would be ideal is to compare adult, RV and LV subsamples by every pollster, then gauge the effect of each group on the vote.  But since few do this, we end up having to compare LV samples by one pollster with RV samples by another and adult samples by others.  That gets us some idea of the effect of sample selection, but it also confuses the differences between survey organizations with differences in the likely voter screens. Still, it is the best we can do with the data we have.

So let's take a look at what difference the sample makes.  The chart below shows the trend estimate using all the polls, LV, RV and adult samples separately. We currently have 109 LV samples, 136 RV and 37 adult.    There are some visible differences. The RV (blue) trend is generally more favorable to Obama than is the LV (red) trend, though they mostly agreed in June-July. But the differences are not large. All three sub-population trend estimates fall within the 68% confidence interval around the overall trend estimate (gray line.)  There is good reason to think that likely voters are usually a bit more Republican than are registered or adult samples. The data are consistent with that, amounting to differences that are large enough to notice, if not to statistically distinguish with confidence.  Perhaps more useful is to notice the scatter of points and how blue and red points intermingle. While there are some differences on average, the spread of both RV and LV samples (and adult) is pretty large. The differences in samples make detectable differences, but the points do not belong to different regions of the plot. They largely overlap and we shouldn't exaggerate their differences.


There is a valid empirical question still open. Do LV samples more accurately predict election outcomes than do RV samples? And when in the election cycle does that benefit kick in, if ever? That is a good question that research might answer. The answer might lead me to change my decision rule for which results to include. But if RV should outperform LV samples, then the polling community has a lot of explaining to do about why they use LV samples at all.  Until LV samples are proven worse than RV (or adult) then I'll stick to the fixed, firm, stubbornly clung to, rule we have. And if we should ever change, I'll want to stick stubbornly to that one. The worst thing we could do is to have to make up our minds every day about which results to include and which not based on which results we "like."
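The leave-one-out question that frames this discussion ("What happens if you leave out 'x'?") can be checked directly. A minimal sketch, with made-up poll numbers and a crude moving-window average standing in for Pollster's actual local-regression trend estimator:

```python
def trend(polls, window=7):
    """Crude stand-in for a trend estimate: for each polling day, average
    the Obama-minus-McCain margin of all polls within `window` days.
    (The real estimator is a local regression; this only illustrates the
    leave-one-pollster-out robustness check.)"""
    days = sorted({day for day, _, _ in polls})
    est = {}
    for d in days:
        vals = [m for day, _, m in polls if abs(day - d) <= window]
        est[d] = sum(vals) / len(vals)
    return est

# (day, pollster, margin) -- numbers invented for illustration only
polls = [(1, "A", 4), (2, "B", 6), (3, "A", 3), (4, "C", 5),
         (5, "B", 7), (6, "A", 2), (8, "C", 6), (9, "B", 5)]

full = trend(polls)
without_b = trend([p for p in polls if p[1] != "B"])

# Day-by-day difference shows how much pollster B moves the estimate
for d in full:
    if d in without_b:
        print(d, round(full[d] - without_b[d], 2))
```

If dropping one pollster shifts the trend by more than a fraction of a point, that pollster is carrying real weight in the estimate; a robust estimator keeps these differences small.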

[Update: In Part III of this thread, Mark Blumenthal responds to some of the comments below and poses a new question].

Age, Turnout and Votes


It's all about who votes. Those who do, win. Those who don't, lose. The chronic losers in American politics are the young, who famously turn out at low rates election after election.

This year, those young people are of great interest. Allegedly they will be mobilized in huge numbers, and allegedly they will vote strongly for Barack Obama. The latest available Gallup weekly estimate (July 28-Aug 3) shows Obama leading 56%-35% among 18-29 year olds, while McCain leads 46%-37% among those 65 and older.

But will the young vote? And how much difference does it make when they don't?

The chart above shows the turnout rate by age for 2000 and 2004, based on the Census Bureau's "Current Population Survey (CPS)", the largest and best source of detailed data on turnout. The most striking result is just how low turnout is among those under 30 compared to older voters. No age group 18-29 managed to reach 45% turnout in 2000, and only two made it in 2004. Not one single age group over 30 fell so low in either year. Despite a little noise for each group, the pattern is a strong rise in participation rates with every year of age, at least until the late 60s, after which there is some decline. Yet even among those 85 and over the turnout rate remains above 55%, more than 10 points higher than among their 20-something grandchildren and great-grandchildren.

The second striking feature of the chart is that the young can be mobilized a bit, under the right circumstances. Turnout among those under 30 rose significantly in 2004 compared to 2000. While turnout went up among all age groups, the relative gain was clearly greater among those under 30. While mobilizing the young is difficult, these data show that it is possible to get significant gains, at least relative to past turnout.

Even so, the "highly mobilized" 20-somethings of 2004 still fell behind the turnout of their 30-something older siblings. A supposed Obama-surge among the young may still not catch up with those even a bit older.

The irony is that the young are a large share of the population, but not of the electorate. The chart below shows the population by age in 2004 (it shifts a little by 2008 but not enough to change the story.)


The "boomers" in their 40s and 50s remain the largest group, but for our purposes there are two important points. Those under 30 make up a substantial share of the population, while those 60 and over represent a substantially smaller share at each age.

In 2004 those 18-29 were 21.8% of the population, while those 58-69 were just 13.2%. Add in the 11.5% who are 70 and up, and you get 24.7% "geezers" 58 and over vs. 21.8% "kids". But the sly old geezers know a thing or two about voting. Shift from share of the population to share of the electorate and the advantage shifts to the old: 18-29 year olds were just 16% of the electorate in 2004, while those 58-69 were an almost equal 15.9%. Add in the 70+ group at 13.4% and the geezers win hands down: 29.3% of voters vs. 16% for the young. That difference is the power of high turnout. It goes a long way toward explaining why Social Security is the third rail of American politics.

High turnout buys "over-representation". Divide share of voters by share of the population and you get proportionate representation. A ratio of 1.0 means a group votes proportionate to its size. Values over 1 are overrepresented groups. In 2004, for example, 55 year olds were represented 20% more than their population would suggest, with a 1.2 score. The youngest voters, 18 year olds, had an abysmal representation rate of 0.49 in 2000, less than half their share of the population.
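Using the 2004 figures quoted above, the representation ratio is a one-line calculation. This sketch simply reproduces the pattern from the numbers as given:

```python
# 2004 figures quoted above (CPS-based): share of the population vs.
# share of the electorate, in percent.
groups = {
    "18-29": {"population": 21.8, "voters": 16.0},
    "58-69": {"population": 13.2, "voters": 15.9},
    "70+":   {"population": 11.5, "voters": 13.4},
}

# Ratio of 1.0 = proportional representation; over 1 = over-represented.
ratios = {name: g["voters"] / g["population"] for name, g in groups.items()}

for name in sorted(ratios):
    print(f"{name}: representation ratio {ratios[name]:.2f}")
```

The young come out around 0.73 (under-represented) while both older groups land near 1.2 (over-represented), matching the turnout story above.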


While turnout rises with age, it is not until we hit 40 or so that we reach "fair" representation (1.0). After that, every age group is over-represented in the electorate. Less than 40, and every age group is under-represented. (Two small exceptions-- so sue me.)

So what are the implications? If you gave me a choice of being wildly popular with the young or moderately popular with the old, I'd take the old any day. They are far more reliable in voting, and while their population numbers are small they more than make up for it in over-representation thanks to turnout differences.

There is much conversation about "youth" turnout this year. Perhaps we will indeed see another rise, as we did in 2004. But unless something truly unprecedented occurs, no one can win on the young alone. The gap in turnout is simply too large.

But is age destiny? If there were constant differences in partisan preference by age, then perhaps so. But there aren't. Despite being supposedly "old and set in their ways", those 60 and up shifted their votes more than any other age group between 2000 and 2004. In 2000, the 60+ vote went to Gore by a 4 point margin. In 2004, however, those 60+ went for Bush by 8 points. That net 12 point swing, multiplied by their over-representation means a lot.
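A rough way to see how much that swing matters is to multiply it by the group's share of the electorate. This sketch uses the 29.3% figure quoted earlier for those 58 and over as an approximation of the 60+ share; it is my calculation, not the author's:

```python
# 2004 electorate share for the 58-and-over group, used here as a rough
# proxy for the 60+ share discussed in the text.
share_60_plus = 0.293

# Net swing: Gore +4 in 2000 to Bush +8 in 2004 = 12 points.
swing = 12

impact = share_60_plus * swing
print(round(impact, 1))  # roughly 3.5 national margin points
```

A movement worth about three and a half points of the national margin is more than enough to decide most presidential elections.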


The 20-somethings also shifted, from +2 for Gore to +9 for Kerry. Coupled with their surge in turnout, the younger voters kept Kerry close in 2004 when he was losing in every other age category. But it wasn't enough to win.

The Obama campaign may be right that they can gain votes by mobilizing the young. But the old play a bigger role in elections, and they are not immovable in their vote preferences. Indeed, they make the youngest group seem a bit static by comparison. It is not the candidate's age that will be the key to winning the votes of those 60 and over. Issues and personality will play a large role. Any candidate would be well advised to recognize that the dynamic swings among older voters, coupled with their substantial over-representation, make them a potent force for electoral change.

Cross-posted at PoliticalArithmetik.com

How We Choose Polls to Plot: Part I

Topics: ABC/Washington Post , Charts , Likely Voters , Pollster.com

Since adding the maps and upgrading this site, we have received a number of good questions about how the charts and trend lines work and why we choose to include the poll results that we do. I want to answer a few of those questions this week before we all get swept up in the conventions and the final stretch of the fall campaign.

Our approach to charting and aggregating poll data follows the lead and philosophy of our co-founder Charles Franklin. And while I am tempted to describe that approach as well entrenched, the reality is that in many ways it has evolved and will continue to evolve.

Since launching this site nearly two years ago, Franklin and I have continued to discuss (and occasionally debate) some of the technical issues offline. Most of the time we agree, but I tend to propose ways to change or tinker with our approach, and Franklin usually succeeds in convincing me to stay the course.

In considering some of the issues that came up more recently, I thought it might be helpful to take this dialogue online. Hopefully, we can both answer some of the questions readers have asked and also seek further input on those issues we have not completely resolved.

So with that introduction out of the way, here is the first question for Franklin:

Over the last few weeks, in commenting on the "likely voter" subgroups reported by Gallup and other national pollsters, I have essentially recommended that we focus on the more stable population of registered voters (RV) now, and leave the "likely voter" (LV) models for October (see especially here, here, here and here). Yet as many readers have noticed, when national surveys publish numbers for both likely and registered voters, our practice has been to use the "likely voter" numbers for our charts and tables.


Like the other sites that aggregate polling results from different sources, we face the challenge of how to best choose among many polls that are not strictly comparable to each other. Even if we examine data from one pollster at a time, we will still see methodological changes: Many national pollsters will shift at some point from reporting results from registered voters to "likely voters." Some will shift from one likely voter "model" to another, or will tinker with the mechanics of their model, often without providing any explanation or notice of the change. And no two pollsters are exactly alike in terms of either the mechanics they use or the timing of the changes they make.

As such, two principles guide our practices for selecting results for the charts and tables: First, we want to defer to each pollster's judgement about the most appropriate methodology (be it sample, questionnaire design or the most appropriate method to select the probable electorate). Second, we want a simple, objective set of rules to follow in deciding which numbers to plot on the chart.

In that spirit, when pollsters release results for more than one population of potential voters, our rule is to use the most restrictive. So we give preference to results among "likely" voters over registered voters and to registered voters over results among all adults. In almost all cases, the rule is consistent with the underlying philosophy: The numbers for the more restrictive populations are usually the ones that the pollsters themselves (or their media clients) choose to emphasize.

But there have been some notable exceptions recently, of which, last month's ABC News/Washington Post poll provided the most glaring example. ABC News put out a report and filled-in questionnaire with two sets of results: They showed Barack Obama leading John McCain by eight points (50% to 42%) among registered voters, but by only three points (49% to 46%) among likely voters. Following our standard procedure, we included the likely voter numbers in our chart.

However, ABC News emphasized the eight-point registered voter numbers in the headline of their online story ("Obama Leads McCain by Eight But Doubts Loom"). Within the text, they first reported the registered voter numbers and then used the likely voter results to argue that "turnout makes a difference." The 8-point lead also made the headline of the Washington Post story, but they did not report the likely voter results at all, either in the text of the story or in their version of the filled-in questionnaire.

So in this case, the news organizations that sponsored the poll clearly indicated that the RV numbers deserved greater emphasis, yet we followed our rule and included the LV numbers in our charts.

Charles, in cases like these, should we find a way to make an exception? And why not just report on "registered" voters until after the conventions?

Update: Franklin answers in Part II.

POLL: Rasmussen Iowa (8/7)

Rasmussen Reports
8/7/08; 500 LV, 4.5%
Mode: IVR

Obama 49, McCain 44
(July: Obama 51, McCain 41)*
Sen: Harkin (D-i) 60, Reed (R) 36
(July: Harkin 55, Reed 37)

* Numbers now reflect those with leaners

POLL: Rasmussen Oregon (8/7)

Rasmussen Reports
8/7/08; 500 LV, 4.5%
Mode: IVR

Obama 52, McCain 42
(July: Obama 49, McCain 40)
Sen: Smith (R-i) 50, Merkley (D) 44
(July: Smith 46, Merkley 46)

POLL: PPP Colorado (8/5-7)

Public Policy Polling (D)
8/5-7/08; 933 LV, 3.2%
Mode: IVR

Obama 48, McCain 44
(July: Obama 47, McCain 43)
Sen: Udall (D) 47, Schaffer (R) 41
(July: Udall 47, Schaffer 38)

POLL: SurveyUSA Virginia (8/8-10)

SurveyUSA
8/8-10/08; 655 LV, 3.9%
Mode: IVR

McCain 48, Obama 47
(June: Obama 49, McCain 47)
Sen: Warner (D) 58, Gilmore (R) 34, Parker (G) 3, Redpath (L) 2

Wolfson's Iowa Hypothetical

Topics: Barack Obama , Exit Polls , Hillary Clinton , Iowa , Jon Cohen

The punditry is crackling this morning over remarks by Howard Wolfson, Hillary Clinton's campaign communications director, about what might have happened had John Edwards been forced out of the presidential race last year: "I believe we would have won Iowa, and Clinton today would therefore have been the nominee," Wolfson told ABC News.

Washington Post polling director Jon Cohen did the logical thing and checked relevant survey data from Iowa:

It is a pure hypothetical, of course, and the entire dynamics of the contest would have been different without Edwards. But the public data do not bolster the notion that Clinton would have won.

In the networks' Iowa entrance poll, 43 percent of those who went to a caucus to support Edwards said Obama was their second choice; far fewer, 24 percent, said they would support Clinton if their top choice did not garner enough votes at that location. The remainder of Edwards' backers said they would be uncommitted under such a scenario, offered no second choice or said they preferred someone else.

Nor was Clinton the obvious second choice among Edwards supporters in Post-ABC pre-election Iowa caucus polls in July, November or December. In July, for their alternate pick, Iowans split 32 percent for Obama to 30 percent for Clinton. In November, Obama led 43 to 26 percent as backup pick, and he had a slight 37 to 30 percent edge in December.

Nate Silver echoes that last point and notes that, looking at the trend lines in late January, "Barack Obama appeared to get the lion's share of Edwards supporters once Edwards dropped from the race."

POLL: Economist/YouGov National (8/4-6)

The Economist/YouGov
8/4-6/08; 1,000 Adults, 4%
Mode: Internet

Obama 42, McCain 39 (7/29: Obama 44, McCain 37)