Pollster.com

July 6, 2008 - July 12, 2008

 

Dog Days "Outliers"


Justin Wolfers (subs. only) trusts political markets more than polls, Bob Erikson bets he's wrong, John Sides has all the links.

Jon Cohen sees a gap between high interest in the election and low information about the candidates.

Frank Newport finds that the more "important" religion is to the voter, the more likely they are to favor McCain.

Kathy Frankovic warns against putting too much faith in "early and incomplete" exit poll results.

Nate Silver sees a very strong correlation between the national popular vote and his electoral college projections.

Julie Rehmeyer of ScienceNews profiles Nate Silver.

Ronald Hansen, of the Arizona Republic, reviews the challenges facing pollsters.

A pollster-friendly anti-"push poll" bill passes the Louisiana House and Senate.

Inside Higher Ed looks at the perils of private polls conducted by academic survey centers.

Jon Chait notices that 1% of Americans in the Pew poll think Barack Obama is Jewish (via Smith, Sullivan).

And finally, a little off-topic, kiwitobes posts an amazing video map depicting the growth of Walmart since 1962 (via FlowingData).


POLL: Newsweek National


Newsweek/PSRA
7/9-11/08 - 1,037 RV, 3%
(story, results)

National
Obama 44, McCain 41


POLL: Post-Dispatch Missouri


St Louis Post-Dispatch/KMOV-TV/
Research 2000
7/7-10/08 - 800 LV, 3.5%
(story, results)

Missouri
Obama 48, McCain 43


Moore: USA Today's Cluster Analysis of Voters - How Useful?

Topics: 2008 , David Moore , UNH Survey Center , USA Today

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

On Thursday, July 10, USA Today published an analysis of voter intentions that produced "six types of voters" who the paper claims "will decide the presidential election." The types included: true believers (30 percent of the electorate), up for grabs (18 percent), decided but dissatisfied (16 percent), fired up and favorable (14 percent), firmly decided (12 percent), and skeptical and downbeat (12 percent).* As Mark Blumenthal indicated, this is a fascinating analysis, but how useful is it for understanding the election?

The six types of voters were produced using cluster analysis. This statistical technique is similar to factor analysis, except that it classifies respondents into distinctive groups, while factor analysis classifies various opinions into distinctive groups. Without going into the details of how the technique works, I think it's sufficient to note that the analyst has a great deal of control over the types of groups produced by cluster analysis. The analyst chooses the variables that are used to classify respondents, and also determines how many groups the cluster analysis produces. The fact that the analyst chose six clusters, instead of any other number between two and ten, was purely a subjective decision.
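To make the mechanics concrete, here is a minimal sketch of that kind of analysis. It uses simulated data and k-means clustering purely for illustration (the article does not say which algorithm USA Today/Gallup used); the point is simply where the analyst's discretion enters -- choosing which variables go in, and fixing the number of clusters in advance.

# A minimal sketch (not USA Today's actual procedure) of clustering survey
# respondents. All data below are simulated.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 1000  # hypothetical respondents

# Hypothetical survey items, roughly matching the variables described above
responses = np.column_stack([
    rng.integers(1, 5, n),  # enthusiasm about the election (1-4)
    rng.integers(1, 3, n),  # will the election make a difference to you (1-2)
    rng.integers(1, 5, n),  # favorability of candidate A (1-4)
    rng.integers(1, 5, n),  # favorability of candidate B (1-4)
    rng.integers(1, 4, n),  # certainty of vote choice (1-3)
])

# Put the variables on a common scale before clustering
X = StandardScaler().fit_transform(responses)

# The number of clusters is the analyst's call; six is no more "correct"
# than four or eight
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

# Share of respondents falling into each group
labels, counts = np.unique(kmeans.labels_, return_counts=True)
for label, count in zip(labels, counts):
    print(f"cluster {label}: {count / n:.1%} of sample")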

What is most surprising about the analysis is that it is issue free. The stereotypical complaint by political observers about the news media is that reporters focus on the horserace almost to the exclusion of any real substantive issues. This USA Today analysis fits that criticism to a T. I believe there is a widespread consensus among political observers these days that the war in Iraq (and national security more generally), the economy, and healthcare are among the most salient issues dividing the two major presidential candidates. Yet, there is nothing in the newspaper's analysis that groups voters according to their views on any of these major issues. Nor is there any mention of party identification, which often acts as a catch-all variable for a host of issues.

The variables chosen to classify respondents were 1) respondents' enthusiasm about the election, 2) whether respondents think the election would make a difference to them, 3) respondents' opinions (favorable or unfavorable) of each of the two major candidates, and 4) how certain respondents were to vote for the candidate of their choice. As these variables make clear, the classification scheme focuses almost exclusively on election turnout factors, with no mention of issues. Even the favorability ratings can be considered turnout variables in this context, because voters who are negative about both candidates are least likely to vote, while those who are positive about both candidates are most likely to vote. This is not to say that a mostly horserace-driven analysis, as this one is, doesn't provide some insights into the electorate. There are many different angles from which to analyze the electorate, and this is certainly a valid one. To me, it's just not as interesting as one that is more political in context.

Like most political junkies, I find intriguing almost any statistical analysis of polling data that goes beyond the simple marginals, and USA Today should be congratulated for making the effort. Still, I'd like to see a little more politics thrown into the mix - even if only to take these six types and describe their party identification, as well as their responses to other public policy questions. But mostly I would like to see a completely new cluster analysis that included policy attitudes as the defining variables for the groups. This is not to say that issues alone will determine the election. But I don't think we can get a good read on the electorate, and which types of voters will ultimately "decide the election," if we ignore issues altogether.


* The percentages exceed 100 percent because of rounding error.


POLL: Rasmussen Washington


Rasmussen Reports
7/9/08 - 500 LV, 4.5%

Washington
Obama 48, McCain 39
w/ leaners: Obama 51, McCain 43
Gov: Gregoire (D-i) 52, Rossi (R) 44


Omero: The Right Words for Your Numbers

Topics: 2008 , Margie Omero , Pollster.com , Rasmussen

[Margie Omero is President of Momentum Analysis, a Democratic polling firm based in Washington, DC.]

Wednesday, Politico told the story of a single poll number getting mistakenly pushed around through blogs and talking points. Republican talkers from Rep. Putnam to Matt Drudge to Freedom Watch announced a "single-digit" congressional approval rating. Their proof was a Rasmussen poll that asked respondents to rate Congress using a 4-point job scale: excellent, good, fair, or poor. Typically, one would call this a "job rating" and combine excellent/good to be "positive" and fair/poor to be "negative." In this particular poll, Congress did receive a nine percent (9%) positive rating.

What Congress did not receive in that poll was a single-digit "approval rating." That is a different type of standard question altogether. An approval question usually reads "do you approve or disapprove of the job Congress [or whomever else] is doing?" While some use a 4-way approval rating, collapsed into two categories, most have only two categories (besides an "unsure" option). And all use the word "approval" as opposed to an entirely different set of words. Even the most cursory scan of public results demonstrates that a collapsed 4-point job rating scale will typically yield a smaller positive rating than will either a collapsed 4-way or a 2-way approval question.
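A quick illustration of the distinction: the collapse below reproduces the 9% positive figure mentioned above, but the individual category numbers and the approval distribution are made up for illustration, not taken from any actual poll.

# Hypothetical response distributions showing why a collapsed 4-point job
# rating and a 2-way approval question produce different "positive" numbers
# and should not be swapped in shorthand.
job_rating = {"excellent": 0.02, "good": 0.07, "fair": 0.38, "poor": 0.49, "unsure": 0.04}
approval = {"approve": 0.18, "disapprove": 0.74, "unsure": 0.08}

positive_job = job_rating["excellent"] + job_rating["good"]  # collapsed "positive"
negative_job = job_rating["fair"] + job_rating["poor"]       # collapsed "negative"

print(f"collapsed job rating: {positive_job:.0%} positive / {negative_job:.0%} negative")
print(f"approval question:    {approval['approve']:.0%} approve / {approval['disapprove']:.0%} disapprove")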

It's not that one type of question is better than the other. But the shorthands that have emerged for particular questions mean something to pollsters and poll-watchers. To avoid confusion, it's best to just make sure you're comparing apples to apples, and using the clearest terms available.

Just as a reminder, here are some other common wording specifics to be on the lookout for when comparing across polls. (If you haven't already, also check out the pollster.com FAQ.)

Party ID vs. party registration: Definitely not the same thing. Identification is self-reported, and subject to national trends, local press, and respondent whims. Party registration requires some interaction with the state, and varies massively from state to state. In many states, a voter declares party affiliation when registering to vote. In some states, like Ohio, "registration" refers to which party's primary ballot was recently pulled, rather than requiring a voter to declare their party in advance. Other states, like Missouri, have no party registration at all. In national polls, "party" means identification. But in state or Congressional district polls, the pollster should specify.

The "Re-elect:" Many pollsters ask a "re-elect" question about an incumbent, which includes only the incumbent and no challenger names. An example, "Would you vote to re-elect Mystery Pollster, would you consider someone else, or would you vote to replace Mystery Pollster?" The question wording varies (such as the SC public poll here), and some pollsters use a 2-way question (re-elect or not). Many just look at the response for re-elect and ignore the rest. But the "replace" can also be a useful figure, as we note in our own poll for Congressional candidate Victoria Wulsin (OH-2), which shows the Republican incumbent's "replace" as high as her re-elect.

Leaners: Typically, respondents who are initially undecided on the vote question are asked a follow-up, something like, "Well, if the election were held today and you had to decide, toward which candidate do you lean?" Net support for each candidate would then include leaners. But it doesn't have to. Leaners can be included in the undecided. A good polling memo or story should simply specify.

Public disclosure of calling methodology and weighting schemes is of course important, particularly with the closely followed national media polls. But that information is not always available, or easy for the average poll reader to decipher. In many cases, paying attention to wording differences and asking pollsters for their question language can minimize reporting gaffes.


POLL: Rasmussen Wisconsin


Rasmussen Reports
7/8/08 - 500 LV, 4.5%

Wisconsin
Obama 50, McCain 39


POLL: Pew National


Pew Research Center/PSRA
6/18-29/08; 1,574 RV

National
Obama 48, McCain 40


POLL: Rasmussen Illinois, North Dakota


Rasmussen Reports
7/8/08 - 500 LV each, 4.5%

Illinois
Obama 50, McCain 37
Sen: Durbin (D-i) 63, Sauerberg (R) 28

North Dakota
Obama 43, McCain 43
Gov: Hoeven (R-i) 67, Mathern (D) 27


Measuring Nader and Barr

Topics: Barack Obama , Bob Barr , David Moore , John McCain , Measurement , National Journal , Ralph Nader

My Nationaljournal.com column, on the challenge posed to pollsters by third party candidates like Ralph Nader and Bob Barr, is now online.

The topic is timely since yesterday we also put up a new chart showing the results of national poll questions that include Nader and Barr as choices. One issue posed in the column was which form of the vote choice question is a better measure of support for the third party candidates: Those that include Nader and Barr as choices along with McCain and Obama or those that offer only the two major candidates (but typically record the preferences of those who volunteer a choice for another candidate).

I explore the reasons for skepticism toward the four-way vote question in the column. We have historical evidence that summer polls are poor predictors of November support for third party candidates (short version: the summer polls overstate their ultimate support). A more difficult question is which type of question -- the two-way or four-way choice -- provides a better measure of true preferences right now.

My hunch is that the reality of Nader and Barr's current support lies somewhere in between the share who volunteer their support on questions offering only Obama and McCain as choices and the share on questions that offer all four. I believe that many voters are telling pollsters they support Barr or Nader now, especially when offered the four-way choice, because they are not entirely sold on Obama or McCain and would rather grab for the "independent"** as a way to give the interviewer a satisfactory answer (rather than trying to think through their final decision right there on the phone -- a tendency survey methodologists sometimes call "satisficing").

This mushiness of support is something that David Moore touched on in his post here yesterday. You can also see evidence of the conflict in the fascinating cluster analysis of last month's USA Today/Gallup poll written up today.

**Or Libertarian or Green Party candidate. Incidentally, at least one commenter has wondered whether pollsters typically identify the candidate party on these vote questions, and the answer is almost always yes. You can usually get the actual language of the poll questions by clicking through the links in the tables below our charts or in the always invaluable PollingReport.


Rasmussen's State Level Party ID Weighting

Topics: Party Weighting

I received an email reply from Scott Rasmussen to the questions I posted on Monday about their party identification numbers for June and about their procedures for weighting by party at the state level.

First, I had noticed a very slight difference (0.13%) in the Democratic party identification advantage they reported at the national level for June (9.37% vs 9.5%). Rasmussen confirms my hunch that filtering for "likely voters" narrows the advantage ever so slightly.

Second, he answered a far more important question -- how they set targets for their party ID weights at the state level:

The question of states is more challenging and one we continue to work. Our initial targets are set by playing off the national numbers. We note changes from 2004 and/or 2006 and make comparable changes to the state targets from our polling in those years. Broadly speaking, if the number of Democrats are up 5 nationally compared to an earlier period, then the state numbers would be up five too. Due to demographic differences, not every state moves completely in synch with the national numbers, but they are close in our targeting formula.

Then, we monitor the state by state results as we conduct state polls and are in the process of making some modest mid-year adjustments now (in most states, we have at least 3,000 state specific political interviews to draw from, plus our national political tracking data, plus our baseline numbers from the other poll). Realistically, though, the current adjustments are very small. To this point, the national shifts appear to provide a good indicator. As we head to the fall, we will poll every competitive state at least weekly and do larger samples. This will enable our dynamic weighting process to draw upon up to 10,000 state-specific interviews or more to set the targets for key states.
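As a rough illustration of the shift-based targeting described above (the numbers below are entirely made up, and this is not Rasmussen's actual formula), the national change in party identification since a baseline period is applied uniformly to each state's earlier targets:

# Hypothetical baseline and current national party ID figures, in percent
baseline_national = {"Dem": 37.0, "Rep": 37.0, "Ind": 26.0}
current_national = {"Dem": 41.5, "Rep": 32.0, "Ind": 26.5}

# Hypothetical baseline state targets from earlier polling
baseline_states = {
    "Missouri":   {"Dem": 36.0, "Rep": 38.0, "Ind": 26.0},
    "Washington": {"Dem": 40.0, "Rep": 33.0, "Ind": 27.0},
}

# National shift by party, applied uniformly to each state's baseline
shift = {party: current_national[party] - baseline_national[party]
         for party in baseline_national}

state_targets = {
    state: {party: round(values[party] + shift[party], 1) for party in values}
    for state, values in baseline_states.items()
}

print(state_targets)
# In practice the targets would then be adjusted with state-specific
# interviews, as the quoted description notes.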

Thoughts anyone?


POLL: Rasmussen New Jersey


Rasmussen Reports
7/7/08 - 500 LV, 4.5%

New Jersey
Obama 44, McCain 39
Sen: Lautenberg (D-i) 49, Zimmer (R) 36


POLL: PPP Missouri


Public Policy Polling (D)
7/2-5/08 - 723 LV, 3.6%

Missouri
McCain 47, Obama 44


Moore: Gallup's "Swing Voters" - A Major Underestimate?

Topics: David Moore , Gallup , NBC/Wall Street Journal , Newsweek , Time , UNH Survey Center , USA Today

Today's Guest Pollster article comes from David W. Moore, a senior fellow with the Carsey Institute at the University of New Hampshire. He is a former vice president and senior editor with the Gallup Poll, where he worked for 13 years, and is the founder and former director of the UNH Survey Center. He manages the blogsite, Skeptical Pollster.com.

In a recent post, Gallup's Jeff Jones reports that for the first time this election cycle, Gallup has measured the number of "swing voters" in the electorate. That's certainly a step in the right direction, but one might well wonder why it took so long for pollsters to admit that there is a substantial proportion of the public not committed to a candidate.

According to the post, Gallup finds that only 6 percent of "likely voters" are undecided as to which presidential candidate they will support. That number defies credulity. With five months to go in the campaign, neither major candidate an incumbent, no vice presidential candidates chosen, and no debates yet held between the presumptive nominees, Gallup wants us to believe that 94 percent of voters have already made up their minds? Yes, indeed! Not only that, CNN says 99 percent are decided. Time says 92 percent. Newsweek claims 87 percent. USA Today with Gallup says 97 percent. ABC/Washington Post - 96 percent. The NBC/Wall Street Journal poll says 90 percent. (For sources, see The Polling Report.)

With all these major media polls (not to mention numerous other polls not affiliated with the major news media organizations) in rough agreement that about nine in ten voters or more have made up their minds, any challenge to this conventional wisdom may seem futile. But here's something to consider. In a Sept. 3-5, 1996 Gallup poll, 40 percent of voters said they were undecided about whom they would support in the November presidential election between Robert Dole and Bill Clinton. How could such a large number be undecided in that poll - taken after the major party conventions and with just two months to go before the election, in which there was a popular incumbent candidate - and yet so few voters admit they are undecided in the current polls?

The answer, of course, lies in the way the voting question is asked. The standard vote choice question, which dates to 1935 when George Gallup first asked about presidential preferences, is deliberately designed to obfuscate the number of undecided voters. Gallup knew that the press wouldn't be interested in results that showed perhaps a majority of voters undecided months before an election, so he asked respondents which candidates they would vote for "today."1 And for the past seven-plus decades pollsters have blindly followed that same format. In the September 1996 poll, however, Gallup abandoned the standard vote choice question, and instead first asked voters if they had yet made their decision as to which candidate they would support. In that context, 39 percent said they hadn't, and an additional one percent were unsure.

In the current Gallup report, mentioned at the beginning of the article, Gallup retains the forced-choice standard format, but follows up by asking respondents if they could change their minds before election day. Those who said they could - 9 percent who initially said they would vote for McCain if the election were held "today," and 8 percent who initially favored Obama - were added to the six percent who initially said they were undecided, producing a 23 percent group Gallup characterizes as "swing voters."
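The arithmetic behind that figure, restated as a quick check:

# Gallup's "swing voter" group as described above: the initially undecided
# plus those on either side who say they could still change their minds.
undecided = 6             # percent initially undecided
mccain_could_change = 9   # percent for McCain "today" who could change
obama_could_change = 8    # percent for Obama "today" who could change

swing = undecided + mccain_could_change + obama_could_change
print(f"{swing}% of likely voters classified as swing voters")  # 23%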

Thus, according to Gallup, about a quarter of the electorate is up for grabs. I'm skeptical about that number - I suspect the percentage is much higher, perhaps even greater than the 40 percent measured by Gallup late in the 1996 campaign. But at least it's a recognition that there is a substantial number of voters who are not yet committed to a candidate.

Still, I would argue that most of the swing voters are not people who "could" change their minds before election day, as Gallup asserts, but rather people who have not yet even decided whom to support. Gallup (and any other national poll), of course, could test that proposition. All they need to do is replicate the question that Gallup asked in its September 3-5, 1996 poll: Ask voters up front if they have made up their minds whom they will support in November.2 My prediction - much more than a quarter of the electorate is up for grabs in 2008.


1 Three times in 1935, Gallup asked if respondents would vote for Roosevelt "today." The first time he pitted Roosevelt against anyone was in January 1936, when he asked: "For which candidate would you vote today - Franklin Roosevelt or the Republican candidate?" See George H. Gallup, The Gallup Poll: Public Opinion 1935-1971, Volume One (New York: Random House, 1972), pp. 1-10.

2 The exact wording is, "Have you made up your mind yet about who you will vote for in the presidential election this fall, or are you still deciding?"


POLL: Rasmussen Missouri


Rasmussen Reports
7/7/08; 500 LV, 4.5%

Missouri
McCain 47, Obama 42
Gov: Nixon (D) > 45, Reps < 40


POLL: Zogby 50 State-Map


Zogby Interactive (Internet Panel)
6/11-30/08
Likely Voters Nationwide

Electoral Votes
Obama 273, McCain 160, Too Close to Call 105


A Dog of a Poll Story

Topics: AP/Yahoo Poll , Associated Press , Barack Obama , John McCain

Yesterday, the Associated Press wrote up results from an AP/Yahoo poll showing that John McCain fares better against Barack Obama among pet owners than among Americans who do not own a pet. Desmoinesdem, the blogger who seems to have a knack for getting called by internal campaign polls, called it "the worst analysis of a poll I've seen in a while." I tend to agree.

The gist is that a survey of 1,750 adults conducted over the Internet using the Knowledge Networks panel found that John McCain leads Barack Obama by five points among pet owners (42% to 37%), while Obama leads by a 14-point margin (48% to 34%) among those who do not own a pet.

They also report results showing Obama doing slightly better among cat owners than dog owners, although those differences do not appear to be statistically significant -- something the AP story does not mention.

Go to the end of the story and you get a hint of something highly pertinent:

The population breakdown of who has pets and who doesn't also may be a factor.

For example, the poll found 47 percent of whites own dogs, compared with just 24 percent of blacks. Whites tend to favor McCain, while blacks overwhelmingly favor Obama.

Some 64 percent of dog owners are married, slightly higher than the overall population. The poll found 47 percent of married people own dogs, compared with 39 percent of non-married people. Married people tend to favor McCain.

Married people also "tend to" be over the age of 30. As Gallup tells us, Obama leads by a whopping 24 points among those age 18-29, while the race is much closer among those over 30.

And what about pet ownership by party affiliation? Or income? As Desmoinesdem points out, these potentially confounding variables may also be at work. And that strong possibility reminds us of the lesson that all pollsters are supposed to learn in their first statistics class: Correlation is not causation. Pet owners may prefer McCain for reasons that have nothing to do with whether the candidates own pets.

But that lesson is largely lost in this piece, because the lead of the story -- and who knows how many local television news pieces run as a result -- strongly implies just the opposite (emphasis added):

If the presidential election goes to the dogs, John McCain is looking like best in show.

From George Washington's foxhound "Drunkard" to George W. Bush's terriers "Barney" and "Miss Beazley," pets are a longtime presidential tradition for which the presumed Republican nominee seems well prepared, with more than a dozen.

The apparent Democratic nominee Barack Obama, on the other hand, doesn't have a pet at home.

The pet-owning public seems to have noticed the difference.

Really? Do we have any evidence that Americans have "noticed" the difference? How many know that McCain owns "more than a dozen" pets while the Obamas own none? Which candidate do Americans consider more pet friendly? This survey is silent on that score.

Even without probing deeper into the subject, a fairly simple regression analysis would tell us whether pet ownership shows a significant correlation with vote preference even after controlling for things like party, race, age, income and marital status. And while we would not expect an AP story to expound on multiple regression analysis, they could certainly tell us, in so many words, whether pet ownership still predicts a preference for McCain even after those factors are taken into account.
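For readers curious what such a check might look like, here is a minimal sketch using simulated data and hypothetical variable names (this is not the AP/Yahoo microdata): a logistic regression testing whether pet ownership remains a significant predictor of McCain support once party, race, age, income and marital status are in the model.

# A minimal sketch of the control-variable check described above,
# fit on simulated data with hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1750

df = pd.DataFrame({
    "pet_owner": rng.integers(0, 2, n),
    "party":     rng.choice(["Dem", "Rep", "Ind"], n),
    "race":      rng.choice(["white", "black", "other"], n, p=[0.70, 0.12, 0.18]),
    "age":       rng.integers(18, 90, n),
    "income":    rng.normal(55, 20, n),   # thousands of dollars
    "married":   rng.integers(0, 2, n),
})

# Simulated vote choice driven mostly by party and age, not pets
logit_p = (1.5 * (df["party"] == "Rep") - 1.5 * (df["party"] == "Dem")
           + 0.01 * (df["age"] - 45))
df["vote_mccain"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "vote_mccain ~ pet_owner + C(party) + C(race) + age + income + married",
    data=df,
).fit(disp=False)

print(model.summary())  # check the pet_owner coefficient and its p-value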

That might be a start, but why let a straightforward analysis kill an irresistibly cute lead?

[Typos corrected]


Rasmussen's Unweighted Party ID

Topics: Barack Obama , John McCain , Party Weighting , Rasmussen

Yesterday, Rasmussen Reports did something a little unusual: They published what their party identification results would have been for every release in June had they not weighted by party. The results, according to their report,

show the expected statistical noise and results that bounce around generally within the margin of sampling error. The percentage of Republicans was within the margin of error from the full month sample on 29 of 30 days. For Democrats, there were four days where the daily results departed from the full month average by more than the theoretical daily margin of sampling error. But, in all cases, even those variances were quite modest.

Even these modest variations on a daily basis produced some significant differences in terms of the gap between Democrats and Republicans. On June 11, the gap was just 3.01 percentage points. On June 18, it was 16.4 percentage points.

To understand how this might impact a tracking poll, look at the final column in the table which shows a three-day rolling average of the gap between the parties. On days when the Democratic advantage is a bit larger, we would have showed a bigger lead for Obama. When it’s smaller, we would have shown McCain gaining ground. Pundits and bloggers would have tried to explain the bouncing by whatever the candidates had said or done in recent days even though most voters are not that closely attuned to the daily rhetoric.

In reality, of course, nothing happened.
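To see how that works mechanically, here is a small sketch with made-up daily figures (not Rasmussen's data): the underlying Democratic advantage never moves, but day-to-day sampling noise still makes the three-day rolling average of the gap wander.

# Illustration of daily sampling noise feeding a three-day rolling average
# of the Dem-Rep identification gap. All numbers are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2008-06-01", periods=30, freq="D")

true_gap = 9.5                                       # stable "true" Dem advantage, in points
daily_gap = true_gap + rng.normal(0, 3, len(days))   # daily samples bounce around it

gaps = pd.Series(daily_gap, index=days, name="dem_minus_rep")
rolling = gaps.rolling(window=3).mean()

print(pd.DataFrame({"daily_gap": gaps.round(1), "three_day_avg": rolling.round(1)}).head(10))
# The underlying value never changes, but even the smoothed series wanders
# enough to look like campaign "news."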

The write-up touches on Rasmussen's use of "dynamic weighting," though it leaves me slightly confused on one minor point. Two years ago, when Rasmussen announced their procedure, they explained that weighting would be based on targets from "results obtained during the previous three months" of interviews. They routinely publish their monthly party ID numbers. The results from June mentioned in the above report, showing Democrats with "a 9.37% advantage over the GOP," are slightly different from the 9.5% advantage in their latest "Summary of Party Affiliation" table. Obviously, the difference is tiny -- perhaps one number is based on "likely" voters and the other on adults or registered voters?

One question all of this raises -- also asked by reader jsh1120 over the weekend -- is about the procedure Rasmussen uses for weighting their state level polls: Is it the same? And if so, how many interviews have they done over the last three months in each state as a basis for their party targets?

Update: Scott Rasmussen emails with an explanation of how they arrive at party weighting targets at the state level.


 
