Pollster.com


Why Is Rasmussen So Different?

Topics: Automated polls, House Effects, IVR Polls, job approval, Measurement, Rasmussen

Hardly a week goes by in which I do not receive at least one email like the following:

Although I really appreciate you continually adding this "outlier" poll for your aggregated data, I do wonder why Rasmussen polling numbers are ALWAYS significantly lower and different than every other poll when measuring the President's job approval rating (with the exception of Zogby's internet poll)? How do Rasmussen pollsters explain this phenomenon and, more importantly, what is your explanation for this statistically significant ongoing discrepancy between Rasmussen and pretty much every other poll out there?

We have addressed variants of this question many times, but since it is easily the most frequently asked via email, it is probably worth summarizing what we've learned in one place.

Let me start with this reader's premise. Are Rasmussen's job approval ratings of President Obama typically lower than those of "every other poll?" The chart that follows, produced by our colleague Charles Franklin, shows the relative "house effects" on the approval percentage for the organizations that routinely release national polls. Rasmussen's Obama job approval ratings (third from the bottom) do tend to be lower than most other polls, but they are not the lowest.

[Chart: Estimated house effects on the Obama approval percentage, by pollster]

Before reviewing the reasons for the difference, I want to emphasize something the chart does not tell us. The line that corresponds with the zero value is NOT a measure of "truth" or an indicator of accuracy. The numeric value plotted on the chart represents the average distance from an adjusted version of our standard trend line (it sets the median house effect to zero, producing a line that is usually within a percentage point of our standard trend line). Since that trend line is essentially the average of the results from all pollsters, the numbers represent deviations from average. Calculate house effects using a different set of pollsters, and the zero line would likely shift.
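To make the computation concrete, here is a minimal sketch, in Python, of how a median-centered house effect might be calculated. This is not Charles Franklin's actual estimator: the trend line below is just a crude rolling average of all pooled polls, the column names are assumptions, and the numbers in the example are made up purely for illustration.

```python
# Minimal sketch of a median-centered "house effect" calculation.
# Assumes a pandas DataFrame with columns: 'pollster', 'date', 'approve'.
import pandas as pd

def house_effects(polls, window=15):
    """Average deviation of each pollster from a common trend,
    re-centered so the median house effect is zero."""
    df = polls.sort_values("date").reset_index(drop=True)

    # Crude stand-in for the trend line: a centered rolling mean
    # over neighboring polls, all pollsters pooled together.
    df["trend"] = df["approve"].rolling(window, center=True, min_periods=1).mean()
    df["resid"] = df["approve"] - df["trend"]

    # Each pollster's average distance from the trend...
    deviation = df.groupby("pollster")["resid"].mean()

    # ...re-centered so the median house effect is zero, as described above.
    return deviation - deviation.median()

# Made-up numbers, for illustration only:
polls = pd.DataFrame({
    "pollster": ["Gallup", "Rasmussen", "CNN", "Gallup", "Rasmussen", "CNN"],
    "date": pd.to_datetime(["2009-11-01", "2009-11-01", "2009-11-02",
                            "2009-11-08", "2009-11-08", "2009-11-09"]),
    "approve": [52, 47, 54, 51, 46, 53],
})
print(house_effects(polls))  # e.g. CNN +2.0, Gallup 0.0, Rasmussen -5.0
```

Swap in a different set of pollsters and the median (and therefore the zero line) moves, which is the point made above.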

A related point: Readers tend to notice the Rasmussen house effect because their daily tracking polls represent a large percentage of the points plotted on our job approval chart. For the daily tracking polls released by Rasmussen and Gallup Daily, we plot the value of every non-overlapping release (every third day). As of last week, Gallup Daily and Rasmussen together represent almost half (49%) of the points plotted on our charts (each accounts for roughly 24%). As such, their polls tend to have greater influence on our trend line than polls from organizations that field less often (see more discussion by Charles Franklin, Mike McDonald and me on the consequences of the greater influence of the daily trackers).

So why are the Rasmussen results different? Here are the three possible answers:

1) Likely voters - Of the twenty or so pollsters that routinely report national presidential job approval ratings, only Rasmussen, Zogby and Democracy Corps routinely report results for a population of "likely voters." Of the pollsters in the chart above, PPP, Quinnipiac University, Fox News/Opinion Dynamics and Diageo/Hotline report results for the population of registered voters. All the rest sample all adults. Not surprisingly, most of the organizations near the bottom of the house effect chart -- those showing lower than average job approval percentages for Obama -- report on either likely or registered voters, not adults.

Why does that matter? As Scott Rasmussen explained two weeks ago, samples of likely voters are less likely to include young adults and minority voters, who are more supportive of President Obama.

2) Different Question - Rasmussen also asks a different job approval question. Most pollsters offer just two answer categories: "Do you approve or disapprove of the way Barack Obama is handling his job as president?" Rasmussen's question prompts for four: "How would you rate the job Barack Obama has been doing as President... do you strongly approve, somewhat approve, somewhat disapprove, or strongly disapprove of the job he's been doing?"

Scott Rasmussen has long asserted that the additional "somewhat" approve or disapprove options coax some respondents to provide an answer that might otherwise end up in the "don't know" category. In an experiment conducted last week and released yesterday, Rasmussen provides support for that argument. They administered three separate surveys of 800 "likely voters," each involving a different version of the Obama job approval rating: (1) the traditional two-category, approve-or-disapprove choice, (2) the standard Rasmussen four-category version and (3) a variant used by Zogby and Harris that asks whether the president is doing an excellent, good, fair or poor job. The table below collapses the results into two categories: excellent and good combine to represent "approve," while fair and poor combine to represent "disapprove."

[Table: Rasmussen question-wording experiment, results collapsed to approve/disapprove/don't know]

The four-category Rasmussen version shows a smaller "don't know" (1% vs. 4%) and a much bigger disapprove percentage (52% vs. 46%) compared to the standard two-category question. The approve percentage is only three points lower on the Rasmussen version (47%) than on the traditional question (50%). As Rasmussen writes, the differences are "consistent with years of observations that Rasmussen Reports polling consistently shows a higher level of disapproval for the President than other polls" (make of this what you will, but three years ago, Rasmussen argued that the four-category format explained a bigger "approve" percentage for President Bush).
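For readers who want the collapsing step spelled out, here is a minimal sketch in Python. The category-to-bucket mapping follows the description above (excellent and good count as approve; fair and poor as disapprove); the strongly/somewhat splits in the example are hypothetical, chosen only so that the collapsed totals match the figures quoted above, and are not Rasmussen's actual results.

```python
# Fold a multi-category job-rating question down to
# approve / disapprove / don't know, as described in the post.
APPROVE = {"strongly approve", "somewhat approve", "excellent", "good"}
DISAPPROVE = {"strongly disapprove", "somewhat disapprove", "fair", "poor"}

def collapse(results):
    """Collapse a dict of category percentages into two buckets plus don't know."""
    out = {"approve": 0.0, "disapprove": 0.0, "don't know": 0.0}
    for category, pct in results.items():
        key = category.lower()
        if key in APPROVE:
            out["approve"] += pct
        elif key in DISAPPROVE:
            out["disapprove"] += pct
        else:
            out["don't know"] += pct
    return out

# Hypothetical four-category splits (the splits are invented; only the collapsed
# totals are chosen to match the 47 / 52 / 1 figures discussed above):
print(collapse({"strongly approve": 27, "somewhat approve": 20,
                "somewhat disapprove": 14, "strongly disapprove": 38,
                "not sure": 1}))
# -> {'approve': 47.0, 'disapprove': 52.0, "don't know": 1.0}
```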

We can see that Rasmussen does in fact report a consistently higher disapproval percentage for President Obama by examining Charles Franklin's chart of house effects for the disapprove category. Here the distinction between Rasmussen, Harris and Zogby -- the three pollsters that ask something other than the traditional two-category approval question -- is more pronounced.

[Chart: Estimated house effects on the Obama disapproval percentage, by pollster]

The Rasmussen experiment shows an even bigger discrepancy between the approve percentage on the two-category question (50%) and the much lower percentage obtained by combining excellent and good (38%). This result is similar to what Chicago Tribune pollster Nick Panagakis found in a similar experiment conducted many years ago (as described in a post last year).

Variation in the don't know category also helps explain the house effects for many of the other pollsters. The table below shows average job approval ratings for President Obama from each pollster over the course of 2009 (through November 19). It shows that smaller don't know percentages tend to translate into larger disapproval percentages. With live interviewers and similar questions, the differences are usually explained by variations in interviewer procedures and training. Interviewers who push harder for an answer when the respondent is initially uncertain obtain results with smaller percentages in the don't know column.

[Table: 2009 average approve, disapprove and don't-know percentages on Obama job approval, by pollster]
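The mechanics behind that pattern are straightforward: the approve, disapprove and don't know shares must sum to roughly 100, so if the approve number barely moves when initially uncertain respondents are pushed for an answer, most of the shrinking don't-know share reappears in the disapprove column. A toy illustration with entirely hypothetical numbers:

```python
# Hypothetical results from the same electorate under lighter vs. firmer probing.
gentle = {"approve": 50, "disapprove": 41, "dont_know": 9}   # interviewers accept "not sure"
firm   = {"approve": 51, "disapprove": 47, "dont_know": 2}   # interviewers push for an answer

dk_change = firm["dont_know"] - gentle["dont_know"]          # -7 points of don't know
dis_change = firm["disapprove"] - gentle["disapprove"]       # +6 points of disapprove
print(f"don't know change: {dk_change:+d}, disapprove change: {dis_change:+d}")
```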

3) The Automated Methodology - Much of the speculation about the differences involving Rasmussen and other automated pollsters centers on the automated mode itself (often referred to by the acronym IVR, for interactive voice response). Tom Jensen of PPP, a firm that also interviews with an automated method, offered one such theory earlier this year:

[P]eople are just more willing to say they don't like a politician to us than they are to a live interviewer because they don't feel any social pressure to be nice. That's resulted in us, Rasmussen, and Survey USA showing poorer approval numbers than most for a variety of politicians.

Other commentators offer a different theory, neatly summarized recently by John Sides, who speculates that since automated polls "generate lower response rates" than those using live interviewers, automated poll samples may "[skew] towards the kind of politically engaged citizens who are more likely to think and act as partisan[s] or ideologues," even after weighting to correct demographic imbalances.
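For readers unfamiliar with what "weighting to correct demographic imbalances" involves, here is a minimal post-stratification sketch using a single, assumed demographic variable (age group) and invented population targets. The point is that this kind of weighting forces the sample's demographic mix to match the population, but it cannot repair a skew toward highly engaged, more partisan respondents within each demographic cell, which is exactly the bias Sides describes.

```python
# Minimal post-stratification sketch: weight each respondent by the ratio of
# the population share to the sample share for his or her demographic cell.
from collections import Counter

def poststratify(sample_cells, population_shares):
    """Return one weight per respondent so weighted cell shares match the population."""
    n = len(sample_cells)
    sample_shares = {cell: count / n for cell, count in Counter(sample_cells).items()}
    return [population_shares[cell] / sample_shares[cell] for cell in sample_cells]

# Invented example: young adults are under-represented in the raw sample.
sample = ["18-29"] * 10 + ["30-64"] * 60 + ["65+"] * 30        # 10% / 60% / 30%
population = {"18-29": 0.22, "30-64": 0.55, "65+": 0.23}       # assumed targets
weights = poststratify(sample, population)
print(round(weights[0], 2), round(weights[10], 2), round(weights[-1], 2))
# -> 2.2 0.92 0.77  (young respondents up-weighted; the weights say nothing
#    about whether those young respondents are unusually politically engaged)
```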

A lack of data makes evaluating this theory very difficult. Few pollsters routinely release response rate data (and even then, technical differences in how those rates are computed make comparisons across modes challenging). And, as far as I know, no one has attempted a randomized controlled experiment to test Jensen's "social pressure" theory as applied to job approval ratings.

But that said, it is intriguing that the bottom five pollsters on Franklin's chart of estimated house effects on the approval rating all collect their data using surveys administered without live interviewers: Rasmussen and PPP use the automated telephone methodology, while Harris, Zogby and YouGov/Polimetrix survey over the Internet (using non-probability panel samples). Of course, with the exception of YouGov/Polimetrix, these firms also either interview likely or registered voters, use a different question than other pollsters, or both.

As such, it is next to impossible to disentangle these three competing explanations for why the Rasmussen polls produce a lower than average job approval score for President Obama, although we can make the strongest case for the first two.

P.S.: For further reading, we have posted on the differences between Rasmussen and other pollsters in slightly different contexts here, here and here and on my old MysteryPollster blog here, here and here. Also be sure to read Scott Rasmussen's answer last week to my question about how they select likely voters. Finally, Charles Franklin posted side-by-side charts showing the Obama job approval house effects for each pollster last week; he has posted similar charts of house effects on the 2008 horse race polls here, here, here and here.

 

Comments

Rasmussen's results on the generic congressional ballot also tend to be more favorable to the GOP than those of virtually any other pollster.

/polls/us/10-us-house-genballot.html

This finding (and others, below) suggests to me that Rasmussen draws more Republican-leaning samples than other pollsters do, which affects both Obama's approval rating and the generic ballot (question wording on presidential approval cannot affect the generic ballot).

Pollster.com's current party ID average for registered and likely voters has the Democrats up 6 over the Republicans, and the margin would be even wider without the many FOX polls showing a small Dem-GOP difference.

/polls/us/party-id-rl.html

Rasmussen's likely-voter party ID average is now Dem +3.

http://www.rasmussenreports.com/public_content/politics/mood_of_america/partisan_trends

____________________

011121:

What I find damning about the Rasmussen data is that if you look at the Obama approval numbers you see a clearly bimodal distribution in the disapprove numbers. The vast majority of pollsters are in one clump, and then in a separate group is Rasmussen. While individual pollsters have points that show up now and again in the Rasmussen range, Ras is ALWAYS about 5 points higher than the pack on disapproval.

At the same time we don't see a bimodal distribution with Rasmussen's data as compared to everyone else's with regard to approval numbers, nor with other measures that you might suspect would correlate with approval/disapproval.

That makes their disapproval numbers seem quite fishy to me, personally.

FYI - I consistently throw out all internet pollsters, as I find their methodology and results to be at best suspect and usually unconscionable. Hence the Rasmussen polling sticks out a lot more when you take out the Harris and Zogby trash.

____________________

StatyPolly:

Hey 011121,

Funny how you make a comment without even reading the piece. IT EXPLAINS AND MAKES A PRETTY GOOD CASE TOO, why Ras' disapprove is about 5 points higher than most.

Kinda helps to read the article before you disagree with the premise.

____________________

poughies:

I'm a big fan of judging pollsters on their past results.

For Rasmussen, they missed how many states in the 08 election?

3 (if you count Ohio as a tie going to Obama).

Throughout the race, Rasmussen's polls remained more McCain than the rest of the polls for the most part.... but they were also the least volatile as well... And they nearly called the final result exactly 52-46 vs. 53-46 in actuality.

And does anyone remember 2004, when Rasmussen's polls were consistently showing a close race in early September when other polls showed Bush with a huge lead? Then they showed Bush up late, and their final polls nailed pretty much everything.

Anyone remember this ad against Gallup for being pro-Bush? http://www.usatoday.com/news/politicselections/nation/president/2004-09-28-gallup-defense_x.htm

And now its house effect is pro-Obama. Calls of bias really don't get us anywhere.

My guess is that the different polls will begin to converge as the election cycle nears. Best to use the polling aggregate as displayed on this site.

Right now, I really think this type of debate only stimulates the political junkies...

____________________

GARY WAGNER:

The inclusion of only "likely voters" is good enough explanation for me. Inclusion of all adults has always skewed toward democrats.

What interests me more are the reasons for the house effect of the liberal news organizations (ABC, CNN, NYT, WP, CBS). What do they do differently in their polls that always pushes Obama's approval numbers up and pushes his disapproval down?

____________________

Mark Blumenthal:

@Gary:

Look closely at the last table in the post. CNN/ORC has an average DK of 3%; ABC/Post is 4%. On the other extreme, CBS/NYT and Pew have average DK percentages in the 12-13% range. Both CNN and ABC show slightly higher approval percentages and much higher disapproval percentages (also evident in the house effects table).

Then re-read the paragraph just above the last table. Most of the difference has to do with how hard their respective interviewers push initially uncertain respondents for an answer.

____________________

011121:

Statypoly-
I wasn't really disagreeing with the piece, I was offering an observation that to my mind indicates a serious problem. I did in fact read the piece (as well as the earlier pieces on Ras by Mark). I have great respect for both Blumenthal and Franklin, but I am still allowed to have a different perspective on the matter.

____________________

Wong:

@011121
I find your argument of bimodal distribution compelling,... and damning.

____________________

CUWriter:

Spot on analysis Mark. Rasmussen is really not that different than Gallup or anyone else's numbers when it comes to approval. The LV model pretty much says it all; Gallup was at 49% today and Ras was 47%. Perfectly normal when the former is polling all adults and the latter is polling LVs.

It's disapproval where the difference has always been and I think it has more to do with IVR and the "somewhat/strong" model they have in place. I don't understand the objection to that (though I do understand the objection to Excellent/Good/Fair/Poor) especially since it is considered standard fare across the board to push leaners in a horserace.

____________________

IdahoMulato:

So did they also show similarly low numbers for Bush in previous polls? Or is this phenomenon a recent occurrence? Somebody answer me really quick. Otherwise, I'm unable to accept these analyses.

____________________

Wait.

If someone asks me how X is doing, and gives me the choices "Excellent, Good, Fair, or Poor", I do NOT think "fair" and "disapprove" are equivalent in any way!

It's like saying, does he far-exceed your expectations, exceed your expectations, meet expectations, or fail to meet expectations ... on AVERAGE "meet expectations" is the choice which should be made (otherwise your expectations are just too low/high).

"Fair" means that he's doing his job, but not excelling in any ways, whereas "good" means that he's excelling in some ways (but not enough to be called excellent).

Seems silly to want all our politicians to be better than "fair" at all times. Fair is fair. That's what it means. It's like wanting all our kindergartners to be above average; it's just not gonna happen!

____________________

TruthHurts:

Rasmussen is a Conservative. His work is contracted by Conservatives. It's touted by Conservative media. In other words, he is a pollster who "leans Republican". What the article attempts to do is to give us the formula(s) which may explain HOW he gets to his numbers. The article emphasizes different data sampling, questions etc. The concern is that Rasmussen KNOWS that his goal is to "lean Republican". The FORMULA is therefore inconsequential (the "ends" will always justify the "means")...He's also very smart to know that election day polling is what most people "rate" pollsters on. My take is that Rasmussen "leans Republican" until the very last week of polling. That's when we see polling movement closer to objectivity. The Bush 2004 election is the perfect example of this. Scott Rasmussen is one of the very best pollsters. He's also well-compensated by the GOP to "lean Republican". His election-day polling is adjusted to reflect his best call. In that manner, he placates the GOP until it counts.

____________________

Travis:

@Tomdibble

I agree that "fair" is incorrectly categorized as a rating of disapproval.

The word "fair" is literally synonymous with "favorable," "satisfactory," "pretty good," "fine," among other terms. Thus, it's odd to equate it with a negative rating.

In fact, there's a stronger argument, based on its synonyms, for counting it as a positive rating. At a minimum, it should be separated out as an "average" rating.

But, whatever is done, it shouldn't be lumped with "poor" under the presumption that it's a negative rating.

Does anyone know why "fair" is traditionally interpreted as a disapproval rating? It seems like a cursory (and illogical) attempt to establish a semblance of balance (i.e., 2 categories for "approve" and 2 categories for "disapprove").

____________________

Field Marshal:

TruthHurts,

Give me a break. You have consumed too much of the liberal kool-aid and need to wise up. Ras is the most accurate pollster out there. He not only predicted the prez election most accurately, he picked almost all the senate races in 2008.

And no, he did not make an election day adjustment. He had it 54-48 for several weeks before the election. The notion that they push republicans is a myth.

____________________

Centrist_Dem:

Rasmussen undermines his credibility by flip-flopping about the result of his methods. The above piece mentions the problem in passing, and it is very serious.

Item: Rasmussen's approval number for Bush was always markedly higher than other polls. He justified this by pointing to his four-choice question model, and said that it consistently led to higher presidential approval numbers.

But now a Democrat is in the White House, and Rasmussen reports consistently _lower_ presidential approval and higher disapproval... and he now points to that same four-way model, saying now that it predictably gives _lower_ approval numbers for presidents... even though he pointed to the same question structure to explain his atypically favorable Bush numbers.

That is an outright methodological contradiction. Rasmussen's Republican alignment is hardly a secret (he's become a favorite on Fox News) and his contradictory claim raises definite problems... however good his horse-race numbers are.

These are facts, and they raise real questions. If questioning a pollster who so openly contradicts himself, exactly in line with his own political preferences, is a product of "liberal kool-aid," then objective analysis is in serious trouble.

____________________

GARY WAGNER:

I think you need to read a little closer, Centrist_Dem. Rasmussen has never used the "Excellent, Good, Fair, and Poor" questions. He did an experiment (http://www.rasmussenreports.com/public_content/politics/obama_administration/november_2009/question_wording_and_job_approval) where he tried using the excellent, good, fair and poor questions, and Obama would have had a 38% approval in that poll. It's why the liberal pollsters made up the trick for Bush.

____________________

seg:

TruthHurts:
Surely you know that virtually all pollsters have strong political leanings, mostly liberal. Except for the networks, almost all do contract polling for politicians, almost always for a specific party.

For example, Tom Jensen at PPP unabashedly cheers for liberal Democrats and makes no bones about their contract polling. He is unusual only for his candor.

By the way, I applaud your "TruthHurts" moniker. Few of us are willing to admit that we often find the truth painful.

____________________

Hmmm, interesting!
My apologies if I miss the point, but I am heartbroken about this.

____________________


