

Sometimes the Magic Works...

Topics: Charts , House Effects , Loess regression , Pollster.com , Rasmussen , Research2000 , Trend lines

"Sometimes the magic works," said Chief Dan George in the 1970 classic film Little Big Man, "and sometimes it doesn't." The same can be said about the loess regression trend lines we plot in our charts.

When we plot pre-election poll results from various pollsters on the same charts, the trend lines usually have the helpful characteristic of minimizing the impact of outlier results and pollsters with consistent "house effects" on the overall estimate. In other words, if one of five or ten pollsters produces a consistently different result, their results do not typically skew the overall average significantly so long as the timing of the various polls is more or less random.
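A toy illustration may help make this concrete. The following sketch (invented numbers, not anything from our actual charts) shows a pollster with a constant +5 house effect whose field dates are evenly interleaved with another firm's, a stand-in for random timing. The bias lifts the whole average uniformly rather than creating an apparent trend at either end:

```python
# Toy illustration: two firms polling a flat "true" value of 50.
# Firm B has a constant +5 house effect, but its field dates are evenly
# interleaved with firm A's (a stand-in for random timing), so the bias
# shifts the overall level without manufacturing a trend.
polls = [(d, 55.0 if d % 2 == 0 else 50.0) for d in range(100)]

def window_mean(polls, lo, hi):
    """Average of all polls with field dates in [lo, hi)."""
    vals = [v for d, v in polls if lo <= d < hi]
    return sum(vals) / len(vals)

early = window_mean(polls, 0, 50)   # 52.5
late = window_mean(polls, 50, 100)  # 52.5 -- same level, so a flat line
```

Both windows land at 52.5: the house effect biases the level of the trend line, but so long as the mix of pollsters is steady over time, it does not fake a movement.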

But for some of the national measures we have been plotting recently -- especially Obama's job and favorable ratings and the question about whether Americans perceive things to be "headed in the right direction" or "off on the wrong track" -- a few pollsters that do daily or weekly tracking are producing results with large house effects. Unfortunately that combination, along with the more sporadic timing of other national surveys, is producing the appearance of trends on some charts that are not really trends.

Last night, for example, Andrew Sullivan linked to two charts that appear to show trends in recent weeks: An uptick in the unfavorable rating for Obama and an increase in the percentage saying that things are off on the wrong track. In both cases, unfortunately, the apparent trends are an artifact of timing and house effects.

Let me explain, starting with the right direction/wrong track chart, which follows. (I am using screen shots rather than our live-embedded version here to preserve the look of the chart at the time of this writing -- follow the link to the live chart to use the filter tools yourself):


What Sullivan noticed was the recent uptick in the red line (wrong track) and downturn in the black line (right direction) at the far right (or "nose") of the trend. Now look what happens when we use our filter tool to remove from the trend the two pollsters -- Rasmussen Reports and DailyKos/Research2000 -- whose weekly tracking results provide nearly half (41 of 96) of the polls plotted in this chart so far during 2009. The recent trend disappears, producing an essentially flat line since mid-April:


So removing just two pollsters -- and particularly the two that contributed all four of the polls released in the last two weeks -- eliminates the apparent trend. One problem we have is that these two pollsters release weekly tracks, while the others poll more sporadically. Worse, virtually all of the national pollsters released surveys just before the Obama administration reached its 100th day in office, and we have experienced something of a poll drought since.

But wait. Perhaps those two weekly tracks are catching a more recent trend that we might miss if we rely (for the moment) on the other national tracking surveys that have not produced more surveys in the last few weeks.

To check, let's use the filter tool to select only the surveys from Rasmussen and DailyKos/Research 2000. And just to be safe, I will also turn up the smoothing setting to be especially sensitive to any recent trend:


The trend is almost exactly the same as the version with these pollsters removed. Notice, however, that the gap between wrong track and right direction is larger on this chart of just Rasmussen and Research2000 (11 points) than on the previous chart excluding those two (4 points), with virtually all of the "house effect" coming from the Rasmussen survey.

So when we look at only the weekly trackers or only the other polls separately, we see flat lines over the last few weeks. When we put them together, we see a recent upward movement on "wrong track." Why? Because when combined, the weekly trackers are driving the "nose" of the trend line, and the trackers -- especially the Rasmussen track -- are producing consistently different results. So as the Rasmussen results have more influence in the trend line, they tend to drive the red line up and the black line down.
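The mechanics can be sketched in a few lines of Python. The numbers below are invented for illustration (not our chart data), and the local averaging is a crude stand-in for a loess fit, but the effect is the same: two individually flat series combine into an apparent "uptick" once only the tracker with the house effect is feeding the nose of the trend:

```python
# Toy simulation: a sporadic pollster reads a flat 50% "wrong track" but
# stops polling after day 70; a weekly tracker with a +8 house effect
# reads a flat 58% all along. Only the tracker feeds the trend's "nose."
sporadic = [(d, 50.0) for d in range(0, 70, 10)]   # flat, stops at day 70
tracker = [(d, 58.0) for d in range(0, 100, 7)]    # flat, polls weekly

def local_mean(polls, center, bandwidth=15):
    """Crude stand-in for a loess fit: average the polls near `center`."""
    vals = [v for d, v in polls if abs(d - center) <= bandwidth]
    return sum(vals) / len(vals) if vals else None

combined = sporadic + tracker
mid = local_mean(combined, 50)    # both firms contribute: 55.0
nose = local_mean(combined, 95)   # tracker only: 58.0
```

Even though neither pollster shows any movement, the combined line climbs three points at the end, purely because the mix of pollsters shifted.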

Now let's repeat the exercise with the Obama favorable rating. First, the standard chart showing all surveys. The recent apparent trend is the sharp upward movement on the red "unfavorable" line:


In this case, the Rasmussen and Daily Kos/Research2000 results are six of the seven surveys conducted in the month of May (the new Gallup result was added this morning, after Sullivan's initial post). If we use our filter tool to remove the weekly trackers, the apparent recent change smooths out, reflecting the more gradual increase in Obama's unfavorable rating since the inauguration:


Again, are the trackers picking up a more recent trend that the other national surveys are missing? Here is what the chart looks like if we include only the Rasmussen and DailyKos/Research2000 polls. Here, we see virtually no trend since late March:


The last chart above also clearly shows the enormous house effect separating (in this case) Rasmussen and DailyKos/Research 2000 surveys, with Rasmussen producing consistently lower favorable and higher unfavorable ratings for Obama.

We have discussed the "why" of house effects, especially the consistent differences in the Rasmussen tracking, in previous posts. This case involves something a little more troubling for us: The way house effects and timing have combined to produce misleading "trends" that are more artifact than real. That is something we need to address in a systematic way.

Update: At the suggestion of a reader, Andrew Sullivan removed only the Rasmussen surveys, with similar results to what I obtained above.



I was actually the reader who tipped off Andrew about this via email. I should have posted something here earlier--I noticed the effect Rasmussen was having on the Favorable chart several weeks ago, replicated it in several other charts, and kept meaning to mention it.

Anyway, I agree with your analysis, and I will be interested to see what you come up with to address it. I might note I think this is a particularly clear example of a general problem with tracking poll trends in the aggregate when the underlying poll mix is shifting over time, something that I think also contributed to some trend artifacts during the last election polling season.



DTM, not to dispute that you tipped Mr. Sullivan off about this, but I did, too. Something tells me that we may not have been the only ones to notice - the Rasmussen numbers are obviously different from the rest when you look at the chart.

I tried removing each of the pollsters in the composite in turn. The only one that had a meaningful effect on the overall results was Rasmussen. Now, Rasmussen could be right and the rest could be wrong, but it's definitely an outlier.



Doesn't Rasmussen use a sample of likely voters or voters all the time, in other words, screening the sample more than other pollsters? Depending on what questions they use to winnow the sample, this could be the reason why their results differ from the others.

Do others agree?



FWIW, I sent Sullivan email as well, so he probably got a bunch of "try filtering out Rasmussen" suggestions.



This seems like a pretty straightforward case of Simpson's Paradox. If you read the Wikipedia article, this is a lot like the kidney stone example, but replace "stone size" with "pollster". So the pollsters' collective house effects near a point on the line will matter. Is there a way to account for this other than just separating the two out? Can you, say, run a regression on the house effects and subtract that out to get just a relative trend?



Sorry--obviously I withdraw the claim of being THE reader who tipped off Andrew. In my defense, I got a reply email informing me a post would be forthcoming, and Andrew expressed this in the singular, so that is why I assumed I was the guy.

Anyway, I would note again the problem is not so much Rasmussen's notable house effect per se, as the general problem posed for detecting trends when you have shifting poll mixes in a world with house effects.



I also sent in an e-mail to Andrew about this last night. lol

Looks like the Rasmussen issue -- which has been around for quite some time -- is something many of us have noticed.


Mark Blumenthal:

@Mainer: Yes, I agree. See the two previous posts I linked to above (and here and here) for more discussion of that issue.

@DTM, Randy, Matt, Russ: The Rasmussen difference is pretty hard to miss, no doubt. And the only reason I didn't email Sullivan myself Monday night is that I wanted to do the post above first. But I'm glad that all of you were able to use the filter tool to reach your own conclusions. That's the reason we worked so hard last year to include those sorts of interactive features in the chart.

@AySz88: I'm going to defer to Charles on this issue, but I know that he has been working on a way to adjust trend lines for "house effects" in something like the manner you describe. The tough part, however, is how we would present and/or implement such a trend line in our Flash chart. But we're open to suggestions.

