How Pollsters Affect Poll Results

Topics: House Effects

[Chart: Estimated house effects (Obama minus McCain) by polling organization, with 95% confidence intervals]

Who does a poll affects the results, at least somewhat. These differences are called "house effects" because they are systematic effects due to the survey "house," or polling organization. It is tempting to think of these effects as "bias," but that is misleading. The differences are due to a variety of factors that represent reasonable differences in practice from one organization to another.

For example, how you phrase a question can affect the results, and an organization usually asks the question the same way in all its surveys. This creates a house effect. Another source is how the organization treats "don't know" or "undecided" responses. Some push hard for a position even if the respondent is reluctant to give one. Other pollsters take "undecided" at face value and don't push. The latter get higher rates of undecided, but more importantly they get lower levels of support for both candidates as a result of not pushing respondents for how they lean. And organizations differ in whether they typically interview adults, registered voters, or likely voters. The differences across those three groups produce differences in results. Which is right? It depends on what you are trying to estimate: the opinion of the whole population, of people who could easily vote if they choose to, or of the probable electorate. Not to mention the vagaries of identifying who is really likely to vote. Finally, survey mode may matter. Is the survey conducted by random digit dialing (RDD) with live interviewers, by RDD with recorded interviews ("interactive voice response," or IVR), or over the internet using panels of volunteers who are statistically adjusted in some way to support inferences about the population?

Given all these and many other possible sources of house effects, it is perhaps surprising the net effects are as small as they are. They are often statistically significant, but rarely are they notably large.

The chart above shows the house effect for each polling organization that has conducted at least five national polls on the Obama-McCain match-up since 2007. The dots are the estimated house effects and the blue lines extend out to a 95% confidence interval around the effects.

The largest pro-Obama house effect is that of Harris Interactive, at just over 4 points. The poll most favorable to McCain is Rasmussen's Tracking poll at just less than -3 points. Everyone else falls between these extremes.

Now let's put this in context. We are looking at effects on the difference between the candidates, so that +4 from Harris is equivalent to two points high on Obama and two points low on McCain. Taking half the estimated effect above gives the average effect per candidate. The average effects are at most 2 points per candidate. Not trivial, but not huge.

Estimating the house effect is not hard. But knowing where "zero" should be is very hard. A house effect of zero means the pollster perfectly matches some standard. The ideal standard, of course, is the actual election outcome. But we don't know that now, only after the fact in November. So the standard used here is our Pollster.com Trend Estimate: house effects are measured relative to it. If a pollster consistently runs 2 points above our trend, their house effect would be +2.
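
Just to make the arithmetic concrete, here is a minimal sketch of that idea in Python. It is not the actual Pollster.com estimator, and the margins and trend values are invented:

```python
# Minimal sketch: a pollster's raw house effect is its average deviation from
# the trend estimate on the dates of its polls.  Numbers below are invented.

def raw_house_effect(poll_margins, trend_at_poll_dates):
    """Average of (reported margin - trend estimate) across a pollster's polls."""
    deviations = [m - t for m, t in zip(poll_margins, trend_at_poll_dates)]
    return sum(deviations) / len(deviations)

margins = [6.0, 4.0, 5.0, 3.0]   # Obama-minus-McCain margins from one pollster
trend   = [4.0, 2.2, 2.8, 1.0]   # trend estimate on the dates of those polls
print(raw_house_effect(margins, trend))   # 2.0 -- runs about 2 points above trend
```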

The house effects are calculated so that the average house effect is zero. This doesn't depend on how many polls a pollster conducts. And it doesn't mean the pollster closest to zero is the "best." It just means their results track our trend estimate on average. That can also happen if a pollster gyrates considerably above and below our trend but balances out. A nicer result is a pollster that closely follows the trend, but either pattern could produce a house effect near zero. For example, Democracy Corps and Zogby have very similar house effects, near -1. But look at their plots below and you see that Democracy Corps has followed our trend quite closely, though about a point below it. Zogby has also been, on average, a point below trend, but his polls have shown large variation around the trend, with some polls as near-outliers above and others as near-outliers below. The net effect is the same as for Democracy Corps, but the variability of Zogby's results is much higher.
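
A small sketch of the centering and of the mean-versus-variability point, again with invented numbers rather than the actual Democracy Corps or Zogby figures:

```python
# Invented deviations from trend (poll margin minus trend estimate) for three
# hypothetical pollsters.
from statistics import mean, pstdev

deviations = {
    "Steady pollster":   [-1.2, -0.8, -1.0, -1.1, -0.9],  # hugs the trend, about 1 pt low
    "Volatile pollster": [-4.0, 2.0, -3.0, 1.5, -1.5],    # swings widely, about 1 pt low
    "Other pollster":    [2.0, 2.2, 1.8, 2.0, 2.0],
}

raw = {name: mean(devs) for name, devs in deviations.items()}
grand_mean = mean(raw.values())            # shift so the average house effect is zero
house_effects = {name: r - grand_mean for name, r in raw.items()}
spread = {name: pstdev(devs) for name, devs in deviations.items()}

for name in deviations:
    print(f"{name}: house effect {house_effects[name]:+.1f}, spread {spread[name]:.1f}")
# The first two pollsters end up with the same house effect (-1.0) but very
# different spreads around the trend (0.1 vs. 2.4).
```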

Incidentally, the Democracy Corps poll is conducted by the Democratic firm Greenberg Quinlan Rosner Research in collaboration with Democratic strategist James Carville. Yet the poll has a negative house effect of -1. Does this mean the Democracy Corps poll is biased against Obama? No. It means they use a likely voter sample, which typically produces modestly more pro-Republican responses than do registered voter or adult samples. Assuming that a house effect necessarily reflects a partisan bias is a major mistake.

How can you use these house effects? Take a pollster's latest results and subtract the house effect from their reported Obama minus McCain difference. That puts their results in the same terms as all others, centered on the Pollster.com Trend Estimate. This is especially useful if you are comparing results from two pollsters with different house effects. Removing those house differences makes their results more comparable.
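
For instance, a hypothetical adjustment might look like this; the house effects and margins below are made up, not the estimates from the chart:

```python
# Hypothetical house effects and latest reported margins (Obama minus McCain),
# for illustration only.
house_effect  = {"Pollster A": 2.5, "Pollster B": -1.5}
latest_margin = {"Pollster A": 5.0, "Pollster B": 1.0}

# Subtract each pollster's house effect from its reported margin.
adjusted = {name: latest_margin[name] - house_effect[name] for name in latest_margin}
print(adjusted)   # {'Pollster A': 2.5, 'Pollster B': 2.5} -- the 4-point gap disappears
```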

What impact do house effects have on our Pollster.com Trend Estimate? A little. Our estimator is designed to resist big effects of any single pollster, but it isn't infallible, especially when some pollsters do far more polls than others or when one pollster dominates during some small period of time. We can estimate house effects, adjust for these, and reestimate our trend with house effects removed. The result runs through the center of the polls, but doesn't allow the number of polls done by an organization to be as influential.
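
Roughly, the logic is: fit a trend, estimate house effects against it, subtract each pollster's effect from its polls, and re-fit. The sketch below is only a stand-in for our actual trend estimator, using a crude moving average and invented (day, pollster, margin) data:

```python
from statistics import mean

def fit_trend(polls, window=7):
    """Crude stand-in for the trend: mean margin of all polls within +/- window days."""
    days = sorted({day for day, _, _ in polls})
    return {d: mean(m for day, _, m in polls if abs(day - d) <= window) for d in days}

def house_effects(polls, trend):
    """Mean residual per pollster, shifted so the average house effect is zero."""
    resid = {}
    for day, house, margin in polls:
        resid.setdefault(house, []).append(margin - trend[day])
    raw = {h: mean(r) for h, r in resid.items()}
    center = mean(raw.values())
    return {h: v - center for h, v in raw.items()}

def trend_without_house_effects(polls):
    trend = fit_trend(polls)                      # step 1: ordinary trend
    effects = house_effects(polls, trend)         # step 2: house effects relative to it
    adjusted = [(day, house, margin - effects[house])
                for day, house, margin in polls]  # step 3: remove the house effects
    return fit_trend(adjusted)                    # step 4: re-fit on the adjusted polls

# Invented polls: (day, pollster, Obama-minus-McCain margin)
polls = [(1, "A", 6.0), (2, "B", 1.0), (3, "A", 5.0), (4, "B", 2.0),
         (5, "A", 7.0), (6, "B", 1.5), (8, "A", 6.0), (9, "B", 2.5)]
print(fit_trend(polls))                     # ordinary trend estimate by day
print(trend_without_house_effects(polls))   # trend re-fit after removing house effects
```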

The results are shown in the chart below. The blue line is our standard estimator and the red line is the estimate with house effects removed. With house effects removed, the current trend stands at +2.0, while the standard estimate, which makes no adjustment for house effects, stands at +1.7. A little different, but given the range of variability across polls and the uncertainty as to where the race "really" stands, this is not a big effect.

[Chart: Pollster.com national trend estimate, standard estimator (blue) vs. estimate with house effects removed (red)]

The impact of house effects isn't always this small. Looking back along the trend, we see that the red and blue lines diverged by as much as 1 point in late June, largely because of the large number of Rasmussen and Gallup tracking polls during that time and the relative scarcity of polls with positive house effects in that period. A smaller but still notable divergence occurred in late February and early March.

The bottom line is that there are real and measurable differences between polling organizations, but the magnitude of these effects is considerably less than some commentary would suggest. Many of the house effect estimates above are not statistically different from zero. Even ignoring that, the range of effects is rather small, though of course in a tight race the differences may be politically important. Finally, the effect on our Pollster.com Trend Estimate is detectable but does not produce large distortions, even if we can see some noticeable differences at some times.

The charts below step through all the pollsters and plot their poll results against the standard trend and the trend with house effects removed. Pollsters with fewer than 5 polls are lumped together as "Other" pollsters. Once they reach our minimum number of polls, we'll have house effects for them too.


[Charts: Each pollster's polls plotted against the standard trend and the trend with house effects removed]

 

Comments
brambster:

Charles, great information again, but I do wonder about one big thing with these results.

In How We Choose Polls to Plot: Part IV, you indicated that Rasmussen, Gallup Tracker and YouGov together account for 128 of 286 data points, or 45% of your national data (as of about a week ago). Being that your findings show that these three pollsters have the biggest McCain house effect of all, with each showing a confidence interval fully on the McCain side, wouldn't this also be influencing your findings of a +0.3 point current effect in Obama's favor? I would think the house effect is fine, but not the trend that you are comparing it to.

It would just seem wise to include these results less frequently, especially when all three have a net effect that is so one-sided.

With that one exception, this level of refinement is the best that I have ever seen.

____________________

faithhopelove:

"The poll most favorable to McCain is Rasmussen's Tracking poll...."

____________________

jsh1120:

Have to admit that I love it when my seat of the pants impressions are borne out by actual analysis. I try to be swayed when that doesn't happen, but it's much easier when the data conform to my preconceptions.

So it's reassuring to find that my rule of thumb of a 2-4% tilt toward the GOP in Rasmussen's results is supported by the data. And it's just as bracing to find that my skepticism about Zogby's results, whatever they are, seems justified.

I do find it interesting that the major national media polls (e.g. CBS/NYT, LATimes/Blmbrg, NBC/WSJ, AP/Ipsos, and ABC/WaPo) along with Pew all fall slightly toward the Dem side (and Fox toward the GOP side.) That's also been my seat-of-the-pants impression. I continue to wonder, though, whether it's a true "tilt" or simply the result of what I believe is the more careful methods of those polls compared to the daily tracking polls from Gallup and Rasmussen.

____________________

Brambster-- Thanks! The trackers are troublesome that way. Their sheer volume makes them influential. And since they all lie below the trend, they are pulling the trend down a little. Of course, one could also say that the major media polls mostly lie above trend, so they are pulling it up. But the number of trackers increases their influence. We'll keep monitoring that. Hopefully other polling ramps up after the conventions as well, reducing the share of all polls due to trackers (remember, we only include independent samples from the trackers, so they aren't an everyday addition).

jsh1120 makes a good point relevant to this as well. The major media polls mostly come in above trend. Bias? Or the nature of their sampling? Partly that's RV vs. LV, but it is also the number of callbacks and the effort put into reaching hard-to-get respondents. (But give Gallup's tracker credit for its cell phone component on that score.) As I said in the post, these house effects don't mean deliberate bias by any means. They come from a lot of reasonable decisions and differences.

Finally, let me push the point that we don't know where zero really is. Maybe the daily trackers are all closer to right, and everyone else is overestimating Obama's lead. There is nothing in the data that allows us to know the answer to this. So centering everything on the mean house effect is a convenient way of settling on a zero. It also makes sense that a group of pollsters trying to measure the same thing may, on average, come close. But there is no guarantee of this. So before you dismiss any poll here, do think about how you know it is "wrong".

Charles

____________________

How interesting! It seems that consumers of most individual polls should, by and large, think of house effects as being no more than 1-2 points (though the exceptions, Rasmussen, Economist/YouGov, Harris, and CBS/NYT, are bothersome). But considering the sampling error of single polls, even there the effect is not a deal-killer.

Taking the median, as I do over at the Princeton Election Consortium, is an effective way of handling outliers and results in a stable electoral vote (EV) estimator. Overall, it seems that the Pollster approach (and mine) of collecting polls is an effective means of reducing the house-effect problem to very manageable levels.

____________________

John:

Excellent article, Charles. You mentioned that some of the 'house effect' is due to RVs vs. LVs, and I was wondering if you have calculated the average overall difference between RV polls and LV polls, and whether that difference has changed over time.

____________________

cafezoo:

I'm wondering whether pollsters disclose the sponsor(s) of a poll before asking questions over the phone. I suspect the name Fox or NYT might lead answers one way or the other. Is there any research about this possible "house effect"?

____________________
