
Rasmussen's Senate House Effect

Topics: House effect , Rasmussen

Whenever I meet a Democrat and they find out I write for Pollster.com, the first thing they almost always ask is, "What are your thoughts on Rasmussen? They're always biased against the Democrats." On the liberal site Daily Kos, Steve Singiser's nightly political wrap-up packs all Rasmussen polls at the bottom of his posts and mockingly refers to them as coming from the "House of Ras". I also got in on the criticism last month, in a post where I pointed out that Rasmussen had higher pro-Republican house effects during important news cycles in 2008.

But does Rasmussen have a large pro-Republican house effect in 2010? Looking at the generic ballot, the answer seems to be an emphatic yes. Using only Rasmussen polling, the Pollster.com aggregate gives the House Republicans a 7.3% advantage as of this writing. Using all pollsters except Rasmussen, House Republicans hold only a 0.1% lead. Some of this may be their use of a likely voter model, although it is unlikely that it accounts for all of Rasmussen's difference.

Of course, the generic ballot is only one of the many contests that Rasmussen polls. Rasmussen has accounted for a little over 50% of the legitimate polls conducted for United States Senate races in 2010. Given that Rasmussen has flooded the zone, we have to ask whether their Senate polls carry the same sort of house effect as their generic ballot. Many liberals would like to believe so, but no one, to my knowledge, has confirmed or disproved it... until now.

David Shor, a visiting graduate student collaborator at Princeton University, has estimated Rasmussen's house effects in all possible Senate races (his methods are outlined here). Using data supplied and collected by Rasmus Pianowski and me, he found the differences in house effects across Senate races to be mostly insignificant (or "eerily consistent"). The difference between the Senate races and the generic ballot, however, was highly significant*.

[Figure: estimated house effects by pollster]

Instead of the 5-point pro-Republican house effect seen on the two-way generic ballot, the pooled Senate house effect is only about 2 points. For example, a 44% to 36% Republican lead on Rasmussen's generic ballot is probably a tie: 44 / (44 + 36) = 55% of the decided vote for the Republicans, and subtracting the 5-point house effect leaves them at 50%. A 44% to 36% Republican lead in a Senate race, on the other hand, means the Republican really is ahead: the same 55% of the decided vote minus the 2-point house effect leaves the Republican at 53% of the decided vote.
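For readers who want the arithmetic spelled out, here is a minimal sketch of that adjustment (the function is purely illustrative, not part of Shor's model; it just reproduces the back-of-the-envelope correction described above):

```python
def adjusted_two_way_share(rep_pct, dem_pct, house_effect):
    """Republican share of the two-way (decided) vote after subtracting an estimated house effect."""
    two_way = 100.0 * rep_pct / (rep_pct + dem_pct)  # share of the decided vote
    return two_way - house_effect

# Generic ballot: a 44-36 Rasmussen lead is roughly a tie after the 5-point correction.
print(adjusted_two_way_share(44, 36, 5))  # 50.0
# Senate race: the same 44-36 lead remains a real lead after the 2-point correction.
print(adjusted_two_way_share(44, 36, 2))  # 53.0
```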

Does this variation between the generic and Senate house effects make a difference in how many seats Republicans would pick up if the election were held today? Shor, who launched a preliminary 2010 Election Projection System along with his Stochastic Democracy blog team, the Princeton Election Consortium, and me, has shown that the differing house effects do change our projections. If the Rasmussen house effect from the generic ballot were applied to the Senate races, Republicans would be predicted to pick up only 3 seats. With a Senate-specific house effect, they are predicted to pick up 5 seats, with Missouri and Pennsylvania falling into the Republican column.

It should be pointed out that Rasmussen, like any other pollster, will have outlier polls that do not fit neatly with any assigned house effect. This analysis also tells us that, on average, some of the pollsters in this chart had consistently different results that benefited one party's candidates. More specifically, when viewing Rasmussen's Senate polls, one should realize that they tend to be more pro-Republican than other polls, but not as pro-Republican as Rasmussen's generic ballot polls. Of course, we will not know whether this house effect translates into accuracy until Election Day.

*Note: House effects were estimated in each Senate race and then averaged to create a single "Senate house effect". In the case of YouGov and Quinnipiac, the Senate and generic ballot house effects were averaged.
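As a rough illustration of the pooling described in the note, the per-race estimates are simply averaged into one number (the race labels and values below are placeholders, not the actual estimates):

```python
# Hypothetical per-race Rasmussen house-effect estimates, in pro-Republican points.
per_race_effects = {"Race A": 1.5, "Race B": 2.0, "Race C": 2.5}

# Pool them into a single "Senate house effect" by simple averaging.
pooled_senate_effect = sum(per_race_effects.values()) / len(per_race_effects)
print(pooled_senate_effect)  # 2.0 in this made-up example
```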

 

Comments
Farleftandproud:

Siena, for the Senate, would of course show up as the most favorable to Democrats, because they poll New York state more than anywhere else. I never see Siena take a shot at Missouri or Kentucky.

____________________

seg:

I like the way the author dismisses the effect of LV vs RV and adult:

"Some of this may be their use of a likely voter model, although it is unlikely that it accounts for all of Rasmussen's difference."

The justification for this dismissal is Nate Silver's analysis, which uses a particular form of cherry picking: picking the greatest deviation for a comparison. If you look at today's generic poll, for example, you will see a much, much smaller deviation between Rasmussen and others.

Makes you wonder why they ALL switch to LV after 1 September. You don't have to wait until the week before the election to compare Rass to the others; you can do it for two whole months. More importantly, you can compare the average of the two months for ALL of the pollsters to the actual election on 2 November.

Interestingly, Ras seems to show declines for demos before others finally show the same. Is he on to something or is his evil plan working?

Also, I note that Harry uses R2000 as one of his comparators, which is amazing considering that it has pretty well been shown to be totally bogus.

I guess I am totally unimpressed with Harry and disappointed in Nate.

____________________

StatyPolly:

Thanks, Seg, for saving me lots and lots of typing. I read Harry's piece and was formulating a nice, long and slow response in my head while I ran a couple of errands. I get back and, voilà, Seg has already done it.

That 7.3 Rass GOP advantage number looked familiar though. I know exactly where I saw it.

http://www.gallup.com/poll/128075/Vote-Congress-Remains-Tied-Among-Registered-Voters.aspx

Look at the table entitled "Net Democratic Vote for Congress 1994-2006 Monthly averages based on registered voters". Those are the last four mid-term elections. What's the point of this table? "Percentage support of Dem candidates minus Repub candidates"

Final months' polling for 94, 98, 02, 06 is 0, 7, 6, 11. Those numbers mean that during the final month of polling, the generic ballot was tied among RVs in 94, and Dems had 7-, 6- and 11-point advantages in 98, 02, and 06 respectively. The total for that line is 24. The next line is "Final two-party vote": -8, -1, -4, 8. The total of that line is -5. If you subtract -5 from 24, you get the total discrepancy between Gallup's final months' polling averages for those four election cycles and the actual election outcomes. The number is 29. Divide by four and you get 7.25. That's the average discrepancy between Gallup's RV polling and the actual vote for the past four mid-term elections.
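For the record, here is that arithmetic spelled out in a few lines (the inputs are just the numbers quoted above from Gallup's table):

```python
# Gallup's numbers as quoted above, for 1994, 1998, 2002, 2006:
# final-month Democratic margin among registered voters, and final two-party Democratic margin.
rv_final_month = [0, 7, 6, 11]
actual_two_party = [-8, -1, -4, 8]

gaps = [rv - actual for rv, actual in zip(rv_final_month, actual_two_party)]
print(gaps)                   # [8, 8, 10, 3]
print(sum(gaps) / len(gaps))  # 7.25
```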

Additionally, Frank Newport wrote a couple of pieces on the subject of RV v. LV models and cited the same numbers.

I would certainly argue that the enthusiasm gap is much greater this cycle than the average of the past four mid-term elections, so a 7.3-point deviation between RV and LV actually seems, if anything, too small.

____________________

Paleo:

The LV model is precisely where the house effect can reveal itself. If you take only registered voters, there is no room for manipulation. The likely voter screen is where you can play games. Or, to be more charitable, narrow the sample.

____________________

real_american:

Why have you completely ignored the Gallup analysis that states that the registered voter model that they currently use always understates Republicans by 8%?

Gallup says that republicans are 8% higher than their polls and you say that Rasmussen polls show republicans 7% higher than Gallup. That identifies the entire gap. It doesn't justify you dismissing it as meaningless.

According to that, Gallup says you are just another political liberal hack who doesn't know anything about polls or pollsters and will do anything within your power to bash Rasmussen. If you do know anything about polls, you have let your politics take over - or you have let the website owner tell you what to say.

Is this the type of shoddy partisan hack articles we can expect from Huffington Post from now on?

____________________

seg:

Paleo:
Note that the goal is to predict the election results. Otherwise, none of us would be reading pollster.com.

I will say this again: failing to correct for differential turnout generally has greater error than any LV filters at issue.

If you wish to plump up demo numbers, just go straight to Adult and ignore RV. If you wish to be middle of the pack in plumping up demo numbers, use RV. That way your readers get less of a shock when you switch to LV in the home stretch to make you look better when compared to actual election results.

Actually, there is a unique opportunity for incorrect LV filters, and PPP seems to take advantage of it. You let the excellent turnout for demos in 2006 and 2008 strongly influence your model, despite the fact that only ignorati and partisan hacks believe it will turn out that way in 2010.

That's how the Boston Globe's "well-respected" poll managed to mis-predict Brown's win in Mass by 15%.

By the way, I charted Rasmussen's generic polls for the last midterm. Despite Silver's unsupported calumny, Rasmussen did not "converge" on the election results. In fact, his error increased as they got closer. His results 2 months out were right on the money. If he was playing games, he was inept at it.

In September, I will be extremely interested to see the differences among those who use LV filters (that is, those who don't quit polling as the election approaches; wonder what that tells you?).

____________________

Michael Weissman:

Harry- Seriously, 3 of your 14 bars represent R2K, whose boss has admitted "adjusting" results at his "discretion"? Are you guys really using those "data", or have you just not updated your figures? I noticed over on your Stochastic site that R2K results had been covered with a semi-transparent shade. Does that mean you've dropped R2K or not?

____________________

Ptolemy:

Any of these polls only have meaning in comparison with earlier polls. When Gallup compares generic ballot numbers with Gallup historical data, we get a pretty good idea about the 2010 election outcome:
http://www.gallup.com/poll/124010/Generic-Ballot-Provides-Clues-2010-Vote.aspx

But the Gallup data never show the GOP significantly ahead of the Dems, even when the GOP wins seats! That's OK, neither Gallup nor Rasmussen is an objective measurement, but they both reveal the same trend. Same thing for the Senate races, each poll needs to be compared with similar polls from previous elections. Take a look at some of the 2008 and 2006 charts on Pollster.com; there are some dramatic differences from today.

____________________

Dms:

I'm the author of the site, and I'd like to address some points.

Rasmussen's house effect is much more than an LV/RV distinction. Democracy Corps and PPP both use likely voter models, and they have *much* smaller house effects than Rasmussen. The estimated house effects for every other pollster's likely-voter polls are close to zero.

Seg,

You're mistaken. Rasmussen's house effect really did converge to zero right before the 2008 election, look at http://stochasticdemocracy.blogspot.com/2010/02/rasmussen-polling-irregularities-first.html


Weissman,

We estimated R2K's PIE (design effects) separately, and found that their Senate polls were on the level of Zogby Interactive. Their initial polls matched *terribly* with what other pollsters found when they came in.

So the model lowers the weight of their polls tremendously. If you'll look at the table, or read the methodology page, you'll see that the average R2K poll has an effective sample-size of 25.

Their polls have no influence on the model, they're just posted on the tables for reference.
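As a rough sketch of what that down-weighting means in practice, the usual relationship is that the effective sample size equals the nominal sample size divided by the design effect (the poll size and design-effect value below are illustrative, not the model's actual numbers):

```python
def effective_sample_size(n, design_effect):
    """Effective sample size after inflating the variance by a design effect (deff)."""
    return n / design_effect

# Illustrative numbers only: a 600-respondent poll with a design effect of 24
# carries about as much information as a simple random sample of 25 people.
print(effective_sample_size(600, 24))  # 25.0
```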

____________________

Michael Weissman:

Dms- OK, although the bogus R2K data is still being used in your calculations, it has negligible effect on the key stats. That's not good hygiene, but it isn't important. On your figure here, however, R2K accounts for 25% of the controls, about 50% of the net blue weight. Nothing about the appearance, including the error bars, suggests any reason to take those bars less seriously than the others. As a result, you're presenting seriously bogus visual info, not mere "reference". It's really easy to delete points from a bar graph.

I'm not defending Ras, who transparently distorts issue polls and who seems to slant more R in narrative-setting polls than in near-term horse races. However, use of really sloppy arguments or data can weaken what should be a strong argument.

____________________

real_american:

From the article: http://www.gallup.com/poll/140966/Registered-Voters-House-Voting-Preferences-Tied.aspx

-------------------------
"Democrats typically need a cushion of at least five percentage points among registered voters to maintain an advantage once turnout is taken into account. For example, Democrats consistently led by double digits among voters in 2006 before winning enough seats to take party control of the U.S. House, so they still held an advantage even with a Republican edge in turnout. In 1994, however, the best Republican year in recent memory, the generic ballot results among all registered voters generally showed a tied race or a Republican advantage for most of that election year."
---------------------------------

I know there is an article somewhere that shows an 8% gap but since I can't find it, I'll have to back off to a 5% gap.

But Gallup has never been slammed by liberals as having an evil house effect and they expect at least a 5% difference between their registered voter model and their likely voter model - as well as compared to actual election results.

How do you justify an out-of-hand dismissal of the likely voter model? Are your panties in a wad because Rasmussen shows a 7% gap and Gallup shows a 5% gap? Is 2% really worth all of this fuss and waste of time?

Also, as pointed out earlier, you still include the phone R2K polls and you include those ridiculous YouGov internet polls.

I have to wonder what the real issue with liberals and Rasmussen is. They could care less how much admitted and blatant liberal pollsters manipulate their data. They defended R2K until their fraud was revealed. For some reason, Rasmussen seems to represent a real threat.

My other question is, which Gallup polls did you include above? Did you include the USA Today / Gallup polls which use an all-voter model? You don't specify, and I don't trust anything coming out of a huffington post website.

____________________

Dms:

Real American -

We specify everything on the methodology and pollster pages of our website. We show every poll, and how our model accounts for it.

We also share our polling database in easily downloadable spreadsheets that show how our polls were coded.

To answer your questions,

1) R2K polls are excluded from the model, their polls are shaded in red for reference. This was mentioned earlier on the comment thread.

2) There is nothing wrong with YouGov polls. They are probably one of the best pollsters out there.

3) Rasmussen polls make up half of our senate poll database, and so yes, 2% is a large difference.

4) We treat Gallup-A, Gallup-RV, and Gallup-LV as separate pollsters. The USA-Today/Gallup polls were, if memory serves, coded as Gallup-A .

____________________

StatyPolly:

You see, Harry and DMS, to the 80% of the population that are not hard-core partisan leftists, this piece has the look and feel of a hatchet job from start to finish. You come in with a bias, and the rest of the way attempt to grasp at imaginary straws. How about offering a motive, for example? Is Ras simply incompetent or sinister? If DemCorps shows the same R+6 for the same time period that Ras shows R+6, using the same LV screen, does that make DemCorps' poll simply errant, while Ras is necessarily pushing a narrative? What about Newport's assertion that during the last four mid-term elections, Gallup's RV polls underestimated the actual GOP-Dem vote differential by 7.25% - the exact number claimed by you as Ras' house effect? Care to attempt to tear that down?

I've read only a handful of Mark B's editorials here since I started visiting this site, but until the Huff Post announcement a few days ago, I did not even realize that he is a career Dem pollster. Just solid, bias-free intellectual analysis of topics. At least in the pieces I looked over.

Here is what a professional evaluation of Rasmussen polling looks like, guys:

/blogs/why_is_rasmussen_so_different.html

____________________

real_american:

I appreciate your direct response, DMS. I didn't check your links because I won't click on anything that takes me to the daily kos or huffington post hate sites. Plus, Nate Silver has gotten so politically biased that going to five-thirty-eight is a waste of time.

I hope you can appreciate how skeptical people have become with the bashing of rasmussen reaching hysterical levels among the liberal community. We have liberals on this site calling for rasmussen to be investigated for criminal offenses for poll results they don't like.

I hope you can also appreciate how there is a huge credibility gap coming out of anything owned by huffington post. Her hatred of rasmussen and anything that might show republicans in a positive light is well known and well documented.

I know it sounds argumentative, but 2% is not enough of a difference to get all excited about.

Another gallup article: http://www.gallup.com/poll/127982/Understanding-Gallup-Election-2010-Key-Indicators.aspx

-------------------------
"As a starting point in determining which party has the advantage, if Democrats have close to a double-digit lead among registered voters, they are virtually ensured of also having a lead among actual voters -- whatever turnout happens to be on Election Day.

From that point, things become a little less clear-cut. Smaller Democratic leads do not necessarily mean the party is losing the race to control the House, but rather that the eventual election outcome will be more dependent on turnout. In general, the closer the registered voter results get to an even split, the better Republicans can expect to do, given usual turnout patterns."
-----------------------

So, Gallup is saying that anything less than a 10 point lead for democrats in their registered model means that the outcome is uncertain. It depends on turnout. That means voter enthusiasm. Poll after poll has shown republicans to have a huge lead in enthusiasm.

That puts Rasmussen's 7% difference well within Gallup's claim that democrats need a minimum of a 10% lead to guarantee getting at least 50% of the vote.

Here is another: http://www.gallup.com/poll/124010/Generic-Ballot-Provides-Clues-2010-Vote.aspx

"Turnout proved to be pivotal in 2002 as the Democrats' five-point lead among registered voters turned into a six-point deficit once likely voter preferences were measured (the actual vote on Election Day showed a five-point Republican advantage)."

So in 2002, Gallup's registered voter model underreported republicans by 11%. Again, Rasmussen's 7% is well within that margin.

____________________

seg:

Dms:
Thank you for replying to me.

Although I am interested to hear your 2008 analysis, I thought I referred to 2006. I did not look at the 2008 data because I feared it would not represent midterm elections well. Unless my memory is even worse than I think, there was no convergence in 2006.

I do research in a completely different area, but it seems to me that if you wish to impute bias to a pollster, you should look at his entire body of work, not just the generic ballot. If Ras deviates so much in the generic but not in the others, I would look closely at what he does differently for the generic. That might tell you something about LV filters in general and not just Ras.

Writing a more informative column:
Your column would have been much more informative if you had used numbers instead of qualitative judgments. For example, I might have written something like: (numbers guessed at)

"In 2010 Ras has deviated by 7% from the mean of RV and Adult pollsters and by 5% from two other LV pollsters. Note that this figure is not simple to determine because few pollster poll throughout the year for the generic vote.

"In previous elections, RV has deviated from actual election results by +3 to +14 Republican. Hence, RV polls are interesting for trends but not particularly useful for predictions. For that reason, adult and RV for polls are discarded from further discussion.

"After September 1, those still polling will all switch to LV, testifying to its widespread acceptance as being more predictive. In past elections, the mean of the LV polls taken the week before the election has deviated from election results by -2 to + 3% with a range among all pollsters of -5% to 5%. Rasmussen has deviated by an average of 2.5% with a range of -3.5 to +2%. Again, direct comparisons are difficult (especially for range) since Ras samples more often than others. In any cases, those values place him well within the 90% confidence interval of errors made by all pollsters in the last week before the election.

"However, if you compare the deviation between the average of a pollster's LV values for the two months prior to the election, there are only 3 with more than 3 polls. With such small numbers, statistical comparisons are meaningless.

"In short, although it is possible that Rasmussen has a positive bias for Republicans, it is difficult to conclude actual bias, much less support a hypothesis of convergence, because of the small numbers of competing LV pollsters and the fact that actual voter behavior may or may not actually converge."

Your explanation of including R2000 is unsatisfactory:
You used a graph because you know that visual comparisons are more easily grasped and remembered than recitation of statistics. So why include the distortion in the most impactful part of your message?

Furthermore, where were you when even a sporadic observer like me was saying R2000 was full of sh*t (and not just because I wanted different results)?

A friendly suggestion:
You do not read as a neutral seeker of truth. I highly recommend that you consider that as an option. There are plenty of partisans but few who inspire trust with their even-handed analyses. If you are one of those very few, you are likely to find the blue ocean of honest inquirers more profitable than the red sea of partisans. More importantly, it is difficult to think clearly and well and lead cheers at the same time.

If you wish to sway opinions, coming across as a partisan automatically limits your audience to those cheering for your side.

If you found time to read this, thank you for your indulgence.

____________________

gabe:

Likely voter models are more likely to have voter enthusiasm built into the model. If you just poll adults or even registered voters, it is based on nothing but random sampling or reading names off the voter rolls. Likely voter samples are based on demographics, past election statistics and voter enthusiasm. No wonder the GOP is way ahead on a likely voter sample. Oh, and food for thought here: Ras had the GOP at +6 this week on the generic ballot, a new WashPo/ABC poll had the GOP at +4 on their likely voter model, and last week Democracy Corps had the GOP at +6 on their model. Taking that into account, Ras does not look that much like an outlier among likely voter models.

____________________

Dms:

Seg,

Thank you for the thoughtful feedback. I didn't write the column; if I had, the tone would have been quite different.

I agree with you that RV and A polls are useful for trends, but not for actual results. That is what our model is attempting to do. I think our model captures trends very well because of this. The debate is in determining the actual trend. We're working on that.

The model was just launched on Monday, and there are a lot of changes that are going to be made over the next couple of days. Please try to follow it, constructive criticism is always appreciated.

____________________

seg:

Dms:
On the issue of a consistent cluster (however small or large)

In the Black Swan, Taleb describes numerous studies that have found that those who predict for a living generally have awful records but show strong "herd" effects. That is, they agree with each other more than the actual outcome.

Of the many occupations studied, pollsters were not included. That is too bad, because any discussion of House effects assumes the herd is correct and the outlier is wrong. Maybe they usually are right as a group, but is that always true?

I know that Gallup has done very well predicting the generic vote using LV filters and that in presidential elections, averages of pollsters (Silver, RCP) have done much better than all but one or two (possibly lucky) pollsters.

However, as Taleb points out incessantly, the herd is right when nothing upsets the apple cart and dead wrong when the dreaded Black Swan appears. His unrelenting point is that all models assume the past predicts the future, which is sometimes correct but eventually becomes completely wrong.

Gallup predicted 1994 in the last week pretty well, as I recall, but we really have a very limited data set to work from and the population and the issues change continuously.

I would maintain a humble attitude about both predictions and judgments about who is right and who is wrong. Given the high degree of unhappiness among voters and the fact that only a (non-representative) minority will vote, I don't know how anyone can confidently state that Ras is any more likely to be wrong than PPP. I bet Jensen would be the first to agree.

I also note that all pollsters seem to be way off from time to time, including PPP.

____________________

RWCOLE:

How did Rasmussen manage to go from giving Bush some of the highest job approval ratings of any pollster to giving Obama among the lowest ratings?

Did he change his polling methods?

If so, was it done with the sole purpose of producing lower ratings?

____________________

seg:

DMS:
Did the students actually write this?

On re-reading the column, in the last paragraph the writer seems to be confusing House effect with bias. A double jolt of humility would be a good thing, I think.

By the way, is there a website that explains your findings day to day or week to week?

____________________

Dms:

Seg,

I'm perfectly sympathetic to your argument. Rasmussen may very well be right.

Right now, we estimate which house effects are "right" by specifying opinion on the last election, and by assuming all house effects cancel out.

We're moving today or tomorrow to the following model:

We will "center" house effects based on the average LV pollster. Most pollsters have not released LV polls yet, so we will estimate where their LV polls "would be" based off of 2006 generic ballot data.

____________________

Michael Weissman:

David and Harry- Today would be a good day for replacing the seriously misleading figure.

____________________

Dms:

Michael,

I agree. We're waiting until all the polls come out today. It'll be in the next update. We're aiming for a 7-8 pm release. I'm a bit caught up in Neurology work at the moment.

____________________

Joe Simmons:

After reading an analysis such as this or the ones Nate Silver has done, one can't conclude much more than "huh, that's interesting, I'll keep that in mind."

Democrats get giddy, seeing such work as discrediting Rasmussen's polls and Republicans get defensive. But all you're saying is that Rasmussen varies from the norm more than others. And you say that Ras may very well be right. In the link you provided, Silver offers some explanations for this.

We can't yet know whether Rasmussen is overstating Republican enthusiasm/Democratic discouragement or whether other firms are trying too hard to fit their data into normative statistical models. When we do know, your work will help to explain why it was so.

But I would like to see some analysis of whether massaging data to accord with demographic norms is more accurate in a midterm election. Or might the enthusiasm gap act as a screen itself, making it more valid to ease up on the massaging? While Rasmussen floods the zone with its polls, analysts flood the zone with critiques. The debate is continually distorted, even without ill intent on the part of Rasmussen or analysts like yourself.

____________________

me.yahoo.com/a/wwrQpJUn0eDCZgGswo3gG66bG7KM5dHTw.4-:

Up until this year I've always thought Rasmussen's national polls were informative enough if you remember to deduct the 5 bonus points from the Republican side and that his state polls were about as good as anyone's. What disturbs me about his state level polling this year is the recent habit he's fallen into of doing these one-day insta-polls.

It's always been my understanding that the reasons for doing fieldwork for opinion polls over a 3-day period go beyond the capacity of call centers to do human interviews; that it's the same reason why tracking polls virtually never publish their overnight results (and probably a lot of the same reason why we have waiting periods for handgun purchases). The idea is that you want to try to separate persistent and meaningful trends from spurious, temporary fluctuations due to a primary bump, a bad news day or even crappy weather that tend to fade in a day or two.

In the field of digital signal processing, which plays by a lot of the same rules as opinion polling, this is referred to as noise rejection and two common ways of achieving it are oversampling and lowpass filtering. Oversampling just means averaging multiple samples of input to get a single sample of output. Since noise is random by definition, this produces regression to the mean in the noise component while leaving non-random features relatively undisturbed.

A rolling average, where each sample of output shares some amount of data in common with the one before as in a tracking poll, is a primitive low-pass filter. Filtering affects the frequency content of a signal so it requires more assumptions or a priori knowledge of the signal you're trying to identify. But as long as the signal is changing more slowly than the cut-off frequency of the filter, you're good.
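A minimal sketch of the rolling-average idea, with made-up daily toplines (the numbers are purely illustrative):

```python
def rolling_average(samples, window=3):
    """Trailing rolling average, as in a 3-day tracking poll."""
    return [sum(samples[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(samples))]

# Made-up daily toplines with a one-day spike; the 3-day average damps it.
daily = [46, 47, 46, 52, 46, 47]
print(rolling_average(daily))  # [46.33, 48.33, 48.0, 48.33] (rounded)
```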

Both of these approaches can work very well in reducing background noise when properly applied and can be of great help in signal identification. So as a sometime DSP guy they make good sense to me. But Rasmussen is doing neither of these things in any of his state-level polls that I've seen this year. I think this could explain some of the wild instability we've seen in some of his polling, such as the 6- to 8-point swings for each candidate in his last three Burr vs Marshall polls in NC. That alone makes me want to treat his recent work with more skepticism than I have in the past.

Another potentially troubling concern for me is that some of the things that produce momentary swings in public opinion are predictable and also happen to be the same kinds of things that draw a lot of media attention. So doing one-day polls would conceivably allow a pollster to swoop in after some newsy development in a race and immediately produce a poll that amplifies the effect of that event on public opinion, rather than just measuring it. That would obviously cross the line between research and advocacy. Call me a cynic, but I find myself unwilling to believe that anyone would ever do such a thing intentionally, not to point any fingers.

-CalD

____________________

me.yahoo.com/a/wwrQpJUn0eDCZgGswo3gG66bG7KM5dHTw.4-:

Oops: "...I find myself unwilling to believe that anyone would ever do such a thing intentionally..."

I meant: "...I find myself unwilling to believe that no one would ever do such a thing intentionally..."

-CalD

____________________

Huntingmoose:

Rasmussen has been most accurate in predicting the results of the election.

And the less accurate the more biased you are.


So it is not Rasmussen who is biased, but all the others such as Daily Kos, CNN, WaPo, ABC who are so insanely off target biased towards Dems.

____________________

gm:

Seg,

You're mistaken. Rasmussen's house effect really did converge to zero right before the 2008 election, look at http://stochasticdemocracy.blogspot.com/2010/02/rasmussen-polling-irregularities-first.html

LOL - you compare the final vote to the polls in the summer and the fall? You are treating the campaign as static. What an absolute joke! I think there may have been a contributing factor going on at the same time Rasmussen had Dem support going up - namely, economic armageddon under a Republican President. Could that have been a factor, maybe? Perhaps his "convergence to zero" was a "tracking of the race"? Call me crazy.

____________________

seg:

Dms, Joe Simmons, gm, StatyPolly, Michael Weissman, and others:

What an outstanding thread! I learned a great deal from it and found it very worthwhile.

Thank you

PS In the final analysis, they appear to be treating a "House effect" as a bias. If that outlier turns out to be correct, it is the rest that experienced errors.

____________________

ROY WILSON:

Shouldn't the merit and reliability of a polling outfit be based on past results?

If so, consider how many states Rasmussen missed in the 08 election: THREE (if you count Ohio as a tie going to Obama).

Throughout the 08 race, Rasmussen's polls remained more pro-McCain than the rest of the polls for the most part. But they were also the least volatile. Most importantly, they called the final result almost exactly: 52-46 vs. 53-46 in actuality.

And does anyone remember 2004, when Rasmussen's polls were consistently showing a close race in early September when other polls showed Bush with a huge lead? Then they showed Bush up late, and their final polls nailed pretty much everything.

And anyone remember this ad against Gallup for being pro Bush? http://www.usatoday.com/news/politicselections/nation/president/2004-09-28-gallup-defense_x.htm

And now its house effect is pro-Obama.

Bottom Line? Calls of bias really don't get us anywhere.

____________________

Hannibal_32:

I spend a lot of time reading various articles and blogs, and I am extremely impressed with the civility and deference shown on this site. Most of the posters express intelligent thoughts, and disagree in a very civil way.

In a macro sense, I tend to side with Rasmussen simply because the left has a tendency to hate those who either offer disagreement OR try to offer an unbiased viewpoint. It's childish, but very true.

____________________

gm:

Roy Wilson

I think that using 2006 and/or 2008 as standards for polls is a bit problematic, since those were Democratic wave elections. Such elections would tend to cut down on the historic GOP gains seen going from an RV to an LV model.

____________________


