
Omero: Roll Call, don't SurveyUSA


Roll Call newspaper has recently teamed with SurveyUSA to conduct polls in a variety of competitive House races. The surveys generated several Roll Call stories, local coverage in the various districts, some online backlash, and a new Roll Call story noting the controversy.

 

Disclosure & the backstory

 

But before I wade any further into this topic, let me clearly note my own conflicts. I am the pollster of record in two of the eight districts in which RC/SUSA polled (PA-10 and MO-9), which also happen to be two of the four districts where the RC/SUSA poll came in significantly more Republican than (our own) internal campaign polls. So I have an obvious interest in challenging the RC/SUSA results. But hear me out before you dismiss this post. (Also, in further disclosure, many years ago I was a Roll Call intern.)

 

SurveyUSA uses an automated methodology rather than live callers for its interviews. This methodology has stirred some controversy in the past. Some DC media outlets do not report on SUSA polls. Others, like Chris Cillizza at the Washington Post, express some skepticism. Here at pollster.com, Mark Blumenthal cautions against a reflexive opposition to SUSA's methodology, and their polls are reported on. Carl Bialik at the WSJ wrote of a warming toward SUSA and their methodology here and here. Nate Silver's accuracy ratings, built on several cycles of polling, put SUSA near the top, and Joel Bloom's 2002 paper also explores SUSA's accuracy in statewide surveys. But the topic of survey accuracy and pollster report cards is itself a large discussion, and Mark discusses SUSA's own report card here.

 

RC/SUSA discrepancies with public polling

 

But whatever one makes of SUSA's methodology, or national accuracy reports or ratings, there have been very clear discrepancies between the RC/SUSA surveys in Congressional races and other public polling. Below is a table showing the RC/SUSA results compared to other public results. In five of the eight races, the RC/SUSA results differ greatly from other public results.

 

District   Firm      Firm type   Dates        Dem %   GOP %   Dem adv.
AL-2       M & A     GOP         7/21-7/22    39      41       -2
AL-2       Anz L     Dem         8/3-8/6      50      40       10
AL-2       AEA/Cap   indep       8/6-8/7      47      37       10
AL-2       SUSA      indep       8/26-8/28    39      56      -17
MN-3       SUSA      indep       8/26-8/28    41      44       -3
FL-21      Hill      GOP         6/19-6/22    36      48      -12
FL-21      SUSA      indep       8/24-8/26    48      46        2
CO-4       BPN       Dem         5/13-5/15    43      36        7
CO-4       SUSA      indep       8/22-8/24    50      43        7
KS-2       Anz L     Dem         5/12-5/15    57      27       30
KS-2       SUSA      indep       8/19-8/21    50      43        7
MO-9       MA        Dem         8/12-8/14    41      39        2
MO-9       SUSA      indep       9/1-9/2      38      50      -12
PA-10      MA        Dem         8/19-8/21    54      27       27
PA-10      SUSA      indep       8/23-8/25    49      45        4
NM-1       GQR       Dem         6/29-7/2     47      44        3
NM-1       POS       GOP         7/22-7/23    41      47       -6
NM-1       SUSA      indep       8/26-8/28    51      46        5

(Dem adv. = Dem % minus GOP %, in points.)

 

When I look closely at some of the races with differences, I see a lack of attention to detail in the RC/SUSA surveys. The PA-10 survey misspells the Republican candidate's name throughout. The MO-9 survey butchers the spelling of the Republican gubernatorial candidate (and current MO-9 Congressman). The MN-3 methodology report makes no mention of how same-day registrants are accounted for, even though they can be as much as 20% of turnout in a presidential year. And then there is the drastic underrepresentation of black voters in the AL-2 survey, which led Roll Call to ask SUSA to reweight their data.
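To make concrete what reweighting a sample like that involves, here is a minimal sketch of a post-stratification adjustment: per-group weights scale the sample so its racial composition matches an assumed electorate. Every number below (targets, sample shares, group preferences) is invented for illustration; none of it is SUSA's actual data.

    # A minimal post-stratification sketch: rescale respondent weights so the
    # sample's racial composition matches assumed electorate targets.
    # Every number below is invented for illustration; none is SUSA's data.

    # Hypothetical electorate targets (e.g., black voters ~30% of turnout).
    targets = {"black": 0.30, "white": 0.65, "other": 0.05}

    # Unweighted sample composition, with black voters underrepresented.
    sample_shares = {"black": 0.18, "white": 0.76, "other": 0.06}

    # Per-group weight = target share / sample share.
    weights = {g: targets[g] / sample_shares[g] for g in targets}

    # Hypothetical group-level preferences: (Dem %, GOP %) within each group.
    prefs = {"black": (85, 10), "white": (25, 65), "other": (45, 40)}

    def margin(shares):
        """Dem-minus-GOP margin under a given group composition."""
        dem = sum(shares[g] * prefs[g][0] for g in shares)
        gop = sum(shares[g] * prefs[g][1] for g in shares)
        return dem - gop

    print("weights:", {g: round(w, 2) for g, w in weights.items()})
    print("unweighted margin:", round(margin(sample_shares), 1))
    print("reweighted margin:", round(margin(targets), 1))

Under these invented numbers, the composition choice alone moves the topline margin by more than a dozen points, which is why the weighting question in AL-2 matters so much.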

 

Lack of campaign context & common sense

 

Further, the Roll Call coverage accepted the poll findings as decisive fact, with bold headlines that ignored any campaign context: "[Democratic candidate Bobby] Bright Anything But" (AL-2) and "Missouri 9th May Be Waste of Democrats' Efforts" are two such examples. These and other RC stories reported the poll findings while leaving out the campaign context around them.

 

For example, although it wasn't mentioned in the story, the MO-9 survey was conducted during the Republican convention, quite possibly boosting Republican participation. (I'd also like to know what percent of the RC/SUSA sample comes from Boone County, the largest county in the district, which Democratic candidate Judy Baker currently represents in the legislature.) The PA-10 survey was conducted at a time when Democratic Congressman Carney had been on the air with positive television for the previous four weeks, while Republican challenger Chris Hackett had been largely off the air for months. Does it make sense that the two have nearly identical name ID? After the MN-3 survey, Roll Call marveled at the finding that younger voters give McCain and the Republican Congressional candidate the advantage, while older voters prefer Obama and the Democratic Congressional candidate, who is in his early 30s. Does that seem right, given everything we know about young voters?

 

The fallout

 

Now Roll Call seems to have backed away somewhat from its earlier reporting. In this week's story, they write: "It appears it's also possible to get a poll to say just about anything." And also this: "some of the conclusions were universal and inescapable," such as low Bush ratings, low Congressional approval ratings, and concern about the economy. These new observations are a far cry from calling a specific campaign a "waste of efforts." But the local coverage that reacted to the initial Roll Call stories is unlikely to be taken back.

 

And a few words in defense of our own in-house accuracy: our polling correctly predicted a Baker win in the MO-9 primary, as well as Carney's upset of former PA-10 Congressman Don Sherwood in 2006. In fact, every single one of our seven Congressional candidates won their primaries (or ran unopposed). Here at pollster.com, Mark has pointed out that everyone can have an off poll. But not all internal polls are off.

 

I think there are a few lessons from this incident.  First, there's more to judging survey quality than whether it was conducted internally or by an independent third party.  But second, and perhaps more important, Congressional handicappers should rely on more than a single poll's results to judge a race's viability. 
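To illustrate that second lesson with the AL-2 numbers from the table above, here is a minimal sketch of the difference between judging the race from one poll and from a simple average of all four. It is a crude average that ignores field dates and sample sizes, purely for illustration:

    # AL-2 results from the table above: (firm, Dem %, GOP %).
    al2_polls = [
        ("M & A",   39, 41),
        ("Anz L",   50, 40),
        ("AEA/Cap", 47, 37),
        ("SUSA",    39, 56),
    ]

    margins = [dem - gop for _, dem, gop in al2_polls]
    average = sum(margins) / len(margins)

    for (firm, _, _), m in zip(al2_polls, margins):
        print(f"{firm:8s} Dem margin: {m:+d}")
    print(f"Simple average:   {average:+.2f}")

The four AL-2 margins run from 17 points down to 10 points up; the simple average is roughly even, a very different read on the race's viability than the SUSA poll alone would suggest.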

 

Comments

I remember the first polling firm I worked at, where I was raked over the coals for misspelling a candidate's name in the SPSS code. In my defense, it was the first time I had programmed the 1,000+ lines of SPSS syntax for the firm (there was much more drama at the firm, but that is another story...). However, I have to agree that reputation is everything, and releasing something to the public or a candidate with obvious errors raises questions about what else you screwed up.

So, to the poor newbie at SUSA who made these typos, I salute you. Don't do it again.

____________________

Thomas Riehle:

When it comes to the final publicly released poll being accurate in predicting the results of the "actual poll" (the vote on Election Day), nobody does it better than SurveyUSA and Jay Leve. RC, stick to your guns!

____________________

Jay H Leve:

I appreciate how carefully Ms. Omero analyzed SurveyUSA's work. Misspellings are inexcusable; we regret them, we have fixed them.

SurveyUSA conducted these Congressional District polls using voter list (registration-based) sample. When SurveyUSA polls with RDD (random-digit-dial) sample, SurveyUSA typically weights to gender, age and race. When SurveyUSA polls with RBS sample, which we did for these congressional districts, SurveyUSA typically weights to gender and age but not race. In AL2, this resulted in a legitimate concern that blacks were underrepresented. In any contest where differential turnout could torque results, SurveyUSA attempts to highlight same. For example, in FL21, SurveyUSA took care to point out that those who completed the poll in Spanish voted differently than those who completed the poll in English, and that any under-representation of Spanish-speaking respondents would affect the results. In AL2, such cautionary language was needed and was missing; I regret it. SurveyUSA modeled a number of alternate scenarios which up-weighted black turnout in AL2. We made the new turnout models available to media.

To place SurveyUSA's current work into context: in the 2006 mid-term, a total of 211 Congressional District polls from 58 pollsters were released prior to the election; SurveyUSA released 27 of the 211 polls. Using methodology in 2006 identical to the methodology SurveyUSA is using in 2008, SurveyUSA's error on its Congressional District polling in 2006 was half that of other pollsters. SurveyUSA's bias was 1/10th that of other pollsters. See: http://www.surveyusa.com/2006CDreportcard051807.pdf

We invite scrutiny, and welcome fair criticism, of SurveyUSA's entire body of work. We thank Ms. Omero for suggesting ways SurveyUSA could improve its polling and ways the media could improve how polls are reported.

Jay H Leve
CEO
SurveyUSA
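
[Editor's note on the error and bias figures cited above: report cards like the one linked typically define error as the mean absolute difference between a poll's final margin and the actual election margin, and bias as the mean signed difference. The sketch below uses invented numbers to show the distinction; see the linked PDF for SurveyUSA's actual definitions.]

    # Sketch of common report-card metrics: "error" as the mean absolute
    # difference between polled and actual margins, "bias" as the mean
    # signed difference. All numbers are invented for illustration.
    polls = [
        # (polled Dem-minus-GOP margin, actual Dem-minus-GOP margin)
        (+4,  +7),
        (-12, -5),
        (+2,  +3),
    ]

    diffs = [polled - actual for polled, actual in polls]

    error = sum(abs(d) for d in diffs) / len(diffs)  # size of misses, sign ignored
    bias  = sum(diffs) / len(diffs)                  # systematic lean of the misses

    print(f"mean absolute error: {error:.1f} points")
    print(f"bias: {bias:+.1f} points (negative = margins leaned toward the GOP)")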

____________________


