
When Pollsters Attack: Epilogue

Topics: 2008, Disclosure, The 2008 Race

Nearly two weeks ago, just before we kicked off our Disclosure Project, InsiderAdvantage pollster Matt Towery used a syndicated column headlined "What's a Quinnipiac?" to attack the Florida polls conducted by the Quinnipiac University Polling Institute. Towery not only highlighted how his polls differed from a recent Quinnipiac survey but also commissioned Mason-Dixon Polling and Research to conduct a parallel poll to prove his point. Towery's unusually jocular broadside amounted to "one pollster whack[ing] the other upside the head," as Politico's Jonathan Martin put it.

"At the very least," Towery argued,

Quinnipiac numbers should stop being taken at face value as the paragon of accuracy in Florida. Somewhere in their methodology they continue to misread the state they claim to know so intimately.

When I looked at the four polls of Florida Republicans conducted recently by InsiderAdvantage, Quinnipiac and Mason-Dixon, the differences among them seemed to be explained mostly by the inclusion of Newt Gingrich as a candidate in the Quinnipiac poll and the fact that Fred Thompson's announcement occurred during the fielding of the Quinnipiac poll but before the others. Still, Towery's suggestion that the Quinnipiac differences might be found "somewhere in their methodology" led me to ask the same kinds of methodological questions we have been asking as part of our Disclosure Project. The pollsters' responses follow, and a difference in sampling methodology adds another possible explanation: Quinnipiac's sample of "registered Republicans" represents a population roughly four times the size of the "likely voters" surveyed by InsiderAdvantage and Mason-Dixon.

Interview Dates - One question I asked only of Quinnipiac was to provide the number of interviews conducted before and after Fred Thompson's announcement of candidacy. Doug Schwartz at Quinnipiac reports that 199 (or 45%) of their 438 interviews were conducted on or before the evening of September 5. Thompson declared his intentions later that night and received a burst of positive coverage in the week that followed. While the methodologies of these surveys differ, it is worth remembering that the other polls by InsiderAdvantage and Mason-Dixon were fielded in their entirety after September 5.

Sample Frame - Although the term is a bit wonky, one of the most important ways these polls differ is in what pollsters call the "sampling frame." Put more simply, the issue is the source for the random sample of voters called by each pollster.

Quinnipiac uses a random digit dialing (RDD) methodology that contacts a random sample of all the working landline telephone numbers in Florida and then uses screen questions to select a random sample of registered Republicans. In contrast, both InsiderAdvantage and Mason-Dixon select voters at random from the list of registered Republicans provided by the Secretary of State, using both actual vote history and screen questions to identify and interview "likely" Republican primary voters.

For more information on the debate about RDD versus list sampling, see our prior posts here and here.
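To make the practical difference concrete, here is a minimal Python sketch of the two approaches. Everything in it -- the area codes, the voter-file fields and the screening logic -- is a hypothetical placeholder, not any of these pollsters' actual procedure.

```python
import random

# RDD frame (Quinnipiac-style): dial random landline numbers, then screen
def rdd_sample(n):
    """Generate random Florida-style landline numbers; registration and party
    are unknown until a respondent answers the screen questions."""
    area_codes = ["305", "407", "813", "904"]  # illustrative only
    return [random.choice(area_codes) + f"{random.randint(0, 9_999_999):07d}"
            for _ in range(n)]

# List frame (Mason-Dixon / InsiderAdvantage-style): draw from the voter file
def list_sample(voter_file, n):
    """Draw directly from registered Republicans with recorded primary history."""
    frame = [v for v in voter_file
             if v["party"] == "REP" and v["voted_recent_primary"]]
    return random.sample(frame, min(n, len(frame)))

# Hypothetical voter-file records, for illustration only
voter_file = [{"party": random.choice(["REP", "DEM", "NPA"]),
               "voted_recent_primary": random.random() < 0.4}
              for _ in range(10_000)]

print(len(rdd_sample(100)), "RDD numbers dialed (screened only after contact)")
print(len(list_sample(voter_file, 100)), "voters drawn from the registration list")
```

The point is simply that an RDD sample starts from every household with a working landline and must screen its way down to registered Republicans after contact, while a list sample starts from the registration file and can use recorded vote history before a single call is made.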

How Did They Select Republican Registered or Likely Voters? - The pollsters at Quinnipiac provided a complete and relatively straightforward answer. They asked two questions about vote registration and party affiliation:

Some people are registered to vote and others are not. Are you registered to vote in the election district where you now live, or aren't you?

[IF REGISTERED] Are you registered as a Republican, Democrat, some other party or are you not affiliated with any party?

Both Towery and Brad Coker at Mason-Dixon were initially reluctant to describe the specifics of their likely voter selection procedures, citing the need to protect "proprietary" methods. After a bit of email back-and-forth, however, both were willing to describe their methods in general terms. Let's start with Coker, answering on behalf of Mason-Dixon:

Our sample design and screening method takes into account voter registration, party registration, past primary voting history and likeliness to vote in the primary. Other factors that were taken into account were the age, county, gender and race of the voting population based on previous Republican primary elections.

What that means -- as I read it -- is that Mason-Dixon uses the past vote history recorded on the voter list to draw a random sample from the subset of Republicans it considers most likely to vote. They ask those sampled individuals questions on "likeliness to vote in the primary" and screen out unlikely voters. Finally, they weight the demographics of the final sample to match the demographics of voters in previous Republican primaries.
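As a rough illustration of that three-step logic -- filter on recorded vote history, screen on stated likelihood to vote, then weight to the demographics of past primary electorates -- here is a short Python sketch. The field names, the toy respondents and the target age mix are hypothetical stand-ins; Mason-Dixon's actual model is, by Coker's own description, more elaborate and proprietary.

```python
# Sketch of a list-based likely-voter design with demographic weighting.
# All field names and target figures below are hypothetical placeholders.

respondents = [
    {"voted_2004_primary": True,  "says_likely": True,  "age": "65+"},
    {"voted_2004_primary": True,  "says_likely": False, "age": "50-64"},
    {"voted_2004_primary": False, "says_likely": True,  "age": "18-34"},
    {"voted_2004_primary": True,  "says_likely": True,  "age": "50-64"},
]

# Steps 1-2: keep only respondents with past primary history who also pass the screen
likely = [r for r in respondents if r["voted_2004_primary"] and r["says_likely"]]

# Step 3: weight the surviving sample so its age mix matches past GOP primary turnout
target_age_mix = {"18-34": 0.08, "35-49": 0.22, "50-64": 0.33, "65+": 0.37}  # hypothetical

sample_mix = {g: sum(r["age"] == g for r in likely) / len(likely) for g in target_age_mix}

for r in likely:
    share = sample_mix[r["age"]]
    r["weight"] = target_age_mix[r["age"]] / share if share else 0.0

print([(r["age"], round(r["weight"], 2)) for r in likely])
```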

Towery reports using a similar procedure at InsiderAdvantage: "We do poll off of [a list of] registered voters, but we do then cull that number down based on a voting history that gives us a more likely voter sample." They then ask a screen question to identify likely primary voters: "are you likely to vote in the_____presidential primary to be held ____."

What Percentage of the Voting Age Population Did Each Poll Represent? - The calculation for the Quinnipiac poll is relatively straightforward: They report starting with a random sample of 1,325 adults and using the questions above to identify and interview 438 registered Republicans. So the Quinnipiac Republican sample amounted to 33% of Florida adults (438 divided by 1,325).

Again, Coker and Towery were initially reluctant about sharing specific numbers, but ultimately provided the information necessary to answer my question. Let's start again with Coker and Mason-Dixon:

The population we were trying to capture was the roughly 1 million Republican voters who will be most likely to vote in January. In a universe of approximately 3.8 million registered Republicans, we targeted a population of about 1.2 million Republican voters and had an incidence of 83%.

So Mason-Dixon interviewed a sample designed to represent approximately 1 million voters (1.2 million * .83) out of Florida's roughly 14.2 million adults, or 7.0% of the adult population.

Next, Towery and InsiderAdvantage:

The [target] universe based on our sample system was around 1.6 million. This reflects the slightly higher than normal turnout you see in a Presidential primary. Incidence rate, based on data I just received was around 75%. Based on your description this would mean a final "universe" of around 1.2 million voters (all registered) which I believe reflects the likely turnout for a GOP Pres. primary turnout.

So InsiderAdvantage interviewed a sample designed to represent approximately 1.2 million voters, or 8.5% of Florida adults.
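For reference, the three "share of adults" figures can be reproduced with a few lines of Python. The only input not taken directly from the pollsters' answers is the roughly 14.2 million estimate of Florida's adult population used above.

```python
# Share of Florida's voting-age population represented by each poll's target universe
FL_ADULTS = 14_200_000  # estimate of Florida's adult population used in the text

# Quinnipiac: 438 registered Republicans identified among 1,325 adults screened
quinnipiac_share = 438 / 1325

# Mason-Dixon: 1.2 million targeted voters at 83% incidence
mason_dixon_share = (1_200_000 * 0.83) / FL_ADULTS

# InsiderAdvantage: 1.6 million targeted voters at 75% incidence
insideradvantage_share = (1_600_000 * 0.75) / FL_ADULTS

for name, share in [("Quinnipiac", quinnipiac_share),
                    ("Mason-Dixon", mason_dixon_share),
                    ("InsiderAdvantage", insideradvantage_share)]:
    print(f"{name}: {share:.1%} of Florida adults")
```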

As should be obvious, Mason-Dixon and InsiderAdvantage sampled significantly narrower populations of Republican voters than Quinnipiac. Via email, Brad Coker argues that a narrower "screen" is more appropriate to a pre-election poll aimed at projecting the preferences of likely voters: "I would question the validity of a poll of 'registered Republican voters' simply on the grounds that 75% of those sampled probably won't be voting in January."

According to the Florida Secretary of State, the vote for Republican primary candidates totaled roughly 1 million in 2006 (Governor), 1.2 million in 2004 (Senate) and 700,000 in the 2000 presidential primary.

In response to my initial questions, Quinnipiac's Doug Schwartz sent this statement:

The methodology used by the Quinnipiac poll is similar to that of all the other major polling operations in the country. It has correctly predicted the outcome of every major race it has polled on in Florida during the past three years. For details on the methodology used, contact Doug Schwartz or visit http://www.quinnipiac.edu/x271.xml

That is true, but we should note that the final pre-election survey conducted by Quinnipiac in 2006 reported results among "likely voters" rather than among all registrants. So their primary voter "methodology" may shift as we get closer to Election Day. Pollsters continue to debate the merits of applying likely voter models months before an election, something I covered in great detail in 2004 in the context of general elections. Putting that debate aside, however, the point is that the universes sampled in this instance are very different.

What Are the Demographics? - I asked each pollster to provide the results to demographic questions asked of their Republican samples. Both Quinnipiac and Mason-Dixon were quick to respond. The table below shows that the Quinnipiac sample is a bit younger. This is not surprising given that voters are typically older than non-voters.

[Table: demographic composition of the Quinnipiac and Mason-Dixon Republican samples]

Both Quinnipiac and Mason-Dixon also included the regional composition of their samples. While their regions were not identical, their definitions of South Florida came close. Mason-Dixon had fewer Republicans in its Southeast Florida region (18% in Palm Beach, Broward, Dade and Monroe Counties) than Quinnipiac had in its South Florida region (23%, although the Quinnipiac region also includes Hendry County, which accounts for just 0.1% of registered Republicans statewide).

This last difference is important because, according to the Mason-Dixon cross-tabulations that Coker also provided, Rudy Giuliani ran far ahead of Fred Thompson (33% to 11%; n=70) in Southeast Florida, but trailed Thompson narrowly elsewhere (22% to 26%; n=330). So the fact that Quinnipiac had a greater percentage of respondents in South Florida provides yet another explanation for Giuliani doing better statewide in their poll.
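As a back-of-the-envelope illustration -- a two-region simplification built from Mason-Dixon's crosstabs, not either pollster's actual regional weighting -- here is the direction of that effect in Python:

```python
# Giuliani support by region in the Mason-Dixon crosstabs
giuliani_se_fl = 0.33    # Southeast Florida (n=70)
giuliani_rest = 0.22     # rest of the state (n=330)

# Statewide estimate under each poll's Southeast Florida share of the sample
for label, se_share in [("Mason-Dixon mix (18% Southeast FL)", 0.18),
                        ("Quinnipiac mix (23% South FL)", 0.23)]:
    statewide = se_share * giuliani_se_fl + (1 - se_share) * giuliani_rest
    print(f"{label}: {statewide:.1%} Giuliani statewide")
```

All else equal, the larger South Florida share pulls the statewide Giuliani figure up in this simplified calculation, which is the direction of the difference observed between the two polls.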

Coker also provided this information:

Since we have the Florida voter file, we know the precise demographic profile of those who have voted in previous elections (at least in terms of county, age, gender and stated race/ethnicity). Our sample matches it within 1-2% of the actual figures from the average of 2004 & 2006 GOP primary turn-outs. Deaths and out-migration could easily account for any differences.

Towery, on the other hand, was more reticent:

We don't give out our weighting percentages or our demographic regional breakdowns because those are proprietary and if we did so, it would be like Coke giving away the secret formula, well not that big, but important to us!

Which brings us back to the whole point of our Disclosure Project. We should congratulate all three pollsters for providing the "incidence" data necessary to help us answer, in essence, not just "What's a Quinnipiac?" (to borrow Towery's headline) but also "What's a Mason-Dixon?" and "What's an InsiderAdvantage?" As a result of their disclosure, we can see how different the "target populations" were and take those differences into account in assessing the results.

It took some coaxing, to be sure. Coker has previously refused similar requests on the grounds of protecting proprietary interests, and given his extensive experience in Florida (he tells me he has conducted more than 200 statewide polls there since 1984), his initial reluctance to respond in this instance is understandable. That makes his cooperation here all the more noteworthy. Hopefully, other pollsters will follow his lead, because the general descriptions and incidence calculations provided above could easily be replicated by every pollster and released online for every poll. Similarly, a demographic composition table, like the one above, would be an easy addition to the online documentation that virtually every pollster and news organization makes available for every poll.

On the other hand, Towery's "secret formula" dodge has a fundamental flaw. Coke need not give away its "secret formula" when it prints on every can, as required by law, a list of ingredients, the number of calories and the grams of carbohydrates and other nutrients in each serving. As should be obvious, the First Amendment precludes the sort of mandatory labeling for pollsters that the FDA requires for food. However, pollsters like Towery ought to start thinking about how to better label their own products in terms of their sample composition, lest some snarky blogger ask, "What's an InsiderAdvantage?"

 

Comments
Gary Kilbride:

Looks like Matthew Towery missed a splendid opportunity for a follow up attack, "What's a Strategic Vision?"

A week after Towery's article, that Republican polling firm released a Florida poll with a margin identical to Quinnipiac's: Giuliani leading Thompson 35-24. I'm slowly returning to political focus but I thought I remembered those numbers, and sure enough Pollster.com mentioned it on September 27, lower on this page, as posted by Eric Dienstfrey.

Here is the direct link to the Strategic Vision poll:

http://www.strategicvision.biz/political/florida_poll_092707.htm

The Strategic Vision poll was conducted September 21-23 and included Newt Gingrich, who checked in at 4%.

I may not agree with SV's partisanship but I do thank them for polling throughout 2006, which revealed Rudy's unusual strength in state after state, even while political blogs and conventional wisdom screamed he had no chance. The SV polling led me to take 10/1 on Rudy winning the GOP nomination, which don't look half bad right now.

____________________

according to the Mason-Dixon cross-tabulations that Coker also provided, Rudy Giuliani ran far ahead of Fred Thompson (33% to 11%; n=70) in Southeast Florida, but trailed Thompson narrowly elsewhere (22% to 26%; n=330). So the fact that Quinnipiac had a greater percentage of respondents in South Florida provides yet another explanation for Giuliani doing better statewide in their poll.

Apparently, a post-stratification estimate is "too wonky" for the Quinnipiac'ers. Funny, my biostats kids understand it just fine....

Wonky Question: What's the rationale for zero-one weighting of "likely voters" in your screens? Might you not get better information about swing voters if you included those who fail midway through a screen, by downweighting their response?

More Disclosure, please!

____________________


