Pollster.com

August 20, 2006 - August 26, 2006

 

Vacation


And finally . . .

As readers may have guessed from the lack of posts the last few days, I am trying to take a much-needed vacation break this week.  I interrupted it today to finish up a few items I did not quite get to last week or -- in the case of the push polling post -- that seemed important enough to justify the interruption.  My family probably disagrees.

And speaking of family, I am going to try hard to stay away from the blog for the rest of the week.  I'll be back on Monday, and you will definitely want to stay tuned next week.  Things are about to get a lot more interesting around here.  Sorry to be a tease (again), but stay tuned . . .


Crosstabs.org

Topics: Blogosphere , Incumbent Rule , Pollsters

Last week I discovered an interesting new blog devoted to political polling called Crosstabs.org.  Actually, Crosstabs.org is something of a blog within a blog, a site nestled within the conservative site RedState.com.  It combines frequent posts from blogger Gerry Daly -- who used to blog at his own site, Dalythoughts, and comment from time to time here on MP -- with an interesting twist.  The new site will include occasional contributions from five Republican campaign pollsters:  Robert Moran of Strategy One, Bob Ward of Fabrizio, McLaughlin & Associates, Brent McGoldrick of Grassroots Targeting, Bill Cullo of Qorvis Communications and Rob Autry of Public Opinion Strategies.

If imitation is the sincerest form of flattery, then MP certainly welcomes the arrival of more professional pollsters in the blogosphere, regardless of their political persuasion.  When it comes to methodology, we all do things a bit differently, and readers will benefit from having more perspectives online.  Take the issue of the "incumbent rule," for example.  In their first week, the pollsters at Crosstabs.org have posted some thoughts worth reviewing here, here and here.

Now, obviously, Crosstabs.org will handicap polls from a conservative perspective (just as Dalythoughts did during the 2004 cycle). Chris Bowers and his colleagues at MyDD and Ruy Teixeira at Donkey Rising have long done the same from the liberal side of the blogosphere.  And while I try to keep the handicapping and commentary as neutral here as I can, there is no doubt that I am a Democratic campaign pollster.  So a little balance is not a bad thing. 

Welcome to the neighborhood, Crosstabs.org!


So What *Is* A Push Poll?

Topics: Push "Polls"

Over the weekend, Greg Sargent of TPMCafe reported on what he considers "push polling, no question," involving some calls that trash two Democratic candidates for Congress, Kirsten Gillibrand in New York's 20th District and, more recently, Patty Weiss in Arizona's 8th District.

With all due respect to Sargent and his source, Mickey Carroll of the Quinnipiac University Polling Institute, both are using the wrong definition of "push polling."  Push polling is certainly more than poll questions that feed "the negative stuff," as Carroll puts it.  A true push poll is not a poll at all.  It is a telemarketing smear masquerading as a poll.

Back in February, in commenting on a different set of calls made, ironically, into the very same New York 20th District, I described real push polling in detail:

Many organizations have posted definitions (AAPOR, NCPP, CMOR, CBS News, Campaigns and Elections, Wikipedia), but the important thing to remember is that a "push poll" is not a poll at all. It's a fraud, an attempt to disseminate information under the guise of a legitimate survey. The proof is in the intent of the person doing it.

To understand what I mean, imagine for a moment that you are an ethically challenged political operative ready to play the hardest of hardball. Perhaps you want to spread an untruth about an opponent or a "rumor" so salacious or farfetched that you dare not spread it yourself (such as the classic lie about John McCain's supposed "illegitimate black child"). Or perhaps your opponent has taken a "moderate" position consistent with that of your boss, but likely to inflame the opponent's base (such as a Republican voting to raise taxes or a Democrat supporting "Bush's wiretapping program").

You want to spread the rumor or exploit the issue without leaving fingerprints. So you hire a telemarketer to make phone calls that pretend to be a political poll. You "ask" only a question or two aimed at spreading the rumor (example: "would you be more or less likely to support John McCain if you knew he had fathered an illegitimate child who was black?"). You want to make as many calls as quickly as possible, so you do not bother with the time-consuming tasks performed by most real pollsters, such as asking a lot of questions or asking to speak to a specific or random individual within the household.

Again, the proof is in the intent: If the sponsor intends to communicate a message to as many voters as possible rather than measure opinions or test messages among a sample of voters, it qualifies as a "push poll."

We can usually identify a true push poll by a few characteristics that serve as evidence of that intent. "Push pollsters" (and MP hates that term) aim to reach as many voters as possible, so they typically make tens or even hundreds of thousands of calls. Real surveys usually attempt to interview only a few hundred or perhaps a few thousand respondents (though not always). Push polls typically ask just a question or two, while real surveys are almost always much longer and typically conclude with demographic questions about the respondent (such as age, race, education, income). The information presented in a true push poll is usually false or highly distorted, but not always. A call made for the purposes of disseminating information under the guise of a survey is still a fraud - and thus still a "push poll" - even if the facts of the "questions" are technically true or defensible.

So it is not just about questions that "push" respondents one way or another, not just about being negative, not even about lying (although lying on a poll is certainly an ethical transgression).  It is about something that is not really a survey at all.

The calls that the Albany Times Union reported do not fit the definition of push polling.  First, the calls involved more than just a question or two.  They included a series of "fairly innocuous questions," such as "whether the country is headed in the right direction," Bush's job rating and the initial congressional vote.  Second -- and this is a big clue -- one respondent reports that he hung up in anger one night, "only to have a different person call back the next night asking him to finish answering the questions (he did)."  That sort of "call back" is something a real pollster would do but a "push pollster" would never bother with.  Third, the Times Union's reporting plausibly traces the calls to the Tarrance Group, a polling firm that has long conducted legitimate internal polling for Republican campaigns. 
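To make the distinction concrete, here is a minimal sketch (in Python) that treats the telltale characteristics above as rough evidence of intent.  The thresholds and field names are my own illustrative guesses, not industry standards:

    # Rough heuristic for flagging a calling operation as a likely
    # "push poll."  The thresholds are illustrative guesses; intent is
    # what matters, and these traits are merely observable evidence of it.
    def looks_like_push_poll(total_calls, question_count,
                             asks_demographics, attempts_callbacks):
        evidence = 0
        if total_calls > 10000:      # reaches voters at advertising scale
            evidence += 1
        if question_count <= 3:      # just enough "questions" to smear
            evidence += 1
        if not asks_demographics:    # real surveys profile respondents
            evidence += 1
        if not attempts_callbacks:   # real pollsters chase their sample
            evidence += 1
        return evidence >= 3

    # The NY-20 calls: a normal-length questionnaire, callbacks to
    # complete interviews, traced to a legitimate firm -- not flagged.
    print(looks_like_push_poll(total_calls=600, question_count=20,
                               asks_demographics=True,
                               attempts_callbacks=True))   # False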

I am in no position to evaluate the substance of the attacks reportedly made in the calls in NY-20 or AZ-08, and I will certainly not try to defend them.  The attacks tested in those surveys may well have been untrue, distorted or unfair.  If so, they deserve the same sort of condemnation we would give if they were delivered in a television or radio ad or in an attack made in a debate.  If the attacker is lying, it is unethical regardless of the mode.  A television advertisement should not lie and neither should a pollster.  But a lie alone does not a "push poll" make.

Is this just a semantic distinction?  I don't think so.  Just about every campaign pollster, Democrat and Republican, uses surveys to test negative messages.  If you think negative ads by Democrats, including these examples, were produced without the benefit of survey-based message testing, you're dreaming.  If we choose to define a "push poll" as a survey that merely tests "the negative stuff," then we had better be ready to accuse just about every competitive campaign of the same "dirty tricks."

If a pollster lies in a real survey, that's sleazy and wrong.  If candidates distort the truth, let's call them on it.  But if we confuse negative campaigning -- or the survey research that supports it -- with the dirty tricks of true "push polling" then we too are distorting the truth.


Another Online Poll on Online Activity

Topics: Internet Polls , Sampling Error

Last week's "Numbers Guy" column by Carl Bialik of the Wall Street Journal Online looked at another online survey about an online activity, in this case online alcohol sales to teenagers.  Bialik noticed something in the release that other news outlets should at least have checked:  The "online" sample was not drawn from all teenagers, but rather from teenagers who had volunteered to participate in online surveys.  It was a non-random online survey of an online activity, something with the potential to create a significant bias. 

I will let Bialik take it from there: 

People who agree to participate in online surveys are, by definition, Internet users, something that not all teens are. (Also, people who actually take the time to complete such surveys may be more likely to be active, or heavy, Internet users.) It's safe to say that kids who use the Internet regularly are more likely to shop online than those who don't. Teenage Research Unlimited told me it weighted the survey results to adjust for age, sex, ethnicity and geography of respondents, but had no way to adjust for degree of Internet usage.

Regardless, the survey found that, after weighting, just 2.1% of the 1,001 respondents bought alcohol online -- compared with 56% who had consumed alcohol. Making the questionable assumption that their sample was representative of all Americans aged 14 to 20 with access to the Internet -- and not just those with the time and inclination to participate in online surveys -- the researchers concluded that 551,000 were buying alcohol online.

Bialik goes on to raise even more fundamental problems with the release from the survey sponsor -- the Wine and Spirits Wholesalers of America -- a group that, according to Bialik, "has long fought efforts to expand online sales of alcohol."  For one, its headline claims that "Millions of Kids Buy Internet Alcohol," while even the questionable survey estimate adds up to just 551,000. 
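A quick back-of-the-envelope check makes the gap plain.  The population base below is not stated in the release; it is simply what the sponsor's own numbers imply:

    # Checking the sponsor's projection: 2.1% of weighted respondents
    # reported buying alcohol online, and the release projects 551,000
    # teen buyers nationwide.
    share_buying = 0.021
    projected_buyers = 551000

    # The base the projection implies (teens aged 14 to 20 with
    # Internet access) -- roughly 26 million:
    implied_base = projected_buyers / share_buying
    print(round(implied_base))           # 26238095

    # And 551,000 is half a million, not "millions":
    print(projected_buyers >= 2000000)   # False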

One point not raised in Bialik's excellent piece is that the survey reports a margin of error ("plus or minus three percentage points").  The margin of error is a measure of random sampling error, which applies only where there is a random sample.  This survey was based on a sample drawn from a volunteer panel, not a random sample.  I have raised this issue before (here and here).   Yes, random sample surveys face challenges of their own, but if a sound statistical basis exists for reporting a "margin of error" for non-probability samples, I am not yet aware of it. 
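For readers who wonder where a figure like "plus or minus three points" comes from, here is a minimal sketch of the standard calculation under simple random sampling -- the very assumption this survey cannot support:

    import math

    # Margin of error for a simple random sample at the 95% confidence
    # level, using the conservative p = 0.5 that maximizes the variance.
    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    # For 1,001 respondents this works out to about +/- 3.1 points --
    # but only if the sample is random, which a volunteer panel is not.
    print("+/- %.1f points" % (100 * margin_of_error(1001)))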


CT: Party ID from Quinnipiac Poll


In the busy run-up to a long-needed vacation, I have not had a chance to write about the new Quinnipiac Connecticut poll released last week.  However, I did want to note (and promote) a helpful comment from reader Alan that provides information for those scrutinizing the results of that survey by party ID (which seems to be just about everyone):

In Quinnipiac's August 17, 2006 poll in Connecticut, the results are reported by party affiliation, but don't include the % of total respondents in each party, making it difficult to tell how much the Democrat, Republican and Independent voters weigh in the totals.

In case anyone else is looking for this, I asked Quinnipiac if they could provide this information, and this is what they gave me:

-------

Generally speaking, do you consider yourself a Republican, a Democrat, an Independent, or what?

LIKELY VOTERS

Republican    25%
Democrat      36%
Independent   34%
Other          4%
DK/NA          1%

Thanks, Alan!
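For anyone who wants to put these shares to work, here is a minimal sketch of how the party crosstabs roll up into a topline result.  Only the party shares above come from Quinnipiac; the per-party candidate numbers are hypothetical placeholders:

    # Recombining party-ID crosstabs into a topline.  The party shares
    # are Quinnipiac's; the per-party support numbers below are made-up
    # placeholders, not results from the actual survey.
    party_share = {"Republican": 0.25, "Democrat": 0.36,
                   "Independent": 0.34, "Other": 0.04, "DK/NA": 0.01}
    candidate_support = {"Republican": 0.20, "Democrat": 0.60,
                         "Independent": 0.45, "Other": 0.40, "DK/NA": 0.30}

    topline = sum(party_share[p] * candidate_support[p] for p in party_share)
    print("%.1f%%" % (100 * topline))    # each crosstab weighted by its share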


 
