Pollster.com

September 3, 2006 - September 9, 2006

 

A Real "Push Poll?"

Topics: Lincoln Chafee, Push "Polls", Stephen Laffey

Thanks to our friend DemFromCT for emailing news of what sounds like a genuine "push poll."  

Just last month, I wrote about allegations of "push polling" in congressional races in New York and Arizona.  In those cases, as in so many that appear almost daily in various newspapers across the country, the allegations sounded less like a true push poll (an effort to spread information disguised as a poll) and more like internal campaign surveys testing how voters respond to negative information.

Today the Associated Press reported that a group named "Common Sense" has made automated calls to voters in Rhode Island asking about their preference in the Republican primary for Senate between incumbent Lincoln Chafee and challenger Stephen Laffey.  According to the article, "those who chose Chafee heard graphic descriptions of an abortion procedure opponents call 'partial-birth abortion,' which the poll said Chafee supports." 

The giveaway that this was a true push poll -- at least in my view -- came in the next few paragraphs: 

Eva Geoppo, 57, of Providence, said she received four phone calls because she has multiple phone lines at home. On the first call she received, she chose Chafee when asked who she planned to vote for.

"It just freaked me out," said Geoppo, who owns a general contracting business. "They said something along the lines of 'Do you realize Sen. Chafee is for partial-birth abortions and he's a war monger?'"

The next time, she chose Laffey.

"It was 'Do you need a ride to the polls?'" she said.

Yes, a survey that samples randomly generated telephone numbers might, in theory, sample a household with multiple phone lines more than once.  The odds of someone with four lines getting called four times, however, are astronomically small.  Geoppo's experience implies an effort to systematically dial every phone in Rhode Island.  Moreover, real surveys do not offer to give individual voters a ride to the polls.
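To see just how long those odds are, consider a quick back-of-the-envelope calculation. Every number below -- the count of working residential lines in Rhode Island, the number of lines one survey dials -- is an assumption invented for illustration, not an actual survey parameter:

```python
# Under genuine random sampling without replacement, the chance that all four
# of one household's lines land in a single survey's sample is roughly (n/N)^4.
N = 400_000   # assumed working residential lines in Rhode Island (hypothetical)
n = 1_000     # assumed number of lines one survey dials (hypothetical)
k = 4         # lines belonging to the household in question

p = 1.0
for i in range(k):
    p *= (n - i) / (N - i)   # exact hypergeometric product

print(f"P(all {k} lines sampled) = {p:.1e}")   # about 3.9e-11
```

Odds on the order of one in tens of billions, in other words. Four calls to one household is not bad luck; it looks like a canvass.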

According to the AP story, the Chafee campaign claims that the group behind the calls is "an Ohio-based organization" called Common Sense 2006, and officials of the group did not respond to requests for comment. 

Interesting story.  We will pass along more news if it develops.


More Recollections of Warren Mitofsky

Topics: Exit Polls, Harry O'Neill, NCPP, Pollsters

My colleagues at the American Association for Public Opinion Research (AAPOR) have put up a web page tribute to Warren Mitofsky. It includes photos and recollections from many of his pollster colleagues posted over the last week on the organization's members-only listserv. I've copied a few of my favorites below, plus one tribute promoted from the comments section here on Pollster.

From Prof. Robert M. Groves of the University of Michigan:

Warren was a legend even when I joined AAPOR in the mid-1970's. Sometimes I think he was always a legend.

I first met him through working on RDD sample designs, just when the original Waksberg article came out. I called him to learn about how he handled the odd features of implementing that design; he did not know me at all, but treated me like a peer. Although he worked in a completely different domain of surveys than the one I was pursuing, I found him sharing all the values of research design and probability sampling. He knew the theory and the practice.

Later at AAPOR, he graciously introduced me to others; those were days when the interaction of the commercial and academic was strong and spirited. He scared me at first, with his gruff manner, but when we were talking surveys, he was open, friendly, and happy to see others learn what he knew. I still have vivid memories of a debate in a session at AAPOR, focused on the value of probability sampling in a world of high costs and low response rates. He was masterful.

Over the years, we invited him to speak to graduate students in survey methodology -- he was a hit because he was filled with stories that made principles memorable. I think he was a teacher by nature.

I do see this as the end of an era. I share the disbelief of others that he is really gone. He will not be replaced, and I will miss him.

[Link added].

From our friend Tom Riehle, of RT-Strategies:

It has been my pleasure over the past six or seven years to introduce pollsters from around the world to Warren. I would always tell them, after our meeting, that were it not for the man they just met, they would not have a job. They would scoff, but they did not know what they were talking about.

Warren made it possible to pursue a career in polling--not just because of the inventions and innovations in which Warren played a role (RDD, polls sponsored and distributed by national media outlets, election night projections and exit polls, just to name a few), but more importantly because of the work Warren (and Harry O'Neill and others) have done, through AAPOR and the National Council of Public Polls and other organizations, to establish and enforce the standards that make polling a trusted profession. It is hard to believe we will keep polling without Warren looking over our collective shoulders and calling us on our shortcomings (and encouraging our best efforts). But I guess the odds are, we will. Here's hoping we do so in the spirit and to the standards Warren always exemplified.

From Rick Brady, a frequent presence in the MysteryPollster comments section during our focus on the exit polls in 2004 and 2005:

Mr. Mitofsky.

That's how I addressed him in each e-mail volley following the 2004 election until he finally asked me to call him Warren. I suppose he realized that although annoying, I was harmless and genuinely interested in understanding exit polling methodology.

I'm not a pollster. Far from it. I'm a City Planning graduate student with a couple of survey research classes under my belt. I plopped some data into excel, ran some "tests" and by-golly, thought I knew a significant discrepancy between a poll and official vote count when I saw one.

But Warren was patient.

Although not immediately obvious from the content of his e-mails, the fact that he stuck with me and took the time to write nearly 100 e-mails in several months time, proved it. This weekend I re-read almost every one with fondness.

The AAPOR site has much more worth reading, of course, and presumably more tributes will be added to the page soon. If my colleagues are reading, I'd like to nominate one left here as a comment by my friend Elizabeth Liddle (whose contributions to the 2004 exit poll post-mortems I chronicled here and here):

I never met him, but worked with him on the exit poll data over the last year. Your obituary certainly fits the man I got to know.

It still amazes me to realize that the man who dominated the exit poll field for decades took a criticism by a novice on a blog (me) seriously enough not only to re-analyze his own data, but to hire me to do more. It is so rare to encounter someone driven so much more strongly by a passion to know than by conventional assessments of whose opinion might be worth attending to. And to find it in someone of Mitofsky's professional standing is even rarer.

I will miss him.

Photos courtesy of AAPOR.  Above: Warren Mitofsky and Joe Waksberg in an undated photo, probably taken on an election night in the early 1980s.


The Pollster That Made Up Data


Here is a story we hate to see.  The manager of a survey call center that did work for Republican pollsters pleaded guilty yesterday to defrauding her clients and now faces up to five years in prison and a possible $250,000 fine.  The Associated Press has the ugly details: 

According to a federal indictment, Costin told employees to alter poll data, and managers at the company told employees to "talk to cats and dogs" when instructing them to fabricate the surveys.

An FBI affidavit from 2004 quotes a supervisor of the company estimating that 50 percent of the data sent to Bush's campaign was falsified. FBI Special Agent Jeff Rovelli, who wrote the affidavit, said Thursday that investigators were not able to verify the claim related to Bush because that data was not located and analyzed.

Assistant U.S. Attorney Edward Chang said on several occasions when the company was running up against a deadline to complete a job, results were falsified. Sometimes, the respondent's gender or political affiliation were changed to meet a quota, other times all survey answers were fabricated.

Most pollsters, including a large number of news media pollsters, use third-party call centers to conduct their interviews.  I do not have personal experience with DataUSA, the firm Costin worked for (now known as Viewpoint USA), but it certainly sounds like DataUSA was a subcontractor used by at least one of the pollsters working for the Bush campaign.

The 13-page indictment issued in March (via White Collar Crime Prof) has some details that other pollsters should find chilling.  In addition to the falsification described above, it claims that Costin and others instructed employees to alter and falsify the gender and political affiliation of respondents to meet job quotas and deadlines, and "modified the survey script by altering [interviewing software] source code to eliminate questions and undesired data" (p. 5).  More important, it alleges that the conspiracy began "in and around June of 2001" (p. 3) and lists specific instances of falsification that occurred between June 2002 and October 2004.   If this particular conspiracy went undetected for two to three years, you have to wonder if this is just an isolated incident or if the problem might be more widespread. 


About those Majority Watch Congressional District Polls...

Topics: 2006, Cook Political Report, IVR, IVR Polls, Rasmussen, Registration Based Sampling, Sampling, SurveyUSA, The 2006 Race, US Census

The big news yesterday for true political junkies was the release of separate polls conducted simultaneously in 27 of the most competitive districts nationwide (with surveys in three more districts ongoing), using an automated recorded voice rather than live interviewers. The surveys were conducted for a project dubbed "Majority Watch" by the team of RT-Strategies, a DC-based firm that polls for the Cook Political Report, and Constituent Dynamics, a company that specializes in the new automated methodology. While the slick Majority Watch website provides full crosstabs if you click far enough, many readers have asked: Are these surveys legitimate? Are they reliable? The best answer, to paraphrase the Magic Eight Ball, is "reply hazy, ask again later."

The formal name for the automated methodology is Interactive Voice Response (IVR). Two companies - SurveyUSA and Rasmussen Reports - have conducted IVR surveys for years. While those companies do many things differently, both typically sample using a random digit dial (RDD) methodology that has the potential to reach every working land-line phone in a particular state. Unlike traditional surveys, the IVR polls use a recorded voice, rather than a live interviewer, and respondents must answer by pressing the keys on their touch-tone telephone. With IVR, the pollster's ability to randomly select a member of each sampled household is also far more limited.
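For readers who have never seen RDD described concretely, here is a toy sketch of the idea. The exchange prefixes are invented, and real RDD designs (such as the Mitofsky-Waksberg method) are considerably more elaborate, but the principle -- append random suffixes so unlisted numbers get covered -- looks like this:

```python
import random

# Toy illustration of random digit dialing (RDD): start from known working
# area-code/exchange combinations and append random four-digit suffixes, so
# unlisted numbers have the same chance of selection as listed ones.
# These exchange prefixes are invented for illustration.
exchanges = ["401-272", "401-331", "401-421"]

def rdd_sample(n_numbers):
    sample = []
    for _ in range(n_numbers):
        prefix = random.choice(exchanges)
        suffix = random.randint(0, 9999)      # any of the 10,000 possible suffixes
        sample.append(f"{prefix}-{suffix:04d}")
    return sample

print(rdd_sample(5))
```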

The Majority Watch surveys add a few new twists. My friend Tom Riehle of RT-Strategies kindly provided some additional details not included on the Majority Watch methodology page:

1) Majority Watch drew its samples from lists of registered voters rather than through random digit dial sampling. The advantage of this approach is that it solves the problem of how to limit the survey to those living in the correct district (a big challenge with RDD sampling). It also excludes non-registrants and allows the use of individual-level vote history to determine who is a "likely voter."

The downside to voter list sampling - sometimes called Registration Based Sampling (RBS) - is that it only covers voters who have either provided their phone number to the registrar of voters or whose numbers are listed in public phone directories. "Match rates" (the percentage of voters on the list with working phone numbers) vary widely from state to state and district to district, but rarely exceed 60%. If the uncovered 40% (or more) differ in their politics, a bias can result.
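The arithmetic of that potential bias is worth making explicit. In the sketch below every figure is invented; the point is simply that the error equals the uncovered share multiplied by the difference between the voters you can reach and those you cannot:

```python
# All figures below are invented. Only "matched" voters (those with a usable
# phone number) can be interviewed, so the survey estimate is the matched
# group's preference; the error is the uncovered share times the gap between
# the two groups.
match_rate = 0.60      # share of listed voters with a working phone number
p_matched = 0.52       # assumed candidate support among matched voters
p_unmatched = 0.46     # assumed support among voters the survey cannot reach

true_support = match_rate * p_matched + (1 - match_rate) * p_unmatched
survey_estimate = p_matched
bias = survey_estimate - true_support   # = (1 - match_rate) * (p_matched - p_unmatched)

print(f"true {true_support:.3f}, estimate {survey_estimate:.3f}, bias {bias:+.3f}")
# true 0.496, estimate 0.520, bias +0.024 -- about 2.4 points
```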

Pollsters continue to debate the merits of RDD and RBS sampling, and that debate deserves more attention than I will give it today. The short story is that most media pollsters continue to use RDD sampling, especially for national polls. Internal campaign pollsters have been making far greater use of list sampling, especially at the Congressional District level where they use RBS almost exclusively.

2) Majority Watch used individual vote history to select the "likely voters." The lists provided in most states by the registrar of voters typically report vote history. If you voted in the 2004 presidential election, but not in that school board election in 2005, the list will say so. It is a matter of public record. Majority Watch used an approach common to campaign pollsters: They sampled only those who cast votes in at least two of the last four general elections in their precinct (which included "off-year contests" in 2005 and 2003). What this means, in effect, is that most of their respondents voted in the 2004 presidential election and at least one other general election.

Majority Watch based their "likely voter" model entirely on vote history from the list, and did not ask "screen" questions to select their sample.
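For the programming-minded, such a screen amounts to nothing more than a filter over the voter file. A minimal sketch, with an invented record layout:

```python
# A minimal sketch of the vote-history screen described above: keep registrants
# who voted in at least two of the last four general elections. The record
# layout is invented for illustration.
GENERAL_ELECTIONS = ["2005-11", "2004-11", "2003-11", "2002-11"]

voter_file = [
    {"name": "A", "history": {"2004-11", "2002-11"}},             # qualifies
    {"name": "B", "history": {"2004-11"}},                        # screened out
    {"name": "C", "history": {"2005-11", "2004-11", "2003-11"}},  # qualifies
]

likely_voters = [
    v for v in voter_file
    if sum(e in v["history"] for e in GENERAL_ELECTIONS) >= 2
]
print([v["name"] for v in likely_voters])   # ['A', 'C']
```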

3) The Majority Watch pollsters used an interesting approach to selecting a random voter in each household and matching the interviewed respondent to the actual voter on the list. They randomly selected one voter to be interviewed within each household, but then used the automated method to interview whoever answered the phone. The interview included questions asking respondents to report their gender and age. After each interview, a computer algorithm checked to see if the reported gender and age matched the data for that individual on the voter file. If the gender and age data did not match, they threw out the interview and did not include it in their tabulations.
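In code, that validation step might look something like the sketch below. The field names and the exact matching tolerance are my assumptions, not details Majority Watch has published:

```python
# Sketch of the post-interview check described above: compare the gender and
# age the respondent keyed in against the voter-file record of the person
# randomly selected in that household, and throw out mismatches. Field names
# and the one-year age tolerance are assumptions for illustration.
def keep_interview(interview, voter_record, age_tolerance=1):
    if interview["gender"] != voter_record["gender"]:
        return False
    return abs(interview["age"] - voter_record["age"]) <= age_tolerance

selected = {"gender": "F", "age": 57}        # the voter drawn from the list
answered = {"gender": "M", "age": 34}        # whoever actually pressed the keys
print(keep_interview(answered, selected))    # False -> discard this interview
```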

4) According to the methodology page, the Majority Watch pollsters then weighted their data "to represent the likely electorate by demographic factors such as age, sex, race and geographic location in the CD." But how did they determine the demographics of the likely electorate in each district? The answer is surprisingly complicated.

They obtained data on each district reported by the U.S. Census Bureau as part of the Current Population Survey (CPS). Keep in mind, as noted in a post last week, the CPS is also a survey (albeit with a huge sample size and very high response rate), subject to some of the same over-reporting of voting behavior as other surveys.

The Census publishes data for gender and race by Congressional District, but not for age. So the Majority Watch pollsters created their own estimate of the age distribution by applying state-level CPS estimates of turnout by age cohort to the district-level estimates of age for all adults. If that last sentence was confusing - and I know it was - don't worry. Just note that the estimate of age for "2002 Voters" provided in the lower left portion of each district page on the Majority Watch site is their estimate, extrapolated from statewide CPS data, and not an official Census estimate.
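For those who would rather see that confusing sentence unpacked than take my word for it, here is the extrapolation worked through with invented numbers:

```python
# The extrapolation worked through with invented numbers: scale the district's
# adult age distribution (from the CPS) by statewide CPS turnout rates for each
# age cohort, then renormalize to get an estimated age profile of voters.
district_adults = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # district, all adults
state_turnout   = {"18-34": 0.35, "35-54": 0.55, "55+": 0.65}  # statewide turnout rates

raw = {age: district_adults[age] * state_turnout[age] for age in district_adults}
total = sum(raw.values())
voter_age_profile = {age: round(share / total, 2) for age, share in raw.items()}

print(voter_age_profile)   # {'18-34': 0.2, '35-54': 0.42, '55+': 0.38}
```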

Also note something that has confused many readers who have looked at the Majority Watch web site. All of the demographic data that appears on their district-level pages is taken (or derived) from U.S. Census data. It is not based on data from their surveys!

Finally, those who have drilled down deep into the Majority Watch crosstabs will notice that the age distribution on the poll is older than their Census-based age estimate. That is because the Majority Watch pollsters also looked at the estimated age distribution obtained directly from the voter lists (based on the birthdays voters provide when they register to vote). This subject is definitely worthy of more discussion, but voter lists consistently show an older electorate than the CPS survey estimates. The Majority Watch pollsters set an age target that was a meld of their CPS estimate/extrapolation and the list estimates, and weighted the data to match. How accurate and reliable is this approach? I have no idea, and am quite sure other pollsters will see shortcomings.
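As best I can tell, the weighting itself amounts to simple post-stratification against that melded target: each respondent in an age group gets the same weight, equal to the target share divided by the unweighted sample share. A sketch, again with invented numbers and an arbitrary 50/50 blend, since the actual mix Majority Watch used is not public:

```python
# Simple post-stratification on age with invented numbers: the target is a
# blend ("meld") of the CPS-derived estimate and the voter-list age profile;
# each respondent's weight is target share / unweighted sample share. The
# 50/50 blend is arbitrary -- the actual mix Majority Watch used is not public.
cps_est  = {"18-34": 0.20, "35-54": 0.42, "55+": 0.38}  # CPS extrapolation (above)
list_est = {"18-34": 0.12, "35-54": 0.38, "55+": 0.50}  # voter lists skew older
blend = 0.5

target = {a: blend * cps_est[a] + (1 - blend) * list_est[a] for a in cps_est}
sample = {"18-34": 0.10, "35-54": 0.40, "55+": 0.50}    # unweighted respondents

weights = {a: round(target[a] / sample[a], 2) for a in target}
print(weights)   # {'18-34': 1.6, '35-54': 1.0, '55+': 0.88}
```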

Here is the bottom line for those wondering how much faith to put in the Majority Watch data: Political polling gets considerably more difficult at the congressional district level. While the Majority Watch approach is innovative, it is also new and untested, and it includes a lot of departures from standard survey practice. And, according to Tom Riehle, the design of the survey may evolve between now and Election Day (and yes, future tracking surveys are planned):

We are privately taking the methodology and results to some tough critics to find out what questions they ask that we may not have thought to ask, in order to keep moving the quality closer to the best quality that can be achieved in telephone interviewing. In that sense, this is a work in progress, because we have made our best effort to develop an excellent methodology, and will continually improve that methodology based on the informed and legitimate questions of methodological critics.

The Majority Watch surveys may turn out to yield reliable results, or they may not. We really will not know until we watch how they compare to other public polls and the ultimate election results. And here at Pollster.com, we are hoping to help you do just that.


Morin on Mitofsky

Topics: Pollsters, Warren Mitofsky, Washington Post

This morning's Washington Post has the quintessential Warren Mitofsky appreciation. It is absolutely worth reading in full, and it is hard to pick just one paragraph to excerpt. Here are a few that stand out:

Mitofsky cared deeply, passionately and sometimes explosively about his profession and his place in it. He didn't tolerate fools, poseurs or corporate tools, and he delighted in telling them so. Even his friends agree that he began too many sentences with the words, "Here's why you're wrong . . ." As he said it, he inevitably smiled that off-kilter, crocodile smile that he flashed whether he was pleased or angry.

It was that smile I remember most. We met at a conference 19 years ago, soon after I was hired to be director of polling for The Washington Post. I introduced myself. "Congratulations," he said, smiling broadly. "I've never heard of you."

The piece also includes some discussion of the exit poll controversy following the 2004 elections, and this bit of advice, well worth heeding by all of us who obsess over political polls:

In a way, Mitofsky fell victim in later life to his own success and formidable reputation. "People are expecting perfection out of the polls and out of me," he said. "They're thinking they're really going to make a decision on the outcome of close races based on exit polls. . . . Exit polls are not that good. They're approximate."

Richard Morin knew Warren Mitofsky well and captured him perfectly. Go read it all.


Warren Mitofsky: An Appreciation

Topics: CBS, CBS/New York Times, Exit Polls, Murray Edelman, Pollsters

Sixteen years ago, I called Warren Mitofsky at his office in New York with a question. What made the conversation remarkable was neither the reason for my query nor the substance of his answer. What was remarkable was that he took the call at all.

At the ripe old age of 27, I had barely four years of experience in the polling business. I knew just enough about methodology to be dangerous, yet in retrospect, not nearly enough to know what I didn't know. I had a question about the methods Mitofsky had implemented at CBS, and I could not find the answer on my own. So my employer at the time suggested I give him a call.

By then I knew the Mitofsky legend well. Along with colleague Joe Waksberg, he invented a more efficient method to draw random digit dial (RDD) telephone samples that became an industry standard. He conducted the first exit polls for CBS News and created the election projection system now used by all of the U.S. television networks to project winners. As director of the CBS election and survey unit from 1967 to 1990, he helped create the CBS/New York Times polling partnership that became a model for other news outlets. When I placed my call in 1990, he was in the midst of creating the multi-network consortium that he would direct for another three years. Mitofsky would continue to play a major role in directing network exit polling until his untimely death last Friday.

I called with some trepidation sixteen years ago, and to my surprise he came on the line almost immediately. My odd question betrayed my own ignorance and, as I recall, puzzled him. He could have easily brushed me off, admonished me for wasting his time or lectured me about my need for more education in survey fundamentals. Yet he did none of those things. Calmly and patiently, he explained some of the "probability methods" pollsters use to select respondents within a sampled household and made some suggestions about where I might go to learn more. I remember feeling embarrassed yet also amazed that this polling legend had taken a few minutes of his valuable time to encourage my own naive curiosity about how to conduct good research.

Warren was like that. He is best known for his ardent devotion to the very highest standards in survey research. Get on the wrong side of that passion and you would likely end up, as his long-time CBS colleague Murray Edelman put it Saturday, with "the scars to prove it." He could also be famously thin-skinned about criticism he considered ill-informed. Yet beneath the curmudgeonly public persona beat the heart of a scholar and teacher, always open to learning from his colleagues, always ready to share his own wisdom with those genuinely willing to learn.

Less known among Mitofsky's many accomplishments was his commitment to making raw data available to scholars. His life's work -- the CBS/New York Times surveys and most of his exit polls -- has long been archived at both the Roper Center and the University of Michigan's Inter-university Consortium for Political and Social Research (ICPSR). He had most recently been serving as chairman of the Roper Center board of directors.

One could see both the passion and the commitment to learning in his prolific contributions to the members-only listserv of the American Association for Public Opinion Research (AAPOR). In a sense, he became something of a proto-blogger over the last ten years, posting a steady stream of comments or responding to questions at all hours of the day or night. To be sure, he could blast with both barrels at arguments he considered wrongheaded. Yet despite his prominence he always seemed willing to engage any AAPOR colleague, regardless of stature, as an equal worthy of respect.

There were also frequent flashes of his particular brand of wry humor. In 2002, he posted to his fellow pollsters an Associated Press account of rapper P. Diddy's efforts to get into the market research business (with the lead "the 'P' in P. Diddy stands for 'public opinion'"). His subject line: "Our days are numbered."

The humor was often self-deprecating. Just last year an AAPOR member asked how best to respond to the backhanded compliment, "you should have a PhD." Mitofsky, who had been a doctoral candidate in mass communications but never completed his degree, responded, "I just tell people I'm still working on it. I'm a slow reader."

And finally there was his response to a discussion about whether the term "pollster" contributes to the negative opinion of survey researchers. Those who knew Mitofsky will probably hear that lilt in his voice and see the twinkle in his eye that would have accompanied the final sentence of the following paragraph:

If you wonder why the term pollster is not viewed favorably, here is how some academics view polls: At an [American Political Science Association] convention meeting a professor started reporting on all the surveys done about the presidential debates during the 1976 campaign. When he finished I pointed out that he had omitted the extensive research that CBS and NY Times did on the debates. He responded by stating, "I just reported the surveys. Yours will be reported when we get to the polls." Ever since then I have understood that a survey is done by the academics or the government. Polls are what the media does. However, a poll can become a survey if archived at a reputable academic institution.

Warren Mitofsky showed us all how a "poll" can attain the highest standards of scientific survey research. He made "pollster" a label I will always wear as a badge of honor.

We will miss him.


 
