This week, many columnists and bloggers took note of the apparent drop in Republican party identification on many of the recent national surveys, especially after both Gallup and the Pew Research Center released helpful summaries with charts showing their tracking of long-term trends (those links are well worth following for more detail). Although the Democratic advantage has definitely grown over the last two to four years, the changes over the last few months look more like a modest rise in independence than a shift benefiting either party.
Nate Silver also took up the issue yesterday with a chart featuring party identification data from six national polls and a LOESS regression trend line similar to what we feature in our standard charts. That post goaded us into moving a party ID chart up our own to-do list.
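For readers curious about the mechanics: a LOESS trend line fits a separate weighted regression in the neighborhood of each point on the time axis, down-weighting polls the further they fall from that point. A minimal sketch in Python (the tricube weighting and local-linear fit are the textbook choices; the 30% smoothing fraction is an illustrative assumption, not the parameter our charts actually use):

```python
import numpy as np

def loess(x, y, frac=0.3):
    """Local linear regression (LOESS) with tricube weights.

    For each x[i], fit a weighted least-squares line to the
    nearest `frac` share of the points and evaluate it at x[i].
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))        # points in each local fit
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]                 # bandwidth: k-th nearest distance
        h = h if h > 0 else 1.0
        w = np.clip(1 - (d / h) ** 3, 0, 1) ** 3   # tricube weights
        sw = np.sqrt(w)
        # weighted least squares for a local intercept + slope
        X = np.column_stack([np.ones(n), x - x[i]])
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        fitted[i] = beta[0]                   # local line evaluated at x[i]
    return fitted
```

Because each local fit is linear, the smoother tracks gradual shifts in party ID while averaging out the poll-to-poll noise.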
Our new flash chart, embedded below, will update with each new national poll that features party ID results. Nate's chart had data from six pollsters -- we add six more, most notably the 20 or so polls Gallup has conducted since Labor Day:
The most striking feature of the chart is not the drop in Republicans, but rather the increase in independent identification. The decline in Republican ID is matched by a nearly parallel decline for the Democrats. That trend is not surprising, as partisan identification often increases slightly during the last few months of an election year and fades afterward. However, note that the Pew Research report labels the magnitude of the increase among independents "noteworthy," as it appears much greater than what they observed in 2005.
As for our chart, keep in mind that you can use all of the interactive features to explore the "house effects" in evidence in the party ID results reported by various pollsters. Point to any dot on the chart to see a "tool-tip" box with the name of the Pollster and information on the specific release. Click on it to "connect the dots" of polls released by that pollster. Use the "filter" tool to see how the trend line would change if you drop certain polls from the chart.
One thing that will become evident quite quickly is the wide variation in the degree of independence measured by various pollsters. For a few polls (Fox News, Diageo/Hotline), the greater apparent partisanship is likely related to their interviewing only registered voters rather than all adults. Much of the poll-to-poll variation results from subtle differences in question text and the extent to which pollsters train their interviewers to push for an answer. Generally, the more interviewers push for an answer (even before asking independents how they "lean"), the more partisans they get.
And one last thought: One very gratifying aspect of the chart is the number of pollsters that now routinely release their party identification results. Four or five years ago, it was not so.
What do we make of President Obama's first 100 days? The answer is that he falls within the norm. His favorability rating--which is certainly high, but not historic--puts him about where George W. Bush was at this time in 2001. In fact, his rating is similar to Jimmy Carter's in April of 1977 and Richard Nixon's in April of 1969. President Obama's approval rating is typical of a "change" election President. The only "outlier" is Bill Clinton, and that is most likely due to his low vote share (43%) in the election. So forget the 100 day hype. Obama is where he should be.
Some observations on the political environment and marketplace:
1. Watch Independent voters when trying to assess how well the President is doing. While Independents gave the President strong job approval ratings in the early days of his administration, their disapproval grew rapidly so that by mid-March those Independents disapproving of the President's job rose to nearly 40%. Since that time, this number has dropped to around 30%. Our sense is that the President needs to keep his disapproval rating among Independent voters either at or below 40% to successfully sell his policies.
2. If the current trend continues, within the next 60-90 days a majority of the country will think things are going in the "right direction" for the first time in nearly six years. Not since the first quarter of 2003 (just prior to the start of the Iraq war) has a majority of Americans believed that things were going in the right direction. National polling suggests that there has been substantive improvement in the perception of the direction of the country in the last several months. Since the election, the percentage of Americans who think the country is off on the wrong track has declined by more than 20 points. In fact, the biggest drop was the period between the election and the President's inauguration, and there has been a steady wrong track reduction since that time. Today, the country is almost evenly-split between people believing that things are going in the "right direction" and those who think that the country is "off on the wrong track."
3. It's almost all about the stock market. We should really start calling it the political-economy. If you are trying to figure out whether Americans are feeling better or worse about the President or the economy, just check the stock market. Since March 9th, when the Dow hit bottom at 6547, it has improved roughly 26% to 8254 (at the time of this writing).
Dow Jones Industrial Average in 2009
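The size of that rally is easy to verify from the two index values cited above (a quick back-of-the-envelope check, not live market data):

```python
# Percent change in the Dow from the March 9 low to the close
# cited in the post (both figures as given in the text).
low, current = 6547, 8254
pct_gain = 100 * (current - low) / low
# pct_gain is about 26.1, i.e. a gain of roughly 26% off the bottom
```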
Now look at public opinion polling on the state of the economy. For nearly one month after passage of the stimulus bill and its signing by the President on February 17th, there was very little movement on the measure. Approximately 70% of voters thought the country was getting worse for that period of time. It wasn't until the market uptick that started in the middle of March that you began to see fewer people saying that the economy was worsening and more saying that it was getting better. Of course, the media echo chamber is a contributor, as were some of the data (economic indicators) we began seeing in the last month. During the same period, we also saw the President's job approval improve. While they will never say it, the White House political operatives are paying close attention to the stock market because for the immediate future, their success is intrinsically tied to it.
An update on Monday's post that linked to ABC News polling director Gary Langer's recent column criticizing polls based on "opt-in" internet panel surveys that report a margin of error. "You need a probability sample to compute sampling error," he wrote. Opt-in panels that claim sampling error are "trying to nose their way" into "the house" of probability sampling and "don't belong there."
Simon Jackman, professor of political science at Stanford University, responded with a blog post that takes a different perspective:
Observation: all survey respondents “opt-in”. Would-be respondents (selected via random sampling or not) decide whether to respond or not, or can’t be reached at all. We then weight the data we get to try to deal with any resulting biases. The resulting standard errors should be computed taking the weighting into account (in almost all media polling I see, they are not, and the standard error is computed a la Stats 101 with the number of completed interviews in the denominator), but in any event, even the correct standard errors are conditional on the way the weights were computed. The Stats 101 “textbook purity” of “simple random sampling” has long been left behind…particularly given some of the horror stories you hear about [random digit dial] RDD [telephone survey] response rates.
So I tend to think the “you can’t trust opt-in Internet polls” line is something of a beat-up. Sure, there is work to be done in understanding the properties of data generated this way, and how to compute a standard error with these data. I don’t see this as an impossible hill to climb. It is critical that this work get done, because if/when we can get comfortable with the bias issues (and we know what the issues are), then I think it’s game over.
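Jackman's point about weighting and standard errors can be made concrete with Kish's approximation: weighting shrinks the effective sample size to (Σw)²/Σw², so the honest margin of error should be computed on that smaller n, not on the raw number of completed interviews. A sketch (the weights below are invented for illustration, not drawn from any actual poll):

```python
import math

def effective_n(weights):
    """Kish effective sample size: (sum of w)^2 / sum of w^2.

    Equals n when all weights are equal, and shrinks as the
    weights become more variable -- the variance inflation
    ("design effect") that weighting introduces.
    """
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

def moe(p, n, z=1.96):
    """Half-width of a 95% interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,000 interviews, with half the cases up-weighted 2:1
weights = [2.0] * 500 + [1.0] * 500
n_eff = effective_n(weights)       # 900.0, not 1,000
naive = moe(0.50, 1000)            # "Stats 101" margin: 0.031, about 3.1 points
adjusted = moe(0.50, n_eff)        # accounts for weighting: about 3.3 points
```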
Jackman, we should point out, was a principal investigator for the Cooperative Campaign Analysis Project (CCAP), an academic survey conducted using the opt-in internet panel maintained by YouGov/Polimetrix (the company that also owns Pollster.com). Nevertheless, this conversation is important since, as Jackman points out, "internet polling is not going away" and just about every pollster I know concedes that we need to find a way to "solve the problem" of random sampling over the internet.
SurveyUSA is currently tracking national public opinion regarding the H1N1 swine flu. Their first night of tracking can be found here and was conducted on 4/27 among 1,200 adults. The second night's results (conducted 4/28) can be found here. SurveyUSA will continue to track this regularly going forward.
How concerned are you that you might get sick from Swine flu? Very concerned? Somewhat concerned? Not very concerned? Or not at all concerned?
22% Very (4/27: 13%)
28% Somewhat (32%)
33% Not Very (38%)
16% Not At All (16%)
Should airline flights between the United States and Mexico continue? Or stop?
And, based on your expectations of him before he took office as president, would you say he's now doing ...(ROTATED) a better job than you expected, a worse job than you expected, or is he doing the kind of job you expected he would do?
A few more details on the new national poll we just posted from a name you have not heard before. Resurgent Republic is the name of a new conservative organization, headed by Republican pollster Whit Ayres (of Ayres, McHenry & Associates) and former Republican party chair Ed Gillespie. It is explicitly modeled on Democracy Corps, the group led by Democratic pollster Stan Greenberg. From their "about" page:
Resurgent Republic is a 501(c)(4) organization dedicated to shaping the debate over the proper role of government. It is an independent, not-for-profit organization modeled on Democracy Corps, which has made important contributions to the public debate from the left and has proven to be a valuable resource for labor unions, environmentalists, and liberal Congressional leaders.
Resurgent Republic seeks to replicate on the right the success Democracy Corps has enjoyed on the left. As does Democracy Corps, Resurgent Republic will make survey and focus group results publicly available.
Resurgent Republic's advisory board also includes a who's who of respected Republican campaign pollsters: Glen Bolger, Linda DiVall, Ed Goeas, John McLaughlin and Jan Van Lohuizen.
Keep in mind that Resurgent Republic, like Democracy Corps, aims primarily to "shape the debate" in Washington. The pollsters involved have reputations for providing their clients with accurate data, and both Ayres and Greenberg are very good about releasing complete, filled-in questionnaires for every survey (allowing us to see full text, question order and results from demographic items). Still, their choices about what questions to include in public surveys and what results to highlight in public memoranda are shaped by the goal of influencing opinion leaders. As with any sponsored research, take it with a grain of salt.
Yesterday's departure of Sen. Arlen Specter from the Republican Party re-opened the debate over the ideological direction of the Republican Party. Did the GOP move away from Specter, or was it Specter that left the GOP? Where do the American people fall?
My focus on this site over the last few weeks has been on young voters. And most of the news I have had for the Republican Party has been bad news, presenting a picture of a young cohort less convinced of the virtues of limited government, more supportive of gay marriage, and containing more members of minority groups less prone to voting Republican.
In all of this, the overall ideological makeup of young voters has not yet been examined. Are young voters more liberal than older voters? Are they more likely to identify as Democrats? Recently on The View, Meghan McCain declared that 81% of young voters identified as Democrats. Though I appreciate Ms. McCain's efforts to draw attention to the GOP's troubles with young voters, the number is greatly exaggerated (and I would argue that exaggerating the problem does the cause no favors).
But the actual numbers are not much more pleasant for the GOP. According to the EMR presidential exit polls, in 2008 45% of voters 18-29 identified as Democrats while only 27% identified as Republicans. The gap between Democratic and Republican identification has not been so wide since 1976, when only 19% of voters 18-29 identified as Republican. Yet in 1976, young voters did not flee the GOP for the Democratic party. The above figure shows that voters left the Republican Party and became independents that year; Democrats actually saw a 7 point dip among 18-29 year olds in 1976 as well.
The 2008 shift is most concerning for the Republican Party in two ways. First, it shows the highest proportion of young voters identifying as Democrats since 1972. Second, it shows the largest gap between 18-29 year old party ID and overall party ID in that same time frame. Consider 1976, when the post-Watergate voters abandoned the GOP. In that year, Democrats enjoyed a 16 point advantage over Republicans overall. The gap among 18-29 year olds was 21 points - large to be sure, but not so different from voters overall.
Yet in 2008, there was a more marked difference between young voters and the overall electorate. While Democrats held a 7 point advantage over Republicans in terms of party identification overall, that advantage jumps to 18 points among voters 18-29.
However, in terms of ideology, while young voters are quite different from voters overall, the major change did not occur this year or even this decade. In 2008, "liberal" made a one point gain among young voters and "conservative" a one point loss. The change among young voters didn't look terribly different from the change (or lack thereof) overall, a surprising finding given the major shift in partisan identification.
What is interesting is to take a look at 1992, when "liberal" overtook "conservative" among young voters. Conservatism took a five point hit overall that year, but an eight point drop among young voters. Meanwhile, "liberal" picked up three points overall and seven points among young voters. Ever since 1992 re-calibrated the ideological makeup of the young electorate, the "liberal" label has outpaced "conservative."
Even odder, take a look back at the first chart of party identification. In 1992, the year the young electorate began identifying "liberal" more often than "conservative", the partisan makeup of young voters was actually more Republican than voters overall. So is ideology simply not as linked to partisan behavior? Or did the ideological shift in the early 1990's simply wait to manifest itself in 2008 as a party identification shift due to a different ideological alignment of the parties themselves? The Republican Party in the 1990's and early 2000s was able to attract young voters despite the fact that young voters were more likely to be liberal than conservative. Even as recently as 2004, Democrats only had a 2 point advantage among young voters.
Between 2004 and 2008, young voters' more liberal ideology started to match up with their partisan identification. A center-left young electorate (emphasis on center) was no longer evenly divided between the parties. As for reasons why, there are countless theories that have been offered to explain the shift. Some say young voters felt out of touch with a GOP that had nominated an older candidate (indeed, look at 1996 when the Republican Party ran the older Bob Dole against Bill Clinton). Some say the Republican Party moved to the right and became an unacceptable option for young center or center-left voters. Some may point to Obama himself as a large driver of young voters affiliating with the Democratic Party.
In order to evaluate the claim that young voters left the Republican Party because of the allure of the Obama candidacy, it is helpful to look at the 2006 election and a handful of midterms preceding it. If the Obama candidacy itself was driving young voters to become Democrats, we would expect to see young voter party identification that was similar to overall party identification, or at least behavior that makes sense in the context of the previous election or two. Yet while in 1998 and 2002 there were roughly equivalent numbers of young Republicans and young Democrats showing up at the polls, in 2006 there was a massive shift toward the Democrats, ending in a twelve point Democratic advantage in party identification [in the electorate overall, that advantage wound up being two points, a far smaller gap].
As it turns out, young voters began abandoning the Republican Party long before Barack Obama was even a serious contender for the presidency. Those pinning the Republican Party's poor fortunes among young voters on the Obama candidacy miss the source of the problem and certainly underestimate its severity.
I've been troubled in recent months when discussing the issue of young voters with some fellow Republicans. There seems to be a sort of conventional wisdom that we should expect young voters to trend liberal and Democratic, that the behavior of young voters in 2008 is not serious cause for concern. This stems from a belief in partisanship as a life-cycle factor, that voters start liberal and Democratic and wind up older, conservative, and Republican. But the data paint a very different picture. Take the graph of partisan identification for instance; over the last few decades, young voters have not identified with the Democratic party in substantially higher numbers than voters overall. Even conservatism had its moment among young voters in the 1980's. Yet with the end of the Reagan presidency, young voters shifted toward liberalism. This ideological shift did not play out into actual partisan identification in a meaningful way until 2006 and 2008.
Another bit of conventional wisdom I hear from my fellow Republicans about the youth vote is that they need to vote Democratic twice before they are "locked in for life", supporting the notion that there is still time to turn the tide among this generation. Unfortunately, given that the shift began in 2006 and not 2008, for many voters the GOP may simply be too late. For the rest, if the Republican Party does not take immediate action to repair its brand, this generation may exhibit similarly low levels of Republican identification for years to come.
4/20-26/09; 3,534 adults, 2% margin of error
Mode: Live Telephone Interviews
"As President Barack Obama concludes his first 100 days on the job, Gallup Poll Daily tracking for the week of April 20-26 finds 65% of Americans approving of how he is doing and only 29% disapproving. Obama's average weekly job ratings have varied only slightly thus far, ranging from 61% to 67%.
The new president's approval rating at the 100-day mark is notable in that nearly all major demographic categories of Americans are pleased with his job performance, as evidenced by approval ratings above the majority level. Only in terms of political and ideological categories does Obama have a significant proportion of detractors; a majority of Republicans and self-described "conservatives" disapprove of his job performance."
Some miscellaneous polling related fallout from the announcement today by Pennsylvania Senator Arlen Specter that he is changing his party affiliation from Republican to Democrat:
Specter's statement includes a reference that implies he has been watching the public opinion polls:
Since [voting for the stimulus package], I have traveled the State, talked to Republican leaders and office-holders and my supporters and I have carefully examined public opinion. It has become clear to me that the stimulus vote caused a schism which makes our differences irreconcilable.
Specter's long-time pollsters, the Republican firm Public Opinion Strategies, has announced they are resigning from Specter's team and posted the following statement on their company web site:
"Senator Specter has been a record-setting U.S. Senator, and we have been part of his campaign team in 1992, 1998, and 2004, but because of his surprising decision to switch parties today, we will no longer be involved," said Glen Bolger, a Partner in the firm who worked on the '98 and '04 campaigns. "As Republicans, we are disappointed by Senator Specter's decision."
Finally, the pollsters at Quinnipiac University have sent out a release reminding us that their Pennsylvania poll in late March showed Specter "running stronger as a Democrat than as a Republican:"
Voters approve 52 - 33 percent of the job Specter is doing, with a 71 - 16 percent positive from Democrats and a 41 - 37 percent boost from independent voters, off-setting a 52 - 36 percent disapproval from Republicans. This is Specter's highest approval among Democrats and lowest approval among Republicans since Quinnipiac University began polling Pennsylvania in 2002.
More relevant to the headline of the Quinnipiac release, quoted above: 51 percent of Democrats said Specter deserved to be reelected as Senator compared to 32 percent of independents and 30 percent of Republicans.
Update: The Washington Post's Jon Cohen reminds us of the long-term trend toward Democratic party identification in Pennsylvania.
Update 2: In his press conference, Specter attributes his final decision to "my own poll results."
The decision has been reached gradually, as I have traveled the state in the last several months. Specifically, I got my own poll results back last Friday, late last week. And I consulted with my campaign managers and had a long discussion with Joan and my son ___ over the weekend . . . and came to a decision this past week.
It is not at all surprising that an officeholder would make this sort of decision after consulting internal polling. What is unusual is to hear him acknowledge it so plainly. The poll in question was presumably conducted by Public Opinion Strategies.
Update 3: We have created new charts showing Specter's overall job approval and favorable ratings (among all adults/voters). Needless to say, all polls included as of this writing were conducted prior to today's announcement.
Update 4: Specter's former pollster, Republican Glen Bolger, just posted the following "tweet":
Current meaningful country song lyrics -- God is great, beer is good, people are crazy.
Certainly not the last word on this subject, but an apt place to end this post. Time for a beer, Glen.
Over the last few weeks, I had the chance to attend two different events that covered similar ground: a day-long conference on survey quality organized by the Harvard University Program on Survey Research and a private (but on-the-record) meeting of a number of media pollsters last week hosted by Stan Greenberg's Democracy Corps. Thanks to those events, as well as the recent AAPOR report on the problems of the New Hampshire and other primary polls in 2008, my notebook is full of new ideas and material for future posts. Throw in the upcoming AAPOR conference, now not quite two weeks away, and my inbox is overflowing. The sheer volume of material has been a little overwhelming.
So today I want to try to review a few themes that have been constant across these various events.
What is Quality? One obvious theme is the way the survey research profession is struggling for consensus on the meaning of quality in an environment where new technologies are reshaping everything. New technologies are limiting our ability to reach respondents as we used to, but they are also producing new methods. The growth of cell-phone only households, the reality of response rates below thirty percent, the increased use of listed samples of registered voters and the proliferation of automated telephone polls and online surveys drawn from volunteer panels have many polling practitioners asking each other hard questions.
It is true, as former CBS polling director Kathy Frankovic points out, that the questions raised by technical advances are not new. Another "proliferation" of newer polls using new technology that "makes polling easier" occurred in the 1970s, she reports, when telephones replaced in-person interviewing, and again in the 1980s and 1990s, when personal computers replaced mainframes, making data analysis cheaper and easier. Still, that knowledge offers little help.
One of the striking things about the Harvard conference was the occasional disconnect between the presentations of the most respected academic survey scientists, often concentrating on advanced theoretical constructs of the measurement of survey error, and the questions that often followed from pollsters and researchers in the audience: What practical things should we be doing now? What are the "best practices" we should follow? What characteristics should journalists and journal editors look for in a quality survey?
Twenty years ago, the speakers at this sort of conference would have been much closer to consensus on a set of best practices for telephone surveys on politics. Now such questions are much more difficult to answer. As one speaker at the Harvard conference put it, we are at "this odd moment in history where there are conceptual tensions in the field."
Standards and the "Margin of Error" - For Gary Langer, the ABC News director of surveys, "the solution" to the question of what makes for good data is firmly rooted in the standards they require to put polls on the air at ABC News. Here is an excerpt from the description of those standards on the ABC News web site:
[I]n all or nearly all cases we require a probability sample, with high levels of coverage of a credible sampling frame. Self-selected or so-called "convenience" samples, including internet, e-mail, "blast fax," call-in, street intercept, and non-probability mail-in samples do not meet our standards for validity and reliability, and we recommend against reporting them.
We do accept some probability-sample surveys that do not meet our own methodological standards – in terms of within-household respondent selection, for example – but may recommend cautious use of such data, with qualifying language. We recommend against reporting others, such as pre-recorded autodialed surveys, even when a random-digit dialed telephone sample is employed.
At the Harvard conference, Langer spoke out in particular against surveys based on opt-in Internet panels that claim a margin of error for their results, and followed up with a column in the same vein last week. "A probability sample is a hallmark of good data," he wrote, "a sign that it lives within the framework of inferential statistics." Opt-in internet surveys that claim a margin of error, on the other hand, amount to "samples taken outside that framework [that] try to nose their way out of the yard and into the house. They don't belong there."
[Interests disclosed: The owner of Pollster.com is YouGov/Polimetrix, a company that conducts opt-in internet surveys and was among those singled out by Langer for criticism at the Harvard conference. That said, I have issued similar criticisms of the reporting of a margin of error for internet surveys.]
Stanford Professor Jon Krosnick, another speaker at the Harvard Conference, took a slightly different tack. He presented a compilation of efforts to measure and quantify survey accuracy using seven different methods across a wide variety of different surveys. Krosnick explained that his intent was to provide "a supplement to the margin of error... our only statement of accuracy" in most surveys as a way to account for the various "other" forms of variation not accounted for by random sampling error.
What makes Krosnick's approach different is his willingness to use measurements of accuracy to evaluate new forms of surveys, including opt-in panels and automated telephone surveys.
Assessing accuracy is useful for comparing various methods and to identify more accurate ones. So if you want to know, well, does representative sampling help you or not? Is representative sampling going to lead to more accurate data?
Krosnick examined seven methods to assess accuracy, although only a few tried to compare results from traditional telephone methods to those based on opt-in Internet panels. On some measures he found bigger errors for opt-in surveys; on others he found much closer results. Still, this was as much a demonstration of a method of comparison as an attempt at a comprehensive review.
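The basic logic of benchmarking accuracy, in the spirit of (though far simpler than) the methods Krosnick compiled, is to compare a survey's estimates against external values believed to be true, such as Census figures or certified election returns, and average the misses. A sketch with entirely hypothetical numbers:

```python
def average_absolute_error(estimates, benchmarks):
    """Mean absolute miss, in percentage points, between survey
    estimates and external benchmark values (e.g. Census figures
    or certified election returns)."""
    pairs = list(zip(estimates, benchmarks))
    return sum(abs(e - b) for e, b in pairs) / len(pairs)

# Hypothetical demographic estimates from one poll vs. benchmarks
poll      = {"age 65+": 21.0, "college degree": 32.5, "own home": 64.0}
benchmark = {"age 65+": 18.5, "college degree": 29.0, "own home": 68.0}
keys = list(poll)
err = average_absolute_error([poll[k] for k in keys],
                             [benchmark[k] for k in keys])
# err is this poll's average miss, in percentage points
```

A lower average miss across many such comparisons is the kind of evidence that could be marshaled for or against any mode of interviewing, probability-based or not.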
Nevertheless, these approaches of Langer and Krosnick may be on something of a collision course. Langer's philosophy, shared by many others in the field, has been consistent: "a good sample is determined not by what comes out of a survey but what goes into it." This philosophy warns against efforts to measure individual poll accuracy, since accuracy on one poll may be no guarantee of quality. Krosnick's approach -- in theory, at least -- is to try to quantify total error for all surveys, including those gathered using opt-in internet panels. So what happens if non-probability sampling can produce data that is effectively as accurate as surveys that reach out to random samples? At that point, will we need to reassess the standards?
Disclosure. A constant theme across all of the meetings and presentations is the need for more and better disclosure for those who put survey data into the public domain. At the Harvard Conference, Phil Meyer, a long-time journalist, survey researcher (author of the highly regarded text book, Precision Journalism) and now professor emeritus in journalism at the University of North Carolina, gave a talk on "the rise and fall of truth in polling." He shared the frustration expressed by so many others regarding the lack of cooperation by many pollsters with AAPOR's recent investigation of New Hampshire and other 2008 primary polls.
What I found intriguing, however, was his hope that the "marketplace" might react by creating a "reputation-based hierarchy," that would reward pollsters that disclose and punish those that withhold. "The real accountability system," he said, "is the market -- we need an efficient market, a public that cares." Meyer also noted that the mass media have tried to create that market with mixed results. Now he looks to specialized media (blogs like this one) to become a forum where "specialists in polling might become the arbiters of standards," a development that would amount to journalism healing itself. He closed with the image of the lighthouse used in the logo of the Scripps-Howard news chain along with their corporate motto: "Give light and the people will find their own way."
The resistance that many pollsters showed to repeated requests from AAPOR's investigation on the New Hampshire polling problem demonstrates the limits of formal enforcement mechanisms of professional organizations like AAPOR. As Frankovic put it, the discussion around standards and disclosure rarely extends beyond the "professional polling community."
I believe that sites like Pollster, RealClearPolitics, and FiveThirtyEight have a real opportunity to broaden the dialogue and help create the reputation based hierarchy that Meyer talked about. But that is a much bigger topic. I will have more to say about it in the coming months. Stay tuned.
(A hat-tip and thanks to Mike Mokrzycki for his excellent Twittered notes on the Harvard conference that helped fill in a few blank spots in my own).
From what you've seen or heard in the news, do you think it is appropriate for Norm Coleman to make an appeal to the Minnesota Supreme Court, or that Coleman should accept the trial court's ruling declaring Al Franken the winner of the Senate race?
28% Appropriate for Coleman to appeal
64% Coleman should accept the trial court's ruling
State of the Country
50% Right Direction, 48% Wrong Track (chart)
Obama has ordered the release of previously secret records of Bush administration policies on the interrogation of terrorism suspects. Do you support or oppose Obama's decision to release these records? Do you support/oppose this strongly or somewhat?
Do you think the Obama administration should or should not investigate whether any laws were broken by the Bush administration?
The survey research community is focusing intently on the challenges posed by the fast-growing share of Americans who are cell-phone-onlys (CPOs). In fact, there are 40 papers being presented on the topic at the AAPOR conference next month. One of the practical issues faced by pollsters is whether the cost of reaching CPOs is worth the payoff. Last week, Scott Keeter, Mike Dimock, and Leah Christian hosted a forum at Pew during which they discussed this tradeoff. But pollsters aren't the only people who have to make cost-benefit decisions when it comes to deciding whether to attempt to contact CPOs. Campaign organizations must make the same calculation.
So how well did the campaigns do at contacting CPOs during the 2008 campaign? The chart below compares the percentage of landline and cell-only respondents who reported being contacted by a campaign representative in 2008. The data come from the National Election Study (NES), which uses residential sampling and face-to-face interviews to reach both landline and CPO respondents. In the chart, the blue bars show the percentage of each group that reported being contacted while the black lines represent 95% confidence intervals for those percentages.
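For reference, 95% confidence intervals in a chart like this one are conventionally computed as p ± 1.96·√(p(1−p)/n) for each group. A sketch in Python (the contact rates and group sizes below are illustrative stand-ins, not the NES's actual figures):

```python
import math

def wald_ci(p, n, z=1.96):
    """95% (Wald) confidence interval for a sample proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

# Illustrative only: rates and sample sizes are hypothetical
landline_lo, landline_hi = wald_ci(0.55, 1500)   # landline respondents
cpo_lo, cpo_hi = wald_ci(0.30, 250)              # cell-phone-only respondents

# When the two intervals do not overlap, that is (conservative)
# evidence the underlying contact rates genuinely differ.
intervals_overlap = landline_lo <= cpo_hi
```

Note how much wider the interval is for the smaller cell-only group; that is why the black bars for CPOs in charts like these tend to be longer.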
The chart shows that CPOs were much less likely to be contacted by the campaigns than people with landlines. Over half of landline respondents reported being contacted compared to less than one-in-three CPOs. This sizable difference holds up even when controlling for age, income, education, partisanship, and a variety of other factors.
The next chart (below) indicates that for those CPOs who were contacted, the contact tended to come overwhelmingly from Democrats. Over 80% of CPOs who were reached by the campaigns were contacted by the Democratic side while just a little over one-third were reached by Republicans. Republicans were significantly more competitive with Democrats when it came to contacting those with landlines.
Unfortunately, the NES did not include questions asking respondents how they were contacted by the campaigns. But a subset of respondents to the 2008 Cooperative Congressional Election Study (which I've analyzed in previous posts) were asked these questions. The chart below plots the responses for those who had landlines compared to CPOs.
CPOs who were contacted by one of the campaigns were significantly less likely to have had that contact over the phone compared to those with landlines. Otherwise, there were not major differences between how landline and CPO respondents were contacted. CPOs were somewhat more likely to get an email while those with landlines were a bit more likely to receive snail mail, but neither of these differences are large. The percentage being contacted in-person or by text message were nearly identical for both groups.
Overall, the findings from these surveys suggest that shedding your landline may help you avoid those pesky campaign calls in future election years. While Democrats were a little more successful than Republicans in reaching CPOs, the cell-only crowd was almost as successful avoiding campaign volunteers as they were hiding from pollsters.