An update on Monday's post asking whether the recorded, automated "survey" conducted by WellPoint (the parent company of Anthem Blue Cross / Blue Shield) amounted to what some call "SUGGing," or selling under the guise of survey research.
The short answer, from Howard Feinberg, director of government affairs for the Marketing Research Association (MRA), is "no." The Anthem survey, which did not explicitly attempt to sell anything, does not meet their definition of SUGGing, a practice that MRA has taken a strong position against, along with fundraising under the guise of surveys (FRUGGing). Feinberg says that this particular Anthem call is really a form of "political advocacy under the guise of research," although it falls short of what Feinberg and most others consider "push polling."
I also received a follow-up message from Patrick Glaser, MRA's director of research standards:
I’ve spoken about the WellPoint “survey” with the MRA’s Professional Standards Chair. As planned, we will be following-up with the organization to learn more about the details of the project. In terms of MRA’s position, depending on the details of how they are presenting the activity to the “respondents,” their relationship to the respondents and how they are utilizing the information, this could potentially fall into an area where our codes do not currently provide specific guidance and/or should be discussed by the full committee.
I believe we may see these types of situations become more common, and I’m going to recommend to our Professional Standards Committee that they develop additional specific guidance about situations that are similar to this one as well as other situations that may vary a bit in their nature, but still relate to the same underlying issues.
Incidentally, our friend Desmoinesdem (who makes it a habit to answer all survey calls) blogged earlier today about what sounds like a clear cut example of fundraising under the guise of a survey ("FRUGGing"), conducted by Newt Gingrich's American Solutions political action committee. The caller identified herself as representing American Solutions, asked the respondent to participate in a "brief survey," and then depending on the responses given, ended the call by soliciting a donation.
The MRA has suggestions about what you can do about this sort of abuse of surveys and directions on how to report such calls to them directly, on their web site.
This is something of a bittersweet post. Eric Dienstfrey, my relentlessly hard working number two here at Pollster.com, will be moving on to bigger and better things in the fall. He has been accepted into the Graduate Program in Film Studies at the University of Wisconsin-Madison's Department of Communication Arts. Congratulations Eric!
This news means that we have a job opening and big shoes to fill. This is a full-time, entry-level position in Washington DC with health care benefits, and we anticipate hiring in mid to late June. Applicants should have excellent proofreading skills, strong attention to detail and an abiding interest in political polling. While not required, the ideal applicant would also bring some previous knowledge of or experience in web site development/administration (especially with Movable Type), statistical analysis (especially with the R programming language) or database development (especially with MySQL).
If you are interested and would like more details on this unique opportunity, please email me and attach a resume.
Update: We have filled the opening. Many thanks to all that applied.
We get a lot of questions, comments and complaints about the effect particular pollsters have on our trend estimates. This is an important question, and today I'll start a series of posts on this issue. I want to encourage your comments and feedback. Over the series of posts I'll try to answer what I can, and we'll improve our approach when you raise points we aren't doing well enough and can improve on. The focus will be presidential approval, but many of the issues are generic.
Yesterday Mark posted on the Rasmussen daily tracker and whether IVR interview methodology was enough to explain the generally low approval readings from that poll. Here I want to extend this to address the frequent comment that Rasmussen is systematically distorting our trend estimates because his results are consistently below the trend line.
Let's start with a bit of data. Rasmussen represents 90 polls in the Obama series above, while Gallup's daily provides 87 polls and all other pollsters contribute 55 polls.
Even a casual glance at the figure makes it clear that the Rasmussen dailies run 2-3 points below the blue trend line, while Gallup's daily runs about the same above the trend. The other pollsters scatter widely around the trend.
The most common comment we get is that Rasmussen is clearly too low and is distorting our trend estimate downward. If we removed only Rasmussen, it is certainly true the trend estimate would shift up. But the problem is: how do you "know" that it is Rasmussen who is wrong? As one commenter put it, "It's so annoying to see Obama at 57% when everybody knows he's over 60%!!!" Well, that IS annoying if you "know" the truth, but how do we know the truth? When I talk to Republicans, they are equally certain we "know" Rasmussen is right and that it is Gallup that is obviously wrong. How can we address this difference of views in a non-partisan, data-oriented way?
The best estimates we get for our trends come when we have lots of different polling organizations represented and none of them contribute a disproportionate share of the polls. We get in trouble in the opposite case, when one poll dominates and we have few other polls to calibrate against. An extreme case would be if we only had Rasmussen right now, or only had Gallup. Happily, that isn't the case.
At the moment we have 55 polls by firms other than Rasmussen or the Gallup daily (I include 3 USAToday/Gallup polls and 1 Gallup-only poll, which are not dailies). These 55 polls come from 23 different firms, with the most from any one firm being 4 polls. This is just what we want for a standard of comparison-- lots of pollsters, none contributing too many.
This doesn't mean there are no house effects. Every polling organization has a house effect, some larger than others. But across all the pollsters we get heterogeneity in those effects with low balancing high and the result being the best estimate of the trend we can manage with polling data alone.
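To make this concrete, here is a minimal Python sketch of one common, simplified definition of a house effect: a firm's average residual from the shared trend line. The firm names and numbers below are hypothetical, chosen only to echo the pattern described above; our actual trend estimation is considerably more involved than this.

```python
from collections import defaultdict

def house_effects(polls, trend):
    """Estimate each firm's house effect as its average residual from
    a shared trend line (a simplified definition, for illustration).

    polls: list of (firm, day, approval) tuples
    trend: dict mapping day -> trend estimate for that day
    """
    residuals = defaultdict(list)
    for firm, day, approval in polls:
        residuals[firm].append(approval - trend[day])
    return {firm: sum(r) / len(r) for firm, r in residuals.items()}

# Hypothetical numbers, loosely echoing the pattern in the chart:
trend = {1: 60.0, 2: 60.0, 3: 60.0}
polls = [
    ("Rasmussen", 1, 57), ("Rasmussen", 2, 58), ("Rasmussen", 3, 57),
    ("Gallup",    1, 63), ("Gallup",    2, 62), ("Gallup",    3, 63),
    ("Other",     1, 61), ("Other",     2, 59),
]
effects = house_effects(polls, trend)
# One firm's effect comes out negative, another's positive, and the
# third's near zero -- low balancing high, as described above.
```

The point of the sketch is the balancing: when many firms contribute, positive and negative effects offset each other in the combined estimate.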
The chart above estimates the trend using only these 55 polls from the 23 non-daily pollsters. That trend is plotted by the black line. The blue line is our standard trend estimate, using all the polls, including the dailies. And for comparison I've shown the trends for Rasmussen only and for Gallup daily only.
Clearly both Rasmussen and Gallup are quite different from the overall trend or from the non-daily trend. Pick your poison, neither of these is in agreement with the non-dailies. You can prefer high or you can prefer low, but the dailies are about equally far off the black trend.
But the key point for us is that the black line for non-dailies is very close to the standard blue trend using all the polls. The average absolute difference is barely 1 point (1.009, in fact) and 95% of the days find less than a 2 point difference between the blue and black lines. Sometimes blue is higher and sometimes black is higher. The average difference (not absolute difference) is that blue is 0.3 points below the black line. (The black line is a bit more variable because it uses only the 55 non-daily polls rather than all 232 polls.)
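The blue-versus-black comparison boils down to a few simple summaries, which can be sketched in a handful of lines of Python. The daily readings below are made up for illustration; only the arithmetic mirrors the statistics reported above.

```python
def compare_trends(blue, black):
    """Summarize how far two daily trend lines sit from each other:
    mean absolute difference, mean signed difference (blue - black),
    and the share of days where the lines are within 2 points."""
    diffs = [b - k for b, k in zip(blue, black)]
    mean_abs = sum(abs(d) for d in diffs) / len(diffs)
    mean_signed = sum(diffs) / len(diffs)
    share_within_2 = sum(abs(d) < 2 for d in diffs) / len(diffs)
    return mean_abs, mean_signed, share_within_2

# Hypothetical daily readings (not the actual Pollster series):
blue  = [60.1, 60.4, 59.8, 60.0, 59.5]   # all-polls trend
black = [60.5, 59.9, 60.6, 60.2, 60.1]   # non-dailies-only trend
mean_abs, mean_signed, share = compare_trends(blue, black)
```

Note the distinction the code makes explicit: a small mean signed difference can coexist with a larger mean absolute difference, because days where blue is higher partially cancel days where black is higher.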
There are cases where we can't do this sort of analysis because of a lack of diversity in pollsters. Approval is a happy exception. It is clear there are pollster differences, but at this point they are not drastically affecting our results. If you SELECTIVELY exclude only low polls, then of course you can drive up the trend, just as you can selectively exclude only high polls and drive the trend down.
But when we take the most diverse collection of polls, we get pretty much the same trend estimates as we do with all the polls. (You can go to the interactive charts and pick what to include or exclude and see how big a range you can get. Selection of high or low polls is the key to making the trend move a lot.)
Now, this is only part 1 of this series. I'm not claiming our trends are infallible. Far from it! I know all too well that they can break when given too little data or various kinds of bad data.
In the next installment of the series I'll respond to your comments here, and show an example of a more problematic case.
I am pondering two somewhat related questions this afternoon, but both have to do with national surveys conducted using an automated ("robo") methodology (or more formally, IVR or interactive-voice-response) to measure Barack Obama's job approval rating. One is the ongoing Rasmussen Reports daily tracking, the other is the just-released-today national survey by Public Policy Polling (PPP).
Both surveys are certainly producing lower job approval scores for President Obama than those from other pollsters. The difference for Rasmussen is painfully obvious when you look at our job approval chart, magnified by the sheer number of data points they contribute to the chart. Look at the chart and you can see two bands of red "disapproval" points with the trend line falling in between. Point to and click on any of the higher scores and you will see that virtually all come from Rasmussen. Similarly point to and click on a Rasmussen "black" approval point and you will see that virtually all of their releases fall somewhere below the line.
The most recent Rasmussen Reports job rating for Obama is 55% approve, 44% disapprove. Use the filter tool to drop Rasmussen from the trend, and the current trend estimate (based on all other polls) is, with rounding, 61% approve, 30% disapprove. Leave Rasmussen in and the estimate splits the difference. The latest PPP survey produces a result very similar to Rasmussen: 53% approve of Obama's job performance and 41% disapprove.
I know that Charles Franklin is working on a post that will discuss the impact of the Rasmussen numbers on the job approval chart, so I am going to defer to him on that aspect of this discussion. (Update: Franklin's post is up here).
But since some will find it very tempting to jump to the conclusion that the IVR mode explains the difference -- as PPP's Tom Jensen did back in February -- I want to take a step back and consider some of the important ways these surveys differ from other polls (and with each other) that have little or nothing to do with IVR.
First consider the Rasmussen tracking: Like many other national polls it begins with what amounts to a random digit dial sample -- randomly generated telephone numbers that should theoretically sample from all working landline telephones. However, unlike many of the national surveys, it does not include cell phone numbers, it screens to select "likely voters" rather than adults, and Rasmussen weights by party identification (using a three-month rolling average of their own results weighted demographically, but not by party). Rasmussen also asks a different version of the job approval question. Other pollsters typically ask respondents to say whether they "approve" or "disapprove"; Rasmussen asks them to choose from four categories: "strongly approve, somewhat approve, somewhat disapprove or strongly disapprove."
And Rasmussen uses an IVR methodology.
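Rasmussen has not published the details of its party-identification weighting, but the generic version of the technique is straightforward post-stratification: each respondent receives a weight equal to the target party share divided by that party's share of the sample. The sample and target numbers below are entirely hypothetical; this is a sketch of the general method, not Rasmussen's actual procedure.

```python
def party_id_weights(sample_party, target_shares):
    """Generic party-ID post-stratification: weight each respondent by
    (target share of their party) / (sample share of their party)."""
    n = len(sample_party)
    sample_shares = {p: sample_party.count(p) / n for p in set(sample_party)}
    return [target_shares[p] / sample_shares[p] for p in sample_party]

# Hypothetical sample of 10 respondents: 50% D, 30% R, 20% I.
sample = ["D"] * 5 + ["R"] * 3 + ["I"] * 2
# Hypothetical target, standing in for a rolling-average estimate:
target = {"D": 0.40, "R": 0.35, "I": 0.25}
weights = party_id_weights(sample, target)
# After weighting, each party's weighted share matches the target,
# which is why the choice of target can move the topline result.
```

The practical upshot, and the reason this design choice matters for comparing pollsters: two firms with identical raw interviews can report different toplines if they weight to different party-ID targets.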
Now consider PPP: Unlike Rasmussen, they draw a random sample from a national list of registered voters compiled by Aristotle International (which gathers registered voter lists from Secretaries of State in each of the 50 states plus the District of Columbia and attempts to match each voter with a listed telephone number in the many states where that information is not provided by the state). As far as I know, Aristotle has not published the percentage of registered voters on that list for which they lack a working telephone number, but it is likely a significant percentage. The critical issue is that the population covered by PPP is going to be different than that covered by other pollsters, including Rasmussen.
So any coverage problems aside, PPP still samples a different population (registered voters) than most other public polls. Like most other pollsters, but unlike Rasmussen, they do not weight by party identification. Finally, they also ask a job approval question that is slightly different from most other pollsters.
Consider these versions:
Gallup (and most others): "Do you approve or disapprove of the way Barack Obama is handling his job as president?"
Rasmussen: "How would you rate the job Barack Obama has been doing as President... do you strongly approve, somewhat approve, somewhat disapprove, or strongly disapprove of the job he's been doing?"
PPP: "Do you approve or disapprove of Barack Obama's job performance?"
Note the very subtle difference: Others ask about how Obama is "handling his job" or about the job he "has been doing as president." PPP asks about his "job performance." Might some respondents hear "job performance" as a question about Obama's performance on the issue of jobs? That hypothesis may seem far-fetched (and it probably is), but a note to PPP: It would be very easy to test with a split-form experiment.
Oh yes, in addition to all of the above, PPP uses an IVR methodology.
As should be obvious from this discussion, not all IVR methods are created equal. I happened to be at a meeting this morning with Jay Leve of SurveyUSA, one of the original IVR pollsters. As he pointed out, "there is as much variability among the IVR practitioners as there would be among the live telephone operators" on methodology, including some of the other more arcane aspects of methodology that I haven't referenced.
So the main point: While tempting, we cannot easily attribute to IVR all of the apparent difference in Obama's job rating as measured by Rasmussen and PPP on the one hand, and the rest of the pollsters on the other. There are simply too many variables to single out just one as critical. The lack of a live interviewer may well play a role, but the differences in the populations surveyed, the sample frames and the text of the questions asked, or some other aspect of methodology, may be just as important.
More generally, just because a pollster produces a large house effect in the way they measure something, especially in something relatively abstract like job approval, it does not follow automatically that their result is either "wrong" or "biased" (a conclusion some readers have reached and communicated to me via email), only different. Observing a consistent difference between pollsters is easy. Explaining that difference is, unfortunately, often quite hard.
State of the Country
48% Right Direction, 44% Wrong Track (chart)
Barack Obama campaigned on a platform of "change." Do you think Barack Obama so far is living up to his promises to change the way things work in Washington, do you think he is breaking those promises, or is it too soon to tell?
30% Living up to those promises
15% Breaking those promises
54% Too soon to tell
Here is an update on Strategic Vision, one of the three polling firms that never responded to repeated requests for information by the American Association for Public Opinion Research (AAPOR) investigation of the problems with primary election polling in New Hampshire and elsewhere in 2008. Jim Galloway of the Atlanta Journal-Constitution contacted Strategic Vision's CEO David Johnson about a new Georgia poll they released yesterday and asked him to comment on his firm's lack of cooperation with the AAPOR committee:
Johnson, the CEO of Strategic Vision, said he received a single request from the organization. "I got the request for this two days before the report was released," he said. "And I've got the e-mails to prove it." Johnson said the AAPOR says it sent a request by certified mail, but he never received it.
I forwarded Johnson's comments to Nancy Mathiowetz, the former AAPOR president who oversaw the task of requesting information from the 21 polling organizations that released surveys in the four states studied by the AAPOR committee. She replied with two Federal Express receipts showing that documents were sent to Johnson at the Atlanta "headquarters" address listed on the Strategic Vision web site, one on March 5, 2008 and the second on October 1, 2008 -- a full year and six months, respectively, before the release of the AAPOR report.
While we cannot know what happens once a document arrives at an organization, the Fed Ex receipts confirm that the AAPOR documents were received and signed for on both occasions.
Regardless of when they first learned of the requests, nothing prevents Strategic Vision from disclosing the requested information right now. The AAPOR report indicates that their investigators were unable to obtain Strategic Vision's response rate, their method of selecting a respondent in each sampled household, a description of their weighting procedures and information about their sampling frame or the method or questions used to identify likely voters -- all information that, according to AAPOR's code of ethics, a pollster should always disclose with a public poll report. Johnson could share this information with all of us right now if he wanted to.
And as for the raw data for all individuals contacted and interviewed -- as well as all of the other information requested -- the AAPOR report makes clear that it is not too late. The committee has deposited all of the information they received in the Roper Center Data Archive where, according to the report, "it will be available to other analysts who wish to check on the work of the committee or to pursue their own independent analysis of the pre-primary polls in the 2008 campaign." Moreover, "If additional information is received after the report's release, the database at the Roper Center will be updated."
Johnson's response to the AJC may sound familiar. Long time readers will remember that I made my own requests of pollsters that had fielded surveys in Iowa during 2007. Strategic Vision was one of five organizations that never answered any of my questions. Unlike AAPOR, I relied on email, since I lacked the budget to send requests via Federal Express. Thanks to my Gmail archive, I can report the following:
I sent an initial request by email to David Johnson on September 27, 2007 and heard nothing back.
I followed up with a reminder on October 17, 2007 that produced the following response (from the same email address for David Johnson I had used for the original request):
I did not receive this email of 9/27. I am not sure why unless it has to do with our hosting company or server. I will be glad to get you responses and as things would have it, will be releasing an Iowa poll tomorrow
Two days later, having heard nothing further, I sent Johnson another reminder and received this response:
I am working on your responses now. I was slammed the past two days with deadlines.
It was certainly a busy time, so I waited another eleven days before reporting on the degree of cooperation I received from the Iowa pollsters, and six weeks more before posting an analysis of the information I had received. Unfortunately, I never heard anything more from David Johnson.
This sort of episode makes it clear that we are naive to expect all pollsters to provide meaningful methodological disclosure "on request," even to organizations like AAPOR. Last Friday, I attended a conference on survey quality at Harvard University, where UNC professor Phil Meyer said that our best hope is a "real accountability system" based on public pressure, "a more efficient market on the demand side." He is absolutely right.
Update: A belated hat-tip to reader EC for the tip on Galloway's AJC item.
4/12/09; 1,200 adults, 2.9% margin of error
Which of these statements do you most agree with? One: Same-sex marriages should be recognized nationwide. Two: Each state should decide whether to recognize same-sex marriages. Or three: Same-sex marriages should be banned nationwide.
29% Recognized Nationwide
19% Each State Should Decide
50% Banned Nationwide
I attended a presentation last week at the Pew Research Center (sponsored by the DC AAPOR chapter) on some of the practical issues they have encountered in their innovative work on cell phone polling. I'm still catching up from a few hectic days that have followed but want to pass along a few interesting details they shared.
Most of what was new in the session will be of more interest to pollsters than to political junkies wondering about how pollsters are dealing with the growing number of Americans without landline phone service. Fortunately, for those of you in the latter category, the PRC shared most of their more general data obtained from calling cell phones during the 2008 campaign in a report released this past December (see their summary, full report pdf and our review).
Here are a few highlights that seemed especially noteworthy or new in the presentation by Pew's survey research director Scott Keeter, associate director Michael Dimock and research associate Leah Christian:
Pew has now conducted 18 surveys (14 in 2008 and 4 in 2009) featuring a "dual frame" sample of both landline and mobile phones. In those surveys, they have interviewed approximately 9,400 adults via cell phone. Keeter explained that these dual frame samples are now "standard policy" for Pew's political surveys.
They have found cell phone users just as willing to answer their phones and cooperate as landline users. Their response rate for cell phones (23%) is virtually the same as that for landlines (24%).
Pew's calling center is finding it "cheaper and easier" to interview by cell phone. Just a few months ago, Keeter reported that it cost Pew "two to two and a half times" as much for cell phone interviews as landline. Now the cell phone costs are "closer to two times as expensive" as landline interviews. Keeter attributed the improvement to their call centers "getting more familiar with the tricks of doing successful cell phone interviewing." Note that Pew interviews all cell phone users, and weights down respondents less likely to be "cell phone only." If they had screened for just the cell phone only users, the differential would be closer to four times the cost of landline interviews.
Pew continues to offer a $10 per interview incentive to cell phone respondents although, according to Keeter, other pollsters such as Gallup do not pay incentives to the cell phone respondents.
The biggest continuing methodological challenge to pollsters is determining how to combine and weight their landline and cell phone samples, partly because of what some are now calling the "cell phone mostly" problem. This issue involves those who have both mobile and landline phones but make "all" or "almost all" of their calls on the mobile phone (see pp. 6-8 of the December report, and my previous discussion of the issue for more details).
In this presentation, Dimock presented results showing that the cell phone mostly respondents were less likely to say they could be reached "right now" on a landline phone (52%) than those who use their cell phones for only some or few calls (63%). "This is really a spectrum," Dimock explained, without "a clear cut line between the group of people who absolutely can't be reached on a landline and another group who absolutely can," but rather "a probability across the spectrum of people." As such, Pew's approach is to interview everyone reached via cell phone and weight on the inverse of their probability of selection.
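Pew's "inverse of the probability of selection" approach can be sketched in miniature. The frame probabilities and the independence assumption below are illustrative, not Pew's actual procedure; the point is only the intuition that someone reachable in both frames has a higher chance of ending up in the sample, and so gets a smaller base weight.

```python
def selection_weight(has_landline, has_cell, p_landline_frame, p_cell_frame):
    """Base weight = 1 / P(selected), under the simplifying assumption
    that draws from the landline and cell frames are independent.

    p_landline_frame / p_cell_frame: probability a given person's number
    is drawn from that frame (hypothetical values for illustration).
    """
    p = 0.0
    if has_landline:
        p += p_landline_frame
    if has_cell:
        p += p_cell_frame
    if has_landline and has_cell:
        # Subtract the overlap: selected via either frame, not both counted twice.
        p -= p_landline_frame * p_cell_frame
    return 1.0 / p

# A dual-service adult can be reached through either frame, so their
# selection probability is roughly double that of a cell-only adult:
w_dual = selection_weight(True, True, 0.001, 0.001)
w_cell_only = selection_weight(False, True, 0.001, 0.001)
# w_dual comes out roughly half of w_cell_only.
```

This is why "interview everyone reached via cell phone" works without double-counting: the overlap group is kept in the sample but weighted down rather than screened out.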
Christian presented data on the problems identifying the actual geography of respondents based on their telephone number. Before the widespread adoption of cell phones, telephone area codes could identify the state and time zone of each number with great accuracy, as landline telephone numbers are closely associated with geography. Cell phone numbers, on the other hand, are assigned based on the geography where you first purchased your cell phone. The relatively new ability to "transport" cell phone numbers from one carrier to another is creating a growing mismatch between phone numbers and geography.
Pew has been able to confirm and quantify that trend by asking respondents to provide their zip codes. That additional data shows a 5% mismatch for their cell phone samples on region (presumably census region), a 9% mismatch on state and a 39% mismatch on county. This discrepancy has two practical implications: First, obviously, when interviewing via cell phone, pollsters cannot treat phone numbers as an accurate measure of geography, especially at the county level. Second, they have to be careful about scheduling call times based on area code. A few years ago, pollsters could safely dial west coast area codes until 9:00 p.m. Pacific time. Now it is all too easy to ring someone much later at night, so pollsters have had to modify their procedures.
A hat tip and thanks to Susanna Fox of the Pew Internet Project and our friend Alex Lundry, for their helpful notes posted during the session on Twitter.
Last week, I took a look at two issues where young voters tend to diverge from older voters. Traditional Republican messaging about gay marriage and the perils of big government is quite different from the way young voters tend to look at those issues, and if the Republican Party wants to prevent a generation of voters from becoming solidly Democratic, it should assess both the policies and the messages used to reach out to younger voters.
But beyond these two topics, the Republican Party is facing changing demographic forces that present a challenge to its long term growth. This is not a new notion, and I am obliged to give credit where due: Ruy Teixeira and John Judis' 2002 book The Emerging Democratic Majority looked at political and population trends and predicted that in 2008 these trends would come together to produce a Democratic majority.
While I haven't looked extensively at whether or not Teixeira and Judis' predictions have come to pass (2008 Democratic victory aside), I can certainly agree that the racial makeup of young voters supports their conclusion. In short, young voters are less likely to be white than voters overall and are becoming increasingly more diverse. While 77% of voters overall in 2004 were white, only 68% of voters under age 30 were white. By 2008, that number was only 62%. Both African-Americans and Hispanics were found in higher proportions among young voters. In 2004, African-Americans made up 15% of young voters while making up 11% of voters overall; 13% of voters 18-29 were Hispanic compared to 8% of voters overall. By 2008 those numbers had increased, with African-Americans comprising 18% of voters 18-29 and with Hispanics comprising 14%.
So what does this mean for a Republican Party that has been branded (fairly or unfairly) as a party of "old white guys"? Put simply, the party cannot survive with this label attached. The recent demographic changes in the United States have been extraordinary; between the 1990 and 2000 Censuses, the number of Hispanics in the United States increased from 22.4 million to 35.3 million, an increase of over 58%. In 1980, 80% of the population identified as white (non-Hispanic); by 2000, that number had fallen to 69% of the population. These changes have expressed themselves in the demographic makeup of the younger voting cohort. With future generations of voters less and less likely to be made up of overwhelming proportions of white non-Hispanics, the issue of expanding the Republican Party's appeal to younger voters is inextricably linked with the issue of expanding the party's appeal to minority communities.
In addition to the makeup of the voters themselves, today's young voters have grown up in a society that handles race in a dramatically different way than previous generations. Take for instance college campuses across the United States. In October 1985, there were some 10,846,000 Americans enrolled in college, of whom 9,323,000 were white and just over 1,000,000 were African-American; Hispanics made up 579,000 of those enrolled. By the 2000 Census, those numbers had exploded; just over 17.4 million Americans were enrolled in college and of those, about 11.6 million were white non-Hispanic, while another 1.9 million were Hispanic and 2.2 million were African-American. While college enrollment overall was up by 62% in 2000 over 1985, enrollment among Hispanics had more than tripled and enrollment among African-Americans had more than doubled.
Universities across the United States today boast more diverse student bodies than in decades prior, and students in those institutions are far more likely to interact with people of other races and cultures than previous generations. A party that appears uninterested in the concerns of (or votes of) African-Americans or Hispanics risks more than forfeiting a growing segment of the population (and of the educated population). As white students attend schools and universities with more diverse student populations, the needs and concerns of the African-American and Hispanic communities will not be the abstract concerns of a group of citizens with whom they have little contact; quite the contrary, a generation more accustomed to a multicultural America is likely to find a racially homogenous party out of touch. So long as the Republican Party appears inattentive to the needs and desires of minority communities, it can be almost certain to retain its minority party status.
President George W. Bush appointed numerous African-Americans to his cabinet during his eight years in the White House - National Security Advisor and then Secretary of State Condoleezza Rice as well as Secretary of State Colin Powell, to name some of the most prominent appointees. Yet despite the prominent placement of African-Americans in the Bush cabinet, no gains were made among African-American voters. The impact of the election of former Maryland Lieutenant Governor Michael Steele, an African-American, to the leadership of the Republican Party has yet to be seen. Indeed, Steele was largely derided early in his term for such statements as his expressed desire to take conservative principles and "to apply them to urban-suburban hip-hop settings".
African-Americans and Hispanics need to be given reasons to believe that their concerns are being legitimately heard and addressed by the Republican Party. Republicans have had a great deal of success with the Hispanic vote in Florida (particularly the Cuban community) in the past, in part as a result of the Republican Party's tough stance on Cuba. In the 2000 campaign, 80% of Cubans in the state of Florida voted for George W. Bush, providing a key component of the victory in that state, where a margin of 537 votes ostensibly handed Bush the Presidency. By authentically addressing a concern of a portion of the Hispanic community, Republicans helped to develop a credible base of support.
Yet the Republican Party continues to stumble in terms of its handling of the Hispanic and African-American communities. For instance, in late December 2008, candidate for RNC Chair Chip Saltsman, the former campaign manager for the Huckabee presidential campaign, distributed a CD of songs including a track entitled "Barack the Magic Negro", prompting outrage and a rather public and embarrassing moment for the Republican Party. Perhaps even more surprising, some leaders within the Republican Party rushed to Saltsman's aid as POLITICO ran a story with the headline "'Magic Negro' flap might help Saltsman".
Just as troubling is the perception that the GOP ignores minority communities; in 2007, the four major contenders for the Republican presidential nomination declined to attend a forum on issues relevant to the African-American community, and Univision had to cancel a discussion it planned when only McCain agreed to attend.
This incident is to say nothing of the damage to the Republican Party's standing among Hispanics that occurred as a result of the immigration debate that flared in the summer of 2007; according to a Pew Research Center study, while in July of 2006 Democrats enjoyed only a 21 point party identification advantage among Hispanics, by December of 2007 that had widened to a 34 point Democratic advantage, alongside a sharp increase in the importance of the immigration issue among Hispanics. In 2004, Bush lost Hispanic voters 44-53, a 9 point margin, yet by 2008, McCain lost Hispanics to Obama by a 36 point margin, garnering 31% of the Hispanic vote compared to the 67% that voted for Obama.
Younger voters are more comfortable with immigration reform than are older voters. In a May 2008 New Models study, age was a significant factor in terms of belief in the statement "Illegal immigration is significantly hurting the country". While a majority of young voters still believe the statement (51%), there is a softening of opinion among young voters compared to the overall (62%) and particularly compared to older voting groups. Furthermore, in a Spring 2008 Harvard Institute of Politics study of 18-24 year olds, when presented with an immigration reform proposal that would give "illegal immigrants now living in the U.S. the right to live here legally if they pay a fine and meet other requirements", 46% of the respondents in the Harvard study supported the proposal while 30% opposed it and 24% neither supported nor opposed. This is not to say younger voters are not concerned about illegal immigration, but rather that they are likely to be more open to reform.
The importance of addressing the needs of minority groups is clear. As a younger and more diverse cohort seeks a party to identify with, the Republican Party must authentically address issues of concern to minority communities. As African-Americans and Hispanics seek opportunities for socioeconomic mobility, efforts such as those to reform education and improve opportunities for small business should be promoted. These policies, such as efforts to improve teacher quality and to reduce needless regulation and taxes on small businesses, would not be a stretch for Republicans to support, and they speak to the concerns of minority communities.
Moving forward, in order to remain a party that is acceptable for young voters, the Republican Party must shed its image as the party of "old white guys". This includes a change in tone and messaging from those who are the face of the party (in an official or unofficial capacity) as well as an emphasis on policies that have proven, positive outcomes for minority communities. America is quickly becoming an increasingly diverse nation, and the Republican Party must evolve its message and agenda to address these changes in order to have relevance with young voters.
Politico's Ben Smith passes along a report from a reader who received an "automated poll" from her health insurer, Anthem Blue Cross and Blue Shield. The recorded call began with the message that "President Obama is planning to enact health care reform," and continued with these questions:
1) "Are you aware of the debates regarding health care reform?"
2) "How interested" are you in health care reform?
3) "How willing are you to get involved so we can improve our nation's health care system?"
4) "How willing are you to attend a town-hall meeting" on this subject?
5) "As Well-Point Health Care works to solve our nation's health care problems," would you like future updates?
Smith also reports that the call closed "by asking for the respondent's sex and whether they were the subscriber or spouse." He adds: "WellPoint is Anthem's parent company and is the nation's largest health insurer," with "more than 34 million subscribers."
For what it's worth, this call sounds less like a sample survey and more like the sort of massive data "harvesting" we have seen often over the last two or three years. The "would you like future updates" question suggests that the call is not about asking questions of a representative sample of a few hundred for further study, but about cheaply identifying massive numbers of customers for further follow-up via direct marketing. The last two questions (gender and "are you the subscriber?") would allow the callers to match the respondent with a name on the list with reasonable accuracy.
We have seen many recent examples that communicated a message while also gathering data from full populations rather than small samples. The automated, interactive-voice-response technology makes such an effort cheap and easy.
Unfortunately, these efforts put a burden on legitimate survey research. Most pollsters agree that the long-term, 20-30 year decline in response rates has resulted mostly from the explosion of telemarketing calls. Many potential respondents assume that any call from a stranger is a sales call, and the sheer volume of telemarketing has made us all leery of any call that begins with the telltale sounds of an operator in a call center.
We can debate whether this sort of call qualifies as what the Marketing Research Association (MRA) describes as "SUGGing" - selling under the guise of research. (Update: the Marketing Research Association does not believe it does -- see "Update 2" below). At least the WellPoint calls ask permission to provide "future updates." Whether it counts as "selling" or not, we have to assume that massive data harvesting conducted under the guise of a survey makes it harder for legitimate surveys to win cooperation from potential respondents.
Update: Smith has updated his post to point to more details obtained by Jacob Goldstein of the Wall Street Journal's Health Blog:
The company placed three million automated calls. Of those, 142,000 connected and 66,000 people told the computer on the other end of the line that they'd be interested in learning more, WellPoint spokeswoman Cheryl Leamon told the Health Blog.
Insurers sometimes enlist interested beneficiaries to help sway public opinion. "If there are members who are interested in supporting our position and being part of the health care policy debate we want to make sure that they are able to participate," Leamon said.
The Health Blog item includes a link to an online version of the "survey." Both the URL (wellpointsurvey.com) and the text characterize the questions as a "survey." On the other hand, both also identify WellPoint as the sponsor, and the introduction offers no promise of confidentiality and says explicitly that "we at WellPoint...need your help."
So this is definitely a massive data harvesting project. Is it ethical?
Update 2: Howard Feinberg, director of government affairs for the Marketing Research Association (MRA), emails to say, no, this particular "survey" does not fall under their definition of SUGGing. More details here.
USA Today / Gallup
3/27-29/09 (release 4/20); 1,007 adults, 3% margin of error
Mode: Live Telephone Interviews
In your opinion, which of the following will be the biggest threat to the country in the future -- big business, big labor, or big government?
55% Big Government
32% Big Business
10% Big Labor
"Now, 80% of Republicans view big government as the biggest threat to the country, up from 68% in December 2006. At the same time, Democrats' perceptions of the greater threat are completely reversed. In December 2006, 55% of Democrats said big government posed the greater threat, while 32% said big business did. In the latest poll, a majority of Democrats now view big business as the greater threat (52%) while only about one in three think big government is."
Yes, this feature normally appears on Fridays or over the weekend. Unfortunately, I spent much of Friday at a conference and Saturday on a train, both lacking a wifi connection. So here is a belated listing of interesting things written by pollsters or about surveys and survey data from last week.
Charlie Cook breaks down the new Cook/RT Strategies results on Obama's job approval and generic congressional ballot test.