Today at the AAPOR Conference in Chicago, I interviewed Peter V. Miller, Northwestern University professor and the organization's outgoing president, just after he addressed the full conference on AAPOR's transparency initiative.
Miller made news today when he announced the organizations that pledged to support the initiative by routinely depositing methodological information to a public archive that AAPOR is creating (a subject I wrote about at more length earlier this week). According to Miller, the news media organizations and media pollsters that have pledged to support the initiative so far include: ABC News, the Associated Press, CBS News, Gallup, GfK, Knowledge Networks, the New York Times, the Pew Research Center, SurveyUSA and the Washington Post. Miller was careful to emphasize that "we are not cutting off the membership now, this is just the beginning." [Update: Peter Miller shared via email the full list of organizations that have committed, so far, to joining the transparency initiative; it appears below, after the jump.]
We are conducting interviews this week with participants at the annual conference of the American Association for Public Opinion Research (AAPOR). The following links take you to all the videos and occasional Twitter updates from me, Kristen Soltis and others at the conference.
Today at the AAPOR Conference in Chicago, I interviewed Jeff Jones, managing editor of the Gallup Poll, on his work using the generic House vote to predict the number of seats each party will win in the U.S. House in this year's election.
Do you want to see the Republicans or Democrats win control of Congress?
All adults: 45% Democrats, 40% Republicans
Hispanics: 54% Democrats, 27% Republicans
Do you think the number of LEGAL immigrants from foreign countries who are permitted to come to the United States to live should be increased, decreased or left the same as it is now?
All adults: 17% Increased, 35% Decreased, 47% Left the same
Hispanics: 25% Increased, 16% Decreased, 57% Left the same
How serious a problem is ILLEGAL immigration for this country today?
All adults: 66% Extremely/very, 26% Somewhat, 7% Not too/Not at all
Hispanics: 65% Extremely/very, 25% Somewhat, 9% Not too/Not at all
Do you favor or oppose providing a legal way for illegal immigrants already in the United States to become U.S. citizens?
All adults: 59% Favor, 39% Oppose
Hispanics: 86% Favor, 13% Oppose
In general, do you think that police crackdowns on undocumented or illegal immigrants...
All adults: 49% unfairly target Hispanics, 45% Treat all ethnic groups fairly
Hispanics: 73% Unfairly target Hispanics, 22% Treat all ethnic groups fairly
From what you may have read, seen or heard about the new immigration law that was just passed in the state of Arizona, do you favor, oppose or neither favor nor oppose this law?
All adults: 42% Favor, 24% Oppose
Hispanics: 15% Favor, 67% Oppose
All adults: 35% Democrat, 26% Republican, 25% independent (chart)
Hispanics: 36% Democrat, 8% Republican, 24% independent
At the AAPOR Conference yesterday, I interviewed Peter J. Woolley, director of Fairleigh Dickinson University's Public Mind Poll, on an experiment he conducted on how to measure support for an independent or third-party candidate. Woolley and his Fairleigh Dickinson colleague Daniel Cassino are presenting a paper on this experiment today, but they originally published the survey data back in October.
Yesterday, I interviewed Douglas Rivers, President and CEO of YouGov Polimetrix and professor of political science at Stanford University, on a paper he presented at the AAPOR Conference on the statistical properties of internet and phone polls in the 2008 elections.
Rivers authored a guest contribution on Pollster responding to a study on the accuracy of web surveys. I wrote a two-part column on the controversy last October. Interests disclosed: YouGov Polimetrix is the owner and primary sponsor of Pollster.com.
If you think pollsters in the US have it rough - difficulty getting folks to agree to participate, difficulty finding good samples given the rise of cell phone only households, etc. - try conducting public opinion research in Afghanistan. Over the last few years, ABC News has worked with research firm D3 Systems and the Afghan Center for Socio-Economic and Opinion Research to pull together some unique and fascinating research on shifting public opinion in the country. Their Emmy award-winning work was presented in the first session here at AAPOR and I was there to hear what they'd done and what they'd learned.
Research of this nature is of interest not just because of its uniqueness but also because of its impact. For instance, their research found that beliefs about civilian casualties were linked to optimism about the country's situation and a variety of other indicators. With the "winning the hearts and minds" item so integral to the conflict in Afghanistan, research like that conducted by ABC/D3 highlights key links between public opinion and things like the conduct and outcome of a war effort.
I had the opportunity to chat with ABC News' Gary Langer and D3's Matthew Warshaw about their research - take a look to hear more about the findings of their research and the challenges they encountered.
This post is the first of a series of video interviews we'll be doing over the next few days at this year's conference of the American Association for Public Opinion Research (AAPOR) in Chicago, Illinois. Over the last few years, I've found that video-blogging is a better way to provide you with a window into the substance of what happens at AAPOR while still finding time to attend the sessions...and sleep.
Today, for example, was a whirlwind of planes, trains and automobiles. So we taped a quick interview with, well, me to introduce this series.
And did you notice I said "we?" Well, if you play the video above, you should hear me say that the person behind the camera asking the questions is regular Pollster contributor Kristen Soltis, who will be sharing interviewing duties with me this year. Thank you, Kristen!
You can see all the videos, including those from last year's conference, here. Once again, credit for the animated graphic title that introduces each video goes to Kristen's colleague Lisa Mathias at the Winston Group. Thank you (again) Lisa!
And stay tuned, we'll have much much more on Friday and over the weekend.
Alex Lundry is Vice President and Director of Research for TargetPoint Consulting, a conservative political polling, microtargeting, and knowledge management firm. You can connect with him on Twitter where he expresses his opinions with great clarity so as to avoid confounding CMU's sentiment analysis.
Researchers at Carnegie Mellon have shown that unstructured text data pulled from Twitter can in some instances be used as a reliable substitute for opinion polling (link to study PDF). The results are impressive, and though pollsters needn't start looking for another line of work, I think they ignore this study at their peril.
Using very simple tweet selection mechanisms along with measures of the tweet's sentiment ("Obama's awesome" = approve, "Obama sucks" = disapprove), these researchers were able to:
extract an alternate measure of consumer confidence that was very highly correlated (r=73.1%) with the standard poll-derived confidence metric,
use this Twitter-derived measure of consumer confidence to accurately forecast the results of the consumer confidence poll, and
measure President Obama's job approval rating and correlate it with Gallup's daily tracker at a level of r=72.5%.
However, the same methodology failed miserably when it came to the 2008 presidential horse race, obtaining a correlation of r=-8% with Obama's level of support in the Gallup tracker.
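The mechanics described above are simple enough to sketch in a few lines of Python. Everything below - the word lists, the tweets and the scoring rule - is a hypothetical stand-in for illustration, not the lexicon or data the CMU researchers actually used:

```python
import re

# Minimal sketch of the keyword-selection plus word-counting approach the
# study describes. The word lists, tweets and scoring rule here are
# hypothetical illustrations, NOT the researchers' actual lexicon or data.
POSITIVE = {"awesome", "great", "good", "love"}
NEGATIVE = {"sucks", "terrible", "bad", "hate"}

def select_tweets(tweets, keyword):
    """Select tweets by the mere presence of a single keyword."""
    return [t for t in tweets if keyword in t.lower()]

def sentiment_ratio(tweets):
    """Ratio of positive to negative word counts across the selected tweets."""
    tokens = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    pos = sum(w in POSITIVE for w in tokens)
    neg = sum(w in NEGATIVE for w in tokens)
    return pos / neg if neg else float("inf")

tweets = [
    "Obama's awesome, great speech tonight",
    "obama sucks, terrible policy",
    "I love Obama",
    "Watching the game tonight",
]
selected = select_tweets(tweets, "obama")
print(len(selected), sentiment_ratio(selected))  # 3 tweets selected, ratio 1.5
```

The appeal, and the peril, is visible even at this scale: selection by one keyword is crude, and the sentiment score depends entirely on the word lists chosen.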
It seems, then, that aggregate Twitter sentiment shows great promise as a polling substitute for high-volume and relatively binary opinions and attitudes: are you hot or cold on the economy, do you like or dislike the President? But the many-sided nature of items like a campaign horse race or the health care debate makes it difficult to extract meaningful opinions amid a crush of unstructured data.
Yet this is no reason for pollsters to shrug away these results. There is great predictive power hidden away inside this sort of latent data just waiting for the extraction of opinions, attitudes and trends in voter sentiment. Pollsters would be wise to begin incorporating these data into their work: analyzing Google Trends search data, counting Facebook friends, YouTube views and web traffic, or simply doing more with the rich verbatim data we typically capture in our surveys and focus groups. (And it's not just politics where this is applicable; tweet volume and sentiment have also been shown to be an incredibly accurate predictor of a movie's box office returns).
This study also highlights a debate the polling community must have sooner or later: can the shortcomings of dirty data be overcome by a mix of sheer volume, sound data preparation/manipulation and savvy analysis? In this new era of IVR, online panels, social media and big data, the answer is increasingly pointing to yes - especially when you consider the advantages of speed, cost and access that these non-traditional data collection methods enjoy.
Finally, it's worth taking a moment to consider just how stunningly impressive these results are. What level of precision might there have been with a more sophisticated methodology? Tweets were selected for study based merely upon the presence of a single word - imagine the accuracy if selection allowed for the use of synonyms, alternate spellings or Boolean operators. Moreover, as the researchers themselves point out, there were no geographical restrictions and no consideration of either online idioms or the practice of retweeting.
This is an exciting, important study, and the polling community should be taking it very seriously. It is well worth your time to read the whole thing, and I'm very curious to hear your take on it in the comments section below.
Is America on the verge of a European-style multi-party democracy? A May 12th Wall Street Journal/NBC poll finds that 31% of American adults believe that "The two-party system is seriously broken, and the country needs a third party." This sentiment is consistent with my analysis of partisan voter registration, which shows a slight rise in the number of people who are eschewing the major political parties to register with minor political parties or affiliate with no political party.
What is the cause of discontent with the major political parties? The most accessible answer is that it is a product of the times. Voters' attitudes are tied to the economy and they are expressing their displeasure with those in power through multiple measures of low trust in government, low approval of the political parties, and a desire for an alternative.
The economy is likely a major factor in voter anger toward the parties, but there are long-term historical trends that also shed light on why minor parties may be poised for modest electoral success. In the figure below, I plot the effective number of parties elected to the US House from 1870 to 2006 (black line), along with a measure of the ideological cohesiveness of the two major parties' caucuses (blue and red lines).
The effective number of political parties weights the number of parties by their relative strength. When two major parties hold all the seats and are near parity, the effective number of parties is close to two. When one party dominates the other, it is less than two. When minor parties win seats, it is greater than two. For it to be close to three, essentially at least three parties must be near parity.
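The index being described is, presumably, the standard Laakso-Taagepera effective number of parties, N = 1 / sum of squared seat shares. A quick sketch with made-up seat counts reproduces the three cases described above:

```python
# The effective number of parties (presumably the Laakso-Taagepera index):
# N = 1 / sum(s_i ** 2), where s_i is party i's share of seats.
# The seat counts below are invented to illustrate the three cases in the text.
def effective_parties(seats):
    total = sum(seats)
    return 1.0 / sum((s / total) ** 2 for s in seats)

print(effective_parties([218, 217]))      # two parties at near parity -> ~2.0
print(effective_parties([300, 135]))      # one party dominant -> less than 2
print(effective_parties([200, 200, 35]))  # minor party wins seats -> above 2
```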
Since the American Civil War, the effective number of political parties in the US House has generally only been substantially greater than two in the period before 1920. Two significant minor party movements during this period, the Populist and Progressive movements, account for the minor party candidates elected during this period. Indeed, the success of the minor parties during the era is understated, as major party candidates would often run "fusion" campaigns with minor parties, by running under such labels as the "Democrat Populist" and "Republican Progressive" parties.
What is also distinctive during the period of minor party success is the ideological cohesiveness of the major political parties. I analyze the ideology of US House members using a measure widely used by congressional scholars known as DW-NOMINATE scores, which identify the ideology of members from the votes they cast in Congress. A measure of ideological homogeneity is the standard deviation of these ideology measures. When it is low, the congressional parties are more ideologically cohesive.
Minor parties have greater electoral success when the major parties are more ideologically cohesive. If the relationship is not visually apparent, consider that the correlation coefficient between the effective number of US House parties and the standard deviation of the Republican ideological voting scores is -0.39, a strong relationship. The correlation for the Democrats is a weaker -0.20. This is due to the greater ideological dispersion of the congressional Democrats when the conservative Southern and liberal non-Southern wings of the party coexisted in the uneasy New Deal coalition formed by FDR during the Great Depression, a coalition that began fragmenting during the Civil Rights movement of the 1960s. Since 1982, the congressional parties have again become more ideologically cohesive.
As American parties become more ideologically rigid, more space is provided to minor party candidates to flourish. If so, then we may be on the verge of at least some electoral success for minor political parties. However, this success is most likely to be fleeting. Since the dissolution of the Whig Party prior to the Civil War, the major parties have proven themselves quite capable of absorbing these minor parties into their electoral coalitions.
5/7-11/10; 1,002 adults, 4.3% margin of error
Mode: live telephone interviews
Do you favor, oppose, or neither favor nor oppose increasing drilling for oil and gas in coastal areas around the United States?
50% Favor, 38% Oppose
Which is more important to you as you think about increasing drilling for oil and gas in coastal areas around the United States?
49% The need for the U.S. to provide its own sources of energy
47% The need to protect the environment
Approval/Disapproval on Handling Oil Spill
Obama: 42 / 33
British Petroleum: 32 / 49
35% Democrat, 26% Republican, 25% independent, 14% Don't know (chart)
Preference for Congress after 2010 elections
Democratic Control: 44%, Republican Control: 44%
Recently Passed Health Care Plan
38% Good idea, 44% Bad idea (chart)
"The Arizona law makes it a state crime to be in the U.S. illegally. It requires local and state law enforcement officers to question people about their immigration status if they have reason to suspect a person is in the country illegally, making it a crime for them to lack registration documents."
64% Support, 34% Oppose
"How likely do you think it is that the decision in Arizona to promote strong enforcement of immigrants who are NOT in the U.S. legally will lead to discrimination of Hispanic or Latino immigrants who ARE in the U.S. legally?"
66% Likely, 31% Unlikely
More Off-Shore Drilling
60% Support, 34% Oppose
"Do you think that the federal government is doing enough or is not doing enough to deal with the environmental problems caused by the recent Gulf Coast oil spill?"
43% Enough, 45% Not Enough
"In general, do you approve or disapprove of using racial or ethnic profiling in combating terrorism?"
51% Approve, 43% Disapprove
State of the Country
34% Right Direction, 56% Wrong Track (chart)
32% Democrat, 24% Republican, 40% independent (chart)
Three new polls out this morning on Pennsylvania's Senate primary all lead to the same conclusion: The race between Senator Arlen Specter and Congressman Joe Sestak has closed dramatically in the last month and the three most recent snapshots of the race suggest a very, very close contest. That much is crystal clear.
Maybe I'm feeling grumpy today, but aside from that obvious big trend -- the movement toward Sestak over the past month -- I'm seeing a lot of web chatter explaining day-to-day patterns that are mostly statistical noise. So I want to pass along a few warnings and a thought or two about the numbers before us today.
The latest Muhlenberg College/Morning Call track, based on interviews conducted Saturday through Tuesday, shows a dead even result (45% to 45%). The new survey from Quinnipiac University, conducted from last Wednesday through Monday night, has Specter "ahead" by a two-point margin (44% to 42%) that falls well within the survey's +/- 3% margin of sampling error. And among just 150 likely Democratic primary voters surveyed by Franklin & Marshall College from Monday last week through Sunday, Sestak has a statistically meaningless two-point advantage (38% to 36%).
Two quick thoughts about the Franklin and Marshall poll: I have mixed feelings about their decision to release results for just 150 likely Democratic primary voters. On the one hand, there is nothing inherently wrong with a sample of 150. It just has a larger margin of random sampling error (+/- 8%) than a sample of 400 (+/- 5%) or 1,000 (+/- 3%). Does crossing the threshold from +/-5% to +/-8% take us over some magic line? I don't think so.
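Those margins all follow from the standard 95% confidence formula for a proportion, z * sqrt(p(1-p)/n), evaluated at the conservative worst case p = 0.5. A quick check reproduces the figures cited:

```python
import math

# The cited margins of error follow from the standard 95% interval for a
# proportion, z * sqrt(p * (1 - p) / n), at the conservative p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (150, 400, 1000):
    print(n, round(100 * margin_of_error(n), 1))  # -> 8.0, 4.9, 3.1 points
```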
On the other hand, I do not understand the logic of sending out an email to journalists touting the 38% to 36% result, with a 31-page report attached, yet omitting from both documents the undecided percentage on the vote question among likely voters, the results when undecided likely voters were asked how they lean, and the fact that they interviewed just 150 likely voters (though the report does include the +/- 7.9% margin of error for that subgroup). We got the n=150 number by reading the poll article in the Scranton Times Tribune.
Even more curious is the decision to include a set of cross-tabulations in the report (Table A-1 on page 9) that appears to break out those 150 interviews into even smaller demographic subgroups. How many interviews were conducted among typically smaller subgroups such as the non-whites, veterans or 18-34-year-olds among the likely voters? That they don't say.
So much for transparency.
As long as we're focused on sample sizes, I also want to pass along a tip for those of you who, like me, have gotten addicted to the daily tracking fix from Muhlenberg College. Their rolling-average design has important advantages, but one often overlooked drawback: We tend to get seduced by the false sense of precision inherent in those smoothed-out daily numbers. Tables like the one below (reproduced from the most recent Muhlenberg release) present something of an optical illusion. Do you notice a few apparent "trends" that last, oh, about three or four days before receding? We notice that smooth movement and forget that it's an artifact of the rolling averages. More importantly, we also forget that the +/- 5% margin of error applies separately to every number in the table.
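A tiny simulation makes the illusion concrete. The "daily" numbers below are pure noise around a flat 45% (an invented series for illustration), yet the four-day rolling averages drift in smooth runs that look like short-lived trends:

```python
import random

# Pure noise around a flat 45% "true" support level, smoothed with a
# four-day rolling average. The series is invented for illustration: the
# smoothed numbers drift in runs that look like short-lived trends even
# though nothing underlying is changing.
random.seed(1)
daily = [45 + random.gauss(0, 3) for _ in range(12)]
rolling = [sum(daily[i:i + 4]) / 4 for i in range(len(daily) - 3)]
print([round(x, 1) for x in rolling])
```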
Let's consider the same data a little differently and look just at the non-overlapping samples (as in the table below). With the margin of error in mind, most of the differences between the three columns are indistinguishable from statistical noise. The only clear trend is the 10-point increase in Joe Sestak's favorable rating over the 12-day span of the tracking. Yes, it's tempting to read great meaning into the small fall and rise in Specter's vote and favorable percentages in the table, but neither change is anywhere close to statistically meaningful.
(I'm most struck by the lack of trend in Sestak's low UNfavorable rating -- remarkable given that the Specter campaign has reportedly been pounding Sestak with attack ads for the last two weeks. But I digress...)
So what do we know? There is no question that the most recent results -- from Muhlenberg, Quinnipiac and even Franklin & Marshall -- represent a big change from where the race stood just a few weeks ago, but any real trends over the last few days are just too small to be visible amidst the statistical noise.
Update - A reader emails:
I also liked this part [of the F&M report, p. 4, emphasis added]:
The May Franklin and Marshall College Poll shows Joe Sestak with a narrow advantage over incumbent Senator Arlen Specter among those Democrats who are most likely to vote, 38% to 36%, with about one in four likely voters still undecided. When undecided voters who are leaning toward a candidate are allocated, the pool of truly undecided voters is about 15%.
So they push leaners to get it down to 15% undecided, but they never tell you how those leaners break? Also, they list RV and LV results for the Republican Governor primary matchup, but they never tell you the LV sample size anywhere.
Michael Wolf is an Associate Professor of Political Science at Indiana University. He can be reached at firstname.lastname@example.org.
Andrew Downs is Director of the Mike Downs Center for Indiana Politics and is an Assistant Professor of Political Science at Indiana University. He can be reached at email@example.com.
Craig Ortsey is a Continuing Lecturer of Political Science at Indiana University. He can be reached at firstname.lastname@example.org.
The authors would like to thank Brian Schaffner for his suggestions on an earlier draft of this piece.
Tea Party observers have floated two explanations for the group's emergence since their unexpectedly intense protests last year. The first explanation - embraced by conservative commentators and the movement itself - is that the Tea Party is comprised of grassroots citizens upset at the direction of the country and the deficit. Democrats champion a second explanation, that the Tea Party is composed of Republicans upset that President Obama and the Democrats control Washington. If the Tea Party is a movement against Washington politicians no matter their political stripes, then establishment Republicans must be wary of disaffected voters picking off their incumbents in primaries and President Obama faces a genuine rejection among voters he attracted in 2008. If it is simply Republicans upset at losing the presidency, 2010 looks more like a normal midterm election than an anti-incumbent revolt.
To get a better feel for the political dynamics behind the Tea Party, The Mike Downs Center for Indiana Politics asked registered Indiana voters whether they identified with the Tea Party, their vote intention for the Republican primary, and a series of election-related questions. Our first noteworthy finding is that 36% of registered likely Hoosier voters identified themselves with the Tea Party, while 61% of Republicans did.
Contrary to the "throw the bums out" rhetoric surrounding the movement, however, a plurality of Tea Partiers intended to vote for Dan Coats to be the Republican nominee for US Senate. Coats, a former senator and lobbyist who had homes in North Carolina and Washington, D.C. (but not Indiana) prior to jumping into the race, was recruited by the National Republican Senatorial Committee and was clearly the Washington establishment candidate. The candidates who reached out most aggressively to the Tea Partiers, Bates, Behney, and Stutzman, did relatively better with Tea Partiers than with non-Tea Party identifiers, but they still lagged behind Coats. Between this poll and the election, Stutzman's support surged, but that movement was more likely due to Senator Jim DeMint's Senate Conservatives Fund's late but strong support of his candidacy than due to a grassroots shift of non-Republican Tea Partiers looking his way.
So where were the pitchforks and torches against establishment Washington? Our findings demonstrate that Tea Partiers are overwhelmingly Republican. The blue bars in Figure 1 show the percentage of Indiana Tea Partiers in each partisan category. Four in ten Hoosier Tea Partiers are strong Republicans, and when weak Republicans and independents who lean Republican are added to the strong Republicans, nearly 80 percent of Tea Party identifiers are Republican Party adherents. Less than 10 percent of Tea Partiers are Democrats or independents who lean Democratic. True independents make up less than 13 percent of Tea Partiers.
The second piece of evidence that supports the position that the Tea Party is a Republican phenomenon comes from the red bars of Figure 1. Here the percentage of Indiana Tea Partiers who voted for Obama in 2008 is presented across each category of party identification. Less than seven percent of all Tea Party adherents voted for Obama, and they are largely comprised of a handful of disappointed Democrats. The differences between the red and blue bars represent McCain supporters, implying that the great majority of Tea Party independents were McCain voters and even half of the Tea Party Democrats were McCain voters. The genesis of Tea Party identification does not result from a rejection of Obama by his own supporters; rather, it arises more from upset McCain supporters - hardly a broad-based grassroots movement.

What explains this pattern of Tea Party identification, which looks as if it may have begun on November 5, 2008 rather than after the stimulus bills or auto bailouts? If the Tea Party were a response to the conditions of the country or frustration with spending, then a negative view of the direction of the country or a concern over the deficit should lead to an even distribution of Tea Party identification across party identification, or perhaps a bell-curve distribution concentrated among those independents who identify with the Tea Party. To test for these possibilities, we ran a logit model that yielded four significant explanatory variables. Two of these variables are issues associated with the Tea Party: believing that the US is on the "wrong track," and holding that the deficit is the most important issue facing the US. The other two significant variables are longer-term determinants: party identification and voting for John McCain in 2008.
A factor analysis shows that party identification, view of national direction, and 2008 presidential vote all hang together as a single factor (the results of the logit model and factor analysis are available upon request). These outcomes imply that it is unlikely that the distribution of those viewing the national direction poorly is separate from Republican identifiers who voted for McCain. However, the salience of the deficit issue may still lead non-Republicans to be more apt to identify with the Tea Party and that distribution may be concentrated outside of Republicans.
Figure 2 indicates that this hypothesis is not correct. It presents the predicted probability of identifying with the Tea Party when one views the deficit as the most important issue (blue bars), the probability of Tea Party identification when the respondent believes that the deficit is the most important issue and views the US as being on the wrong track (maroon bars), and these two factors combined with voting for John McCain in 2008 (yellow bars), across each category of party identification. The overall message of this figure is that party identification conditions all of the factors that increase the probability of Tea Party identification. The distribution of deficit hawks' likelihood of identifying with the Tea Party is not a bell-shaped curve centered around independents; in fact, it follows the strength of party identification in a nearly perfect step-by-step progression. When this factor is combined with the view that the country is on the wrong track and (in a second step) with having voted for McCain, it is clear that the robust explanation for Tea Party identification is Republican Party identification rather than a populist reaction to national direction and deficits.

This result becomes even more dramatic when Figure 2 is juxtaposed against Figure 1. The predicted probability of Democrats identifying with the Tea Party given these attitudes looks impressive in Figure 2 (roughly a 0.5 to 0.6 probability of identifying with the Tea Party when they hold these attitudes and voted for McCain). However, there are almost no Democrats who hold these attitudes and who voted for McCain. Only nine of the 343 Tea Party identifiers are strong Democrats, weak Democrats, or Democratic-leaning independents who hold these attitudes and voted for McCain. In other words, the maximum 0.6 probability of Tea Party identification by Democrats of any stripe given these conditions is deceptively strong.
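For readers curious about the mechanics, predicted probabilities like those in Figure 2 are generated by evaluating a fitted logit at chosen covariate profiles. The sketch below uses invented coefficients purely for illustration; they are not the estimates from the actual model:

```python
import math

# Sketch of how predicted probabilities like Figure 2's are produced:
# evaluate a fitted logit at chosen covariate profiles. The coefficients
# below are invented placeholders, NOT the estimates from the actual model.
def logit_prob(intercept, coefs, x):
    """P(identify with Tea Party) = 1 / (1 + exp(-(b0 + b . x)))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: party id (1 = strong Dem .. 7 = strong Rep),
# deficit most important (0/1), wrong track (0/1), voted McCain (0/1).
b0, coefs = -5.0, [0.55, 0.8, 0.9, 1.2]

for party in (1, 4, 7):  # strong Democrat, pure independent, strong Republican
    print(party, round(logit_prob(b0, coefs, [party, 1, 1, 1]), 2))
```

With coefficients shaped like these, the probability of Tea Party identification steps up with partisan strength, which is the step-by-step pattern the figure describes.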
On the other hand, it is very telling for Republicans. Indeed, the variable with the largest marginal influence on Tea Party identification is voting for McCain in 2008, but for Hoosiers this act is intertwined tightly with Republican Party identification and viewing the country on the wrong track. Of the two explanations for the Tea Party's rise (a grassroots non-partisan movement upset at Washington policy versus Republican frustration with losing the 2008 election and the Obama administration's policies), our evidence from Indiana supports the latter. The Tea Party in Indiana is a Republican phenomenon whose effects will most likely be on voter mobilization rather than voter choice in next November's elections. While these results are only from one state, there is reason to think that a state with a culture of Midwestern agricultural individualism would be more likely than most states to have a Tea Party movement independent of partisan politics. The fact that it is not bodes ill for the grassroots explanation being correct in other states. The Tea Party is popular because it has provided aggrieved Republicans with a "reset" button unconnected to the past. Rather than voicing their frustrations by placing "Don't Blame Me! I Voted for McCain!" bumper stickers on their cars (which we do not expect to see soon in Indiana or elsewhere), the development of the Tea Party has operated as a convenient vehicle for Republican grievances that is unconnected to the unpopular end of the Bush era.
This weekend, in response to a post I wrote about possible Pennsylvania Senate match-ups, Alan Reifman asserted that Toomey "is too far to the right for Pennsylvania." When I saw Reifman's post, I was going to respond "but Pennsylvanians elected Rick Santorum... twice." But before I did, I decided to contrast Santorum's and Toomey's DW-Nominate scores. DW-Nominate scores classify House and Senate members as liberal or conservative based on all their roll call votes that can be identified as liberal or conservative. These scores allow one to compare how rightward or leftward legislators are on a single -1 to 1 scale, with higher positive scores indicating a more conservative record*. What I found surprised me.
Using joint House and Senate scaling (which treats the House and Senate as a single body to compare scores across chambers), we find that Pat Toomey (.718) had a considerably more conservative voting record than Rick Santorum (.349). To put those numbers into context, Lincoln Chafee (the ultimate liberal Republican and now independent) had a DW-Nominate score of .002 and Republican Arlen Specter had a score of .067. Specter was slightly to the right of Chafee; Santorum was considerably right of Chafee; and Toomey was much further right still.
Still, I wanted to get a better idea of how conservative Toomey's voting record was. So, I pulled the DW-Nominate score of every United States legislator (House and Senate) since 1995**. Toomey is on the rightward edge of even the GOP caucus, as seen in the percentage histogram below, while his possible Democratic opponents in the 2010 Pennsylvania Senate contest are actually slightly more centrist than their party as a whole. Indeed, of the 1,004 legislators to receive a DW-Nominate score for their career since 1995, Toomey ranked as the 22nd most conservative.
Toomey ranked more conservative than 97.9% of all United States legislators since 1995. He had a more conservative voting record than J.D. Hayworth and Jim DeMint, and was about as conservative as Jesse Helms. Only Tom Coburn and Tom Tancredo scored further to the right.
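The percentile figure follows directly from the rank. A quick sketch of the arithmetic, using only the two numbers quoted above (1,004 legislators, rank 22); the small gap between 97.8% and the 97.9% in the text depends on whether the legislator himself is counted among those he outranks:

```python
# Percentile-rank arithmetic behind the Toomey figures (a sketch; the
# rank and total come from the post, not from raw DW-Nominate data).
total = 1004  # legislators with a career score since 1995
rank = 22     # Toomey's rank, counted from the most conservative end

below = total - rank  # legislators with a less conservative record: 982
pct_more_conservative = below / total * 100

# Counting Toomey himself among those at or below his score gives
# 983 / 1004 ~= 97.9%, the figure used in the text.
print(f"{pct_more_conservative:.1f}%")  # prints 97.8%
```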
To put it into perspective, Pat Toomey would most likely be the second most conservative Republican in the United States Senate, which would be quite an accomplishment considering Pennsylvania has supported every Democratic presidential candidate since 1992 (and Obama won it by 10%).
*For those interested, you can read a more in-depth non-technical explanation of DW-Nominate scores here and a more technical discussion here.
**I use 1995 as the cutoff because prior to the 1980s, a legislator's liberal-conservative record was also highly correlated with a second dimension of DW-Nominate scores. Since the 1980s, however, the scores I use correlate highly with a legislator's overall voting record. Also, many conservative Democrats left Congress after 1994, and their presence had made Congress less polarized. To correctly contextualize each legislator's record discussed here, I decided to use 1995 as my starting point for scores.
Trends in party voter registration since the 2008 presidential election suggest that a small, but perhaps meaningful, number of registered voters are abandoning the major political parties in favor of minor political parties or are forswearing any party affiliation.
Twenty-eight states, plus the District of Columbia, allow persons to register with a political party. Among these states, the reported number of registered voters has declined by 2.6 million or 2.6% since the November 4, 2008 presidential election. This decline is expected. Election administrators remove people who have moved from their address or are otherwise no longer eligible. Absent an interesting election to stimulate new registrations, the voter rolls are not replenished as fast as they are purged of these defunct registrations.
The Democratic Party has experienced a slightly greater absolute loss in the number of party registrations, at 1.2 million, compared to the Republican Party, at 1.1 million. However, since there are considerably more registered Democrats (a partial consequence of the universe of states that permit party voter registration; see the state numbers below), the Republicans' percentage loss since 2008, at 3.5%, is greater than the Democrats' 2.7% loss.
The number of persons registering with a minor party is actually increasing, by 52,810 or 2.4%. Further, although the number of those unaffiliated with a political party is decreasing, the pace of this decrease, 389,000 or 1.6%, is less than that of the major political parties.
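The reason the smaller absolute Republican loss translates into a larger percentage loss is the smaller Republican base. A back-of-the-envelope sketch, using only the absolute changes and percentages quoted above, recovers the implied 2008 registration bases:

```python
# Implied 2008 registration bases from the post's figures (a sketch;
# the deltas and percentages are taken from the text above).
changes = {
    "Democratic":   (-1_200_000, -2.7),
    "Republican":   (-1_100_000, -3.5),
    "Minor party":  (+52_810,    +2.4),
    "Unaffiliated": (-389_000,   -1.6),
}

for group, (delta, pct) in changes.items():
    # base * (pct / 100) = delta  =>  base = delta / (pct / 100)
    implied_base = delta / (pct / 100)
    print(f"{group:>12}: implied 2008 base ~ {implied_base:,.0f}")
```

The implied Democratic base (roughly 44 million) is substantially larger than the Republican base (roughly 31 million), which is why a similar-sized absolute loss registers as a larger percentage for Republicans.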
America is a long way from having a viable multi-party system at the federal level, like we are currently witnessing in the United Kingdom. However, these trends are consistent with the notion that some American voters are willing to express their frustration with the major parties by registering with a minor political party or affiliating with no party. Indeed, the increase in unaffiliated registrations is a long-term phenomenon observed since the 1970s.
Now, it should be noted that those who self-identify as "unaffiliated" tend to align themselves with a political party at the polling booth. But this raises a question that scholars of political behavior have not adequately addressed: why do these independents act like partisans but not want to associate themselves with a party? Registration-based sample surveys could be used to address this by asking probing questions of unaffiliated registrants. I hope that this is a research agenda someone would be interested in exploring.
Returning to the increase in minor party registrations: as discussed in Paul Herrnson and John C. Green's edited volume Multiparty Politics in America, people who identify with minor political parties tend to be more politically sophisticated than those who are unaffiliated with any political party. These people tend to vote and volunteer for campaigns more often than the general public. Their absence from the major political parties may adversely affect major party candidates' campaigns, particularly where a minor party candidate is on the ballot.
This is not simply a Tea Party movement. There are a number of different minor political parties that range across the entire ideological spectrum. For example, in Maine the only state-recognized minor party is the Green Party, which has seen an increase of 8,790 or 34.1% since the 2008 presidential election. In North Carolina the only state-recognized minor party is the Libertarian Party, which has seen an increase of 3,685 or 101.3%. Maryland may demonstrate how this trend is an expression of frustration. The increase of 21,167 or 29.2% is entirely among the 23,897 new registrations with the Maryland Independent Party. A check of the Maryland Independent Party website shows little activity to account for a grassroots groundswell that trebled the party's support.
There are a number of interesting trends worthy of mention among the state statistics. In the two states where voter registration increased, Delaware and Colorado, only the number of registered Republicans decreased. The Colorado trend may be worth watching in the coming months since statewide elections are expected to be competitive. The only states where Republican registration increased are Louisiana and New Jersey. In 2009, New Jersey Republican Gov. Christie won a closely-watched election and Louisiana held some state and local elections. This suggests that Republican registration may yet recover during the fall 2010 elections as the campaigns gear up.
Now, for some data notes. In the state table, I note the date of the most recent voter registration report from each state. In some cases these reports are quite recent, while for a few the last report may have been for a fall 2009 election. I report active plus inactive registration statistics, where available. Inactive registrants are people who have not voted recently; because they are still registered to vote, I have chosen to include them. Eight states do not have statistics for minor party registration, because the state did not provide separate statistics for minor party and unaffiliated registrants in 2008 or in their current report. Among these states, unaffiliated registrations tend to be either increasing or not experiencing as steep a decline as the major parties, suggesting that minor party activity is hidden within these numbers.
Finally, some caveats. Partisan registration is not a perfect measure of the state of national partisanship for several reasons. We cannot know from these statistics what is happening in the twenty-two states that do not have party registration. People who identify themselves with a political party in surveys may not register with a party because they do not intend to vote in primaries. They may wish to vote in another party's primary because that party dominates in the general election and they want to cast a more meaningful vote in that party's primary. They may not run out and change their party registration whenever they change their party self-identification. Despite these limitations, none satisfactorily explains why we would observe an increase in minor party registrations, and, in my opinion, none adequately explains why unaffiliated registrations would show less of a decline than major party registrations.
Later this week, I'll be in Chicago for the annual conference of the American Association for Public Opinion Research (AAPOR). One of the more newsworthy aspects of this year's conference is the "Transparency Initiative" of AAPOR's current president, Peter Miller.
Until this week, the initiative has been mostly an idea, born out of AAPOR's recently higher profile in publicizing "the failure of survey organizations to be open about their research methods," as Miller put it last fall. Regular readers may recall AAPOR's work to investigate the polling failures in New Hampshire and elsewhere during the 2008 presidential primaries, and its recent public censure of two non-members for failing to disclose basic facts about their methodologies: Dr. Gilbert Burnham, regarding research he published on civilian deaths in Iraq, and Strategic Vision, LLC, regarding pre-election polling data they released in 2008.
Last fall, Miller concluded that AAPOR's efforts to date had been inadequate. "Despite decades of work," he wrote, "transparency in public opinion and survey research remains an elusive goal." The investigation of the polls in New Hampshire in 2008 was a focal point:
AAPOR's Ad Hoc Committee that studied pre-primary polls in the winter and spring of 2008 intended to release its report in time for our annual meeting in May of that year. The members of the committee hoped their findings would inform polling practice in the general election. Instead, the committee issued its report in April 2009, about a year late, because many organizations that published pre-primary poll results took so long providing methodological information. In the end, the committee had to publish its findings based on partial data.
It is obvious that if an AAPOR committee cannot efficiently gather methodological information for a report commissioned in the aftermath of a significant polling failure (in New Hampshire in 2008), then transparency is not the guiding norm that it should be in our profession.
So Miller proposed that AAPOR follow a different course. Rather than focus solely on violations of the AAPOR ethical code, Miller proposed to create positive incentives, to "give AAPOR's stamp of approval to survey organizations for timely and complete methodological disclosure." Toward that end, he also proposed to create an AAPOR administered archive -- a "system for collecting and storing disclosed information in one place" -- and to "provide education and assistance" to survey organizations that pledge to routinely deposit information about their surveys to that archive.
This morning, Miller gave a hint of what is coming later this week. He sent an email message to the entire AAPOR membership, urging members "in a position to decide whether your survey organization can participate...to join the initiative." Miller writes that he plans "to publicize the names of organizations that have agreed to help during my Presidential Address" on Friday. He adds:
This is a big commitment for AAPOR, maybe the biggest thing that the Association has ever tried to do. At the same time, it appears that such a program is essential at this time when the status and credibility of our profession is under unprecedented threat. It can move AAPOR from an occasional, largely ineffective re-actor in the realm of survey standards to a proactive positive force for the profession. And it can coalesce polling and survey organizations around a common goal of openness and integrity.
Regular readers will know that Miller's initiative jibes neatly with my own ongoing interest in improving disclosure of polling methodology (discussed most completely here). As such, it should come as no surprise that I strongly support this initiative. Among other things, I will be a participant along with Miller in the opening plenary session on Thursday night. I look forward to reporting more details about Miller's transparency initiative, and about the rest of the AAPOR conference. This week especially, you will want to stay tuned...
[Interests disclosed: I served on AAPOR's Executive Council from 2006 to 2008]
Do you support or oppose President Obama's health care plan, or do you not have an opinion?
43% Support, 49% Oppose (chart)
Do you support or oppose drilling for oil off the American coastline?
55% Support, 30% Oppose
Does the oil spill in the Gulf of Mexico make you more or less likely to support further drilling for oil off the American coastline, or does it not make a difference to you?
21% More likely, 43% Less likely, 36% No difference
41% Democrat, 35% Republican, 24% independent/other (chart)
Pew Research Center
5/6-9/10; 994 adults, 4% margin of error
Mode: Live telephone interviews
Obama Job Approval
47% Approve, 42% Disapprove (chart)
Economy: 41 / 51 (chart)
Terrorist threats: 49 / 37
The oil leak in the Gulf of Mexico: 38 / 36
As I read some possible government policies to address America's energy supply, tell me whether you would favor or oppose each...
Promoting the increased use of nuclear power:
45% Favor, 44% Oppose
Spending more on subway, rail and bus systems:
65% Favor, 28% Oppose
Increasing federal funding for research on wind, solar and hydrogen technology:
73% Favor, 22% Oppose
Allowing more offshore oil and gas drilling in U.S. waters:
54% Favor, 38% Oppose
How would you rate the job ______ has been doing responding to the oil leak in the Gulf of Mexico?
The federal government: 33% Excellent/Good, 54% Only fair/poor
B.P.: 24% Excellent/Good, 63% Fair/Poor
In your opinion, did President Obama do all he could to get the government's response to the oil leak going quickly or do you think he could have done more?
36% Did all he could, 47% Could have done more
From what you've seen and heard, do you think that the oil leak in the Gulf of Mexico is...
55% A major environmental disaster
37% A serious environmental problem, but not a disaster
4% Not too serious an environmental problem
33% Democrat, 24% Republican, 35% independent (chart)
Note: This election is a special election to fill the seat of Neil Abercrombie (D-HI), who resigned to run for governor. To fill the seat, one winner-take-all election is held where all the candidates, regardless of party identification, compete.
The U.S. Senate has the constitutional authority to confirm all Supreme Court nominees. Based upon what you know at this time, should the U.S. Senate confirm Elena Kagan as a Supreme Court Justice?
33% Yes, 33% No
My column for this week looks back at how the exit poll, the pre-election polls and seat projections did in last week's elections in Great Britain. Please click-through and read it all.
Just after I filed my column on Friday, Joe Lenski, the co-founder of Edison Research, the company that conducts exit polling for the U.S. television networks, posted the following comment on AAPOR's member listserv (reproduced here with his permission). After noting the remarkable accuracy of the projection they made as the polls closed (a subject I explore in the column), he concludes:
When you consider how complicated the British electoral system is and that the projections are being made on the results of 650 separate constituency races using exit poll interviews from only 130 polling stations, this is a spectacular achievement.
Their success also highlights the challenge that Lenski faces every two to four years. The U.K. exit pollsters conducted just one survey. On November 4, 2008, the U.S. exit pollsters fielded 51 separate surveys, for which they dispatched over 1,000 interviewers to conduct more than 100,000 interviews.
Last week, the U.K. exit pollsters asked just one question (vote preference) because they were charged with just one task: estimating the number of seats won by each party. The typical November 2008 U.S. exit poll asked voters to answer at least two dozen questions about who they were (demographically) and why they made the choices they did.
U.S. exit polls are designed mostly to help explain the results. Their predictive role is mostly supplementary. They help confirm that blow-out contests are really blow-outs, but when the outcome is in any doubt, network election analysts wait for actual vote counts from randomly selected precincts or, if necessary, from all precincts before "calling" the outcome.
The point here is that the term "exit poll" can mean very different things in different places, and the design choices are ultimately up to the television networks and other news organizations that pay for them.
Two minor notes about the column: Technically, the results are known for all but one of the 650 constituencies. The Thirsk & Malton district will not hold its election until May 27 due to the death of one of the minor party candidates in late April. Since the Conservatives won Thirsk & Malton by a huge margin in 2005, most consider this a "safe" conservative seat. Thus, with Thirsk & Malton included, the final result is likely to be 307 seats for the Conservatives, 258 for Labour and 57 for the Liberal Democrats.
Finally, for those wanting to dig deeper into scoring the accuracy of individual pollsters and prognosticators, David Shor has taken a first stab on his Stochastic Democracy blog.
Update: PoliticsHome just posted their own look back at how their model performed. It's worth reading in full, but here are two key excerpts of the commentary from Rob Ford (which follows up on the lengthy four-part exchange between Ford and Nate Silver):
The [PoliticsHome] model did perform well, although there was a large slice of luck involved. We were in fact wrong to assume that the Tories would outperform in the marginals, but this was balanced by Lib Dem underperformance everywhere to deliver roughly the right result.
We did see very clear patterns of differential swing in Scotland, as we predicted, although the differences were even larger than the polls had suggested. There were also differential patterns in Wales and in seats with large ethnic minority populations. These would both have been near the top of my list of expected differential effects, but we had no polling evidence on them so did not incorporate them in our model.
The big story, though, with regard to the UNS vs differential swing debate is that the pattern of swing was remarkably uniform:
The change in Conservative vote varied by less than two percentage points moving from their weakest to their strongest areas, and they actually underperformed somewhat in their weakest areas relative to the average
The change in Labour vote varied somewhat more, but there was no systematic relationship with prior strength - if anything the party performed worse in areas where it started off somewhat weaker.
The change in Liberal Democrat vote showed more evidence of proportionality, falling back three points in the strongest areas while rising in the weaker areas. But even here the evidence of proportional swing was weak and patchy at best.
Given the lack of any clear relationship between prior strength and outcomes, we would expect proportional swing based models to perform quite poorly, and so it has proved.
Update 2: I neglected to link to the "early postmortem" from Anthony Wells of the UK Polling Report, who has posted, and will continue to post, much more on this subject.
Also, the British Polling Council issued its own statement and scoring of the accuracy of this year's final polls:
While not proving as accurate as the 2005 polls, which were the most accurate predictions ever made of the outcome of a British general election, the polls nevertheless told the main story of the 2010 election -- that the Conservatives had established a clear lead. All but one of the nine pollsters came within 2% of the Conservative share, and five were within 1%.
The tendency at past elections for polls to overestimate Labour came to an abrupt end, with every pollster underestimating the Labour share of the vote, though all but one were within 3%. However, every pollster overestimated the Liberal Democrat share of the vote.
For those who missed it, the ABC News/Washington Post poll (PDF) released last week included a question about the misperception that President Obama was not born in this country. They found that 20% of Americans think Obama was not born in this country, including 31% of Republicans:
What's striking is that the results are almost identical to the CBS News/New York Times poll released last month, which found that 20% of Americans think Obama was born elsewhere, including 30% of Tea Party supporters:
In short, this myth isn't going away any time soon. For more, see my previous posts on the birther myth and my research with Jason Reifler on the persistence of political misperceptions.