Next Thursday, I will fly to Hollywood, Florida to cover the annual conference of AAPOR -- the American Association for Public Opinion Research. That coverage will include a reprise of last year's popular video interviews with those presenting on topics of interest to Pollster.com readers, as well as comments via Twitter as time permits. I will have more details on our coverage next week, but the videos will all appear right here on Pollster.com.
Meanwhile, for the true polling geeks out there, we would love your input on potential interviewees. The full program is available for download, either as a single PDF file or (if you're looking for something specific) through a search form. If you are a survey geek, are there any particular topics, papers or speakers you really hope I interview? If so, please email me or leave a comment below.
Would you [ROTATE: strongly prefer, somewhat prefer, somewhat oppose, or strongly oppose] the creation of a bipartisan blue-chip commission to lead an investigation into how the US government interrogated detainees during the war on terror?
Here is another follow-up on Monday's back-and-forth between Democratic pollster Stan Greenberg, on behalf of the organization known as Democracy Corps, and Republican Whit Ayres, on behalf of the new group Resurgent Republic. Greenberg helped found Democracy Corps ten years ago as a means to provide "free public opinion research and strategic advice" to those on the Democratic side, while the newly launched Resurgent Republic explicitly aims to provide a similar service for Republicans. Resurgent Republic released its first survey last week; Greenberg had some harsh criticism for it, and Ayres quickly responded.
In his critique, Greenberg asked RR to "explain what about your methodology produces" results for party identification that he considered at odds with other national polls. In my summary of the spat, I asked both pollsters to publicly disclose (a) whether they weight their results by party identification or anything like it and (b), if so, how those weights are determined.
Both have now responded and both say they do sometimes weight on a "case by case basis" to rolling averages of their measurements of party ID or (in the case of Democracy Corps) something close to it.
Jon McHenry, a partner at Ayres, McHenry and Associates, the firm that conducted the first survey for Resurgent Republic, says that in tracking surveys conducted for its clients, they "typically do weight according to the rolling average of all the interviews conducted in the race that year." This approach is similar to the "dynamic weighting" applied by Rasmussen Reports.
Andrew Baumann, senior associate at Greenberg Quinlan Rosner Research, the firm that conducts the Democracy Corps surveys, explains that after weighting on demographics, they check whether they consider the weighted sample "markedly tilted politically or ideologically" on a wide number of questions. If so, they will weight on a three-month rolling average of the "recalled presidential vote," or sometimes recalled congressional ballot.
In other words, every Democracy Corps questionnaire now asks respondents whether they cast a ballot in November 2008, and if so, which candidate they voted for. They weight on recalled vote questions because they theoretically measure "behavior rather than attitude for most people - thus potentially, less influenced by political events of the moment," though they concede that respondents often report past voting inaccurately. Their argument is that while a recalled vote question is not perfect, in theory at least, respondents should report the same 2008 vote choice in April that they would have reported in February. If a survey is different on that measure, they are more comfortable that the difference is random error and not a real change.
On their most recent surveys, Resurgent Republic did not weight by party, while Democracy Corps did weight on recalled presidential vote to match the weighted-only-by-demographics average obtained on their last three surveys.
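Mechanically, this kind of dynamic weighting is straightforward: average the party distributions from prior surveys to form a moving target, then scale each respondent's weight by target share over observed share for their category. A minimal sketch (the distributions below are hypothetical, not either firm's actual figures):

```python
def rolling_target(past_surveys, keys=("dem", "rep", "ind")):
    """Average the party-ID distributions from prior surveys to form
    the target the new sample will be weighted toward."""
    return {k: sum(s[k] for s in past_surveys) / len(past_surveys) for k in keys}

def party_weight_factors(sample_dist, target_dist):
    """Per-category weight factor = target share / observed share."""
    return {k: target_dist[k] / sample_dist[k] for k in target_dist}

# Hypothetical example: three prior surveys, then a new sample that
# comes in more Republican than the rolling average.
past = [
    {"dem": 0.38, "rep": 0.28, "ind": 0.34},
    {"dem": 0.37, "rep": 0.29, "ind": 0.34},
    {"dem": 0.39, "rep": 0.27, "ind": 0.34},
]
new_sample = {"dem": 0.34, "rep": 0.33, "ind": 0.33}

target = rolling_target(past)                       # dem ≈ 0.38, rep ≈ 0.28
factors = party_weight_factors(new_sample, target)  # rep factor < 1 pulls GOP share down
# Each respondent's existing demographic weight is multiplied by the
# factor for his or her party category.
```

The same arithmetic applies whether the target is party ID (Resurgent Republic's tracking approach) or recalled presidential vote (Democracy Corps); only the variable being matched changes.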
What should we make of this? The practices of both Resurgent Republic and Democracy Corps are different from most national media polls (though not all). Some traditional media pollsters will consider these practices nothing short of anathema. On the other hand, this sort of dynamic party weighting is much more common among pollsters that conduct internal surveys for campaigns, particularly for weekly or nightly tracking programs. As Jon McHenry points out, they struggle for methods that capture the modest trends in party identification that sometimes occur during a campaign "in a way that doesn't freak out the campaign if you get a night that's pretty different than other nights."
Also, the procedures the pollsters describe mostly involve weighting to their own (comparable) measures of party, but not always. Note that Ayres and McHenry opted against weighting their first survey for Resurgent Republic by party ID, partly because they lacked previous comparable measurements of their own, but partly because they considered their results "within the range of party numbers seen this year (and as it turns out, close to the 28 R/35 D result in Gallup's latest release)" (link added).
More tomorrow on the perils of comparing party ID across different pollsters.
Meanwhile, this information has implications for our new chart of party identification. For reasons that are hopefully obvious, our intent is to include only results from pollsters who weight by demographics and not by party ID. So we will definitely exclude Resurgent Republic from the party ID chart, and I strongly "lean" toward excluding Democracy Corps as well. We also need to follow up with the other public pollsters to be sure we are up to date on their procedures.
Any comment from our knowledgeable readers?
The complete verbatim responses from both McHenry and Baumann follow after the jump.
Editor's note: According to their web site, the PEG PAC endorses political candidates in Pennsylvania and "engage[s] in political campaigns on behalf of pro-business candidates seeking statewide political office."
Today the National Center for Health Statistics (NCHS) of the Centers for Disease Control and Prevention (CDC) released its latest twice-yearly report on the prevalence of wireless-only households and households without any telephone service. This latest report, which covers the latter half of 2008, shows the trend toward "cell phone only" households continuing unabated: 18.4% of adults were reachable only by cell phone, while another 1.7% lacked telephone service of any kind.
A refresher for those unfamiliar: CDC monitors the cell-phone-only population because it conducts huge ongoing health "surveillance" surveys via telephone, and as such, it asks questions about telephone usage on its ongoing, in-person National Health Interview Survey (NHIS). Traditionally, telephone surveys have relied exclusively on random digit dial (RDD) samples that reach only those with landline phone service. These regular CDC estimates are a big reason why most national media polls now include samples of cell phone users.
The NCHS estimate of the percentage of American households with only wireless phones increased 2.7 percentage points (from 17.5% to 20.2%), amounting to "the largest 6-month increase observed since NHIS began collecting data on wireless-only households in 2003," according to the report. They also note a big jump in what some call "cell-phone mostlys": "One of every seven American homes (14.5%) received all or almost all calls on wireless telephones, despite having a landline telephone in the home."
This latest report also includes a new feature: a chart with regression lines showing the growth in wireless-only households by age and by year. The chart makes clear that while the wireless-only population remains disproportionately younger, it is also growing rapidly among Americans over 30 years of age.
For further reading: More on the latest report from Carl Bialik. And see this link for all of our recent coverage involving cell phones and surveys.
**One reason why it may seem like less than six months since the last report: in March, NCHS released a supplemental report that provided wireless-only estimates for all 50 states (our summary here).
* "...500 registered voters working from an updated statewide voter file. Only voters with prior vote history in prior election years G06, G04 and/or G02 were contacted, as well as newly-registered voters from the 2008 elections."
[J. Ann Selzer is the president of Selzer & Company and conducts the Des Moines Register's Iowa Poll.]
Can you trust your data when response rates are low? And, in this age of the ubiquitous internet, do we make too much out of its inability to employ random sampling? We asked and answered those questions in a study we conducted a few years ago, commissioned by the Newspaper Association of America. Given recent online discussions of data quality, I revisited this study.
In April and May of 2002, five surveys, each asking the same questions, were conducted in the same market. The only difference was the data collection method used to contact and gather responses from participants. This rare look at what role data collection methodology plays in the quality of data yields some fascinating results. Our goal for each study was to draw a sample that matched the market, to complete interviews with at least 800 respondents for each separate study, and to gather demographics to gauge against the Census.
Method of contact. Our five methods of contact were:
Traditional random digit dial (RDD) phone (landline sample);
Mail panel, contracting with a leading vendor to send questionnaires to a sample of their database of previously screened individuals who agree to participate in regular surveys, with a small incentive;
Internet panel, contracting with a leading vendor to send an e-mail invitation to a web survey to a sample of online users who agree to participate in regular surveys, with a small incentive; and
In-paper clip-out survey, with postage paid.
The market. We selected Columbus, Ohio as our market. It was sufficiently large that the panel providers could assure us we would end up with 800 completed surveys, yet small enough that mid-sized markets would feel the findings fit their situation.
Analysis. To compare datasets, we devised an intuitive method of analysis. For each of six demographic variables (age, sex, race, children in the household, income, and education) we compared the distribution to the 2000 Census, taking the absolute value of the difference between the data set and the Census. For example, our phone study yielded 39% males and the Census documents 48%, so the absolute value of the difference is nine points. We calculated this score for each segment within each demographic, added the scores, then divided by the number of segments to control for the fact that some demographics have more segments than others (for example, age has six segments, education has three). We then summed the standardized scores for each method, and those raw scores give us a comparison allowing us to judge the rank order of methods according to how well each fits the market. Warren Mitofsky helped improve our approach for this analysis.
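The scoring method is simple enough to express in a few lines of code. A sketch of the calculation described above; only the 39%-male phone figure comes from the text, and the rest of the structure is illustrative:

```python
def fit_score(survey, census):
    """Mean absolute deviation from the Census within each demographic
    (so demographics with more segments don't count extra), summed
    across demographics. Lower scores indicate a better fit."""
    total = 0.0
    for demo, census_segments in census.items():
        diffs = [abs(survey[demo][seg] - census_segments[seg])
                 for seg in census_segments]
        total += sum(diffs) / len(diffs)  # average within the demographic
    return total

# The sex example from the text: phone study 39% male vs. Census 48%.
census = {"sex": {"male": 48, "female": 52}}
phone = {"sex": {"male": 39, "female": 61}}
print(fit_score(phone, census))  # 9.0 -- the nine-point gap in the example
```

With all six demographics filled in, the same function yields one standardized score per data collection method, which is what lets the methods be rank-ordered.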
Problem with the internet panel. I'll just note that both panel vendors were told the nature of the project: that we were doing the same study using different data collection methods to assess the quality of the data. I said we wanted a final respondent pool that matched the market. They would send reminders after two days. Participants would get points toward rewards, including a monthly sweepstakes. The internet panel firm e-mailed 7,291 questionnaires; after 850 completed responses were obtained, they made the survey unavailable to others who had been invited. Because the responses to the first 850 completed surveys were so far out of alignment with the Census, we opted to implement age quotas post hoc, systematically substituting respondents in the 45-54 age group (which was overrepresented) with respondents in other age groups (which were underrepresented), recruited through additional invitations to the survey. We reported both sets of findings, those before and after the adjustment.
Results. Unweighted, the RDD phone contact method was best; the in-paper clip-out survey was worst.
Weighting just for age and sex improved all data collection methods. Most notable is traditional mail, which comes close to competing with traditional phone contact after weighting for age and sex. The in-paper survey showed the greatest improvement because its respondent pool was strongly skewed toward older women: one in four respondents (26%) were women age 65 and older, and the median age was 61 (meaning, just to be clear, half were older).
Other data. This study was commissioned by the newspaper industry, so it was natural to look at readership data. Scarborough is to newspapers what Nielsen is to television, and we had their data from the market for comparison. Partly because of the skew toward higher income and especially higher educational attainment in the internet panel, that method produced stronger readership numbers, higher than the Scarborough numbers and higher than any other data collection method. This was one more check on whether a panel can replicate a random sample, and it casts doubt on whether a panel can ever sufficiently control for all relevant factors to deliver a picture of the actual marketplace.
Concluding thoughts. I have to wonder how this study might change if replicated today. The rapid growth in cell-phone-only households probably changes the game somewhat. Panel providers probably do more sophisticated sampling and weighting than was done in these studies. Our mail panel vendor indicated they typically balance their sample draw, though their database in Columbus, Ohio, was just on the low end of being viable for this study, so we're confident less rather than more pre-screening was done. We did not talk with the online vendor about how they would draw a sample from their database, though we repeatedly said we wanted the final respondent pool to reflect the market. It is our sense that little was done to pre-screen the panel or to send out invitations using replicates to try to keep the sample balanced. Nor did they appear to have judged the dataset against the criteria we requested before forwarding it to us; it did not look like the Columbus market. We specified we did not want weighting on the back end because we wanted to compare the raw data to the Census. Had they weighted across a number of demographics, they certainly could have better matched the Census. And maybe that is their routine now. But I wonder how the readership questions might have turned out, for example. The Census provides excellent benchmarks for some variables, but not all. Without probability sampling, I always wonder whether the attitudes gathered from panels do, in fact, represent the full marketplace.
Epilogue. Of course it would be a good idea to replicate this study given recent changes in cell phone use. The non-profit group that commissioned this study just announced it is laying off half its staff, so they are unlikely to lead this quest.
CNN / ORC
4/23-26/09; 2,019 adults, 2% margin of error
Mode: Live Telephone Interviews
25. A proposal called "cap and trade" would allow the federal government to limit the emissions from industrial facilities such as power plants and factories that some people believe cause global warming. Companies that exceed the limit could avoid fines or higher taxes by paying money to other companies that produced fewer emissions than allowed. Would you favor or oppose this proposal? (ASKED OF HALF SAMPLE. RESULTS BASED ON 1014 INTERVIEWS IN VERSION B. SAMPLING ERROR: +/- 3 PERCENTAGE POINTS.)
26. (IF FAVOR) Do you think the "cap and trade" proposal would reduce global warming, or do you think it would help reduce air pollution in general but would not affect global warming directly?
Questions 25 & 26 combined:
18% Favor, reduce global warming
23% Favor, reduce pollution, not global warming
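The two sampling-error figures in the CNN/ORC release (2 points for the 2,019-person full sample, 3 points for the 1,014-person half sample) follow directly from the standard 95 percent confidence formula for a proportion near 50 percent. A quick check:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(2019) * 100, 1))  # 2.2 points for the full sample
print(round(margin_of_error(1014) * 100, 1))  # 3.1 points for the half sample
```

Note this simple formula assumes a simple random sample; pollsters typically round to whole points and may adjust for design effects, which is why releases report "2%" and "+/- 3 percentage points."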
A quick follow-up on yesterday's back-and-forth between Democratic pollster Stan Greenberg on behalf of the organization known as Democracy Corps and Republican Whit Ayres on behalf of the new group, Resurgent Republic. Yesterday, Greenberg had some harsh criticism for the first survey from Resurgent Republic. Last night, Ayres sent a full response to Greenberg's criticism, which I am posting in full after the jump.
Also, in addition to what I posted yesterday, TPM's Eric Kleefeld got a rebuttal from Ayres and then another reaction from Greenberg. Finally, Nate Silver weighed in in defense of Resurgent Republic.
(I'll post soon with my take on the substance of the argument. Apologies for the slow follow-up on this one -- between a dentist appointment this morning and some "backroom" administrative chores this afternoon, I'm a little behind the curve. Again, stay tuned for more soon).
Last week, we told you about a new Republican polling alliance, Resurgent Republic, modeled explicitly on the Democratic effort known as Democracy Corps, that debuted with a national survey and accompanying PowerPoint presentation and strategy memo. Resurgent Republic is led by former Republican Party chair Ed Gillespie and pollster Whit Ayres. Today, Stan Greenberg, the Democratic pollster who leads Democracy Corps, responds with a memo of his own offering both "congratulations" to the new group and a sharp critique of its first survey.
First, Greenberg pointedly raises an issue some Pollster readers have flagged:
I am perplexed that your first poll would be so outside the mainstream on partisanship. Your poll gives the Democrats just a 2-point party identification advantage in the country, but other public polls in this period fell between +7 and +16 points - giving the Democrats an average advantage of 11 points. Virtually all your issue debates in the survey would have tilted quite differently had the poll been 9 points more Democratic.
If the Resurgent Republic poll is to be an outlier on partisanship, then I urge you to explain what about your methodology produces it - or simply to note the difference in your public release.
Next Greenberg hits what he describes as "self-deluding bias in question wording that might well contribute to Republicans digging themselves deeper and deeper into a hole." He provides some specific examples, but I will not try to summarize them all here -- go read the whole thing for the details. We can safely assume, however, that we will hear more from Resurgent Republic soon.
This dialogue underscores a point I tried to make last week. Both Democracy Corps and, now, Resurgent Republic aim to take public the sort of message testing that campaign pollsters do on internal surveys. Both are trying to provide ongoing, open guidance to candidates and activists in their own parties. But we also have to keep in mind that they serve as propaganda vehicles with the explicit aim of helping "shape the debate" over politics.
As such, the sort of pollster crossfire we are about to witness is not a bad thing. Having two mirror-image partisan polling groups on opposite sides, ready to counter and respond to each other's work on an ongoing basis, will help keep both sides on their toes, and maybe, a little more honest.
PS: As long as Greenberg is calling on Gillespie/Ayres to explain their methodology with respect to party identification, I'd like to broaden that request a little. Could the pollsters on both sides of this argument disclose (a) whether they weight their results by party identification or anything like it and (b), if so, how those weights are determined?
We have tentatively included the Democracy Corps surveys on our new party identification chart, because when I last asked they did not weight by party. However, as should be apparent in the chart below, since January their percentages for Republican and Democrat have been mighty consistent:
Update: Whit Ayres responds to TPM's Eric Kleefeld.
Newly Democratic Senator Arlen Specter was a guest of NBC's Meet the Press yesterday and once again cited internal polling as a reason for switching parties. David Gregory asked Specter what had changed since he said last month that he was trying to "bring back voters to the Republican party" because "we need balance" and "we need a second party." Here is Specter's response:
SEN. SPECTER: Well, well, since that time I undertook a very thorough survey of Republicans in Pennsylvania with polling and a lot of personal contacts, and it became apparent to me that my chances to be elected on the Republican ticket were, were bleak. And I'm simply not going to subject my 29-year record in the United States Senate to that Republican primary electorate.
An article in yesterday's Philadelphia Inquirer also dug deeper into the numbers behind Specter's defection and got even more comment on that final internal poll (via Smith):
Specter, 79, said his decision to switch was sealed after final survey results from his own campaign pollster, Glen Bolger, came in April 24.
"The most important number was the approval rating - it dropped from the 60s to 31" percent just in the last few months, Specter said.
Not that long ago, Specter drew standing ovations from mostly conservative crowds around the state as he stumped with Republican presidential candidate John McCain.
But the stimulus vote was a "watershed," Specter said. "It all turned on that. The pollsters had never seen that kind of precipitous drop. It was stark."
The survey found Specter trailing Toomey among Republicans by 15 percentage points in a three-way matchup with antiabortion candidate Peg Luksik, according to sources familiar with the findings. More important, a large majority of those listed as undecided described themselves as "conservative" or "very conservative," meaning that "the pool was such we couldn't overcome" the deficit, one source said.
"The numbers reflected the exodus of moderates from the party in the eastern part of the state," said the source, who requested anonymity because he was not authorized to disclose the results.
As noted here last week, it is not at all unusual for a U.S. Senator or member of Congress to conduct an internal poll to assess their standing and their chances for reelection. It is not at all surprising for those results to influence their decision about whether to run for reelection, retire, or seek higher office. What is unusual, however, is to hear an elected official speak so candidly about the role polling played in such a big decision.
And that gets to the nub of the issue with Specter. He is being candid -- not a trait typically associated with a politician -- yet his candor illustrates what many see as opportunism. One striking aspect of last year's presidential election was the extent to which those who succeeded, even temporarily (Obama, McCain, Huckabee and Palin), were more likely to be perceived as authentic and a departure from typical politics, while those they defeated (Clinton, Edwards and Romney) were more often seen as more political and packaged.
So how will Pennsylvania voters perceive a politician who is, as the LA Times' Doyle McManus put it (via Kurtz), "cheerfully open about the cynicism of his move?"
It is probably too early to tell from the new Quinnipiac poll, released this morning. Yes, it was fielded since Specter's announcement last week, but it lacks any questions aimed specifically at Specter's perceived motives or authenticity. That said, the initial reactions appear to work to Specter's advantage, as both his favorable rating (up from 45% to 52%) and the percentage who say he deserves reelection (up from 38% to 49%) have increased since late March.
Not surprisingly, much of the immediate improvement measured by the Quinnipiac poll comes from Democrats (and is somewhat offset by further declines among Republicans), but it also includes significant positive movement among political independents: In late March, not quite a third of independents (32%) said that Specter deserved to be reelected, while nearly half (49%) said he did not. Now, independents are evenly divided (44% to 44%) on whether Specter should be returned to office.
For the moment at least, Specter's switch may have helped him with Pennsylvania's moderates. Only time will tell.
A Washington Post article last week noticed national movement to the left on issues like gay marriage, illegal immigration, and the legalization of marijuana. The conventional wisdom in Washington says social and cultural issues may continue to galvanize the Republican base, but most voters are thinking about the economy or, to a lesser extent, the war. But in fact, these recent poll findings show voters have moved to the left not just on the economy or the war, but also on social issues.
Gay marriage, in particular, shows the most movement. Looking at past Washington Post/ABC-News polling on the issue, support for legalizing gay marriage is now at a record high (49% support, 46% oppose). It is the first time that fewer than half oppose gay marriage. Importantly, much of this change has come from an increase in "strong" support for gay marriage. Almost as many strongly support gay marriage (31%) as strongly oppose it (39%). In 2006, the last public data point, twice as many reported strong opposition (51%) as strong support (24%).
As Josh Marshall at TPM notes, other public polling also shows a recent shift in support for gay marriage. But he notes little change in a recent Quinnipiac poll, perhaps because of a question wording change in which respondents were asked about "a law in your state" rather than the broader "should it be legal or illegal" for gay couples to marry that we see in the WP/ABC poll. That is a good hypothesis. I would also look to the rest of the Quinnipiac survey for evidence that national views toward gay rights are softening. A majority (58%) disagree that gay marriage is a threat to heterosexual marriage. And majorities support other rights, such as adoption and serving in the military.
There is also real leftward movement in views on legalizing "a small amount of marijuana for personal use." Almost as many favor legalization (46%) as oppose it (52%). In 1985, the WP/ABC-News poll showed nearly three-fourths (72%) opposing.
When it comes to illegal immigration, voters seem to make a distinction between border security and illegal immigrants currently in the country. Views on whether "the US is or is not doing enough to keep illegal immigrants from coming in the country" have been, surprisingly, relatively stable since 2005 (when public data first become available). This recent poll continues to be consistent with past results. But more voters than ever before (61%) support giving illegal immigrants now living in the US the right to live here legally if they pay a fine and "meet other requirements."
Gun control is a bit of an exception to this pattern. Voters are now about evenly divided between supporting and opposing "stricter gun laws in this country" (51% support, 48% oppose). Prior to 2008, most polls showed net support for stricter gun control at 60% or higher. However, gun ownership appears mostly stable (41%), if not in slight decline (46% in 1999).
This overall pattern suggests there are more opportunities for candidates to be on the more liberal side of these issues.However, with the economy still dominating voters' concerns, social issues will likely take a back seat for most voters, at least in the near term.
In a recent conversation with Jay Leve, founder and head of SurveyUSA, I was alerted to a split-sample telephone survey experiment he conducted last October in the San Francisco Bay Area. It was on the subject of the government's plan to bail out or rescue Wall Street.
Though the experiment dealt with an issue that is so last year (!), I'm writing about it now because it demonstrates how questions about specific policy plans can produce misleading results about the public's views of the broader issue - a classic case of not seeing the forest for the trees.
Jay tested four different ways of phrasing the bailout question, and each one found mixed to slightly negative results. But then two follow-up questions starkly contradicted these results to suggest a clear majority of the public was supportive of the bailout efforts.
While the SurveyUSA experiment tested four different ways of wording the bailout issue, three are rigorously comparable, and so I mention them first. I'll come back to the implications of the fourth version later in this post.
Each of the following questions was asked of a split sample of just over 500 respondents:
The government may invest billions to try and keep financial institutions and markets secure. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?
The government may spend billions to bail-out Wall Street. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?
The government may spend billions to rescue Wall Street. Do you think this is the right thing for the government to do? The wrong thing for the government to do? Or, do you not know enough to say?
Results of Three Versions of Bailout Question
"Right or Wrong Thing to Do?"
A. Invest billions to keep markets secure
B. Spend billions to bail out Wall Street
C. Spend billions to rescue Wall Street
Was the Public Ambivalent?
With these results, one would have to conclude that the public was at best ambivalent toward a bailout or rescue of Wall Street. Given the sample sizes, Form A results are significantly different from those in Form C, suggesting "rescue" is the most negative way to phrase the issue and "invest" is the most positive way.
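Whether a gap between two split-sample forms of roughly 500 respondents each is "significant" can be checked with a standard two-proportion z-test. A sketch with hypothetical percentages (the experiment's actual toplines are not reproduced above, so the 45% and 36% figures are purely illustrative):

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled estimate for the standard error."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 45% saying "right thing" on Form A vs. 36% on Form C,
# with about 500 respondents per form.
z = two_prop_z(0.45, 500, 0.36, 500)
print(abs(z) > 1.96)  # True: a 9-point gap clears the 95% threshold here
```

With samples this size, differences of roughly six points or more between forms will generally clear the conventional 95 percent threshold, which is consistent with treating the A-versus-C gap as significant but the smaller gaps as ambiguous.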
But then come two additional questions that throw a completely different light on the issue. The question below taps into a general feeling about the issue, and finds that people seem to want Congress to do something - and that they're more afraid Congress will do too little than too much.
More Afraid Congress Will Under-react Than Over-react
What concerns you more: that the government will do too much to fix the economy? Or, that the government will do too little?
The problem with comparing the above question to the other three is that this one doesn't explicitly allow for "no opinion," while the first three questions do.
Public Supports Bailout
But the next question is comparable in offering an explicit "don't know" response, and it also suggests that a clear majority of the public wants Congress to do something.
Should Congress Support SOME Economic Rescue Effort?
Do you want your representative in Congress to vote FOR an economic rescue? To vote against an economic rescue? Or, do you not know enough to say?
Note that despite offering an explicit "or don't you know enough to say" option, this question shows a clear majority in favor, with only 30 percent opposed.
Had we depended on only the first three questions, which were asked of separate split samples, for our understanding of the public, we might well have concluded that whether it was "bailout" or "rescue" or "invest," the public was either evenly divided about a bailout or leaning against such an effort. But these last two questions suggest that less than a third of the public was opposed to an economic rescue/bailout plan in principle, while a clear, though small, majority (54 percent) of people were in favor.
When we look at the results of the fourth version of the split sample experiment, we find even more support for some type of rescue plan. This version included three substantive options (compared with just two for the other three versions - which is why I'm treating this question differently from the first three versions), followed by the explicit option of not expressing an opinion.
Fourth Version of Bailout Question
Congress is working on a plan to buy and re-sell up to 700 billion dollars of mortgages. What would you like the Congress to do? Pass this plan? Pass a different plan? Take no action? Or, do you not know enough to say?

Pass this plan: 31%
Pass a different plan: 40%
Take no action: 13%
This version shows the least support for the current plan, but its second option, to pass a "different" plan, suggests that more than seven in 10 respondents favor some rescue plan (31 percent for the current plan, plus 40 percent for a different plan). These results also suggest that only 13 percent (rather than 30 percent as shown in Table 3) are opposed to some kind of effort.
These results, based on a telephone survey in the San Francisco Bay Area, demonstrate how variable the survey results can be - even when the survey is about an issue that is the subject of much media attention. The first three versions of the bailout question could reasonably be interpreted to suggest that the Bay Area public was either ambivalent or negative about any bailout plan, while the question reported in Table 3 gives the opposite impression - that the same public was in fact looking for some action by the federal government.
The important lesson: Sometimes, asking about specific plans can blind us to the larger issue of whether some type of action is still desired.
The last question reaffirms that when questions have two options in favor of a policy (pass the current plan, or pass a different one) and one against (take no action), the "opinion" that is measured can be quite different from when a question has just one option in favor of a policy and one opposed.
The last question can also be interpreted to have two options against the "current" plan - only one option says pass the current plan, while two options are against it (pass a different plan, and take no action).
This doesn't mean that three options should never be offered. The example does reaffirm, however, that the more options that are offered in general, the less similar the results will be to questions that offer only two options.
Note: My thanks to SurveyUSA's Jay Leve for pointing me to his very insightful experiment.