Strategic Vision: A Bigger Story?

Topics: AAPOR, David Johnson, Nate Silver, Pollsters, Strategic Vision

The Strategic Vision story is getting far more interesting. In the wake of a public reprimand from the American Association for Public Opinion Research (AAPOR) for failing to disclose "essential facts" about his company's methods, and after more than a year of doing his best to avoid public comment, Strategic Vision CEO David Johnson now has much to say and is threatening legal action. Meanwhile, over at FiveThirtyEight, Nate Silver says he has found evidence that "suggests, perhaps strongly, the possibility of fraud" in Strategic Vision's numbers.

On Wednesday, he provided this response to a call from Jim Galloway at the Atlanta Journal Constitution:

Strategic Vision CEO David Johnson said his firm had wanted to appeal the judgment, and said a Sept. 17 hearing had been scheduled - and then canceled by the AAPOR. "We've asked for a copy of the complaint that was filed against us, and who filed it," Johnson said. "How can you respond to something when you don't know who filed the complaint."

Moreover, he added, "We're not a member of their organization. I don't know anything about them."

Johnson also gave Galloway an email that AAPOR sent him this past June acknowledging receipt of "some of the information requested regarding polls in New Hampshire and Wisconsin" (emphasis added).

Yesterday, James Verrinder of the website Research reported that Johnson now "vows legal action" against AAPOR and some of its members:

Johnson said he "disagreed completely" with the charge levied at his firm by AAPOR and vowed to take legal action against the association. He said the firm had supplied AAPOR with all the information it had requested on 19 June this year, and had electronic proof of what was sent.

Johnson believes a competitor is behind the original complaint to AAPOR, and wants to see the source of the action against his firm. "I find it unusual," he said, "that an organisation that says they are all about transparency won't supply us with details of the complaint. What they were asking for were trade secrets."

He said: "We will be taking legal action. We have spoken with our attorneys and have gotten them the documentation and should know exactly the venue and specific charges that we will be filing against AAPOR specifically and individual members of AAPOR personally."

Johnson alleges that AAPOR acted "maliciously" in issuing its ruling. "I think it was timed to coincide with the results of a poll we had out yesterday [on the gubernatorial elections in Georgia]," he said.

Both accounts also include responses from AAPOR President Peter Miller. Miller told Galloway that "AAPOR had sent Johnson notices four times asking him to confirm his attendance at that hearing last week, and finally ended up canceling because of the lack of any response." Miller told Verrinder that it was "completely wrong" that a competitor had filed the complaint and reiterated that Johnson's 2009 response, received after the release of the AAPOR report in April, did not include all of the requested information. In an earlier article published the same day, Verrinder reported that the reply still did not provide requested information about response rates and weighting or estimating procedures.

Now separately, Nate Silver claims to have found evidence of a non-random pattern in the trailing digits of the percentages reported by Strategic Vision in their public polling since 2005, and the implications of that assertion are pretty explosive. Last night, he raised some suggestive questions that mirror some of the unsubstantiated gossip and prodding I've received via email for years from a Democratic activist or two in Georgia. But this morning, as Silver puts it himself, he's making a much more concrete allegation (emphasis his):

Certain statistical properties of the results reported by Strategic Vision, LLC suggest, perhaps strongly, the possibility of fraud, although they certainly do not prove it and further investigation will be required.

In other words, Nate is suggesting that Strategic Vision has been making up its numbers. The analysis he reports this morning is based on Benford's Law, a principle similar to those behind much of the analysis of fraud in the Iranian election that we reported on this summer [see the clarification below]. The idea is that the last digit of numbers with two or more digits should have a uniform distribution: a '1' should occur as often as a '2,' a '3' and so on. According to Nate, the pattern for Strategic Vision is far from uniform:

[T]his data is not random. It's not close to random. It's not close to close. Which brings up the other possibility: Strategic Vision is cooking the books. And whoever is doing so is doing a pretty sloppy job. They'd seem to have a strong, unconscious preference for numbers ending in '7', for instance, as opposed to those ending in '6'. They tend to go with round numbers that end in '5' or '0' slightly too often. And they much prefer numbers with high trailing digits like 49 and 38 to those with low ones like 51 and 42.

I haven't really seen anyone approach polling data like this before, and I certainly haven't done so myself. So, we cannot rule out the possibility that there is some mathematical rationale for this that I haven't thought of. But it looks really, really bad. There is a substantial possibility -- far from a certainty -- that much of Strategic Vision's polling over the past several years has been forged.

I recognize the gravity of this claim. I've accused pollsters -- deservedly I think in most cases -- of all and sundry types of incompetence and bias. But that is all garden-variety stuff, as compared against the possibility that a prominent polling firm is making up numbers whole cloth.

I would emphasize, however, that at this stage, all of this represents circumstantial evidence. We are discussing a possibility. If we're keeping score, it's a possibility that I would never have thought to look into if Strategic Vision had been more professional about their disclosure standards. And if we're being frank, it's a possibility that might actually be a probability. But it's only that. A possibility. An hypothesis -- as yet unproven.

Predictably enough, my email box is filling with the same question: What do you think of this? My first reaction is similar to that of "Mark" (not me) and some other commenters on FiveThirtyEight: The analysis is intriguing, but I would find it far more convincing if he ran comparable statistics for some of the other prolific pollsters in the same contests since 2005 (Rasmussen, SurveyUSA, Quinnipiac, ARG, Zogby, Mason-Dixon, etc.). If the Strategic Vision pattern is really different from all the rest, then it would reduce the possibility that the pattern Nate found "is a function of polling in general" (as commenter Matt puts it).
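To make that comparison concrete, here is a rough sketch of the kind of tabulation it would involve, written in Python with placeholder firm names and percentages standing in for real toplines (a real run would need hundreds of published results per pollster, and the scipy library is assumed to be available):

    from collections import Counter
    from scipy.stats import chisquare

    # Hypothetical toplines: for each firm, the whole-number percentages it reported.
    # A real comparison would pull every published result per pollster since 2005.
    reported = {
        "Firm A": [47, 43, 52, 38, 49, 41, 55, 39, 48, 44, 51, 37],
        "Firm B": [46, 45, 50, 42, 47, 43, 53, 40, 49, 46, 52, 41],
    }

    for firm, values in reported.items():
        digits = [v % 10 for v in values]                 # trailing digit of each percentage
        counts = [Counter(digits)[d] for d in range(10)]  # tally of digits 0 through 9
        stat, p = chisquare(counts)                       # goodness-of-fit against a uniform distribution
        print(f"{firm}: counts={counts} chi2={stat:.1f} p={p:.3f}")

The point of running the same tabulation across many firms, of course, is to see whether Strategic Vision's deviation from uniformity stands out or is simply typical of public polling.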

Also, while I stipulate that I am no expert in Benford's Law, my sense from reviewing the analysis of the Iranian election is that its assumptions can get extremely complex. As such, we need to be very cautious about jumping to conclusions based on the pattern that Silver is reporting. That said, I will have more about Strategic Vision, AAPOR and the theme of transparency in my National Journal column on Monday.

Update and Clarification: To prove I'm no expert, I initially described Silver's analysis incorrectly. Mark Lindeman is right in the comment below when he says that Nate Silver is expecting a uniform distribution, not a Benford distribution.  

I also exchanged email with Walter Mebane, the University of Michigan professor who has done much work in this area, most recently on fraud in Iran. He reviewed Silver's post and urges caution, saying that some of the comments there (such as those from Mark, Allen and Zach) "cover the kinds of further questions" he would want to ask. Like Zach, Mebane says that with two-digit numbers, we should not expect a uniform distribution of the last digit, especially if it is based on percentages that have been rounded in a biased manner. Echoing commenter Mark, he says that a "comparison with other polling houses would probably be the most informative and quickest thing to do."
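For what it's worth, Mebane's caution about rounded two-digit percentages is easy to probe with a quick simulation. The parameters below are arbitrary, so treat this as a sketch of the question rather than an answer to it:

    import numpy as np

    rng = np.random.default_rng(0)

    def trailing_digit_counts(n_polls=5000, n_resp=600, low=0.35, high=0.55):
        # Draw a "true" support level for each hypothetical poll, simulate respondents,
        # round to a whole-number percentage, and tally the trailing digits.
        support = rng.uniform(low, high, n_polls)
        pct = np.rint(100 * rng.binomial(n_resp, support) / n_resp).astype(int)
        return np.bincount(pct % 10, minlength=10)

    # Results spread over a wide range vs. results squeezed into a narrow band;
    # the narrower the band, the less reason to expect uniform trailing digits.
    print(trailing_digit_counts(low=0.20, high=0.60))
    print(trailing_digit_counts(low=0.44, high=0.49))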

Update 2: For those wondering whether Strategic Vision, LLC has any real clients, a colleague passes along some indisputable proof that they do. The Friedman Foundation for Educational Choice has used Strategic Vision to conduct a series of statewide surveys since 2007 (click on any link showing a state's "Opinion on K-12 Education and School Choice"). I count 14 surveys in all, the most recent released just last week.

Ironically, these reports show that the Friedman Foundation makes a prominent commitment to "methods and transparency." Check page 2 of the most recent report, for Nebraska: "We are committed to sound research and to provide quality information in a transparent and efficient manner." The methodology section includes the sort of information -- including response rates and call disposition reports -- that Strategic Vision continues to resist releasing for its political surveys.


Mark Lindeman:

To clarify -- although this is just beginning to sink in for me -- Nate Silver isn't applying Benford's Law at all. In distributions that follow Benford's Law, small digits tend to be more common than large digits. (Walter Mebane has done a lot of work with "2BL," second-digit Benford's Law, where the expected distro is different than for the first digit.)

Nate is expecting a uniform distribution, not a Benford's Law distro. I see several of his commenters speculating about a BL distro, but I'm not sure why they would expect one.
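For anyone curious about the difference, the expected Benford frequencies follow directly from the standard formulas. Here is a small sketch (nothing in it is specific to Nate's data) that prints them next to the flat 10% a uniform-last-digit test assumes:

    import math

    # Benford first-digit frequencies: P(d) = log10(1 + 1/d) for d = 1..9.
    first = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    # Second-digit ("2BL") frequencies: sum over the leading digit d1 of log10(1 + 1/(10*d1 + d2)).
    second = {d2: sum(math.log10(1 + 1 / (10 * d1 + d2)) for d1 in range(1, 10))
              for d2 in range(10)}

    print({d: round(p, 3) for d, p in first.items()})   # 1: 0.301, 2: 0.176, ..., 9: 0.046
    print({d: round(p, 3) for d, p in second.items()})  # flatter, but still not a uniform 0.1 each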



To me it's a novel argument that if you are not a member of a professional society, their calling you on your professional misdeeds is somehow illegitimate. Can a criticism of a doctor or lawyer by the AMA or ABA be negated by a refusal to join the organization? I'm reminded of the reflexive posture of every conservative when you cite the New York Times or NBC or any other mainstream source: that it's liberal-leaning and unreliable. If it isn't on Fox or Newsmax.com or some other part of the echo chamber, they discount it. Why ISN'T Strategic Vision a member of AAPOR? So they can avoid its scrutiny?


Matt Dabrowski:

I'd like to hear your thoughts on the usefulness, and indeed the citation worthiness, of Benford's Law, which Nate Silver uses to call Strategic Vision out.

As you know, Benford's Law has a long history in accounting, but less repute in election forecasting. I personally am not fond of it.

Why? It's not substantive evidence of fraud. The biggest practitioner of this type of work is Walter Mebane. Sometimes he finds a good election, but most of the time he finds bad elections. There've been a lot of corruption and irregularity charges in U.S. elections over the past 10 years, and Mebane's supported most of them. Often there's little hard evidence other than his statistics to back him up.

Benford's Law is a statistical method. You have to ask questions about the method if it only ever confirms your own hypothesis: that elections are being rigged. That's often Mebane's viewpoint.

I also think there's a critical moral component here. After Mebane told the press about the Iranian election, Republicans in Congress discussed a military response to the crisis. (Originally he told the Irish Times that there was no statistical evidence of fraud, but later changed his mind.) Frankly, there aren't too many other responses to a rigged election: riot in the streets, send in the troops, you name it. It's about as serious as it gets in our line of work. Claims of a rigged election mean people die.

So someone who goes around trumpeting illegal activities needs to do a serious gut check. Furthermore, it's incumbent on the rest of us to think long and hard before we credit these claims as worthy of report. Mebane's reckless, if he isn't a charlatan.

Benford's Law isn't hard evidence of election fraud. It's barely statistical evidence. When clients ask me about Benford's Law, that's what I tell them, and I encourage them to ignore guys like Mebane until there is concrete and legitimate evidence of fraud. A professor's word alone can cast doubt on an election and bring people out into the streets, even if he's wrong. It shouldn't be taken lightly.

What's good for the goose is good for the gander. If we shouldn't quote Mebane talking about Benford, we shouldn't quote Nate Silver either. And it took barely a few hours for a commenter here to find fault with Silver's methods.


Mark Lindeman:

Matt, what on earth?

"They're been a lot of corruption and irregularity charges in U.S. elections over the past 10 years, and Mebane's supported most of them."

That is simply and wildly untrue. Mebane has actually taken heat for debunking some of those charges.

I agree that allegations of fraud shouldn't be undertaken lightly. Perhaps allegations of incitement to violence shouldn't be, either, but maybe that's just me.


Matt Dabrowski:

I appreciate your comments, Mark. I know about your expertise on this subject, and I was quick to cite you during the controversies after the 2004 elections. Maybe Mebane is the stopped clock that's right twice a day, but I don't think you can be half-way on this. That's the point I want to make.

Either statistical methods should be the arbiter of the legitimacy of elections, or they should not. I believe they should not. Hard evidence should be.

You and I both understand the statistical methods. We know how useful they are, but how weak they can be. I'm confident I can advise my polling clients based on my survey results, but I don't want to bet my country on a p-value, or someone else's country. Mark Twain didn't call it "lies, damn lies and statistics" for nothing.

I hate to sound tinny on this issue. But when you look around the world and see a number of democracies in jeopardy, I think my concerns are worth consideration. Loose lips can sink ships. This is one of the ethical concerns that confront survey practitioners in the modern era.



I think there are finer shades to the question than whether a single piece of evidence can damn someone as guilty of fraud. For example, I think the question here is actually whether there's enough reason to demand further scrutiny into what's causing the statistical oddity (which seems to be an emphatic 'yes' in both cases). To torture the metaphor: these aren't meant to be evidence in a trial in the court of public opinion, but more an argument to open an evidence-gathering mission.

There's also a difference between sparking protest/riot due to the belief of fraud without hard evidence, and protest/riot because the ability to investigate has been frustrated. If someone has speciously convinced rioters that there was fraud, then that is irresponsible. But, in my opinion, if people are convinced that something should be investigated, and then they decide to riot and protest because it's not being investigated, then so be it. I think Iran is a mix, but this Strategic Vision case is much more likely to lean towards the latter.



The commenters at FiveThirtyEight have also done a lot of sleuthing about Strategic Vision LLC's various and sundry "offices" around the country. At least two of them appear to be only UPS Store mailboxes (in the last 24 hrs. SV removed the addresses from their websites, most likely because of the scrutiny this story has brought).

The issue of disclosure of methodological details on surveys ought to be something the electronic and printed media focus on as a matter of course. AAPOR can provide a kind of "seal of approval" to a survey organization (though no certification of the quality of a given survey), and the Strategic Vision case may raise awareness of the need for greater vigilance about survey quality.


Mark Lindeman:

Matt, I don't understand why you characterize Mebane as the "stopped clock that's right twice a day." Our opinions about the 2004 election weren't substantially different, and his critiques of bad fraud arguments were earlier and much more consequential than mine. He and coauthors were also quick to debunk wild claims about the 2008 New Hampshire primary. I could keep going, but maybe there's no substitute for reading the work, or following it in real time. Even in the Iran case, it seems obvious that Mebane wasn't a "stopped clock": he repeatedly revised his conclusions in the light of new evidence and analysis. Conceivably at some point he said something irresponsible to the press, but I never saw it, and you haven't quoted it.

"Either statistical methods should be the arbiter of the legitimacy of elections, or they should not. I believe they should not. Hard evidence should be."

I don't think anyone thinks that statistics "should be the arbiter." In this case perhaps Mebane can speak for himself (pp. 24-25):

"It is important to be clear that none of the estimates or test results in this report are proof that substantial fraud affected the 2009 Iranian election.... [Long, detailed discussion omitted.] In general the tests’ best use is for screening election results, not confirming or refuting claims of fraud. A significant finding should prompt investigations using administrative records, witness testimony and other facts to try to determine what happened. The problem with the 2009 Iranian election is that the serious questions that have been raised are unlikely to receive satisfactory answers. Transparency is utterly lacking in this case. There is little reason to believe the official results announced in that election accurately reflect the intentions of the voters who went to the polls."

The better the election system, the less reason there is to resort to statistical inference. But I don't think it makes any sense to demand that folks like Mebane seal their "loose lips," when you can be sure that other less careful analysts won't.


Mark Blumenthal:


As it happens, I just saw the same comments on FiveThirtyEight, which Ben Smith also confirmed. What's amazing is that the supposed Strategic Vision main office -- at 2451 Cumberland Parkway in Atlanta -- is a shopping center that's home to a UPS Store and a Mailboxes Etc., but no office complex that I can see via street view. That's the address that AAPOR sent its FedEx deliveries to...which explains a lot.

As for disclosing (and reporting) methodological details as a matter of course, I've got this proposal out there. Any thoughts?

Working on another post for later today...



@Mark: It's a terrific idea. I think that you may want to try to get certain news organizations (including major weblogs such as Politico, TPM, Huffington, and others) to sign on to the disclosure scoring, and agree to use it and report it -- or link to AAPOR or Pollster or whoever maintains the score. (I think Pollster might usefully play that role.)

You will, of course, have to come up with some document that validates your choice of indicators and how you weight them in the overall score. (BTW, I really like the NCPP's request for the survey questions in the language of the interview -- this would apply within the U.S. for, say, Spanish-language interviews, or in Canada for French-language interviews, not just for overseas interviews.)

Another note: I see in Ben Smith's story that Johnson says he changed the addresses on the website in the "last month." As you will see from the discussion on 538, this change was literally made late YESTERDAY -- which is "within the last month" alright.


Matt Sheldon:

Mark -

I think the witch hunt against Strategic Vision is perhaps a bit off the mark.

It rests on 2 unproven assertions:

1. That Strategic Vision is some sort of outlier in terms of the trailing digits metric.

2. That this metric is useful in assessing "other than methodological" differences in polling.

Both are unproven.

The main line of evidence is that the trailing digits skew in a non-random direction, which can only be explained by falsifying data using non-random techniques.

This is not particularly compelling because of the following:

1) Trailing-digit patterns are largely dictated by how you treat undecideds (how hard you push them to commit) and how far in advance of the election you poll. Polling far in advance will produce more undecideds and thus affect the trailing digits.

2) Pollsters vary GREATLY in their effort to classify undecided voters.

- Some word the question loosely to encourage soft commitment
- Some use a "leaner" follow-up question (Rasmussen)

It is the strength of the effort to classify undecideds, and the method of doing so, that will cause a non-random skew in the data.

Nate Silver's singling out of Quinnipiac Polls felt very much like cherry-picking.

Why did he not show the trailing-digit patterns for 10, 15 or 50 pollsters?

He should have run the analysis for all major pollsters and shown us the distribution for each.

He did not do that. Why? Where is his dataset for public scrutiny?

I see none.

The fact is that some have much greater skews than Strategic Vision.

Here is my experiment.

I took all presidential approval polls for George W. Bush, as archived by the Roper Center.


This produced 2,894 trailing digits.

What is good about this is that pollsters are measuring:

- The same basic question
- In the same geography
- Under similar conditions
- Over the same time period

Note that the GWB approval rate ranges from 19% to 92% with an average near 50%. This almost perfectly mimics a normal distribution.

The result?

The trailing digit SKEW differed wildly across pollsters even under CONTROLLED conditions:

FIRM             N      %0-4   %5-9   Spread
Fox/OpDyn        282    49     51     1
Gallup           226    46     54     7
Gallup/CNN/USA   218    42     58     17
Pew              200    50     51     1
Newsweek         186    46     54     8
ABC/WP           172    52     48     5
CBS              162    49     51     2
Democracy Corp   154    52     48     4
ARG              138    38     62     25
NBC/WSJ          138    41     59     17
CBS/NYT          132    58     42     15
All              2,894  48     52     3
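
(For anyone who wants to check the arithmetic, here is a rough sketch of the spread calculation, with made-up approval numbers standing in for the actual Roper archive:)

    # Sketch of the spread calculation in the table above, using made-up approval
    # numbers in place of the actual Roper Center data.
    polls = {
        "Firm X": [51, 47, 39, 45, 52, 48, 36, 41, 55, 49],
        "Firm Y": [50, 44, 38, 47, 53, 46, 35, 42, 54, 48],
    }

    for firm, approvals in polls.items():
        digits = [a % 10 for a in approvals]
        pct_low = 100 * sum(d <= 4 for d in digits) / len(digits)  # share of trailing digits 0-4
        pct_high = 100 - pct_low                                   # share of trailing digits 5-9
        print(f"{firm}: N={len(digits)}  %0-4={pct_low:.0f}  %5-9={pct_high:.0f}  "
              f"spread={abs(pct_low - pct_high):.0f}")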

Even among these firms measuring...

- The same thing
- In the same geography
- Under similar conditions
- Over the same time period

We saw a spread between the %0-4 and %5-9 trailing digits as high as:

ARG: 25 points
NBC/WSJ: 17 points
Gallup/CNN: 17 points
CBS/NYT: 15 points

Strategic Vision's spread was 10 points, and this was...

- Different candidates
- Different geographies
- Different time periods
- Close races

Strategic Vision would be #5 if included with the pollsters above.

Even under highly controlled conditions, Nate Silver's metric proves absolutely useless.

How can it be valuable in less controlled tests?

Nate's own method indicts about 30-40% of pollsters as outright frauds.

Do you agree?

