Amy Simon: Random Digits or Lists

Topics: 2006, Pollsters, Response Rates, The 2006 Race

Today's Guest Pollster's Corner contribution comes from Amy Simon, a partner at Goodwin Simon Victoria Research.

News media and academics hold up Random Digit Dialing (RDD) sampling methodology as the gold standard for election survey samples. Meanwhile, many top-notch political pollsters have been serving their clients well for years by instead using samples selected from the official list of registered voters (the statewide voter file), an approach often called Registration-Based Sampling (RBS).

RDD samples are created by having a computer randomly generate the last four digits of a phone number within working area-code and exchange combinations. The advantage of RDD is that everyone with a working landline phone is included in the sample: it doesn't matter if your phone service was just turned on that morning or if your number is unlisted, since the sample isn't generated from a list of actual phone numbers. An obvious disadvantage is that an RDD sample also includes business numbers, fax numbers, disconnected numbers, and even numbers that have never been connected, so the costs of administering an RDD sample are higher: the built-in inefficiencies bring down your contact rate.
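To make the mechanics concrete, here is a minimal sketch of how an RDD sample might be generated, assuming you already have a list of working area-code/exchange prefixes; the prefixes shown are hypothetical, for illustration only.

```python
import random

def rdd_sample(prefixes, n, seed=None):
    """Generate n RDD-style phone numbers by appending four random
    digits to working area-code/exchange prefixes."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        prefix = rng.choice(prefixes)      # e.g. "208-336"
        suffix = rng.randrange(10000)      # uniform over 0000-9999
        numbers.append(f"{prefix}-{suffix:04d}")
    return numbers

# Hypothetical prefixes, for illustration only
sample = rdd_sample(["208-336", "208-342", "208-884"], 5, seed=7)
```

Because the suffix is drawn uniformly, unlisted and newly connected numbers are reached with the same probability as listed ones, which is exactly the property the paragraph above describes.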

An RBS sample is drawn from a list of registered voters. The obvious advantage of using voter files for survey samples, one that has been noted for years, is that voter-file studies are cheaper to administer than RDD studies. RDD surveys not only have to churn through bad numbers but also have to bear the cost of screening out the large portion of adults who are not registered voters in order to find their real interview targets: respondents who self-report as registered voters and whom the pollsters, after applying their own likely-voter models, define post-interview as likely voters.

With RBS surveys, when you do reach an actual person on the phone, you already know, since you ask for them by name, that you have a registered voter on the line, and therefore you have a better production rate. (The cost difference between the two methods is even more significant in a primary or other low-turnout election scenario, but the debate about using RDD versus RBS samples in low-, medium-, and high-turnout elections is another topic requiring its own separate discussion.) In states with high-quality vote history showing which registrants have actually voted in different types of elections, pollsters can apply a likely-voter screen when drawing the sample in the first place, further ensuring that they are interviewing the people most likely to vote in the kind of election they are attempting to measure.
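A minimal sketch of drawing an RBS sample with a likely-voter screen built from vote history might look like the following; the record fields (`phone`, `history`) and election labels are hypothetical stand-ins, not any actual voter-file format.

```python
import random

def draw_rbs_sample(voter_file, n, elections, min_votes=2, seed=None):
    """Keep registrants with a matched phone number whose vote history
    shows participation in at least min_votes of the given elections,
    then draw a simple random sample of size n from that pool."""
    pool = [
        v for v in voter_file
        if v.get("phone")
        and sum(e in v["history"] for e in elections) >= min_votes
    ]
    rng = random.Random(seed)
    return rng.sample(pool, min(n, len(pool)))
```

The screen happens before any calls are made, which is the source of the production-rate advantage described above: every number dialed already belongs to a registrant who passed the likely-voter filter.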

Yet the news media and academics engaged in polling question whether RBS studies can be as accurate as RDD studies, since no voter registration list is 100% up to date, nor does any voter file include phone numbers for 100% of voters. In fact, the phone-match rate for a voter registration list is not only less than 100% but can vary significantly across a state by geography, with suburban areas showing a higher match rate than either urban or rural areas. Drawing an RBS sample therefore requires special expertise in controlling for this and other issues of who is potentially over- or under-represented in your sample. So why do so many experienced political pollsters continue to use RBS samples despite these concerns about accuracy? We do so because we find that in many instances (though certainly not all) RBS is just as accurate as, or even more accurate than, RDD.
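One common way to control for uneven phone-match rates is simple post-stratification: weight respondents in each geographic stratum back up to that stratum's share of the full voter file. A sketch under that assumption, with hypothetical region labels and shares:

```python
from collections import Counter

def region_weights(sample_regions, file_shares):
    """Post-stratification weights: each respondent in region r gets
    weight (r's share of the voter file) / (r's share of the sample),
    so under-matched regions count more."""
    n = len(sample_regions)
    sample_share = {r: c / n for r, c in Counter(sample_regions).items()}
    return {r: file_shares[r] / sample_share[r] for r in sample_share}

# Hypothetical: rural voters are 30% of the file but only 20% of the
# phone-matched sample, so each rural respondent is weighted up.
w = region_weights(
    ["suburban"] * 50 + ["urban"] * 30 + ["rural"] * 20,
    {"suburban": 0.40, "urban": 0.30, "rural": 0.30},
)
# w["rural"] == 1.5, w["suburban"] == 0.8, w["urban"] == 1.0
```

This is only one of the adjustments an experienced RBS practitioner would apply; it illustrates the general idea rather than any particular firm's method.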

In fact, some academics and media outlets have been experimenting with voter-file survey samples and have found this to be the case. Several have publicly shared at least some of their findings about the ways in which results do or do not differ when using RDD versus voter-file samples. Studies worth reviewing include those by Mitofsky, Lenski and Bloom; by Gerber and Green in Public Opinion Quarterly; and the online archive of Gerber and Green's work maintained by the list vendor Voter Contact Services (VCS). These studies have largely shown that RBS studies can be just as accurate as, and in some cases more accurate than, RDD studies. One hypothesis is that samples drawn from voter registration lists by definition consist of actual registered voters, while RDD studies rely entirely on respondents' self-reporting about whether they are in fact registered to vote. Given the growing portion of the adult American population that is not registered to vote, relying on self-reported registration may introduce more error than carefully designed RBS studies contain.

One recent example from our own work, as the polling firm for Ned Lamont for U.S. Senate in Connecticut, showed virtually no difference between the results of an RDD and an RBS study. In the course of the general election, at one point in September we simultaneously conducted both: an n=600 RBS study with a margin of error of +/- 4.0 percent and an n=800 RDD study with a margin of error of +/- 3.5 percent. The results were dramatically in sync. Considering the far higher cost of RDD samples compared to RBS samples, these results certainly give weight to the common practice among political pollsters of using voter-file samples instead of RDD samples in general election campaigns.
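The margins of error quoted for the two studies are consistent with the standard 95%-confidence formula for a simple random sample (worst case at p = 0.5), which this minimal check reproduces:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(600), 1))  # n=600 RBS study -> 4.0
print(round(100 * margin_of_error(800), 1))  # n=800 RDD study -> 3.5
```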



Greg Smith:

Enjoyed your article, Amy. In case my name looks familiar, my firm (Greg Smith & Associates) submitted last month's "Guest Pollster" article. Also, I'm familiar with your firm, since you did some work for Jerry Brady, the unsuccessful Democratic candidate for governor here in Idaho in 2006.

One consideration not mentioned in your very good article: as you know, Idaho is one of the fastest-growing states in the nation. Because of this, we have been able to continue providing accurate statewide results for political and nonpolitical clients alike ONLY by using an RDD approach. Otherwise, we would grossly undersample "newcomers".

Your work for Brady produced results that were perhaps a little too optimistic for him. What sampling approach did you use? RBS?

Look forward to your reply!

Greg Smith
Greg Smith & Associates
Boise, Idaho
208.921.9458 (cell)


Amy Simon:

Dear Greg,

Thanks for your thoughtful comment. As a pollster, you know the choice about when to use RDD versus RBS is complex and has to be assessed in each state and election individually (as I referenced briefly regarding low-turnout elections). In this brief post introducing the topic, I did not detail the kinds of situations where you would only use RDD, and an Idaho election is certainly one of those. Idaho is a rare state in that it has same-day registration, with no pre-registration required, and each year a significant portion of the electorate is a same-day registrant who was not pre-registered. So even beyond the state's rapid growth rate, the same-day registration law makes Idaho a place where we would not consider using RBS. In fact, all of our polling for Brady there was conducted using RDD.

As for the Brady results, you may recall there were multiple publicly released polls in mid-October (including newspaper polls) that all showed Brady in a dead heat with Otter. Why did they show a closer race than the final election results? Of course no one can know for sure, but my surmise is this: Idaho is a heavily Republican state, all of the public polls released in mid-October showed a large undecided vote, and those undecided voters were more heavily Republican. Speaking not for the campaign but only for myself, it seems that if the undecideds are heavily Republican, then the Republican candidate simply has to run a good base-voter campaign to "bring them home" in the closing days of the election. Given that the Republican Governors Association came in with a heavy TV buy and that there were other heavy GOP communications with voters in the closing days, I think the Republicans were effective at bringing their undecided base voters back home. I'd love to hear your thoughts on that question if you have them.


Greg Smith:

Thank you for your quick reply, Amy. Were you yourself the primary analyst from GSVR who did the Brady research?

Yes, our polling was one of the ones you referred to (i.e., showing virtually a dead heat between Otter and Brady). Our work in this case was for the media, not for any gubernatorial candidate.

Interestingly, we have tracked same-day registration closely for the last few election cycles here in Idaho, and in past cycles it has increased turnout by 6-8% -- not exactly "small potatoes" (sorry -- I couldn't resist).

Your comments about the undecideds being largely Republican and "coming home" are certainly correct. What is really interesting, and at least in part helps to explain Otter's rather large win over Brady (8%), is that the undecideds were in fact DISPROPORTIONATELY Republican. Other races, such as the closely contested 1st Congressional District seat, did not show this particular characteristic.

You of course already have my cell # (208.921.9458). If you are in Boise any time soon, please take the liberty of calling me and we can go have a cup of coffee (on me!). As you might guess, I am in somewhat of a vacuum of quality market/public opinion researchers here, and I would love to shoot the breeze with you.

Again, great article!


Gregory P. Smith
Greg Smith & Associates
208.921.9458 (cell)

