[Image: fingers crossed behind back, courtesy of news.com.au]

A key recommendation in the recent Presidential Commission on Election Administration’s report is that election offices should do a better job of forecasting turnout as part of the effort to reduce unnecessary lines at the polls.

One challenge, however, is that such numbers aren’t easy to come by; while voter files can be used to gather some of that data, they often lack the demographic component that helps assess whether and how turnout varies across different communities. The gap is often filled by survey data, but that data has shortcomings of its own. Specifically, because voting is seen as socially desirable, people on average tend to over-report turnout – in other words, to say they have voted when they have not. This can create difficulties when the resulting data is used for forecasting and analysis.
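To make the measurement concrete, here is a minimal sketch (in Python, using entirely hypothetical data and field names – nothing here is from the paper) of how over-reporting is typically detected: survey self-reports are matched against validated voter-file records, and the gap between reported and validated turnout is the over-report rate.

```python
# Hypothetical sketch: matching survey self-reports to validated voter-file
# records to estimate over-reporting. Field meanings are illustrative only.

respondents = [
    # (said they voted, voter file shows a ballot was cast)
    (True, True),   # accurate report
    (True, False),  # over-report: claimed to vote, no record of a ballot
    (False, False), # accurate report
    (True, True),   # accurate report
]

claimed = sum(1 for said, _ in respondents if said)
validated = sum(1 for _, actual in respondents if actual)
over_reports = sum(1 for said, actual in respondents if said and not actual)

print(f"Reported turnout:  {claimed / len(respondents):.0%}")
print(f"Validated turnout: {validated / len(respondents):.0%}")
print(f"Over-report rate among claimants: {over_reports / claimed:.0%}")
```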

A recent Data Dispatch from Pew’s election team examines new research that alters the wording of standard survey questions to attack the over-reporting problem – and while the changes tested don’t yet eliminate over-reporting, they do seem to reduce it somewhat.

Here’s what the authors (Michael Hanmer, Antoine Banks, and Ismail White) have to say in the abstract of their paper:

We attempt to reduce over-reporting simply by changing the wording of the vote question by highlighting to the respondent that: (1) we can in fact find out, via public records, whether or not they voted; and (2) we (survey administrators) know some people who say they voted did not. We examine these questions through a survey on US voting-age citizens after the 2010 midterm elections, in which we ask them about voting in those elections. Our evidence shows that the question noting we would check the records improved the accuracy of the reports by reducing the over-reporting of turnout.

In other words, people conducting the survey signal explicitly to respondents that they can check their answers – and, to a certain extent, it works. From the Dispatch:

Political scientists and researchers interested in civic behavior have long recognized that survey respondents often say they voted in the last election when, in fact, they did not. Many attempts have been made to rewrite standard survey questions to encourage respondents to answer accurately. Recent research published in the journal Political Analysis … presents an experiment using two new questions to improve survey measures of voter turnout:

Control Question (from the American National Election Studies)–“In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. Which of the following statements best describes you?”

First Experimental Question–“In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. By looking at public records kept by election officials, we can get an accurate report of who actually voted in November, and in previous elections. Of course, these public records do not say who you voted for. Part of our study will involve checking these records against the survey reports. Which of the following statements best describes you?”

Second Experimental Question–“In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they just didn’t have time. We also sometimes find that people who say they voted actually did not vote. Which of the following statements best describes you?”

All three questions offered the same answer choices: “(1) I did not vote (in the election this November); (2) I thought about voting this time but didn’t; (3) I usually vote but didn’t this time; and (4) I am sure I voted.”

The first experimental question significantly reduced inaccurate responses to 17.2 percent, compared with 24.8 percent for the control question and 20 percent for the second experimental question. [final emphasis added]

This is very important research, though I admit I find it fascinating that even in the most successful experiment more than 1 in 6 people (17.2%) still over-report their voting, suggesting either that they don’t believe an accurate check is possible or that they simply don’t care. Almost as interesting, merely telling respondents (as in the second question) that some people lie about voting was far less effective – 1 in 5 (20%) still answered inaccurately, only modestly better than the nearly 1 in 4 (24.8%) who did so on the control question.
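For readers curious about what “significantly reduced” entails, here is a hedged sketch of a standard pooled two-proportion z-test comparing the control and first experimental questions. The per-condition sample size (n = 500) is purely an assumption for illustration; the post doesn’t report the paper’s actual Ns, and this is not necessarily the test the authors used.

```python
# Sketch of a pooled two-proportion z-test for H0: p1 == p2.
# Sample sizes below are assumed for illustration, not from the paper.
from math import sqrt

def two_prop_ztest(p1: float, n1: int, p2: float, n2: int) -> float:
    """Return the pooled z-statistic for the difference of two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Control (24.8% inaccurate) vs. first experimental question (17.2% inaccurate)
z = two_prop_ztest(0.248, 500, 0.172, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate significance at the 5% level
```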

Either way, this paper is a helpful first step in addressing the over-reporting problem and – along with continued improvements in voter file management and maintenance – should help clarify the turnout picture for forecasting and analysis purposes.

Thanks to Pew for sharing the paper and to the authors for this revealing look at how to confront and overcome the human factor in survey research on elections.