New CalTech/MIT Report Looks Back – and Ahead – at Election Reform in America



My friends and colleagues at the CalTech/MIT (or, in Cambridge, the MIT/CalTech) Voting Technology Project (VTP) have just released a new report entitled Voting: What’s Changed, What Hasn’t, and What Needs Improvement. The report is an update of the VTP’s classic 2001 report What Is, What Could Be, which both heralded the birth of the collaboration between two of the nation’s finest academic centers on technology and set the agenda for many of the election reforms that followed over the next several years:

[O]ur original mandate was to do the sorts of things that faculty from Caltech and MIT are known for–studying technology and developing innovative solutions to technological problems–with an intention of understanding the problems seen with voting technologies in Florida and other states. However, as we rapidly learned from our research, election administration in the United States is complex and highly decentralized. Our research quickly expanded beyond a narrow focus on voting machines to include voter registration, polling places, absentee voting, election finance, and the overall administrative structure of elections.

Over the ensuing decade-plus, the VTP has been a key center of gravity on many of these issues. Probably its most significant contribution to the field was the development of the “residual vote” measure, aimed at identifying leaks in the nation’s election system:

[Our] research led us to develop an innovative yardstick to study the basic performance of voting technologies, which we termed the “residual vote rate.” (The residual vote rate is simply the number of over- and under-votes in a particular race, expressed as a percentage of the number of people who turned out to vote.) Using the residual vote rate, we could answer the central question posed by the 2000 presidential election: how accurate and reliable were these different voting systems?
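The yardstick defined above is simple arithmetic. As a minimal sketch (the county figures below are made up for illustration):

```python
def residual_vote_rate(overvotes, undervotes, turnout):
    """Residual vote rate: over- plus under-votes in a race,
    expressed as a share of the number of people who turned out."""
    if turnout <= 0:
        raise ValueError("turnout must be positive")
    return (overvotes + undervotes) / turnout

# Hypothetical county: 1,200 overvotes, 3,800 undervotes, 250,000 voters
rate = residual_vote_rate(1200, 3800, 250000)
print(f"{rate:.2%}")  # prints "2.00%"
```

A rate in this range would have been typical of punch-card jurisdictions in 2000; well-functioning systems run considerably lower.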

The answer, of course, was “not very”, which led to the virtual elimination of error-prone punch-card ballots. VTP’s research also identified the nation’s voter registration system as a key weakness, finding that as many as half of the votes “lost” in 2000 were due to registration issues. That led VTP researchers to work closely with leaders in the field like the National Academy of Sciences to design better approaches to registration.

VTP’s latest report brings the tradition of empirical analysis of election administration to the current day, and makes a number of recommendations about the best way forward after 2012. Two of these stand out, in my opinion, as the most significant.

First, the VTP report embraces election auditing, including a call for legislative mandates for audits and a commitment to research and development of voting technology that is designed to facilitate auditing and move away from security “testing”:

De-emphasize standards for security, aside from requirements for voter privacy and for auditability of election outcomes. While testing for minimal security properties is fine, expecting [testing authorities] to do a thorough security review is unrealistic and not likely to be effective. Instead, statistically meaningful post-election auditing should be mandated. (“Audit the election outcome, not the election equipment” (Stark and Wagner 2012))
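To make “statistically meaningful post-election auditing” concrete, here is a rough sketch of a ballot-polling audit in the spirit of the Stark and Wagner work cited above (a simplified two-candidate version of the BRAVO sequential test; the contest numbers are hypothetical, and real audits handle multiple candidates, invalid ballots, and escalation rules this sketch omits):

```python
import random

def ballot_polling_audit(ballots, reported_winner_share, risk_limit=0.05, seed=0):
    """Simplified two-candidate ballot-polling audit (BRAVO-style).

    Draws ballots at random and updates a sequential likelihood ratio:
    each ballot for the reported winner multiplies it by p/0.5, each
    ballot for the loser by (1-p)/0.5. Returns the number of ballots
    examined before the reported outcome is confirmed at the risk
    limit, or None if the full pool is exhausted without confirmation.
    """
    p = reported_winner_share          # reported winner's share, must exceed 0.5
    t = 1.0                            # sequential likelihood ratio
    rng = random.Random(seed)          # fixed seed for a reproducible draw
    order = rng.sample(range(len(ballots)), len(ballots))
    for n, i in enumerate(order, start=1):
        t *= (p / 0.5) if ballots[i] == "winner" else ((1 - p) / 0.5)
        if t >= 1 / risk_limit:        # strong evidence the outcome is right
            return n
    return None

# Hypothetical contest: 60% of 10,000 ballots for the reported winner
ballots = ["winner"] * 6000 + ["loser"] * 4000
print(ballot_polling_audit(ballots, reported_winner_share=0.6))
```

The point of the design is the one the report makes: the audit examines a (usually small) random sample of actual ballots to confirm the outcome, rather than trying to certify the equipment's security in advance.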

Second, the VTP report takes a distinctly contrarian view on the expansion of no-excuse absentee balloting and vote-by-mail, saying that jurisdictions should “[d]iscourage the continued rise of no-excuse absentee balloting and resist pressures to expand all-mail elections” based on a number of concerns, including coercion, the threat of fraud, accuracy, speed, and the loss of “public ceremony” in voting. The report isn’t as clear here on the way forward, except for a call for better research on the costs and benefits of different modes of voting. That probably makes sense, as it will almost certainly be difficult to un-ring the absentee/vote-by-mail bell, but the VTP’s skepticism should make everyone who cares about elections think hard about the rationale for, and experience with, the expansion of “postal voting” across the nation.

There is so much else in the report, including short “perspectives” from scholars and experts in the field – and me. [SPOILER ALERT: Mine makes a baseball reference. Try not to be surprised.] I plan to do several blog posts in the next few weeks based on the report.

It’s tremendously encouraging to see that twelve years in, the VTP – led by the lovely and talented Co-Directors Charles Stewart (MIT) and Mike Alvarez (CalTech) – is still going strong. Their new report is a must-read for anyone who cares about how we got to where we are in the field of elections and what it means for the foreseeable future of how, when and where Americans cast their ballots.

2 Comments on "New CalTech/MIT Report Looks Back – and Ahead – at Election Reform in America"

  1. David desJardins | October 18, 2012 at 10:45 am

    The whole theory of “residual vote” analysis seems fundamentally flawed, at least as the authors are applying it. They assert that vote-by-mail is worse because more people cast no votes in some races. But there’s no logical basis to such a conclusion. Maybe people who vote by mail are more likely to decide not to vote in some races, while people who vote in person are more likely to fill in all of the races. That doesn’t say anything about error or inaccuracy; it’s just a correlation with how people choose to vote. To argue that we should restrict millions of people from voting by mail because the authors don’t like the idea of some people not voting in some races (and they don’t even show that this would be less if everyone were forced to vote in person — maybe it’s just that the same people who are more likely to abstain are more likely to vote by mail, and they would also abstain more if they voted in person) seems a huge, unwarranted leap.

  2. Charles Stewart | October 19, 2012 at 3:52 pm

    On the issue of the residual vote rate. I would encourage David desJardins to read the considerable academic literature that has grown up using the residual vote rate as a dependent variable. He is quite right that simply looking at the residual vote rate in one election can’t distinguish between voter abstention and other factors that lead to over- and under-votes. The statistical work that has used the residual vote rate to gauge the quality of voting machines and other methods, like absentee voting, has been best applied dynamically, leveraging off the fact that some counties change their voting technology while others don’t (or some change the % of voters using absentee ballots, while others don’t) over long periods of time. The California research cited in the report, for instance, examined voting in California over the past two decades — a period of considerable variation (across time and space) in both the use of voting machines and in the usage of vote-by-mail.

    To conclude, if the research we were relying on was of the sort described in the above post, I would agree. But, it’s not. Like any research area, more work could be done, and refinements could certainly be made. If a voter wants to purchase some convenience by voting absentee, at the risk of a 97.5% chance his/her vote will be counted (vs. 99% if the vote is cast in a precinct), that voter should do so with his/her eyes wide open. There is, as they say, no free lunch.
