[Image courtesy of lucreid.com]
There is nothing quite like new data to set the election geek world into a frenzy of delight.
The EAC’s release yesterday of its latest report on the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) is the newest information we have about the fate of ballots cast by military and overseas voters. The report (teased mercilessly on Twitter by the EAC in a masterstroke of geek marketing, by the way) is especially important because it contains the first data to capture changes made by the MOVE Act of 2009.
The data contains lots of good news for anyone who cares about the ability of military and overseas voters to participate in democracy from a distance, and it goes some way toward validating the efforts by Congress and state/local election offices to improve the UOCAVA balloting process.
But in this good news lurks a persistent problem that I find simultaneously puzzling and frustrating: Why aren’t more election offices responding fully to the survey?
MIT’s Charles Stewart (as usual) has a succinct analysis of the initial results at Election Updates. The two key numbers he notes:
- “[t]he fraction of jurisdictions reporting how many ballots were returned rose, from 90% to 94%.”;
- “only 75% of jurisdictions (up from 72% in 2008) reported how many ballots were rejected because they arrived too late to be counted.”
In addition, even a cursory look at this EAC graphic reveals that over half (55.2%) of the UOCAVA ballots rejected in 2010 for a specified reason were coded as “other” – and that many other rejected ballots (more than one in ten) were not categorized at all.
While the overall UOCAVA news is good, this missing data makes it hard to get the complete picture. Fortunately, I know that the geeks out there are already hammering away at the data and will be able to tell us more soon.
Until then, we need to look again at the process by which we gather data like this and figure out how to improve response rates. To do so, we need to determine which one or more of the following factors are hindering response rates:
- outright refusal/failure to respond;
- inability to respond fully because of data collection issues;
- data coding challenges created by the “fruit salad” problem; or
- the dreaded “other”.
Most likely, every jurisdiction has some combination of these in play – and addressing and overcoming each of them will be a key part of improving response rates.
We must never forget that the ultimate goal is to assess the performance of the nation’s election system. But getting better data is a key intermediate step in the process – and figuring out how to reduce if not eliminate the number of “empty mailboxes” is a good way to start.