[Image courtesy of ElectionDiary]
On Monday, I blogged about the outside expert’s report examining last April’s troubled election in Anchorage, AK. One key takeaway from that report was the failure of election officials to properly predict turnout, which led to a ballot shortage at the polls.
But even that report noted that while written guidelines for predicting turnout were desirable, the need for “institutional knowledge and common sense” made it difficult to craft such guidelines. To an observer, that might seem puzzling – given years of turnout data, why should predictions (even rough predictions) be so hard?
Brian Newby’s latest ElectionDiary explains why. Brian describes his process for estimating turnout – in other words, the demand for ballots – for Johnson County, KS’s upcoming August 7 election. Right at the outset, Brian does a nice job of analogizing the problem to similar challenges facing high-volume service providers, and identifying the potential trouble spots:
20 years [ago,] my gig was product manager for telecommunications relay service at Sprint. That service bridged persons who were hearing with those who were deaf, hard of hearing, or speech-disabled. It was a service opportunity created by the Americans with Disabilities Act, grew from start-up to $65 million a year in revenue before I changed roles, and is the only business of any size where Sprint ever was the market-share leader over AT&T.
That was a call-center business and our primary unknown, after winning a state contract, was the day one call volume. I created some variation of math soup, blending county and state populations with the population of persons who were deaf, to come up with an estimate and we were usually pretty close. To underestimate would result in long-answer times and to over-estimate would lead to profitability issues.
Predicting election turnout has the same ramifications–long lines or excessive costs are the two bookends. [emphasis added]
Usually, he says, he looks for a pattern – “2010 might feel like 2002, 2008 like 2004, and so on” – but this year “the pattern is missing” because “there has never been an August election [like this one] without a competitive race for a national or statewide office.”
At the end of the day, Newby is planning to use a combination of what he calls “math soup” and gut feeling to estimate the turnout:
I’m predicting a 20 percent turnout simply because that’s what it feels like. I can’t scientifically define the “20-percent feeling,” and that bothers me. We always build the election for slightly more than my prediction, so we’re preparing for 25 percent, or about 75,000 voters.
Right now, we are entering applications for advance ballots by mail. I expect we will send out about 8,000 next Wednesday, the first day allowed. If that represents 5/13 of the advance voting number (the metric from 2008), that would give us about 21,000 advance voters.
If those voters represent a third of the total, we’d have 63,000 voters. If half, we’d have 42,000.
That’s a turnout range between 11 and 17 percent. Those are downright spring-like numbers, not August numbers.
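The arithmetic in the quoted passage can be sketched in a few lines of code. This is a hypothetical reconstruction, not Newby’s actual model: the day-one application count, the 5/13 ratio, and the one-third and one-half advance shares come straight from his post, while the registration figure of roughly 370,000 is an assumption back-derived from his quoted 11 and 17 percent turnout figures.

```python
# Hypothetical sketch of the "math soup" turnout estimate described above.
# REGISTERED_VOTERS is an assumed figure inferred from the quoted
# percentages; it does not appear in the original post.

REGISTERED_VOTERS = 370_000       # assumed Johnson County, KS registration
mail_apps_day_one = 8_000         # advance-ballot applications, first mailing day

# 2008 metric: day-one mail applications were 5/13 of all advance voters
advance_voters = mail_apps_day_one * 13 / 5   # about 21,000

# Advance voters as one-third or one-half of total turnout
for advance_share in (1 / 3, 1 / 2):
    total_voters = advance_voters / advance_share
    turnout_pct = 100 * total_voters / REGISTERED_VOTERS
    print(f"advance = {advance_share:.0%} of total -> "
          f"{total_voters:,.0f} voters, {turnout_pct:.0f}% turnout")
```

Running this reproduces the roughly 42,000-to-63,000 voter range and the 11-to-17 percent turnout bookends in the quote, which makes the gap between the “math soup” answer and Newby’s 20 percent gut feeling easy to see.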
Reading Newby’s post, I am struck by how much of the process is guesswork – educated, to be sure, but guesswork nonetheless. The whole process makes it much clearer why election officials are loath to commit to fixed guidelines for turnout prediction.
Of course, as Newby observes, “I’m particularly good at predicting turnout the day after the election” – but the rest of his post demonstrates why any prediction before then is so hard to accomplish successfully.