[Image courtesy of shellygrrl/Flickr (hat tip to Wired)]

Every now and then, a really interesting piece rolls through my Twitter feed; earlier this week, it was a Wired piece about the growing use of “A/B testing” on the web:

Welcome, guinea pigs. Because if you’ve spent any time using the web today — and if you’re reading this, that’s a safe bet — you’ve most likely already been an unwitting subject in what’s called an A/B test. It’s the practice of performing real-time experiments on a site’s live traffic, showing different content and formatting to different users and observing which performs better.
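For readers who haven’t encountered it, the mechanics are simple enough to sketch in a few lines of Python. The variant labels, click rates, and user IDs below are purely illustrative; this is just the split-and-tally idea at its most minimal:

```python
import hashlib
import random

# Two hypothetical versions of a button label under test
VARIANTS = {"A": "Sign up now", "B": "Get started free"}

def assign_variant(user_id: str) -> str:
    """Bucket a user deterministically so repeat visits see the same variant."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulate outcomes: did each user click? (The click rates here are invented.)
results = {"A": [0, 0], "B": [0, 0]}  # variant -> [clicks, impressions]
for i in range(10_000):
    variant = assign_variant(f"user-{i}")
    clicked = random.random() < (0.10 if variant == "A" else 0.12)
    results[variant][0] += int(clicked)
    results[variant][1] += 1

for variant, (clicks, shows) in results.items():
    print(f"Variant {variant}: {clicks / shows:.1%} click-through over {shows} impressions")
```

Real systems layer significance testing and dozens of simultaneous experiments on top, but the split-and-compare core is just this.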

The article notes that A/B testing (explained in further detail here) has been around for a little more than a decade and has been embraced most notably by giants like Google and Amazon, who use the procedure to test and tweak virtually every aspect of their online experience.

What fascinated me, though, was the discussion of how A/B testing might make the leap into physical space – and more specifically, into the realm of policymaking:

“It is one of the happy incidents of the federal system,” wrote Associate Supreme Court Justice Louis D. Brandeis in 1932, “that a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.”

In the realm of politics A/B testing makes an unexpected argument for things like block grants and state, as opposed to federal, power. As Silicon Valley’s A/B devotees can increasingly attest, not everything is best solved by discussion and debate. Differences in the way policy is implemented and issues are addressed at the state level make for a rough 50-way A/B test–yielding empirical data that can often go where partisan thought-experiments, and even debate at its most productive (but nonetheless theoretical) cannot.

Of course, the article had me at “data” – but would this work in elections, and if so, how? The article’s author offers an analogy from the world of corrections:

Here’s one way that could play out. Say (as has too often been the case) my car gets ticketed on street sweeping day: the ticketing officer runs my plates, which show whether I’m in the Restitutive Group or the Punitive Group. If the former, I’m fined the $10 it takes the city to hand-sweep that fifteen-foot section of curb. If the latter, I’m fined the $75 it will take to make me think twice every time I park. Lawmakers would determine the relevant metric (say, recidivism) and would quickly establish, to a scientific certainty, whether the stiffer penalty had the desired effects. Why debate when you can test?
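To make the plumbing concrete: the assignment step is the same bucketing trick web testers use, pointed at license plates instead of browser cookies. A hypothetical sketch (the group names and fine amounts come from the quoted passage; the function names and plate are invented):

```python
import hashlib

RESTITUTIVE_FINE = 10  # covers the cost of hand-sweeping the curb
PUNITIVE_FINE = 75     # priced to deter

def assign_group(plate: str) -> str:
    """Bucket a license plate into one of the two experimental groups."""
    digest = hashlib.md5(plate.encode()).hexdigest()
    return "Restitutive" if int(digest, 16) % 2 == 0 else "Punitive"

def fine_for(plate: str) -> int:
    return RESTITUTIVE_FINE if assign_group(plate) == "Restitutive" else PUNITIVE_FINE

# The ticketing officer's terminal would run something like:
plate = "7ABC123"  # invented plate
print(f"{plate}: {assign_group(plate)} group, ${fine_for(plate)} fine")
```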

The challenge, of course, is that such tests treat users as fungible; while this might be good for ad copy or layout, it’s a little trickier when “learning that a particular [policy] is [good or bad] comes only after you’ve administered it to real people living real lives.”

Even using A/B testing in an online context – like a polling place finder – is a little uncomfortable, given the inherent risk that voters who fall into one group or the other might be disadvantaged by that random selection.

Having raised the red flags, however, I do think the concept of A/B testing has some merit; it could be built into other parts of the elections process – for example, pre-election usability testing or officer/pollworker training – where using the combination of random selection and observation might teach you something that simply thinking through a problem can’t.
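For something like pollworker training, the back end of such a test would be a garden-variety two-proportion comparison: randomly assign trainees to one curriculum or the other, measure an outcome, and ask whether the gap is bigger than chance would produce. A rough sketch, with made-up numbers:

```python
import math

# Invented results: pollworkers randomly assigned to two training curricula,
# measured by whether they made at least one check-in error on election day
errors_a, workers_a = 42, 300  # curriculum A
errors_b, workers_b = 24, 300  # curriculum B

p_a, p_b = errors_a / workers_a, errors_b / workers_b
p_pool = (errors_a + errors_b) / (workers_a + workers_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / workers_a + 1 / workers_b))
z = (p_a - p_b) / se  # two-proportion z-statistic

print(f"Error rate A: {p_a:.1%}, error rate B: {p_b:.1%}, z = {z:.2f}")
# |z| > 1.96 is the conventional bar for significance at the 95% level
```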

If nothing else, thinking through how A/B testing would or wouldn’t work is a valuable exercise for anyone who seeks to learn more about election administration.

Plus, it offers continued justification for my wide-ranging Twitter feed!