Here in Silicon Valley, we’d prefer our technology to be free of annoying social complexities. We’re extremely good at imagining a world where a particular innovation has won the day, but we’re also pretty talented at ignoring the messy transitions necessary to actually get there.
The latest case in point is the autonomous vehicle. The received wisdom in the Valley is that the technology for self-driving cars is already here — we just have to wait a few years while the slowpokes in Washington get with the program.
Within five years, we’ll all be autopiloted around — free to spend our otherwise unproductive driving time answering email, Snapchatting, or writing code.
Except, come on, there’s no way that’s gonna happen. Not in five years, anyway.
Social and moral agency
It’s the messy human bits that will slow it all down. Sure, the technology is pretty good, and will only get better. But self-driving cars raise major questions of social and moral agency — and it’s going to take us a long, long time to resolve those questions and instrument the answers.
And even when we do, it’s not clear we’re all going to agree, meaning that we’ll likely have different sets of rules for various polities around the world.
At the root of our potential disagreement is the Trolley Problem. You’ve most likely heard of this moral thought experiment, but in case you’ve not, it posits a life-and-death situation in which taking no action ensures the deaths of several people, while intervening ensures the death of someone else.
The Trolley Problem was largely a philosophical puzzle until recently, when its core conundrum emerged as a very real algorithmic hairball for manufacturers of autonomous vehicles.
Our current model of driving places agency — or social responsibility — squarely on the shoulders of the driver. If you’re operating a vehicle, you’re responsible for what that vehicle does. Hit a squadron of school kids because you were reading a text? That’s on you. Drive under the influence of alcohol and plow into oncoming traffic? You’re going to jail (if you survive, of course).
But autonomous vehicles relieve drivers of that agency, replacing it with algorithms that respond according to pre-determined rules. Exactly how those rules are determined, of course, is where the messy bits show up.
The Trolley Problem
In a modified version of the Trolley Problem, imagine you’re cruising along in your autonomous vehicle when a team of kids playing Pokemon Go runs out in front of your car.
Your self-driving vehicle has three choices: swerve left into oncoming traffic, which will almost certainly kill you; swerve right across a sidewalk and over an embankment, where the fall will most likely kill you; or continue straight ahead, which would save your life but most likely kill a few kids along the way.
What to do? Well, if you had been driving, I’d wager your social and human instincts would kick in, and you’d swerve to avoid the kids. I mean, they’re kids, right?!
But Mercedes-Benz, which, along with just about every other auto manufacturer, runs an advanced autonomous driving program, has made a different decision: It will plow right into the kids.
Why? Because Mercedes is a brand that for more than a century has meant safety, security, and privilege for its customers. So its automated software will choose to protect its passengers above all others. And let’s be honest: who wants to buy an autonomous car that might choose to kill you in any given situation?
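The gap between a passenger-first rule and a “minimize total harm” rule is easy to state in code. Here’s a minimal, entirely hypothetical sketch — the option names, risk numbers, and function names are invented for illustration, and bear no relation to any manufacturer’s actual software:

```python
# Hypothetical sketch of two collision-policy rules an autonomous vehicle
# might encode. All names and numbers are illustrative, not real code.
from dataclasses import dataclass
from typing import List

@dataclass
class Option:
    name: str
    passenger_deaths: float  # expected passenger fatalities for this choice
    bystander_deaths: float  # expected bystander fatalities for this choice

def passenger_first(options: List[Option]) -> Option:
    # Protect the passenger above all; break ties on bystander harm.
    return min(options, key=lambda o: (o.passenger_deaths, o.bystander_deaths))

def utilitarian(options: List[Option]) -> Option:
    # Minimize total expected deaths, passenger and bystanders alike.
    return min(options, key=lambda o: o.passenger_deaths + o.bystander_deaths)

# The modified Trolley Problem from above, with invented numbers:
scenario = [
    Option("swerve left into traffic", passenger_deaths=0.9, bystander_deaths=0.5),
    Option("swerve right over embankment", passenger_deaths=0.8, bystander_deaths=0.0),
    Option("continue straight", passenger_deaths=0.0, bystander_deaths=2.0),
]

print(passenger_first(scenario).name)  # continue straight
print(utilitarian(scenario).name)      # swerve right over embankment
```

The point of the sketch is that the same scenario yields opposite decisions depending on which rule is encoded — which is exactly why “exactly how those rules are determined” is where the messy bits show up.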
It’s pretty easy to imagine that every single automaker will adopt Mercedes’ philosophy. Where does that leave us? A fleet of autonomous robot killers, all making decisions that favor their individual customers over societal good?
It sounds far-fetched, but spend some time considering this scenario, and it becomes abundantly clear that we have a lot more planning to do before we can unleash this new form of robot agency on the world.
It’s messy, difficult work, and it most likely requires we rethink core assumptions about how roads are built, whether we need (literal) guardrails to protect us, and whether (or what kind of) cars should even be allowed near pedestrians and inside congested city centers.
In short, we most likely need an entirely new plan for transit, one that deeply rethinks the role automobiles play in our lives.
That’s going to take at least a generation. And as President Obama noted at a technology event last week, it’s going to take government.
…government will never run the way Silicon Valley runs because, by definition, democracy is messy. This is a big, diverse country with a lot of interests and a lot of disparate points of view. And part of government’s job, by the way, is dealing with problems that nobody else wants to deal with. — President Obama
Governance takes time. The real world is generally a lot messier than the world of our technological dreams.
When we imagine a world of self-driving cars, we imagine that only one thing changes: the driver shifts from a human to a generally competent AI. Everything else stays the same: The cars drive on the same roads, follow the same rules, and act just like they did when humans were in charge of them.
But I’m not convinced that vision holds. Are you?
About the Author
John Battelle is founder, executive chairman and CEO of NewCo Festivals, a mashup of an artist open studio, a tech conference, and a music festival, all focused on innovation and the vibrant cities that nurture change. This article first appeared on LinkedIn’s Influencer blog.