
What Should Robot Cars' Ethical Rules Be?


Recently, there's been a lot of talk about the ethics of robotic cars. Put simply, should your autonomous car kill you if it means saving two other people? This is a philosophical question that's been around for a long time, but its age doesn't make the answers any easier. What rules should robotic cars live by?


The most common place to start for ethical robotic rules is, of course, Asimov's Three Laws of Robotics. Those laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


This isn't a bad place to start for robotic-car ethics, though the First Law is too vague for the situations robotic cars will face, and that vagueness is the root of the problem. There will be situations in driving where lives could be saved by the sacrifice of the car and its occupants. Like, say, you've lost your brakes and your car is careening down a road — one fork ends in a big, warm, cushiony pile of babies and mattresses, and the other terminates in a lava pit, full of mutant lava sharks. You could survive by turning toward the babies, but you'd be kind of a monster. You could pick the lava pit and be a hero, but a dead hero.

So what should your self-driving car do?

Should Law 1 be Protect Your Occupants? Some would argue it's the default way people tend to drive now — would it make sense to codify that into the car's digital morality? If every car operated that way, would we have mutually-assured safety or selfish destruction?

The greater-good method, where the simple math of more lives saved = better is considered by some to be the optimal setting, does make a certain logical sense. And I think I could live with a car programmed to those rules — until I strap my son into his child seat. Would it be ethical to have a car that, when a pressure switch detects 35 lb of child in the seat, switches to a Law 1: Protect The Occupants setting? Is that amoral? Is it more amoral than sacrificing your child to save a van full of 97-year-old Klansmen who post dirty pictures of you on the Internet?
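Just to make the dilemma concrete, here's a toy sketch of what that mode switch could look like in code. Everything here — the function names, the way maneuvers are scored, the casualty numbers — is a hypothetical illustration, not anything a real autonomous car actually runs; only the 35 lb pressure-switch threshold comes from the scenario above.

```python
# Toy sketch (hypothetical): a car that defaults to a greater-good
# policy but flips to protect-the-occupants when the child-seat
# pressure switch reads 35 lb or more.

CHILD_SEAT_THRESHOLD_LB = 35  # the pressure-switch threshold from the scenario above


def select_policy(child_seat_weight_lb: float) -> str:
    """Pick the active ethical policy before the drive starts."""
    if child_seat_weight_lb >= CHILD_SEAT_THRESHOLD_LB:
        return "protect_occupants"
    return "greater_good"


def choose_maneuver(policy: str, options: list[dict]) -> dict:
    """Choose among emergency maneuvers.

    greater_good:       minimize total expected deaths.
    protect_occupants:  minimize occupant deaths first,
                        breaking ties by other deaths.
    """
    if policy == "protect_occupants":
        key = lambda o: (o["occupant_deaths"], o["other_deaths"])
    else:
        key = lambda o: o["occupant_deaths"] + o["other_deaths"]
    return min(options, key=key)


# The brakes-out fork from earlier: babies-and-mattresses vs. lava pit.
options = [
    {"name": "babies", "occupant_deaths": 0, "other_deaths": 2},
    {"name": "lava_pit", "occupant_deaths": 1, "other_deaths": 0},
]

# Empty seat: greater-good math sends you into the lava pit.
print(choose_maneuver(select_policy(0), options)["name"])    # lava_pit
# Child aboard: the car swerves toward the babies instead.
print(choose_maneuver(select_policy(40), options)["name"])   # babies
```

The unsettling part is visible right in the code: the entire moral character of the car changes because of one `if` statement on a weight sensor.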


These are difficult decisions, but they will need answers. And I think our community, composed of people who understand and love cars and driving, is a good group to help decide what those answers should be.

So, let's try something — in the comments, give me your take on what your Three Laws for Autonomous Cars would be. This should be interesting.