This MIT Online Activity Lets You Choose Who Gets Killed If A Self-Driving Car Wrecks

The closer we get to fully autonomous car technology on the market, the more questions we seem to have. Recently, they’ve been morbid: if a self-driving car is in a fatal wreck, how does it decide who dies? Well, now you can understand just how hard those decisions will be to program—hypothetically, of course.

The “Moral Machine” is a new online activity (for lack of a better word, since “game” is a strange way to describe it) from the Massachusetts Institute of Technology that presents website visitors with 13 scenarios, each prompting them to choose the unlucky people or animals who would have to be killed should a self-driving car have a sudden brake failure.

Scenarios force visitors to choose between women and men, children and the elderly, the fit and the overweight, animals and humans, criminals and those with clean records, as well as professionals and lower-status workers. Often, those choices are mixed into crowds that require picking the better overall option, in the eyes of the person choosing, that is. There’s also an element of pedestrians crossing illegally versus those following the law.

Currently, the discussion of fatalities caused by autonomous vehicles centers on minimizing casualties in a wreck, whether that means sacrificing the car’s occupants or the other party involved. The government’s new guidelines for autonomous cars don’t even get into death decisions yet, despite outlining safety assessments, levels of autonomy and required equipment in the cars.

A study co-authored by MIT professor Iyad Rahwan examined the ethical dilemmas that arise when a self-driving car wrecks. Rahwan told Gizmodo that most people want such cars to minimize total deaths, so long as their own car protects them at all costs. Those two preferences obviously conflict.
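To see that conflict in concrete terms, here’s a minimal sketch (purely illustrative, and not anything from MIT’s study or a real vehicle’s software): score each possible outcome by its weighted expected deaths, where a hypothetical occupant_weight above one makes the car value its own passengers more than bystanders.

```python
# Toy sketch only: not code from MIT's study or any real vehicle.
# It shows why "minimize total deaths" and "protect the occupants
# at all costs" can demand opposite choices in the same scenario.

def pick_outcome(outcomes, occupant_weight=1.0):
    """Return the label of the outcome with the lowest weighted death count.

    Each outcome is (label, occupant_deaths, pedestrian_deaths).
    An occupant_weight above 1.0 means the car values its passengers
    more highly than the people outside it.
    """
    def cost(outcome):
        _, occupants, pedestrians = outcome
        return occupant_weight * occupants + pedestrians

    return min(outcomes, key=cost)[0]

# Brakes fail: stay the course and hit five pedestrians,
# or swerve into a barrier and kill the lone occupant.
scenario = [
    ("stay_course", 0, 5),  # 0 occupant deaths, 5 pedestrian deaths
    ("swerve", 1, 0),       # 1 occupant death, 0 pedestrian deaths
]

print(pick_outcome(scenario))                      # 'swerve' (utilitarian car)
print(pick_outcome(scenario, occupant_weight=10))  # 'stay_course' (self-protective car)
```

The same scenario produces opposite answers depending on how heavily the car is told to favor its own passengers, which is exactly the tension the survey respondents couldn’t resolve.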

The MIT activity doesn’t put users in a position where they would have to decide between their own life and the lives of others, but that’s probably coming soon. It’s also just an online activity, so none of the real danger and guilt is present when making the decisions. But, at any rate, it does feel lousy to choose.

According to the website, the goals of the activity are to build “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas” and to crowdsource “discussion of potential scenarios of moral consequence.” At the end, you’ll receive a report on the types of people and animals you chose to kill, compared with the choices of everyone else who has done the activity. It’s really quite lovely and uplifting.

These are big decisions to put on the machines. Maybe we’re all better off staying at home.