
This MIT Online Activity Lets You Choose Who Gets Killed If A Self-Driving Car Wrecks

Image via MIT’s Moral Machine

The closer we get to fully autonomous car technology on the market, the more questions we seem to have. Recently, they’ve been morbid: if a self-driving car is in a fatal wreck, how does it decide who dies? Well, now you can understand just how hard those decisions will be to program—hypothetically, of course.


The “Moral Machine” is a new online activity (for lack of a better word, since “game” is a strange way to describe it) from the Massachusetts Institute of Technology. It presents website visitors with 13 scenarios, each prompting them to choose which unlucky people or animals would die if a self-driving car suffered a sudden brake failure.

Scenarios force visitors to choose between women and men, children and the elderly, the fit and the overweight, animals and humans, criminals and people with clean records, and professionals and those of lower social status. Often those choices are mixed into crowds, so you have to pick the better overall option (in the eyes of the person choosing, that is). There’s also an element of breaking the law versus staying within it.
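To make the structure of these dilemmas concrete, here’s a minimal Python sketch (my own illustration, not MIT’s code) of how one such scenario might be represented and resolved under a simple fewest-casualties rule with a law-abidingness tie-break:

```python
from dataclasses import dataclass

# Hypothetical model of a Moral Machine-style dilemma. The names,
# fields, and tie-break rule are illustrative assumptions, not MIT's
# actual implementation.

@dataclass
class Outcome:
    description: str
    casualties: int
    victims_breaking_law: bool  # e.g., pedestrians crossing against the signal

def choose(stay: Outcome, swerve: Outcome) -> Outcome:
    """Pick the outcome with fewer deaths; on a tie, hit whichever
    group was breaking the law, sparing whoever was in the right."""
    if stay.casualties != swerve.casualties:
        return min(stay, swerve, key=lambda o: o.casualties)
    if stay.victims_breaking_law != swerve.victims_breaking_law:
        return stay if stay.victims_breaking_law else swerve
    return stay  # no principled tie-break left; default to not swerving

# One brake-failure dilemma, roughly as the activity frames it:
stay = Outcome("hit two jaywalking pedestrians", 2, True)
swerve = Outcome("hit one pedestrian crossing legally", 1, False)
print(choose(stay, swerve).description)  # -> hit one pedestrian crossing legally
```

Note how even this toy rule bakes in a value judgment: fewest casualties wins out here over sparing the law-abiding pedestrian, which is exactly the kind of trade-off the activity asks you to make by hand.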


Currently, the discussion of fatalities caused by autonomous vehicles centers on minimizing casualties in a wreck, whether that means killing the car’s occupants or the other party involved. The new government regulations on autonomous cars don’t even address these death decisions yet, despite outlining safety assessments, scales of autonomy and required equipment in the cars.

The ethical decisions self-driving cars will likely have to face, and how they’ll likely see the options. Photo credit: Iyad Rahwan via Gizmodo

A study co-authored by MIT professor Iyad Rahwan examined the ethical dilemmas that arise when a self-driving car wrecks. Rahwan told Gizmodo that most people want cars to minimize overall deaths, yet also want their own car to protect them at all costs. Those two preferences obviously conflict.
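A toy sketch of that conflict, with made-up numbers: a utilitarian policy and a self-protective policy disagree exactly when sacrificing the occupant saves more lives overall.

```python
# Toy illustration of the conflict Rahwan describes. Both policies and
# all numbers are assumptions invented for this example.

def utilitarian_choice(stay_deaths: int, swerve_deaths: int) -> str:
    # Minimize total deaths, the car's occupant included.
    return "swerve" if swerve_deaths < stay_deaths else "stay"

def self_protective_choice(stay_kills_occupant: bool,
                           swerve_kills_occupant: bool) -> str:
    # Protect the occupant above all; everyone else comes second.
    if stay_kills_occupant and not swerve_kills_occupant:
        return "swerve"
    return "stay"

# Brake failure: staying the course kills three pedestrians, while
# swerving into a barrier kills only the occupant.
print(utilitarian_choice(stay_deaths=3, swerve_deaths=1))    # swerve
print(self_protective_choice(stay_kills_occupant=False,
                             swerve_kills_occupant=True))    # stay
```

The two functions return opposite answers for the same crash, which is the whole problem: people endorse the first policy for everyone else’s car and the second for their own.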

The MIT activity doesn’t put users in a position where they must decide between their own life and the lives of others, but that’s probably coming soon. It’s also an online activity, so the real danger and guilt of the decisions aren’t there. But, at any rate, it does feel lousy to choose.


The goals of the activity, according to the website, are to build “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas” and to crowd-source “discussion of potential scenarios of moral consequence.” At the end, you’ll receive a report on the types of people and animals you chose to kill, compared with others who have done the activity. It’s really quite lovely and uplifting.
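Presumably that report works by comparing your answers against the aggregated crowd. A minimal sketch of that kind of comparison (my assumption about the mechanics, with invented labels and counts):

```python
from collections import Counter

# Hypothetical version of the end-of-activity comparison. The group
# labels and all counts here are invented for illustration.

crowd = Counter({"spared_young": 7200, "spared_elderly": 2800})  # everyone's answers
mine = Counter({"spared_young": 5, "spared_elderly": 1})         # one user's answers

for group in mine:
    my_rate = mine[group] / sum(mine.values())
    crowd_rate = crowd[group] / sum(crowd.values())
    print(f"{group}: you {my_rate:.0%} vs. everyone {crowd_rate:.0%}")
```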

These are big decisions to put on the machines. Maybe we’re all better off staying at home.

Staff writer, Jalopnik


DISCUSSION

soundman98

ok, i’m trying it, currently on 6/13.

i just want to say that it’s bullshit that it brings up the class of the people. a doctor, athlete, fat person, or executive as they indicate is not any more or less valuable than any other category of person. however, the caveat with the pregnant lady, i’ll allow, as that is a paradox (some people don’t consider the fetus human yet, while others do. and depending on personal beliefs, it can heavily skew whether a pregnant woman is more or less ‘valuable’ than a non-pregnant woman).

there is also no way that an autonomous car can know the social status of any individual in the crosswalk unless we all start getting chipped, and the tinfoil hat people have problems with that.