Image via MIT’s Moral Machine

The closer we get to fully autonomous car technology on the market, the more questions we seem to have. Recently, they’ve been morbid: if a self-driving car is in a fatal wreck, how does it decide who dies? Well, now you can understand just how hard those decisions will be to program—hypothetically, of course.


The “Moral Machine” is a new online activity—for lack of a better word, since “game” is a strange way to describe it—from the Massachusetts Institute of Technology that presents website visitors with 13 scenarios, each prompting them to choose the unlucky people or animals who would be killed should a self-driving car suffer a sudden brake failure.

The scenarios force visitors to choose between women and men, children and the elderly, the fit and the overweight, animals and humans, criminals and those with a clean slate, and professionals and the lower classes. Often, those choices are mixed into crowds, requiring visitors to pick the better overall option—in the eyes of the person choosing, that is. There’s also an element of breaking the law versus staying in the right.


Currently, the discussion of fatalities involving autonomous vehicles centers on minimizing casualties in a wreck—whether by choosing to kill the car’s occupants or the other party involved. The new government regulations on autonomous cars don’t yet touch on these death decisions, despite outlining safety assessments, scales of autonomy and required equipment in the cars.

The ethical decisions self-driving cars will likely have to face, and how they’ll likely see the options. Photo credit: Iyad Rahwan via Gizmodo

A study co-authored by MIT professor Iyad Rahwan examined the ethical dilemmas that arise when a self-driving car wrecks. Rahwan told Gizmodo that most people want cars to minimize total deaths—so long as their own car protects them at all costs. Those two preferences obviously conflict.

The MIT activity doesn’t put users in the position of deciding between their own lives and the lives of others, but that’s probably coming soon. It’s also an online activity, so the real danger and guilt aren’t there when making the decisions. But, at any rate, it does feel lousy to choose.


The goals of the activity, according to the website, are to build “a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas” and to crowd source “discussion of potential scenarios of moral consequence.” At the end, you’ll receive a report on the types of people and animals you chose to kill, compared to the choices of others who have done the activity. It’s really quite lovely and uplifting.

These are big decisions to put on the machines. Maybe we’re all better off staying at home.