The more you think about autonomous cars, the more questions you raise, causing you to think even more. It’s like being trapped in the most useless perpetual motion machine. Occasionally, though, interesting questions arise, like this one: what will crash testing an autonomous car entail?
At first, you may be thinking, Jason, you idiot. They’re still cars. You’ll still have to see how their crumple zones absorb impact energy, how their airbags deploy, and how their restraint systems keep their passengers from harm, just like we do with cars today.
This is all true, of course, but with autonomous cars it’s only half the job. You’re just testing the body; autonomous cars mean you have to crash-test the mind as well.
Current crash-testing protocols work fine for conventional meat-driven cars, but for autonomous vehicles I think they’ll be very inadequate, because all they do is run a car on a straight track into a big-ass wall or ram heavy carts into it.
To really test an autonomous car, new crash tests will need to be devised that place the car in situations where the accident could be avoided altogether or, if the accident is unavoidable, where decisions must be made so the outcome causes the least amount of damage.
And, because autonomous cars are really robots with the capacity to end a human life, this all now becomes a colossal can of very active and ethically ambiguous worms. If we expect NHTSA to come up with a standard set of crash avoidance and mitigation tests for autonomous cars – and we’d be crazy to think we shouldn’t – we need to realize that defining a set of tests also means that we need to define a set of ethical rules that we expect these cars to adhere to.
This, of course, is your pedantic philosophy-major roommate’s wet dream (well, one of them), because just the idea of coming up with a universally acceptable decision tree that could end in someone’s death is a massive, thorny, and wildly difficult undertaking that we’ve barely begun to consider.
For example, say we decide that a reasonable autonomous-vehicle safety test involves a high-speed, highway-like situation with an unexpected road condition that causes the autonomous car, despite its computer-fast reflexes, to lose control.
Since this is a safety/crash test, we’ll need real-world-type things to possibly crash into; let’s say another vehicle and a roadside barrier.
The autonomous car that has lost control has just enough remaining ability to put some directional input into its vector of travel. It could hit the other car, likely harming or killing that car’s passengers; it could aim for the barrier, likely harming or killing the passengers it’s carrying; or it could attempt some desperate other maneuver that might end up impacting both.
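To make the shape of that choice concrete, here’s a minimal, entirely hypothetical sketch of the arithmetic such a car might run in that instant: minimize expected harm. Every maneuver name, probability, and passenger count below is invented for illustration; no manufacturer has published anything like this.

```python
# Purely hypothetical sketch of the split-second choice described above:
# pick the maneuver with the lowest expected harm. All numbers are made up.

MANEUVERS = {
    # maneuver: (estimated probability of serious harm, people at risk)
    "hit_other_car": (0.7, 4),          # the other car's passengers
    "hit_barrier":   (0.6, 2),          # our own passengers
    "split_the_difference": (0.4, 6),   # desperate maneuver, everyone at risk
}

def expected_harm(p_harm, people):
    """Expected number of people seriously harmed by a maneuver."""
    return p_harm * people

def pick_maneuver(options):
    """Return the maneuver that minimizes expected harm."""
    return min(options, key=lambda m: expected_harm(*options[m]))

print(pick_maneuver(MANEUVERS))  # -> 'hit_barrier' with these made-up numbers
```

The queasy part: nudge any one of those invented numbers a little and the “right” answer flips between sacrificing the occupants and sacrificing strangers, which is exactly why this is an ethics problem and not just an engineering one.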
This is a pretty vague setup, but hardly an unlikely one; the presence of other cars and obstacles is all but a given for autonomous car travel at any moment, and weather or unexpected infrastructure failures can certainly conspire to cause such an event.
The point is, crash testing is going to need to step up: not just testing a car’s physical ability to withstand a crash, but testing and rating its ability to avoid crashes, or to handle them in the least damaging way.
And then there’s the question of which standard of behavior we choose. Are we going to let every manufacturer come up with its own algorithms, decision trees, and, essentially, code of ethics, or do we have the NHTSA mandate one standard set that all cars adhere to?
Will people start buying a particular brand because they know, in dangerous situations, a, say, Lincoln is more likely to favor its occupants, as opposed to a Subaru, which is rapidly getting a reputation for selfless sacrifice?
When autonomous cars are around in significant numbers, and communicating with one another, how much information do we want them to share about their occupants, for possible use in judging decisions in emergencies?
Should each car broadcast how many occupants it has? The ages of the occupants? Criminal records? Credit scores? Should baby seats have embedded RFID chips that give them priority over other vehicles in emergency situations?
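Just to make the privacy question concrete, here’s a hypothetical sketch of what such an occupant broadcast might contain. To be clear, no real vehicle-to-vehicle message standard that I’m aware of carries anything like this; every field here is invented.

```python
# Hypothetical vehicle-to-vehicle occupant broadcast, invented purely to
# make the privacy question concrete. No real V2V standard includes this.

from dataclasses import dataclass, field

@dataclass
class OccupantBroadcast:
    occupant_count: int                                 # seems harmless enough
    occupant_ages: list = field(default_factory=list)   # already creepier
    child_seat_rfid_present: bool = False               # priority for baby seats?
    # criminal_records, credit_scores, ...              # and where do we stop?
```

Each field down that list could plausibly help an emergency decision, and each one is more invasive than the last, which is the whole problem.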
I don’t have the answers to these questions, and I suspect for many of these questions there simply are no universally right answers.
Even without a solid, absolute basis of right and wrong, though, if we want to move ahead with autonomous cars, we’re going to need some kind of basic rules, I think. Perhaps there can be variations on basic behaviors by manufacturer, or perhaps we’ll allow the priority weighting of ethical decisions to be something the owner can adjust, in a very bizarre preference panel buried in the infotainment system.
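If you’re wondering what that preference panel might boil down to, here’s a speculative sketch: a handful of owner-adjustable weights feeding the same harm arithmetic as before. All names and values are, again, made up.

```python
# What the bizarre "ethics preference panel" might reduce to: a couple of
# owner-adjustable weights. All names and defaults here are invented.

ETHICS_PREFS = {
    "occupant_weight": 0.7,    # how much the car favors its own passengers
    "bystander_weight": 0.3,   # how much it favors everyone else
}

def harm_score(occupant_risk, bystander_risk, prefs=ETHICS_PREFS):
    """Lower is better; the car steers toward the lowest-scoring maneuver."""
    return (prefs["occupant_weight"] * occupant_risk
            + prefs["bystander_weight"] * bystander_risk)

# A "Lincoln" might ship with occupant_weight=0.9; a selfless "Subaru"
# with 0.4; and the owner gets a slider somewhere in between.
```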
I have a neighbor who is a professor who teaches classes in just these sorts of human-machine ethics. In speaking with her, I discovered that she has had essentially zero contact with anyone in the automotive industry.
This seems insane to me. Companies like Tesla are already deploying (if still limited) autonomous vehicles, and every major car company is developing them to some degree. We’re, at most, a couple of decades away from populating public streets, for the first time, with robots capable of ending a human life.
I’m not trying to be an alarmist, but the time to get all the major car companies together, along with the insurance companies, the NHTSA, and whatever group professional ethicists have, to sit down and really start discussing these issues is now.
Before we start selling these things en masse, we need to address how they’re going to behave in tricky situations and, ideally, do our best to come up with some basic set of rules.
Asimov’s Three Laws of Robotics don’t really factor in the no-intention-to-harm situations that make up most car accidents, and they don’t deal with the idea that sometimes harming one or some people may in turn help other people. The real world of cars is much muddier and less clear-cut than even Asimov imagined for his shiny, humanoid robots.
This process should be as public as possible, too, even though that’s all but guaranteed to make it so much harder and drag it out so much longer. Idiots will be heard from, loudly and frequently. But I think it has to be something everyone is aware of, because in the end it could affect anyone.
This all started with me just wondering what an autonomous car crash test would be like, but as you can see, designing the test is the problem we only get to solve after we’ve decided what rules and criteria we’re testing for.
And I suspect that process will prove difficult enough that many of us may just want to volunteer to be the crash dummies. But we still have to do this.