As the future of driverless cars rolls (itself?) onward, one person in the business wants to slow the process — at least until all of the bases are covered.
Chris Gerdes, a Stanford University engineering professor, has long been interested in programming driverless race cars. But as manufacturers and other minds behind the idea of an autonomous car continue to develop their products and ready them for market, Bloomberg reports that Gerdes doesn’t believe the industry is quite ready — he says there are “a lot of subtle, but important things yet to be solved.”
Gerdes is regarded as “Switzerland” — the neutral voice — in the car industry, as philosophy professor Patrick Lin told Bloomberg. Executives began meeting with Gerdes at his lab after a driverless-car workshop that focused on what cars still need: ethics.
Bloomberg recently had the chance to pick his brain a bit:
Take that double-yellow line problem. It is clear that the car should cross it to avoid the road crew. Less clear is how to go about programming a machine to break the law or to make still more complex ethical calls.
“We need to take a step back and say, ‘Wait a minute, is that what we should be programming the car to think about? Is that even the right question to ask?’” Gerdes said. “We need to think about traffic codes reflecting actual behavior to avoid putting the programmer in a situation of deciding what is safe versus what is legal.”
Gerdes’ work on autonomous cars includes programming a driverless race car to race the entire Pikes Peak Hill Climb course in Colorado and experiments with monitoring electrical activity in the brain as drivers race around the track, as he described in his 2012 TEDx talk:
As Gerdes mentioned in the TEDx talk, the inspiration for his goal of making autonomous cars more intuitive comes in part from the “high bar” set by the amazing capabilities of the human body and mind:
“We believe that before people turn over control to an autonomous car, that autonomous car should be at least as good as the very best human drivers,” Gerdes said.
While Gerdes firmly believed in the human-skill aspect of the autonomous car at the time, human ethics entered the picture when he received an email from George Bekey, co-author with Lin of the book entitled Robot Ethics.
Questions arose about unavoidable accidents, moral choices about where to steer if a collision is inevitable, and the like. If vehicles are going to drive as humans do, how can their programming act and make decisions as a human would behind the wheel? When problematic situations arise on the road, the task facing a driverless car is far more complex than simply staying between the lines.
“With any new technology, there’s a peak in hype and then there’s a trough of disillusionment,” Gerdes said. “We’re somewhere on that hype peak at the moment. The benefits [of driverless cars] are real, but we may have a valley ahead of us before we see all of the society-transforming benefits of this sort of technology.”
Essentially, Gerdes echoes what we all heard as kids — “Better safe than sorry.” Except in this case, he’s advising us not to dive headfirst into the future of driving, rather than warning us against diving hand-first into the cookie jar before dinner.
If we’re patient, the outcome may end up all the sweeter — just like we’ve always been instructed.
Photo credit: AP Photo/Michael Sohn
Contact the author at email@example.com.