Yesterday’s news that a self-driving Uber struck and killed a pedestrian in Arizona made it clear that Silicon Valley has succeeded in setting impossibly high expectations for autonomous driving technology. That needs to change.
The crash occurred around 10 p.m. Sunday in the City of Tempe. Police have said that an Uber-owned Volvo XC90 was traveling in autonomous mode when it struck a woman who was walking outside of a designated crosswalk. An Uber safety operator was behind the wheel, which is standard for companies testing autonomous cars on public roads. The car doesn’t appear to have slowed down beforehand, police say.
It’s the first known pedestrian death associated with an autonomously operating car, and predictably, it drew significant attention from the public, similar to a 2016 fatal crash involving a Tesla Model S owner who was using Autopilot, the car’s less-capable driver-assistance feature that permits partially automated driving on highways.
The crash, certainly a nightmare for automakers and a tech industry laser-focused on commercializing autonomous cars, set off a morality play of sorts on social media. Who knew everyone had such unique, incisive takes about self-driving cars!
Numerous stakeholders and people with very strong opinions were clearly waiting in the wings for this moment—a moment everyone knew was inevitable—to spout off a take. But there are a few basic points that should be taken into consideration.
One misconception seems to be that driverless cars are ready for prime time. Sure, it’s wild to see Google deploy fully autonomous cars without a driver at the wheel—but that’s only happening in a very limited area. The technological advances necessary for driverless cars to operate anywhere at any time are vast. But that’s the expectation that’s been set for the public by autonomous car developers: We’ll have driverless cars ferrying passengers around in robotaxis, and, eventually, we’ll achieve zero traffic fatalities because autonomous driving will become ubiquitous.
Beyond the challenge of developing technology that can handle all the variables thrown at drivers on a daily basis, the reality of the situation is far more complicated. The expectation that people will be willing to cede control of their cars is short-sighted. The cost of the technology is also going to be prohibitive for most buyers for the foreseeable future.
And yet Uber—whose primary motivation for this is not having to pay human drivers anymore—wants to have real autonomous commercial vehicles on the road in the next 18 months. That’s unlikely, even more so after this week.
Even charitable estimates suggest that self-driving cars will comprise only about half of all vehicle sales worldwide by 2040. And that’s an optimistic outlook. It takes about 15-20 years for the auto fleet to turn over completely, meaning that even if automakers sold only driverless cars starting in 2040, we wouldn’t reach a world where everyone is transported in a self-driving car until 2055 at the earliest.
That isn’t going to happen (short of a ban on human-controlled driving). There’s going to be a mix of self-driving and manually operated cars on the road likely through the turn of the century and beyond. Without more regulatory intervention to improve traffic safety among regular drivers, that mix all but guarantees car fatalities will continue unimpeded.
The technology’s going to be deployed in some capacity, and it’s safe to say that’ll be for purposes of ride-hailing services like Uber and Lyft—but only in limited areas, like a small swath of an urban core.
A story this month by the tech news outlet The Information illustrated this point. It focused on General Motors and the status of its self-driving car unit, Cruise. The upshot?
Despite some advances, Cruise’s vehicles being tested in San Francisco are still repeatedly involved in accidents, or near-accidents where a person has to grab the wheel of the car to avoid a collision. As a result, despite promises that autonomous vehicles could be available in the next few years, it is likely to be a decade before the cars come into wide use in major cities, according to a person with direct knowledge of Cruise’s technology.
Those issues, which need solving just so the technology can handle city driving well enough to ferry people around like a taxi, might take a decade to resolve. And the expectation from autonomous driving boosters is that the technology will be widely deployed sooner than we think? It doesn’t jibe with reality.
At some point, I think it’s safe to say, autonomous cars will advance to the point that a driver isn’t needed and they’ll be able to operate as robotaxis. Whether they advance beyond that to capably handle any road, anywhere, and whether the public then willingly accepts them, is nothing more than a guess.
I make the point about public acceptance because autonomous car developers have been very clear about being guided by what the markets want. If car buyers decide en masse that they’re uninterested in purchasing an autonomous car, do you really think major corporations like Volkswagen and GM are going to stop making human-controlled cars? Nah. As I said back in December, they exist to make money. They’re not public utilities, or replacements for public transit.
GM’s president, Dan Ammann, made it clear what the automaker sees in self-driving cars.
“We believe this is the biggest business opportunity since the creation of the internet,” Ammann said. Uber, for its part, is racing to deploy autonomous technology because it would benefit the company’s bottom line by cutting out costly drivers.
The idea that autonomous driving is about making driving safer above all else is misguided. Automakers have had decades to perfect, say, seat belt interlocks that work effectively without being annoying. The public is supposed to accept that these companies can make a car that drives itself, but can’t create a working system that requires front-seat passengers to buckle their seat belts before the car will start? It’s illogical.
One obvious question that hasn’t been resolved is how to determine liability in a crash like this. If Uber’s car is found to have been driving irresponsibly, is the safety operator responsible for not taking the wheel? If the car didn’t have a driver, can Uber be held liable? How will courts respond if a lawsuit’s filed?
More input is also needed to offer guidance on how these cars should be deployed. Until this month, Arizona allowed self-driving cars to be tested in the state with virtually zero oversight. When dealing with companies that previously touted mantras like “Safety Third” for how to deploy autonomous cars, it’s malpractice to not move with urgency to create a regulatory framework for how self-driving technology can be tested.
If lawmakers and the public ultimately conclude that self-driving cars are safer, that means they’ll continue to be tested in real-world environments. That’s the unfortunate byproduct of the situation. Bryant Walker Smith, a law professor at the University of South Carolina who studies autonomous cars, summed it up well in a blog post yesterday:
Regardless of whether this crash was unavoidable, serious developers and regulators of automated driving systems understand that tragedies will occur. Automated driving is a challenging work in progress that may never be perfected, and I would be skeptical of anyone who claims that automated driving is a panacea—or who expresses shock that it is not.
Were it not for automakers and Silicon Valley types playing up expectations like we’ll see millions of robot cars on the road anytime soon, maybe it wouldn’t be as necessary to focus inordinate amounts of attention on a single crash.
In the U.S., Walker Smith points out, there’s about one fatality for every 100 million vehicle miles traveled. Automated driving technology is still nascent, and though this is just one fatality, the technology still has an incredibly long way to go to prove it’s inarguably safer than humans.
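That one-per-100-million-miles figure implies a rough threshold for how much evidence the technology still owes us. As an illustrative back-of-envelope sketch (my numbers and method, not the article’s), the statistical “rule of three” says that if a fleet logs n miles with zero fatalities, the 95% upper confidence bound on its fatality rate is roughly 3/n; merely matching the human rate would therefore take on the order of 300 million fatality-free miles:

```python
# Back-of-envelope sketch: how many crash-free miles would an autonomous
# fleet need before we could claim, with ~95% confidence, that it at least
# matches the human fatality rate? Uses the statistical "rule of three":
# with zero events observed over n trials, the 95% upper confidence bound
# on the event rate is approximately 3/n.

HUMAN_FATALITY_RATE = 1 / 100_000_000  # ~1 death per 100M vehicle miles (US)

def crash_free_miles_needed(target_rate: float) -> float:
    """Miles with zero fatalities needed so that the 95% upper bound
    on the fleet's fatality rate (3/miles) falls below target_rate."""
    return 3 / target_rate

miles = crash_free_miles_needed(HUMAN_FATALITY_RATE)
print(f"{miles:,.0f} fatality-free miles needed")  # ~300,000,000
```

This is a deliberately crude lower bound: it ignores differences in driving conditions and assumes fatalities are the only metric that matters, but it shows why a handful of test fleets logging a few million miles is nowhere near statistical proof of superiority.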
In a world where self-driving cars and human-driven cars mix, it’s unquestionably important to focus on how the automated technology performs in normal driving environments. It’s just one crash, but it needs to be covered, because the ones touting the technology as superior have so grossly contorted the conversation around it. The reality of the situation needs to be kept in check.