In the future, there will be no getting away from computers ratting you out for your human mistakes.
Today Tesla Motors had to respond to yet another highly publicized story of one of its cars crashing into something, this time a building full of people, with the owner blaming the vehicle’s semi-autonomous capabilities. And once again, Tesla proved its vehicle’s Autopilot system was not to blame by releasing the data from the vehicle at the time of the incident.
The most recent story, as reported by Electrek, was about a Tesla Model X that crashed into a building just a week after the owner had taken delivery. The owner claimed that the Model X “suddenly and unexpectedly accelerated at high speed on its own climbing over 39 feet of planters and crashing into a building.”
The owner described the incident on Tesla’s forums and stated that Tesla should halt Model X deliveries immediately.
Tesla later responded to Electrek about the Model X incident, saying that the vehicle’s Autopilot feature had not been active at the time of the incident or before, and that the vehicle’s data logs showed the accelerator pedal being pressed just before impact. Here’s the full statement from Tesla as reported by Electrek:
“We analyzed the vehicle logs which confirm that this Model X was operating correctly under manual control and was never in Autopilot or cruise control at the time of the incident or in the minutes before. Data shows that the vehicle was traveling at 6 mph when the accelerator pedal was abruptly increased to 100%. Consistent with the driver’s actions, the vehicle applied torque and accelerated as instructed. Safety is the top priority at Tesla and we engineer and build our cars with this foremost in mind. We are pleased that the driver is ok and ask our customers to exercise safe behavior when using our vehicles.”
Of course, the information Tesla receives from the vehicle offers only a vague picture of the moments before, during, and after an accident. But it’s enough to prove that the owner’s claims of the Autopilot system failing, or the vehicle acting on its own in some other way, simply aren’t accurate.
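For readers curious what that kind of log check might look like, here’s a minimal sketch. Tesla’s actual log format is not public, so everything below, including the field names (speed_mph, accel_pedal_pct, autopilot_active) and the analyze function, is a hypothetical illustration of the two questions the logs can answer in a case like this: was Autopilot ever active, and when did the pedal go to 100%?

```python
# Hypothetical sketch only: Tesla's real log schema is not public,
# and these field names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRow:
    t: float                 # seconds since logging started, chronological
    speed_mph: float
    accel_pedal_pct: float   # accelerator pedal position, 0-100
    autopilot_active: bool

def analyze(rows: list[LogRow]) -> str:
    """Report Autopilot usage and the first full-throttle event, if any."""
    autopilot_used = any(r.autopilot_active for r in rows)
    spike: Optional[LogRow] = next(
        (r for r in rows if r.accel_pedal_pct >= 100.0), None
    )
    lines = [f"Autopilot active at any point: {autopilot_used}"]
    if spike is not None:
        lines.append(
            f"Pedal hit 100% at t={spike.t:.1f}s while traveling "
            f"{spike.speed_mph:.0f} mph"
        )
    return "\n".join(lines)

# Toy data matching the scenario in Tesla's statement: manual driving
# at low speed, then a sudden full press of the accelerator.
log = [
    LogRow(0.0, 5.0, 8.0, False),
    LogRow(1.0, 6.0, 10.0, False),
    LogRow(2.0, 6.0, 100.0, False),  # abrupt jump to full throttle
]
print(analyze(log))
```

On toy data like this, the output mirrors Tesla’s statement: Autopilot never active, pedal abruptly at 100% while the car was traveling 6 mph.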
The owner’s claims in this situation are precarious, or perhaps even suspicious, considering that just a few weeks ago another similar story was widely publicized about a Model S crashing into a parked trailer. That owner claimed it happened after he had parked the car, while Tesla’s data proved that he had directed the Autopilot system to drive the car forward from inside the vehicle.
The obvious conclusion is that, in both cases, the owner doesn’t want to accept responsibility for the accident. But perhaps that’s unfair to suggest, as the previous Model S incident led Tesla to update the procedure for activating the Autopilot system, acknowledging that a possibly confusing user interface could have played a part in the accident.
But the case of the Model X crashing into the building sounds like a relatively common form of human error: the driver mistakenly presses the accelerator pedal instead of the brake and panics.
It doesn’t seem out of the question that there was some early confusion, given that the Model X was a brand-new car delivered to the owner and his wife just a week earlier.
Unfortunately, whatever happened put a very expensive SUV into the side of a building.
Most of us have probably tried to get out of an unfortunate situation before, whether it was an accident we were at fault for or something else.
The reality now is that, with advances in technology and a future promising more automation and less human responsibility, human error will only become more obvious and easier to prove, especially by companies with the data to defend themselves against users looking to shift the blame for their mistakes.
I’m not suggesting that Autopilot is perfect, or that an incident where the system fails to prevent an accident is impossible. I’m just suggesting that we should all be more aware of which buttons we do and don’t press, because someone is always watching.