Tesla Didn't Add Eye-Tracking And Steering Wheel Sensors To Autopilot Over Cost, Driver Annoyance: Report


Tesla once considered adding safety features like eye-tracking technology and steering wheel sensors to its Autopilot semi-autonomous system, as General Motors’ competing Super Cruise system has, but allegedly declined because of cost, according to a new report from the Wall Street Journal.


Some developers had worried as far back as 2015 that drivers wouldn’t continue to pay attention to the road after engaging Autopilot, the WSJ reports, citing unnamed people familiar with the matter. They were anxious that there weren’t enough safeguards to make sure people stayed attentive.

From the story:

Tesla Inc.’s engineers repeatedly discussed adding sensors that would ensure drivers look at the road or keep their hands on the wheel both before and after the driver-assistance system was introduced in 2015, these people said.

Tesla executives including Chief Executive Elon Musk rejected the ideas because of costs and concerns that the technology was ineffective or would annoy drivers with overly sensitive sensors that would beep too often, the people said.

One way of making sure the driver is attentive is eye-tracking software, which uses a camera and infrared sensor to watch the driver’s eyes and confirm they are focused on the road and not elsewhere (again, GM’s Super Cruise does this). Another is installing sensors in the steering wheel that can tell whether a driver’s hands are on it.

Two recent crashes involving Tesla cars have the National Transportation Safety Board investigating the company. One investigation into the fatal Model X crash in Mountain View, California, pertains to the company’s semi-autonomous Autopilot feature, which Tesla said was on during the time of the crash.

In the March Model X crash, Tesla admitted that Autopilot was engaged, but that the driver had received “several visual and one audible hands-on warning” earlier in the drive and that his hands were not on the wheel six seconds before the crash.

Tesla warns all drivers about to engage Autopilot that it is their responsibility to stay in control of the car. The company’s website includes a note that reads, “Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time.” The car also issues audio and visual warnings to keep drivers engaged.


The people that the WSJ spoke to claim that, ultimately, decisions over such technology came down to cost:

“It came down to cost, and Elon was confident we wouldn’t need it,” one of those people said. Executives conveyed there was pressure for each vehicle to reach a certain profit margin, according to the people familiar with the matter.


Currently, Autopilot has a sensor that uses small movements in the steering wheel to judge whether the driver is holding it, but a quick touch from the driver can disable the dashboard warnings momentarily.

The NTSB, according to the WSJ, found this to be an unsatisfactory way of making sure drivers stayed engaged:

The National Transportation Safety Board said Autopilot lacked “an effective method of ensuring driver engagement” by allowing drivers to ignore warnings and keep their hands off the wheel for up to five minutes at a time. It also said Tesla’s steering-sensor system doesn’t ensure a driver is watching the road.


Tesla says that it’s doing everything it can to make sure that its cars and technology provide the safest rides for customers. A spokesperson told the WSJ in a statement:

“Everyone at Tesla is not only encouraged, but expected, to provide criticism and feedback to ensure that we’re creating the best, safest cars on the road. This is especially true on the Autopilot team, where we make decisions based on what will improve safety and provide the best customer experience, not for any other reason.”


Tesla has acknowledged driver inattentiveness before. Yet it’s curious that, despite knowing people pretty much can’t be trusted to be safe and responsible with semi-autonomous tech, Tesla supposedly chose not to implement extra safety features in order to keep costs down.




Would there be another reason not to do it? Seems pretty valid. Am I supposed to be outraged that a company didn’t add safety features with high cost and marginal effectiveness? I realize that last part is open for debate, but if we assume the ratio of false positives was indeed a factor, it’s not really hard to follow the logic.

Now I’m not saying I support Tesla’s Autopilot initiative. In fact, I’m against semi-autonomous driving altogether, as it straddles a dangerous gray area between driver alertness and computer competency, and I agree with the report’s suggestion that the current system is probably not effective. But that’s more symptomatic of the gray area in semi-autonomous driving, like I mentioned. In any event, it’s hard for me to get all hot and bothered by the news that they didn’t adopt a costly system with questionable effectiveness because... why would they?