Hackers Show That Tesla Autonomous Sensors Can Be Fooled, But It's All A Bit Stupid


Autonomous cars have many components that are analogous to our own moist, biological sensory organs. A Tesla’s radar emitter and camera, for example, serve a similar purpose in driving as your big, beautiful eyes. You know how if you cover your eyes, you can’t drive very well? That’s essentially what researchers proved on a Tesla.

Security researchers from the University of South Carolina, China’s Zhejiang University, and Qihoo 360, a Chinese security company, have been preparing a paper for the upcoming Defcon hacker conference that demonstrates how they’ve been able to spoof the autonomous driving sensors on a Tesla and, really, on many other brands, since these types of sensors are common to many kinds of cars.

Using over $90,000 worth of equipment on a cart, including a signal generator, the researchers jammed the RF signals that the Tesla’s bumper-mounted radar emitter sends and receives, effectively blinding the radar system. That system is used to determine the presence and distance of cars and obstacles in front of the car.
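To appreciate why drowning out that signal works so well, remember that the radar’s whole job is timing: it sends out a pulse, waits for the reflection, and converts the round-trip time into a distance. Here’s a deliberately oversimplified sketch of that math (real automotive radar uses frequency modulation rather than raw pulse timing, and the numbers are invented for illustration):

```cpp
// Oversimplified pulse-radar ranging, for illustration only.
// Real automotive radar is frequency-modulated (FMCW), not pulse-timed.
#include <iostream>

int main() {
    const double SPEED_OF_LIGHT = 299792458.0; // meters per second

    // Round-trip time of the reflected pulse. A jammer that drowns out
    // the echo makes this unmeasurable, so no distance gets computed.
    double round_trip_seconds = 200e-9; // 200 nanoseconds, invented value

    // The pulse travels out and back, so halve the distance it covered.
    double distance_meters = SPEED_OF_LIGHT * round_trip_seconds / 2.0;

    std::cout << "Obstacle at roughly " << distance_meters << " m\n"; // ~30 m
    return 0;
}
```

No echo means no measurable round-trip time, which means no distance: as far as the radar is concerned, there’s nothing there at all.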


In this test, which they videoed, the cart carrying all that expensive equipment sits in front of the Tesla, which perceives the cart as a car in front of it. When the jamming system is turned on, you can see the car disappear from the Tesla’s display, confirming it’s now invisible to the car.

See? Works great. You know what could also disable that front radar sensor? A bit of duct tape, or some mud, both of which are widely available at prices significantly less than $90,000.


Also, for this sort of radar-based attack to work at driving speeds, the jamming rig would need to be in front of the target car, broadcasting constant jamming signals backwards at it, all in hopes of making the target car not see the car in front of it. Which might well be the car doing the jamming.


Jesus, just PIT maneuver the Tesla into a guard rail, already. It’s cheaper and quicker. We do have things going on other than murdering this Tesla driver, you know.


There is one advantage here, for nefarious goals: this doesn’t seem to alert the Tesla that anything is amiss; it just makes the car disappear. I mean, you’d still see it out the huge window in front of you, but the car wouldn’t. So, unlike a balloon of paint on your windshield, this could theoretically be done fairly secretly. I mean, again, if you never look out the windows.

The researchers also used an Arduino connected to an ultrasonic transducer to spoof the car’s ultrasonic parking sensors, used in the autopark and ‘summon’ features, into thinking there were obstacles where none actually existed. Well, no obstacles other than the little stand holding the Arduino rig, which has to be within a few feet of the vehicle to work.
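The underlying trick is almost insultingly simple: parking sensors chirp at around 40 kHz and listen for an echo, so anything else nearby chirping at 40 kHz gets mistaken for an echo. The researchers haven’t published their code here, but a spoofer along these lines could be as dumb as this hypothetical Arduino sketch (the pin wiring and timing are my assumptions, not anything from the paper):

```cpp
// Hypothetical ultrasonic spoofer sketch, for illustration only.
// Drives a 40 kHz transducer so a nearby parking sensor hears
// "echoes" from obstacles that don't exist. Pin choice is assumed.
const int TRANSDUCER_PIN = 9;

void setup() {
    pinMode(TRANSDUCER_PIN, OUTPUT);
}

// One burst of a roughly 40 kHz square wave (~25 us per full cycle).
void sendBurst(int cycles) {
    for (int i = 0; i < cycles; i++) {
        digitalWrite(TRANSDUCER_PIN, HIGH);
        delayMicroseconds(12);
        digitalWrite(TRANSDUCER_PIN, LOW);
        delayMicroseconds(13);
    }
}

void loop() {
    // Chirp almost continuously: whenever the car's sensor listens
    // for its own echo, it hears ours and reads it as a close obstacle.
    sendBurst(8);
    delayMicroseconds(500);
}
```

Point that at a bumper from a couple of feet away and the car hears phantom obstacles all over the place.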


Oh, they also showed that an object could be rendered invisible to the ultrasonic sensors if you wrapped it in sound-deadening acoustic foam. I guess that’s a great way to hide if nobody inside the car wonders why that chunk of voice-over booth is sneaking past the car.

I understand the importance of finding the limits of an autonomous car’s sensors, absolutely, but most of this particular research seems like it could be summarized in a two-word abstract: no shit.


If you jam the signals from the radar emitter, of course the car won’t be able to find objects in front of it. Did anyone doubt that? Same goes for the ultrasonic sensors: if you impede their ability to work properly, they won’t work properly. The exact same results can be obtained in a human-driven car by flinging a balloon full of paint at the windshield; it blinds the driver’s vision sensors and makes the vehicle unsafe to drive.

I know the research teams are doing good work overall, and I suppose there is a possibility people with ill intentions and unloving mothers might try to cause autonomous vehicles to crash. Even so, they’re not going to spend 90 large on a signal-emitting rig. They’ll buy a used Tercel and drive it into you, or something. They can confuse ultrasonic sensors by taping a penny over the emitter, still a good $29.99 cheaper than rigging up an Arduino ultrasonic emitter.


They tested other attacks as well, including blinding the Tesla’s visual imaging cameras with lasers, even managing to create permanent dead pixels in the camera’s CCD. You know what other visual imaging devices can be blinded and incur possible permanent damage from having lasers shined in them? Eyeballs.

Tesla responded to Wired regarding these experiments, saying:

“We appreciate the work Wenyuan and team put into researching potential attacks on sensors used in the Autopilot system. We have reviewed these results with Wenyuan’s team and have thus far not been able to reproduce any real-world cases that pose risk to Tesla drivers.”


This time, I mostly agree with Tesla. Not that there aren’t risks, but more that there’s no real reason to worry about these particular risks. If somebody is so pissed at you that they’re considering spoofing your car’s autonomous systems in hopes of causing you to crash, is there any situation where they wouldn’t abandon that as too expensive and complicated, and just hire some goons to work you over or shoot you? In the end, it’s a much more guaranteed result for probably the same or even less money.

Car hacking is an issue, absolutely, but just confusing sensors at close distances with expensive equipment isn’t the sort of hacking we need to be worried about. If someone is within 10 feet of your car and wants to do you harm, a flung cinder block beats an Arduino any day.


Still, sensor issues do happen, and we need to be sure autonomous cars can handle them safely, ideally with something more than a cry for a human to come help. A person can often guide a disabled car safely off the road if something compromises their ability to drive; a robot car should be able to at least attempt the same thing.
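In the crudest possible terms, that means something like a little fail-safe state machine: lose one sensor, slow down and drive cautiously; lose them all, get yourself to the shoulder before crying for help. To be clear, this sketch is purely illustrative; the states and logic are invented, not lifted from any actual autopilot stack:

```cpp
// Invented fail-safe logic for illustration; not from any real autopilot.
#include <iostream>

enum class DriveState { Normal, Degraded, PullingOver, Stopped };

DriveState handleSensorHealth(DriveState state, bool radarOk, bool sonarOk) {
    switch (state) {
    case DriveState::Normal:
        // One sensor lost: slow down and lean on whatever still works.
        if (!radarOk || !sonarOk) return DriveState::Degraded;
        return DriveState::Normal;
    case DriveState::Degraded:
        if (radarOk && sonarOk) return DriveState::Normal;
        // Everything lost: don't just beep at the human, get off the road.
        if (!radarOk && !sonarOk) return DriveState::PullingOver;
        return DriveState::Degraded;
    case DriveState::PullingOver:
        // Once stopped on the shoulder, park and call for help.
        return DriveState::Stopped;
    case DriveState::Stopped:
        return DriveState::Stopped;
    }
    return state;
}

int main() {
    DriveState s = DriveState::Normal;
    s = handleSensorHealth(s, /*radarOk=*/false, /*sonarOk=*/true);
    std::cout << "Radar jammed, degraded mode engaged: "
              << (s == DriveState::Degraded) << "\n"; // prints 1
    return 0;
}
```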