This Billboard That Confuses Tesla Autopilot Is A Good Reminder Of Why Self-Driving Is Still A Long Way Off

Screenshot: Twitter/Tesla

Last summer, Elon Musk, the shy, reclusive CEO of the Tesla electric-car-making concern, stated that the company was “very close to Level 5 autonomy” (that is, autonomous vehicles capable of complete self-driving) and that all that remained were “many small problems.” I argued that those many small problems are, in fact, a huge deal, and represent the inherent chaos of the world that must be dealt with. A fun example of this is currently blowing up online, as a Tesla owner shows how his car gets confused by a billboard.


The Tesla owner, Andy Weedman, tweeted out this image of the spot where his Model 3, using the Tesla Autopilot driver-assistance system, “kept slamming on the brakes” in the middle of a road, with no clear reason why:

A bit of looking around reveals what was confusing the car. It’s this billboard alongside the road:

Screenshot: Twitter

It’s one of those stop-for-school-bus reminder billboards, and like many of its kind carries an image of a stop sign. Weedman also recorded a video of the Tesla stopping for the billboard, so you can see it in action:

I should mention that not everything going on here is bad, of course. Autonomous driving is an incredibly difficult problem, and the Tesla is doing some things very well.


The recognition of the stop sign image on that unlighted billboard is impressive, for one thing. What’s less impressive is mistaking it for an actual stop sign, a mistake that, significantly, almost no human would make.

While this is described as an “edge case” by Weedman in the video, the truth is that the world is absolutely crammed with out-of-the-ordinary situations we call edge cases. These edge cases are extremely important to developing full autonomy, partially because they remind us that, fundamentally, computers are absolute morons.


I remember first hearing this back when I was learning to program in BASIC as a kid in the 1980s; computers, no matter how fast they may be or how good at math they are, are colossal idiots and lack any semblance of what we humans think of as common sense.

While computers are many, many orders of magnitude better and faster and more powerful today than the old 1 MHz Apple II that I was coding on, deep down they’re still idiots.


The term “artificial intelligence” is sort of a misnomer, too. What a computer possesses isn’t “intelligence” as we understand it, even in artificial form. It’s a simulacrum of intelligence: a lot of brute-force if-then-else conditionals (that’s oversimplified, but not entirely untrue) that give an effective illusion of intelligence.
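To show just how little “understanding” is involved, here’s a deliberately crude sketch in Python. It’s entirely hypothetical, nothing like Tesla’s actual code, and every name in it is invented, but it captures the flavor of a rule pile that looks smart right up until context matters:

```python
# A crude, hypothetical sketch of rule-based "intelligence": a detector label
# goes in, an action comes out, and at no point does anything resembling
# understanding occur.

def react_to_detection(label: str, confidence: float) -> str:
    """Map a detector's output straight to an action, context-free."""
    if label == "stop_sign" and confidence > 0.9:
        return "brake"          # no concept of "that's a picture on a billboard"
    elif label == "speed_limit_25" and confidence > 0.9:
        return "slow_to_25"
    else:
        return "continue"

# The detector dutifully reports a stop sign; the rule dutifully brakes.
print(react_to_detection("stop_sign", 0.97))  # -> "brake"
```

That’s the whole trick: the system never asks whether braking makes sense here, because “here” isn’t a concept it has.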

That’s why the Tesla had no idea the stop sign was on a billboard, next to what would have been a 20-foot-tall cop. We’ve actually seen this happen before with Teslas, in one case notable enough that his Highness the Burger King even took an interest.

Photo: Jason Torchinsky

I once fooled Mazda’s street sign recognition system with some markers and a misspelled sign, as you can see above there.


Training Teslas to recognize that a stop sign on a billboard isn’t real is likely possible, but then we open up the possibility of the car ignoring a real stop sign positioned so that it appears, to the car’s cameras, to be part of a billboard behind it, which is a not-unlikely scenario. It also doesn’t preclude malicious fake stop signs from being made and put up in places where they wouldn’t fool a human but could easily fool a car.
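To make that tradeoff concrete, here’s a minimal sketch (in Python, with every box and function invented for illustration; this is not how Autopilot actually works) of a naive “ignore stop signs inside billboards” filter and the exact way it backfires:

```python
# Hypothetical patch for the billboard problem: discard any stop-sign detection
# whose bounding box sits inside a detected billboard. The "fix" creates the
# failure described above: a real sign that lines up with a billboard behind it
# gets thrown away too.

def inside(inner, outer):
    """True if box `inner` (x1, y1, x2, y2) lies entirely within box `outer`."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def keep_stop_sign(sign_box, billboard_boxes):
    """Keep a stop-sign detection only if it isn't inside any billboard."""
    return not any(inside(sign_box, bb) for bb in billboard_boxes)

billboards = [(100, 50, 400, 250)]
painted_sign = (150, 80, 200, 130)   # stop sign that's part of the billboard art
real_sign = (160, 90, 210, 140)      # real sign that happens to overlap the billboard

print(keep_stop_sign(painted_sign, billboards))  # False: billboard art correctly ignored
print(keep_stop_sign(real_sign, billboards))     # False: real sign wrongly ignored too
```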

These situations may also be a good reminder that maybe it’s ridiculous for us to expect AVs to do everything independently. If we’re serious about wanting AVs as a culture, maybe it’s time to integrate some kind of QR visual authorization code onto traffic signs to eliminate counterfeit or confusing ones?
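The cryptographic machinery for something like that already exists, for what it’s worth. Here’s a back-of-the-napkin sketch using Ed25519 signatures via Python’s cryptography library; the payload format, the idea of a DOT-held signing key, and every name in it are pure speculation on my part, not any real standard:

```python
# Speculative sketch: a traffic sign's QR code carries a payload plus a
# signature from the highway authority; the car verifies before trusting.

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

# The DOT would hold the private key; cars would ship with the public key.
dot_private_key = Ed25519PrivateKey.generate()
dot_public_key = dot_private_key.public_key()

# Hypothetical payload a sign's QR code might carry: sign type plus location.
payload = b"STOP|lat=35.9132|lon=-79.0558"
signature = dot_private_key.sign(payload)

def sign_is_authentic(payload: bytes, signature: bytes,
                      public_key: Ed25519PublicKey) -> bool:
    """Accept the sign only if the signature checks out."""
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

print(sign_is_authentic(payload, signature, dot_public_key))                  # True
print(sign_is_authentic(b"STOP|somewhere-else", signature, dot_public_key))   # False
```

A billboard’s painted stop sign carries no valid signature, so the car could safely ignore it; a counterfeit sign couldn’t forge one without the key.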


Current AV tech is impressive, no question, but it’s a huge mistake to prematurely believe these machines are better than humans, because at the moment, they’re not. They’re idiots.

That said, being an idiot is at least one trait the machines share with many of us, so that’s encouraging.

Senior Editor, Jalopnik • Running: 1973 VW Beetle, 2006 Scion xB, 1990 Nissan Pao, 1991 Yugo GV Plus, 2020 Changli EV • Not-so-running: 1977 Dodge Tioga RV (also, buy my book!: https://rb.gy/udnqhh)

DISCUSSION

Shane Morris

As someone who programs AI systems, I do find this funny. We’re getting extremely good with image processing, and it’s actually amazing how far we have come in the past five years. (I admittedly don’t work in image processing, but I know people who do, so I’ll say they’re very smart people and just leave it at that.)

This same thing actually applies in many parts of AI, including the work I do in prediction. Human beings understand context and situations. AI really only knows a limited number of variables, and none of those variables are human experience.

Story time...

I was contracted by a grocery store chain to help improve the prediction engine in their online ordering application. The problem we needed to solve was vegetarian and vegan suggestions: Basically, if we notice you have many vegetarian or vegan items in your cart, it’s safe to assume we should show you related products.

We used image processing licensed from an external application, and it was working flawlessly, with this one exception: our users kept getting bacon as a recommendation. Of course, this is a wasted opportunity, because it’s highly unlikely a vegan person is going to order bacon. The problem was, I couldn’t figure out why this one brand of bacon kept popping up.

I bug tested this damn code for hours. I called up the company that did the image processing and asked about error rates, and how their recognition system worked. They told me they weren’t aware of any issues. But my client wasn’t hearing it. They were basically like, “Shane, we need our vegan and vegetarian users to stop being recommended bacon.” I replied, “I have no goddamn idea why we keep getting this damn bacon popping up. I’m really sorry. I have looked at the code 1,000 times, and I swear to God it’s perfect. There’s not a single problem on my end.”

This goes on for days. It’s getting to the point where my client thinks I’m an idiot, incompetent, and unable to do the job. I’m starting to wonder if I’m actually... failing? My clients are threatening to cancel my contract. So I decide to go to the store and just... shop. I fill up my cart physically with every single thing that I’ve seen in the recommendation engine. Then, I go to the meat section, and get the brand of bacon that keeps being recommended...

Now, if you look really closely, you’ll notice a label on this bacon. It says “vegetarian feed” on it. That’s right: the image-processing AI looked at this bacon, saw the word “vegetarian,” and thought, “This is a vegetarian product.” Why? Because, to the AI, the word “vegetarian” on a product must mean the product is vegetarian. AI doesn’t know what a pig eats. When I was looking at the tiny image thumbnail in the app, all I saw was a package of bacon. I didn’t see the fine print on the package explaining that the pigs were fed a vegetarian diet.
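To make it concrete, here’s a toy reconstruction of the failure in Python (hypothetical code, not our actual system): a keyword tagger that reads the text lifted off the package image and flags anything containing “vegetarian”:

```python
# Toy reconstruction of the bug: a tagger that flags products as vegetarian
# whenever the packaging text contains the word "vegetarian." It has no idea
# the word can describe what the pig ate rather than what you're eating.

def tag_vegetarian(ocr_text: str) -> bool:
    """Naive keyword check over text lifted from the package image."""
    return "vegetarian" in ocr_text.lower()

products = {
    "Tofu, extra firm":     "Organic tofu. Vegetarian. Non-GMO.",
    "Hickory Smoked Bacon": "Raised on an all-vegetarian feed diet.",
}

for name, label_text in products.items():
    print(name, "->", "vegetarian" if tag_vegetarian(label_text) else "not vegetarian")
# Both print "vegetarian" -- and the vegan shopper gets bacon in their suggestions.
```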

This is why AI is good, but still a long way from perfect, and when you remove the human experience from something, you tend to fail in ways that are... freakin’ hilarious and weird.