Last summer, Elon Musk, the shy, reclusive CEO of the Tesla electric-car concern, stated that the company was “very close to Level 5 autonomy” — that is, autonomous vehicles capable of complete self-driving — and that all that remained were “many small problems.” I argued that those many small problems are, in fact, a huge deal, and represent the inherent chaos of the world that must be dealt with. A fun example of this is currently blowing up online, as a Tesla owner shows how his car gets confused by a billboard.
The Tesla owner, Andy Weedman, tweeted out this image of the spot where his Model 3, using the Tesla Autopilot driver-assistance system, “kept slamming on the brakes” in the middle of a road, with no clear reason why:
A bit of looking around reveals what was confusing the car. It’s this billboard alongside the road:
It’s one of those stop-for-school-bus reminder billboards, and like many of its kind carries an image of a stop sign. Weedman also recorded a video of the Tesla stopping for the billboard, so you can see it in action:
I should mention that not everything going on here is bad, of course. Autonomous driving is an incredibly difficult problem, and the Tesla is doing some things very well.
The recognition of the stop sign image on that unlighted billboard is impressive, for one thing. What’s less impressive is mistaking it for an actual stop sign, a mistake that, significantly, almost no human would make.
While this is described as an “edge case” by Weedman in the video, the truth is that the world is absolutely crammed with out-of-the-ordinary situations we call edge cases. These edge cases are extremely important to developing full autonomy, partially because they remind us that, fundamentally, computers are absolute morons.
I remember first hearing this back when I was learning to program in BASIC as a kid in the 1980s; computers, no matter how fast they may be or how good at math they are, are colossal idiots and lack any semblance of what we humans think of as common sense.
While computers are many, many orders of magnitude better and faster and more powerful today than the old 1 MHz Apple II that I was coding on, deep down they’re still idiots.
The term “artificial intelligence” is sort of a misnomer, too. It’s not “intelligence” as we understand it that a computer possesses, even in artificial form. It’s a simulacrum of intelligence, a lot of brute-force if-then-else conditionals (that’s oversimplified, but not entirely untrue) that give an effective illusion of intelligence.
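To make that oversimplification concrete, here’s a cartoonish sketch of what brute-force, rule-based “recognition” looks like. Everything here is invented for illustration — no real perception system works on a handful of dictionary checks — but it shows the core problem: the rules can match every feature of a stop sign while having zero concept of context.

```python
# A deliberately dumb, purely illustrative rule-based "detector."
# It checks features, but it has no idea what a billboard is, so a
# picture of a stop sign passes every single check.

def looks_like_stop_sign(obj):
    # Each condition is a brute-force feature match, not understanding.
    if obj["shape"] != "octagon":
        return False
    if obj["color"] != "red":
        return False
    if obj["text"] != "STOP":
        return False
    # Nowhere is there a check for "is this part of a billboard?"
    return True

real_sign = {"shape": "octagon", "color": "red", "text": "STOP"}
billboard_image = {"shape": "octagon", "color": "red", "text": "STOP"}

print(looks_like_stop_sign(real_sign))        # True
print(looks_like_stop_sign(billboard_image))  # True — and the car brakes
```

Real systems use trained neural networks rather than literal if-then-else chains, but the failure mode is the same in spirit: features match, context doesn’t exist.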
That’s why the Tesla had no idea the stop sign was on a billboard, next to what would have been a 20-foot-tall cop. We’ve actually seen this happen before with Teslas, in one case notable enough that his Highness the Burger King even took an interest.
I once fooled Mazda’s street sign recognition system with some markers and a misspelled sign, as you can see above.
Training Teslas to recognize when a stop sign is just part of a billboard is likely possible, but then we open up the possibility of the car ignoring a real stop sign positioned so that, to the car’s cameras, it appears to be part of a billboard behind it — a not-unlikely scenario. It also doesn’t preclude malicious fake stop signs from being made and put up in places that wouldn’t fool a human but could easily fool a car.
These situations may also be a good reminder that maybe it’s ridiculous for us to expect AVs to do everything independently. If we’re serious about wanting AVs as a culture, maybe it’s time to add some kind of QR-style visual authorization code to traffic signs, to eliminate counterfeit or confusing ones?
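To sketch how that authorization idea could work: a transportation authority could sign each sign’s payload (say, “STOP” plus a location ID), and the code printed on the sign would carry both the payload and the signature, so a car could reject anything that doesn’t verify. The scheme below is entirely hypothetical — the payload format, key, and names are all invented — and it uses a shared-secret HMAC from Python’s standard library just to keep the sketch short.

```python
import hmac
import hashlib

# Hypothetical signing key held by the (invented) transportation
# authority. A real deployment would not use a shared secret at all.
AUTHORITY_KEY = b"not-a-real-key"

def sign_payload(payload: bytes) -> str:
    """Produce the signature that would be encoded on the sign itself."""
    return hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()

def is_authentic(payload: bytes, signature: str) -> bool:
    """What the car would check before treating the sign as real."""
    return hmac.compare_digest(sign_payload(payload), signature)

payload = b"STOP|intersection-1234"       # invented payload format
qr_signature = sign_payload(payload)      # printed on the legitimate sign

print(is_authentic(payload, qr_signature))             # True
print(is_authentic(b"STOP|billboard", qr_signature))   # False
```

In practice this would want public-key signatures, so the verifying car never holds the signing key — with HMAC, anyone who can verify a sign can also forge one, which defeats the purpose.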
Current AV tech is impressive, no question, but it’s a huge mistake to prematurely believe these machines are better than humans, because at the moment, they’re not. They’re idiots.
Then again, idiocy is at least one trait the machines share with many of us, so that’s encouraging.