George Hotz is such a computer geek that he refers to a modern car as “a computer.” He’s not exactly wrong, but he’s not exactly right, either. Still, he’s right enough that he was able to hack together a fairly viable autonomous car in the space of a couple of months, with off-the-shelf hardware. It’s also proven to be an effective tool for chapping Elon Musk’s ass.
First, let’s just clarify what this car can do, according to the Bloomberg report that broke the news. It’s an Acura ILX, and with Hotz’s modifications, it seems capable of autonomous driving at roughly the same level as Tesla’s autopilot.
That means it works in a highway-driving context, with almost no human input, but not in the more information-dense and chaotic environment of city driving. It’s autonomous, but still in a limited set of circumstances.
That’s not to belittle the achievement, Tesla’s or Hotz’s. What makes what Hotz did so interesting is that he essentially started from scratch, interfacing a Linux computer into the car’s CAN bus via the OBD port, and connecting a full set of sensors and cameras to make it all work.
As you watch this, keep an ear open for the cringe-tacular line where Hotz says:
“I’m never going to say we’re changing the world — I mean, we are, but...”
Come on, buddy. Get a grip.
The way he approached the problem is very clever, and worth noting for anyone starting a big project: he divided it into parts. First, he needed access to all the car’s controls electronically, bypassing the human interfaces normally used to talk to the by-wire throttle, brake, steering, etc.
To do this, he interfaced a high-end PC joystick to the car, and had joystick inputs actuate the throttle, steering, brake, etc. So, now the car’s essential controls were handled via electronic inputs, not physical.
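At the lowest level, “electronic inputs” just means sending the right bytes on the car’s CAN bus. As a rough sketch of what a command frame looks like (the byte layout, scaling, and checksum here are entirely invented; the real ones have to be reverse-engineered for each specific car, which is a big part of the work):

```python
import struct

def encode_steer_frame(torque: int, counter: int) -> bytes:
    """Pack a hypothetical steering-torque command into an 8-byte CAN payload.

    The layout (signed 16-bit torque, 4-bit rolling counter, additive
    checksum) is illustrative only, not any real car's actual message.
    """
    assert -1024 <= torque <= 1023
    payload = struct.pack(">hB4x", torque, counter & 0x0F)  # 7 bytes
    checksum = sum(payload) & 0xFF  # simple additive checksum, illustrative
    return payload + bytes([checksum])

frame = encode_steer_frame(torque=200, counter=3)
print(frame.hex())  # 8 bytes, ready to put on the bus
```

In a real setup, frames like this would be sent on the bus dozens of times per second (via something like SocketCAN on Linux), with the counter and checksum there so the car’s ECUs can reject stale or corrupted commands.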
Next, he worked on the sensory and logic side of things, using a set of cameras for visual image processing, obstacle tracking, road curvature detection, and so on. He also used LIDAR sensors to accurately get data on distance and provide a better ‘picture’ of the car’s environment.
Once he had these parts, it was just a matter of sending input from the computer instead of the joystick to control the car.
Instead of defining a bunch of rules of driving, Hotz opted to have the car “watch” him drive and learn from that. Incredibly, after only 10 hours of training, the car was able to drive on its own, pretty effectively, as seen in that Bloomberg video there.
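That “watch me drive” approach is supervised learning: record what the sensors see alongside what the human driver actually did, then fit a model mapping one to the other. Here’s a toy sketch of the idea, with made-up features and a linear model standing in for what would really be a neural network chewing on camera images:

```python
import numpy as np

# Toy "learn from the driver" setup: pretend each frame gives us two
# road features (lane offset, curvature) and the steering angle the
# human chose. All numbers here are fabricated for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))            # [lane_offset, curvature] per frame
true_w = np.array([-0.8, 1.5])            # the human's (unknown) steering policy
y = X @ true_w + rng.normal(scale=0.01, size=1000)  # recorded steering angles

# Least squares recovers the policy from the driving log alone
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))
```

The real system faces a vastly harder version of this problem (raw pixels in, lots of edge cases), but the shape of it is the same: no hand-written driving rules, just a model fit to logged human behavior.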
What’s really interesting here is less what he’s done (it’s been done before) than how he did it: with off-the-shelf, common hardware (well, maybe not the LIDAR), his own software, and it all seems to work reasonably well. It’s very impressive, and very encouraging for those of us who’d like cars to be more than unknowable black boxes.
Most cars with any sort of autonomy use systems and hardware developed by a company called MobilEye. Tesla uses its own home-grown system (though future plans will incorporate MobilEye components), but Tesla still felt annoyed enough by what Hotz did to issue a “correction,” which is a particularly irritating thing to call a response to an article they didn’t write. But, you know, whatever.
Also, it should be made clear that Hotz calls out Tesla a number of times in the interview, and clearly wants them to respond. The company did. So, mission accomplished, chief.
In its not-at-all-pissed-off response, Tesla said:
The article by Ashlee Vance did not correctly represent Tesla or MobilEye. We think it is extremely unlikely that a single person or even a small company that lacks extensive engineering validation capability will be able to produce an autonomous driving system that can be deployed to production vehicles. It may work as a limited demo on a known stretch of road — Tesla had such a system two years ago — but then requires enormous resources to debug over millions of miles of widely differing roads.
This is the true problem of autonomy: getting a machine learning system to be 99% correct is relatively easy, but getting it to be 99.9999% correct, which is where it ultimately needs to be, is vastly more difficult. One can see this with the annual machine vision competitions, where the computer will properly identify something as a dog more than 99% of the time, but might occasionally call it a potted plant. Making such mistakes at 70 mph would be highly problematic.
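The scale of that gap is easy to put numbers on. If you assume the system makes one perception decision per second (an illustrative rate, not a real spec), the difference between those two accuracy figures is the difference between dozens of mistakes per hour and roughly one every couple of weeks:

```python
# Back-of-the-envelope on the "last few nines" problem.
# One decision per second is an assumed rate for illustration.
decisions_per_hour = 3600
errors_per_hour = {}
for accuracy in (0.99, 0.999999):
    errors_per_hour[accuracy] = decisions_per_hour * (1 - accuracy)
    print(f"{accuracy:.6f} accurate -> ~{errors_per_hour[accuracy]:.4f} mistakes/hour")
```

That’s roughly 36 mistakes an hour at 99%, versus about one mistake every 278 hours at 99.9999%. The hard part is that each extra nine costs more than the last one.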
Tesla’s not wrong here, as such. Hotz’s system is clearly in the early stages, and has not benefited from all the hours of testing and training that Tesla’s system has. But I think it’s a mistake to discount it out of hand, like this letter seems to.
If Hotz’s software were released in an open-source context, along with his hardware specifications, who’s to say that independent experimenters wouldn’t be able to rack up thousands and thousands of hours of training for the system, with everyone making (hopefully vetted) tweaks and improvements?
I think an open-source, Linux-like autonomous car system that could be retrofitted to existing cars is absolutely possible. The question, though, is whether it should happen.
When your Linux PC crashes because of some wonky update, maybe you lose some files. When your autonomous car crashes because of some wonky update, maybe you lose some limbs. So, Elon’s point about needing 99.9999 percent accuracy (and hence safety) is well-taken.
I still don’t think that means we have to roll over and leave this important technology to the established companies. We need some sort of rigorous testing and certification, sure, but that should be something that any player, corporate or community-driven, can achieve.
Contact the author at jason@jalopnik.com.