There’s an excellent story in The New York Times right now, a look inside Tesla’s development program for the semi-automated driver-assist systems the company confusingly (perhaps deceptively) calls Autopilot and Full Self-Driving. The article is full of interesting insights from sources inside the company, but one detail stands out as truly incredible: during the filming of an Autopilot demonstration video, one that flat-out says the car was “driving itself,” that car hit a barrier all by itself and had to be repaired. That part wasn’t included in the published video, as I’m sure you’re shocked to hear.
This detail really sticks with me because the video has been on Tesla’s site since 2016, released to coincide with Tesla’s announcement that its cars have all the hardware needed for full self-driving. That claim proved untrue even by Tesla’s own actions: the company later had to upgrade the FSD computers in its cars to run newer versions of its FSD software.
This video is also notable because it begins with this significant bit of text:
So, the takeaway from this video is clearly intended to be hey, look, the car’s driving itself! We only had to put a dude in the seat because Johnny Law made us!
That’s why this part from the NYT story is so incredible:
As Tesla approached the introduction of Autopilot 2.0, most of the Autopilot team dropped their normal duties to work on a video meant to show just how autonomous the system could be. But the final video did not provide a full picture of how the car operated during the filming.
The route taken by the car had been charted ahead of time by software that created a three-dimensional digital map, a feature unavailable to drivers using the commercial version of Autopilot, according to two former members of the Autopilot team. At one point during the filming of the video, the car hit a roadside barrier on Tesla property while using Autopilot and had to be repaired, three people who worked on the video said.
The video was later used to promote Autopilot’s capabilities, and it is still on Tesla’s website.
Okay, so not only was the route pre-planned and pre-mapped, but the car hit a roadside barrier while under Autopilot control? Both of these seem like pretty huge deals that should have been mentioned in that video, and the fact that the car actually hit something would be hilarious if it weren’t so damn irresponsible.
Well, I guess it can be both.
Maybe they were planning on including another text card like this one, but they had to cut it for time reasons:
Should all of this end up being absolutely confirmed, how is this any different from other techno-sham videos like the infamous Nikola EV truck promo, in which the truck was found to be powered more by gravity than electricity once it came out that it had simply been rolled down a hill for the camera?
It’s really not different at all. If anything, it may be worse, because this video was used to deceptively suggest capabilities of a system deployed into real people’s hands and used on public roads.
Of course, I’m not the only one to make this association:
Despite the video’s age, Tesla and its supporters leaned heavily on it for years as evidence that Teslas were essentially fully self-driving, with only some details left to sort out, and how hard could that be?
Elon’s attitude toward the path to actual full self-driving was noted in the story, which quoted a statement Musk gave to Fortune:
By the end of 2015, Mr. Musk was publicly saying that Teslas would drive themselves within about two years. “I think we have all the pieces, and it’s just about refining those pieces, putting them in place, and making sure they work across a huge number of environments — and then we’re done.”
Just think about that statement, and marvel at it for a moment. He’s basically saying they have a car with sensors and cameras and a computer and some software, and all they have to do is, you know, put them together and make sure they work all over the place! Easy!
It’s the same way that I basically have a Porsche 356 in my driveway because I have a Beetle with its carbs taken off and a bunch of metal—I just need to put them together and make sure they work!
In both of those examples the “making sure they work” phrases are doing about as much heavy lifting as Atlas holding the Earth and his girlfriend’s purse.
The article also documents Elon Musk’s insistence on camera-only vision systems for automated driving, shunning the radar systems Tesla once used and the more advanced lidar systems many other companies in the automated-driving space employ.
Generally, the industry consensus is that more and varied sensors are better, because each sensing method’s strengths can cover and compensate for the others’ weaknesses.
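To make that “compensate for one another” idea concrete, here’s a deliberately toy sketch (not Tesla’s or anyone’s actual stack, and every number here is made up) of the simplest possible sensor fusion: combining two noisy distance estimates of the same obstacle by weighting each sensor inversely to its uncertainty, so the more confident sensor dominates.

```python
# Toy sensor-fusion illustration: inverse-variance weighting of two
# noisy estimates of the distance to the same obstacle. When one
# sensor degrades (say, a camera in fog), its variance rises and the
# fused result leans on the healthier sensor. Purely hypothetical
# numbers; real automated-driving stacks are vastly more complex.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Combine two estimates, weighting each by 1 / variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Hypothetical foggy scene: the camera's distance guess is shaky
# (high variance), radar is unaffected, so radar dominates.
camera_est, camera_var = 48.0, 25.0   # meters; camera struggles in fog
radar_est, radar_var = 50.0, 1.0      # meters; radar sees through fog

fused = fuse(camera_est, camera_var, radar_est, radar_var)
print(round(fused, 2))  # prints 49.92 — pulled strongly toward radar
```

A camera-only system has no second estimate to fall back on when the camera’s conditions are bad, which is exactly the redundancy argument the rest of the industry is making.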
Elon likes to remind people that his camera system should work just fine because, after all, clammy, soft human beings drive cars with only two little cameras in their heads! Teslas have eight, like the far superior spider.
To this, Schuyler Cullen, a robotics and artificial-vision expert, responded very satisfyingly in an interview also quoted in the NYT story:
“Cameras are not eyes! Pixels are not retinal ganglia! The F.S.D. computer is nothing like the visual cortex!”
And, of course, he’s absolutely right. It’s tempting to analogize computers to brains and cameras to eyes, but the truth is those two systems work in completely different ways. Mammalian eyes have been in, um, development for millions of years of evolution and natural selection, and we still don’t entirely understand how vision is processed in the brain.
Eyes and cameras are analogous on a surface, kids’-book level, but that’s it. Our biological brains do not operate like computers, and sending an image from an eye via the optic nerve to the occipital lobe of the brain has nothing to do with camera data traveling over a system bus to the AMD Ryzen CPU in the Tesla FSD computer. It’s just not how any of this works.