I had an interesting experience recently when I was on the podcast of my friend Sean Carroll, a very respected physicist who normally has guests far, far smarter than me. What I found especially interesting was moving outside of my usual audience bubble of fetishy gearheads like myself, and into the world of the sorts of people who listen to hard science podcasts. As you can imagine, many of my views on autonomy got a lot of blowback, but I want to be clear about something here: I’m not alone.
Most of the harshest YouTube comments about the podcast revolved around my criticisms of Level 2 automation (I suspect because, as a podcast, no one could see my hair), that is, the semi-autonomy that is the highest level of autonomous driving commercially available, and the level where systems like Tesla's Autopilot currently sit.
In my book, I have a whole chapter just called "Semiautonomy is Stupid," which should give you an idea about how I feel about these Level 2 systems like Tesla Autopilot. While I believe the technology is impressive and improving every day, conceptually there's a problem: humans are simply not good at passively monitoring an automated system that does nearly all of the work while remaining ready to leap into action at a moment's notice. It's just not how humans work.
Of course, saying this prompted comments like these:
I appreciate that the guy linked to some safety reports, but they are Tesla’s safety reports, so take that for what it’s worth. And I don’t think cruise control is a good example here because it simply doesn’t hit that threshold of doing most of the driving work.
Really, normal cruise control is a good example of a very collaborative driver’s assist system, one where there are still constant input demands made on the human driver, but the technology helps mitigate them in a well-considered way.
Of course, I also was accused of being in cahoots with Tesla “shorts” because you know how I roll:
So many angry Tesla stans:
Then, in case I forgot that this time I was not reaching out to my usual audience of Jalops, we get Captain Joyless here and his proclamation that “having fun should only happen on a racetrack, never on public roads”:
And, of course, commenters reminded people that I'm not a Serious Man with more degrees than a thermometer, which I can't say I disagree with.
Now, with that in mind, I'd like to make clear that, while yes, I myself am a barely-educated, frequently-drooling simpleton, there are actually genuine experts in the field of automation and human interaction with automation who seem to agree with me.
I was turned on to this fascinating conversation on the very subject by Ed Niedermeyer, who sent me a link to a recent panel from Partners for Automated Vehicle Education that featured three panelists, all of whom have been specializing in just this sort of research for decades.
In it, they cover the same fundamental issues I've written about before, like the Takeover Problem and the related errors that happen when you demand that a human take over from an automated system, as well as the separate task of "vigilance" as opposed to actually doing something, which Dr. Michael Nees of Lafayette College explains very well:
“We think of automation as a machine doing a task that a human used to do... you might think that means a human does nothing. But in fact there’s abundant literature that shows the human is not incurring no workload, the human is now doing a different task and that task tends to be monitoring, a vigilance task, looking for rare events...that is a task that humans are not well-equipped to do.”
Also extremely powerful is what Dr. Missy Cummings of Duke University has to say about Level 2 systems:
“I think we’re looking at this all wrong. I think we need to quit talking about the levels of automation and just say if you are expecting the human to take over control—regardless of what the level of automation is—regardless of where they fall on the SAE level—if the human is expected to intervene under any circumstances, then the human should be able to...go in less than a second.
The takeaway is torque monitoring on the steering wheel is terrible and should be outlawed; that should not happen at all...we should never use that as a proxy for driver monitoring."
I suggest watching the whole thing, as it’s fascinating to hear what these genuine experts have to say about systems like Tesla’s Autopilot:
Dr. Cummings even says that if you had asked anyone in the field 20 years ago whether systems like the ones we see now would be a good idea, no one would have agreed.
It’s powerful stuff, and I hope that all of those people clamoring for more data and voices from people with real experience and degrees will take a moment to hear these people—and many others—out.
I’m not against autonomy at all! I think it’s possible and will come, but if we put too much unwarranted faith in the current semi-autonomous systems and don’t think through the human behavioral side of the equation enough, it’s just going to screw things up for everybody, to the point of people actually getting killed because of misplaced ardor for a tech company.