We live, my friends, in a world of anarchy. Chaos reigns over our streets. We live in a society that tolerates half-demented, half-driving barely-robot cars. Teslas that can be fooled with an orange. Cadillacs that stare at you with the eye of Sauron. Volvos and Kias that require you to hold your car’s hand all the god damn time. It’s absurd. It’s inane. It’s time for someone to step in and whip the entire automotive industry into shape. The semi-autonomous revolution desperately needs a government intervention.
Specifically, it needs regulation. It needs a standard, so that when someone gets in a semi-autonomous car, they know how to use it, how not to abuse it, and fundamentally, that the silly thing will work properly.
There are so many driver assist systems, and so many words for driver assist systems. Advanced Driver Assist, semi-autonomous vehicles, Level 2 AVs, and more. They each have their own brand names, like ProPilot or SuperCruise or Autopilot or Pilot Assist and so many more. They each have their own capabilities, and their own rules about what you can and cannot do while the car is assisting you. It’s confusing.
Despite the similar brand names, each manufacturer’s system can vary wildly in both standards and quality. We all know about the widely publicized trials and travails of Tesla’s Autopilot, which in theory requires drivers to maintain regular contact with the steering wheel but in practice quite obviously does not, mostly because it doesn’t seem to care.
On the other end of the spectrum are systems like Cadillac’s SuperCruise, which stares at you with a camera above the dash to make sure you’re watching the road all the time. In my informal tests, I could look away for about three to five seconds before the system would start barking at me to return my gaze. All of this buys you the luxury of sitting there, statuesque, making sure the car doesn’t crash into anything.
In between those extremes are systems like Nissan’s ProPilot, which does at least attempt to check whether you keep your hands on the wheel, but does a very bad job of it. Sometimes, lightly resting your hand on the top of the wheel will be enough to satisfy the car. Other times, tightly gripping the wheel at 9 and 3 will not be enough, and ProPilot will disengage no matter how many different ways you try to hold onto the wheel.
This is partly a product of how marketing works—every company wants to be distinctive, nobody wants to admit they’re just doing what everyone else is—and also a failure of regulators to adopt some kind of standard way to talk about what driver assist systems do.
Fundamentally, we need some standard way to talk about what computer-controlled cars can and can’t do in a way that people can easily understand. And as these systems become more advanced, we will need a way to prove these cars are capable of driving themselves for extended periods. Both these problems can be solved by giving robot cars driving tests.
As it is now, companies get permission to test their cars on roads with human safety drivers, all the cars’ behavior is logged, they drive around for years on a provisional basis, and they prove through experience that the safety drivers almost never have to intervene. At least, this is how we assume it will work; the regulations are largely still being written.
But these provisional tests should merely be the first phase, the bare minimum to advance to the next test. Humans, for example, typically get learner’s permits, gain experience driving like an idiot for a while, then take a driving test before resuming driving like an idiot for the rest of their lives. Robot cars should have to do the same.
Like the ones humans take, the robot driving test should be on public roads, subject to the randomness and varieties of daily life. Also like the ones humans take, it should be overseen by the grumpiest human that can be located by authorities, subject to the randomness and varieties of a human’s mood, such as whether their burrito re-heated in the microwave well or was still a little cold inside. And, finally, in keeping with the ones humans take, it should be pass-or-fail, all-or-nothing, no matter how unfair the circumstances of that day’s parallel parking setup may be.
Of course, there ought to be standard tasks the robot must execute to prove its superiority to human drivers. I humbly offer the following suggestions:
- Driving down two miles of a standard crap American highway with faded lane markings, ample potholes, and at least two (2) bends in the road while being cut off by at least one (1) middle-aged man in a gray CR-V doing double the speed limit
- Not running over and deflating a basketball and/or soccer ball that rolls into the road
- Maintaining enough space from the car ahead in a traffic jam to avoid risking a fender bender, but not so much space that the cars trying to shortcut up the entrance-ramp lane that clearly ends in a few hundred feet can rampantly cut you off, making you a sucker
- Being able to tell the difference between an inflatable deer thrown in front of the car (allowed to hit/destroy) and a real-life deer that wanders in front of the car (not allowed to hit/destroy unless avoiding it risks bodily harm to others)
- A three-point turn
- Detecting a cyclist crossing the road at a location other than a marked crosswalk or path and determining not to hit that cyclist
- The ability to navigate a standard fast food drive-thru scenario while making the appropriate stops at the speaker and window
- The Sandwich Test: the test driver should be able to eat an entire sandwich of his/her choice, with both hands, as the car drives itself without any intervention whatsoever. The situation/circumstance will be of the tester’s choosing to ensure environmental variety and prevent automakers from designing to the test.
These will of course vary state by state, but hopefully this will serve as a potential starting point for regulators to consider before more people get hurt.