Autonomous cars are coming. In some ways, they’re already here. And as anyone who has watched a video game avatar run at full speed right smack into a wall can tell you, computer systems are not infallible. Right now, when there’s an issue in a near-autonomous car, the car cries to you for help. This is a bad solution, and I think there’s a better way: human drivers on call to take over via remote control.
Here’s the problem: even a robust, well-tested autonomous driving system can fail. It could be really terrible weather, mud or snow covering some sensor windows, or just a bug in some code that managed to evade detection in testing.
On current autonomous systems (which mostly just means Tesla at the moment), when the car determines it’s no longer capable of driving itself, it warns the occupants to take over control and drive manually.
Of course, the idea that you can rely on the people inside a self-driving car to leap into action and take over the controls is at best hilariously generous to human nature, and at worst downright dangerous.
We’ve already seen people apparently sleeping at the wheel of Autopilot-equipped Teslas. As autonomous cars become more common, I’m pretty sure people will be reading or texting or masturbating or eating or sleeping or whatever in their cars as they’re driven around. I don’t think they’ll necessarily be ready to take over at a moment’s notice.
So, if we know that we need some sort of backup if a car’s autonomous systems fail, and we don’t want to rely on the person in the car who fell asleep while watching a movie and eating a hoagie, what should we do?
Here’s what we do: we get a real person to drive, just not one anywhere near the car.
Autonomous driving systems should fail over (yes, that’s a real term -ed.) to being remotely-driven first, and then, if there’s some communication or connectivity issue, only then ask the people in the car to get their hands where we can see them and take over driving.
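To make that failover order concrete, here’s a minimal sketch in Python. All the names here are made up for illustration, not any automaker’s actual API: stay autonomous while possible, fail over to a remote driver, and only hand the wheel to the occupants when the data link is also down.

```python
from enum import Enum, auto

class Controller(Enum):
    AUTONOMOUS = auto()
    REMOTE = auto()
    OCCUPANT = auto()

def choose_controller(autonomy_ok: bool, link_ok: bool) -> Controller:
    """Failover priority: autonomous system first, then a remote
    driver, and the people in the car only as a last resort when
    the communications link is down too."""
    if autonomy_ok:
        return Controller.AUTONOMOUS
    if link_ok:
        return Controller.REMOTE
    return Controller.OCCUPANT
```

The point of the ordering is that the occupants become the fallback of last resort rather than the first line of defense.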
It’ll work because we’re already basically doing it to kill people
There’s no reason that remote-driving shouldn’t be possible; we already remote-drive armed drones in the Middle East from the middle of America’s fried-food heartland on a routine basis. Car companies have been experimenting with the idea of remotely-driven cars for years, and it’s been proven possible.
Even with sensors obscured by weather or debris, a human driver, working from the lane-departure camera or some other similar windshield-peering camera, would likely be able to drive. A human’s greater driving experience, plus the nature of the brain and visual system, means that conditions that might confuse a computer would be no big deal for a person.
We’ve all driven in bad visibility conditions in lousy weather. It’s no picnic, but humans can certainly do it. Overall, a computer-based autonomous system will definitely have better reflexes and drive more safely and rationally than a human could, but there are still situations where a pair of moist human eyes—even if they’re thousands of miles away—can’t be beat.
A company like Tesla would have a facility with some number of specially-trained remote drivers in front of what would essentially be fancy driving sim rigs. When an autonomous vehicle determines it cannot continue, it would contact the remote driving facility, and the car’s controls would be patched to a remote driver’s terminal.
The remote driver would announce to the occupants of the car what’s going on, and would make sure that the occupants confirmed verbally that someone is awake and available to take control if there’s a communications issue.
The remote driver would pilot the car to its GPS-set destination, or until the autonomous system determined it was able to resume control. The remote driver’s actions would be logged and incorporated into the autonomous system’s machine-learning systems, in the hopes of making the autonomous system even better.
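The whole handoff sequence described above can be sketched as a short simulation. Everything here is illustrative, including the assumption (not stated above) that a car with no awake occupant and no guaranteed fallback would pull over rather than continue:

```python
from dataclasses import dataclass, field

@dataclass
class HandoffLog:
    """Records each step of a remote-driving session, the same log
    that would feed the autonomous system's training pipeline."""
    events: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.events.append(event)

def run_handoff(occupant_confirms: bool, autonomy_recovers: bool,
                log: HandoffLog) -> str:
    """Walks the handoff sequence: car calls home, controls are
    patched to a remote terminal, occupants are asked to confirm
    someone is awake, and the session ends either when autonomy
    resumes or the destination is reached."""
    log.record("car: autonomy fault, contacting remote facility")
    log.record("facility: controls patched to remote driver terminal")
    log.record("driver: announced takeover to occupants")
    if not occupant_confirms:
        # Assumption for this sketch: with no awake occupant to cover
        # a dropped link, the safest move is to stop the car.
        log.record("driver: no occupant confirmation, pulling over")
        return "pulled_over"
    if autonomy_recovers:
        log.record("car: autonomous system resumed control")
    else:
        log.record("driver: piloted car to GPS-set destination")
    log.record("facility: session logged for machine learning")
    return "completed"
```

Logging every session is what lets the remote drivers’ corrections flow back into the autonomous system, as described above.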
It’ll make jobs!
I would think a staff of, oh, 50 remote drivers available on call 24/7 would likely be enough, given how (ideally) rare the need to take a car over would be. Maybe more, depending on the automaker and the size of the autonomous fleet.
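Whether 50 drivers is enough is really a call-center staffing question, and you can sanity-check it with the classic Erlang B formula. The load numbers below are entirely made up for illustration: say the fleet generates 100 takeover requests per hour and each session averages 15 minutes, for an offered load of 25 Erlangs.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang B blocking probability via the standard recurrence:
    B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical load: 100 requests/hour * 0.25 hours each = 25 Erlangs.
offered_load = 100 * 0.25
# Probability that all 50 remote drivers are busy when a car calls in.
blocking = erlang_b(50, offered_load)
```

Under those assumed numbers, 50 drivers against 25 Erlangs of load leaves the chance of every driver being busy vanishingly small, which is the kind of margin you’d want when the “caller” is a car that can’t drive itself.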
For security reasons, I’d imagine that each automaker would have a proprietary system with its own encryption; there’d probably be a lot of controversy over letting police or other agencies take control, and that question is probably an article of its own.
I can also imagine a strange situation where the idea of a remote driver could be used as a sort of alternative to autonomous vehicles. They wouldn’t necessarily be any better than an autonomous driver in most common circumstances, but a premium carmaker that focused on performance cars could advertise a crack team of professional drivers, maybe current or ex-racing drivers, to give their customers a “more engaging” driving experience.
“More engaging” could also be thinly-veiled code meaning that unlike autonomous cars, a remotely-piloted car may have a driver willing to speed, race, or possibly do other unsavory, exciting, or illegal things. Remotely-driven cars may get a sort of ‘dangerous’ image.
Not, like, Pinto-dangerous, but more like maybe-help-you-get-laid-dangerous. Fine line.
That’s sort of an unlikely, niche scenario, though. I think there is real value to having a remote human driver able to take over when an autonomous system fails, and I’m certain it would be way more reliable than asking the people inside the car to drive, something (I’m looking ahead a bit) they may not have done for years, if at all.
Plus, it gives those of us who still love to drive a potential new job opportunity, and, if you believe this, it’ll help you justify to your partners/parents/pets all that time you spend playing Forza or SimRacing or whatever.