As far as I can tell, there’s no definitive data on how often someone is pulled over by a fake cop. But it’s not unheard of for some moron to put flashing lights on their car and pull someone over! So when Google revealed last month that it had tested how its self-driving cars respond to emergency vehicles, I had a thought, however seldom this scenario may occur: what happens if a robotcar is pulled over by someone impersonating a cop?
Fourth, if the “officer” starts acting oddly when you ask to see his ID (threatening you, behaving unprofessionally, pounding on your door, etc.), seriously consider putting the car in gear and getting out of there.
Fine idea! But here’s the gist of my OK-But-What-If-A-Zombie-Horde-Blocks-The-Car hypothetical: how is a robotcar, with no pedals, no steering wheel, and no accelerator, going to identify that the cop is fake?
The general idea from Google’s tests in July was that self-driving cars could rely on a suite of sensors to “see emergency vehicles and their flashing lights even further and clearer with our custom vision system, radars, and LiDARs.”
This makes sense, of course. The cars should be trained to be reactive, and to yield or pull over as needed. But if a fully autonomous car gets pulled over by a fake cop, and it becomes abundantly clear they’re a phony and that any human driver would speed off, what’s the car going to do? It’s trained to pull over for a police cruiser with flashing lights. Is there an emergency ignition button?
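To make the gap concrete, here’s a minimal sketch, in Python, of the kind of yield-to-flashing-lights behavior Google describes, plus the “emergency ignition button” override I’m wishing for. Everything here is my invention for illustration (the class names, the `Detection` fields, the override method); nothing in it reflects how Google’s cars actually work.

```python
from dataclasses import dataclass
from enum import Enum, auto

class CarState(Enum):
    DRIVING = auto()
    PULLED_OVER = auto()
    FLEEING = auto()

@dataclass
class Detection:
    flashing_lights: bool  # e.g., from a vision system
    siren: bool            # e.g., from microphones

class RobotCar:
    """Hypothetical yield logic with a passenger override button."""

    def __init__(self):
        self.state = CarState.DRIVING

    def on_detection(self, d: Detection) -> CarState:
        # The trained behavior: lights and siren behind us -> pull over.
        if self.state is CarState.DRIVING and d.flashing_lights and d.siren:
            self.state = CarState.PULLED_OVER
        return self.state

    def passenger_override(self) -> CarState:
        # The missing piece: a human, not a sensor, decides the
        # "officer" is a phony and tells the car to get moving again.
        if self.state is CarState.PULLED_OVER:
            self.state = CarState.FLEEING
        return self.state
```

The point of the sketch is that the sensors alone never reach `FLEEING`; only the human override does, which is exactly why I can’t get past the button idea.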
There’s the more obvious concern about people “bullying” self-driving cars, say, in a busy downtown core. A colleague, whom I won’t name on account of his lack of imagination, called my fake-cop idea absurd because of how infrequently this occurs. Maybe you agree! I don’t, and I can’t come up with a better idea than an emergency ignition button. Ford has an idea to use removable pedals and steering wheels. Maybe fully autonomous cars can keep those in an easy-to-reach storage spot? I don’t know.
Help me solve this conundrum.