On Monday, there was a sad incident at a VW plant in Germany where an industrial robot that was being set up picked up a man and crushed him against a metal plate, killing him. This is essentially like any industrial accident — tragic, of course, but these factory deaths do occasionally happen. So why are there so many jokes about this in my social media feeds?
The difference, of course, is that the machine that did the killing was a robot, and not just some other piece of automated industrial equipment, like a body panel press or a welding jig. When we see the word ‘robot’ in the context of a killing, it’s almost impossible for most of us not to make some sort of joke about the coming revolution, or about how we welcome our new robot masters and are prepared to provide 20W-50 oil massages or whatever.
This story was sent to us in tips a number of times, and it popped up on Facebook and Twitter as well. Almost every time, the image conjured up was not that of a horrible industrial accident, but rather that of C-3PO covered in gang tattoos, shiving a guy in the gut. It wasn’t even limited to individuals — this was the illustration The Times of India used on its story until they changed it to a VW logo:
At best, that’s irreverent; at worst, it’s making a joke out of this poor guy’s death.
I’m not trying to be a scold here — these were my first reactions, too, and I suspect that if I’d written up the story late at night, I very likely would have made the same sort of jokes, because, well, I can be sort of an asshole when it comes to focusing on the joke instead of the bigger issues.
But I have had time to think about it, and I think there’s a lot we can learn from our reactions here.
First, we do not treat robots the same way we treat other machines — even an industrial robot like the one in this story, which is non-humanoid-looking, not self-aware, and really just like any other piece of equipment that executes a stored program. Sure, it reacts to its environment, but in very specific and scripted ways — this machine is designed to grab and manipulate auto parts, and it had no idea it was grabbing a human being when it killed the worker.
If this story had run under the headline “VW Worker Killed In Industrial Accident,” no one would have put it on a Facebook wall or anything. But as soon as the word “robot” entered the story, we all started ascribing motives and intent and attitudes and emotion to this machine. Even intelligent articles that explore the reality of these sorts of accidents can’t help but buy into the “killer robot” idea.
Even this article in Time describes the robot more like some wild beast that’s barely been trained to build cars than like a precision piece of electronics and hydraulics:
Though the company uses some lightweight robots to work on the production line next to humans, a spokesperson told the Financial Times that this was not one of those robots. The type of robot that crushed the employee is usually kept in a cage. The man was working on the robot inside the cage when he was grabbed.
Look at that paragraph: this was not a lightweight robot that works next to humans — it was a big brute that normally has to be kept in a cage. If you’re not picturing a huge, gleaming steel gorilla right now, then you’re a better person than me.
We can’t help it; culture has trained us to do so, and it’s not necessarily a bad thing. In several car factories I’ve been to, I’ve seen oddly intimate relationships between workers and robots. At BMW’s South Carolina SUV plant, I encountered this:
The tour guide we had spoke about the robots on the factory floor in surprisingly human terms. One, she said, was a Swiss robot, and since it was Swiss, it was very, very precise — as though it were subject to the same stereotypes as Swiss people. She referred to these big industrial arm-like robots as having faces on several occasions, and once described a robot sheathed in a white anti-dust cover as “being dressed in white.” I found it kind of endearing.
All of this makes me wonder how this robot will be treated when it goes back to work — because, of course, it’s not going on trial. Even if it had some sort of autonomy and legal status and responsibility, it’s still a wildly expensive piece of equipment. Will the workers always think of it as the murdering robot? Will it have a reputation? Will workers feel less at ease working with it than one of its identical counterparts on the line?
This whole thing is all in our heads, of course, but that doesn’t make it any less real. We need to really think about how our interactions with robots, good and bad, make us feel and react now, because those interactions are only going to get more common and more complex. When a robot that’s more capable of making decisions kills someone, what will we do then?
When an autonomous car makes the choice to kill one kid to save five adults, how will we react?
I don’t have any solutions here; I just want to call some attention to what we’re doing, because I think it’s significant. Personally, I believe it may well be possible for a sufficiently advanced machine to become sentient. If and when that day comes, I hope we’ll have found some way to deal with these beings better than a mix of nervous jokes that mask a strange uneasiness we’re not really prepared to address.
Contact the author at email@example.com.