From the very beginning, people have been concerned not only about what unmanned systems can do for them, but also about what they can do to them. Indeed, the first time the word “robot” was used was in a 1920 Czech play that depicted a machine rebellion against mankind.
More recently, there has emerged an international campaign to stop “killer drones.” This blog has previously expressed skepticism over the supposed “inherent immorality” of unmanned warfare. However, I think the more important issue may be the lethality of non-military unmanned systems.
For one thing, concerns about safety have significantly delayed the spread of unmanned systems to civilian markets. An expert once told me that safety and legal concerns are the single biggest factor delaying unmanned agricultural systems. They are also a major reason for the slow adoption of autonomous cars.
Many are anxious over the role of robots in industrial accidents. The rate of injuries and deaths of civilians in work settings is nothing to sneeze at. For example, we are used to thinking of wartime casualties as something that happens exclusively on the battlefield, but “The Bureau of Labor Statistics reported that each year between 1942 and 1945 there were some two million disabling or deadly industrial accidents, a total of more than six million.” Indeed, the eHistory website asserts that for every American military casualty, there were eight industrial casualties on the home front.
After a worker’s death in Germany, killer robots and industrial accidents are in the news. In response to this report, journalist Margarita Noriega interviewed a cyberlaw expert. The interview below originally appeared in Vox.
After sharing a story on Twitter about a robot that killed a man in Germany, Ryan Calo, professor of robotics and cyberlaw at the University of Washington School of Law, replied that it is not that unusual for robots to kill people. Naturally, I had a few questions. Here they are with Calo’s answers, including why robots aren’t going anywhere anytime soon.
Margarita Noriega: What just happened in Germany?
Ryan Calo: A man was killed while setting up an industrial robot at a Volkswagen plant. Apparently the robot grabbed the man and crushed him against a metal plate.
Margarita Noriega: You mentioned that this was more commonplace than people might think. Can you explain?
Ryan Calo: About a person a year dies in robot-related accidents in the U.S. alone. You can see this in the statistics compiled by OSHA, the Occupational Safety and Health Administration. Recent incidents include “Maintenance Worker Is Struck And Killed By A Robot” (2013) and “Robot Crushes And Kills Worker Inside Robot Work Cell” (2012). The reason people are reporting on this death, I think, is that robots are very much on the mind. Of course, we should keep this all in perspective—more people are killed by bees or sharks than robots, at least in the United States.
Margarita Noriega: What do we mean by “robot” in these cases, anyway? And how is “likeness to humans” defined?
Ryan Calo: In my work, I define a robot as having three qualities that distinguish it from previous or constituent technologies. A robot must sense its environment, process what it senses, and then be organized to act directly on the world. I don’t define human likeness because I don’t think it’s necessary. I do, however, think that robots that look and act like people raise particularly interesting legal and ethical issues. You can read more here in my article Robotics and the Lessons of Cyberlaw.
Margarita Noriega: How are these cases handled, since the robot has no malicious intent?
Ryan Calo: Most industrial accidents—robot or otherwise—are handled in the U.S. by workers’ compensation. This means that the worker or their family receives a statutorily defined amount of money from a fund, depending on the severity of the injury. Technically the worker could sue the manufacturer of the robot, but would have to show that s/he operated the robot within specifications. Usually some human error is involved, as apparently was the case in Germany. Where it gets tricky is when robots are not designed for any particular purpose and can run third-party apps.
Another hard issue is what to do when the robot displays emergent behavior, that is, behavior no one involved in the programming could anticipate. My paper Robotics and the Lessons of Cyberlaw discusses this. We’re already seeing this with Internet bots that, for instance, threaten someone’s life.
Margarita Noriega: You predict here that robot cases will increase. How does one avoid being killed by a robot, anyway?
Ryan Calo: I think the answer is that you are very unlikely to be killed by a robot, unless you come into military conflict with the United States. If you are, for instance, a factory worker, it is very important to follow established safety protocols around robots and especially to stay out of the danger zone unless you are sure the robot is off. What I predicted in that 2009 blog post was that robotics would become the subject of laws and legal cases. And it is. Think of all the recent cases and statutes involving drones.
Margarita Noriega: What question haven’t I asked about killer robots?
Ryan Calo: One question is what is different about industrial robots. Industrial robots generally stay in one place and do the same thing over and over. They are dangerous because they can exert enormous force and are not usually able to sense the presence of a person. As robots come to do more and more things, as they leave the factory floor and enter our offices, hospitals, and homes, they will be designed to be much safer.