Robot ethics

Recently, the above question was asked on Quora, a social media website. Below is my answer. As you will read, my primary motivation was to prevent people from forming opinions based on superficial analysis and insufficient information.

This is an extremely complex issue, involving long-standing rules of war, international issues, cultural attitudes, and of course the implications of a rapidly developing, disruptive technology. I urge caution in expressing opinions about this topic; something that may seem simple and obvious may actually not be supported by the facts.

For example, some folks maintain that remote control of lethal unmanned systems makes killing emotionally easier, since the human operator doesn't directly experience combat. The victims are faceless and invisible. As I point out in Things I'm sick of hearing about-an opinion piece, studies show that human operators of Unmanned Aerial Vehicles (UAVs, i.e., drones) actually experience a high degree of Post-Traumatic Stress Disorder, because they are more likely to see their victims. They may even "get to know them" over a period of time, due to UAVs' persistent surveillance capabilities.

Remote control is relevant to the question of independent robots, because the use of total autonomy will probably be limited. "Sliding autonomy," with a human operator exercising control at certain times and in specific situations, could very well be more common. How liable will a human operator be if they fail to intervene in the autonomous actions of an unmanned system?

Everyone is entitled to their opinion, but let's be honest: not all opinions are equal. I may have an opinion about the use of supernova observations to determine the rate of the expansion of the universe, but on this topic the view of an astronomer who has studied it all his life is more valuable.

Similarly, ethical standards for autonomous military robots will not be determined in debate by ordinary people, but hammered out in tedious sessions by academics, lawyers, and military personnel. To get an idea of just how difficult this issue is, check out Cal Poly's Autonomous Military Robotics: Risk, Ethics, and Design.

I am not suggesting that we completely abandon this issue to the experts. In a democracy, we should determine when, where, and how lethal force is used in our name. However, we all have a tendency to listen to the loudest and most strident voices, even if what they are shouting is superficial and poorly thought out. I am simply saying that we should exercise prudence before embracing a point of view that may be ill-informed. Also, we should respect those bureaucrats who are performing unexciting, hard work on this issue.

BTW, ethical issues for non-military unmanned systems may actually be more complex. As discussed in Unmanned Farmer, liability is a major stumbling block to the adoption of agricultural robotic systems. If an autonomous tractor runs over someone, who is liable? The farmer? The manufacturer? The software company? The system integrator? No one wants to be the first to find out.