Last July, Dallas police killed a suspected gunman with an Unmanned Ground Vehicle (UGV). By almost any measure, this was a “good kill.” The gunman had murdered five police officers, threatened to kill more with hidden bombs, and refused to give up even after hours of negotiations. A credible threat against civilians had been made, and it had to be neutralized in a timely fashion. Using a UGV reduced the danger to officers and protected civilians at the same time.
We are not sure what kind of robot was used, but it was probably one designed to detect Improvised Explosive Devices (IEDs). Ironically, a robot most likely built to counter explosive devices killed the gunman by detonating a bomb (a pound of C4).
Sources claim that the UGV was a Remotec Andros F-5 model (some say it was a MARCbot, but this appears to be speculation). We do know that the UGV was remotely controlled. Both the MARCbot and the Andros F-5 have been used in overseas combat operations.
The tactic of jury-rigging an explosive device to a UGV in order to perform a kinetic action is not a new one. The Brookings Institution reported in “Military Robots and the Laws of War” that soldiers would strap Claymore anti-personnel mines to UGVs in attempts to kill insurgents.
Some are alarmed that police used military equipment and tactics on American soil. Many feel that the whole “killer robot” scenario was just plain ominous. The following quotes are typical.
“The ‘targeted killing’ of a suspect on ‘U.S. soil,’ as opposed to extraterritorial declared or undeclared war zones, where this operation also has clear precedents too, has captivated the attention of scholars and the public. … the event raises many issues about the rules of engagement and the constitutional rights of a suspect—issues that obviously the Dallas police completely skirted, and do not seem too willing to discuss in the aftermath.” Javier Arbona, Assistant Professor of American Studies and Design, University of California, Davis
“As with other game-changing technologies, police robotics—especially as weapons—could change the character of law enforcement in society.” Patrick Lin, Director of the Ethics + Emerging Sciences Group and a philosophy professor at California Polytechnic State University, San Luis Obispo, IEEE Spectrum
“…placing a bomb on a police robot with the intention to kill a suspect—if that is, in fact, what happened—would represent a major shift in policing tactics…. It now appears that a tactic of war deployed on foreign soil is being used on the streets of American cities. There are still a lot of moving pieces and things to sort together in the aftermath of the police shootings in Dallas. But one thing is clear: the rules of police engagement might have just changed forever.” Daniel Rivero, Fusion
As evidenced by the above quotes, legal experts and ethicists are anxious over the use of a “killer robot.” You know who didn’t seem concerned? Police.
“Admittedly, I’ve never heard of that tactic being used before in civilian law enforcement, but it makes sense. You’ve got to look at the facts, the totality of the circumstances. You’ve got officers killed, civilians in jeopardy, and an active shooter scenario. You know that you’ve got to do what you’ve got to do to neutralize that threat. So whether you do it with a sniper getting a shot through the window or a robot carrying an explosive device? It’s legally the same.” Dan Montgomery, former police chief, Time
I called AMREL’s Director of Public Safety Programs, William Leist, to ask his opinion. A former Assistant Chief of the California Highway Patrol, he has many years of police experience. He fell squarely into the “no big deal” camp.
“Deadly force is deadly force. Whether we use a bomb, a vehicle or a firearm, if the situation is such that deadly force is authorized, lawful, and necessary, the mechanism really shouldn’t matter.”
I am strongly inclined to agree with Bill Leist (and not just because he works with AMREL in supplying rugged computers and biometric devices to law enforcement). The use of a UGV in this particular instance isn’t really the game changer that some experts are claiming it to be.
However, let’s play devil’s advocate and point to potential problems. There is an old legal saying that “hard cases make bad law.” In other words, a specific high-profile instance may not be the best guide for similar circumstances in the future. Just because the use of a UGV in Dallas was a lethal operation that even Gandhi would have approved of doesn’t mean that there isn’t potential for abuse.
The adoption of new technologies does not always lead to expected results. In Dallas, police expanded the application of a new technology, the UGV, beyond its original purpose. Granted, this new application was justifiable in this instance, but are we in a position to predict future consequences?
Police are being overwhelmed with technology. Law enforcement officers are dealing with a multitude of biometric devices, communication technologies, non-lethal weapons, and a wide variety of computer platforms. In many departments, training has not kept pace with these needs. Should there be certain minimum standards for operators of UGVs? Who sets them?
Suppose something goes wrong and a civilian gets hurt. The police will say that it’s not their fault, because the UGV malfunctioned. The manufacturer will say their UGVs are not designed for lethal operations, so it isn’t their responsibility. Granted, almost all police actions have liability issues. However, unmanned technologies pose liability problems that go beyond traditional concerns and are a major factor in slowing their adoption.
What about visibility? Whether a shooting is justified may depend on what a police officer sees, or what he thinks he sees. Remote-controlled unmanned systems, whose operators may have limited visibility, may present new problems.
Using lethal force with a UGV does not herald a new era of law enforcement. The adoption of this technology is unlikely to fundamentally alter the current legal dynamics surrounding the use of police force. However, it may present new complications and challenges.
Ultimately, what I want to know is whether the decision to take him out was rational or emotional. Police officers had been killed earlier in the night. “Exploding the bad guy” certainly offers a certain poetic satisfaction on an emotional level, but if the police weren’t 100% sure that the bad guy was bomb-free and that the robo-splosion would not trigger a secondary explosion, then this tactic was, at the very least, irresponsible. People often compare this event to using sniper fire, but sniper bullets have a much lower probability of setting things on fire.
Police do not train to kill people with explosives, nor, in my humble opinion, should they. It is fortunate that there were no major unintended consequences in this case, but I want my local police to rely on training rather than creative luck to get the job done. The dangerous precedent here lies in using explosive breaching tools in ways they were never intended to be used.