In “UAV Implementation at the Infantry Platoon Level” (Military & Aerospace), the author reported “I spent 2 ½ years over 2 deployments to Iraq as an Infantryman and we rarely had good UAV support. When we did have UAV support, it was not always ‘top of the line’ because the operators were FOB based and it was an office job that became a ‘check the block’ duty.”  The author complained that UAVs were not being “pushed down to the platoon level,” because “…most Commanders are concerned about losing platoon level UAVs.”

His comments are an interesting example of how Human-Robot Interaction (HRI) difficulties can frustrate the proper implementation of novel technology. From the very beginning of the introduction of unmanned systems onto the battlefield, there has been a debate about where best to position human operators. For the most part, a consensus has emerged that the dangers posed to operators on the front lines are outweighed by the advantages of close coordination with forward-placed warfighters.

In this instance, UAV deployment was influenced not by concerns about the operators but, according to the author, by fear of losing valuable equipment. Clearly, these commanders hadn’t gotten the memo that the point of unmanned systems is to assume risk so the troops don’t have to.

In “Real Soldiers Love Their Robot Brethren,” LiveScience.com reveals that other soldiers haven’t gotten this memo either. Quoting Peter Singer (author of “Wired for War: The Robotics Revolution and Conflict in the 21st Century”), the article describes a “… soldier who ran 164 feet under machine gun fire to retrieve a robot that had been knocked out of action.”

The phenomenon of soldiers risking their lives for robots was also reported in “Why Bomb-Proofing Robots Might Be a Bad Idea” (Wired.com). In fact, the author of that article suggests that we should reconsider the idea of outfitting robots with expensive classified electronic countermeasures, because doing so “…undermines the purpose of having a disposable army of machines to handle irregular war’s most dangerous work.”

So, in addition to obstructing proper implementation, HRI difficulties affect actual combat. A great deal of research has been done on HRI, but human behavior has a way of confounding even the most dedicated researcher.

Even the User Interface (UI) itself can cause unanticipated problems. In an article to be published in the March OCU Pro newsletter, David Bruemmer, VP of R&D at 5-D Robotics, reveals some unexpected problems with commonly used UIs. Simply put, video feeds and other information-rich UIs may actually be detrimental to the operation of an unmanned system. (To read this article and receive the OCU Pro newsletter, sign up here.)

The unpredictability of how humans interact with robots may frustrate the drive to field novel technology as fast as possible. This obstacle underscores the rather unsurprising idea that end-user input is important early in the development process. (At least, it should be unsurprising to anyone who reads this blog.)

Of course, not all unpredictable human interactions with robots have dire consequences.  Check out this video of a “weaponized” BigDog robot being used in ways that the designers surely never envisioned.

1 reply

    Dave says:

    As for myself, I will not risk my life for a machine, but if the machine provides information/real-time video that would prevent the deaths of any of my Soldiers, you bet I would run after it. Been shot at enough already. Shoot me an email and I’ll give you the other 17 pages or so with regard to UAV implementation and manned rotorcraft support, and the lack thereof. I believe that UAS systems should be throw-away tools. I’ll also send you another paper that I’m working on regarding AI singularity in UAS/UAV targeting systems (not the sci-fi version). Interesting comments though. Thanks

    Dave
    posted @ Monday, May 23, 2011 10:50 AM by Dave Hickman (above-mentioned author) envision1972@gmail.com
