A research scientist at the Georgia Institute of Technology is studying whether intelligent robots can behave more ethically than human beings on the battlefield, according to the International Herald Tribune.
Ronald Arkin of the Georgia Institute of Technology is under contract with the U.S. Army to design software for battlefield robots. He is investigating whether autonomous robots and drones, operating without direct control by Army personnel, can make better battlefield decisions than humans.
In a report to the Army last year, Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, with no tendency to lash out in fear. They can be built to show no anger or recklessness, Arkin wrote, and they can be made invulnerable to what he called "the psychological problem of 'scenario fulfillment,'" which causes people to absorb new information more easily if it agrees with their pre-existing ideas.
A 2006 survey by the Office of the Surgeon, Multi-National Force-Iraq, and the Office of the Surgeon General, U.S. Army Medical Command, found that troops "who were stressed, angry, anxious or mourning lost colleagues or who had handled the dead were more likely to say they had mistreated civilian noncombatants."
The article quotes Arkin as saying: "It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of." Arkin said the robots would first have to be programmed with rules covering such questions as when to fire on a tank and how to distinguish civilians from combatants.
As an example, Arkin cited a robot pilot that spots a tank at the entrance to a cemetery where civilians are gathered. The robot decides not to fire, but later fires at another tank in a more remote location. "In another case, attacking an important terrorist leader in a taxi in front of an apartment building, for example, might be regarded as ethical if the target is important and the risk of civilian casualties low." Arkin is testing his hypothesis in computer simulations.
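The examples above describe a rule-based decision procedure. The following is a minimal, entirely hypothetical sketch of what such reasoning might look like; it is not Arkin's software or any real targeting system, and every name, field, and threshold in it is invented for illustration.

```python
# Hypothetical sketch of rule-based engagement logic, loosely mirroring
# the cemetery and taxi examples in the article. All names and numeric
# thresholds are invented; nothing here reflects a real system.
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                  # e.g. "tank", "taxi"
    importance: float          # 0.0-1.0, assumed mission value of the target
    civilian_risk: float       # 0.0-1.0, estimated chance of civilian harm
    near_protected_site: bool  # cemetery, apartment building, etc.

def may_engage(t: Target, risk_threshold: float = 0.2) -> bool:
    """Return True only if engaging the target passes the (invented) rules."""
    # Rule 1: never fire near a protected site where civilians may be
    # present, as in the cemetery example.
    if t.near_protected_site and t.civilian_risk > 0.0:
        return False
    # Rule 2: a high-value target justifies engagement only when the
    # estimated civilian risk is low, as in the taxi example.
    if t.importance >= 0.8:
        return t.civilian_risk <= risk_threshold
    # Rule 3: ordinary targets require near-zero civilian risk.
    return t.civilian_risk <= 0.05

tank_at_cemetery = Target("tank", 0.5, 0.6, True)
remote_tank = Target("tank", 0.5, 0.0, False)
print(may_engage(tank_at_cemetery))  # False
print(may_engage(remote_tank))       # True
```

Even this toy version shows why Arkin frames the problem as one of programming rules first: the hard work lies in estimating inputs like civilian risk, not in evaluating the rules themselves.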