Could Robots Be Programmed To Know Right From Wrong?

The US Navy has invested $7.5 million to research the possibility of ‘moral robots.’

The Office of Naval Research (ONR) is awarding $7.5 million in funding over five years to researchers at universities including Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to explore how to give autonomous robotic systems a sense of right and wrong and of moral consequence.

The use of lethal, fully autonomous robots is prohibited by the United States military, and even semi-autonomous robots must first receive instruction or confirmation from authorized human personnel before selecting and engaging targets. Still, it seems the military is not completely closed off to the idea of autonomous robots operating on their own.

Paul Bello, Director of the Cognitive Science Program at the ONR, told Defense One:

Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before. For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.

Bello added that even if the systems used by the armed forces are not armed, they may still need to make moral decisions, especially in first-response, search-and-rescue, or medical missions. In such cases, a robotic system may still be forced to decide whom to evacuate first or whom to treat.

Some artificial intelligence researchers approved of the grant, reasoning that the military is continuously building and fielding systems that may one day need to make moral decisions. Systems like drones and autonomous vehicles are being put into unpredictable situations, and they may need to be programmed with some way to weigh different options and decide on the “right” or “moral” one. Some argue that such systems may even be better than people at making these decisions because they can evaluate more options than a person could.
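To make the idea of “weighing different options” concrete, here is a minimal sketch in Python of what one very simple decision rule could look like for the evacuation scenario Bello describes. Everything in it (the Casualty fields, the weights, the scoring formula) is a hypothetical illustration for this article, not part of the ONR-funded research; choosing those weights is exactly the kind of ethical judgment critics say should not be quietly delegated to a designer.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    """A hypothetical casualty record used only for this illustration."""
    name: str
    injury_severity: float   # 0.0 (minor) .. 1.0 (critical)
    survival_chance: float   # estimated probability of survival if helped now
    evacuation_cost: float   # relative effort/time to move this person

# Illustrative weights: how much each factor matters to the final score.
# Picking these numbers encodes a moral stance about whose needs come first.
WEIGHTS = {
    "injury_severity": 0.5,
    "survival_chance": 0.4,
    "evacuation_cost": -0.1,  # higher cost lowers priority slightly
}

def triage_score(c: Casualty) -> float:
    """Combine the factors into a single priority score."""
    return (WEIGHTS["injury_severity"] * c.injury_severity
            + WEIGHTS["survival_chance"] * c.survival_chance
            + WEIGHTS["evacuation_cost"] * c.evacuation_cost)

def rank_for_evacuation(casualties: list[Casualty]) -> list[Casualty]:
    """Return casualties ordered from highest to lowest priority."""
    return sorted(casualties, key=triage_score, reverse=True)

if __name__ == "__main__":
    scene = [
        Casualty("A", injury_severity=0.9, survival_chance=0.3, evacuation_cost=0.8),
        Casualty("B", injury_severity=0.6, survival_chance=0.9, evacuation_cost=0.2),
        Casualty("C", injury_severity=0.2, survival_chance=0.99, evacuation_cost=0.1),
    ]
    for person in rank_for_evacuation(scene):
        print(person.name, round(triage_score(person), 3))
```

A toy scorer like this will always produce an answer, but the answer is only as “moral” as the weights its designer chose, which is the core of the disagreement described below.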

On the other hand, those who oppose creating “moral” robots argue that autonomous systems simply follow the program or code built into them by a human designer, and that program may not always be the “right” one or the universally acceptable one.

Though resolutions or agreements on the ethical and legal issues surrounding autonomous weapon systems are still a long way off, researchers are at least taking the first step in exploring how to give these systems a sort of “moral compass.”

Source: Defense One, Vice News
