Scientists to explore the challenges of providing autonomous robots with a sense of right and wrong — and the consequences of their actions
As we all learned from the 1983 film WarGames, machines have the upper hand in warfare when it comes to making logical decisions (for instance, that the only winning move in nuclear war is not to play). But now it seems the US Navy is not content with that party trick: it is working on teaching artificial intelligence how to make moral and ethical decisions, too.
A multidisciplinary team at Tufts and Brown Universities, along with Rensselaer Polytechnic Institute, has been funded by the Office of Naval Research to explore the challenges of providing autonomous robots with a sense of right and wrong — and of the consequences of their actions. Matthias Scheutz, principal investigator on the project and director of the Human-Robot Interaction Laboratory at Tufts, believes that what we think of as a uniquely human trait could be simpler than most of us assume.
Warbots 2.0
Source: www.wired.co.uk