
Can We Teach Robots Morality?


Unlike the depictions in the sci-fi literature of recent decades, most of today's artificial intelligence (AI) is still fragile, with too little awareness of the world around it to "survive" as part of our society.

Some argue that we are still far from creating autonomous robots that understand the rules of human life and interaction, both in everyday routines and in less predictable situations, and even further from robots that can apply those rules to integrate into society.

Others are convinced that we are very close to creating autonomous robots that react and act human-like in a variety of complex situations.

What most AIs lack in order to be more "human" is morality.

Researchers at Tufts University, MA; Rensselaer Polytechnic Institute, NY; and Brown University, RI, are working with the US Navy to create robots capable of developing their own sense of morality.
First, the teams are developing theoretical models. Once a model proves successful, it will be integrated into AI systems that can autonomously evaluate difficult situations and make complex ethical decisions, even overruling the otherwise inflexible instructions they were given.

Moral Competence in Machines?

Prof. Matthias Scheutz at Tufts University recently stated that “moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree.”
The team of scientists around Prof. Scheutz is trying to break human moral competence into its basic components, laying the foundation for a framework of human moral reasoning that will later be translated into an algorithm and implemented in an intelligent machine's software.
The "moral" algorithm would allow a robot to override its instructions when confronted with new evidence, and it would also enable the robot to justify its actions to the humans who control it.
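
As a rough illustration only, such an override-and-justify layer might look something like the minimal Python sketch below. None of the names, types, or the simple "moral weight" scoring come from the researchers' actual framework; they are hypothetical assumptions for this example.

```python
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    moral_weight: float  # hypothetical 0..1 score of ethical urgency


@dataclass
class Decision:
    chosen: Action
    overridden: bool
    justification: str


class MoralGate:
    """Hypothetical ethical check: compares a planned action against newly
    observed alternatives and may override it, always producing a
    human-readable justification."""

    def decide(self, planned: Action, evidence: list[Action]) -> Decision:
        # Most ethically urgent alternative suggested by new evidence, if any.
        best = max(evidence, key=lambda a: a.moral_weight, default=None)
        if best is not None and best.moral_weight > planned.moral_weight:
            return Decision(
                chosen=best,
                overridden=True,
                justification=(
                    f"Overrode '{planned.name}': '{best.name}' has higher "
                    f"moral urgency ({best.moral_weight:.2f} > "
                    f"{planned.moral_weight:.2f})."
                ),
            )
        return Decision(
            chosen=planned,
            overridden=False,
            justification=f"No evidence outweighed '{planned.name}'; proceeding.",
        )
```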

The scientists' vision is that every one of the AI's choices first passes an ethical check before a task is executed, a check that depends on complex interactions with the environment.
With this capability, a robot ordered to transport urgently needed medication that finds an injured person along the way could analyze the circumstances and independently choose whether to help the person immediately, call for additional help, or carry on with its original mission.
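
Continuing the hypothetical sketch above, that medication scenario could be walked through like this (the weights are invented for illustration):

```python
gate = MoralGate()
mission = Action("deliver urgent medication", moral_weight=0.80)
injured = Action("help injured person found en route", moral_weight=0.90)

decision = gate.decide(planned=mission, evidence=[injured])
print(decision.justification)
# Overrode 'deliver urgent medication': 'help injured person found en route'
# has higher moral urgency (0.90 > 0.80).
```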

