HAL 9000 (credit: Warner Bros.)
Is it possible to develop “moral” autonomous robots with a sense for right, wrong, and the consequences of both?
Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute think so, and are teaming with the U.S. Navy to explore technology that could make it possible.
“Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” says principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts.
“The question is whether machines — or any other artificial system, for that matter — can emulate and exercise these abilities.”
But since there is no universal agreement on the morality of laws and societal conventions, this research raises some interesting questions. Was HAL 9000 (Heuristically programmed ALgorithmic computer) moral? Who defines morality?