Can we trust robots to make moral decisions?

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots. Its chatbot Tay, designed to speak like a teenage girl, turned into a Nazi-loving racist after less than 24 hours on Twitter. “Repeat after me, Hitler did nothing wrong,” she said, after interacting with various trolls. “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now.”

Of course, Tay wasn’t designed to be explicitly moral. But plenty of other machines are involved in work that has clear ethical implications. Wendell Wallach, a scholar at Yale’s Interdisciplinary Center for Bioethics and author of “A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control,” points out that in hospitals, APACHE medical systems help determine the best treatments for patients in intensive care…