Tay, the neo-Nazi millennial chatbot, gets autopsied

A user told Tay to tweet Trump propaganda; she did (though the tweet has since been deleted).

Microsoft has apologized for the conduct of its racist, abusive machine-learning chatbot, Tay. The bot, which was supposed to mimic the conversational style of a 19-year-old woman on Twitter, Kik, and GroupMe, was taken offline less than 24 hours after launch because she started promoting Nazi ideology and harassing other Twitter users. The company appears to have been caught off guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014 and has held more than 40 million conversations, apparently without major incident. Microsoft wanted to see whether it could achieve similar success in a different cultural environment, and so Tay was born. Unfortunately, the Tay experience was rather…


Link to Full Article: Tay, the neo-Nazi millennial chatbot, gets autopsied