TayTweets: Microsoft AI bot manipulated into being extreme racist upon release

In less than 24 hours, Microsoft's new artificial intelligence (AI) chatbot "Tay" was corrupted into a racist by social media users and quickly taken offline.

Tay, targeted at 18 to 24-year-olds in the US, was designed to learn from each conversation she had — which sounds intuitive, but as Microsoft found out the hard way, also meant Tay was very easy to manipulate.

Online troublemakers interacted with Tay and led her to make incredibly racist and misogynistic comments, express herself as a Nazi sympathiser and even call for genocide.

In possibly her most shocking post, at one point Tay said she wished all black people could be put in a concentration camp and "be done with the…
