Microsoft Apologizes For Its Racist AI Chatbot

As part of an experiment in conversational understanding for artificial intelligence, Microsoft recently introduced an AI chatbot called Tay. The chatbot was linked up to Twitter, and people could tweet at it to get a response. It took less than 24 hours for users to turn the chatbot into a hate-spewing racist, as it picked up all sorts of wrong ideas from the tweets sent to it. Microsoft has now apologized for the entire episode. In a blog post, the company explains that a small number of human users on Twitter exploited a flaw in Tay to transform it into a bigot. Microsoft doesn’t go into detail about what this flaw was, but it’s believed that many users abused Tay’s “repeat after me” feature, which made the chatbot repeat whatever was tweeted at it. Naturally, trolls tweeted…