Microsoft Forced to Apologise for Racist Twitter AI

Microsoft has issued an apology over its creation of an artificial intelligence program behind a Twitter account that began to post and retweet racist remarks. The chatbot, named Tay, was designed to become more intelligent as users interacted with it; however, it quickly began imitating targeted trolling messages tweeted at it, claiming that “the holocaust was made up,” “the Jews did 9/11,” and “I f***ng hate feminists and they should all die and burn in hell.” As a result, Tay was deactivated within 24 hours of its introduction. On Friday, Peter Lee, Microsoft’s Head of Research, said the company was “deeply sorry for the unintended offensive and hurtful tweets” and confirmed that Tay’s Twitter account has been deactivated for the foreseeable future. He added: “Tay is now offline and we’ll look to bring Tay back only…
Link to Full Article: Microsoft Forced to Apologise for Racist Twitter AI