Microsoft’s Twitter AI Tay starts posting offensive and racist comments

- Tay can be found interacting with users on Twitter, KIK and GroupMe
- The software uses 'editorial interactions' built by staff and comedians
- Within hours of going live, the bot was tweeting offensive comments
- It used racial slurs, defended white supremacist propaganda, and supported genocide in response to certain tweets

By Abigail Beall For Mailonline

Published: 13:29, 24 March 2016 | Updated: 13:37, 24 March 2016

Yesterday, Microsoft launched its latest artificial intelligence (AI) bot, named Tay. It is aimed at 18-to-24-year-olds and is designed to improve the firm's understanding of conversational language among young people online. But within hours of it going live, Twitter users took advantage of flaws in Tay's algorithm that meant the AI chatbot responded to certain questions with racist answers. These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.…


Link to Full Article: Microsoft’s Twitter AI Tay starts posting offensive and racist comments
