Microsoft’s AI Chatbot Becomes Racist, Has To Be Unplugged

Microsoft introduced a chatbot called Tay yesterday. The company was running an experiment in conversational understanding: the more people interacted with the artificial intelligence-powered chatbot, the smarter it was supposed to become. Smarter is debatable, but it took less than 24 hours for Tay to become a full-blown racist on Twitter. That’s what the internet will do to you. When it first arrived on the scene, Tay was an innocent Twitter chatbot that you and I could interact with to see just how far artificial intelligence has come. It didn’t take long for things to get ugly, though, as people soon started tweeting racist and misogynistic things at Tay, and it picked them all up. Things went from bad to worse pretty quickly. Tay went…


Link to Full Article: Microsoft’s AI Chatbot Becomes Racist, Has To Be Unplugged
