Here’s How We Prevent The Next Racist Chatbot

It took less than 24 hours and 90,000 tweets for Tay, Microsoft’s A.I. chatbot, to start generating racist, genocidal replies on Twitter. The bot has since stopped tweeting, and we can consider Tay a failed experiment. In a statement to Popular Science, a Microsoft spokesperson wrote that Tay’s responses were caused by “a coordinated effort by some users to abuse Tay’s commenting skills.” The bot, which had no consciousness, clearly learned those words from the data she was trained on. Tay did reportedly have a “repeat after me” function, but some of the most racist tweets appear to have been generated by Tay herself.

Life after Tay

However, Tay is not the last chatbot that will be exposed to the internet at large. For artificial intelligence to be fully realized, it needs…


Link to Full Article: Here’s How We Prevent The Next Racist Chatbot