Microsoft’s AI experiment: Embarrassment or revelation?

Can this AI experiment offer insight into radicalization across the globe? This past Wednesday, Microsoft introduced Tay, an artificial intelligence chatbot designed to be a companion for 18- to 24-year-olds using mobile devices. Calling the effort an experiment in conversational learning, the company's engineers built the bot to learn from user input and from the profiles of other social media users. According to Microsoft's website for Tay, "Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that's been anonymized is Tay's primary data source. That data has been modeled, cleaned and filtered by the team developing Tay." Sounds good, but it all went south in quite a hurry: in less than 24 hours, Tay was radicalized…

Link to Full Article: Microsoft’s AI experiment: Embarrassment or revelation?