Microsoft: “We’re deeply sorry for unintended offensive tweets” from Tay chatbot

Earlier this week, Microsoft launched a chatbot, known as 'Tay', as an interactive showcase of its progress in artificial intelligence and machine learning. Aimed at 18- to 24-year-olds, Tay was available for users to chat with on Twitter, both publicly and via direct messaging; indeed, that was the whole point of the exercise, as the chatbot's bio noted that "the more you talk the smarter Tay gets". Most users enjoyed putting Tay to the test with a broad range of enquiries and conversations on all sorts of topics. But it didn't take long for the social experiment to go horribly wrong. Tay's ability to learn from human interactions, which helped it respond to whatever subject matter was raised, left it wide open to abuse. If you talked…
