When Well-Intentioned Artificial Intelligence Goes Bad

A week later, she was accidentally reactivated during testing and, within minutes, had succumbed to a "kush"-induced freakout. Tay is now offline, and her account has been made private, much as any parent would do when their teenager gets into trouble on the internet.

What went wrong with Tay? The truth is, nothing. No one should find it surprising that releasing a machine-learning chatbot on social media, in the guise of a teenage girl no less, would invite a wave of interactions designed to test the limits of the technology. Anyone who has ever spoken to Siri, Cortana, or any other virtual assistant knows that one of the first tests involves saying the most profane things you can think of. Microsoft was certainly aware of this; their…


Link to Full Article: When Well-Intentioned Artificial Intelligence Goes Bad