How the Microsoft Tay chatbot debacle could have been prevented with better AI

Image: screenshot, Twitter

On March 23, in an effort to appeal to a prime social media demographic, 18- to 24-year-old women, Microsoft launched a teen-girl-inspired chatbot named Tay. Less than a day later, the bot was pulled offline after tweeting things like “i fucking hate feminists”—and that’s one of the tamer messages. But while the rapid degeneration of Tay’s conversations may have been unexpected for Microsoft, most AI experts agree that it was inevitable. And most say it could have been prevented with tools like emotional analytics and better AI testing. The Tay debacle would never happen in the academic world, for instance. “In a university, you can’t just run what is essentially an experiment on millions of users (or even on 10 users), without getting permission…
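To make the “emotional analytics” idea concrete, here is a minimal illustrative sketch, not Microsoft’s actual system, of the kind of outbound-message gate experts describe: before a bot posts a reply, it is screened against a blocklist and a crude lexicon-based sentiment score. The word lists and function names are invented for this example; a production system would use a trained toxicity classifier instead.

```python
# Illustrative sketch only: a pre-send gate for a chatbot's replies.
# The word lists below are assumptions for demonstration, not a real lexicon.

BLOCKLIST = {"hate", "nazi", "kill"}                      # hypothetical banned terms
NEGATIVE_WORDS = {"hate", "awful", "stupid", "terrible"}  # crude negative lexicon
POSITIVE_WORDS = {"love", "great", "fun", "nice"}         # crude positive lexicon

def sentiment_score(text: str) -> int:
    """Naive lexicon score: positive word count minus negative word count."""
    words = text.lower().split()
    return (sum(w in POSITIVE_WORDS for w in words)
            - sum(w in NEGATIVE_WORDS for w in words))

def safe_to_send(reply: str) -> bool:
    """Refuse replies that contain banned terms or read as strongly negative."""
    words = set(reply.lower().split())
    if words & BLOCKLIST:
        return False
    return sentiment_score(reply) >= 0

print(safe_to_send("chatting with humans is great fun"))  # True
print(safe_to_send("i hate feminists"))                   # False
```

Even a filter this simple would have caught the tweet quoted above; the harder engineering problem, which better AI testing addresses, is anticipating the adversarial inputs users will feed a learning system in the first place.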


Link to Full Article: How the Microsoft Tay chatbot debacle could have been prevented with better AI