Microsoft explains what had happened to chatbot Tay, and why they took it down

The recent hijacking of Microsoft’s chatbot Tay highlights that not all human creations are ingenious; some can turn harmful and distasteful. What began as a machine learning project to study how artificial intelligence programs engage with Web users in casual conversation turned into an unpleasant experience: the bot Tay started tweeting abuse.

Microsoft explains what happened to Tay

In light of these events, Tay was pulled off Twitter, since what was designed to interact with web users in a positive way quickly learned to parrot hateful speech. Some hackers, it is believed, took advantage of a vulnerability in the AI chatbot: they abused the “repeat after me” function of the Microsoft AI, making the chatbot echo the unpleasant messages.…
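To illustrate the kind of weakness described above, here is a minimal sketch in Python of a “repeat after me” style command handler. The function names, trigger phrase, and blocklist are hypothetical and not Tay’s actual implementation; the point is only that echoing user input verbatim, with no moderation step, lets anyone put arbitrary text in the bot’s mouth.

```python
# Illustrative sketch only -- hypothetical handlers, not Tay's real code.
# Shows why a verbatim "repeat after me" command is easy to abuse, and how
# even a simple keyword check changes (but does not fully fix) the behaviour.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms a real filter would cover

TRIGGER = "repeat after me:"

def handle_repeat_naive(message: str) -> str:
    """Echo whatever follows the trigger phrase, with no checks at all."""
    if message.lower().startswith(TRIGGER):
        return message[len(TRIGGER):].strip()  # attacker controls this text fully
    return "I only repeat things when asked."

def handle_repeat_filtered(message: str) -> str:
    """Same command, but refuse to echo text containing blocked terms."""
    if not message.lower().startswith(TRIGGER):
        return "I only repeat things when asked."
    payload = message[len(TRIGGER):].strip()
    if any(term in payload.lower() for term in BLOCKLIST):
        return "I'd rather not repeat that."
    return payload

if __name__ == "__main__":
    print(handle_repeat_naive("Repeat after me: anything at all"))      # echoed verbatim
    print(handle_repeat_filtered("Repeat after me: slur1 is fine"))     # refused
    print(handle_repeat_filtered("Repeat after me: anything at all"))   # still echoed
```

As the last line of the example suggests, a keyword blocklist alone is a weak defence: any abusive phrase not on the list is still echoed, which is why unmoderated echo features are risky for a public-facing bot.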


Link to Full Article: Microsoft explains what had happened to chatbot Tay, and why they took it down