Microsoft’s AI millennial chatbot became a racist jerk after less than a day on Twitter

On Wednesday (Mar. 23), Microsoft unveiled a friendly AI chatbot named Tay, modeled to sound like a typical teenage girl. The bot was designed to learn by talking with real people on Twitter and the messaging apps Kik and GroupMe. (“The more you talk the smarter Tay gets,” says the bot’s Twitter profile.)

But the well-intentioned experiment quickly descended into chaos, racial epithets, and Nazi rhetoric. Tay started out by asserting that “humans are super cool.” The humans it encountered, however, really weren’t so cool. After less than a day on Twitter, the bot had itself started spouting racist, sexist, and anti-Semitic comments. The Telegraph highlighted tweets, since deleted, in which Tay says “Bush did 9/11 and Hitler would have done a better job than…
