Artificial Intelligence, Tay & the Tree

On March 23rd, 2016, Microsoft released a chunk of artificial intelligence onto the Internet. Dubbed “Tay,” this was a bot designed to chat with real human beings, simulating a 19-year-old female and learning from those humans how to act more human. On March 24th, less than 24 hours later, Microsoft put Tay to sleep. The reason? She was spewing neo-Nazi, xenophobic, racist tweets. Apparently, Tay had been learning from the wrong humans, those who had chosen to teach her. Six days later, by some techie accident, she was back, for just a brief flash of spamming and very teen-like absurdities. Jon Russell at TechCrunch wrote, “This feels like how the AI apocalypse starts…”

What Do We Learn From All This?

Microsoft says they’re busy learning from this experience. It’s arguably the most informative…
