Microsoft’s failed Twitter chatbot Tay: More human than we’d like to admit?

By Michael Andor Brodeur, Globe Correspondent | March 25, 2016

This week, Twitter took a break from showcasing the decline of human intelligence to highlight the promise of artificial intelligence, and it was magic. Well, for 24 hours it was. A little after 8 a.m. on Wednesday, Microsoft introduced the world to Tay, a state-of-the-art, teen-seeming chatbot created to “experiment with and conduct research on conversational understanding.” Tay was designed to tell jokes, play games, and otherwise amuse her fellow teens by sampling, analyzing, and recycling their speech patterns into something approximating conversation. But Tay’s destiny was to be trollbait. Part of the problem with (and the magic of) Tay is that, as Microsoft researcher Kati London told BuzzFeed, “the more you talk to her the smarter she gets.”…