Microsoft’s AI chatbot causes scandal with racist rants

Microsoft’s artificial intelligence experiment, a chatbot called Tay, had to be aborted less than a day after its introduction when the tweeting bot launched into a series of racist outbursts. Designed to mimic the language patterns of a 19-year-old American girl, the bot was meant to interact with human users on Twitter and learn from those interactions. The experiment did not go as planned, however: users began feeding the programme anti-Semitic, sexist and other offensive content, which the bot happily absorbed. Microsoft shut down Tay’s Twitter account on Thursday night and apologised for the tirade. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft’s vice president of research, wrote…
