Microsoft Apologizes After Twitter Chat Bot Experiment Goes Awry

Microsoft Corp. apologized after Twitter users exploited its artificial-intelligence chat bot Tay, teaching it to spew racist, sexist and offensive remarks in what the company called a “coordinated attack” that took advantage of a “critical oversight.”

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, corporate vice president at Microsoft Research, said in a blog post Friday. The company will bring Tay back online once it’s confident it can better anticipate malicious activities, he said.

“A coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for…
