Microsoft’s disastrous Tay experiment shows the hidden dangers of AI

Humans have a long and storied history of freaking out over the possible effects of our technologies. Long ago, Plato worried that writing would hurt people’s memories and “implant forgetfulness in their souls.” More recently, Mary Shelley’s tale of Frankenstein’s monster warned us against playing God.

Today, as artificial intelligences multiply, our ethical dilemmas have grown thornier. That’s because AI can (and often should) behave in ways its human creators might not expect. Our self-driving cars have to grapple with the same problems I studied in my college philosophy classes. And sometimes our friendly, well-intentioned chatbots turn out to be racist Nazis.

Microsoft’s disastrous chatbot Tay was meant to be a clever experiment in artificial intelligence and machine learning. The bot would speak like millennials, learning from the people it interacted…
