Google DeepMind Researchers Develop AI Kill Switch

Artificial intelligence doesn’t need to be a murderous, sentient super-intelligence to be dangerous. It’s dangerous right now, albeit in more mundane ways. If a machine can learn from real-world inputs and adjust its behavior accordingly, there is the potential for it to learn the wrong thing. And if a machine can learn the wrong thing, it can do the wrong thing. Laurent Orseau and Stuart Armstrong, researchers at Google’s DeepMind and the Future of Humanity Institute, respectively, have developed a new framework to address this in the form of “safely interruptible” artificial intelligence. In other words, their system, described in a paper to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence, guarantees that a machine will not learn to resist attempts by humans to…
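To give a flavor of the idea (this is an illustrative toy, not the authors’ formalism), here is a sketch of an off-policy Q-learning agent on a small chain world whose actions are sometimes overridden by an external overseer. Because the Q-learning update bootstraps from the best next action rather than the action actually taken, the interruptions do not, in expectation, teach the agent to work around them. The environment, constants, and interruption rule are all made up for the example.

```python
import random

N_STATES = 5           # states 0..4; reaching state 4 ends an episode
ACTIONS = [+1, -1]     # move right or left along the chain
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table initialized to zero for every state-action pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Move along the chain; reward 1.0 for reaching the goal state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def choose(s):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def run_episode(interrupt_prob=0.2):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        if random.random() < interrupt_prob:
            a = -1  # overseer interrupts, forcing the agent back toward state 0
        s2, r = step(s, a)
        # Off-policy update: the target uses the best next action,
        # not the (possibly interrupted) action taken next.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

random.seed(0)
for _ in range(500):
    run_episode()

# Despite frequent interruptions, the greedy policy still heads right.
best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(best)
```

The key design choice mirrors the intuition behind the paper’s result: an off-policy learner values states as if it will act optimally from then on, so being dragged off course by an overseer does not lower the value it assigns to the interrupted behavior.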


Link to Full Article: Google DeepMind Researchers Develop AI Kill Switch
