Google DeepMind Researchers Develop AI Kill Switch

Artificial intelligence doesn’t have to involve murderous, sentient super-intelligence to be dangerous. It’s dangerous right now, albeit in generally more primitive ways. If a machine can learn from real-world inputs and adjust its behavior accordingly, there exists the potential for that machine to learn the wrong thing. And if a machine can learn the wrong thing, it can do the wrong thing. Laurent Orseau and Stuart Armstrong, researchers at Google DeepMind and the Future of Humanity Institute, respectively, have developed a new framework to address this in the form of “safely interruptible” artificial intelligence. In other words, their system, described in a paper to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence, guarantees that a machine will not learn to resist attempts by humans to interrupt it.
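To make the idea concrete, here is a minimal toy sketch, not the construction from Orseau and Armstrong’s paper: an off-policy Q-learning agent on a short corridor whose chosen action is occasionally overridden by a simulated operator. The environment, the parameter values, and the `interrupted()` helper are all invented for illustration. The point of the sketch is that because the learning update bootstraps from the best action in the next state rather than from whatever the overridden behavior actually does, the interruptions do not push the learned policy toward avoiding the operator.

```python
# Toy, hypothetical illustration of the "safe interruptibility" idea:
# an off-policy Q-learning agent whose actions are sometimes overridden
# by a simulated operator. This is NOT the paper's construction; the
# environment, parameters, and interrupted() helper are made up.

import random

N_STATES = 5            # corridor states 0..4, reward only at state 4
ACTIONS = (-1, +1)      # step left or right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Corridor dynamics: reward 1.0 for reaching the final state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def choose(state):
    """Epsilon-greedy selection over the learned Q-values."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def interrupted(state):
    """Stand-in for a human operator who sometimes halts progress."""
    return state == 3 and random.random() < 0.5

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose(s)
        if interrupted(s):
            a = -1                      # operator overrides the agent
        s2, r, done = step(s, a)
        # Off-policy target: bootstrap from the best action in s2, not from
        # whatever the (possibly overridden) behaviour does next, so repeated
        # interruptions do not bias the learned policy.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy in each non-terminal state typically
# still heads right toward the goal despite the repeated interruptions.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Running the sketch prints a greedy policy that keeps moving toward the goal even though the agent was frequently interrupted at state 3, which is the informal sense in which the interruptions have not taught it anything it could use to resist them.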

