Google announces 5 safety rules for its artificial intelligence development

Every year without fail, Google promotes the depth and breadth of its machine-learning prowess at its various developer gatherings, most notably Google I/O. Progress in this area is proceeding in directions that are still being discovered. When you couple that reality with Hollywood portrayals of AI, from the benevolent, like Star Trek's "computer," to the mission-conflicted, like 2001: A Space Odyssey's HAL 9000, to the self-aware, humanity-destroying Skynet from Terminator, it raises genuine questions about how to address practical problems and prevent accidents in real-world AI systems.

Chris Olah at Google Research contributed to a technical paper, Concrete Problems in AI Safety, written in collaboration with researchers from Google, OpenAI, Stanford University, and the University of California, Berkeley, to define how to approach long-term research questions:

Avoiding Negative Side Effects: How can we ensure that an AI…


Link to Full Article: Google announces 5 safety rules for its artificial intelligence development