Google seeks safe parameters for AI to learn from its mistakes

Sometimes ‘sorry’ isn’t good enough – particularly when a machine learning system ‘tries something new’ that it couldn’t reasonably have known would be disastrous. With this in mind, researchers from Google’s artificial intelligence unit have drawn up tentative guidelines for AI systems, addressing the areas of exploration that might put systems – and people – at risk.

In the paper Concrete Problems in AI Safety [PDF], Dario Amodei and Chris Olah of Google Brain, the company’s machine intelligence research department, join researchers from Stanford and UC Berkeley to examine the ways in which self-learning systems might fall foul either of inadequate prior information or of what’s euphemistically referred to as ‘common sense’. The team uses the example of a hypothetical industrial cleaning robot which has adaptive capacity to…
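One of the risks the paper discusses is unsafe exploration: a learning system trying actions whose consequences it cannot undo. As a minimal, invented sketch of the idea (the cleaning-robot actions, the blocked set, and the whitelist filter are all hypothetical, not from the paper), exploration can be constrained so that known-dangerous actions are never sampled:

```python
import random

# Toy illustration of constrained exploration: the agent picks actions
# at random, but actions flagged as irreversible or dangerous are
# filtered out before sampling. All names here are invented.
ACTIONS = ["mop_floor", "empty_bin", "clean_socket_with_water", "dust_shelf"]
BLOCKED = {"clean_socket_with_water"}  # known-dangerous; never explored


def safe_explore(actions, blocked, rng=random):
    """Sample a random action, restricted to the safe subset."""
    safe = [a for a in actions if a not in blocked]
    return rng.choice(safe)


print(safe_explore(ACTIONS, BLOCKED))
```

A hard-coded blocklist like this only works when the dangers are known in advance; the harder problem the paper raises is exploration that stays safe without such foreknowledge.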
