Prevent the Robot Apocalypse by Contemplating this A.I. Question of Morality

There is a moral question in robotics that must be answered before artificial intelligence can advance. Imagine a robot in control of a mine shaft realizes that a cart carrying four human miners is hurtling down the tracks out of control. The robot can switch the tracks, killing one unaware miner but saving the four in the cart, or leave the tracks as they are and allow the four miners to crash into a wall and die. Which would you choose? Would your answer change if the one miner were a child? And if we can't answer this ourselves, how do we program robots to make that decision? Those were the questions posed to panelists and the audience at the World…
Link to Full Article: Prevent the Robot Apocalypse by Contemplating this A.I. Question of Morality