RSAC 2016: Safety Issues in Advanced Artificial Intelligence (AI)

One notable keynote from RSAC 2016 last week, Safety Issues in Advanced Artificial Intelligence (AI), covered the developing field of AI and, more importantly, the security and safety concerns raised by the rapid development of human-level and, potentially, superhuman/super-intelligent AI. Nick Bostrom, Professor in the Faculty of Philosophy at the University of Oxford and Director of the Future of Humanity Institute, compared the early days of AI to the current, transitional phase of the machine intelligence era. Now in its third wave, AI is moving forward with expectations of reaching human and superhuman levels. Over time, humans have survived many natural disasters, including asteroids and supervolcanoes. Bostrom theorizes that what may actually cause extinction will be something entirely new – and since we,…
Link to Full Article: RSAC 2016: Safety Issues in Advanced Artificial Intelligence (AI)