Existential Risks Now to Yield AI Enrichment, Not Destruction

With a BA in Philosophy, Mathematics, and Artificial Intelligence, plus a PhD in Philosophy, Nick Bostrom is uniquely qualified to consider what happens at the junction of humanity and robotics. In July 2014, Bostrom published Superintelligence: Paths, Dangers, Strategies, which delves into the enormous potential of AI to enrich society and the significant risk that accompanies it. This risk seems to command much of Bostrom’s attention, and he’s not alone. In July 2015, Bostrom joined figures like Elon Musk and Stephen Hawking in signing the Future of Life Institute’s open letter about the dangers of unchecked artificial intelligence in autonomous weapons. Bostrom says that the concept of an existential risk “directs our attention to those things that could make a permanent difference to our long-term future”, i.e. something that could lead to our extinction…


Link to Full Article: Existential Risks Now to Yield AI Enrichment, Not Destruction
