The Doomsday Invention

Nick Bostrom, a philosopher focussed on A.I. risks, says, “The very long-term future of humanity may be relatively easy to predict.” Credit: Illustration by Todd St. John

I. Omens

Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully, humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential…
