Can The Existential Risk Of Artificial Intelligence Be Mitigated?

It seems like every day we’re warned about a new AI-related threat that could ultimately bring about the end of humanity. According to author and Oxford professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence them might surprise you.

Image Credit: TED

Bostrom defines an existential risk as one that threatens the extinction of Earth-originating life or the permanent and drastic destruction of our future development, but he also notes that there is no single methodology applicable to all the different existential risks (as elaborated upon more technically in this Future of Humanity Institute study). Rather, he considers it an interdisciplinary endeavor. “If you’re wondering about asteroids, we have telescopes we can study them with, we can look at past crater…

