Elon Musk And Stephen Hawking Are Wrong About Artificial Intelligence

These questions originally appeared on Quora – the knowledge sharing network where compelling questions are answered by people with unique insights. Answers by Ramez Naam, author of the Nexus novels and climate & energy wonk, on Quora.

Q: Is AI an existential threat to humanity?

A: Elon Musk, Stephen Hawking, and others have stated that they think AI is an existential risk. I disagree. I don’t see a risk to humanity of a “Terminator” scenario or anything of the sort. Part of the confusion, I think, comes from how we use the term “AI” in reality and in fiction. In fiction, especially movies, “AI” means a self-aware, super-intelligent entity with its own goals, a very broad sort of intelligence (similar to humans), and the ability to change its goals over time.…


