Why Would AI Want to Harm Humanity?

I’m very interested in the state of artificial intelligence (AI) these days and in how people are choosing to pursue the technology. There’s an ongoing debate that sways back and forth: one side argues we should pursue AI as aggressively as possible, while the other errs on the side of caution.

Honestly, I don’t know where I stand on that spectrum quite yet. There are definitely some amazing applications on the horizon, but when people like Stephen Hawking and Elon Musk caution against it, I tend to pay attention.

Regardless, it’s something I dig into every chance I get – I’ve got a bunch of Google Alerts set up to ping me when interesting AI news pops up. Lo and behold, one of them pointed me to a great article penned by the folks over at Vox.com detailing Evernote Executive Chairman Phil Libin’s thoughts on AI.

They were reporting on a conversation he had with everybody’s favorite, Tim Ferriss, and included part of the transcript covering Libin’s stance on AI (the full audio interview can be found on Tim Ferriss’ site):

“I’m not afraid of AI. I really think the AI debate is kind of overdramatized. To be honest with you, I kind of find it weird. And I find it weird for several reasons, including this one: there’s this hypothesis that we are going to build super-intelligent machines, and then they are going to get exponentially smarter and smarter, and so they will be much smarter than us, and these super-smart machines are going to make the logical decision that the best thing to do is to kill us.

I feel like there’s a couple of steps missing in that chain of events. I don’t understand why the obviously smart thing to do would be to kill all the humans. The smarter I get the less I want to kill all the humans! Why wouldn’t these really smart machines not want to be helpful? What is it about our guilt as a species that makes us think the smart thing to do would be to kill all the humans? I think that actually says more about what we feel guilty about than what’s actually going to happen.

If we really think a smart decision would be to wipe out humanity then maybe, instead of trying to prevent AI, it would be more useful to think about what are we so guilty about, and let’s fix that? Can we maybe get to a point where we feel proud of our species, and like the smart thing to do wouldn’t be to wipe it out?

I think there are a lot of important issues that are being sublimated into the AI/kill-all-humans discussion that are probably worth pulling apart and tackling independently … I think AI is going to be one of the greatest forces for good the universe has ever seen and it’s pretty exciting we’re making progress towards it.”

There’s no denying it: Libin makes some compelling points in this conversation. Notably, I think he hits the nail on the head when he asks why these intelligent machines would want to destroy us in the first place. Perhaps it’s all just Hollywood-fueled fear after all?

We’re curious to hear what you guys think about this, though. If you’re pro-AI, does Libin offer a satisfying vindication of your beliefs here? Did he miss anything? If you’re anti-AI, how do you internalize all of this?

Image Credit: Pixabay




Source: Why Would AI Want to Harm Humanity?

Via: Google Alerts for AI