I'm more afraid of my fellow humans

You’ve seen the plotline before: man builds robot, robot turns evil, man and robot fight for survival.

It’s the scenario that futurists Elon Musk, the head of Tesla and SpaceX, and British physicist Stephen Hawking want to keep in the realm of sci-fi.


In a letter released at the International Joint Conference on Artificial Intelligence last week, the two joined Apple co-founder Steve Wozniak and 1,000 other artificial intelligence and robotics researchers in protesting the production of autonomous weapons, better known as killer robots. They are concerned about potential malfunctions causing human casualties, ethical questions about the weapons' use and the possibility that robots could take over the world.


They’re wasting their time. Go for it anyway. What’s the worst that can happen?

The arguments from Musk and Hawking seem to come down to human preservation. They are afraid advanced robotics will numb any aversion to warfare and lead to more human casualties. Robots fighting wars would make the decision to go to war easier, they argue. But first I want to meet them at their greatest concern, which is also the answer to my question: human extinction.

If — and that’s a big if — robots wipe out humanity, that would mean we made them so advanced as to merit their survival and not ours. More than 99 percent of all species that have ever lived on Earth are now extinct. Humans, or Homo sapiens, will more likely than not be part of that 99 percent. Human extinction is almost a natural certainty.

Also, if we made robots with such advanced self-awareness, reason and consciousness as to determine humanity as a threat, then we should hand over Earth to them. Maybe they’d find a way to live peacefully and a little more pollution-free.

So Hawking would say let’s just not take that chance. But the likelihood that advanced robotics saves human lives far outweighs the dangers raised by Musk and his fellow death-knell ringers.

We already have drones conducting much of our country’s dirty work. Assuming we develop AI as capable as human soldiers, robots fighting for humans against other humans would mean fewer casualties, on one side at least.

The letter also raises the problem of robots becoming “the Kalashnikovs of tomorrow,” meaning they could be easily mass-produced weapons. But if future conflicts are resolved by advanced robot competitions, that’s more humans on the sidelines.

Halting government funding for robotic warfare, as their letter proposes, will not stop the natural human curiosity about how technological advances can be turned into weapons. Some examples:

We learned how to build a boat? Let’s put cannons on it.

We learned how to fly a plane? Let’s drop bombs from it.

We learned how to make a drone? Let’s shoot laser-guided missiles from it.

AI will most certainly be outfitted for warfare in one way or another, so we should give it some government funding and oversight.

Contrary to what you may believe, Harvard psychologist Steven Pinker has shown that deaths from war per 100,000 people are lower now than at any time in history. Whether or not you think the world is getting more peaceful, putting robots both behind and in front of weapons will mean fewer human deaths. And if Hawking’s worst nightmare does come true, I’m more afraid of my fellow humans than of WALL-E with a gun.

Christopher Leelum, a student at Stony Brook University, is an intern with Newsday and amNewYork.
