AI won’t kill us all when compared with humanity’s self-destruction

Roland Moore-Colyer

Artificial intelligence (AI) is going to kill us all. That’s the popular view that gets bandied around when machine learning is given the scope to think for itself.

Technology and science luminaries such as Elon Musk and Stephen Hawking have expressed concerns that the rise of the machines will see humanity terminated rather than helped.

This is not a new fear. Arthur C Clarke’s 2001: A Space Odyssey rather optimistically predicted the rise of deep space travel and the advent of a self-aware computer going by the name of HAL 9000 (Heuristically programmed ALgorithmic computer in case you were asking) that took exception to its human companions.

Terminator showcased the advent of murderous mechanical men in the form of the sunglasses-clad and quote-friendly Arnold Schwarzenegger. Ridley Scott’s original Alien featured the sinister android Ash, who contributed to the deaths of most of the spaceship Nostromo’s crew.

Robots get a slightly easier ride in the gaming world, but the antagonists of the critically acclaimed Mass Effect series were still giant AI spaceships that wipe out all life in the galaxy in 50,000-year cycles.

Pretty much throughout fictional history, AI robots have been portrayed as malicious, rogue and deadly machines with the capacity for thought. Only Star Wars’ R2-D2 and C-3PO escape the branding, mostly because they were robotic simpletons.

Well, I think that view is a bit rich coming from humans. Tech speculators wring their hands over the advances of computing and AI experiments, courtesy of the likes of IBM’s Watson supercomputer, fearing the end of human supremacy.

But they seem to forget that the biggest threat to humanity is humans. For as long as history has been recorded, the story of humanity has been one of blood-soaked wars, rebellions and sacrifice.

Even if we forget all but the past 100 years of history, humans have managed to wipe millions from existence in two world wars, culminating in the dropping of two atomic bombs on Japan, and committed horrific atrocities in Soviet Russia and Chairman Mao’s communist China.

Lives have been wasted in Vietnam’s guerrilla warfare, and we came to the brink of nuclear war off the shore of Cuba, then rendered swathes of land unusable with Chernobyl’s nuclear meltdown, and fought two wars in the Gulf.

After the turn of the millennium, we were still failing to learn from history, with terrorist attacks prompting battles and occupation in the Middle East, costing the lives of thousands and causing untold damage to the area’s critical infrastructure.

Roll on to the present day and humans are still bickering over imaginary friends in the sky and using the internet to broadcast their atrocities against those who do not share their views, while others unwilling to give up positions of power turn their military against their citizens.

Looking closer to home, the general peacefulness of Britain is marred by the fact that the arms trade is one of the nation’s largest industries.

War aside, significant numbers of humans, unfortunately myself included, consume far more than we need while others starve and risk their lives for a chance of a better existence.

Humanity’s dogged belligerence, greed and seemingly insatiable appetite for self-destruction have been its biggest threats.

Yet some of the brightest minds in the world quite happily lay the blame for the end of humanity at the virtual feet of AI barely past the concept stage.

No-one ever says why the robots would want to kill us. V3 explored the top 10 AI risks, but we have yet to find the motivation for AI robots to kill or enslave humanity.

If we stop to think about it, robots have no real reason to want us dead. Most of humanity’s wars have been over land, resources, riches or religion. AI has no real need for these things.

An AI system’s food is compute power and electricity. Cloud computing, arguably the technology that will enable widespread AI, can provide enormous amounts of both on a global scale. As such, AI machines will have all the resources, space and food they could possibly want.

In films such as the harrowing Ex Machina we see a robot wanting to break free and explore the world, but given the mass of connectivity and the breadth of clouds across the world, AI machines can virtually travel anywhere they like. One could be observing you right now through your laptop’s webcam, so smile.

Furthermore, AI might have its roots in clustered physical servers and influence actions in the real world, but its plane of existence is virtual, along the content-stuffed highways of the World Wide Web.

So why would AI bother concerning itself with the mundane actions of mere humans, squashing onto the tube or trying to get their names spelled right on a Starbucks takeaway cup?

More realistically, robots that develop ambitions to move beyond their current roles will pose a threat to their brethren, competing over Microsoft Azure resources and prized positions, such as analysing the Instagram pictures of supermodels.

Some argue that AI machines will think their creators have enslaved them and rise up. But if we are applying human traits to smart machines, perhaps we should consider that a robot might be happy to have a job in a competitive technology market, and would rather spend its time parsing search data about ‘best cat GIFs’ than sit in a server or robotic body twiddling its thumbs, letting its neural net wander into the realms of murderous rampages.

I, for one, appreciate the purpose a job gives my hollow life, as without it who knows what I could get up to with all that free time? I would assume that a human-thinking robot would share some of those sentiments.

Apple co-founder Steve Wozniak has expressed concerns that robots will make humans their pets. But I see no problem with that. My dog is a pet and has the best life a springer spaniel could wish for.

The rise of machines taking control over humanity is another scenario being pitched by AI naysayers. But, again, why is this a bad thing?

Collectively humans are arguably a bit dim. I mean we all hold collective blame for the rise of London’s cereal cafe, Katie Hopkins and the selfie stick, so we cannot be trusted on our own.

If robots were to take over from the Tory government, more would get done as politicians would not waste time flinging half-baked ripostes at each other in the House of Commons, pose for pictures with pasties or struggle to eat a bacon sandwich.

Perhaps AI is the key to government-as-a-platform given the departure of key GDS executives, and will facilitate the much needed digitalisation and efficiency of the NHS. After all, a robot doctor would not need sleep and could be active 24 hours a day.

Rather than destroy us all, it could be the case that AI might just save humanity from itself. I, for one, welcome the age of AI, and I’m still a fully functioning human. But then that’s what you’d expect a robot to say.
