OPINION: Why are humans so determined to create armed artificial intelligence?

(U.S. Air Force photo/Staff Sgt. Brian Ferguson)

IN the opening scenes of most of the Terminator films, the filmmakers tell the story of Armageddon.

The opening of Terminator 2 recounts how humanity created an artificial intelligence computer network called Skynet as part of an American defence programme. Once switched on, the software becomes self-aware, decides humanity is the greatest threat, hijacks the nuclear arsenal and wipes most of the species out. In the future, the survivors must face Skynet's autonomous robotic killing machines, the Terminators. The sequence includes the iconic image of a dusty human skull crushed underfoot by a grinning, laser-toting Terminator as it hunts down the human resistance.

Scenes like this from popular culture go some way to explaining why people fear the creation of military artificial intelligence programmes. That popular fear, among other things, is why the United Nations has announced it will investigate a blanket worldwide ban on autonomous machines that can make kill decisions by themselves, a ban more than 1,000 experts have called for. The UK, however, not only opposes the proposal but wants to lead the charge in developing such weapons.

Clearly, the warnings of minds as sharp as Stephen Hawking, Elon Musk and Steve Wozniak are not something military manufacturers and technology developers are much interested in hearing. And artificial intelligence is already here: its development has been one of humanity's major technological projects since the early 1980s, and we are now at the point where claims regularly surface that computers are passing the Turing Test.

Put simply, the test was designed to see whether an artificial intelligence can trick its judges into believing it is human by conversing like a real person. Last summer, claims emerged that a computer program had passed the test by convincing judges it was a 13-year-old boy. Technology journalists, however, concluded that Eugene, as the program was known, had not been entirely convincing to all of the judges. The persona was also that of a Ukrainian speaking English as a second language, which excused its broken replies.

Nevertheless, the fact that it fooled a good number of its judges suggests a full pass of the test is not far off. It is highly likely that computer intelligence and capability will overtake our own within the next century, with some scientists estimating the crossover could come as soon as the 2060s.

As seems to be the case with every technological development, military commanders have decided the plan should be to weaponise it. One obvious pairing is with the drone programmes the American military currently operates across the Middle East and Central Asia. The rise in drone use has been a defining characteristic of the foreign policy of Obama's presidency, with strikes now numbering in the thousands.

Predator and Reaper drones were developed during the 1990s and 2000s to be flown by remote control and to deploy missiles, with operators steering flight paths and pulling the trigger from thousands of miles away. The US programmes see crews in ground stations near Las Vegas and in Virginia fly these aircraft over the Middle East and Central Asia. Obama is often given the final word, receiving intelligence briefings over the breakfast table before approving a strike.

The numbers, both of strikes the US military has carried out and of people killed by them, are shrouded in secrecy; rather worryingly, US officials have admitted they do not actually know how many people drones have killed. Nor do they seem to know how many of the dead were civilians, despite reports dating back to the earliest deployments indicating that innocents have been among the casualties.

But the American public approves of the scheme, and the US military clearly finds the aircraft useful. Drones are cheaper to build than manned aircraft, and far less has to be spent on training and transporting pilots. They are also harder to detect, which might explain the frequent reports of US drone strikes in nations with which America is not technically at war.

Small, agile, remotely piloted aircraft that carry little risk of US casualties have changed American foreign policy since the first Predators flew in 1994 and Reapers entered widespread service in 2007. Despite this, it is still easy to dismiss as science fiction the idea of handing complete control of kill decisions to AI programmes.

Nevertheless, robotics experts claim such weapons are not only eminently possible but already in development, and could be deployed within ten years. That would coincide neatly with the expiry of the American moratorium on autonomous and semi-autonomous weapons, a ten-year ban on their development and use on American soil that was put in place in 2012. The UK, meanwhile, has recently announced that it wants to push ahead with development, and US and Israeli companies are interested in building such systems.

With the technology now in place to make military drones smaller while still carrying weapons payloads, handing them AI algorithms so they can make the kill themselves is a worrying prospect. It is bad enough that a Human Rights Watch study found nobody would be held accountable for a robot unlawfully killing on the battlefield; the increasingly dystopian, apocalyptic vision such weapons conjure is hardly a comfort.

The American military has reportedly tasked its scientists with determining, by the end of the year, whether it is advisable to use AI in military programmes. Approval would presumably take us one step closer to a real-life Skynet, and within the next few decades we can expect autonomous killing machines, and can dread the possibility of them going rogue. At the very least, this initial form would not have access to nuclear weapons, a restriction certainly necessary given the untold damage a rogue system could cause, and any programming would include safeguards. But it would still command lethal weaponry, and in the heat of the moment it would still have the power to decide, by itself, whom to kill.

The risk of such machines thinking for themselves and killing with no recourse, and no human means of stopping them, is a troubling one, and a concept that raises massive moral and ethical dilemmas. Yet the momentum seems to favour their arrival, and it is an uncomfortable thought that somewhere, scientists are spending their time creating something with extinction-level potential for no discernible reason.

All of which begs the question: why?
