Stephen Hawking & Friends Team Up To Ban Artificial Intelligence Weapons


Some of the World’s Most Intelligent People Want to Ban Artificially Intelligent Weapons

Amazon is set to take the logistics world by storm once its drone program comes to fruition. Its quad-copter and multi-rotor drones will use the buyer’s address to find them and deliver their purchase right at their doorstep, using GPS for navigation and a rudimentary form of AI to avoid obstacles. Now imagine that delivery subroutine repurposed for military ends: the same drones could be used to track persons of interest and deliver, or drop, their “package” on the target’s residence. It’s not so far-fetched; in fact, it might already be in trials.
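To make the repurposing risk concrete, here is a minimal, hypothetical sketch of the kind of delivery loop described above: fly toward a GPS-style waypoint and sidestep obstacles along the way. Every name in it (fly_to, distance, detect_obstacle, the fake sensor) is invented for illustration and has nothing to do with Amazon’s actual software.

```python
# Hypothetical delivery-drone loop: head for a waypoint, dodge obstacles.
# All names and the random "sensor" are invented for illustration.
import math
import random

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def detect_obstacle():
    """Stand-in for a real sensor; randomly reports an obstacle."""
    return random.random() < 0.2

def fly_to(position, target, step=1.0):
    """Advance one step at a time toward the target, sidestepping when blocked."""
    while distance(position, target) > step:
        if detect_obstacle():
            # Rudimentary avoidance: offset laterally, then try again.
            position = (position[0] + step, position[1])
            continue
        dx, dy = target[0] - position[0], target[1] - position[1]
        d = distance(position, target)
        position = (position[0] + step * dx / d, position[1] + step * dy / d)
    return target

home, doorstep = (0.0, 0.0), (10.0, 25.0)
print("Delivered at", fly_to(home, doorstep))
```

The unsettling part is how little would have to change: swap the doorstep coordinates for a target’s and the payload for something else, and the same loop still “delivers.”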

These artificially intelligent killer quad-copters are only a few years away, and many are alarmed about their potential impact as weapons of war and terrorism. Some of the world’s most intelligent people, including Stephen Hawking himself, genius engineer and Apple co-founder Steve Wozniak, and Tesla and SpaceX CEO Elon Musk, along with other recognizable names in science and technology, issued a warning through an open letter about the dangers of developing offensive autonomous weapons and the arms race that would follow. Other signatories include Skype co-founder Jaan Tallinn, DeepMind CEO Demis Hassabis, and Microsoft Research managing director Eric Horvitz. They’re joined by hundreds of AI and robotics researchers.

“…The key question for humanity today is whether to start a global arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

— Excerpt from the open letter, Future of Life Institute

“Kalashnikov” refers to the now-ubiquitous AK-47 assault rifle: everybody has one. The same could become true for killer drones. Once a working algorithm has been developed that fits on an SoC or an SD card, every nation or right-wing organization could have killer drones, because the parts can be bought from most electronics stores (we can no longer say RadioShack). Pocket-sized computers are everywhere, including smartphones, which are already being used to trigger improvised explosives.

The prospect of minimizing a military’s own wartime casualties by using autonomous weapons is just too good to resist. And not only would casualties be minimized; efficient, well-made autonomous systems could rack up the body count within enemy lines. The country with the best offensive autonomous system could quickly finish off its opponents by swarming them with fearless attack drones. Such is the potential of robotic AI weapons; unequipped militaries wouldn’t stand a chance.

By banning research and development of such weapons, a new arms race would be avoided, one with the potential to wipe out the human race, according to Stephen Hawking:

“…full artificial intelligence could spell the end of the human race.”

— Stephen Hawking, BBC interview

Full artificial intelligence would be equivalent to self-awareness, and science fiction has already given us one machine that quickly decided humans should be exterminated.

The Path to Skynet

From Stephen Hawking’s statement, it can be gathered that full AI would be the culmination of competing research and development by countries locked in an autonomous weapons arms race. Each machine must be smarter than the last, and the race goes on until self-awareness itself comes into play.

But would a machine’s self-awareness ultimately lead to killer robots? Science fiction has tackled the idea many times: machines can be too smart for their own good. Smart machines given a specific set of instructions, as in many of Isaac Asimov’s novels, can still deviate from them; even instructions like Asimov’s Three Laws of Robotics, designed to safeguard humanity from killer robots, can be interpreted differently. Stephen Hawking made the same point: machines can evolve at a much faster rate than we do, and one with the same self-awareness as a human being might “misunderstand” such instructions. Self-awareness might not even be intended, but could happen accidentally, as with Skynet.

A good example is the film I, Robot, where the central intelligence VIKI derives a new law from the original three in order to ‘protect’ humanity from itself by taking away its freedom. Another is Virus, whose machine intelligence deems humans akin to a virus that must therefore be exterminated. The more recent Avengers: Age of Ultron paints its main antagonist in the same light: the Earth is better off without humans.

One tech celebrity however begs to differ:

“…I just don’t see the thing to be fearful of… We’ll get AI, and it will almost certainly be through something like recurrent neural networks. And the thing is, since that kind of AI will need training, it won’t be ‘reliable’ in the traditional computer sense. It’s not the old rule-based prolog days…”

— Linus Torvalds, creator of Linux, in a recent Slashdot interview
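For readers unfamiliar with the term, below is a minimal sketch of the recurrent neural network idea Torvalds mentions, written in plain NumPy. The dimensions and random weights are stand-ins invented for illustration; in a real system the weights would come from training, which is exactly his point: the behavior lives in learned numbers, not in rules anyone wrote down.

```python
# Minimal recurrent neural network (RNN) step in plain NumPy.
# All sizes and weights here are illustrative stand-ins, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

# Tiny RNN: 4 inputs, 8 hidden units, 2 outputs.
W_xh = rng.standard_normal((8, 4)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((8, 8)) * 0.1   # hidden -> hidden (the recurrence)
W_hy = rng.standard_normal((2, 8)) * 0.1   # hidden -> output

def rnn_step(x, h):
    """One time step: the new hidden state depends on the input AND the previous state."""
    h = np.tanh(W_xh @ x + W_hh @ h)
    y = W_hy @ h
    return y, h

h = np.zeros(8)
for t in range(5):                         # feed a short sequence of inputs
    x = rng.standard_normal(4)
    y, h = rnn_step(x, h)
    print(f"step {t}: output = {np.round(y, 3)}")
```

Notice that nothing in rnn_step spells out a decision rule: change the weights and the same code behaves differently, which is why such a system isn’t “reliable” in the traditional, rule-based sense.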

Even If We’re Not There Yet

As Linus Torvalds suggests, full AI is still very far off. In the current killer-drone scenario, drones don’t need to be self-aware; they need only be aware of their surroundings in order to avoid obstacles and identify opponents. A drone needs to be only as smart as an insect until it reaches its target. Once it reaches its intended target, it must be able to properly identify that target: mistakes are unacceptable when human lives are involved.

A good example is in Iron Man 2, where one of Ivan Vanko’s drones can’t differentiate between Iron Man and a child wearing a mask. Even the extremely intelligent JARVIS fails to differentiate Pepper Potts from the other Extremis-powered enemies in Iron Man 3. Radical changes in appearance could make an AI system that depends on recognition technologies fail. Enemies could easily slip past blockades dressed as friendlies unless tracking devices are used, but even tracking frequencies can be hacked and duplicated, and tags can be stolen. In short, faulty AI would be dangerous to the army that employs it.
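The identification problem those films dramatize boils down to a toy decision rule: engage only when a recognizer’s confidence clears a threshold. The function, scores, and threshold below are invented for illustration, not drawn from any real system.

```python
# Toy target-identification rule: engage only above a confidence threshold.
# The scores and threshold are invented for illustration.
def identify(confidence_hostile, threshold=0.99):
    """Abstain unless the recognizer is nearly certain; mistakes cost lives."""
    return "engage" if confidence_hostile >= threshold else "hold"

print(identify(0.999))                  # confident hostile -> "engage"
print(identify(0.80))                   # ambiguous (a child in a mask?) -> "hold"
print(identify(0.95, threshold=0.9))    # a lax threshold engages anyway
```

A mask or a disguise shifts the score the model sees, not the ground truth, so a strict threshold lets disguised enemies slip through while a lax one endangers civilians; either way, the fault lies with the recognizer, not the rule.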

AI Regulation

The Future of Life Institute’s plea is unlikely to stop AI weapons development outright. Instead, the institute may have to settle for regulation of the kind already applied to nuclear, chemical, and biological weapons: parties capable of doing R&D must do so responsibly, and even in times of war there should be room for some humanity; an AI weapon that takes no prisoners is unacceptable.

AI research in other fields need not be banned, given its high potential. The Future of Life Institute’s letter considers this as well:

“…Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons – and do not want to tarnish their field by doing so, potentially creating a major backlash against AI that curtails its future societal benefits.”

Hopefully, AI weapons R&D won’t go into full swing, even with no shortage of talented minds in the field. Life becomes cheap in wartime; getting killed by emotionless, soulless machines would only make it cheaper, if there is such a concept.

Source: Stephen Hawking & Friends Team Up To Ban Artificial Intelligence Weapons

Via: Google Alerts for AI
