Why it's too late to prevent the robot apocalypse
Robots are going to murder you unless we stop them. Or at least, that’s the message from hundreds of experts in artificial intelligence who recently banded together to sign a very scary-sounding letter.
Revealed late last month at the big International Joint Conference on Artificial Intelligence in Buenos Aires, the note — bearing the signatures of Elon Musk, Stephen Hawking, and plenty of heavies from Apple and Google — explained that autonomous battle robots must literally be stopped before they kill us all. The fear goes something like this: These automated killing machines might fall into the wrong hands, triggering a worldwide arms race that will make deathbots as ubiquitous tomorrow as machine guns are today.
This is a weird time to raise these objections. Although technology has certainly sped up the timetable for putting mechanized warriors onto the battlefield, it’s been clear for decades that the fights of the future will not involve humans alone. We have nightmares about this stuff — and turn them into movies — because we know these fights are coming. We’re already far enough along in robotics that mere warnings, or even laws, can’t stop what’s next. The killer robots are coming.
Let’s remember, too, that machines built to kill aren’t the only ones worth fearing. Hand a person a gun, and he becomes more dangerous. But people are plenty dangerous without guns. The same is true for robots: they don’t have to be built to kill to pose a threat.
Take swarm robots. These little critters, dubbed Kilobots by researchers who have created over 1,000 of them, can be programmed for independent and group control, thanks to a new platform — open-source and free to download — called Buzz. It won’t be long before, without firing a shot, you can create mayhem on or off the battlefield by sending waves of swarmers into a flight path or into heavy machinery. Who needs a gun when you can pilot thousands and thousands of bullets?
The Marines are already hard at work developing low-cost swarmbots that can thwart cheap and effective enemy drones. The Navy is hurrying along a robo-boat designed to seek out hard-to-detect diesel-electric submarines on missions lasting up to three months at sea. It’s not armed, but it is autonomous — more than your garden-variety drone can say for itself.
Between drones and other remotely operated weapons, quasi-robots are already part of the battle space. Meanwhile, even ostensibly “civilian use” robots can pose a dramatic threat, especially, of course, when hacked. Put two and two together, and the only surefire way to prevent robots from going to war is to shut down robotics altogether. And that is not going to happen.
As signatory and Apple co-founder Steve Wozniak noted: “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”
That’s way more than a military problem. It’s the same nightmare we’ve had since we dreamed up golems. Rather than fearing robots or the future, what we ought to fear most — and defend against — is the idolatrous fascination with power that drives us to build humanity-threatening robots in the first place.
Sure, scary letters can help us keep up our guard. But if we keep building robots to take care of everything for us… well… game over, man.