I binge-watched 7 movies about artificial intelligence and the most accurate one was a cartoon
The movie: Matthew Broderick plays a high school hacker named David Lightman who mistakenly hacks into a government computer in charge of the nuclear missile launch systems at the North American Aerospace Defense Command (NORAD). Thinking he’s hacked into a games company, Lightman begins to play as the Soviet Union in what he thinks is a simulation game called Global Thermonuclear War, unwittingly setting off a series of events that may lead to World War III.
The technology: The government computer, called the War Operations Plan Response (WOPR), learns from constantly running military simulations, and can autonomously target and fire nuclear missiles.
Is it possible?: WOPR combines two technologies that exist right now, so I’d say it’s possible with some time and effort, though it may not be a good idea. Like WOPR, DeepMind’s deep neural net system, the deep Q-network (DQN), learns to play video games and gets better with time. According to DeepMind’s Nature paper, the DQN was able to ‘achieve a level comparable to that of a professional human games tester across a set of 49 games.’
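To make "learns by playing and gets better with time" concrete, here is a minimal sketch of tabular Q-learning, the simple ancestor of DQN (DQN replaces the lookup table with a deep neural network). The toy corridor environment, reward values, and hyperparameters below are my own illustrative assumptions, not anything from DeepMind's system.

```python
import random

# Hypothetical toy game: a 5-cell corridor; the agent starts at cell 0
# and earns a reward of 1 for reaching the rightmost cell.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # reached the goal
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn action values by repeated play (epsilon-greedy Q-learning)."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            # Q-learning update: nudge the estimate toward reward
            # plus the discounted value of the best next action.
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# After enough play, the greedy policy moves right from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The key WOPR-like property is that nothing game-specific is hard-coded: the agent only sees states, actions, and rewards, and improves purely by playing more episodes.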
Autonomous weapons that can target and fire on their own also exist right now. One frightening real-life autonomous weapon is the Samsung SGR-1, which patrols the demilitarized zone between North and South Korea and can fire without human assistance. In the film, it’s this kind of self-targeting weapon that almost starts World War III.
The takeaway: Autonomous weapons exist right now, but I can’t think of any government that would be willing to put the most dangerous weapons known to man in the hands of an easily hackable computer that doesn’t clearly differentiate between simulations and firing real weapons. However, Tesla CEO Elon Musk, physicist Stephen Hawking, and over 16,000 AI researchers don’t want to take that chance, and recently urged the United Nations to ban the use of autonomous weapons.
WOPR also has a clear set of goals — win the game at any cost, even if it means destroying humanity. It’s a clear illustration of an AI whose single-minded pursuit of a goal could wipe out humanity, what philosopher Nick Bostrom calls an ‘existential risk.’
Source: I binge-watched 7 movies about artificial intelligence and the most accurate one was a cartoon
Via: Google Alerts for AI