What Experts Think About the Existential Risks of Artificial Intelligence


In January, Elon Musk, Bill Gates, Stephen Hawking, and a number of other academics and researchers wrote an open letter calling for safety standards in industries dealing with artificial intelligence.

In particular, they called for the research and development of fail-safe control systems that might prevent malfunctioning AI from doing harm — possibly existential harm — to humanity.

“In any of these cases, there will be technical work needed in order to ensure that meaningful human control is maintained,” the letter reads.

Though the letter itself was measured in tone, many saw it as the panic of a few machine-phobic Luddites, and in the ensuing backlash the press was inundated with stories quoting AI researchers who dismissed such concerns.

“For those of us shipping AI technology, working to build these technologies now,” Andrew Ng, an AI expert at Baidu, told Fusion, “I don’t see any realistic path from the stuff we work on today — which is amazing and creating tons of value — but I don’t see any path for the software we write to turn evil.”

Now the backlash to the backlash is here. Scott Alexander has compiled a list of respected academics in the field of AI research who harbor the same concern about the existential risk posed by AI.

Fear and Caution

Stuart Russell, a professor of computer science at UC Berkeley and author of a popular college textbook on AI, compared the role that the study of fail-safe mechanisms plays in AI to the role that the study of containment plays in nuclear engineering.

“Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours,” Russell wrote on Edge.

Russell isn’t alone among AI researchers. A 2014 survey conducted by Vincent Müller and Nick Bostrom of 170 of the leading experts in the field found that a full 18 percent believe that if a machine superintelligence did emerge, it would unleash an “existential catastrophe” on humanity.

A further 13 percent said that advanced AI would be a net negative for humans, and only a slight majority said it would be a net positive.

Conflicted as AI researchers are about the ultimate value of machine intelligence, most agree that human-level AI and beyond is all but inevitable: the median respondent put the chance of creating human-level AI by 2050 at 50 percent, and by 2075 at 90 percent. Most expect superintelligence, meaning intelligence that surpasses that of humans in every respect, within a mere 30 years of that milestone.
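To make the survey’s headline figure concrete, here is a minimal sketch in Python of how such a median is computed. The response values below are invented purely for illustration; they are not the Müller and Bostrom data.

```python
from statistics import median

# Hypothetical expert responses: each value is one respondent's estimated
# probability (in percent) that human-level AI arrives by 2050.
# Invented for illustration; NOT the Muller/Bostrom survey data.
responses_2050 = [10, 25, 40, 50, 50, 60, 75, 80, 90]

# "The median respondent put the chance at 50 percent" means exactly this:
# sort the individual estimates and take the middle value.
print(f"Median estimate for human-level AI by 2050: {median(responses_2050)}%")
```

The median is a natural summary for a question like this because it is insensitive to a few extreme forecasts in either direction.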

I, Self-Driving Truck

What might a machine uprising look like? In science fiction films, the rise of the machines is usually depicted as an android takeover: “The Terminator,” “I, Robot,” and “The Animatrix” all show humanity succumbing to robots made in the image of man.

The way machine takeovers are depicted might lead a naive observer to think that any existential risks from AI could be avoided by simply not constructing any androids. After all, AI is fundamentally just a software program in a computer.

However, machine intelligence wouldn’t necessarily need a custom physical substrate to hold power and influence in the real world: it could simply take over everyday devices and leverage that access into further control.

Horror novelist Stephen King imagined such a scenario in a short story called “Trucks.” Written in the 1970s, the story is about a world in which trucks mysteriously become sentient and start attacking humans.

A group of humans trapped in a diner amid the chaos tries to wait out the trucks, which still need gasoline to run. Eventually, the trucks honk their horns in Morse code, ordering the humans to refuel them or be crushed to death, and the humans relent.

The story ends with the narrator hallucinating about the trucks enslaving all of humanity and imposing truck values, like the construction of highways everywhere, onto the globe, only to be shaken out of his daydream by the sound of planes overhead.
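The trucks’ signaling trick, at least, is technically trivial: Morse code is just a fixed mapping from characters to dots and dashes. A minimal encoder in Python might look like the sketch below; the sample message and the abbreviated code table are hypothetical, covering only the letters the example needs.

```python
# A minimal text-to-Morse encoder, illustrative of the signaling in "Trucks".
# The sample message and the abbreviated table are hypothetical.
MORSE = {
    'E': '.',   'F': '..-.', 'L': '.-..',
    'S': '...', 'U': '..-',  ' ': '/',  # '/' conventionally separates words
}

def to_morse(message: str) -> str:
    """Encode each character, separating Morse symbols with spaces."""
    return ' '.join(MORSE[ch] for ch in message.upper())

print(to_morse('fuel us'))  # -> ..-. ..- . .-.. / ..- ...
```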

King’s short story may have seemed futuristic when it was written, but it is not so far removed from reality today. Self-driving trucks are already operating on public roads, and economic pressure may push companies to replace truck drivers with autonomous navigation systems in the near future.

The Sony hacking incident last year amply demonstrated that our information systems are becoming more and more vulnerable; that vulnerability is not a bug but an inherent feature of moving ever more of our infrastructure into digital space.

Still, there is no reason to panic yet. Artificial intelligence has great strides to make before it can match the brain, as the current state of self-driving cars makes plain: they remain crude enough that navigating a dense urban environment is still experimental.

Source: The Epoch Times
http://www.theepochtimes.com/n3/1366189-what-ai-experts-think-about-the-existential-risk-of-ai/
