Silicon Valley Leader on Why Artificial Intelligence Probably Won’t Kill Us All

More than a few of the world’s greatest minds are worried that AI could become superintelligent and potentially wipe out humanity: Stephen Hawking has gone on record saying that AI could spell the end of the human race, Oxford philosopher Nick Bostrom wrote an entire book on the topic, and Elon Musk seems to say something alarmist about artificial intelligence every week. But many experts disagree that AI poses such an imminent threat, including Silicon Valley leader and Evernote founder Phil Libin. In a recent interview on The Tim Ferriss Show, Libin argued that these fears rest on a flawed assumption: that wiping out humanity would be the “smart” thing to do:


“I’m not afraid of AI. I really think the AI debate is kind of overdramatized. To be honest with you, I kind of find it weird. And I find it weird for several reasons, including this one: there’s this hypothesis that we are going to build super-intelligent machines, and then they are going to get exponentially smarter and smarter, and so they will be much smarter than us, and these super-smart machines are going to make the logical decision that the best thing to do is to kill us.

“I feel like there’s a couple of steps missing in that chain of events. I don’t understand why the obviously smart thing to do would be to kill all the humans. The smarter I get, the less I want to kill all the humans! Why wouldn’t these really smart machines want to be helpful? What is it about our guilt as a species that makes us think the smart thing to do would be to kill all the humans? I think that actually says more about what we feel guilty about than about what’s actually going to happen.”


The wording of his statement, “the smarter I get, the less I want to kill all the humans!”, is perfect and very Internet. But beyond that, it’s an interesting point: we often take for granted that superintelligent beings, whether alien or synthetic, would decide on the basis of superior logic that the world is better off without humans. Although I would probably argue that ruining the environment is a pretty egregious offense, his point is well taken that this assumption is as much a product of our guilt as of any logical thought process:


“If we really think a smart decision would be to wipe out humanity, then maybe, instead of trying to prevent AI, it would be more useful to think about what we’re so guilty about, and let’s fix that? Can we maybe get to a point where we feel proud of our species, where the smart thing to do wouldn’t be to wipe it out?

“I think there are a lot of important issues that are being sublimated into the AI/kill-all-humans discussion that are probably worth pulling apart and tackling independently … I think AI is going to be one of the greatest forces for good the universe has ever seen and it’s pretty exciting we’re making progress towards it.”


Regardless of whether humans actually deserve to be wiped out of existence, he’s absolutely right that we should examine why we would feel that way and try to fix it. And he may also be correct that we’re racing towards true AI at breakneck speed, especially if those mildly self-aware NAO robots are any indication.

Via Vox.



