Is Artificial Intelligence an Existential Threat?

It is not unusual for disruptive technologies to be embraced and feared, and not necessarily in that order. That has been true, and will continue to be true, of any technology that brings both benefit and risk; it is a duality in which many technologies must exist. History offers examples in the airplane, the automobile, unmanned weapons systems, and now software itself, especially the software that powers artificial intelligence (AI). Last week at a U.S. governors' conference, Elon Musk, CEO of the engineering companies SpaceX and Tesla, sounded the alarm bell, reportedly telling the assembled politicians that "AI is a fundamental existential risk for human civilization." This is not the first time Musk has expressed this concern; he has been voicing it since as early as 2014. Many have branded him a Cassandra, and if he is one, he is not a lone Cassandra: Stephen Hawking, Bill Gates, and other experts share his view. It is not surprising that an equal number of experts question Musk's concern and believe his alarm bell tolls for a non-existent threat. I am not an expert in AI, nor am I an AI practitioner; I'm more of a national…

