Mikhail Kalashnikov holds his famed AK-47 assault rifle. Photo: AP

GIZMO GUY

Autonomous killing machines will make it easier for countries to go to war and could fall into the wrong hands, according to scientists studying artificial intelligence.

Alarmed at the prospect of another global arms race, more than 200 artificial intelligence and robotics researchers have signed an open letter calling for an international ban on artificial intelligence in warfare. The ban is aimed at fully autonomous killing machines rather than “smart” weapons such as cruise missiles and drones, which still rely on human controllers.

Artificial intelligence could be deployed on the battlefield within several years, rather than decades, researchers warn – leading to a third revolution in warfare after gunpowder and nuclear arms. While mass-produced robotic soldiers could be portrayed as a way to reduce human casualties on the battlefield, they would also lower the threshold for going to war and thus risk an escalation of conflict around the world.

Like many scientists before them, researchers working on artificial intelligence are concerned about the potential for their work to be misused. Their call for a ban on autonomous killing machines follows in the footsteps of agreements prohibiting chemical and biological weapons, along with treaties banning space-based nuclear weapons and blinding laser weapons.

Scientists fear autonomous weapons could become “the Kalashnikovs of tomorrow”, a reference to cheap and reliable automatic rifles such as the AK-47, which are ubiquitous in war-torn countries. Were cheap autonomous weapons to become as widespread, it would only be a matter of time until they appeared on the black market and in the hands of terrorists, dictators and warlords – used for tasks such as assassinations and ethnic cleansing.

“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the open letter says. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”