The case of the decapitated hitchhiking robot shows why fears of a robot uprising are way over-hyped
The tragic case of a decapitated hitchhiking robot in Philadelphia is the perfect example of why a robot uprising is not going to happen anytime soon, despite all the doomsday warnings from the likes of Elon Musk and Stephen Hawking. Humans will simply unplug or otherwise eliminate any artificially intelligent robot that poses a risk to humanity long before these robots have any chance to mount an uprising.
Which is not to say that hitchBOT posed any existential threat to humanity – it was simply a friendly three-foot-tall chatting robot with cute yellow wellies, soft blue pool noodles for arms and legs, and a few technological features: an AI conversation program called Cleverscript, a camera to snap photos, a GPS tracking device, solar panels for power, and the ability to charge from an in-car lighter socket. It’s hard to believe that anyone was afraid of it or threatened by it, and that’s why the idea of human vandals hacking off the arms and legs of an anthropomorphic robot and then cutting off its head strikes us as so very wrong.
In many ways, though, that was exactly the point of the hitchBOT social experiment – to explore how humans behave when robots are at their mercy, rather than the other way around. hitchBOT was specifically created to rely on the empathy of humans for its very existence. That strategy carried it through a mega-tour of Canada and extended adventures across Germany and the Netherlands, but lasted just about two weeks in the United States. The City of Brotherly Love, it turns out, is not anywhere close to being the City of Robotic Love.
If humans can’t trust robots, according to AI naysayers such as Musk and Hawking, maybe it’s also the case that robots can’t trust humans. As the uncanny valley hypothesis suggests, humans have a strange kind of love-hate relationship with technology that starts to become too human. At some point, in fact, we’re revolted by the appearance of robots that look almost, but not quite, human.
“It’s a very important question, to say, do we trust robots?” Frauke Zeller, an assistant professor at Ryerson University in Ontario and co-creator of the robot, asked before hitchBOT set off on its epic American quest. “In science, we sometimes flip around questions and hope to gain new insight. That’s when we started to ask, ‘Can robots trust humans?’”
Once you start thinking of hitchBOT in terms of the trust between humans and robots, there are only three possible scenarios to explain a decapitated robot:
(1) Human vandals knowingly and senselessly committed an act of violence against a machine (the most likely scenario, by far)
(2) The hitchBOT fell victim to weather or some other external event beyond the control of either humans or robots (a less likely scenario, but still one the researchers considered before the cross-country hitchhiking adventure started)
(3) The hitchBOT, for reasons not yet understood, began to act erratically – perhaps asking questions in an unsettling tone of voice, or committing some act that provoked a retaliatory act of self-defense from a human (the least likely scenario, but theoretically not impossible)
The last of these scenarios is, of course, the most interesting. The first scenario only confirms theories about the irrationality and evil of humanity when imbued with power over others (for confirmation, just consider the Stanford Prison Experiment), while the second scenario is simply inconclusive: it doesn’t teach us anything new about robot-human interactions.
So let’s push on that third scenario. That’s where the trust issue comes into play. Dystopian theories of the future consider trust from only one perspective, and fail to take into account how much anthropomorphized technology must trust humans for its very existence.
“Trust is a very important part of this experiment,” co-designer David Harris Smith of McMaster University pointed out in an interview. “There’s this issue of trust in popular media where we see a lot of dystopian visions of a future with robots that have gone rogue or out of control. In this case, we’ve designed something that actually needs human empathy to accomplish its goals.”
Robots of the future, far from being beyond our control, will probably rely on humanity for their existence. They may be smarter than us, but we will always have the upper hand because we will have built in the control mechanisms to limit the damage. And, if all else fails, sorry to say, we’ll hack off the arms, legs or head of any robot that tries to become a robot overlord.
That’s what makes it so strange that we’re having debates about the dangers of AI. At the end of July, when top AI thinkers signed an open letter warning about the risks of autonomous weapons, they suggested that unspeakable horrors lurked in the combination of AI and robotics: humans might lose control of autonomous weapons, and robots once intended for the battlefield would begin to target us instead. In the end, they warned, the combination of AI and robotics on the battlefield could only end badly.
But would it?
Think of all the times we’ve been warned about the perils of artificial intelligence before and have been proven wrong. Just because Hollywood has imprinted a dystopian view of the future on our collective consciousness doesn’t mean it’s correct. Dystopian views of the future sell movie tickets; utopian views do not.
And there are other signs that a robot uprising may be further off than once expected. Just consider the bumbling, stumbling robots of the recent DARPA Robotics Challenge. These robots could barely stand upright, let alone march as part of a vast robotic army.
Going forward, the lesson of the decapitated hitchBOT should not be some kind of confirmation bias about humanity’s inability to co-exist with other intelligent beings. It should be recognition of the fact that, even when robots are smarter than us, they will always need us to keep them company.