The fear of intelligent machines draws analogies with the Luddites' rejection of 19th-century industrial methods that increased production at the cost of human labour, and with 21st-century concerns over foreigners taking jobs.
The Fourth Industrial Revolution's intelligent technologies will carry out many of the jobs we humans do, but much better, across industries including finance, law, education and transportation. Machines will observe the results of their own behaviour and modify their own programmes "so as to achieve some purpose more effectively" (Turing, 1950, p. 449). Such intelligent machines are not the biggest existential risk to our species. The mercantile human is, and has been throughout history.
The human species is a mercantile breed for whom trade has eclipsed benevolence over the centuries. Past great empires were enriched on the backs of human slaves (Frankopan, 2016). Even the vegetarian Leonardo da Vinci's ingenuity extended to military designs that could hurl inflammable materials, causing serious injury to the enemies of the 15th-century Milanese (White, 2001).
Humanity's penchant for personal pleasure through cruelty to others is viscerally captured in the new TV series Westworld (2016). It depicts a future game world in which wealthy guests, the outsiders, are hosted by humanoids on whom the vilest acts can be perpetrated through depraved scenarios designed by the adventure park's narrative department. Human visitors act out their desires by inflicting pain, after which each robot's memories of the session are wiped to erase the suffering: "these violent delights have violent ends" (Shakespeare, quoted in Westworld, 2016). We need only open a newspaper on any given day to learn what horrors we humans wreak on each other.
Nonetheless, the "limitations of the human intellect" (Turing, 1950, p. 445) require us to develop smart machines. Guidelines on future and emerging technologies (Palmerini et al., 2014), applied by multidisciplinary teams learning from and reducing the mistakes of "sufficiently elaborate" machines (Turing, 1951, p. 473; Shah, 2013), could ensure that even if such machines do "outstrip our feeble powers" they do not "take control" (p. 475). Advances in intelligent technologies, through following instruction and learning from experience, will produce driverless vehicles that reduce road casualties, bots that enhance student engagement in pedagogy, medics who diagnose accurately and sooner, surgical operations that are increasingly error-free, deep learning programmes that improve the performance of our savings and investments, conversational humanoids that attend the elderly and the unwell for dignified living, sophisticated programmes that help us harness nature and prevent damage from weather-related disasters, and all-seeing machines (Person of Interest, 2015) that monitor to protect our privacy and secure us from cybercriminals. Why would we not want this future?
Anxiety over future machines intent on malfeasant behaviour is valid, but so is dread of the Homo sapiens who have put aside humane actions to secure competitive trade advantages for their own particular group. Cooperative interdisciplinary human teams constructing technologies bound by social, cultural, ethical, moral and legal considerations could lead to equitable sharing of the planet's natural resources. Artificial Intelligence does not have to be a threat to humankind; it can help us preserve our species for longer.
References
Frankopan, P. 2016. The Silk Roads: A New History of the World. Bloomsbury, London, UK.
Palmerini, E., Azzarri, F., Battaglia, F., Bertolini, A., Carnevale, A., Carpaneto, J., Cavallo, F., Di Carlo, A., Cempini, M., Controzzi, M., Koops, B.J., Lucivero, F., Mukerji N., Nocco, L., Pirni, A., Shah, H., Salvini, P., Schellekens, M. and Warwick, K. 2014. Guidelines on Regulating Robotics. Deliverable D6.2 EU FP7 RoboLaw project: Regulating Emerging Robotic Technologies in Europe-Robotics Facing Law and Ethics, SSSA-Pisa. Report accessible from http://www.robolaw.eu/
Person of Interest. 2015. CBS: http://www.cbs.com/shows/person_of_interest/
Shah, H. 2013. Conversation, deception and intelligence: Turing’s question-answer game. Chapter in Cooper, S.B. and van Leeuwen J. (Eds), Alan Turing: His Work and Impact, pp. 614-620. Elsevier.
Turing, A.M. 1951. Intelligent Machinery, A Heretical Theory. In Copeland, B.J. (Ed.), The Essential Turing: The Ideas that Gave Birth to the Computer Age. Oxford University Press, Oxford, UK.
Turing, A.M. 1950. Computing Machinery and Intelligence. Mind, 59(236), 433-460.
Westworld. 2016. HBO: http://www.hbo.com/westworld/about/index.html
White, M. 2001. Leonardo Da Vinci: The First Scientist. Abacus, London, UK.
Bio
Huma Shah is an AI Research Scientist and Senior Lecturer in the School of Computing, Electronics and Mathematics at Coventry University. She earned her PhD, 'Deception-detection and machine intelligence in practical Turing tests', from Reading University. She has designed and conducted original experiments based on the ideas of the 20th-century mathematician and codebreaker Alan Turing, exploring the intellectual capacity of machines through question-answer interviews. She has published over 30 peer-reviewed articles on fundamental machine intelligence in journals and at international conferences. Huma has co-organised two Loebner Prizes for Artificial Intelligence (2006 at UCL, UK; 2008 at Reading University, UK). She organised and chaired a half-day public workshop on the 'Benefits of Artificial Intelligence' in July 2016. Huma collaborated on the EU FP7-funded RoboLaw project, Regulating Emerging Robotic Technologies in Europe: Robotics Facing Law and Ethics, whose dissemination included a final report, Guidelines on Regulating Robotics (http://www.robolaw.eu/). Huma believes deep learning machines could assist humans with the biggest challenges facing Homo sapiens. Widening the stakeholder group to ensure ethical, legal, moral and social issues are embedded in intelligent machines could ensure we don't sleepwalk into a world of smart technologies enslaving or annihilating humanity.