Professor Murray Shanahan

Murray Shanahan is Professor of Cognitive Robotics in the Department of Computing at Imperial College London, where he heads the Neurodynamics Group. Educated at Imperial College and Cambridge University (King’s College), he became a full professor in 2006. He was scientific advisor to the film Ex Machina, and regularly appears in the media to comment on artificial intelligence and robotics. His book “Embodiment and the Inner Life” was published by OUP in 2010, and his most recent book “The Technological Singularity” was published by MIT Press in August 2015.

The Technological Singularity

In recent years, the idea that human history is approaching a “singularity” thanks to increasingly rapid technological advance has moved from the realm of science fiction into the sphere of serious debate. In physics, a singularity is a point in space or time, such as the centre of a black hole or the instant of the Big Bang, where mathematics breaks down and our capacity for comprehension along with it. By analogy, a singularity in human history would occur if exponential technological progress brought about such dramatic change that human affairs as we understand them today came to an end. The institutions we take for granted — the economy, the government, the law, the state — these would not survive in their present form. The most basic human values — the sanctity of life, the pursuit of happiness, the freedom to choose — these would be superseded. Our very understanding of what it means to be human — to be an individual, to be alive, to be conscious, to be part of the social order — all this would be thrown into question, not by detached philosophical reflection, but through force of circumstances, real and present.

What kind of technological progress could possibly bring about such upheaval? The hypothesis we shall examine in this book is that a technological singularity of this sort could be precipitated by significant advances in either (or both) of two related fields: artificial intelligence (AI) and neurotechnology. Already we know how to tinker with the stuff of life, with genes and DNA. The ramifications of biotechnology are large enough, but they are dwarfed by the potential ramifications of learning how to engineer the “stuff of mind”.

Today the intellect is, in an important sense, fixed, and this limits both the scope and pace of technological advance. Of course the store of human knowledge has been increasing for millennia, and our ability to disseminate that knowledge has increased along with it, thanks to writing, printing, and the internet. Yet the organ that produces knowledge, the brain of Homo sapiens, has remained fundamentally unchanged throughout the same period, its cognitive prowess unrivalled.

This will change if the fields of artificial intelligence and neurotechnology fulfil their promise. If the intellect becomes, not only the producer, but also a product of technology, then a feedback cycle with unpredictable and potentially explosive consequences can result. For when the thing being engineered is intelligence itself, the very thing doing the engineering, it can set to work improving itself. Before long, according to the singularity hypothesis, the ordinary human is removed from the loop, overtaken by artificially intelligent machines or by cognitively enhanced biological intelligence and unable to keep pace.

Does the singularity hypothesis deserve to be taken seriously, or is it just an imaginative fiction? One argument for taking it seriously is based on what Ray Kurzweil calls the “law of accelerating returns”. An area of technology is subject to the law of accelerating returns if the rate at which the technology improves is proportional to how good the technology is. In other words, the better the technology is, the faster it gets better, yielding exponential improvement over time.
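
To make this concrete, the law admits a minimal mathematical sketch (the single scalar capability measure $x(t)$ is a simplifying assumption here, not Kurzweil’s own notation). If $x(t)$ denotes how good the technology is at time $t$, the law says

\[
\frac{dx}{dt} = k\,x, \quad k > 0, \qquad \text{whose solution is} \qquad x(t) = x(0)\,e^{kt},
\]

so capability grows exponentially, doubling every $(\ln 2)/k$ units of time: the larger $x$ becomes, the faster it increases.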

A prominent example of this phenomenon is Moore’s Law, according to which the number of transistors that can be fabricated on a single chip doubles every eighteen months or so. Remarkably, the semiconductor industry has managed to adhere to Moore’s Law for several decades. Other indices of progress in information technology, such as CPU clock speed and network bandwidth, have followed similar exponential curves.
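
As a rough worked example (idealising the doubling period as exactly eighteen months, which the industry has only approximately matched): a decade contains $120/18 \approx 6.7$ doubling periods, so transistor counts multiply by about $2^{6.7} \approx 100$ per decade, or roughly a factor of $10^8$ over forty years.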

But information technology isn’t the only area where we see accelerating progress. In medicine, for example, DNA sequencing has fallen exponentially in cost while increasing exponentially in speed, and the technology of brain scanning has enjoyed an exponential increase in resolution.

On a historical timescale, these accelerating trends can be seen in the context of a series of technological landmarks occurring at ever-decreasing intervals: agriculture, printing, electric power, the computer. On an even longer, evolutionary timescale, this technological series was itself preceded by a sequence of evolutionary milestones that also arose at ever-decreasing intervals: eukaryotes, vertebrates, primates, Homo sapiens. These facts have led some commentators to view the human race as riding on a curve of dramatically increasing complexity that stretches into the distant past. Be that as it may, we need only extrapolate the technological portion of the curve a little way into the future to reach an important tipping point, the point at which human technology renders the ordinary human technologically obsolete.

Of course, every exponential technological trend must reach a plateau eventually, thanks to the laws of physics, and there are any number of economic, political, or scientific reasons why an exponential trend might stall before reaching its theoretical limit. But let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith.
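
A standard way to picture such a plateau (an illustrative model only; nothing in the argument depends on it) is the logistic curve

\[
x(t) = \frac{K}{1 + e^{-k(t - t_0)}},
\]

which is indistinguishable from pure exponential growth while $x$ remains far below the ceiling $K$, but flattens out as physical or economic limits begin to bite.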

Some authors confidently predict that this watershed will occur in the middle of the 21st century. But there are reasons other than prophecy, which is in any case a hit-and-miss affair, for thinking through the idea of the singularity. First, the mere concept is profoundly interesting from an intellectual standpoint, regardless of when or even whether it comes about. Second, the very possibility, however remote it might seem, merits discussion today on purely pragmatic, strictly rational grounds. Even if the arguments of the futurists are flawed, we need only assign a small probability to the anticipated event for it to command our most sincere attention. For the consequences for humanity, if a technological singularity did indeed occur, would be seismic.

What are these potentially seismic consequences? What sort of world, what sort of universe, might come into being if a technological singularity does occur? Should we fear the prospect of the singularity, or should we welcome it? What, if anything, can we do today or in the near future to secure the best possible outcome? These are chief among the questions to be addressed in the coming pages. They are large questions. But the prospect, even just the concept, of the singularity promises to shed new light on ancient philosophical questions that are perhaps even larger. What is the essence of our humanity? What are our most fundamental values? How should we live? What, in all this, are we willing to give up? For the possibility of a technological singularity poses both an existential risk and an existential opportunity.

It poses an existential risk in that it potentially threatens the very survival of the human species. This may sound like hyperbole, but today’s emerging technologies have a potency never before seen. It isn’t hard to believe that a highly contagious, drug-resistant virus could be genetically engineered with sufficient morbidity to bring about such a catastrophe. Only a lunatic would create such a thing deliberately. But it might require little more than foolishness to engineer a virus capable of mutating into such a monster. The reasons why advanced AI poses an existential risk are analogous, but far more subtle. We shall explore these in due course. In the meantime, suffice to say that it is only rational to consider the future possibility of some corporation, government, organization, or even some individual, creating and then losing control of an exponentially self-improving, resource-hungry artificial intelligence.

On a more optimistic note, a technological singularity could also be seen as an existential opportunity, in the more philosophical sense of the word “existential”. The capability to engineer the stuff of mind opens up the possibility of transcending our biological heritage and thereby overcoming its attendant limitations. Foremost among these limitations is mortality. An animal’s body is a fragile thing, vulnerable to disease, damage, and decay, and the biological brain, on which human consciousness (today) depends, is merely one of its parts. But if we acquire the means to repair any level of damage to it, and ultimately to rebuild it from scratch, possibly in a non-biological substrate, then there is nothing to preclude the unlimited extension of consciousness.

Life extension is one facet of a trend in thought known as “transhumanism”. But why should we be satisfied with human life as we know it? If we can rebuild the brain, why should we not also be able to redesign it, to upgrade it? (The same question might be asked about the human body, but our concern here is the intellect.) Conservative improvements in memory, learning, and attention are achievable by pharmaceutical means. But the ability to re-engineer the brain from bottom to top suggests the possibility of more radical forms of cognitive enhancement and re-organization. What could or should we do with such transformative powers? At least, so one argument goes, it would mitigate the existential risk posed by superintelligent machines. It would allow us to keep up, although we might change beyond all recognition in the process.

The largest, and most provocative, sense in which a technological singularity might be an existential opportunity can only be grasped by stepping outside the human perspective altogether and adopting a more cosmological point of view. It is surely the height of anthropocentric thinking to suppose that the story of matter in this corner of the universe climaxes with human society and the myriad living brains embedded in it, marvellous as they are. Perhaps matter still has a long way to go on the scale of complexity. Perhaps there are forms of consciousness yet to arise that are, in some sense, superior to our own. Should we recoil from this prospect, or rejoice in it? Can we even make sense of such an idea? Whether or not the singularity is near, these are questions worth asking, not least because in attempting to answer them we shed new light on ourselves and our place in the order of things.

The Technological Singularity is available from Amazon
https://www.amazon.co.uk/Technological-Singularity-Press-Essential-Knowledge/dp/0262527804
