If the End of Moore's Law is Near, What's Next?

I recently read The Next Wave: A Conversation with John Markoff at Edge.org. Mr. Markoff has been a science and technology writer at the New York Times since 1988. He is also the author or co-author of several books, including the just-published Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots.

In The Next Wave, Mr. Markoff talks about a variety of topics, but I’d like to focus on what I think is the main thread running through the conversation: the state of Moore’s Law, the observation that the number of transistors in an integrated circuit doubles approximately every two years.
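To get a sense of the compounding behind that one sentence, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from the conversation; the function name and the fixed two-year doubling period are assumptions):

```python
# Back-of-the-envelope illustration of Moore's Law: transistor counts
# doubling roughly every two years.

def projected_transistors(initial_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)

# Intel's 4004 (1971) held about 2,300 transistors. Projected forward
# 44 years to 2015 at a two-year doubling period:
print(f"{projected_transistors(2_300, 2015 - 1971):,.0f}")
# -> 9,646,899,200, i.e. roughly 10 billion, the order of magnitude of
#    the largest processors actually shipping around 2015
```

Those 22 doublings in 44 years multiply the count by more than four million; it is this compounding, rather than any single generation of chips, whose possible end the rest of the conversation is about.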

Mr. Markoff says that Silicon Valley has been fundamentally about Moore’s Law and that Moore’s Law has played a central role in his career since becoming a technology reporter in 1977. But then, he says, “I suddenly discovered it was over.”

[Photo: The DRC-Hubo robot from Team Kaist of South Korea negotiates a stairway, one of a series of tasks simulating disaster response work too hazardous for humans, at the DARPA Robotics Challenge Finals, June 5, 2015, in Pomona, Calif. Team Kaist took top honors and a $2 million prize. Credit: Associated Press]

As a result of Moore’s Law, the costs of computing have fallen exponentially for almost five decades. But for the past two years, we seem to have hit a plateau, as the price per transistor has stopped falling. “I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account,” he says.

If Moore’s Law is indeed at a plateau, some of the more dramatic technology projections may well not come to pass. And there’s no more dramatic projection than the future of artificial intelligence, in particular, the potential advent of superintelligent machines that some fear “could spell the end of the human race.”

In his 2005 book, The Singularity is Near: When Humans Transcend Biology, author and inventor Ray Kurzweil predicted that exponential advances in technology, governed by what he calls the Law of Accelerating Returns, would bring us to the “singularity” around 2045, at which time “machine intelligence will be infinitely more powerful than all human intelligence combined.”

“I simply don’t see it,” says Mr. Markoff. AI has indeed made remarkable progress in some areas. “Machines, for the first time, are learning how to recognize objects; they’re learning how to understand scenes, how to recognize the human voice, how to understand human language.”

But the progress is uneven. “What hasn’t happened is the other part of the AI problem, which is called cognition. We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think,” he said.

He covered the finals of DARPA’s robotics challenge in June 2015. Twenty-three teams competed for a $2 million prize. Human operators guided their specially designed robots via wireless networks over a difficult course, where the robots had to perform eight tasks relevant to responding to a major disaster.

“It was quite an event. It was a spectacle. They built these by-and-large Terminator-style machines and the idea was that they would be able to work in a Fukushima-like environment. Only three of the machines, after these teams worked on them for eighteen months, were able to even complete the tasks,” he said. “The winning team completed the tasks in about 45 minutes. They had an hour to do eight tasks that you and I could do in about five minutes. They had to drive the vehicle, they had to go through a door, they had to turn a crank, they had to throw a switch, they had to walk over a rubble pile, and then they had to climb stairs…”

“Most of the robots failed at the second task, which was opening the door. Rod Brooks, who’s this pioneering roboticist, came down to watch and commented on it afterwards because he’d seen all these robots struggling to get the door open and said, ‘If you’re worried about the Terminator, just keep your door closed.’ We’re at that stage, where our expectations have outrun the reality of the technology.”

So, if Moore’s Law is at a plateau, and as a result, Silicon Valley might be at the end of the phase that’s been driving its progress since the 1970s, what’s next?

“Once upon a time, the center of Silicon Valley was in Santa Clara,” Mr. Markoff noted. “Now it’s moved 50 miles north [to] the foot of Potrero Hill in San Francisco. Manufacturing, which is what Silicon Valley once was, has largely moved to Asia. Now it’s this marketing and design center. It’s a very different beast than it was.”

“What worries me about the future of Silicon Valley is [its] one-dimensionality, that it’s not a Renaissance culture… It’s an engineering culture that believes that it’s revolutionary, but it’s actually not that revolutionary. The Valley has, for a long time, mined a couple of big ideas,” he said.

The first big idea was personal computing, set in motion by Doug Engelbart and later refined by Alan Kay. Then, a decade later, Mark Weiser came up with ubiquitous computing, which Mr. Markoff described as the “profound idea that computing would disappear into everyday objects, and everyday objects would become magic. The first guy to understand that and take advantage of it was Steve Jobs. Steve Jobs first turned the record player into an iPod and then he turned the telephone into a computer,” he said.

“I’m fascinated to see what the next platform is going to be. It’s totally up in the air, and I think that some form of augmented reality is possible and real. Is it going to be a science-fiction utopia or a science-fiction nightmare? It’s going to be a little bit of both.”

Technologists have long been anticipating the slowdown of Moore’s Law, if not its very end. “Predictions of the death of Moore’s law are nearly as old as the forecast itself,” notes a recent article in The Economist. “Still, the law has a habit of defying the skeptics, to the great good fortune of those of us enjoying tiny, powerful consumer electronics. Signs are at last accumulating, however, which suggest the law is running out of steam. It is not so much that physical limits are getting in the way… it is mainly because of economics.”

The Economist continues: “As originally stated by Mr. Moore, the law was not just about reductions in the size of transistors, but also cuts in their price… New ‘fabs’ (semiconductor fabrication plants) now cost more than $6 billion. In other words: transistors can be shrunk further but they are now getting more expensive. And with the rise of cloud computing, the emphasis on the speed of the processor in desktop and laptop computers is no longer so relevant… Moore’s law will come to an end; but it may first make itself irrelevant.”

What happens next? I started thinking about this question several years ago and the metaphor that came to mind was that the IT industry is entering its Cambrian phase.

The Cambrian geological period marked a profound change in life on Earth. Before it, most organisms were very simple: individual cells, plus some multi-cell organisms, such as sponges, that were sometimes organized into colonies. After a couple of billion years, evolution had deemed the cell good enough; that is, its continued refinement no longer translated into an evolutionary advantage.

Then around 550 million years ago, a dramatic change took place, known as the Cambrian Explosion. Evolution essentially took off in a different direction, leading to the development of complex life forms. “Over the following 70 to 80 million years, the rate of diversification accelerated by an order of magnitude and the diversity of life began to resemble that of today,” as the BBC puts it.

The IT industry is now going through something similar. Over the past several decades, we’ve been perfecting our digital components — microprocessors, memory chips, disks, networking and the like — and we used them to develop families of computers. That includes mainframes, minicomputers, servers, PCs, laptops and so on.

But around ten years ago, digital components became powerful, reliable, inexpensive and ubiquitous enough for the industry to move into a new phase. The widespread adoption of the Internet introduced a whole new set of technologies and standards for interconnecting all these components. Today, digital components are becoming embedded into just about everything — smartphones, Internet of Things devices, robots, consumer electronics, medical equipment, airplanes, cars, buildings, clothes and on and on. Even for data centers and large supercomputers, “the question is not how many transistors can be squeezed onto a chip, but how many can be fitted economically into a warehouse,” writes The Economist.

The underlying technology continues to be very important, but after 50 years it’s no longer the key driver of innovation, and computers themselves are no longer the industry’s preponderant product families. The digital world has now entered its own Cambrian age. Innovation has shifted to the creation of all kinds of digital life forms, and to the data-driven analytic algorithms and cognitive designs that infuse intelligence into these artificial life forms.

“What could possibly go wrong?” asked Mr. Markoff in the last paragraph of The Next Wave. “There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster and things are just running amok,” he said. “In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.”

I wholeheartedly agree.

