The Future of Work: With Us, or Against Us?


For the third time in six decades, a great debate over the impact of automation has emerged. This one is driven by advances in artificial intelligence that threaten to make white-collar jobs obsolete and that have touched even high-status professions such as medicine and law.

John Markoff writes about computing, robotics, and artificial intelligence for the Science section of the New York Times.

The emerging information economy has created a Rashomon moment. On one side are computer scientists such as Moshe Vardi of Rice University, who argue that machine intelligence is advancing so rapidly that virtually all human labor will be within the reach of robots and artificial intelligence software in just three decades. On the other side, the International Federation of Robotics argues that manufacturing robotics will be a dramatic job generator, spinning off employment by creating immense new economic activity.

Whichever account proves correct, disruption and destruction are possible as Internet-enabled software systems and dexterous mobile machines become commonplace in workplaces, classrooms, factories, hospitals, and assisted-living facilities.

But the challenge presented by the new wave of artificial intelligence is an opportunity to remind ourselves that the future economy will be designed by humans. Nothing is inevitable, and indeed the designers of next-generation industrial machines and software have an unparalleled opportunity to design humans into, or out of, the systems they create.

As microchips have fallen in cost and the Internet has interwoven the world, computing has extended beyond desktop computers, laptops, and smartphones to become an Internet of things that increasingly shapes a vast swath of products. As we design factories, weapons of war, and driverless automobiles, it will increasingly be software and hardware engineers and computer hackers who define the future. They will have to choose between conflicting strategies that date back to the dawn of interactive computing.

At the outset of the Information Age, two researchers independently set out to invent the future of computing. They established research laboratories roughly equidistant from the Stanford University campus.

In 1964, John McCarthy, the mathematician and computer scientist who had coined the term “artificial intelligence,” began designing a set of technologies intended to simulate human capabilities, a project he believed could be completed in just a decade.

At roughly the same moment, on the other side of campus, Douglas Engelbart, an electrical engineer and a dreamer intent on using his expertise to improve the world, believed that computers should be used to “augment” or extend human capabilities, rather than to mimic or replace them. He set out to create a system to permit small groups of workers to quickly amplify their intellectual powers and collaborate.

In short, McCarthy sought to replace human beings with intelligent machines, while Engelbart aimed to extend human capabilities. The distinction between artificial intelligence (AI) and intelligence augmentation or amplification (IA) largely defines the modern computing world.

Of course, the two ideas together define both a dichotomy and a paradox. The paradox is that the same technologies that extend the intellectual power of humans can displace humans as well.

In the intervening decades these two contrasting philosophies have created two camps—artificial intelligence researchers and human-computer interaction designers—who largely proceed in isolation from one another.

The AI group is exemplified by researchers like Mark Raibert, the founder of Boston Dynamics, a robotics research firm acquired by Google in 2013, and by Google’s extensive robotics design group. Boston Dynamics recently released a striking video of a four-legged, dog-like robot that could function either as a military scout or as a Google delivery vehicle capable of climbing residential steps to drop off packages.

The augmentation ideal is perhaps best expressed by Tom Gruber, an Apple engineer who was an architect of Siri, the speech-based personal assistant built into every iPhone. The augmentation approach is also being pursued by small firms such as Magic Leap, an augmented reality company that wants to extend the power of personal computing by using glasses to overlay a virtual world on the physical one we see with the unaided eye. In the future it may be possible to project ultrahigh-resolution three-dimensional images and animations that appear just as real as the physical objects around us.

It is important to bridge the divide between these two contrasting approaches to designing technology, for engineers and scientists in both camps are likely to play a disproportionate role in shaping our future.

Nearly a century ago, Thorstein Veblen wrote an influential critique of the turn-of-the-century industrial world, The Engineers and the Price System. He argued that, because of the power and influence of industrial technology, political power would flow to engineers, who could parlay their deep knowledge of technology into control of the emerging industrial economy.

Veblen was writing during the Progressive Era, looking for a middle ground between Marxism and capitalism. Perhaps his timing was off, but his basic point may yet prove correct. Today, the engineers who are designing artificial-intelligence-based programs and robots will have tremendous influence over how we use them and whether they will be our slaves, partners, or masters.

Will a world watched over by what the ’60s poet Richard Brautigan described as “machines of loving grace” be a free world? The best way to answer questions about the shape of a world full of smart machines is to understand the values of those who are actually building these systems.

At the very dawn of the computer era, in the middle of the last century, Norbert Wiener, the MIT mathematician whose book Cybernetics touched off the first debate about machines, robots, and the future of work, issued a warning about the perils of automation: “We can be humble and live a good life with the aid of the machines,” he wrote, “or we can be arrogant and die.”

It is still a fair warning.

For the Future of Work, a special project from the Center for Advanced Study in the Behavioral Sciences at Stanford University, business and labor leaders, social scientists, technology visionaries, activists, and journalists weigh in on the most consequential changes in the workplace, and what anxieties and possibilities they might produce.
