AI technology: Is the genie (or genius) out of the bottle?

It is with great enthusiasm and a healthy dose of angst that I am writing this post. My enthusiasm comes from the undeniable reality that artificial intelligence (AI) technology, after approximately 60 years of research-and-development breakthroughs and breakdowns, is mainstream. IBM supercomputer Watson’s February 16, 2011, decisive victory on the game show Jeopardy! over Ken Jennings (who had the longest consecutive winning streak, at 74 games) and Brad Rutter (who was the game’s largest monetary winner, at $3.25 million) has dramatically and indelibly made that point.

My angst comes from a deeply personal place; the study and development of AI was my entry point into the field of technology in 1967 as a young and (very) naïve college freshman at the University of Pittsburgh. I spent four intensive years of study with some of the leading AI technology pioneers at Pitt, Carnegie Mellon, MIT and Stanford. As excited as I was about AI’s incredibly positive potential contribution to society and mankind, I was equally tormented by the intellectual, psychological and emotional consequences inevitably associated with contributing to the development of what I thought could easily turn out to be the next atomic bomb.

For those readers whose high school history may be a bit rusty, J. Robert Oppenheimer was a theoretical physicist and professor of physics at the University of California, Berkeley, and is often referred to as one of the fathers of the atomic bomb. As a principal scientist in the Manhattan Project, the World War II project that developed and detonated the first atomic bomb, Oppenheimer was quoted as saying the experience reminded him of words from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

When I graduated from college, I opted out of artificial intelligence technology; that is, I went screaming out of the space at a million miles an hour with my hair on fire and got a job as a COBOL programmer at a bank.

AI technology before there was Jeopardy! (Pun intended.)

Most industry experts attribute the origins of the field of artificial intelligence to a conference that was held on the campus of Dartmouth in 1956. Those present — including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon — ultimately became the preeminent scientists and leaders of AI research and development. Throughout the 1960s, their early work was heavily funded by the Department of Defense and included programs that played checkers, spoke English, solved algebra problems and created proofs for logical theorems.

Significant enthusiasm for these nascent AI technologies reigned, as evidenced by a prediction from Herbert Simon: “Machines will be capable, within twenty years, of doing any work a man can do.” Minsky shared Simon’s perspective, writing, “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”

Needless to say, the early pioneers (as is typical) were a bit overly optimistic, although, in the bigger picture, perhaps not that much. The 1970s brought government funding cuts and the field went from the “Peak of Inflated Expectations” to the “Trough of Disillusionment,” to use the modern vernacular of the Gartner Hype Cycle. During the 1980s, commercial success was achieved through the development of expert systems that enhanced knowledge and analytics capabilities, and the market grew to over $1 billion — the “Slope of Enlightenment,” literally and figuratively. Enthusiasm for AI was rekindled.

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence. — Artificial Intelligence, Phil Lemmons, Byte Magazine, April 1985, p. 125

During the 1990s and into the early 21st century, the “Plateau of Productivity” included many significant and highly visible advances in AI technology, including this very small subset of examples:

  • 1994: Twin robot cars VaMP and VITA-2, developed by Ernst Dickmanns and Daimler-Benz, drove more than 1,000 kilometers on a three-lane Paris highway in standard heavy traffic at speeds up to 130 kilometers per hour.
  • 1997: IBM’s Deep Blue supercomputer became the first computer to beat a reigning world chess champion (Garry Kasparov).
  • 1998: Tiger Electronics’ Furby doll became the first successful attempt at putting an AI-based device in the home.
  • 2010: Microsoft released Kinect for the Xbox 360, which provided the first 3D body-motion interface for a game console.
  • 2011: IBM’s Watson supercomputer defeated the two most successful Jeopardy! champions to date (Jennings and Rutter).

The Hype Cycle extended: Singularity

I, as many do, believe that we are at an inflection point. The Google Search dictionary defines the term singularity as “a point at which a function takes an infinite value, especially in space-time when matter is infinitely dense, as at the center of a black hole.” Whether you believe in Moore’s Law or not, there is no question that advances in AI technology are coming more quickly and are more significant in terms of features and functions and — of critical importance — human connection. Bionic human components are no longer the stuff of science fiction or restricted to the realm of academic study. The human-computer interface is no longer just about moving a mouse, snapping a selfie or moving your arms or legs — it is about connecting and embedding electro-mechanical and silicon-based technologies to and within human physiology.

It is reasonable, I suppose, to be unconvinced … and to be skeptical about whether machines will ever be intelligent. It is unreasonable, however, to think machines could become nearly as intelligent as we are and then stop, or to suppose we will always be able to compete with them in wit and wisdom. Whether or not we could retain some sort of control of the machines, assuming that we would want to, the nature of our activities and aspirations would be changed utterly by the presence on earth of intellectually superior beings. — Marvin Minsky, “Matter, Mind and Models,” Proceedings of the IFIP Congress 1965, Vol. I, Spartan Books, 1965

Perhaps there should be a sixth stage in Gartner’s Hype Cycle called Singularity.

Ten CIO imperatives for the AI age

For CIOs, the opportunities already at hand and the new ones arising daily from AI technology are immense, if not daunting. Here are some ways in which we, as IT executives, can help to steer the course and maximize our influence on the outcomes that today we can only imagine:

  • If you are not already familiar with the basic concepts and principles of AI, read a book, take a course, talk to colleagues and acquaint yourself with the discipline.
  • Familiar or not, continue to monitor new developments in AI technology — this stuff changes faster than we do.
  • If you already have an innovation function within your enterprise (business or IT), ensure that AI is one of the disciplines being tested and developed. If you don’t have an innovation function within your enterprise, create one.
  • Hire and train the best and the brightest AI talent that you can find and afford.
  • As with all new technologies, ensure that the products and services you build with AI have commercial viability (i.e., a reasonable economic return on investment).
  • With AI applications in particular, be transparent with your internal and external stakeholders. Proper messaging in this area is critical and, if managed well, can be a competitive differentiator.
  • Where the development and deployment of smart technologies may displace human workers, ensure that you have a good HR strategy and plan. Full communication and retraining of affected staff go a long way toward minimizing resistance (sometimes even sabotage) and toward ultimate acceptance.
  • Where AI is being used in an expert advisory context (e.g., sales, service, manufacturing, forecasting), ensure impacted staff understand that the tools are being deployed to help them do a better job, increase their productivity and value, and increase customer satisfaction, which will, in turn, increase employee satisfaction and retention.
  • Of paramount importance, where AI applications are supported by big data, especially personal data, absolutely ensure that information security and data privacy policies, procedures, methods and tools are employed to protect the data from breach or unintended use (a minimal illustration of one such safeguard follows this list). The combination of big data and AI can be extraordinarily powerful, and the ways and means used to protect it need to be commensurate.
  • Spend quality time with your executive team, C-suite peers and staff to reflect upon potential ethical or moral implications of new AI-based products or applications, including how data is collected, stored, retrieved and utilized. This is especially important for those involved with medical or public infrastructure applications where lives may literally be at stake and/or significant economic or social disruptions could occur when unintended consequences happen — you know that this one is a “when,” not an “if.”
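
To make the data-protection imperative above a bit more concrete, here is a minimal Python sketch of one such safeguard, pseudonymization: replacing personal identifiers with keyed hashes before records flow into an AI or analytics pipeline. The field names, the environment-variable key and the token truncation below are illustrative assumptions, not a prescribed implementation.

    # Minimal sketch: pseudonymize personal identifiers before records enter an
    # AI/analytics pipeline, so models see stable tokens rather than raw identities.
    # Field names and key handling are assumptions for illustration only.
    import hashlib
    import hmac
    import os

    # In practice, the key would come from a managed secret store, not an environment default.
    PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-with-managed-secret").encode()

    PERSONAL_FIELDS = {"name", "email", "ssn"}  # assumed identifying fields for this example


    def pseudonymize(record: dict) -> dict:
        """Return a copy of the record with personal fields replaced by keyed hashes."""
        cleaned = {}
        for field, value in record.items():
            if field in PERSONAL_FIELDS:
                digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
                cleaned[field] = digest.hexdigest()[:16]  # stable token; not reversible without the key
            else:
                cleaned[field] = value
        return cleaned


    if __name__ == "__main__":
        sample = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 142.50}
        print(pseudonymize(sample))

A sketch like this sits alongside, not in place of, encryption, access controls and data retention policies; its value is simply that models and analysts work with tokens rather than raw personal data.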

End note: Mr. Potato Head

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. … Even a potato in a dark cellar has a certain low cunning about him which serves him in excellent stead. — Samuel Butler, Erewhon, 1872, New Zealand Electronic Text Centre

Let me know what you think. Post a comment or drop me a note at hrkoeppel@aol.com. Discuss, debate or even argue — let’s continue the conversation.

Next Steps

Recent columns from Harvey Koeppel:

CIOs can make mobile payment systems better

The enterprise data center is on life support

The digital CIO as data master
