Touted as the “new electricity”, AI is expected to transform every industry – spawning new products and services, unlocking new efficiencies, creating new business models, driving new profit pools and delivering significant financial and human value. Expectations of, and enthusiasm for, AI have reached a new high, and the technology has gained a prominent position in the C-suite and with governments as part of their broader digital transformation efforts.
But we have seen this fanfare around AI before: first from the field’s inception in the 1950s to the mid-1970s, and again from 1980 to 1987. Both periods were followed by an “AI Winter” – a stretch in which funding declined, interest waned and research in the field went underground.
Given this historical record and the prevailing optimism around AI today, it seems natural to ask: will AI face another “winter”? Are we in for déjà vu? And if so, how might business leaders and governments manage and mitigate the risks of their AI investments and ensure that AI builds human value without inflicting human cost?
To understand how business leaders and governments can keep an AI investment or project from being frozen amid costly and consequential outcomes, it helps first to examine why the earlier winters set in.
The Dartmouth Conference, held in 1956, kicked off a golden age of intense AI research with the aim “of making a machine behave in ways that would be called intelligent if a human were so behaving.”2
Buoyed by impressive early advances, AI pioneers like Marvin Minsky made bold claims: “Within our lifetime machines may surpass us in general intelligence.”3 However, computing power and storage were limited and expensive, and data was scarce, so early solutions could solve only rudimentary problems. These technology limitations led to the first AI Winter, when funding dried up and interest dwindled. The second AI Winter, which set in after 1987, was precipitated when expert systems became expensive to maintain and proved brittle when faced with unusual scenarios.
Perhaps even more detrimental was underestimating the difficulty of creating human-like intelligence, or Artificial General Intelligence (AGI).
The current AI renaissance stems primarily from overcoming the technological hurdles that plagued earlier efforts. Unlike their predecessors, however, AI specialists today are not proclaiming that AGI is at hand. Despite significant breakthroughs, Oren Etzioni, professor at the University of Washington and CEO of the Allen Institute for AI, says, “We’re so far away from…even six-year-old level of intelligence, let alone full general human intelligence…”4
While AGI may remain a long-term goal for some in the field, the current focus and enthusiasm center on Artificial Narrow Intelligence (ANI).
Cheap and abundant computing power, copious digital data generated by the proliferation of the internet, and Geoffrey Hinton’s breakthroughs in deep learning have led to an explosion of ANI applications. These applications execute single, specific tasks in a limited context very well, sometimes better than humans. Today, ANI algorithms are creating human value: powering digital voice assistants, driving product recommendations and aiding in cancer detection. They have also expanded human knowledge by finding new planets and deriving insights from human genetic data. The sheer number and diversity of commercial ANI applications is perhaps what sets this third wave of AI optimism apart.
With these accomplishments under its belt, has the eternal “spring” sprung for ANI?