InsightaaS: Deloitte University Press is a source of deep, thought-provoking material on a wide range of technology and management issues. The site’s mandate is to publish “original articles, reports and periodicals…to draw upon research and experience from throughout our professional services organization, and that of coauthors in academia and business, to advance the conversation on a broad spectrum of topics of interest to executives and government leaders.” Today’s feature illustrates why we are so fond of highlighting Deloitte U on ATN: it provides a view of artificial intelligence that does indeed “advance the conversation” by focusing on what AI can do today, rather than on fueling the excitement/disappointment cycle that has typified AI in the past.
Today’s featured post begins with a quick review of the “breathless” (and sometimes fear-mongering) coverage that AI has been receiving recently, and then sets a more rational basis for discussion, defining AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence,” and adding that “defining AI in terms of the tasks humans do, rather than how humans think, allows us to discuss its practical applications today, well before science arrives at a definitive understanding of the neurological mechanisms of intelligence” – thereby avoiding the seemingly endless loop of speculation about where the line between processing and cognition should be drawn. In its next section, the Deloitte U post traces the history of AI through a series of boom and bust cycles: one stretching from the 1950s through the 1970s, another in the 1980s, yet another in the 1990s, and a final cycle, still underway, that began towards the end of the last decade. This cycle, the authors hold, owes its staying power to four key factors: Moore’s Law and the “relentless increase in computing power available”; Big Data, which helps “train” probabilistic models by providing access to enormous data sets; the Internet and the cloud, which deliver data and connect humans together in structures (such as Mechanical Turk and Google Translate) that help refine AI systems; and new algorithms, which “dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision.”
After setting the stage for a discussion of AI today, the post homes in on eight “cognitive technologies” that perform “specific tasks that only humans used to be able to do”: computer vision, machine learning, natural language processing, speech recognition, optimization, rules-based systems, robotics, and planning and scheduling. The post highlights current examples of industrial use of most of these technologies. For example, computer vision is used today by Facebook, in matching applications, and as an input to security and surveillance; a related field, machine vision, is already advanced enough to be considered a “solved problem.” Machine learning is also employed in a broad range of use cases, including fraud detection, sales forecasting and public health. Speech recognition systems take dictation, and even orders (for Domino’s Pizza). Optimization, planning and scheduling, and rules-based systems are even more established technology areas, embedded in enterprise software systems.
The post closes by illustrating industries where cognitive systems are in use today, and by highlighting the benefits that early adopters are able to obtain. It notes that “the impact of cognitive technologies on business should grow significantly over the next five years,” due to better system performance and the billions invested in commercializing these technologies. This broad interest will, the authors believe, advance the entire field, even if not all approaches are fruitful. “Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy,” the article observes. “While not all of these vendors will thrive, their activities should collectively drive the market forward.”
In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011. Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. Consider the following:
- IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.
- Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.
- Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing major advances in the field.
- Researchers at the University of Oxford published a study estimating that 47 percent of total US employment is “at risk” due to the automation of cognitive tasks.
- The New York Times bestseller The Second Machine Age argued that digital technologies and AI are poised to bring enormous positive change, but also risk significant negative consequences as well, including mass unemployment.
- Silicon Valley entrepreneur Elon Musk is investing in AI “to keep an eye” on it. He has said it is potentially “more dangerous than nukes.”
- Renowned theoretical physicist Stephen Hawking said that success in creating true AI could mean the end of human history, “unless we learn how to avoid the risks.”
Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector. Business leaders should understand what AI really is and where it is heading…