Innovation for its own sake is the stuff of entrepreneurial legend, but rarely a strategy for sustainable development. While a certain degree of licence may be needed to fuel the creative process, ultimately, newly forged paths must be wrapped in practical application, a lesson absorbed by innovators from Icarus to the tech genius of today, or ignored at their peril. The challenge in developing these applications, however, appears to vary directly with the magnitude of the invention. As Gartner’s Hype Cycle model of technology adoption illustrates, a ‘technology trigger’ may follow a circuitous route through inflated expectations and disillusionment before achieving market enlightenment and the final productivity stage, depending on the establishment of relevant and beneficial applications. In the case of advanced technology, the link between ‘trigger’ and ‘productivity’ is more tenuous: the bleeding edge is a place no business wants to be.
Is it possible to fly close to the sun without melting one’s wing wax? IBM would answer yes, and the company’s efforts to wrap Watson technology in down-to-earth use cases offer a fascinating look at the commercialization of groundbreaking technology at mass scale. By any measure, Watson has ‘triggered’ in Gartner’s sense, and has generated some success stories. And with its open API announcement this week, IBM is looking to accelerate adoption, bypass Gartner’s stage three (disillusionment) and move straight to broad market relevance, the final phase of a successful technology’s life cycle.
Most observers associate Watson with the supercomputer’s challenge to, and eventual victory over, human champions in the Jeopardy! quiz show back in 2011. Developed as part of IBM’s DeepQA project, Watson was designed specifically to answer questions on the show, and was put to the test in this setting to showcase IBM’s natural language expertise. But Watson features more than an advanced human/machine interface. As John E. Kelly and Steve Hamm explain in their recent dive into cognitive technology, Watson represents a “new era” in computing that follows the “tabulating era,” which began in the late 19th century with mechanical tabulation machines for automated calculation, and the “programmable computing era” that exists today, in which “electronic devices governed by software programs perform calculations, execute logical steps, and store information using millions of zeros and ones.” In the “Watson era,” the authors explain, “cognitive systems will learn from their interactions with data and humans… and program themselves to perform new tasks”; they will be “designed to draw inferences from data and pursue objectives they were given”; and they will “augment our hearing, sight, taste, smell and touch.”
These new capabilities are achieved through vast improvements in processing speed, which are in turn realized through a distributed design architecture that overcomes what Kelly and Hamm call the “von Neumann bottleneck”: while today’s systems are limited by the need to transfer data from storage, through the bus, for processing in the CPU, new architectures “take inspiration from the human brain,” and “data processing is distributed throughout the computing system rather than concentrated in a CPU.” So Watson offers massively parallel computing with POWER7 processors, a natural language interface that can “understand” the context in human questions, and an advanced analytics platform that can simultaneously process multiple algorithms at lightning speed, enabling the system to learn in order to improve performance, and to reason, sense and predict.
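The idea of running many algorithms over the same question at once can be sketched in miniature. This is a toy illustration only, not IBM’s implementation: the candidate answers and scoring functions below are invented, and a single-machine thread pool stands in very loosely for Watson’s massively parallel hardware.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-ins for the many evidence-scoring algorithms a DeepQA-style
# system runs over each candidate answer (invented for illustration).
def length_scorer(candidate):
    return 1.0 / (1 + abs(len(candidate) - 10))

def keyword_scorer(candidate):
    return 1.0 if "watson" in candidate.lower() else 0.1

SCORERS = [length_scorer, keyword_scorer]

def score_candidate(candidate):
    # Every algorithm scores the candidate; a simple average combines
    # the evidence (a real system would learn the weighting).
    scores = [scorer(candidate) for scorer in SCORERS]
    return candidate, sum(scores) / len(scores)

def rank_answers(candidates):
    # Evaluate candidates concurrently, a loose single-machine echo of
    # the distributed, parallel processing described above.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(score_candidate, candidates))
    return sorted(results, key=lambda pair: pair[1], reverse=True)

print(rank_answers(["IBM Watson", "a supercomputer", "Jeopardy!"]))
```

The point of the sketch is structural: many independent scorers run in parallel, and a combination step turns their outputs into a ranked answer list.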
Sound like the stuff of science fiction? Certainly there has been some question of Watson’s ability to “think,” and as Rob High, IBM Watson Solutions CTO & IBM Fellow, notes in the accompanying video glimpse of the “What’s up Watson?” panel at IBM’s recent IOD conference, Watson’s goal is not to replace human thought, but rather to augment it. Interestingly, the process of turning Watson into a tool to support human decision making presented its own challenges. In creating Watson, IBM researchers and other scientists had borrowed from the best in IBM hardware and software, and the team led by IBM Watson Solutions GM Manoj Saxena first had to deconstruct the Jeopardy! version in order to understand its constituent parts before moving on to the construction of new applications. The next step involved shrinking Watson down to size for commercial deployment – from the size of a room, Saxena noted, to the size of an appliance. IBM also managed significant performance enhancements: the Watson of today boasts a 240% improvement in system performance over its Jeopardy! progenitor and a 75% reduction in physical requirements, and the solution can now be run on a single Power 750 server using Linux. As Saxena explained, while CPU utilization in the Jeopardy! system was approximately 15%, Watson now operates 16 cores (the original system had 2,880 cores) with a 65% utilization rate. These improvements have been driven through serialization and enhanced session and thread management.
In this “IBM startup” work, Saxena had to address the “five Ts”: technology, talent (107 initial hires, including Rob High), treasure (unlike other startup initiatives he had been involved in, where funding presented its own issues, Saxena received a cheque from Armonk), timing and target. In terms of timing, commercial Watson was launched to capture a gathering “perfect storm”: as Saxena explained in the IOD panel, with “the super convergence of Big Data, social, mobile, cloud and analytics, we thought Watson would be an excellent technology to start monetizing and innovating at the intersection of those five trends.”
As commercial targets, the Watson team has focused on data-intensive industries as the most likely candidates for adoption. In March 2012, the company announced the formation of the Watson Healthcare Advisory Board and formed an agreement with Citi to explore the use of Watson in financial services. IBM also partnered with organizations including Memorial Sloan-Kettering, the Cleveland Clinic, the University of Rochester and the Rensselaer Polytechnic Institute to test Watson capabilities, and in February of this year, announced with WellPoint and Memorial Sloan-Kettering the first commercial application for healthcare – a utilization management solution to support care decisions in lung cancer treatment at Memorial Sloan-Kettering Cancer Center.
With Watson capability established in healthcare, IBM moved to the second stage in Saxena’s “establish, extend, embed” market development strategy – extend into other industries. In May, IBM announced the release of the Watson Engagement Advisor, a powerful analytics engine designed to provide customer-facing staff, primarily in call centres, and consumers with real-time, data-driven answers to service inquiries. Delivered as an appliance, or as a cloud-based subscription, the Advisor assists either customer service agents or consumers who access Watson remotely for answers to questions, feedback on purchase decisions, or troubleshooting. So far, this solution is being piloted by large brands from a number of industries, including the ANZ Banking Group, Celcom, IHS, Nielsen and the Royal Bank of Canada, and touching hundreds of thousands of end users. According to Saxena, IBM now has approximately 30 different versions of Watson.
Based on the self-service capabilities in the Engagement Advisor – the “Ask Watson” feature for consumers – IBM has dubbed the solution “Watson for everybody.” But as it moves into the “embed” stage of market development, IBM is looking to further democratize Watson, easing business access to the technology through cloud delivery and driving distribution through an evolving ecosystem of development partners. This past week, the company opened access to a Watson development platform to spur the creation of a community of software application providers, ranging from startups and emerging players to established vendors, who will build a new generation of apps based on Watson’s cognitive computing intelligence. The IBM Watson Developers Cloud is a cloud-hosted marketplace that includes a developer toolkit and educational materials, as well as access to Watson’s application programming interface (API) and to a group of IBM and third-party experts who can provide design, development and research support. So far, IBM has committed more than 500 of its own subject matter experts to the program.
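From a developer’s seat, building on such a platform amounts to posting a natural-language question over the API and unpacking a ranked, confidence-scored answer list. The sketch below is hypothetical throughout: the endpoint name, JSON field names and confidence figures are invented for illustration and are not IBM’s documented contract.

```python
import json

# Hypothetical endpoint; IBM's actual API contract is not reproduced here.
API_URL = "https://gateway.watson.example.com/v1/question"

def build_question(text, items=3):
    # Package a natural-language question as a JSON request body
    # (field names are invented for illustration).
    return json.dumps({"question": {"questionText": text, "items": items}})

def top_answer(response_body):
    # Pull the highest-confidence answer out of a response body.
    answers = json.loads(response_body)["answers"]
    return max(answers, key=lambda a: a["confidence"])["text"]

# A mock response of the general shape a developer might handle:
mock_response = json.dumps({"answers": [
    {"text": "adjuvant chemotherapy", "confidence": 0.84},
    {"text": "radiation therapy", "confidence": 0.61},
]})
print(top_answer(mock_response))  # adjuvant chemotherapy
```

The appeal of the model is that all of the cognitive heavy lifting stays on IBM’s side of the wire; the application developer handles only request construction and answer selection.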
This support is a critical piece in the popularization of Watson technology, as is the data component. The care and feeding of Watson is a complex and time-consuming process, which determines, to a great extent, the success or limitations of the built application. In the case of Memorial Sloan-Kettering, for example, the institution uploaded 25,000 case studies and 2 million pages of text, and engaged in 15,000 hours of Watson training to ensure that data analysis and outcomes would be derived from the most complete knowledge base available. According to Saxena, for information input, IBM intends to rely to a great extent on content provider partners with domain expertise who will build their business on content provision – or on other partners with data to share. As it stands today, users of the Watson Developers Cloud may mine their own company data for analytic insight or tap into the IBM Watson Content Store, which will contain third-party data resources that IBM hopes will grow with Watson adoption. Another critical piece in wide adoption is pricing: while IBM will want the solution to be accessible to a broad user base (developers can use the Watson Developers Cloud at no cost), potential users will want to establish where company data would reside, what is involved in uploading to IBM’s SmartCloud, and what storage will cost as they weigh the advantages and disadvantages of IBM’s new subscription-based cognition-as-a-service platform.
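Why corpus preparation matters so much can be caricatured with a crude inverted index: the system can only retrieve evidence from documents it has been fed, so the breadth and quality of the uploaded corpus bounds the quality of its answers. The document IDs and contents below are invented, and this is a sketch of the general principle, not of Watson’s actual ingestion pipeline.

```python
from collections import defaultdict

def build_index(documents):
    # Index uploaded documents by term so evidence can be retrieved
    # per question; real ingestion also involves curation and training.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def retrieve(index, question):
    # Rank documents by how many question terms they share: a crude
    # stand-in for evidence retrieval over a domain corpus.
    hits = defaultdict(int)
    for word in question.lower().split():
        for doc_id in index.get(word, ()):
            hits[doc_id] += 1
    return sorted(hits, key=hits.get, reverse=True)

# Invented placeholder "case studies":
docs = {
    "case-001": "lung cancer treatment with adjuvant chemotherapy",
    "case-002": "cardiac care guidelines",
}
index = build_index(docs)
print(retrieve(index, "chemotherapy for lung cancer"))  # ['case-001']
```

A question whose terms appear nowhere in the corpus simply returns nothing, which is the toy version of why 2 million pages of text and thousands of hours of training precede a usable deployment.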
 John E. Kelly III and Steve Hamm. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press, 2013.
 Philosopher John Searle, for example, has argued that computers and artificial intelligence are able only to execute certain syntactic manipulations, and do not understand the meaning of language. http://en.wikipedia.org/wiki/John_Searle