What exactly is AI?

HAL’s camera eye in 2001: A Space Odyssey

We used to think of artificial intelligence (AI) as the stuff of science fiction. And it was often evil, especially in the movies. Remember HAL, the mad sentient computer from 2001: A Space Odyssey? Or Proteus IV from Demon Seed, whose goal was to create a flesh body for itself by forcibly impregnating the wife of its creator? Or Skynet, the AI in the Terminator movie universe that decided humanity was standing in the way of its mandate to protect the world? Scary stuff!

Yet AI is by no means science fiction any more, nor is it new, despite what marketers would have us believe. The Association for the Advancement of Artificial Intelligence (initially known as the American Association for Artificial Intelligence, it was renamed in 2007 to reflect the reality of the time) was founded in 1979 as a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. It aims to promote research in, and responsible use of, artificial intelligence; to increase public understanding of artificial intelligence; to improve the teaching and training of AI practitioners; and to provide guidance for research planners and funders on the importance and potential of current AI developments and future directions.

Which leads to the question: what exactly is AI? A 2007 paper, What is Artificial Intelligence?, by Stanford University’s John McCarthy, defined it as: “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. … The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans.”

The first formal research that led to AI, suggests the textbook Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig, was a 1943 paper by Warren McCulloch and Walter Pitts laying out a formal design for what it called “artificial neurons”. Then came Alan Turing’s work, including his 1950 article Computing Machinery and Intelligence, which discussed conditions for considering a machine to be intelligent. In it, he laid out what we now know as the Turing Test: if a machine could successfully pretend to be human to a knowledgeable observer, then it should certainly be considered intelligent. To avoid issues with voice synthesis (or, for that matter, the visual problem – a machine doesn’t usually look like a person), the tester would interact with the machine and a human by teletype (this was 1950, after all!), and both the human and the machine would try to convince the tester that they were, indeed, human.

McCarthy noted, however, that “The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.”

Eliza was a prime example. Created in the mid-1960s by MIT’s Joseph Weizenbaum, it simulated conversation using simple pattern matching, and its most famous script played the part of a psychotherapist. It was arguably one of the first programs to pass the Turing Test; in fact, to some real psychotherapists’ chagrin, Eliza sometimes got more information out of people than humans did. Eliza gave its name to something known as the Eliza effect, described as “the susceptibility of people to read far more understanding than is warranted into strings of symbols — especially words — strung together by computers.”
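To see just how little machinery that took, here is a minimal sketch of Eliza-style pattern matching in Python. The rules and canned replies below are invented for illustration (Weizenbaum’s original rules were far richer, and also swapped pronouns – “my” became “your” – which this toy version skips):

```python
import random
import re

# A few illustrative Eliza-style rules: a regex to match, plus canned
# replies that reflect the captured fragment back as a question.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bI am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
DEFAULTS = ["Please, go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a therapist-style reply by echoing matched text back."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(replies).format(fragment)
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))   # e.g. "Why do you need a vacation?"
print(respond("My boss ignores me"))  # e.g. "Tell me more about your boss."
```

There is no understanding anywhere in that loop – just string matching and echoes – which is exactly why the Eliza effect is so striking.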

Lynn Greiner, freelance IT journalist and regular contributor to InsightaaS

Today, of course, the stakes are higher. Eliza’s conversations were short and limited, and she was more of a novelty than anything else. Now we’re taking AI seriously, to the extent that Amazon has announced the Alexa Prize (named after the voice service that powers the Amazon Echo), offering $1 million to the university team that can create an AI able to converse coherently on popular topics for 20 minutes. To kickstart the endeavour, the company is providing a $100,000 stipend to up to ten teams, along with free cloud computing, support from its Alexa team, and Alexa-enabled devices.

Amazon isn’t the only company putting its money into AI. Salesforce has just announced Einstein, a project to introduce artificial intelligence into the Salesforce platform. Einstein drew expertise from Salesforce acquisitions such as MetaMind, PredictionIO, and RelateIQ, and has 175 data scientists working on its components. These include advanced machine learning, deep learning, predictive analytics, natural language processing, and smart data discovery, all meant to give sales professionals the information they need, when they need it – for example, predicting which leads are most likely to result in a sale. So far, Salesforce has demonstrated initial use cases such as predictive lead scoring for the Sales Cloud, image-based filtering for Social Studio to find specific pictures without their having been manually tagged, and automated personalized marketing for the Marketing Cloud.
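Salesforce hasn’t published how Einstein scores leads under the hood, but predictive lead scoring of this kind is typically built on a supervised classifier trained on past outcomes. Here is a hypothetical sketch using scikit-learn’s logistic regression; the features and numbers are invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: one row per past lead, with invented features
# [emails_opened, site_visits, company_size], and whether each lead
# eventually converted to a sale (1) or not (0).
X_train = np.array([
    [5, 12, 200], [0, 1, 15], [8, 20, 500],
    [1, 2, 40],   [6, 9, 120], [0, 0, 10],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

# Fit a plain logistic-regression classifier on the past outcomes...
model = LogisticRegression().fit(X_train, y_train)

# ...then score new leads by their predicted probability of converting,
# so a sales rep can work the most promising ones first.
new_leads = np.array([[4, 10, 150], [1, 1, 25]])
for lead, prob in zip(new_leads, model.predict_proba(new_leads)[:, 1]):
    print(f"lead {lead} -> estimated conversion probability {prob:.2f}")
```

The output is a probability rather than a yes/no label, which is the point: it lets reps rank their entire pipeline instead of just filtering it.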

A few days after Salesforce announced its AI, analytics firm SAS Institute introduced Aristotle, a reference application whose functionality will find its way into the company’s products, as well as into partner and customer software through an API. Dubbed “your personal data scientist” by its project lead, Aristotle’s ultimate role is to help customers make better, more effective use of analytics without the kind of expertise that’s currently required. Aristotle will be the expert, guiding users in their efforts.

A couple of weeks after Aristotle was announced, Microsoft jumped into the fray, combining its Bing, Information Platform, Cortana, and Ambient Computing and Robotics teams into a new AI and Research Group of 5,000 engineers and computer scientists.

But the granddaddy of modern AIs is, of course, IBM Watson. From its 2011 debut as a Jeopardy! champion, it has moved on to do everything from running customer support and conducting cancer research to developing recipes. Yes, Chef Watson has written a cookbook.

In fact, Watson has become important enough to IBM that the company has created a conference for it: World of Watson, which showcases current and future technologies and the possibilities that cognitive computing and the AIs it drives can offer, like self-driving cars and AI concierges.

It will be fun to see what the next batch of AIs will give us.