Microsoft speaks to the ethics of AI

Microsoft took time on an "underground tour" of AI innovation to consider the social impacts of the technology, including issues around privacy, safety, fairness and job automation.

To showcase the latest in artificial intelligence, Microsoft recently hosted an “underground” tour: two days’ worth of virtual reality demos, product prototypes, programming and platform innovation, research news, and philosophical musings on the future of AI from technological, social and business perspectives.

AI progress can be attributed to a number of factors, including advancements in processing power, powerful new algorithms, data availability, cloud computing, and machine and deep learning capabilities. One of the more compelling milestones – one that furthered the cause for many applications – was Microsoft’s achievement late last year of error rates that are on par with, if not better than, human benchmarks: under 5.9 percent for speech recognition and 3.5 percent for image recognition.
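
For context, speech recognition accuracy is conventionally scored as word error rate (WER): the number of word insertions, deletions and substitutions needed to turn the system’s transcript into a human reference transcript, divided by the length of the reference. Below is a minimal illustrative sketch of that computation – not Microsoft’s benchmark code, which was measured on standard test sets under controlled conditions.

```python
# Minimal sketch of word error rate (WER), the conventional speech
# recognition metric. Illustrative only; not Microsoft's benchmark code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance on words)
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution across six reference words: prints ~0.167
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```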

Autonomous cars, smart homes, automated assistants, translation apps, and virtual and augmented reality were all represented over the course of the event as part of the AI spectrum. But the most compelling discussions were those that went beyond technical wizardry (which was impressive in itself) to explore the social and cultural impacts of AI. The topic is becoming increasingly prevalent at AI gatherings, as enterprises, governments and consumers grapple with the compliance, privacy and legislative issues emerging on this new technological frontier.

David Heiner, VP and deputy general counsel, Microsoft

“AI is the next big step forward,” said David Heiner, Microsoft VP and deputy general counsel, in a presentation entitled Enabling the Promise of AI. “We are big believers in the benefits this will bring. At the same time, it is like any other profound new technology. It raises societal issues, and that is something we think about carefully.”

With any new technology, there are valid concerns, he observed. Key questions that have been raised on the AI front include: How does this work? Where is the data coming from? Who/what is making these decisions? Are they accountable? “It all feels new because it is,” Heiner said, “and it feels like a black box because, in a sense, it is. If you aren’t a data scientist, it can be hard to understand.”

Heiner outlined the industry’s need for an ethical framework that applies in four key areas:

  1. Reliability and safety. Since AI systems make predictions, “it’s about being cognizant of how accurate the system is and how accurate we can make it,” he said. By way of example, in testing facial recognition software for detecting emotions, researchers found it worked well with Caucasian faces, but not as well on Asian faces. “That’s because of how it was trained. We have to ensure that the training data is representative of planet Earth. One thing about AI [applications] is that they can be so amazingly smart, but they can be so dumb because they don’t have common sense. A new fact pattern not reflected in the training data can lead to calamitous results.” (A sketch of the kind of per-group check this implies follows this list.)
  2. Treating everyone with fairness and respect. As it stands today, rightly or wrongly, the world is represented by whatever output is available. A famous example shows the results of an image search on “three black teenagers” and “three white teenagers.” The former produced mug shots; the latter, smiling teens. “It’s just a sad reflection on the state of our world,” Heiner said. “The computer doesn’t have any sense of what is what. It’s just returning what is out there. Because the system was trained on societal biases, the output can reflect that. The machine feeds back what we feed into it.”
  3. Privacy. Beyond typical privacy issues around how records are kept, there is a new concern in AI related to context, Heiner said. “That is, the ability of AI systems to draw inferences about people based on data they knowingly shared – inferences those people would not have shared themselves. For example, a system can make predictions about a person’s political leanings or sexuality based on their product choices. This is something we must be thoughtful about.”
  4. Automation of jobs. AI’s impact on jobs is the number one concern in the public policy arena. “The short answer is that no one knows what its effect on jobs will be. Over the past 150 years, new technologies have both destroyed and created jobs. Is this the same, or is it something different this time? People are concerned it’s the latter. We need to be thinking through skills training.”
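
Heiner’s first two points reduce to a measurable question: does a system perform equally well across the groups it serves? Below is a minimal sketch of the kind of per-group accuracy audit that can expose the facial recognition disparity he describes – the data, group labels and function here are hypothetical, not Microsoft’s, and real audits use far richer fairness metrics.

```python
# Minimal sketch of a per-group accuracy audit (hypothetical data and
# labels, not any Microsoft system): the kind of check that exposes a
# classifier that works well for one group in its training data but
# poorly for another.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy broken out by group so disparities are visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: an emotion detector trained mostly on group "A".
preds  = ["happy", "happy", "sad", "sad", "happy", "sad"]
labels = ["happy", "happy", "sad", "happy", "sad", "happy"]
groups = ["A", "A", "A", "B", "B", "B"]
print(accuracy_by_group(preds, labels, groups))  # {'A': 1.0, 'B': 0.0}
```

Even this toy breakdown makes the failure mode visible: an aggregate accuracy of 50 percent hides a system that is perfect for one group and useless for the other.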

To address some of these concerns, the technology industry is now banding together to make headway on a number of policy and ethical issues. Fairness, Accountability and Transparency in Machine Learning (FAT/ML) is a community that includes representatives from Microsoft Research, Google, the University of Utah, Haverford College, and Cloudflare. There is also the Partnership on AI to Benefit People and Society, which is focused on enabling knowledge exchange within the industry and on developing best practices in areas such as transparency, safety and reliability. Partnership founders include Google, Apple, DeepMind, IBM, Microsoft, Amazon and Facebook, as well as small business representatives, academics and civil society groups.

Eric Horvitz, technical fellow, AI and Research and head of Microsoft Research’s Global Labs

Eric Horvitz, technical fellow, AI and Research, and head of Microsoft Research’s Global Labs, also spoke at length on the inherent value of AI for social good in his presentation AI for People & Society. “There are big wins coming ahead with the advance in technology to enhance lives. It would be unethical to say no to those things. But there are some applications, even ones that deliver value, that you have to say no to. The most important thing is that when you build a system that has a goal, you need clear disclosure to end users about the intention and the mechanism.”

There is much that can be done for the developing world, he noted, adding that AI can be invaluable in predicting cholera outbreaks, weather patterns or emergency assistance needs, or in addressing a vast array of environmental and social challenges, from endangered species protection to farming practices.

In these instances, industry is not applying AI fast enough, he said. “At the same time, you can’t discount the issues around machine bias. Think about the data sets being used in criminal justice decision making. Biased data sets can create biased predictors, without annotations as to why. When we field services, we have to look beyond how well they work to how fair they are to all constituencies.”
