
AI, Please Explain Yourself

Researchers develop ways to help machines account for their decisions.

Artificial intelligence has a trust problem. While adoption is increasing in both the government and commercial sectors, artificial intelligence-infused technologies have not reached their full potential in many critical applications because their opaque nature does not give users a window into the decision-making process.

A large part of the trust challenge stems from the way artificial intelligence (AI) software is written. Current systems resemble a black box: Users enter a request for data or analysis, and an answer is returned. This may be fine for many commercial applications, but for some government, military and medical users, knowing exactly how the AI approached a decision is vital, explains David Gunning, program manager, Explainable Artificial Intelligence (XAI) program, Defense Advanced Research Projects Agency (DARPA). The XAI team calls this capability “explainability,” and it is looking at two specific areas in which AI is being used: supporting analysts and accounting for the actions of autonomous unmanned systems.

The XAI program comprises teams from industry and academia as well as DARPA scientists. It is built around two major technical challenges for AI: data analytics, which involves sifting through large volumes of information to spot trends or specific pieces of intelligence, and understanding the decision making of autonomous platforms.

U.S. Defense Department and intelligence community analysts already comb through vast amounts of data to detect trends. Although they use some automated tools, a smart application that can find data and explain why it is relevant would be very useful, Gunning says.

Autonomous systems are another target area for the XAI program. As the Defense Department ponders deploying unmanned aerial, ground and sea vehicles for long-duration missions, Gunning notes that the vehicles will most likely have machine learning software with a built-in decision-making policy to guide their behavior. “If an operator sends one of these [unmanned vehicles] off on a mission and it comes back, he’s going to want to be able to do a debrief where he really understands the rationale on why the system succeeded, failed, turned around or whatever decisions it made. The operator’s going to want an explanation,” he relates.

Gunning explains that in the early days of AI development, researchers wrote programs with a series of explicit rules and if-then logic statements. But these types of AI software did not work well because they were “too brittle. People could never quite specify knowledge in the right way. It was too inflexible,” he says.

Machine learning tools helped launch the current generation of AI because they allowed systems to reach conclusions from masses of data. If enough data is available, it can be fed into a machine learning system to train the complex mathematical model at its core.

The drawback is that current AI uses an inherently opaque math model that humans do not quite understand, Gunning says. Although this approach has been “wildly successful,” the XAI team wants to examine whether it is possible to change the process to include an explanation of the machine’s reasoning methods, he says.

Three different but broad strategies can result in more transparent AI decision making. The first strategy is to label the network nodes. Gunning notes that neural networks are the most common programming method for the high-performing black boxes now making AI software decisions. Neural networks can comprise many layers containing millions of nodes, loosely analogous to the neurons and structures of the human brain. Each node works on a single part of a larger equation. Because a problem is broken up across millions of nodes, just how those individual sections make their decisions is very opaque, he explains.
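
As a rough illustration of why that structure is opaque, the sketch below (a hypothetical example in Python with NumPy, not anything from the XAI program) runs a forward pass through a tiny two-layer network. Each node is just a weighted sum passed through a nonlinearity, and none of the intermediate numbers carries an obvious human-readable meaning.

import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 4 inputs -> 3 hidden nodes -> 2 outputs.
# Real image classifiers work the same way, just with millions of nodes.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

def forward(x):
    # Each hidden node computes one weighted sum of the inputs...
    hidden = np.maximum(0.0, W1 @ x + b1)   # ReLU nonlinearity
    # ...and each output node combines the hidden nodes the same way.
    return W2 @ hidden + b2

x = rng.normal(size=4)   # stand-in for input features
print(forward(x))
# The intermediate 'hidden' values drive the answer, but by themselves
# they are just numbers with no labeled, human-readable meaning.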

As researchers explore how deep networks operate, they are discovering that some network nodes contain reusable concepts that can be described in a way humans can understand. Gunning explains that a neural network trained only to recognize objects can, without human assistance, end up with individual nodes that represent individual parts of an image, such as a dog’s tail, a wall or a door, and then assemble those parts like puzzle pieces to recognize the larger image. As a result, the system independently discovers features recognizable to humans.
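
One simple way researchers probe whether a node has latched onto such a concept is to look at which inputs excite it most strongly. The sketch below is a minimal, hypothetical PyTorch version: it hooks one layer of a model, runs a dataset through it and returns the inputs that most strongly activate a chosen unit. The model, layer and data loader are placeholders rather than anything used by the XAI teams.

import torch

def top_activating_inputs(model, layer, unit, data_loader, k=5):
    """Return indices of the k inputs that most strongly activate one unit."""
    scores = []

    def hook(_module, _inputs, output):
        # One score per input: the chosen unit's mean activation.
        act = output[:, unit].reshape(output.size(0), -1).mean(dim=1)
        scores.append(act.detach())

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for inputs, _labels in data_loader:
            model(inputs)   # the hook records the unit's activations
    handle.remove()

    return torch.topk(torch.cat(scores), k).indices

If the top-ranked inputs all share something obvious, say dog tails or doorways, that is evidence the node has learned a concept people can name.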

When a deep learning neural network recognizes an image—a car, for example—researchers can trace back through node activations to see which parts of the picture the AI is paying attention to, then highlight those regions. “That can give you some information on what features it’s really using to make a decision,” Gunning says.
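
A simple version of this trace-back is a gradient saliency map: compute the gradient of the winning class score with respect to the input pixels, and the pixels with the largest gradients are the ones the decision depends on most. The sketch below is a generic, hypothetical PyTorch implementation, not the program’s own tooling.

import torch

def saliency_map(model, image):
    """Return a per-pixel map of how sensitive the top prediction is to the input."""
    model.eval()
    # Assume a (C, H, W) image; add a batch dimension and track gradients on it.
    image = image.detach().clone().unsqueeze(0).requires_grad_(True)

    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()   # gradient of the winning class score

    # Largest absolute gradient per pixel, collapsed across color channels.
    return image.grad.abs().amax(dim=1).squeeze(0)   # shape (H, W)

Overlaying the result on the original picture shows where the model’s attention falls, which is exactly the kind of check that exposed the problem in the wolf example that follows.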

Researchers recently applied a deep learning technique called an attention mechanism to a system trained to distinguish pictures of huskies from pictures of wolves. Gunning reports that when these techniques were applied each time the AI identified a picture as a wolf, the scientists discovered that the system was actually paying attention to the snow in the image. It turned out that all the photos of wolves had snow in them, but the husky photos did not. “It was really learning this false correlation, thinking every time there was snow, it was a wolf,” he explains. “Most of the time, that worked out. But just seeing where it’s paying attention would really tell the user where this thing might make mistakes on something like that.”

A more advanced deep learning technique that the University of California, Berkeley XAI team is pursuing would change how neural networks learn, making them more modular. This work involves training a library of smaller module networks whose labeled pieces can be composed, enabling the system to explain what it is doing, Gunning says.

The Berkeley team also has another approach that exploits deep learning to understand a neural network’s activity. The goal is to train one deep learning system to recognize objects and make decisions, then train a second deep learning system to generate explanations of how the first network arrived at its answers.
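
A minimal sketch of that two-network idea appears below, using stand-in models and random data rather than the Berkeley team’s actual code: a frozen classifier makes the decision, and a second small network is trained on the classifier’s internal features to predict human-nameable attributes that can be assembled into a short explanation.

import torch
from torch import nn

torch.manual_seed(0)

# First network: the decision maker (an untrained stand-in here), kept frozen.
classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
features = nn.Sequential(*list(classifier)[:-1])   # its internal representation
for p in classifier.parameters():
    p.requires_grad_(False)

# Second network: maps those internal features to nameable attributes that a
# template can turn into a sentence of explanation.
attribute_names = ["striped", "furry", "wheeled", "winged", "metallic"]
explainer = nn.Linear(32, len(attribute_names))

optimizer = torch.optim.Adam(explainer.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

inputs = torch.randn(256, 64)                                      # stand-in inputs
attrs = torch.randint(0, 2, (256, len(attribute_names))).float()   # stand-in labels

for _ in range(200):   # train only the explainer
    optimizer.zero_grad()
    loss = loss_fn(explainer(features(inputs)), attrs)
    loss.backward()
    optimizer.step()

# At explanation time, the first network decides and the second one says why.
sample = inputs[:1]
decision = classifier(sample).argmax(dim=1).item()
present = torch.sigmoid(explainer(features(sample))).flatten() > 0.5
reasons = [name for name, keep in zip(attribute_names, present) if keep]
print(f"class {decision}, because it looks: {', '.join(reasons) or 'uncertain'}")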

A third strategy the XAI researchers are exploring to better understand AI decision making uses more coherent models. This technique employs a different machine learning approach, such as a decision tree method instead of deep learning. Researchers will determine if they can push decision tree technology to create richer, more accurate models that include tracers to allow actions to be explained. “If you maybe trained a very complex system, but alongside it you can also train a system that’s going to generate a decision tree, you can use that to explain what the net is doing,” Gunning says.

The program manager also refers to this approach to better understand how AI makes decisions as “model induction.” Instead of trying to explain exactly how decisions are made, one group within the XAI program will try to develop a model system that can explain what is generally happening inside the black box, he shares. This may not be the exact algorithm in the black box, he adds.
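
One common way to realize this idea, sketched below with scikit-learn and synthetic data as a hypothetical stand-in for the teams’ actual models, is to fit a small decision tree to the complex model’s own predictions rather than to the original labels. The tree’s printed rules then serve as an approximate, human-readable account of what the black box is doing.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data for whatever the opaque model was trained on.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The "black box": accurate but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model induction: fit a small tree to the black box's *predictions*,
# not the original labels, so the tree imitates the model's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches the black box on {fidelity:.0%} of inputs")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))

As Gunning cautions, the tree approximates the black box’s behavior rather than reproducing its exact algorithm, which is why checking how faithfully the surrogate matches the original model matters.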

DARPA’s XAI program, which launched last May, includes 12 university and corporate teams. It is at the beginning of its first 18-month phase. By the end of this period, the teams will run an experiment with human users focused either on data analytics or autonomy. The teams will demonstrate an AI system capable of machine learning that can generate an explainable model of its actions and can be accessed through a user interface. The test problems will measure whether the various approaches help users and improve their performance and understanding. Another important part of this exercise is to determine when users should or should not trust a system’s decision based on traceable decision maps.

The XAI program’s second phase will last two and a half years and involve work with the U.S. Naval Research Laboratory. Common problems will be defined, then each team will run annual experiments similar to those in the first phase, with the goal of improving performance each year, Gunning says.