DARPA to Increase Artificial Intelligence IQ

Researchers wrestle with how to represent human knowledge in a machine.

Amidst a great deal of hype, hope and even apprehension regarding artificial intelligence (AI), experts at the U.S. Defense Department’s premier research and development organization intend to help smart machines reach their full potential.

The Defense Advanced Research Projects Agency (DARPA) has been on the front lines of AI scientific advances for decades. In the 1960s, DARPA researchers completed some of the foundational work leading to the creation of so-called expert systems, considered the first wave of AI technologies, according to an agency website. Expert systems are essentially programmed with knowledge gleaned from human experts in a particular field.

Since then, DARPA has funded developments in the second wave of AI, machine learning, which largely trains computer programs to perform a specific task, such as detecting certain objects in photographs or videos. This second wave has significantly impacted defense and commercial capabilities in areas such as speech understanding and self-driving cars.

Today, DARPA is spending roughly $500 million a year on AI research. It currently has about 80 programs across the agency, with the Defense Sciences Office and the Information Innovation Office taking the lead. One of those programs, the Physics of AI, which aims to embed physics and prior knowledge in AI systems for the Defense Department, comprises 16 individual projects spanning a wide range of topics, including unmanned systems, robotics, biomolecular research and synthetic aperture radar.

The agency’s investment in third wave AI technologies is designed to ensure the United States maintains a technological edge and to address the limitations of first and second wave systems by making it possible for machines to contextually adapt to changing situations. The ultimate goal is to improve AI capabilities so that machines serve as trusted, collaborative partners in solving problems of importance to national security.

In September, DARPA announced a multiyear investment of more than $2 billion in new and existing programs called the AI Next campaign. Key research areas include automating critical Defense Department business processes, such as security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data and performance inefficiencies; and pioneering the next generation of AI algorithms and applications.

“The grand vision of our AI Next campaign is ultimately to move machines from human tools to human collaborators. There are a number of challenges in realizing that vision,” says Valerie Browning, who directs DARPA’s Defense Sciences Office. She adds that for some applications—image recognition, natural language processing and voice recognition, for example—scientists have learned how to represent the necessary knowledge within computer programming.

But for more complex tasks, such as having AI systems help with scientific discovery, representing the vast amount of necessary knowledge is much more difficult. “There’s a whole open space that we need to be able to understand before we can begin to tap into the potential huge promise of AI in fundamental [scientific] discovery,” Browning says.

To reach its full potential as a partner to humans, AI needs—to put it bluntly—to get smarter. Browning points out that AI can be easily fooled. “AI can be, either intentionally or unintentionally, very easily spoofed into giving incorrect answers.”

John Everett, deputy director of DARPA’s Information Innovation Office, explains that many of the second wave advances have relied on machine learning techniques, such as training systems to recognize images, which can be found by the millions on the Internet. “The common thread for all of AI right now is the knowledge acquisition bottleneck. We’ve done an end run around that because we have so many images on the Internet. There are limitations because many important things are not represented in pictures. Chief among them is common sense.”
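
To make that bottleneck concrete, consider a minimal sketch of second-wave supervised learning, written here in hypothetical Python with PyTorch (illustrative only, not DARPA code, using made-up stand-in data): the training loop tunes a model to reproduce its labels and nothing more.

```python
# Minimal sketch of second-wave supervised learning (hypothetical,
# with random stand-in data; assumes PyTorch is installed).
import torch
import torch.nn as nn

# Pretend dataset: 1,000 tiny 8x8 grayscale "images" with binary labels.
images = torch.rand(1000, 64)
labels = torch.randint(0, 2, (1000,))

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training adjusts weights so labeled inputs map to labeled outputs.
# Nothing here acquires knowledge beyond what the labels encode.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Everything such a system will ever “know” arrives through those labels, which is why abundant labeled photographs made the second wave possible, and why common sense, which no photo archive encodes, remains out of reach.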

He illustrates the point with a fictional vignette featuring a robotic butler, or “botler,” as he calls it. For that robot to scramble eggs, it must know how to get to the kitchen, recognize and open the refrigerator and identify the eggs. As easy as these tasks may seem to humans, instilling that kind of knowledge into robots has so far proved challenging.

To scramble those eggs, the robot also needs to understand the “theory of butter”—that butter is used as a lubricant for cooking, Everett offers. Using natural language cues, the robot might understand that Pam is a nonstick spray also used to lubricate a pan for cooking, but Everett describes that possibility as a stretch.

He suggests the robot could place the can of Pam on the hot burner, resulting in an explosion, burning down the house and injuring people. “That could be really bad. So now, we need a theory of thermodynamics, and we need an explanation that I can also use the theory of thermodynamics to explain why popcorn pops, but popcorn popping is not an explosion.”

The point is that, for machines to scramble eggs, they first must learn a surprising number of facts, “including that WD-40 would not be an appropriate lubricant for scrambling eggs,” Browning interjects.
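
A toy sketch, with entirely hypothetical facts and names, suggests what even a sliver of that hand-built knowledge might look like. The key point from the vignette is that labeling something a lubricant is not enough; the botler must also know which lubricants belong in a pan.

```python
# Toy knowledge base (all facts hypothetical) for the egg-scrambling
# "botler." A lubricant is not automatically a cooking lubricant.
KNOWLEDGE = {
    ("butter", "is_a"): "cooking lubricant",
    ("Pam", "is_a"): "cooking lubricant",       # nonstick spray
    ("Pam", "hazard"): "flammable near a hot burner",
    ("WD-40", "is_a"): "machine lubricant",     # a lubricant, but never for food
    ("eggs", "stored_in"): "refrigerator",
}

def usable_in_a_pan(item: str) -> bool:
    """Being a lubricant is not enough; it must be a cooking lubricant."""
    return KNOWLEDGE.get((item, "is_a")) == "cooking lubricant"

for item in ("butter", "Pam", "WD-40"):
    print(item, "->", usable_in_a_pan(item))
# butter -> True, Pam -> True, WD-40 -> False
```

Multiply that handful of entries by every object, hazard and exception in an ordinary kitchen, and the scale of the knowledge acquisition problem comes into focus.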

Estimates vary, according to the two researchers, but a 21-year-old may know between 1 million and 10 million facts. Even the high end works out to roughly 1,300 facts per day (10 million facts divided by 21 years, or about 7,700 days), learned every day, without apparent effort. “Somehow, we absorb information about the world in a way that is so transparent to us that we don’t recognize this as a major problem until we try to program computers to do things we take for granted,” Everett says. “Whatever is really easy for humans tends to be extraordinarily difficult for artificial intelligence.”

Everett offers a bit of consolation to any person devastated to learn that systems such as IBM’s Deep Blue routinely trounce human experts at playing chess. “It turns out that a computer trained to play chess is really good at playing chess and nothing else.”

He also cites a case in which researchers at the University of Washington trained a neural network system to distinguish between wolves and huskies. The system was able to do so quite well, but only by cheating. “They realized every wolf was standing in snow. So, what they had built was a really good snow detector.”
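
The failure mode can be reconstructed as a toy experiment on synthetic data (hypothetical, assuming NumPy and scikit-learn are available): when background brightness tracks the label perfectly, a classifier earns high accuracy by detecting “snow,” then collapses the moment the backgrounds swap.

```python
# Toy reconstruction of the wolf-vs-husky failure on synthetic data
# (hypothetical; assumes NumPy and scikit-learn are installed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
appearance = rng.normal(0.0, 1.0, n)          # pure noise: no real "wolf" signal
is_wolf = (rng.random(n) < 0.5).astype(int)
snow = is_wolf + rng.normal(0.0, 0.1, n)      # background brightness tracks the label

X = np.column_stack([appearance, snow])
model = LogisticRegression().fit(X, is_wolf)
print("accuracy, wolves always in snow:", model.score(X, is_wolf))        # ~1.0

# Now photograph the huskies in snow instead: the "wolf detector" collapses.
snow_swapped = (1 - is_wolf) + rng.normal(0.0, 0.1, n)
X_swapped = np.column_stack([appearance, snow_swapped])
print("accuracy, backgrounds swapped:", model.score(X_swapped, is_wolf))  # ~0.0
```

The model is never wrong about its training objective; it simply finds the cheapest signal available, which is exactly the snow-detector behavior the University of Washington researchers uncovered.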

Still, machine learning algorithms have made tremendous advances, and the concern among DARPA officials is that AI may become stuck in a learning rut because so many systems are trained using bucketloads of labeled photographs readily available on the Internet. “We may well be in what we think of as a GPU [graphics processing unit] trap. In other words, GPUs are so effective at enabling people to train machine learning systems that a lot of researchers gravitate to them,” Everett notes. For researchers required to publish to get tenure, for example, “a very powerful device that can enable you to get lots of results very quickly” appears to be a great solution, “even if [results are] somewhat incremental in the field.”

For AI systems to be better teammates for humans, they also must learn how to respond to a specific person’s needs. “If we want AI machines to be more than tools, more than calculators, we want them to respond differently based on what they know about you,” Everett says. “That would include past interactions. It would also include knowing something of the goals and background knowledge of the person it’s interacting with.”

He illustrates with another vignette about a military commander in the Arctic planning missions with his subordinate officers. If an Alexa, or similar system, interrupts the conversation with the time and temperature, the information is irrelevant, easily accessible via other means and outright annoying. In that case, the system will likely be switched off.

The researchers envision a time, however, when an AI system finds an appropriate moment to break into the conversation, reports that temperatures have been below freezing for 27 days, notes that a nearby river is frozen over and suggests that river could be used as a road. That system would more likely become a trusted partner. “Maybe not entirely like human partners, but we want to start to bridge the gap so that you can have a meaningful conversation with the system,” Everett says. “Now, it’s understood the context. It’s also understood that people haven’t yet suggested using the river. And it offers a new solution.”
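
One hedged sketch of that behavior, with hypothetical names and an assumed 21-day freezing threshold, is a relevance rule that keeps the assistant silent unless its observation is new and changes the plan.

```python
# Hedged sketch (hypothetical names and thresholds) of a relevance rule:
# the assistant speaks only when its observation is new and actionable.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    days_below_freezing: int
    river_nearby: bool
    river_already_suggested: bool

def suggest(ctx: Context) -> Optional[str]:
    """Return a suggestion only if it is relevant, new and actionable."""
    if (ctx.days_below_freezing >= 21 and ctx.river_nearby
            and not ctx.river_already_suggested):
        return (f"Temperatures have been below freezing for "
                f"{ctx.days_below_freezing} days; the nearby river is likely "
                f"frozen solid and could serve as a road.")
    return None  # stay silent rather than interrupt with time and temperature

print(suggest(Context(days_below_freezing=27, river_nearby=True,
                      river_already_suggested=False)))
```

A real collaborator would learn such thresholds and track what has already been said rather than rely on hand-coded rules, but the sketch captures the design goal the researchers describe: context first, interruption last.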

Asked whether she uses Alexa or a similar technology in her personal life, Browning explains that she uses it to listen to music, and her husband and children ask for the temperature before leaving the house. Everett responds, “I don’t use it. I spent 10 years doing cyber research.”