Rise of the Machine IQ

Cognitive computing offers revolutionary changes for warfighters.
By Sandra Jontz and George I. Seffers

Machines of the future may think more like humans, promising dramatic changes for military robotics, unmanned aircraft and even missiles. U.S. military researchers say cognitive computers—processors inspired by the human brain—could bring about a wide range of changes that include helping robots work more closely with their human teammates; allowing for smaller, more agile unmanned aircraft; and improving missile precision, further reducing civilian casualties.

“A regular computer takes an algorithm, basically a recipe for something you want it to do. That algorithm is a list of instructions, and it executes them essentially in serial, one after another,” explains Tom Renz, program manager for the Air Force Research Laboratory’s (AFRL’s) Energy-Efficient Computing and Emerging Unconventional Processor Systems programs. “It works very well because they’re very fast. They can run a billion or more of these instructions per second, if needed, or at least go through a billion clock cycles per second,” he adds, referring to the cycles per instruction, one indicator of a computer’s performance.
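The gap Renz draws between clock cycles and completed instructions is captured by the cycles-per-instruction (CPI) metric. As a minimal sketch of that arithmetic, assuming an illustrative 3 GHz clock and an average CPI of 1.5 (figures chosen for the example, not taken from the article):

```python
# Serial-execution throughput as Renz describes it: instructions
# completed per second depend on clock rate and cycles per
# instruction (CPI). Both numbers below are illustrative assumptions.

clock_hz = 3.0e9   # assumed clock rate: 3 billion cycles per second
cpi = 1.5          # assumed average cycles per instruction

instructions_per_second = clock_hz / cpi
print(f"{instructions_per_second:.2e} instructions per second")  # ~2.00e+09
```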

Traditional processors, however, are not very good at recognizing objects, making decisions, learning and planning—all of which more autonomous systems, such as robots or aerial drones, will need to do to excel. “All of those cognitive functions are computationally difficult. They take a lot of processing to do what seems very simple for humans. Depending on who you ask, it’s somewhere in the range of hundreds of megawatts of power in a very large warehouse-type building to do what a human brain does. You need that kind of processing,” Renz says.

The Naval Research Laboratory’s (NRL’s) Navy Center for Applied Research in Artificial Intelligence is interested in robotic systems smart enough to work well with their human counterparts. The center takes a multipronged approach to cognition for human-robot interaction. Researchers work toward a deep understanding of how humans think, then use the knowledge to improve on intelligent systems by taking advantage of people’s strengths, says researcher Laura Hiatt. “The overall goal is to build cognitive modules that model different things that the human brain does, be it accomplishing a task or looking around the world,” she says. “Then we put them on the robot.”

The work promises a greater understanding of how people think, improved robot designs and robots that can interact effectively with people. By understanding human cognition, intelligent systems can be taught to predict human behavior and step in when a human is on the verge of making a mistake, Hiatt adds. Humans can be unpredictable, error-prone and susceptible to mental lapses brought on by confusion, fatigue and distractions.

“In this light, we believe that it is important that a robot understands what human teammates are doing, not only when they do something right, but also when they do something wrong,” according to the NRL team’s white paper on the research endeavor. “Our goal in this work is to give robots a deeper understanding of human cognition and fallibility in order to make the robots better teammates.”

The NRL researchers work with three such robots: Octavia, Lucas and Isaac. The three human-looking robots interact with people on complicated and risky tasks and missions, including on the battlefield. “With their faces, for example, they can open their eyes very wide. They can move their eyes around so you can tell from their gaze what they’re looking at. They can raise their eyebrows; they can make facial expressions. Their arms and hands are built for gesturing. The idea is that if the robot communicates and looks like a person, it’s easier to interact with them,” Hiatt says.

The team has built process models of fundamental human cognitive skills—perception, memory, attention, spatial abilities and theory of mind—and is transferring the models onto robots and autonomous systems as reasoning mechanisms, using the Adaptive Control of Thought-Rational/Embodied (ACT-R/E) cognitive architecture for human-robot interaction, which provides a set of seven computational modules. “With those modules, we can match performance against human data to make sure that we’re capturing the right things and we’re modeling human cognition at the right level,” Hiatt asserts.

The modules deal with various cognitive functions, from how humans store factual knowledge to how they process goal-oriented cognition, see and hear perceptual elements, keep track of time and move and speak appropriately in different situations.

“You have a ton of memories of all types—things you did, things that you saw, ideas you had—and those are all somewhere in your brain,” Hiatt explains. “As you’re interacting with the world and thinking about things in the present moment, productions are firing [in the brain] that are relevant to the task that you’re accomplishing. … For example, if you’re trying to remember someone’s name, the production that you’re firing will be like ‘remember their name,’ and then that will go to the declarative module, which will do a search through all the names in your head and try to find the right one.” Understanding the process of name recognition will lead to mirroring the same process for intelligent systems—only better.
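A minimal sketch can make this production/declarative-module interplay concrete. The toy Python below is illustrative only, not the NRL’s ACT-R/E code; the memory chunks, activation values and retrieve function are all invented for the example:

```python
# Toy model of the interplay Hiatt describes: a production fires a
# retrieval request, and the declarative module searches memory for
# the best-matching chunk. Chunks and activations are invented.

memory = [
    {"type": "name", "person": "teammate_1", "value": "Alice", "activation": 0.8},
    {"type": "name", "person": "teammate_2", "value": "Bob", "activation": 0.3},
    {"type": "event", "value": "fire drill", "activation": 0.6},
]

def retrieve(request):
    """Declarative module: return the most active chunk matching the request."""
    matches = [chunk for chunk in memory
               if all(chunk.get(key) == val for key, val in request.items())]
    return max(matches, key=lambda chunk: chunk["activation"], default=None)

# The "remember their name" production fires a retrieval request.
chunk = retrieve({"type": "name", "person": "teammate_1"})
print(chunk["value"] if chunk else "retrieval failure")  # -> Alice
```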

The research team also has a role in the NRL’s Laboratory for Autonomous Systems Research, which partnered with the Navy’s Damage Control for the 21st Century project and Virginia Tech to develop a humanoid robot called the Shipboard Autonomous Firefighting Robot (SAFFiR) to help fight fires aboard U.S. Navy ships. “SAFFiR is being designed to move autonomously throughout a ship to learn ship layout, interact with people, patrol for structural anomalies and handle many of the dangerous firefighting tasks that are normally performed by humans,” Thomas McKenna, managing program officer of the Office of Naval Research’s Computational Neuroscience and Biorobotics programs, says in a Navy account of the research project.

“One of our projects has been in firefighting, building robots that will work side by side with men and women on ships and help them put out fires,” Hiatt adds. “In situations like that, it’s very difficult to communicate verbally. It’s hard to hear, it’s hard to see, so having a robot that communicates in a way that people communicate in those types of situations is a big win.”

Additionally, the NRL researchers are working with recognition algorithms to teach robots to perceive similar-looking items based upon the context in which they are experienced. “An orange ball looks a lot like an orange. But if you’re in a kitchen or at the grocery store and you see this orange circle, you’re going to interpret it as the fruit because that’s what makes sense. But if you’re at Chuck E. Cheese and you see an orange circle, you’re going to think of it as a toy ball instead of fruit because that’s what makes sense in that context,” Hiatt explains.
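One simple way to sketch that idea is to score each label by an appearance likelihood and then reweight by a context prior. The labels, contexts and probabilities below are invented for illustration and are not taken from the NRL’s recognition algorithms:

```python
# Context-dependent recognition in the spirit of Hiatt's
# orange-vs-ball example: the same ambiguous appearance score is
# reweighted by a context prior. All probabilities are invented.

appearance_likelihood = {"orange_fruit": 0.5, "toy_ball": 0.5}  # ambiguous orange circle

context_prior = {
    "kitchen":  {"orange_fruit": 0.9, "toy_ball": 0.1},
    "playroom": {"orange_fruit": 0.1, "toy_ball": 0.9},
}

def classify(context):
    scores = {label: appearance_likelihood[label] * context_prior[context][label]
              for label in appearance_likelihood}
    total = sum(scores.values())
    return {label: score / total for label, score in scores.items()}

print(classify("kitchen"))   # fruit wins: {'orange_fruit': 0.9, 'toy_ball': 0.1}
print(classify("playroom"))  # ball wins:  {'orange_fruit': 0.1, 'toy_ball': 0.9}
```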

The researchers want to leverage ACT-R/E’s fidelity to human cognition to develop robots that can serve as true teammates to humans, which means the intelligent systems would have to learn to distinguish between jungles and deserts, for example, or better anticipate when people are going to make errors, she says.

Today’s ubiquitous digital technology has made multitasking part of everyday life, and chronic multitasking in a highly connected world erodes people’s capacity to carry out even routine duties effectively. Through their multitasking and error-prediction work, the researchers seek to capture how human memory operates and what causes humans to err at the simplest of tasks, such as leaving a debit card in an ATM after withdrawing money, failing to retrieve an original document after making a copy or making mistakes during Navy vehicle maintenance procedures.

“People make those types of errors all of the time, and if we can understand why they’re making those errors, we can better anticipate them and help them as a robotic teammate, for example, not make those types of errors,” Hiatt offers. By learning what causes humans to make mistakes, she says, the researchers hope to teach intelligent systems how to not only prevent making errors, but predict when humans might make them.

Research by the company Realization, which develops cost-saving solutions for businesses, found in a report, “The Effects of Multitasking on Organizations,” that multitasking within large companies costs the global economy more than $450 billion annually in lost productivity.

The NRL built an ACT-R/E model that simulates interruptions and tracks how long it takes individuals to get back on task. The researchers use the data to explain and predict errors and, they hope, to mitigate error risks, particularly for warfighters in dangerous situations or environments.
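One common way to sketch this kind of model, under invented parameters, is to let the activation of a suspended goal decay during an interruption, so that longer interruptions yield a higher chance of a resumption error. The toy below illustrates that pattern; the decay constant and threshold are assumptions, not NRL values:

```python
import math

# Toy interruption model: the activation of a suspended goal (e.g.,
# "take your card back") decays during an interruption, and lower
# activation means a higher chance of a resumption error. The decay
# rate and threshold are invented for illustration.

DECAY = 0.05       # assumed activation decay rate per second
THRESHOLD = 0.5    # assumed activation needed to reliably resume the goal

def resumption_error_probability(interruption_seconds, initial_activation=1.0):
    activation = initial_activation * math.exp(-DECAY * interruption_seconds)
    # Map the activation shortfall onto an error probability in [0, 1].
    return max(0.0, min(1.0, THRESHOLD - activation + 0.5))

for seconds in (2, 10, 30):
    print(f"{seconds:>2}s interruption -> "
          f"P(error) ~ {resumption_error_probability(seconds):.2f}")
# Longer interruptions produce higher predicted error rates.
```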

Meanwhile, the AFRL research into cognitive systems is driven in part by the rapidly expanding use of sensors and an ever-greater need for autonomy, which present size, weight and power challenges. “Sensors all of a sudden have put on this huge growth spurt, and the computer chips weren’t keeping up. On top of that, we had these unmanned aerial vehicles [UAVs] come along, and we would really like to have some functions in them that make them autonomous because we have too many UAVs and not enough pilots,” Renz recalls, explaining how the research started. “If we want to do those things onboard a plane or a missile, we need to get a lot more processing on there than we can with current computers.”

Missiles also could benefit. “What they’re looking for in missiles nowadays is for them to be smaller and smaller, but also smarter and smarter—to be able to stand off a long way and fire that missile and have it get to its target,” Renz states. “The problem is that it’s getting easier to jam the Global Positioning System, so the missile needs to navigate to its target. It needs to verify its target because we’re no longer tolerant of civilian casualties. And we may not have communication with that missile at some point, so we would like it to be more than just a dumb gravity bomb that we let go of.”

Renz stresses, however, that the Air Force still insists on a man-in-the-loop solution, meaning weapons systems will have only limited autonomy. Humans will maintain control. “We do not want to do what we call fire and forget, where the missile is in charge of deciding what to hit,” he emphasizes. “There’s going to be people in the loop, but at some point we want the missile to be smart enough to help us make the decision on how to navigate to the target.”

Another AFRL program office will be the first to use IBM’s TrueNorth brain-inspired chip, according to Renz. The chip consumes only 70 milliwatts and is capable of 46 billion synaptic operations per second, per watt. IBM documentation describes it as a synaptic supercomputer that literally fits in the palm of a hand. “We’re going to demonstrate that in a pod that will fly on an airplane. It’s basically supplying more processing for a modular system on a UAV, where you want it to do more [processing] than what you have available,” Renz says. The pod could house processing power for an additional radar capability, for example. The intent is to demonstrate the capability within about three years.
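Taking the article’s two figures at face value, a quick back-of-the-envelope calculation shows what they imply for throughput and energy per operation; the arithmetic, not the specifications, is the illustration here:

```python
# Numbers implied by the TrueNorth figures quoted in the article:
# 70 milliwatts of power and 46 billion synaptic operations per
# second, per watt.

power_w = 0.070            # 70 milliwatts
sops_per_watt = 46e9       # synaptic ops per second, per watt

ops_per_second = power_w * sops_per_watt
joules_per_op = 1.0 / sops_per_watt

print(f"~{ops_per_second:.2e} synaptic ops/s at 70 mW")   # ~3.22e+09
print(f"~{joules_per_op * 1e12:.0f} pJ per synaptic op")  # ~22 pJ
```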

The AFRL will officially begin the Emerging Unconventional Processor Systems program in 2016, building on work done over the past several years. “Actually, we have some seedling programs going on now. The goal is to demonstrate dynamic neural networks and evaluate other people’s neural networks so that we can be an honest broker for the Air Force,” Renz says. “We’re more concerned now with developing theory and the why behind how it works so that we can pick the right neural network for the right cognitive function.”