• A U.S. Army soldier gives a playful head rub to a local boy while on patrol near Forward Operating Base Salerno in Afghanistan. Human intuition may one day help artificial intelligence distinguish between safe and dangerous scenarios.

For Combat-Ready Robots, Add a Dash of Humanity

September 1, 2017
By George I. Seffers

Researchers teach machines warrior instincts using brain waves.


U.S. Defense Department officials insist on having a person in the loop to control robotic systems on the battlefield for a reason: Human intuition can mean the difference between life and death. Some human perspective also could make artificial intelligence systems better at a variety of battlefield tasks, including intelligence analysis and threat recognition.

Artificial intelligence (AI) is, in some ways, better than human intellect. It can, for example, process data far faster than any person. But people can more effectively assess danger and determine when to shoot or, arguably just as important, when not to shoot. An Army Research Laboratory team is trying to figure out a way to infuse AI systems with the gut instincts of combat soldiers.

“One of the biggest challenges is ... to build technologies that can exploit the advantages of artificial intelligence but still have human intuition and expertise, human guidance into the system,” explains Vernon Lawhern, mathematical statistician, Human Research and Engineering Directorate, Army Research Laboratory.

The research lab is partnering with DCS Corporation in Alexandria, Virginia, to use human brain waves to teach AI technology to analyze a scene for potential threats. Although research is ongoing, the team recently completed a series of experiments with soldiers wearing electroencephalogram (EEG) caps to control AI. The experiments used a computer vision system, a form of AI that robotic systems may one day employ as part of a military team.

AI can effectively recognize objects using databases with millions of images, but spotting danger is a much harder problem. “Humans have a good intuition that something might be happening, and they need to be very careful. We foresee a human-computer vision team being able to accurately categorize what is dangerous and what is a threat. That’s an approach that computer artificial intelligence by itself has yet to solve,” Lawhern says.

The team’s recent experiment involved soldiers wearing EEG caps and traveling through a simulated urban environment. The researchers also incorporated EEGNet, a compact neural network designed to classify brain wave signals. When a soldier, or any person, recognizes something unique or out of place, the brain fires off a specific type of signal known as a P300, a positive voltage deflection that peaks roughly 300 milliseconds after the triggering stimulus. The scientists theorize that P300 brain waves might enable neural networks to better recognize anomalies in the environment, whether on a battlefield or in an intelligence analysis center.
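To make the idea concrete, the detection step can be sketched in a few lines. This is a hypothetical illustration, not the laboratory's actual pipeline: it flags a P300-like response in a single EEG epoch by comparing the mean voltage in a post-stimulus window against a pre-stimulus baseline. The function name, sampling rate, window boundaries, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch: flag a P300-like response in one EEG epoch.
# A P300 is a positive voltage deflection peaking roughly 300 ms after
# a stimulus, so we compare a 250-450 ms post-stimulus window against
# the pre-stimulus baseline. Thresholds and names are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

def has_p300(epoch, sample_rate_hz=250, threshold_uv=5.0):
    """epoch: voltages in microvolts covering -100 ms to +600 ms around
    stimulus onset (the first 100 ms is the pre-stimulus baseline)."""
    baseline_end = int(0.100 * sample_rate_hz)   # -100..0 ms baseline
    win_start = int(0.350 * sample_rate_hz)      # +250 ms after onset
    win_end = int(0.550 * sample_rate_hz)        # +450 ms after onset
    baseline = mean(epoch[:baseline_end])
    window = mean(epoch[win_start:win_end])
    return (window - baseline) >= threshold_uv
```

In a real system the raw signal would be filtered and classified by a trained network rather than a fixed threshold, but the epoch-and-window structure is the same.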

In the experiment, the soldiers were asked to look for unusual elements in a scene, such as a man with a weapon or an overturned table that could hide an explosive device. They also were told to look for nonthreats, which could include tables they could see under or unarmed personnel. The computer vision system looked for the same types of anomalies. Researchers wanted to find out whether they could provide brain wave information to the computer vision program to help it pinpoint targets or aspects of the scene similar to what humans would identify.

“The AI system tries to look at the subject’s EEG signals. It tries to understand what the human subject is looking at. It’s about how to leverage the speed of AI systems with the inherent expertise and situational understanding of humans as they’re performing a task,” Lawhern explains.
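One simple way to picture the collaboration Lawhern describes is a weighted combination of the two information sources. The following sketch is an assumption for illustration only, with made-up names and weights; the article does not state how the laboratory actually fuses EEG evidence with the computer vision system's output.

```python
# Illustrative fusion of two confidence scores, both in [0, 1]:
# the AI detector's score for a region, and the strength of the
# human's EEG response to that same region. The weighting is a
# hypothetical choice, not the laboratory's method.

def fuse(ai_score, eeg_score, human_weight=0.6):
    """Weight human neural evidence more heavily than the AI score,
    reflecting the premise that people judge danger better."""
    return human_weight * eeg_score + (1 - human_weight) * ai_score
```

A region the AI is unsure about but the human brain reacted to strongly would be scored high and prioritized, which is the behavior the researchers are after.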

The research reinforces the notion that humans should be in the loop when an AI system’s actions could have major consequences, he asserts. “When we talk about human-AI collaboration, consequence plays a very important role. If the consequence of an action is very low, maybe AI could do that without human intervention,” Lawhern says. He cites the example of iRobot’s Roomba robotic vacuum cleaner being allowed to “do its thing,” even though it might occasionally bump into the furniture.

A battlefield situation, of course, is a much different matter. “In this situation, the consequence is very high, so there needs to be a balance between the consequence of the action and the ability of AI to execute the action without the human’s input,” Lawhern offers. “If the consequence is very high, potentially catastrophic, the human in the loop provides a mechanism for controlling AI in those situations.”

The experiment centered on deep learning, a class of machine learning algorithms designed, in part, to mimic the human brain. Lawhern describes the brain as a decidedly hierarchical system. Multiple layers within the brain specialize in performing certain kinds of tasks, and each layer's output is aggregated and passed upstream to higher levels of the brain.

The human visual system is a good example. Extensive research in neural science indicates that in the early stages of visual processing, the brain learns to understand edges and corners. Higher up in the visual system, it learns to aggregate edges to form shapes, such as a square with four corners and straight edges. A little deeper into the processing stream, the brain learns to recognize objects.

“Objects are combinations of shapes. For example, if you look at a car [from the side], you would see two wheels that are circles, and a car body looks something like a box,” Lawhern elaborates. “A brain breaks down the information into little bits and pieces first. Those bits get aggregated to higher-level concepts and then eventually to an understanding of the visual you see.” He adds that deep learning algorithms attempt to process information in a similar way, with a hierarchical representation of data that enables more effective understanding.
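The edges-to-shapes-to-objects hierarchy Lawhern describes can be sketched as a pipeline of composed layers, each consuming the previous layer's output and producing a more abstract description. The "features" below are toy stand-ins for a one-dimensional signal, not a real vision model.

```python
# Illustrative hierarchy: raw signal -> edges -> shapes -> objects.
# Each function is a stand-in for one processing layer.

def detect_edges(pixels):
    # Layer 1: mark positions where brightness changes sharply.
    return [i for i in range(1, len(pixels))
            if abs(pixels[i] - pixels[i - 1]) > 0.5]

def group_shapes(edges):
    # Layer 2: aggregate consecutive edge pairs into simple "shapes"
    # (here, intervals between a rising and a falling edge).
    return list(zip(edges[::2], edges[1::2]))

def label_objects(shapes):
    # Layer 3: combine shapes into an object-level description.
    return f"{len(shapes)} object(s) detected"

def recognize(pixels):
    # The full hierarchy, layers composed bottom to top.
    return label_objects(group_shapes(detect_edges(pixels)))
```

A deep neural network learns its layer functions from data rather than having them hand-written, but the flow of information, small local features aggregated into higher-level concepts, is the same.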

While the research may one day prove beneficial to soldiers, it also could aid government data mining or help intelligence or law enforcement analysts with countless images to comb through. YouTube recently indicated that more than 400 hours of video content are uploaded every real-time minute, Lawhern points out. Satellite imagery is amassing at the same dizzying rate, he says, with satellites collecting visual data around the clock. The resulting collections can reach several hundred thousand terabytes, so much of the data goes unseen by human analysts, he notes.

“Because the data is being collected at such a fast rate, it’s actually difficult for people to look through the video content for anything that might be of interest. The question is whether we can use computer-vision artificial intelligence to help humans search through the database for interesting objects that need to be further examined,” Lawhern states. “You want to leverage the ability of AI to quickly process data, but you also want to have human expertise influence the search.”
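The triage workflow Lawhern sketches, a fast model scoring everything and a human reviewing only the most interesting items, reduces to a rank-and-cut step. In this minimal sketch, `score_fn` stands in for any computer vision model; the function name and parameters are illustrative assumptions, not the laboratory's software.

```python
# Hypothetical human-AI triage: the model scores every frame, and only
# the top-scoring handful is queued for a human analyst to examine.

def triage(frames, score_fn, top_k=3):
    """Return the top_k frames the analyst should review first,
    ordered from most to least interesting per the model's score."""
    ranked = sorted(frames, key=score_fn, reverse=True)
    return ranked[:top_k]
```

For example, with ten frames and a score function that peaks at frame 7, `triage(list(range(10)), lambda f: -abs(f - 7), top_k=2)` surfaces frames 7 and 6 for review, and the analyst never has to scan the other eight.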

The project is a part of a larger area of scientific study at the Army Research Laboratory. In 2010, lab officials initiated the Cognition and Neuroergonomics Collaborative Technology Alliance, which draws together experts from multiple sectors to harness advances in neuroscience.

The main purpose of the umbrella program is to understand how the brain operates in complex environments while performing complex tasks. “We must be able to provide and leverage a clear and working understanding of how the human brain functions when faced with real-world tasks and real-world operational settings,” Lawhern offers.
