Human-Computer Interface Gets Personal
Photo: The Defense Advanced Research Projects Agency and four industry teams are developing an augmented cognition concept that would enable computer systems to understand the cognitive state of warfighters. To evaluate the technologies, Honeywell Laboratories, one of the industry participants, equipped a soldier with an experimental system.
Conversations with computers are usually pretty one-sided: Users may yell obscenities; cursors continue to blink innocuously. But a collaborative effort between the military and industry may one day replace this one-way, futile discourse with systems that understand the user’s cognitive state and then respond accordingly. The implications of this capability reach beyond ensuring that warfighters are primed to receive critical information. It also could prove instrumental in designing new systems and in improving military training.
Research into systems that can learn to adapt to their users began in 2001 with funding from the Defense Advanced Research Projects Agency (DARPA) under its Augmented Cognition (AugCog) program.
DARPA is not alone in its quest to find ways to facilitate communication between man and machine and to ease information overload for warfighters. Scientists at the U.S. Army’s research organizations also are pursuing the concept.
Dr. Trish Ververs, research scientist and program manager of the AugCog program for Honeywell Laboratories, explains that the idea of helping computers understand their users has been around for many years. But, while developments in information systems have improved how computers communicate with humans, comparable progress has not occurred in collecting information about humans and sharing it with computers. Based on this premise, AugCog researchers turned to human neurophysiological and physiological data—brainwaves, heart rate, galvanic skin response—as the telltale signs of human reaction to information flow.
Ververs notes that information about human behavior also can be gathered in ways that are less invasive than the body-contact sensors neurophysiological collection requires. For example, using relatively common technology such as cameras, researchers can monitor computer users’ eye movement and gaze to determine how they are tracking data on a screen or whether they are skipping items. Pupil size also can reveal information about cognitive state and degree of alertness, she adds.
This data can lead to the development of techniques known as mitigation strategies that would call attention to information, such as changing the modality of information delivery. Perhaps a tone would sound to alert the user that information was skipped, she suggests. Different types of strategies would have to be examined closely, Ververs stresses, because they must be user-friendly or they will be turned off.
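As a rough sketch of how such a mitigation strategy might work, the following example checks whether recorded gaze points ever entered a message’s on-screen region and, if not, escalates to an audible alert. The message regions, thresholds and function names are illustrative assumptions, not details from the AugCog program.

```python
# Hypothetical sketch of a gaze-based mitigation strategy: if eye-tracking
# data shows the user never fixated on a message's screen region, change
# the delivery modality by sounding a tone. All values are illustrative.

from dataclasses import dataclass

@dataclass
class Message:
    text: str
    region: tuple  # (x_min, y_min, x_max, y_max) of the message on screen

def was_viewed(gaze_points, region, min_samples=3):
    """Return True if enough gaze samples fall inside the message region."""
    x0, y0, x1, y1 = region
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits >= min_samples

def mitigate(message, gaze_points):
    """Escalate to an audible alert if the message appears to be skipped."""
    if was_viewed(gaze_points, message.region):
        print(f"Viewed: {message.text!r}")
    else:
        print(f"TONE + re-present: {message.text!r}")

# Gaze samples that never enter the message's region trigger the tone.
msg = Message("Enemy contact reported", region=(600, 40, 1000, 120))
mitigate(msg, [(120, 400), (130, 410), (500, 380), (140, 420)])
```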
But in a military operational setting, determining how to assess the user’s cognitive state and ensure that critical information is viewed is a bit trickier than installing cameras and ringing bells. In addition to knowing a user’s cognitive state, mission details as well as the content and priority of information being sent must be incorporated to make the augmented cognition capability valuable. “Given that context, you can manipulate how information flows to that decision maker,” Ververs explains.
When this ebb and flow of cognitive state information takes place strictly between man and machine, it is referred to as a closed-loop system. For example, the data’s priority would be set by the sender or by the receiver, such as a platoon leader; all other information would be considered secondary. The AugCog system would enable the computer to assess the platoon leader’s cognitive state to determine whether he is ready to receive the information, then it would ensure he sees it. “If they are overloaded and there’s a chance they may not have seen it, then you might change how that information gets to them. You have different modalities you can use, and all of this research is developing the concept about how you might manipulate that,” Ververs relates. Researchers must be very careful in how this is done, she adds, because concepts of operations and rules of engagement also must be taken into consideration.
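A minimal sketch of this closed-loop delivery logic, assuming a scalar workload estimate from the sensors and a sender-assigned priority; the thresholds, priority labels and modality rules below are invented for illustration.

```python
# Illustrative closed-loop delivery rule: combine sender-assigned priority
# with the receiver's estimated cognitive workload to decide whether to
# deliver visually, add an auditory channel, or defer. Thresholds invented.

HIGH_WORKLOAD = 0.75  # assumed cutoff on a 0-1 workload estimate

def route_message(message, priority, workload, deferred):
    if priority == "critical":
        if workload > HIGH_WORKLOAD:
            # An overloaded user may miss a visual cue; add audio as well.
            return f"AUDIO + VISUAL: {message}"
        return f"VISUAL: {message}"
    if workload > HIGH_WORKLOAD:
        # Secondary traffic waits until the workload estimate drops.
        deferred.append(message)
        return f"DEFERRED: {message}"
    return f"VISUAL: {message}"

queue = []
print(route_message("Enemy contact north ridge", "critical", 0.9, queue))
print(route_message("Resupply ETA 40 min", "routine", 0.9, queue))
print(route_message("Weather update", "routine", 0.3, queue))
print("Deferred until workload drops:", queue)
```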
Creating this type of human-computer interaction requires researchers to refine a number of components of the system. On the human side, they must determine which physiological and neurophysiological measures are the best indicators of cognitive state. Ververs acknowledges that not only does this information vary from person to person, but it also can change in the same individual from day to day.
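Because the indicators vary from person to person and from day to day, a practical system presumably would calibrate against a fresh baseline for each user and session. The sketch below normalizes a live reading against a baseline window recorded at the start of a session; the feature choice and window are assumptions for illustration.

```python
# Hypothetical per-session calibration: score a live physiological reading
# against a baseline recorded for this user today, since absolute values
# vary by person and by day.

import statistics

class BaselineNormalizer:
    def __init__(self, baseline_samples):
        self.mean = statistics.fmean(baseline_samples)
        self.sd = statistics.stdev(baseline_samples)

    def z(self, value):
        """Deviation of the current reading from today's resting state."""
        return (value - self.mean) / self.sd

# Resting heart rates (beats per minute) from a quiet calibration period.
norm = BaselineNormalizer([62, 64, 61, 63, 65, 62, 64])
print(round(norm.z(80), 1))  # elevated relative to *this* user's baseline
```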
In terms of heart rate, measuring the interbeat interval (IBI) is one possible way to gauge cognitive state. The IBI is the time between two successive beats, seen as the spacing between blips on a heart monitor. Changes occur in response to physical activity such as running; however, changes also can indicate a parasympathetic response that tells researchers about cognitive state. Although the latter takes longer to evaluate, it can provide information about a subject in approximately one minute.
“You’re not going to do this in milliseconds. It’s not going to help you schedule information at this very moment, but it might be able to assess the cognitive readiness of an individual to take on another task. So in this way, you can use this in an open-loop manner,” Ververs explains.
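To make the IBI measure concrete, here is a small sketch that derives interbeat intervals from heartbeat timestamps and averages them over roughly one minute, in keeping with Ververs’ point that the measure suits open-loop readiness checks rather than millisecond-level scheduling. The sample data are simulated.

```python
# Sketch: derive interbeat intervals (IBIs) from heartbeat timestamps and
# average them over the last minute. A shortening mean IBI can hint that
# the user is less ready to take on another task. Data simulated.

def interbeat_intervals(beat_times):
    """IBIs in seconds: differences between successive beat timestamps."""
    return [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]

def mean_ibi(beat_times, window_s=60.0):
    """Average IBI over the final window_s seconds of the recording."""
    cutoff = beat_times[-1] - window_s
    recent = [t for t in beat_times if t >= cutoff]
    ibis = interbeat_intervals(recent)
    return sum(ibis) / len(ibis)

# Simulated beats at 0.8-second spacing (75 beats per minute).
beats = [i * 0.8 for i in range(100)]
print(f"mean IBI over last minute: {mean_ibi(beats):.3f} s")
```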
Photo: A U.S. Army commander, equipped with electroencephalograph equipment to collect data about his cognitive state, tests the augmented cognition concept during an evaluation at Aberdeen Proving Ground.
But gathering information about a warfighter’s readiness state is only half the equation in either the open- or closed-loop systems. Signal processing is another component that must be refined, Ververs explains.
Honeywell is working closely in this area with its university research partners, whose expertise spans brain signals, sensors and signal processing.
Although much of the work on the concept appears to involve behind-the-scenes development, experiments also have been conducted to determine the best ways to bring the AugCog concept to fruition. In one study, eight men between the ages of 20 and 42 were observed while they conducted three simultaneous tasks: identifying targets, monitoring mission tasks and maintaining radio counts of civilians, enemies and friendly forces. The mitigation strategy used was a computer system called the communications scheduler, which was programmed with a set of rules. It presented messages based on each participant’s cognitive state profiles as well as message characteristics and current task load.
Six participants completed three experimental trials: low workload, high workload with no mitigation and high workload with the communications scheduler. The primary goal of the trials was to improve performance during a high workload state. In the high workload trials, a high-task-load environment was created by increasing the number of radio messages sent to the participant. The scheduler assisted each person by rescheduling tasks to reduce the load. Preliminary findings indicate an overall 94 percent improvement in monitoring mission tasks and a 36 percent boost in the recall of radio counts when the communications scheduler was used.
This is the kind of progress DARPA had in mind when it started the program five years ago. According to Dr. Amy Kruse, AugCog program manager at DARPA, the effort emerged in part from the passion of DARPA program manager Cmdr. Dylan Schmorrow, USN, for the topic, as well as from “a realization that human-computer interfaces were essentially ‘dumb’ to the state of the user. Under conditions of stress, high workload and streaming information—like we see in the military environment—the computer should be ‘smart’ and recognize and adapt to these states,” she explains.
Because the goal of the program is not only to detect a person’s cognitive state but also to do it in real time in a military operational environment, the research team has had to overcome significant challenges. For example, the commercial neurophysiology sensor industry is dominated by companies that design systems for clinical settings, which can accommodate large equipment and multitudes of wires. For operational conditions, however, large, heavy and cumbersome equipment is not practical, so a great deal of effort has gone into engineering the devices into a smaller, less-intrusive footprint, Kruse states.
According to Kruse, reaching one critical milestone of the program sometimes led to more questions that needed answers before researchers could move forward. One such area was real-time detection of cognitive state. “There was a considerable effort to hit this milestone. Once we hit it, it took a moment for us to regroup and say, ‘OK, we’ve got cognitive measures in real time, now what do we do with them?’” she says.
The researchers had to baseline the tasks of interest for the workload they impose on the human user. Once that was accomplished, they could begin to interpret the data in these realistic conditions. From this emerged the notion of classification—having an algorithm that can combine data from different sensors and compute a likelihood that the user is in a particular state. “This work is ongoing in the program and will reap benefits across the cognitive neuroscience realm,” Kruse says.
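As a toy illustration of the classification idea, the sketch below combines features from several sensors into a single likelihood that the user is in a high-workload state, using a hand-coded logistic model. The feature names and weights are invented; a fielded system would learn them from baselined training data.

```python
# Toy sensor-fusion classifier: fuse features from different sensors into
# one likelihood that the user is overloaded, via a hand-coded logistic
# model. Feature names and weights are invented for illustration.

import math

WEIGHTS = {"eeg_theta_power": 1.8, "heart_rate_z": 1.2, "gsr_z": 0.9}
BIAS = -2.0  # shifts the default prediction toward "not overloaded"

def p_overloaded(features):
    """Logistic combination of sensor features -> probability of overload."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

calm = {"eeg_theta_power": 0.2, "heart_rate_z": -0.1, "gsr_z": 0.0}
stressed = {"eeg_theta_power": 1.4, "heart_rate_z": 1.6, "gsr_z": 1.1}
print(f"calm:     {p_overloaded(calm):.2f}")      # low likelihood
print(f"stressed: {p_overloaded(stressed):.2f}")  # high likelihood
```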
The research team also realized that building a closed-loop system requires a systems engineering approach. Kruse relates that although the university researchers are world experts in brain signals, sensors and signal processing, most are not engineers. To address this issue, DARPA brought in systems integrators during the second phase of the program so that all of its “good science had a chance of turning into good product,” she relates.
Although DARPA’s AugCog program is focusing on creating specific practical military systems, the lessons being learned are much more far-reaching. Kruse explains that augmented cognition opens a window into the processes in the brain that can be used across a spectrum of areas, what she calls operational neuroscience.
“If I understand cognitive functioning in real time, then I have flexibility. I can use this information to design new systems—a 21st century approach to human factors. I no longer have to guess or infer the workload an interface might place on a user. I can measure it based on these real-time sensors,” she maintains.
Kruse believes knowledge about cognitive function also will help in training because it may provide insight about warfighters’ levels of comprehension as they move through training course curricula. “I might even be able to track progress from novice to expert on a particular task. That would be very powerful,” she states.
Kruse and Ververs agree that some of the technologies that result from this program will be implemented as soon as they are robust enough to withstand operational conditions. For example, the Army’s Future Force Warrior (FFW) program is planning a physiological sensor suite as part of its soldier’s ensemble. “The Honeywell team is supporting that endeavor with classification algorithms for that physiological data as a first step. These could be deployed in the three- to five-year time frame, possibly sooner,” Kruse says.
She predicts that neurophysiological sensors first will be introduced into fixed operational environments such as command and control or other types of workstations. This could easily occur within five years, Kruse maintains.
Web Resources
DARPA Augmented Cognition Program: www.darpa.mil/ipto/programs/augcog/index.htm
Honeywell Laboratories: www.honeywell.com/honlab/tech.html