
Assessing Communications of Humans and Autonomous Systems

ARL researchers examine robot-human team interactions to support soldiers.

With plans for future U.S. Army soldiers to work with a cadre of autonomous systems, scientists at the Army Research Laboratory are examining the intricacies of communication to support effective operations between groups of soldiers and robotic systems. They are finding ways to measure communication and study conversational processes to understand human-autonomy team performance, trust and cohesion.

The research is needed to help inform commanders managing these types of teams on a complex future battlefield and is key to developing the kind of combat vehicles that will carry the human-autonomy teams, say research scientists Amar Marathe and Anthony Baker, from the Army Research Laboratory’s (ARL’s) Center for Agent-Soldier Teaming.

“Part of the challenge from a communications perspective is if you look at a team of soldiers working with a whole suite of these autonomous technologies, how do they effectively operate all of these types of things simultaneously?” says Marathe.

Through its Next Generation Combat Vehicle (NGCV) program, the Army is rapidly pursuing the development of a portfolio of new armored vehicles and tanks to provide more mobility, lethality and protection in close combat. To combat adversarial threats—such as precision-guided munitions, advanced rocket-propelled grenades, unmanned aerial systems or sophisticated electromagnetic weapons—the service is calling for robust human-machine teams to operate within the advanced vehicles.

“The concept for the NGCV is that you have manned vehicles and unmanned vehicles,” Marathe says. “What we are trying to do is get the soldiers that are in the manned vehicles to work together and work with the technology that is available to them, with those systems that are in both the manned and unmanned vehicles, to effectively execute missions. It is our job to really figure out how to make that human-autonomy team function.”

The researchers are considering several key categories of autonomous systems with which soldiers will interact simultaneously, including ground mobility systems, robots and unmanned aerial systems, Marathe noted. “We are also looking at things like embedded algorithms within these systems for target recognition, or computer vision-based algorithms that are going to try to understand the world around us,” he says.

Knowing the possibilities of such advanced robotic systems only opened the door to further investigation into how humans would effectively team with them, Baker adds. “If we have those kinds of capable autonomous agents that have intelligence, you can think about it as a human,” he clarifies. “If the [computer] is saying certain things, how does it say it, does it apologize after making a failure, and if it doesn’t, maybe you’re going to trust it less when it makes a mistake. There are all sorts of considerations that come into play.”

When Baker first searched the literature on communication in human-autonomy teams, he found that existing research did not address multiple robotic systems or did not apply to military operations. “A lot of the research in this domain is focused on teams of like two, three or four people and generally with one autonomous system if it even involves that,” he states. “But an Army platoon may involve 20 or 30 soldiers, multiple vehicles and autonomous systems. There is a clear need to understand how those communication patterns in those groups are coming together.”

In March, Baker and other researchers from the ARL and Arizona State University published a toolbox of ways to measure such communications in the study, “Approaches for Assessing Communication in Human-Autonomy Teams.” The scientists examined 11 approaches for assessing human-autonomy team communication: four are based on the structure of the interactions; two come from dynamical systems theory, looking at the dialogue across time; two examine the human’s speaking volume, pitch or facial expressions to detect emotional states; and three look at the exact word choice and content of the dialogue.

“The main thing that we want to be able to do is no matter how the teams are interacting or speaking or communicating, we wanted a tool to be able to analyze that or a measurement method to be able to study that, so that we can make predictions about are they trusting each other enough, is there a good level of team cohesion or are they performing to the standard that we’re hoping for, and so on,” Baker says.

The researchers found that several of the assessment approaches, including the aggregate communication flow, social network analysis and relational event approaches, work with communications data that records who sent and received each message along with a timestamp.

“Aggregate communication flow and social network analysis can be used even without message timestamps, but due to this, they provide less nuanced data than approaches that can leverage interaction timing for more in-depth analyses,” the study notes. “For scenarios in which only one data type is available or feasible for capture, the voice, facial expression and linguistic synchrony approaches are especially useful.”
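
To make those data requirements concrete, the short Python sketch below shows how a log of senders, receivers and timestamps could be tallied into an aggregate communication flow count and a simple social network density figure. The message log, team roles and metrics are invented for illustration; this is a rough sketch, not code or data from the study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical message log: (sender, receiver, timestamp in seconds).
messages = [
    ("commander", "gunner", 3.2),
    ("gunner", "commander", 5.8),
    ("commander", "driver", 9.1),
    ("robot_mule", "commander", 12.4),
    ("commander", "gunner", 15.0),
]

# Aggregate communication flow: messages exchanged per directed pair.
flow = Counter((src, dst) for src, dst, _ in messages)

# Social network view: undirected tie strength between team members.
ties = Counter(frozenset((src, dst)) for src, dst, _ in messages)
members = {m for src, dst, _ in messages for m in (src, dst)}

# Density: ties observed divided by ties possible for a team this size.
possible = len(list(combinations(sorted(members), 2)))
density = len(ties) / possible if possible else 0.0

print("Directed flow:", dict(flow))
print(f"Network density: {density:.2f}")  # 3 of 6 possible ties -> 0.50
```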

In addition, the researchers found that dynamical approaches were beneficial. “It is often important to understand team dynamics over time: how teams change from one interaction to the next, how their coordination is affected by given scenarios, how they adapt their behaviors and interactions throughout the course of a task, and so on,” the study suggests.
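
A simple way to picture that time dimension, assuming only message timestamps are available, is to bin a team’s messages into fixed windows and watch how the interaction tempo shifts across a task. The sketch below illustrates the general idea rather than any specific dynamical systems measure from the study.

```python
def messages_per_window(timestamps, window_s=60.0, task_length_s=300.0):
    """Bin message timestamps (in seconds) into fixed-length windows."""
    n_windows = int(task_length_s // window_s)
    counts = [0] * n_windows
    for t in timestamps:
        idx = min(int(t // window_s), n_windows - 1)
        counts[idx] += 1
    return counts

# Hypothetical timestamps from a five-minute task run.
timestamps = [4, 12, 31, 58, 65, 80, 122, 130, 133, 190, 260, 295]
print(messages_per_window(timestamps))  # -> [4, 2, 3, 1, 2]
```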

The study also considered the different ways computer-based systems can interact with humans, including through verbal communications, touch, haptic feedback, gestures or multimodal interactions. The autonomous systems can either be a communication observer or a participant, with the former assessing linguistic similarities in real time while a communication participant would have more “sophisticated capabilities, such as understanding how these linguistic features correspond to team processes and the external environment or understanding how to produce natural language to make lexical, semantic and syntactic features comparable across entities.”
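
As a toy example of what a communication observer might compute in real time, the sketch below scores the lexical overlap between two utterances. The tokenization and Jaccard metric are simplifying assumptions standing in for the more sophisticated linguistic synchrony measures the study describes.

```python
def lexical_overlap(utterance_a: str, utterance_b: str) -> float:
    """Jaccard similarity between the word sets of two utterances."""
    words_a = set(utterance_a.lower().split())
    words_b = set(utterance_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

print(lexical_overlap("target identified two o'clock",
                      "confirm target two o'clock"))  # -> 0.6
```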

The researchers cautioned that autonomous systems were only now “becoming capable enough to understand, interact and adapt more naturalistically” with their human counterparts.

To verify and test some of the communication assessment approaches, the ARL researchers conducted several field and simulation experiments. In one experiment at Fort Benning, Georgia, the scientists looked at the specific exchanges of two four-person teams, one group of soldiers and one group of Marines, as they engaged targets on a gunnery range, performing tasks similar to those for vehicle gunnery qualification.

“We got audio recordings of what the teams were saying, and I transcribed everything they said and measured who spoke to whom, as well as how often and at what times they spoke,” Baker observes. “I found that the soldier team was very rigid and very particular about their communication structure. Their back-and-forth communication between the vehicle commander and gunner made up almost half of everything that was said. The commander and radar operator didn’t even say anything between each other. It was very focused on this one pair of soldiers who were interacting with each other to do the gunnery tasks.”

In contrast, Baker says, the Marines’ communication was a lot more distributed and flexible across the team. “The communication between the commander and the gunner in that team accounted for about 35 percent, and there was a lot more of an even balance of the communication between all the rest of the teammates,” he noted. “Now, what that tells us on the surface is that there was obviously some kind of previous training. The soldier team came into this with more experience, and they’ve also worked together before. The Marine team has worked together before, but they were not as familiar with the gunnery tasks. And what that tells me is the Marines are trying to work out the task, trying to send out information between each other to share awareness of what’s going on in the environment, what they’re seeing, what they’re doing. But they didn’t perform as well in the end. So, if we know that type of information is based on who is talking to whom, maybe that makes it easier to predict how well they are going to perform.”
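
The kind of tally Baker describes can be sketched in a few lines: count how much of everything said passes between one pair, such as the commander and gunner, relative to the team’s total. The log below is invented for illustration and is not the Fort Benning data.

```python
def pair_share(messages, member_a, member_b):
    """Fraction of all messages exchanged, in either direction, between two members."""
    if not messages:
        return 0.0
    pair = {member_a, member_b}
    between = sum(1 for sender, receiver in messages if {sender, receiver} == pair)
    return between / len(messages)

# Invented log of (sender, receiver) pairs for a four-person crew.
soldier_log = ([("commander", "gunner")] * 9 + [("gunner", "commander")] * 5 +
               [("driver", "gunner")] * 10 + [("driver", "commander")] * 6)

share = pair_share(soldier_log, "commander", "gunner")
print(f"Commander-gunner share of all messages: {share:.0%}")  # -> 47%
```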

From the experiments, Baker and Marathe learned that obtaining operationally relevant communications data can be a problem. “The current data collection methods are not always efficient enough to support fast, accurate, real-time analysis of what is being said by a team,” Baker notes. “For any measurement approaches that need to know what is being said, the actual content, we often have to wait until someone can record it and then transcribe it and then do the analysis. Even with smart dictation or voice recognition software, it is almost entirely focused on you speaking very clearly in a quiet room. That is not very conducive to being in a noisy vehicle and in a real operational environment with all kinds of jargon and multiple people speaking over each other. So, that is a really big hurdle.”

In addition, the researchers must build up their understanding of how the different characteristics of autonomous teammates affect the behavior of soldier teams in various scenarios.

The autonomous systems also have huge data requirements to function properly. “That’s one of the biggest challenges for fielding artificial intelligence-based systems,” Marathe suggests. “If we can take knowledge from our soldiers and understand their dynamics based on their communications, based on their actions, then you can start helping the machines or the autonomous systems adapt to the current situation much more readily and operate much more effectively in a situation.”

Yet progress still needs to be made to improve the deduction abilities of the autonomous systems, Marathe continues. “Another big challenge on the robotics side is being able to infer the context,” he offers. “For a lot of these autonomous systems, you can demonstrate tremendous capability in very specific situations for which the autonomy is built, but when you operate outside of that setting, when you operate in a new environment, you often have trouble.”

In addition, Baker aims to tackle how human-autonomy team performance, trust or cohesion can be predicted from a team’s communication pattern. “That is still sort of an open question,” he acknowledges.

The researchers will continue to improve how quickly they can capture and leverage communication data, which will help inform how the NGCVs are developed and serve the broader Army goal of helping commanders understand team dynamics and effectiveness.

“At the end of the day, we know that our soldiers and our commanders are the most capable and the most innovative in the world,” Baker concludes. “We know that they are going to generate new possibilities and new uses for these things if we put the tools and the science into their hands. We want to enable them to win decisively in a complex and highly competitive world so, if we can keep looking at these types of human-autonomy teaming issues, especially as communication is so important, we know that we’re going to achieve that.”