
Figuring Out the Human Side of Command and Control

Analysts at the Air Force’s JADC2 lab delve into data on decision-making to determine what part of C2 humans should perform.

In another important step before soliciting new and emerging technologies from industry for the Department of the Air Force’s command, control, communications and battle management efforts, including the Advanced Battle Management System and joint all-domain command and control, a Shadow Operations Center-Nellis (ShOC-N) team at Nellis Air Force Base, Nevada, is examining the human side of command and control (C2) and battle management. The team is collecting initial data on airmen’s C2-related actions and decision-making.

“It is very important to collect that data because you need to know if the new and emerging technology is better than the old or legacy [technology],” explained Senior Intelligence Analyst Tina Hicks, who is leading the team under the guidance of Lt. Col. John Ohlund, USAF, commander, ShOC-N. “Are they able to make a decision faster, based on that artificial intelligence or that machine learning software? Or is ‘Technology A’ actually hindering? Is it not user-friendly and actually causing the warfighter to take longer to make that decision? We’re looking at how long it takes to make a decision and how accurate that decision is.”

The ShOC-N, which is the 805th Combat Training Squadron, is the Department of the Air Force’s designated Advanced Battle Management System and joint all-domain command and control lab. The ShOC-N team is leveraging an engineering model called the Transformational Model-Battle Management (TM-BM), created by the Advanced Battle Management System Cross Functional Team, which distills command and control and battle management down to their most fundamental steps and processes.

“We’re on the advent of using computers to help humans in warfare,” Col. Ohlund stated. “And today, we have a lot of legacy systems that humans operate on a day-to-day basis. What TM-BM largely does for the Department of the Air Force is it identifies the tasks that the humans are currently doing. And it really defines the requirements as to what tasks humans should do that require cognitive [input] and which ones are best optimized for computers to perform.”

To begin building the C2 human performance data set, the ShOC-N team looked at one of the 13 main categories identified in the TM-BM, called “Match Effect.” They first examined that C2 function during a large event last December at Nellis.
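Col. Ohlund’s description of the TM-BM boils down to a task-allocation question: catalog what operators actually do, then mark which tasks demand human judgment and which are routine enough to hand to a machine. The short Python sketch below illustrates only that abstract idea; every task name and human-versus-machine label in it is a made-up assumption, and only the “Match Effect” category name comes from the lab’s terminology.

```python
# A minimal, illustrative sketch (not the actual TM-BM): tag notional C2 and
# battle management tasks with whether they need human cognitive input or are
# candidates for automation. Only the "Match Effect" category name comes from
# the article; every task name below is made up.
from dataclasses import dataclass
from enum import Enum


class Performer(Enum):
    HUMAN = "human"      # requires judgment or commander's-intent context
    MACHINE = "machine"  # routine and well-bounded, a candidate for automation


@dataclass
class C2Task:
    category: str        # one of the model's main categories, e.g. "Match Effect"
    name: str            # hypothetical task label
    performer: Performer


TASKS = [
    C2Task("Match Effect", "Weigh proposed pairing against commander's intent", Performer.HUMAN),
    C2Task("Match Effect", "Cross-reference available weapons against target type", Performer.MACHINE),
    C2Task("Match Effect", "Check airspace deconfliction times", Performer.MACHINE),
]

# Tasks flagged MACHINE become the automation candidates the colonel describes.
automation_candidates = [t.name for t in TASKS if t.performer is Performer.MACHINE]
print(automation_candidates)
```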

In addition, the ShOC-N team brought in human performance specialists from the 711th Human Performance Wing at Wright-Patterson Air Force Base, Ohio, to assist them in conducting experiments and collecting and integrating data. The wing is also examining airmen’s stress levels as they make decisions. “They actually have a [device] that can count the blink of your eyes, how many times you blink during a session, which is pretty interesting,” Hicks shared.
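The article describes the instrumentation only as a device that counts blinks during a session, so the fragment below is a hedged illustration of what such a measure might reduce to: raw blink timestamps converted into a blinks-per-minute rate that analysts could line up against decision events. The function name and all numbers are invented for the example.

```python
# Minimal sketch, not the 711th Human Performance Wing's actual tooling:
# convert raw blink timestamps from an eye-tracking device into a
# blinks-per-minute rate that could be compared against decision events.
# All values below are hypothetical.

def blink_rate_per_minute(blink_times_s: list[float], session_length_s: float) -> float:
    """Return the average number of blinks per minute over a session."""
    if session_length_s <= 0:
        raise ValueError("session length must be positive")
    return len(blink_times_s) / (session_length_s / 60.0)


# Example: 42 blinks recorded over a 3-minute (180-second) session.
blinks = [i * 4.2 for i in range(42)]  # hypothetical timestamps in seconds
print(round(blink_rate_per_minute(blinks, 180.0), 1))  # 14.0 blinks per minute
```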

So far, the experiments have involved how humans perform using legacy C2 systems compared to new or emerging technologies. “Typically, we’ll run an event on a legacy system,” Hicks explained. “We’ll have those processes timed, and we’ll have our data and instrumentation team sit next to the subject matter experts and collect that data, and we use it as a baseline. Then, we bring in the warfighter to sit with the new emerging tech with the same scenario that we ran on legacy. We’ll try to compare apples-to-apples on the process time and the accuracy based on the first group.”
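The baseline-versus-new-technology comparison Hicks describes can be pictured as a small before-and-after summary of decision times and accuracy. The sketch below is not the lab’s actual instrumentation and uses invented numbers; it only shows the kind of apples-to-apples deltas the data team would be looking for.

```python
# Minimal sketch of the comparison described above, with made-up numbers:
# per-decision times (seconds) and correctness recorded for the same scenario
# run first on a legacy system (the baseline) and then on an emerging tool.
from statistics import mean

legacy = {"times_s": [95, 110, 102, 120], "correct": [1, 1, 0, 1]}
emerging = {"times_s": [70, 66, 81, 75], "correct": [1, 1, 1, 0]}


def summarize(run: dict) -> tuple[float, float]:
    """Return (mean decision time in seconds, accuracy as a fraction)."""
    return mean(run["times_s"]), mean(run["correct"])


base_time, base_acc = summarize(legacy)
new_time, new_acc = summarize(emerging)

# Does the new tool speed decisions up without hurting accuracy,
# or is it actually hindering the warfighter?
print(f"time delta: {new_time - base_time:+.1f} s, accuracy delta: {new_acc - base_acc:+.2f}")
```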

The human factor, however, can be a challenge, given the confluence of choices and life experiences, Hicks continued. “The human is a complex machine,” she said. “And there are so many variables when it comes to trying to do an experiment with humans because everyone does something differently. They could have the same task and four different people will perform it in four different ways. You have to pay attention to the small things, and why did they make that decision?”

Even across the Department of the Air Force, warfighters have learned different processes based on their service experiences, such as where they were stationed at an Air Operations Center or located for a particular mission. Hicks said it was “very complex” to measure multiple warfighters running through C2 processes while trying to make a consistent comparison. “It’s more like apples-to-pears sometimes,” she noted. “It’s interesting. It is a huge learning curve.”

From his perspective as commander, Col. Ohlund sees human performance as tied to the systems airmen first used when learning to perform C2 and battle management, and that is a problem for the future operating environment.

“For example, we have an airman who grew up on ‘System A,’” he said. “They learned how to mash buttons on keyboards based on how the company designed that software. Then you have Airman B, who comes in using ‘System B.’ They performed a similar function, but a different company designed the whole product. And they mash the buttons slightly different. For all intents and purposes, the design of how the humans were trying to extract information was based on the system that they grew up on. And that’s fundamentally flawed. If we’re learning how to mash buttons or how to pull information out based on how a particular company designed a solution, going into the future, it may not look like that at all because that particular task, whatever it is, may be automated altogether.”

The underlying premise, the commander continued, is that automating some C2 functions will aid warfighters.

“The overarching hypothesis is that the computer will help us do either ‘A or B’ and be more accurate,” Col. Ohlund said. “In the December experiment, we ran multiple problems for the human to do. In the end, they probably could have done all 30 to 50 tasks—if the computer helped them do Tasks A, B and C. But they didn’t because they are using a legacy system. We need to automate. And when we start to automate, then you can start to get the machine to learn. Once the machine can learn, then you can apply artificial intelligence.”

Lastly, the commander noted that any future solutions from industry must take into account complex human-computer interactions. For this, the Air Staff “is very clear” that the answer is not “the next best” widget.

“They want to be agnostic of companies and current products because they want to make sure they design it the right way, if you will, with the computer aiding the human, but not given any constraints by current limitations of software or software design,” Col. Ohlund emphasized.

The ShOC-N will continue its human performance data efforts through the summer and into the next fiscal year, the leaders shared. Hicks also noted that the team is planning a “complex scenario” for December, but she couldn’t unveil any details yet.  

“We’re at the forefront of data collecting,” the ShOC-N commander said. “And I think once people see what the insights are with respect to the data collection, I think that is going to stimulate a lot more thought and drive some of the future experiments.”

The Air Staff will continue to define what it wants to get out of the human performance effort, he stated, noting that for the secretary of the Air Force, the guiding principle is to beat China.