Navigating ‘Human-in-the-Loop’ and ‘Human-on-the-Loop’
Through experimentation, the Air Force is advancing battle management and command and control structures for effective human-machine teaming, where computers play a role in decision-making across a complex, multidomain future battlefield.
The service’s key battle laboratory, the 805th Combat Training Squadron, known as the Shadow Operations Center, or ShOC-N, at Nellis Air Force Base, Nevada, along with the Advanced Battle Management System Cross-Functional Team (the ABMS CFT), has spent several years experimenting and developing the tools needed to examine decision-making capabilities for the future.
The ABMS CFT has “a couple of hypotheses” that they are examining in conjunction with acquisition teammates.
“One is decision advantage and human-decision making,” said Col. John Ohlund, USAF, director of the ABMS CFT, in an interview with SIGNAL Media. “We want to understand partnership with industry, and are there available commercial, off-the-shelf tools that we could procure, put into acquisition systems and start to use, and see how quick we can go.”
In their latest testing of artificial intelligence (AI) decision aids for battle managers, the lab and the team advanced to machine-generated battle courses of action, or battle COAs.
With that success, warfighters are beginning to discern the difference between “human-in-the-loop” and “human-on-the-loop,” the latter giving the machine a more sophisticated role in preparing the stages of decision-making.
“For the human-in-the-loop, it means that the human is going to be a part of the decision-making part of creating the decision to make a decision, versus human-on-the-loop, which means that the computer is going to be making all the recommendations and triaging, prioritizing and sequencing for the human-on-the-loop, who will then just make some final decisions,” explained Ohlund.
“As far as human-machine teaming goes, we always will have the human as the final decision-maker,” said Lt. Col. Shawn Finney, USAF, commander, 805th Combat Training Squadron/ShOC-N, in another call with SIGNAL Media. “But what we are quickly seeing through our experimentation is that the number of decisions available is great. And that happens faster and faster. The team comes up with their own machine aids, and then, as a crew, they are able to collectively come together. And that’s been really cool to see with human-machine teaming.”
The ShOC-N, the ABMS CFT, the 711th Human Performance Wing from the Air Force Research Laboratory (AFRL) and the Integrated Capabilities Command tested AI tools during DASH—the Decision Advantage Sprint for Human-Machine Teaming—a series of three events during 2025, Finney shared. The idea is to see how AI microservices can greatly reduce time in decision-making and improve decision quality for air battle managers, generating command and control (C2) decision advantage.
The DASH series is enabling the battle lab “to increase not just the complexity, but also our understanding of where we are going with that [AI] experimentation,” he noted.
“And both of our experimental hypotheses, we have examined through three experiments,” Ohlund added. “We have had great insights—it turned out to be a fruitful set of experiments in 2025.”
The events were so successful that DASH will return in 2026, with three more events, beginning in May. “[There was] firm agreement across the board from all the stakeholders that this needs to continue,” Finney said.
DASH 1, held last June, focused on AI tools for so-called “perceive actionable entity” functions, which determine which actions are possible, permissible and desirable against an operational entity, such as targeting or resupplying.
Next, DASH 2’s examination of AI-enabled decision aids for battle managers last September focused specifically on match effectors, or the tools and processes to decide the best weapons systems needed to effectively strike identified targets.
The AI-enabled tools tested in DASH 2 generated recommendations in less than 10 seconds and produced 30 times more options than human-only teams, according to a report from Debora Henley, 505th Command and Control Wing Public Affairs.
“Two vendors each produced more than 6,000 solutions for about 20 problems in just one hour,” she noted. “Early findings also showed that software error rates were on par with human error, despite the tools being built in only two weeks.”
Meanwhile, the most recent DASH 3 event in December focused on AI microservices to generate battle COAs—which could include generating recommended actions for long-range kill chains, electromagnetic battle management efforts, space activities and cyber operations.
The effort involved “a lot of industry interaction,” Ohlund noted, with the staff and the AFRL reviewing numerous industry proposals and selecting the companies they thought best understood the battle COA problem. As a result, six industry teams as well as one ShOC-N/CFT team participated in DASH 3.
The military team included airmen from the CFT, coders from the lab’s software development team and operators, with another group of software developers that helped run the experiment.
Having the ShOC-N’s Howard Hughes Operations Center in Las Vegas—an unclassified, off-base facility originally designed as a software factory—host the DASH events not only makes it easier for industry participation, but also allows foreign partners to join in.
For DASH 3, the military team included allies from the Royal Canadian Air Force and the United Kingdom’s Royal Air Force.
“They were sitting just as the U.S. operators were, and we rotated them through,” Ohlund said. “We also had an operator side, with enlisted members and officers. We had younger experience levels, and we had senior experience levels. So, we had a great mix in the experimental operator group.”
The battle COA challenges for the teams were tough and meant to push software innovation.
“Experiments like the DASH series demonstrate why strong industry partner relationships are so critical to maintaining the fighting edge of the U.S. military,” Finney emphasized. “We cannot solve these exceedingly complex issues alone. Tapping into the immense technical and logistical capacity of industry through these focused events is what allows us to accelerate innovation and lets our warfighters focus on their ultimate goal—being prepared to engage in conflict at a moment’s notice.”
And, at the heart of DASH 3—and all of their experimentation and testing of industry capabilities—is the ABMS CFT’s Transformational Model for Decision Advantage.
The model, which has grown more sophisticated over the last several years, digitally maps out more than 50 C2 decision points to discern the role of humans versus machines and see exactly where automation can effectively fit in.
Ohlund explained that the DASH events began with a request for information to industry about the validity of the transformational model, testing the CFT’s second hypothesis: that the model could serve as its government reference architecture.
“We wanted to expand and bring out the Transformational Model for Decision Advantage into experimentation, to ascertain the validity of how the model was written and the underlying requirements,” he said. “We wanted them to bring innovative solutions of how they would try to solve those problems, understanding how we had to use the reference model. We did several academic sessions with industry about the model, what the requirements are and what the expected outputs are that we should see. But everything in between, it was totally up to industry of how they were able to design solutions.”
Incorporating weather data was also a key focus of DASH 3, Finney noted. “Weather, and the way that we think about really any of those inputs, in terms of experimentation, is as a dilemma, an input,” he said.
With the Air Force’s agile combat employment operations, especially in the dispersed operations of the Indo-Pacific region, distances are much greater, and forces encounter more weather patterns and microclimates. So, it was important for the ShOC-N to include weather management as part of the DASH 3 experiment.
“It is basically something that the crew needs to work through, discuss and see how they are going to mitigate,” Finney stated. “As you know, we cannot fly through a storm. With DASH, we are trying to make these scenarios as realistic as possible because these are situations and factors that we see in the real world. And it is something that I think is perhaps difficult to code against for some operators. But it is one of the things that we really need to make sure we incorporate correctly.”
In its experimentation, the ShOC-N sets a baseline for its human-machine teaming ratio based on what it is currently able to execute, a picture enabled by the Transformational Model for Decision Advantage.
“And then we go from there,” Finney continued. “Where there is a new capability, it has got to be measured against something to even know that there is objectively a beneficial capability that’s arrived.”
Ohlund added, “When we go from human-in-the-loop to human-on-the-loop, the reference model is very good at explaining which tasks and decisions the computer is going to be doing or supporting, and what else is going to be human, and human-machine teaming.”
The ShOC-N’s and the ABMS CFT’s efforts all feed into the requirements integration for the Department of the Air Force’s Battle Network, an effort led by the Department’s Portfolio Acquisition Executive for Command, Control, Communications and Battle Management.
“I think that is going to help future senior leaders understand the moral and ethical lines for what the machines are going to be good at, and where we may want to impose some limits on the human-machine teammate for ethical and moral decisions,” Ohlund said. “But we are not fully there in terms of full capability; we just grasped the edge for generating courses of action, which is just one small portion of the larger apparatus.”
Lastly, in addition to the 2026 DASH series, the ShOC-N will host Industry Days, its primary events, which are integrated into its yearly Capstone experiments and serve as a dedicated forum to engage with both new and existing industry partners, Finney noted.
“It is how we scout for emerging technologies that can be adapted to meet specific mission needs and discover innovative operational constructs,” Finney said. “This collaborative environment allows industry to gain a deeper understanding of warfighter challenges, and it gives us insight into the art of the possible. This continuous, direct engagement is the key to how these relationships are fostered and sustained over time.”