
Advice for Military Applied Artificial Intelligence

Experts offer suggestions to industry on how the military can leverage artificial intelligence.

The U.S. Department of Defense is increasingly leveraging artificial intelligence (AI). Organizations such as the Chief Digital and Artificial Intelligence Office (CDAO) and its efforts, such as Task Force Lima, created to guide the development and use of generative AI for national security, are advancing applications of the technology. At the same time, industry is making considerable investments in AI to help warfighters, but the process of adoption is complicated, experts say.

And although artificial intelligence has been around since the 1960s, new capabilities, such as the large language models behind ChatGPT, have proliferated over the last year or so.

For industry wishing to assist the department in applying AI solutions, first and foremost, the process must be mission led, emphasized Jay Bonci, chief technology officer (CTO), Department of the Air Force. “Find a mission thread and pull on it,” he noted.

Jon Carbone, from Forcepoint, added that, “it is never about the IT. It is about the mission, not about the AI. Think first about the mission, then the AI.”











Bonci, Carbone and other AI experts spoke on a panel moderated by Shivaji Sengupta, founder and CEO of NXTKey Corporation, at the AFCEA Alamo ACE conference on November 15 in San Antonio.

The Forcepoint executive, who also is a professor of AI and data science at Baylor University, advised that successful use of AI requires a deeper understanding of the underlying systems. “We need to understand our systems much better than we do,” Carbone stated. “If we knew all of those dependencies as you build systems, the systems could actually heal themselves. And what we're doing is not really understanding many of the systems and those dependencies. We're now trying to flip artificial intelligence algorithms, which are getting more and more complex every day, in that same box without understanding of dependencies. Now, that's a big problem.”

For Bill Streilein, the CTO within the CDAO, it is about adroitly examining AI technologies. “Our strategy is really to adopt experimentation within the Department,” he said. “The idea is that technology is evolving so quickly, and innovations are coming. Within weeks, for instance, large language models show up with new capabilities. We need the ability to experiment with those in a relevant operational context so we can understand how they can affect operations.”

Streilein advised leaders to look at applying military AI at all echelons and across the whole life cycle, not only at the decision level. AI is commonly seen as a decision-making aid, but it can be used for many more applications than that. In addition, the CDAO is working to articulate a hierarchy of needs for AI so that department leaders understand at a foundational level what is necessary for its adoption.



The military has already seen successful application of AI in predictive maintenance, cybersecurity and intelligence data. Carbone noted that AI is serving the intelligence community well in processing high volumes of data and imagery.

Meanwhile, Streilein pointed to predictive maintenance as a standout. “Predictive maintenance is one area that I think has had really profound improvements from AI,” he said. “That is again, going back to that sort of hierarchy of needs that I mentioned, and it comes not only from applying the models, but it is getting the data in order so that predictions can be made about vehicles that need maintenance at a given time. And the impact there is to keep the vehicles in use much more often. It is one of the shining examples of the application of AI. There is still lots of other examples that maybe you didn't see yet. And we will see those through experimentation.”

The CDAO CTO does want the department to leverage large language models, such as ChatGPT, but in a “cautious, optimistic, skeptical way,” to understand how generative AI's large language models can be applied with what he calls “justified confidence” in conditional use cases. “We've already collected around 200 use cases across the department, and through Task Force Lima we are evaluating them.”

Richard White, Leidos’ chief engineer for enterprise and cyber, warned against blindly trusting the machine. “The operators, the users of the AI, their foundational understanding is that the AI—or even previously the machine learning—is basically giving them a trusted answer,” he stated. “The context is that they expect an answer and they are used to basically absorbing from an information overload all of the data to come to their own conclusion and to trust what the machine has learned. And AI is now that analytic that is presenting to them as ‘a trusted response’ from that given interface.”