
Considerations of Machine Learning at the Tactical Edge

The challenge of employing AI/ML at the far reaches of a battlefield persists, with many potential pitfalls and considerations, but it is one that could bring great rewards to warfighters.

The capabilities of, and the industry for, artificial intelligence and machine learning have exploded in the last few years. The challenge of employing such capabilities at the far reaches of a battlefield persists, with many potential pitfalls and considerations, but it is one that could bring great rewards to warfighters, says Charles Clancy, general manager, MITRE Labs, and senior vice president, The MITRE Corporation.

However, because computational power and data processing are most likely constrained at the tactical edge, machine learning (ML) applications that can be trained in one environment and used securely in another are crucial. “Anytime you use machine learning, there are two modes: the training phase and the [operational] phase,” Clancy says. “Training generally requires a massive amount of data and a huge amount of computational power to actually train the model, whereas once the model is trained it’s fairly lightweight. The question is: is there a small amount of training we can do at the edge that will offset [any vulnerabilities]? And when you design the architecture, you have to have some amount of edge computing locally. That is a big open area of research right now.”
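The split Clancy describes can be made concrete with a short sketch. The Python below is a generic illustration built on invented, synthetic data, not a description of any fielded system: a small logistic-regression model is trained on a large dataset in the “enterprise,” then adjusted at the “edge” with a handful of locally collected samples and a few inexpensive update steps, after which inference is a single, lightweight matrix-vector product.

```python
# Minimal sketch: compute-heavy training in the enterprise, lightweight
# fine-tuning and inference at the edge. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.1, epochs=200):
    # Plain gradient descent; the same routine doubles as the edge fine-tuner.
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Enterprise phase: large dataset, many epochs (the compute-heavy part).
X_big = rng.normal(size=(50_000, 16))
y_big = (X_big[:, 0] + X_big[:, 1] > 0).astype(float)
w = train(X_big, y_big, epochs=300)

# Edge phase: the environment has shifted slightly; only a few local
# samples and a few cheap update steps are available on the device.
X_edge = rng.normal(size=(100, 16)) + 0.5           # small distribution shift
y_edge = (X_edge[:, 0] + X_edge[:, 1] > 1.0).astype(float)
w = train(X_edge, y_edge, w=w, lr=0.05, epochs=20)  # lightweight fine-tune

# Inference at the edge is just one matrix-vector product per input.
sample = rng.normal(size=16) + 0.5
print("positive" if sigmoid(sample @ w) > 0.5 else "negative")
```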

Any such applications also would have to address the possible lack of communications links. “From a command and control perspective, the whole point is to enable AI to the edge to enhance decision making. You want to make sure that you can trust your algorithm to make the right decisions even if it no longer has a satellite communications link to support it.”

In addition to decision making, applications of ML at the tactical edge center on processing image, video and audio data, known as machine perception. “If you have a drone that is feeding down camera data in real time, or if you want the drone to actually do all the processing on the drone, then of course there are sophisticated image recognition features, recognizers, classifier algorithms that are all derived from AI and machine learning that you can run on those devices,” he says. “It is really this whole field of machine perception, the ability to see and hear, that is being driven by advances in deep learning right now.”
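The on-board processing pattern he outlines can also be sketched. The example below is purely illustrative: the classifier is a stand-in (a random linear map over the pixels) rather than a real perception model, and the labels, frame size and downlink routine are invented. The point is architectural: frames never leave the drone; only compact detections traverse the constrained link, and results queue locally when the link drops.

```python
# Sketch of on-board machine perception: classify frames on the drone and
# downlink only compact results. The model and camera are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
LABELS = ["vehicle", "person", "building", "clutter"]

# Stand-in for a trained, edge-deployable image classifier: a random
# linear map over a 64x64 grayscale frame (not a real model).
W = rng.normal(size=(len(LABELS), 64 * 64))

def capture_frame():
    # Stand-in for the camera feed.
    return rng.random((64, 64))

def classify(frame):
    scores = W @ frame.ravel()
    i = int(np.argmax(scores))
    return LABELS[i], float(scores[i])

def downlink(message, link_up):
    # A few bytes per frame instead of raw video; queue if the link is down.
    print(("sent:" if link_up else "queued (no comms):"), message)

for t in range(3):
    label, score = classify(capture_frame())   # all processing stays on board
    downlink({"t": t, "label": label, "score": round(score, 2)}, link_up=(t != 1))
```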

In addition, Clancy notes, there is another set of advanced AI capabilities emerging around reinforcement learning. “It is really revolutionizing that field,” he stresses. “It is a lot different than the AI that is good at finding pictures of kittens on the Internet and a lot different than the AI that learns to play video games. If you can repurpose these two types of AI capabilities to address both the perception question and the decision-making question, then those are two big areas [that will make a difference at the edge].”
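Reinforcement learning, the second capability Clancy highlights, learns decisions from trial-and-error reward rather than from labeled examples. The sketch below is the textbook tabular Q-learning algorithm on an invented toy problem, an agent walking a six-cell corridor toward a goal; it illustrates the technique only and does not represent any operational decision aid.

```python
# Tabular Q-learning on a toy corridor: the agent learns, from reward
# alone, that moving right reaches the goal fastest. Purely illustrative.
import numpy as np

N_STATES = 6                         # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]                   # move left, move right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration
rng = np.random.default_rng(2)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("learned policy:", ["left" if np.argmax(row) == 0 else "right" for row in Q[:-1]])
```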

Clancy shares that experts are still debating whether 5G will be secure enough to be used at the edge. If it is, there is a great opportunity to harness commercial AI and ML solutions designed to work with 5G at the edge. “At MILCOM 2021, one of the big themes of the conference was using 5G at the tactical edge, and there were a lot of different panel discussions that really delved into, ‘Is 5G secure enough to be used at the edge?’ and there are a lot of differing views on that,” he emphasizes.

All of these types of AI applications are seeing advances in the commercial world, “and all of them represent opportunities for the Defense Department,” Clancy notes. “And realizing that the Defense Department operates in an austere environment with contested and congested communications, that means there’s a different set of assumptions that need to go into the design and deployment of an AI algorithm at the edge.”

Lastly, the explosion of AI and ML has impacted The MITRE Corporation itself, Clancy offers. He first joined the federally funded research and development center in 2019, originally to run its intelligence community work. Last year, he was asked to lead the organization’s new entity, MITRE Laboratories. “MITRE Labs is the place where a lot of the thinking from an emerging technology perspective is happening,” he notes. “When we formed MITRE Labs, we actually launched a new Innovation Center in Artificial Intelligence and Autonomy. Previously, our AI capabilities had been scattered around different parts of the organization. We brought it all together into a coherent unit last year, and they’ve been one of the fastest growing centers within MITRE.”

The center, which started off with about 90 people, has grown to include about 200. It is focusing on adversarial AI, essentially understanding how AI can be hacked or tricked.
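The underlying failure mode is easy to demonstrate in miniature. The sketch below is a generic, fast-gradient-sign-style illustration against a toy linear classifier trained on synthetic data; it is not MITRE’s method and not a real camouflage attack. A small, uniform nudge to every input feature, chosen using the model’s own weights, is enough to flip its decision.

```python
# Toy adversarial example: a small perturbation, aligned against the
# model's gradient, flips a classifier's decision. Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a toy linear classifier on synthetic 32-dimensional features.
X = rng.normal(size=(5_000, 32))
y = (X[:, :4].sum(axis=1) > 0).astype(float)
w = np.zeros(32)
for _ in range(300):
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / len(y)

# Pick an input the model classifies as positive with high confidence.
x = X[sigmoid(X @ w) > 0.8][0]
print("score before attack:", round(float(sigmoid(x @ w)), 3))

# Step against the gradient sign (the gradient of the score w.r.t. x is w),
# using the smallest uniform step size that crosses the decision boundary.
eps = float(x @ w) / np.abs(w).sum() + 1e-3
x_adv = x - eps * np.sign(w)
print("per-feature change:", round(eps, 3))   # small next to the feature spread (std 1)
print("score after attack:", round(float(sigmoid(x_adv @ w)), 3))  # now below 0.5
```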

“Our big focus is what we call consequential AI,” Clancy notes. “If I’m thinking about how to fool a camera on a drone, it will be an entirely different approach to take with camouflage. I might try and find out the data that the drone was trained on and find a camouflage pattern that would be completely obvious to a human looking at it but tricks the computer 100 percent of the time. It’s this whole concern around adversarial activity with AI that we need to be able to design solutions for if they’re going to be used at the tactical edge, because the decision making is so critical.” —KU

The new MITRE Labs is examining consequential AI, such as how to fool a drone camera ‘seeing’ camouflage, says Charles Clancy, general manager.