
Hypersonic Defense Poses Colossally Hard Problem

AI may be critical to solving the issue.

Developing the ability to defend against hypersonic missiles flying at least five times the speed of sound poses a significant problem and will likely require advances in artificial intelligence, according to Dr. Lisa Porter, co-president and co-founder of LogiQ and a former deputy undersecretary of defense for Research and Engineering.

Porter made the comments while serving on a panel at the Intelligence and National Security Summit 2022 conference. The panel also included Dr. Catherine Marsh, director of the Intelligence Advanced Research Projects Activity (IARPA), and Dr. Stefanie Tompkins, director of the Defense Advanced Research Projects Agency (DARPA).

“Hypersonic defense is a problem we don’t know how to solve yet. The challenge we have to recognize is that the timelines of hypersonics are such that the command-and-control aspect of this is fundamentally different than the way we think around missile defense today. This hasn’t totally seeped into the consciousness yet of everyone,” Porter said.

Porter also indicated that artificial intelligence (AI) and machine learning (ML) are critical to the hypersonic defense mission. “We need to figure out how to use AI and ML to autonomously fuse, tip and cue directly from sensor to shooter. This is not what we do today, but if we don’t figure out how to do this, we can’t counter the hypersonic threat,” she warned. “The timelines will not allow us to have centralized command with guys on consoles somewhere in the traditional way that we think about missile defense.”

“AI engineering is not a nice-to-have, it is a must-have because we do have to figure out how to autonomously fuse, tip, cue and get from sensor to shooter and do it, by the way, in a decentralized manner where different sensors with different viewing angles figuring out what gets fused where. This is a colossally hard problem,” she stated.

She added that hypersonic missiles pose a genuine threat, specifically in the western Pacific area. “This is a fundamental point that I think people need to really grasp because the hypersonic threat is real. In the western Pacific, it is a real issue, and we need to get after that.”

All three panelists agreed on the need for rigorously engineered AI.


Tompkins described AI as having come in two waves so far. The third wave will be the fusion of the first two. “The first wave of AI was basically decision trees. It was very rules based. If you think about how TurboTax works, that’s the first wave,” she explained. “The second wave is statistically driven. That means it’s taking advantage of big data, but it is basically the machine learning, deep learning methodology that we’re talking about today.”

DARPA, she added, is exploring the third wave and suggested some people do not yet understand the inherent weaknesses of the first two. “I worry a little bit in the sense that a lot of folks are still trapped in this notion that the second wave is all we need and the fundamental lack of understanding about how that works, the hidden biases and the hidden gotchas. And the real ease with which it can still be spoofed is not yet fully solved,” Tompkins said.

She added that people should exhibit “some skepticism about exactly how magical that AI pixie dust really could be” while researchers work to mature third-wave capabilities. “My sense is that we need that third wave to mature, and I think we’re all working on it as quickly as possible to get there.”

Marsh agreed, noting that IARPA does not necessarily have programs to develop AI, but it uses AI in developing programs. She also pointed out that AI used for commercial purposes does not have to be as advanced as AI used by the military or intelligence community when lives are at stake. “It’s not going to be perfect ever, but for mission-critical systems, we have to know what it is we can enable with that and what it is we’re never going to enable. When we’re talking about making decisions that ultimately can result in life or death, we have to know we have confidence in what we’re doing.”