• WEST 2020 panelists discuss AI. Photo by Michael Carpenter

Is the Navy Missing the Boat with AI?

March 4, 2020
By Kimberly Underwood

Machine learning or autonomous capabilities need data and access to data to be successful.

The U.S. Navy still has its work cut out for it regarding the use of artificial intelligence (AI), machine learning and robotic systems. Data, and the ability to obtain data, remain an impediment to the increased use of AI, as does the ability to verify that adversaries have not tampered with AI-related code, said experts speaking on a panel at AFCEA and the U.S. Naval Institute’s WEST 2020 conference in San Diego on March 3. Capt. George Galdorisi, USN (Ret.), director, Strategic Assessments and Technical Futures, Naval Information Warfare Center (NIWC), Pacific, moderated the panel.

“It is worth noting that our peer adversaries are galloping ahead with AI and machine learning investments,” Galdorisi said. “You’ve heard the statements from leaders in Russia and China. They are not kidding. They are making huge investments.”

For the service, though, “Big Navy is still sorting through how it will get organized for AI,” he added.

One area that still needs to be developed concerns human-machine teaming, advised Jamie Lukos, branch head, Intelligent Sensing, Naval Information Warfare Center, Pacific.

“In order for AI to truly make a difference, both in the commercial sector and for our warfighters, we really need to focus on human sensing,” Lukos said. “We do a pretty good job right now of being able to sense our platforms in very complex settings. Our robots have plenty of sensors. But we really don’t do a great job of measuring our most important and our most dynamic systems, which are our people in the field. The problem with that is with human–machine teaming, a team represents bidirectional communication, but it is not a discussion right now [between humans and robots].”

The input that humans are giving to robots is very basic. “Our systems don’t really receive a lot of information from us,” she said. “When they do receive input, it is typically a button press.”

Lukos and her colleagues' research focuses on measuring and being able to project human intent. “That is one of the major things we believe is missing,” she stated.

She cautioned that team dynamics between humans and robots will take time to evolve. “We have to learn over time what our systems are able to do,” she clarified. “And they need to adapt to us as we adapt to them.”

On the operational side, Lt. Cmdr. Connor McLemore, USN, principal operations research analyst and section head, Office of the Chief of Naval Operations Assessment Division (OPNAV N81), saw firsthand the need for artificial intelligence-enabled decision making. During Operation Iraqi Freedom in 2003, he was an E-2 Hawkeye pilot and saw how the war in the air was outpacing the ground-based air targeting support. “A whole lot of aircraft started coming back to the carrier with bombs on their wings,” he recalled. “And I can assure you that it was not for a lack of targets at the time.”

Without available software, the warfighters had to solve command and control issues with paper and grease pencil.

A couple of years ago, within OPNAV N81, Cmdr. McLemore joined an AI cross-functional team to advance previous research in automated asset mission assignment he had conducted at the Naval Postgraduate School. At the time, he wasn’t seeing much progress in leaders’ dialogues about artificial intelligence. “I wasn’t really happy with the quality of the conversation in the Pentagon,” he admitted. “And that is not because these aren’t smart people working on this. Nobody really understood what the Navy’s specific problems in this field were.”

Fast forward two years, and the pilot does see that “the quality of conversation on artificial intelligence within the Pentagon, as to what the actual hard problems are, has improved greatly,” he said. “And that is heartening.”

Cmdr. McLemore offered that a balance has to be found in trusting autonomous systems. “Too much trust is really dangerous, as is too little trust,” he stated.

Meanwhile, Sam Tangredi, professor and Leidos Chair of Future Warfare Studies at the U.S. Naval War College, suggested that the Navy make sure that adversaries have not altered AI-related code. “How do we actually verify that the AI systems we will be using have not been spoofed?” he pondered. “How will these systems be transparent to us?”

And while each service will have its unique ways to apply AI, it will be up to the joint community and the Joint AI Center (JAIC) at the Pentagon to make sure solutions are not totally separated, or stovepiped, Tangredi said.

As far as industry contributions to the military’s use of AI, Cmdr. McLemore recommended that companies reach out to the Defense Innovation Unit, known as DIU, or the JAIC. He noted that the JAIC would be hosting an AI conference at the end of April.

In addition, the Naval Information Warfare Systems Command, or NAVWAR, is hosting a second prize challenge, the Artificial Intelligence Applications to Autonomous Cybersecurity Challenge (AI ATAC), with a purse of $500,000, reported NAVWAR Commander Rear Adm. Christian Becker during another WEST panel. The deadline for submissions is May 29.

Tangredi also advised that “companies need to help DOD protect from the deception or vulnerabilities of AI.”

“Ready or not, AI is coming, so let’s figure out what we are doing,” Lukos stressed.
