Drinking From the Firehose: Sponsored Content

May 1, 2021
By Shaun Waterman

How U.S. intelligence agencies can use machine learning to beat data overload.

As U.S. intelligence agencies pivot from the war on terror to the new era of near-peer competition, the information landscape on which they operate is shifting dramatically, as detailed in the recently released report from the National Security Commission on Artificial Intelligence (AI).

For two decades, U.S. intelligence operated in an information-poor environment—hunting its elusive adversaries through fleeting glimpses on surveillance video or wisps of cellphone traffic. And, thanks to the technical and operational excellence of U.S. collection, even that information-poor environment often generated an overwhelming volume of data.

Even then, the intelligence community (IC) was drinking from the firehose, with only a limited number of trained, cleared analysts to make sense of the information.

Today, confronting near-peer adversaries happens on an entirely different information terrain—one that’s enormously data-rich. Whole universes of open source data are added to the equation, not to mention the increased collection opportunities presented by an adversary that’s a national government—not insurgents hiding in caves or among a civilian population.

The bottom line: Data overload grows exponentially worse in near-peer competition.

“Agencies recognize that they already have way more data than they’re able to use effectively right now—and the challenge is growing,” says Justin Neroda, a Booz Allen Hamilton vice president focused on AI. “They understand that the way to unleash the power of all that data is through artificial intelligence, specifically machine learning.”

Different agencies are at different stages on their AI journey, adds Booz Allen Director Justin Betof, but all of them face four key challenges:

• Cultural willingness and understanding: Even where agency leaders have embraced AI, a “credibility gap” with analysts sometimes persists, says Betof, until they learn how models generate their outputs.

• Data and infrastructure readiness: Is the data there? “Everybody wants to build a model,” says Neroda, but agencies must first ascertain whether they have the data they need.

• Compliance, ethics and risk: Documentation is key, says Betof: “You must show your work.”

• Workforce upskilling: There is a shortage of qualified, cleared data scientists. “Our experience shows it takes 12 to 18 months to get a TS/SCI clearance,” says Neroda. “People with those skills don’t like to sit idle where they are unable to support the mission for over a year.”

Booz Allen is working with agencies to overcome each of these challenges and unleash the power of their data.

To build a pipeline of cleared personnel with the data science skills needed to meet the AI challenge, Booz Allen is running three programs:

• Data Science 5K: “We take digital staff who are not data scientists, and put them through a 60+ hour data science bootcamp,” says Betof. “We teach them the art of the possible when it comes to AI.”

• Analyst 2.0 is focused on transforming traditional all-source intelligence analysts into technology-enabled AI analysts by training them in automation, visualization and data science.

• Tech Excellence is an intensive program for entry-level professionals fresh out of school as they await their clearances. “We jam them up with all the training and resources they will need to use data science, AI and modeling in their field with alignment to strong mentors,” says Betof.

The key to overcoming cultural obstacles is transparency, or at least documentation, says Neroda. “At the end of the day, you are asking these analysts to put their professional credibility on the line. They need to trust the tools. They need to know: How was it trained? What data was used to make those decisions? Why did it make that decision? That’s key to them starting to use it, understand it and eventually trust it.”

Understanding the decision-making process of AI is a work in progress, acknowledges Neroda.

“Can I quantitatively or even sometimes qualitatively show how I’m complying with ethical or other principles? We’re working very hard on these issues right now, but we don’t have all the answers yet. Some of those decision-making variables are easier to measure than others, some of them are pretty hard, and have lots of sub levels to them.”

In any case, documentation is key, says Neroda. “It’s collecting the metadata: What data source did I get this from? What data did I use to train version X of this model? What were the results of that testing? Now that I’ve deployed it, what’s the performance of that model in the production environment?”
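The metadata Neroda lists can be captured in a simple structured record. Here is a minimal sketch of the idea; the `ModelCard` class, its field names, and the example values are all hypothetical, not an actual Booz Allen or agency system.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Hypothetical record of the metadata described above: data
    provenance, training snapshot, test results, production metrics."""
    model_name: str
    version: str
    data_sources: list        # where the data came from
    training_set_id: str      # which data snapshot trained this version
    test_results: dict        # offline evaluation results
    production_metrics: dict = field(default_factory=dict)  # filled in after deployment

    def to_json(self) -> str:
        # Serialize the card so it can live alongside the deployed model
        return json.dumps(asdict(self), indent=2)

# Illustrative example: document version 3 of a (hypothetical) model
card = ModelCard(
    model_name="entity-resolver",
    version="3.0",
    data_sources=["open source feed A", "internal corpus B"],
    training_set_id="snapshot-2021-03-15",
    test_results={"precision": 0.91, "recall": 0.87},
)
card.production_metrics = {"precision": 0.88}  # observed after deployment
print(card.to_json())
```

Keeping a card like this per model version is one way to “show your work”: every question in the quote above maps to a field that can be audited later.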

Booz Allen is helping agencies get the visibility and transparency they need to build that credibility with mission end users.

When it comes to eliminating bias in data—a hot-button topic—Booz Allen is at the cutting edge, leveraging existing tools and testing new approaches. “No one has a complete answer,” says Neroda. “There are toolkits out there in the commercial marketplace [for bias elimination] but often they’re very limited.” A toolkit may solve a problem in facial recognition but not in natural language processing. “There’s a lot of research and investment going on.”
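To make the bias discussion concrete, one widely used fairness check is demographic parity: comparing the rate of positive model decisions across groups. The sketch below is a generic illustration of that metric, not any specific toolkit mentioned in the article.

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.
    outcomes: parallel list of 0/1 model decisions
    groups:   parallel list of group labels
    A value near 0 suggests similar treatment; large values flag
    possible bias worth investigating (they do not prove it)."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Illustrative data: group "a" gets a positive decision 75% of the
# time, group "b" only 25% of the time
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

As the article notes, a single metric like this is very limited: it works wherever decisions and group labels exist, but saying nothing about why the gap exists is exactly the kind of shortfall Neroda describes.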

But ethical and compliance issues are where the rubber really meets the road, given the different stakes at play for U.S. intelligence agencies. “If you have a recommendation engine on an e-commerce site and it’s wrong 20 or 25 percent of the time, that’s no big deal,” says Betof. “You don’t have that margin of error in the IC.”

That absence of room for error is one reason why direct comparisons between intelligence or defense agencies and the private sector when it comes to AI adoption might not be that helpful, says Neroda. “When people compare DoD and the IC to the commercial world, they always like to compare them to the absolute top—to Google, to AWS, to Microsoft. Is DoD better than Google? No. Google puts billions of dollars into AI research and development … if you compare DoD AI adoption to more average companies, they are probably on a par or even slightly ahead, but our adversaries are investing significantly so we need to make this a priority also to maintain our advantage.”

Agencies that are furthest along in their AI journey need to look to machine learning (ML) operations, says Neroda.

“Some agencies have progressed from experiments to programs. They have begun to demystify AI tools. But too often, their efforts remain siloed, manual, boutique. Now they need to take the next step to enterprise-level machine learning operations,” he explains.

ML operations, or MLOps for short, offers three value propositions to U.S. intelligence agencies:

• Speed

• Repeatability

• Transparency

MLOps means tracking the performance metrics of AI models in a centralized repository, Neroda explains, so different models’ performance can be compared. “Traditionally, when you start out with machine learning, you’re doing all that manually. That’s a data scientist and an ML engineer sitting there on a keyboard going through each of those steps. With MLOps, the objective is to build one repeatable solution, so you can deploy and integrate more quickly, and get performance metrics in a more automated fashion.”
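The centralized repository Neroda describes can be pictured as a simple metric registry that logs every run and lets you compare model versions automatically rather than by hand. This is a hypothetical sketch of the pattern; the `MetricRegistry` class and the example models are illustrative, not a real MLOps product.

```python
class MetricRegistry:
    """Hypothetical centralized store of model performance metrics,
    so different model versions can be compared side by side."""

    def __init__(self):
        self._runs = []  # one entry per logged evaluation run

    def log(self, model, version, stage, metrics):
        """Record metrics for one model version at one stage
        (e.g. 'test' for offline evaluation, 'prod' for deployment)."""
        self._runs.append({"model": model, "version": version,
                           "stage": stage, "metrics": metrics})

    def best(self, model, metric, stage="test"):
        """Return the version with the highest value of `metric`."""
        runs = [r for r in self._runs
                if r["model"] == model and r["stage"] == stage]
        return max(runs, key=lambda r: r["metrics"][metric])["version"]

registry = MetricRegistry()
registry.log("classifier", "1.0", "test", {"f1": 0.81})
registry.log("classifier", "2.0", "test", {"f1": 0.86})
registry.log("classifier", "2.0", "prod", {"f1": 0.83})  # post-deployment check
print(registry.best("classifier", "f1"))  # 2.0
```

The point of the pattern is the quote’s “repeatable solution”: once every run logs to one place, picking the best version or spotting a production drop becomes a query instead of a data scientist stepping through results by hand.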

“Accuracy, demystification, explainability—that is a core focus of what Booz Allen does,” adds Betof. “We are hyper-focused on visibility and transparency in terms of how the decision pipeline works, because that is the key to building trust.”

In the early days of standing up ML models, he adds, “the outputs were almost exclusively geared toward someone that was a technologist. It was technologists delivering things on behalf of other technologists.”

To ensure explainability, Betof describes how Booz Allen puts together teams that include embedded domain subject matter experts (SMEs) alongside the technologists and data scientists. “The SMEs don’t just drive the training and validation of these models. They’re also the connective tissue back to the analytic teams themselves.” Booz Allen’s teams focus on delivering outcomes for their customers, helping analysts and agency teams apply AI and MLOps enhancements that keep the mission moving forward.

A U-2 Dragon Lady assigned to the 9th Reconnaissance Wing prepares to land at Beale Air Force Base, Calif., December 15, 2020. The flight marked the first time a USAF aircraft flew with an AI crew member. The algorithm, known as ARTUµ, carried out assigned tasks during the flight, which otherwise would have been performed by the pilot. Booz Allen AI researchers, under contract at Air Combat Command’s U-2 Federal Laboratory, developed ARTUµ in fewer than 40 days, employing edge processing and containerized microservices to automate and speed delivery.

For more: www.boozallen.com/intel
