
Building a Framework for AI in Intelligence Tradecraft

The Central Intelligence Agency sets a course to ‘democratize’ artificial intelligence across its organization.

To aid its mission of gathering and sharing intelligence to protect the United States from threats, the Central Intelligence Agency (CIA) is outlining how it will expand the use of artificial intelligence (AI), machine learning and other automation capabilities across the agency, reported Kyle Rector, deputy director of the Office of Artificial Intelligence in the CIA’s Directorate of Digital Innovation.

Rector, who has been the AI deputy director for about two years, moved to the agency from the Office of the Director of National Intelligence, initially to help with the CIA’s data science program. He helped stand up the CIA’s Office of AI during his first year in the role and worked to obtain the CIA director’s approval of the agency’s AI strategy, which includes measures to upskill the workforce and outlines how to harness AI tools at greater scale across the agency.

“We are working on democratizing AI across the organization,” the deputy director shared. “In our strategy, we are always very much concerned about trying to make sure that we are enabling AI whenever possible to be used across the organization. Because of my own background in terms of analysis and collection, we always want to make sure that we understand what our adversaries are doing in that space as well, and that we’re preparing to both respond to that and be able to provide good information to folks in the policy space.”

Rector and the AI Office are pursuing several large efforts to support that AI democratization. The first is determining how the CIA will respond to the President’s Executive Order on AI and how the agency will establish its AI governance.

“I cannot stress enough how important getting AI governance right is to us,” he stated. “And we are working hard to respond to the President’s Executive Order on AI that was published in October that talks about governance in conjunction with the need to encourage innovation.”

The AI Office is positioning itself to meet standardized requirements for its AI systems, including putting in place “robust technical evaluations,” that is, methods and policies to test, understand and evaluate algorithms and AI platforms before use. The office is also working closely with the agency’s Office of Privacy and Civil Liberties and other AI organizations in the intelligence community (IC) to ensure any AI-related efforts meet privacy and civil liberties standards.

In addition, the agency stood up an AI governance council with representatives from across the CIA, Rector said. The council helps ensure that decisions about AI implementation and governance involve a variety of stakeholders, and it identifies needed capabilities and gaps in the CIA’s current AI programs.

The key to implementing AI in the agency, Rector emphasized, is making sure that it gets integrated into the CIA’s existing tradecraft in a way that meets governmental standards.

“For us, that is establishing and promulgating rigorous AI tradecraft across our agency,” the deputy director stated. “But it’s also making sure that tradecraft is properly cross-walked with all the existing tradecraft that the agency has, whether that be in the analytic space, in the collection space, or in the business operational space, taking into consideration AI bias, particularly in the realm of civil rights, and advancing equity.”


Given the various possible forms of AI, the CIA is examining which types are suited for specific use cases and how to deliver different types of results.

The basis of that effort is affordability, the deputy director noted.

“We have to figure out how to provide these compute-intensive applications to those who can benefit from them in an affordable manner,” Rector emphasized. “We can’t expect an analyst who wants to use a large language model to help them, for example, summarize a group of articles, to have to have thousands of dollars at the ready to pay for the running of that model. My office is working on this issue.”
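Rector’s point about compute costs can be illustrated with a rough back-of-envelope calculation. The Python sketch below is hypothetical, not agency figures: it assumes a notional hourly rate for a server capable of running a large model and shows how amortizing that cost across many users changes the per-analyst economics.

    # A back-of-envelope sketch (not agency figures) of why centralized hosting matters:
    # compare one analyst paying for dedicated model-serving hardware against many
    # analysts sharing the same deployment. The rates below are hypothetical.

    GPU_INSTANCE_COST_PER_HOUR = 30.0   # hypothetical hourly rate for a large-model server
    HOURS_PER_MONTH = 730

    def monthly_cost_per_user(num_users_sharing: int) -> float:
        """Amortized monthly cost of one always-on model server split across users."""
        total = GPU_INSTANCE_COST_PER_HOUR * HOURS_PER_MONTH
        return total / max(num_users_sharing, 1)

    if __name__ == "__main__":
        print(f"Dedicated (1 user):      ${monthly_cost_per_user(1):,.0f} per month")
        print(f"Shared across 500 users: ${monthly_cost_per_user(500):,.2f} per month")

Under those assumptions, a dedicated deployment runs to thousands of dollars a month for a single user, while a shared deployment drops the per-analyst cost to tens of dollars, which is the economic case for an enterprise service.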

To provide cost-effective AI tools at an enterprise-wide level, the AI Office has created and deployed a data and model exchange, the first catalog of and repository for artificial intelligence and machine learning models and training data sets built specifically for the IC, Rector shared. For each AI tool, the data and model exchange provides details about the model, its functions, how it was developed and when and how the model should be used.

The data and model exchange is already up and running in the IC’s high-side cloud. “We built this specifically to address the need for development of IC standards for searching, discovering and sharing AI models across the IC.”
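Based only on that description, one way to picture an entry in the data and model exchange is as a structured record capturing a model’s function, provenance and intended use, searchable by task. The Python sketch below is illustrative; the field names and search logic are assumptions, not the exchange’s actual schema.

    # Illustrative sketch of a catalog entry for a data-and-model exchange.
    # Field names are hypothetical, drawn from the description in the article:
    # model details, functions, how it was developed, and when/how it should be used.

    from dataclasses import dataclass, field

    @dataclass
    class ModelCatalogEntry:
        name: str
        version: str
        task: str                      # e.g., "summarization" or "entity extraction"
        description: str               # what the model does and its intended function
        training_data: list[str]       # references to registered training data sets
        development_notes: str         # how, and on what data, the model was developed
        intended_use: str              # when and how the model should be used
        known_limitations: list[str] = field(default_factory=list)

    def search(catalog: list[ModelCatalogEntry], task: str) -> list[ModelCatalogEntry]:
        """Discover models in the exchange that match a given task."""
        return [entry for entry in catalog if entry.task == task]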

In addition, the CIA’s Office of AI is building out a so-called “models-as-a-service” platform for hosting AI microservices across multi-tenant environments. The goal of that platform is to standardize the deployment and maintenance of AI models. “With a centralized ‘models-as-a-service’ platform, our data science teams will have a common approach to deploying and hosting AI and machine learning models as a service, and application development teams will have a centralized location from which to integrate AI services into their applications,” Rector explained.
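The integration pattern Rector describes, in which application teams call centrally hosted models rather than standing up their own, can be sketched as a thin client against a shared inference endpoint. The host name, route and payload shape below are placeholders for illustration, not the platform’s actual interface.

    # A minimal sketch of calling a centrally hosted model microservice.
    # The URL, route and payload format are assumptions for illustration only.

    import json
    import urllib.request

    MODELS_AS_A_SERVICE_URL = "https://models.example.internal"  # hypothetical host

    def invoke_model(model_name: str, payload: dict) -> dict:
        """Send an inference request to a shared model-serving endpoint."""
        request = urllib.request.Request(
            url=f"{MODELS_AS_A_SERVICE_URL}/v1/models/{model_name}/predict",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    # An application team reuses a shared summarizer instead of deploying its own:
    # result = invoke_model("doc-summarizer", {"text": "..."})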

The models-as-a-service environment is already up and running on the CIA’s internal data science networks. And based on guidance from the Office of the Director of National Intelligence, the CIA’s AI Office expects to provide the models-as-a-service platform to the broader intelligence community beginning in fiscal year 2025, which starts in October.

Regarding the use of generative AI and large language models such as GPT-4, Gemini and Claude 3, Rector was cautious about the sudden boom in their use and the exponential pace of advancement, but he noted that the technology has its place; for the agency, generative AI can be used in open-source intelligence.

“I think it’s worth mentioning that generative AI is not the answer to every problem, although no one could have predicted the viral adoption that we’ve seen ... and how AI and ChatGPT would become household words,” the deputy director said. “The latest iterations now accept images as well as text. In addition, we’ve seen how GPT-4 can pass tests, from the GRE to the bar exam, and do them quite well. Each milestone seems to bring this closer to a future where AI seamlessly integrates into our daily lives, enhancing our productivity, creativity and communication, both at home and at the office.”

Specifically, the agency is using an open-source platform that brings generative AI large language models together to assist in the triaging of open-source data. The idea is to leverage open-source data that otherwise might not have been used by CIA analysts and collectors. Here, the Office of the Director of National Intelligence’s Open Source Center has built a capability for the broader intelligence community.

“It allows them to leverage the latest models,” Rector clarified. “It allows us to quickly query and bring back results for analysis from a whole host of data resources that otherwise folks would not have been able to use.”
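The triage workflow described here resembles a fan-out-and-summarize pattern: a query runs against several data resources, the pooled hits are capped, and a language model condenses them for the analyst. The sketch below assumes generic search functions and a hypothetical llm_summarize helper; it does not represent the actual platform.

    # Illustrative triage pattern: query several open-source data resources,
    # pool the hits and hand a bounded set to a large language model to summarize.
    # The sources mapping and llm_summarize() are hypothetical stand-ins.

    from typing import Callable

    def triage(query: str,
               sources: dict[str, Callable[[str], list[str]]],
               llm_summarize: Callable[[str], str],
               max_docs: int = 20) -> str:
        """Query each registered source, then summarize the pooled results."""
        hits: list[str] = []
        for name, search_fn in sources.items():
            for doc in search_fn(query):
                hits.append(f"[{name}] {doc}")
        # Cap what goes to the model so the prompt (and the compute bill) stays bounded.
        prompt = "Summarize the key points in these documents:\n" + "\n".join(hits[:max_docs])
        return llm_summarize(prompt)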

Moreover, as large language models cannot be expected to do everything, other machine learning or AI tools can be harnessed alongside those capabilities. “In most cases, it’s good for those of us in charge of adoption to remember that there’s often other AI services stepping in to assist,” Rector noted. “So, it’s rarely just the generative AI models working on their own.”
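One common way such services combine, consistent with that point, is to let an inexpensive discriminative model screen documents before anything reaches the generative model. The sketch below uses hypothetical classify_relevance and llm_summarize stand-ins purely to show that hand-off; it is not a description of any agency system.

    # Illustrative hand-off between a conventional classifier and a generative model:
    # a cheap relevance score filters documents, and only the survivors are summarized.
    # classify_relevance() and llm_summarize() are hypothetical stand-ins.

    from typing import Callable

    def summarize_relevant(docs: list[str],
                           classify_relevance: Callable[[str], float],
                           llm_summarize: Callable[[str], str],
                           threshold: float = 0.7) -> str:
        """Filter with a discriminative model, then summarize with a language model."""
        relevant = [d for d in docs if classify_relevance(d) >= threshold]
        if not relevant:
            return "No documents cleared the relevance threshold."
        return llm_summarize("Summarize these documents:\n" + "\n".join(relevant))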

To harness industry innovations, the CIA is “actively looking” to pull in many kinds of companies to support its efforts, including small businesses and nontraditional defense contractors. The Office of AI is working with the agency’s procurement executive to provide opportunities through contracting. The office hosted its first set of reverse industry days last December and January, and this spring it was evaluating the submissions to select a number of companies.

The CIA is also tackling the workforce considerations of adopting AI, including education and training, and putting to rest the fears that naturally accompany any impactful new technology. “We are ensuring that the workforce knows what AI is,” the deputy director said. “And that’s been a really important thing for us. The reality is you’ve got a large labor force that in many ways can seize the opportunity and the benefits of AI. But of course, it’s also very scary for them. Many of them feel that it may put them out of work and has them wondering what’s going to happen on that front.”

Part of that education involves working with the CIA’s top leaders on AI so that they can guide its adoption adroitly.

“For our leadership, it is about providing them the necessary context so that as we move forward, they are able to prioritize and figure out how AI can fit into the larger milieu of what the agency is doing,” Rector noted.

“We’ve been working to create conditions that enable AI to flourish across the mission and operational portfolio, focusing on those applications that have the broadest impact to our mission,” he said.