Federal Law Enforcement’s Use of Artificial Intelligence

The use of machine learning, automation, large language models and other aspects of artificial intelligence is well underway in federal law enforcement, say experts from the FBI, the Transportation Security Administration (TSA) and the Naval Criminal Investigative Service (NCIS), who spoke at AFCEA Bethesda’s Law Enforcement and Public Safety, or LEAPS, Preview event on April 17. The panel was moderated by Sonya Thompson, former chief information officer and assistant director, Information Technology and Data Division, Federal Bureau of Prisons.
Given what is at stake in terms of legal authorities and building evidence against alleged criminals, federal law enforcement organizations are treading carefully when it comes to artificial intelligence (AI). Even so, they are applying AI to business applications and operations.
The positive impact of AI is great, and naturally, the organizations expect to benefit further as they adroitly implement more AI applications and the technologies advance.
For the FBI, and its Criminal Investigative Division in particular, the need to harness AI came out of the Boston Marathon bombing in 2013. The Islamist domestic terrorist attack, which killed several people, including two police officers, and injured hundreds of runners and spectators near the finish line of the race, left the FBI needing to quickly find and track the suspects amid a great amount of video data, said Kiersten Schiliro, senior technical advisor, Operational Technology Division, FBI.
“That was the first large event where we had thousands of hours of video from the public CCTV cameras, and we just weren’t set up at that time to go through it quickly,” Schiliro noted. “But out of that, we developed OpenMPF, which is a multimedia processing framework for computer vision where you can quickly triage large data sets to pull out specific optical characteristics. You can draw out license plates and words within the imagery. You can pull out specific objects, like if you’re looking for a certain car. And the really nice thing about it is you can also track faces.”
It was not meant as a tool for identity resolution, as that is a different use case, Schiliro clarified.
“This one is to be able to track the same subject throughout the data,” she said. “In this case, we were looking for [two suspects] with a black hat and white hat. We had to find every instance of those within this large data set.”
OpenMPF proved to be a powerful triage tool for federal investigators. And since then, the AI capability has reduced the time it takes to review data sets, which is especially important given the increased use of body-worn cameras and other video data.
“We ended up developing a pretty decent capability out of that requirement,” Schiliro noted. “Back then, it took close to a year to fully process all that data, but at this point, with the capabilities we have now, it probably would take about two days to process that same amount of video. These computer vision tools are one of our most mature AI use cases, and they have really come through, and they are going to continue to evolve, giving us faster ways to pull useful information out of data sets.”
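The framework’s tracking behavior that Schiliro describes—following the same subject across a large video data set rather than identifying who that subject is—boils down to associating per-frame detections into continuous tracks. The sketch below is not OpenMPF’s implementation, only a toy illustration of that association step, using a simple greedy intersection-over-union (IoU) match between bounding boxes in consecutive frames; all function names and thresholds here are illustrative assumptions.

```python
# Toy illustration of frame-to-frame track association, the kind of
# step a video-triage pipeline performs after an object detector has
# produced bounding boxes. Not the OpenMPF implementation.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def link_tracks(frames, threshold=0.3):
    """Greedily link detections across frames into tracks.

    frames: list of lists of boxes, one inner list per video frame.
    Returns a list of tracks, each a list of (frame_index, box).
    """
    tracks = []
    for t, boxes in enumerate(frames):
        unmatched = list(boxes)
        for track in tracks:
            last_t, last_box = track[-1]
            # Only extend tracks that were seen in the previous frame.
            if last_t != t - 1 or not unmatched:
                continue
            best = max(unmatched, key=lambda b: iou(last_box, b))
            if iou(last_box, best) >= threshold:
                track.append((t, best))
                unmatched.remove(best)
        # Any detection left unmatched starts a new track.
        for box in unmatched:
            tracks.append([(t, box)])
    return tracks
```

Given three frames where one subject drifts slowly and another appears in only the first two, `link_tracks` returns two tracks of different lengths, which is the behavior an analyst relies on when pulling every appearance of, say, a subject in a black hat out of a large data set. Production systems use far more robust matching (appearance features, motion models), but the grouping principle is the same.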
Elsewhere in the bureau’s Criminal Investigative Division, the FBI is applying AI to its efforts to prevent or stop violence against children—and doing so very effectively, Schiliro noted.
“We have cases where we have vulnerable child victims, and I can’t speak to any of the specifics of some of these cases, but oftentimes they are not going to show up in a database of adults, like the NGI [Next Generation Identification law enforcement] database or Department of Motor Vehicle records, because they are children,” she explained. “In those cases where we are having trouble identifying either a victim or the subject in a use case like that, we will have to conduct [facial recognition technology] searches for identity resolution. And I have to tell you the accuracy is very high. And believe me, this technology is being used to save victims’ lives.”
Schiliro explained that the bureau intentionally chose to apply AI to some of its harder problem sets first. Rolling certain AI tools into the operations of the Criminal Investigative Division made AI adoption a “little bit easier,” she said, given that it was only one division and not the entire FBI. Now, the bureau is looking at other ways AI can offer efficiency gains and benefit more of the workforce.
And for companies that can offer AI capabilities to law enforcement, Schiliro had some advice.
“There are three areas where I think AI can be beneficial: for something that increases accuracy, something that increases efficiency or something that does what a human cannot do,” she stated. “There are just some AI tools that do better than human reviews. But while we all want to use AI, it is not free, and we have to identify a measurable outcome for what we are trying to do with that specific AI use case.”
Meanwhile, the TSA is leveraging AI tools from its parent organization, the Department of Homeland Security (DHS), whose Science and Technology Directorate has a robust capability development arm that has already been implementing AI. Within the TSA’s Office of the Chief Information Officer, everyone in information technology is learning AI, and the mindset is to leverage tools across the agency and from the DHS, said Kristin Ruiz, deputy assistant administrator and deputy chief information officer, TSA.
“We don’t need to necessarily add additional staff to be able to leverage this emerging technology,” Ruiz emphasized. “We do that through our current staff, who are highly skilled and able to pick up new skills on the fly with training. And then we partner with our component organizations that may have funding for one piece while we may have funding for another, and we work together to make sure that we get economies of scale. We are looking at all the options available to us to see what we can do for the biggest bang for our buck.”
One early tool that the TSA is using internally is called the TSA Answer Engine. It is geared toward TSA employees in the field to be able to ask questions related to standard operating procedures, generate specific documents or reports, or get quick answers out to the field about a specific regulation or policy.
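The Answer Engine’s internal design is not described in the article, but the general pattern for such tools is retrieval: match a field employee’s question against a corpus of procedure and policy passages so any answer can be grounded in the relevant document. The sketch below is a deliberately minimal, hypothetical version of that retrieval step using keyword overlap; real systems typically use embedding-based semantic search, and none of the passages or function names here come from the TSA.

```python
# Hypothetical sketch of the retrieval step behind a policy Q&A tool.
# Real deployments use semantic (embedding) search; keyword overlap is
# used here only to keep the illustration self-contained.

import re

def tokenize(text):
    """Lowercase a string and return its set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_passage(question, passages):
    """Return the passage sharing the most keywords with the question."""
    q = tokenize(question)
    return max(passages, key=lambda p: len(q & tokenize(p)))

# Invented example passages standing in for procedure documents.
passages = [
    "Officers must verify identification documents before screening.",
    "Liquids over 3.4 ounces are not permitted in carry-on bags.",
    "Report equipment malfunctions to the supervisor on duty.",
]
```

A question such as “What is the limit for liquids in a carry-on?” would retrieve the liquids passage, which a generative layer could then quote or summarize—keeping answers tied to official procedure rather than to a model’s unconstrained output.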
The TSA is also relying on its Innovation Lab to help with AI implementation. “We bring in our partners not to just discuss specific use cases and to look at their technology, but to help folks in the field be able to see how they could automate something that they may have not considered before,” Ruiz shared.
One example, she said, applied to training and enabling hands-on learning at TSA checkpoints. The organization paired with industry partners to do a demonstration of how the TSA could leverage virtual reality holograms in combination with ChatGPT personas. This allowed TSA officers to have real-life training with different personas and scenarios.
“When they did the training, they never knew what ChatGPT was going to do,” Ruiz stated. “You could run through the same scenario, but each officer might handle it differently and get a different response. Some could get the hologram to comply, and some could not. And it was quite interesting to be able to pivot towards technology for that use case. And it gave them confidence so that when they go into the actual field, they’re able to handle different scenarios.”
The TSA’s well-established use of facial recognition, AI and biometric data at airports and ports, in place for several years, has led to further gains as the TSA partners with the DHS, the Department of Justice and the State Department and leverages the resulting information and data, Ruiz noted.

In addition, the NCIS—headquartered at Quantico and made even more famous by the eponymous TV show—relies on about 250 agents in 15 field offices in the United States and abroad. Recent work by the NCIS includes a joint investigation that indicted Chinese nationals for a computer hacking campaign, a multiagency takedown of a drug trafficking network and a joint child exploitation arrest operation, according to the agency.
The NCIS is progressing with AI, starting with a few small projects, said Richard Dunwoodie, acting executive assistant director in NCIS’ Operational Technology and Cyber Innovation Directorate.
For the organization, AI represents possibilities both in business applications and in the operational environment in these types of investigations, he said. “NCIS is looking at leveraging some of these key kinds of capabilities as part of our business applications,” Dunwoodie shared. “We are concerned with looking for efficiencies, and ultimately helping the workforce navigate better through policy.”
For operations, Dunwoodie sees AI helping with human aspects as well as for vehicle recognition. “It’s not just about the human element,” he explained. “It is also about vehicle recognition as well. There’s a number of different ways that those kinds of capabilities are being utilized in an operational environment.”
The NCIS is also speaking to industry to see how AI-related capabilities could help the workforce on a day-to-day basis operationally. In addition, they are relying on the Department of Defense’s Chief Digital and Artificial Intelligence Office, or CDAO.
“We are starting small,” Dunwoodie said. “For projects where we can do that safely in an operational environment and give an agent a tool that allows access to actual investigative and operational data, we are not there yet. Fortunately for us, within the Department of Defense, we have the Chief Digital and Artificial Intelligence Office that is vetting, assessing and validating a number of artificial intelligence solutions from the private sector that are readily adaptable to our environment. That is where we are oftentimes starting, getting what is already on the shelf that’s been approved and that is validated for immediate use.”
Unique, legacy environments—at NCIS and elsewhere—will sometimes call for a specific AI capability, Dunwoodie continued. And to a certain extent, they have been using AI and machine learning capabilities for “quite some time,” by leveraging the ability to work with large data sets—either historically or in live environments—to inform decision-making. They just haven’t necessarily been putting an “AI brand” on that kind of work.
One example is security for public-facing events, such as fleet weeks or air shows, where the NCIS has statically deployed pre-programmed solutions. “There is this opportunity to be a great deal more effective,” Dunwoodie said.
In addition, staff education is necessary at the NCIS, as is combating the desire of agents to rely on commercial GPT applications to write their reports. “That is an issue,” he acknowledged. When submitting a query to a commercial AI tool, the NCIS is giving away knowledge of its gaps, potentially to adversaries.
“We do need policies and guardrails to guide this,” Dunwoodie emphasized.
Schiliro added that the bureau does have “checks and balances” when it comes to using AI, and while that takes longer, it ensures a careful approach. For example, the FBI reviews all its AI use cases for privacy and civil liberties considerations, and they undergo an ethics review by their AI Ethics Council.
“We have been very cautious,” she said. “All of our agencies have been cautious and deliberate about the way we approach AI. We only have one chance to get these kinds of things right.”