Leveraging AI To Keep Pace With Nonlinear Change
The rapid and unpredictable nature of today’s threat landscape can only be met with artificial intelligence (AI). However, with adversarial access to similar, if not better, technologies—in addition to a difference in ethical standards—this capability becomes a double-edged sword.
For the Department of Homeland Security (DHS), mission success has seen exponential growth through the use of AI.
“We’ve had a 50 times improvement in the speed-up of [child exploitation] leads this past year with AI tools,” said Dimitri Kusnezov, undersecretary for the department’s Science and Technology Directorate.
Child exploitation investigations require agents to sift through millions of images of children of different ages, he explained. “People have to ingest things that are horrible. The more you can automate, extract features, identification.”
The largest global nexus, Kusnezov stated, is between U.S. men as predators, Philippine women as traffickers and Philippine children—boys and girls.
With AI capabilities, agents can now age or de-age images to help streamline the review of immense amounts of content and information.
There is no other way to do this job, Kusnezov emphasized.
What used to take three years and large amounts of human labor can now be done in days, he said.
“In fentanyl, we had just in the past few months a 50% uptick in fentanyl seizures, because of the tools we have embedded in the flows of data, things that come from cellphones and other kinds of warrants and law enforcement actions where you just get dumped with terabytes, 10s of terabytes of data,” Kusnezov said.
“From 1957 to 1974, AI flourished,” a Harvard report states. The technology’s evolution in recent years, however, has been far more rapid.
From President Trump’s Executive Order 13859, which introduced the safe research and development of AI technologies in 2019, to President Biden’s most recent Executive Order 14110, released in October 2023, federal agencies have invested vast amounts of time and resources to explore and develop AI capabilities safely and ethically.
In April 2023, DHS Secretary Alejandro Mayorkas released a memorandum requesting the establishment of a DHS AI Task Force, now led by Kusnezov and Eric Hysen, DHS chief information officer and chief artificial intelligence officer.
“We will seek to deploy AI to more ably screen cargo, identify the importation of goods produced with forced labor and manage risk,” he stated during the 2023 State of the Homeland Address.
In March, Mayorkas and Hysen unveiled the department’s first-ever AI road map, which outlines three lines of effort: responsibly leveraging AI, promoting AI safety and security nationwide and continuing to lead AI through partnerships.
In June, the DHS announced it had hired the first 10 of a planned 50 members for its “AI Corps,” in addition to appointing Michael Boyce as the first director of the initiative.
The department is also actively recruiting AI experts. “You will serve as an AI technology expert in the DHS AI Corps, focused on creating and enhancing AI/ML [machine learning]-enabled applications, solutions and related oversight,” reads a job posting open through September 6.
To date, the DHS AI use case inventory lists 58 unclassified capabilities, including commercial off-the-shelf autonomous surveillance towers used by the Customs and Border Protection agency; cyber incident reporting currently in the initiation phase by the Cybersecurity and Infrastructure Security Agency; and mobile device analytics in development and acquisition by the Immigration and Customs Enforcement agency.
Yet while the department, along with all federal agencies, has focused heavily on AI development, adversarial threats continue to advance.
“Transnational repression is a thing where foreign countries will target populations in the U.S., and now you can be far more selective; you can do it in the dialects that you choose,” Kusnezov explained in an interview with SIGNAL Media.
AI-enabled disinformation has helped traffickers make millions of dollars by encouraging migrants to cross the treacherous Darién Gap, for example.
Additionally, image modification, morphing and deepfakes are a rising concern.
“It’s always been a cat and mouse game,” said Amir Sadovnik, research lead at the Oak Ridge National Laboratory’s Center for AI Security Research. “The people that want to cause harm are going to be using AI, but us at the lab, what we’re doing is actually using AI for defense,” he stated at an April meeting with members of the media.
The center, which was established in 2023, works closely with federal agencies to address AI threats and analyze vulnerabilities within AI capabilities.
In cyber defense, the trick is to look for things that have already been seen, while AI’s strength lies in its ability to generalize over unseen conditions, he explained.
Unlike with cybersecurity attacks, however, victims of AI-enabled attacks have no point of contact for reporting and resolution.
“We’re at the beginning phase of a new world of threats coming from the tools that we are ingesting at remarkable scale because they are so enabling for our lives and empowering,” Kusnezov continued. “The appetite is too ferocious; the question is, ‘how do we tend to these risks that will flow with it?’”
Now is the time to step up the game, he urged, and preparedness is key. “If tomorrow is not a reflection of yesterday; if the world is nonlinear, then you have to ask whether the approaches we have are scalable to the problems that will face us tomorrow,” he said.
Success can only be achieved through a true comprehension of today’s complexities, he added, because the pace of change seen today makes it difficult to predict tomorrow.
“What it isn’t is simply adding an AI platform to the backend of what we have today ... we use it where we can. It’s not a panacea.”
There is a need to innovate in nontraditional ways, he added.
Kusnezov’s priority is developing go-to places for the DHS to test and evaluate technologies. Innovations such as facial recognition require a deep analysis of vulnerabilities to ensure the safe, robust and responsible use of the capabilities.
“We can’t lose track of the fact that we have to be ready for a future that will likely be distinct from today,” he concluded.
Diego Laje contributed to this report.