Adversarial Artificial Intelligence Is Real

Emerging leaders in industry discussed the trends influencing the future of artificial intelligence in the cyber realm during a panel discussion at AFCEA TechNet Cyber 2022 in Baltimore. Credit: Fit Ztudio/Shutterstock
A panel of artificial intelligence (AI) experts from industry discussed some of the technology’s promise and perils and predicted its future during an AFCEA TechNet Cyber Conference panel April 26 in Baltimore.
The panelists were all members of AFCEA’s Emerging Leaders Committee, professionals who have established expertise in their fields before the age of 40. The group discussed AI in the cyber realm.
Asked about “anti-AI” or “counter-AI,” Brian Behe, lead data scientist, CyberPoint International, reported a recent case in which his team used reinforcement learning to change the signature of malware files without altering the malware’s functionality. “We used this as a way to do some security testing on other machine learning classifiers that had been built to detect malware. Sure enough, we were able to beat those classifiers,” Behe explained.
But the same techniques can serve malicious ends. “Additionally, we were able to use those techniques to beat commercial AV [antivirus software] in a number of instances. So, adversarial AI is real. The tactics that we used to build models to detect threat vectors are the same tactics adversaries can use to beat the very models that we build,” he warned.
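The underlying idea can be shown with a toy sketch. The Python snippet below is a deliberately simplified stand-in for the approach Behe describes, not CyberPoint’s actual system: it hill-climbs on features assumed to be “behavior-preserving” (an assumption of this example) until a trained classifier flips its verdict. The panel’s work used reinforcement learning; simple hill climbing is substituted here to keep the sketch short.

```python
# Toy sketch of evading a malware classifier by perturbing only
# features assumed not to affect the file's behavior. Illustrative
# stand-in only; the panelists used reinforcement learning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake data: feature 0 reflects the file's true behavior; feature 1
# is a "cosmetic" property spuriously correlated with it.
X = rng.random((500, 10))
X[:, 1] = 0.7 * X[:, 0] + 0.3 * X[:, 1]
y = (X[:, 0] > 0.5).astype(int)  # 1 = malicious

detector = LogisticRegression().fit(X, y)

def evade(x, mutable, steps=1000, eps=0.05):
    """Hill-climb on behavior-preserving features until the detector
    predicts benign, or give up after the step budget."""
    x = x.copy()
    for _ in range(steps):
        if detector.predict(x[None])[0] == 0:
            return x  # detector now says benign: evasion succeeded
        cand = x.copy()
        i = rng.choice(mutable)
        cand[i] = np.clip(cand[i] + rng.uniform(-eps, eps), 0.0, 1.0)
        # keep the nudge only if it lowers the malicious score
        if (detector.predict_proba(cand[None])[0, 1]
                < detector.predict_proba(x[None])[0, 1]):
            x = cand
    return None

# Start from the most borderline malicious sample.
probs = detector.predict_proba(X)[:, 1]
target = X[np.argmin(np.where(y == 1, probs, np.inf))]
print("evasion succeeded:", evade(target, mutable=list(range(1, 10))) is not None)
```

Because the detector leans on a spuriously correlated feature, an attacker who can rewrite that feature without changing the file’s behavior can walk the sample across the decision boundary.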
Behe stressed the importance of curating data sets. “If you think there’s an adversarial component to the problem you’re tackling, such as malware detection, I would highly encourage looking into how those techniques can be used to modify malware examples and then introduce them into a training set to hopefully build a better detector,” he said.
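What Behe outlines is, in effect, adversarial training. Continuing the toy sketch above, successful evasions can be folded back into the training set and the detector retrained. Again, this is a schematic illustration rather than a production recipe.

```python
# Schematic adversarial training, reusing X, y, evade and the toy
# detector from the previous sketch: generate evading variants of
# known-malicious samples and retrain on the augmented set.
adversarial = []
for x in X[y == 1][:50]:
    variant = evade(x, mutable=list(range(1, 10)))
    if variant is not None:
        adversarial.append(variant)  # evading variant, still malicious

if adversarial:
    X_aug = np.vstack([X, adversarial])
    y_aug = np.concatenate([y, np.ones(len(adversarial), dtype=int)])
    hardened = LogisticRegression().fit(X_aug, y_aug)  # retrained detector
```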
The emerging leaders predicted a mixed future for AI technology. For example, Shammara Clarkson, senior software support specialist, IntelliGenesis, sees interest in AI rising and dropping in cycles. “It would be cool if AI could stay relevant. I feel like it’s kind of a cyclical thing. Kind of both in and out of fashion, and I would just hope we can find small solutions that would lead us to that greater good that we’re looking for when it comes to artificial intelligence,” Clarkson said.
She indicated that interest in AI is waning and predicted “a grand resurgence” in 10 years. “I think we’re getting away from it again because we’ve seen a lot of failures with autonomous vehicles and things of that nature, so people are getting afraid to invest their money within that area,” Clarkson asserted. “In the same scenario, it seems like they’re going more toward electric vehicles instead of autonomous because they’re seeing success in that area. So, I think we’re going to move away from it, and hopefully we’ll get to do a lot more deep thinking on solutions in this space. And then, 10 years from now, there’ll be a resurgence and we’ll be even better.”
Behe responded that he hopes to see breakthroughs in explainable AI. “I really hope in 10 years when we’re sitting here that we’re talking about explainable AI. I hope we’re talking about models that can not only give you the predictions and tell you whether things are threats or not, but they can also tell you why they’re making that decision.”
He suggested AI software should offer “the reasoning behind the features and the probability distributions that it learned and all the things that go into this nascent field of having models tell you why they’re making the choices that they’re making.”
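Fully explainable models remain an open research problem, but simpler attribution tools hint at the direction. As a modest illustration, scikit-learn’s permutation importance reports which features a detector leans on globally; this falls well short of the per-decision reasoning Behe hopes for, and it reuses the toy detector from the earlier sketch.

```python
# Global feature attribution for the toy detector above: shuffle each
# feature in turn and measure how much the detector's accuracy drops.
from sklearn.inspection import permutation_importance

result = permutation_importance(detector, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```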
Anthony Zech, senior AI architect for cybersecurity with ECS, reported a stratification within the realms of AI and data science. “What I mean by that is that not everybody has to have a doctorate in applied math in order to leverage AI anymore. You don’t need to hire a team of Ph.D.s to build an AI solution.”
That trend, he predicted, will continue. “You’re going to have this spectrum of tools that’s going to enable folks with less AI-specific skills and more skills in a particular discipline to use some of these tools that right now require a lot more education in terms of AI.”
And organizations should keep that in mind when considering whether to invest in AI. “Do you need a team of Ph.D.s working on your own bespoke algorithm, or can you use stuff that’s off the shelf? The trade-off between those two is very important for organizations,” Zech said.
Zech told the audience that AI in the cyber arena is still immature. “If you look at some of the more mature areas, particularly image recognition, you have off-the-shelf models like YOLO that do a very good job without necessarily requiring a deep understanding of how they were built. Not everybody needs to rebuild that.”
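As a concrete, hedged example of that off-the-shelf maturity: a pretrained YOLOv5 detector can be pulled from PyTorch Hub and run in a few lines, with no need to rebuild or even inspect the network. The image path below is a hypothetical placeholder.

```python
# Running a pretrained off-the-shelf object detector, per Zech's point.
# Assumes torch is installed; the image path is a hypothetical example.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained weights
results = model("street_scene.jpg")  # hypothetical local image
results.print()  # prints detected classes and confidence scores
```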
Behe agreed about the stratification. “I definitely think there’s a push to make AI more ‘easy button,’” he said. But that agreement came with a warning: to mitigate risk, organizations should understand the methodology behind the models they adopt, such as how the AI was trained.