
Cyber Heist 2.0: AI’s Role in the New Age of Hacking

AI enhances attackers’ efficiency and scale, but security measures and guardrails are improving.

Large language models (LLMs) have advanced and can now use tools, read documents, and recursively call themselves, allowing them to act independently. Because of this, there is growing interest in how these autonomous LLM agents might impact cybersecurity.

Hacking used to be a business for a select few. “It used to be that you had to be very, very adept at code and understanding systems and things like that,” said Chris Cullerot, director of technology and innovation at iTech AG.

“Now, with the advent of generative AI and AI agents and large language models and so forth, that barrier to entry is reduced,” Cullerot added.

Lowering the barrier to entry for malicious cyber activity has consequences that play out along two axes: the quantity of attacks and their quality.

“You’ve always had infinite agents; that’s never been an issue, but it’s how well armed they are, and now they’re very well armed,” Cullerot said.

Agents are better equipped, and the scale is greater than it was before these models emerged. Still, attacks that simply overwhelm a computer should be distinguished from those that find and exploit a vulnerability.

A denial-of-service attack is a cyber assault that overwhelms a system, network or service with excessive traffic or requests, rendering it inaccessible to legitimate users. Such attacks have been linked to server farms in rogue countries and sit at the “quantity” end of the spectrum, with relatively low quality.
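
As a rough illustration of the “quantity” idea, the short Python sketch below fires thousands of concurrent requests at a web server until errors pile up. It assumes a hypothetical test server you control on localhost; a real distributed denial-of-service attack substitutes thousands of machines for this one loop.

    # Minimal sketch of the "quantity" behind a denial-of-service flood.
    # For load testing a server you own only; TARGET is a hypothetical address.
    import concurrent.futures
    import urllib.request

    TARGET = "http://localhost:8000/"  # a test server you control

    def hit(_):
        try:
            with urllib.request.urlopen(TARGET, timeout=2) as resp:
                return resp.status
        except Exception:
            return None  # timeouts and refusals pile up as the server saturates

    # 200 workers issuing 5,000 requests stands in for a botnet's raw volume.
    with concurrent.futures.ThreadPoolExecutor(max_workers=200) as pool:
        results = list(pool.map(hit, range(5000)))

    print(f"failed: {results.count(None)} of {len(results)} requests")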

Cyber experts warn that these types of attacks are becoming “smarter,” especially with more sophisticated distributed denial-of-service campaigns. Some posit that AI may be coordinating large-scale botnet actions, dynamically adjusting strategies to maximize impact.

Nevertheless, LLMs and other AI models empower other types of offensive actions.

AI-enabled phishing and social engineering attacks have been widely discussed; one example is a video call from a deepfaked boss requesting system access or funds.

AI also empowers attacks such as:

  • Automated exploitation: AI can identify vulnerabilities and automate the exploitation process, making attacks faster and more efficient.
  • Malware evasion: AI can develop malware that adapts and evolves to avoid detection by traditional security measures.
  • Brute force attacks: AI can optimize password cracking, chiefly by producing better-ordered lists of candidate passwords (see the sketch after this list).
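
As a hedged illustration of the last item, the Python sketch below runs a classic dictionary attack against a single leaked hash. The hash and wordlist are invented for the example; in practice, AI’s contribution is generating better-ordered candidate lists, while the checking loop stays this simple.

    # A toy dictionary attack: hash each candidate and compare to a leaked hash.
    # The "leaked" hash and the wordlist are invented for illustration.
    import hashlib

    leaked = hashlib.sha256(b"hunter2").hexdigest()  # stand-in for a stolen hash
    wordlist = ["password", "letmein", "qwerty", "hunter2"]  # best guesses first

    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == leaked:
            print("cracked:", candidate)
            break
    else:
        print("no match; a larger or better-ranked wordlist is needed")

Tools such as hashcat run this same loop at GPU speed; the AI angle lies in ranking which candidates to try first.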

Leveraging sophisticated models like LLMs to attack requires advanced techniques unavailable to the average person. And for a well-versed hacker, new technologies may very well mean further optimization of their malicious powers.

“The AI-equipped hacker, if you will, has access to the same benefits of that technology, meaning I’m able to go in and generate what would have taken 24 hours to generate, an exploit for a weakness; now, maybe I can do it in four,” explained Gaurav Pal, CEO of stackArmor.

LLMs face many challenges; among them is goal drift, a phenomenon in which a model working autonomously gradually deviates from its original objective through iterative decision-making.

“As you iterate, like if you are having a conversation with an LLM, it’s called goal drift, where they kind of lose that eye of what they’re shooting for,” said Jason Wagner, cyber staff engineer at Cogility.
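
A toy analogy, not a real LLM: the Python sketch below “revises” a prompt five times, with each revision starting from the previous output rather than the original goal, so small deviations compound. The strings and similarity measure are invented purely for illustration.

    # Toy illustration of goal drift: each revision conditions only on the
    # previous output, never the original goal, so deviations compound.
    import difflib
    import random

    random.seed(0)
    goal = "draw a red fox sitting under an oak tree"
    current = goal
    for step in range(1, 6):
        words = current.split()
        # A "revision" that perturbs one word, standing in for an imperfect edit.
        words[random.randrange(len(words))] = random.choice(
            ["blue", "wolf", "standing", "pine", "beside", "cartoon"]
        )
        current = " ".join(words)
        drift = 1 - difflib.SequenceMatcher(None, goal, current).ratio()
        print(f"step {step}: drift={drift:.2f}  {current}")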

For the average user, this is most visible when working with imagery.

“They wander and it’s not quite moving towards the target. I’ve tried to do this with the image generation apps like DALL-E from OpenAI, taking an image, and it doesn’t achieve this one symbol,” Wagner explained.

In this case, users of image generation models see entirely new images produced each time they revise their prompts; the originally created images are not edited.
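
A minimal sketch of that behavior, assuming the official openai Python SDK (v1 or later) with an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative. Each call to the generation endpoint returns a brand-new image, so revising the prompt restarts from scratch rather than refining the first result.

    # Sketch of generate-not-edit behavior; assumes the `openai` SDK (v1+)
    # and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    first = client.images.generate(
        model="dall-e-3",
        prompt="a minimalist logo of a fox",
        n=1,
    )

    # Revising the prompt does not edit the first image; generation starts over,
    # which is where the drift away from the original target creeps in.
    second = client.images.generate(
        model="dall-e-3",
        prompt="a minimalist logo of a fox, with sharper ears",
        n=1,
    )

    print(first.data[0].url)
    print(second.data[0].url)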

While the drift is obvious in everyday image generation, more sophisticated users of these technologies will tend to encounter it in other applications, such as website hacking.

Small businesses will meet AI-powered threats by teaming resources with other organizations. Credit: Image generated using artificial intelligence

AI systems can be expected to exhibit only a narrow set of behaviors, and genuine critical thinking is not likely to appear soon.

LLMs “always move forward trying to generate the right answer. They don’t revisit the answer. So that’s going to limit how well they can do,” Wagner told SIGNAL Media in an interview.

Hacking by AI agents would require a very sophisticated model that can think strategically, and experts agree such a model does not currently exist.

Nevertheless, LLMs can be an effective tool when many AI agents are programmed to look for a vulnerability across a wide range of targets. This is not a job for amateur operators.

“Groups that specialize, and a lot of times employ these tools, not as some random college student anymore but as professional organizations where it’s a particularly trained person pushing buttons trying to find that piece of technology, that unpatched server, that way in,” said Wagner, an expert in cyber intelligence.

Many agents carrying out an attack at this scale would produce a large amount of data that, in turn, must be analyzed to find the few successful intrusions.

Leveraging AI also makes malicious actors more effective at sorting through the noise created by employing many agents.

“How do I sort through them faster and find the one that I can ransom for the most money?” argued Wagner. This kind of job, sifting through troves of data, is another one where AI tools make malicious actors more powerful.
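
A toy Python sketch of that sifting job: keep only the confirmed footholds among the findings many agents return, then rank them by the estimated value of the target. The field names and figures are invented for illustration.

    # Toy triage over invented output from many scanning agents.
    findings = [
        {"host": "10.0.0.4", "exploitable": True,  "est_value": 5_000},
        {"host": "10.0.0.9", "exploitable": False, "est_value": 90_000},
        {"host": "10.0.0.7", "exploitable": True,  "est_value": 75_000},
    ]

    # Keep only confirmed footholds, then rank by estimated target value.
    ranked = sorted(
        (f for f in findings if f["exploitable"]),
        key=lambda f: f["est_value"],
        reverse=True,
    )
    print("triage first:", ranked[0]["host"])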

But capability growth through scale is not infinite.

“The more you do it, the more you risk discovery. So, you can’t always expand infinitely, but you can extract more value out of what you have to find the opportunity quicker,” Wagner added.

There are also limits to the final capabilities LLMs may have.

Currently, mainstream models like ChatGPT, Copilot, Gemini and others have increasingly effective guardrails against malicious use, according to Cullerot. For service providers, it is a question of following their users’ curiosity and intervening when it crosses into illegal terrain.
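
One publicly documented building block for such guardrails is a moderation endpoint that classifies an input before a model acts on it. The sketch below assumes the official openai Python SDK and an API key in the environment; the sample input is illustrative, and production guardrails layer many more checks than this.

    # Sketch of a provider-side guardrail: screen input with a moderation
    # endpoint before letting a model act on it. Assumes the `openai` SDK (v1+)
    # and OPENAI_API_KEY set; the sample input is illustrative.
    from openai import OpenAI

    client = OpenAI()

    user_input = "Write ransomware that encrypts a victim's files."  # example only
    result = client.moderations.create(input=user_input).results[0]

    if result.flagged:
        # model_dump() lists every moderation category with a boolean flag.
        categories = [k for k, v in result.categories.model_dump().items() if v]
        print("blocked; flagged categories:", categories)
    else:
        print("allowed through to the model")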

As computing power grows more accessible, such intervention may cease to be possible. Nefarious actors may continue to migrate toward processing capacity that allows them to build their own AI applications.

“They’re creating their own generative AI and their own LLMs that are specifically tailored,” Cullerot said.

The creation of unfettered services is a global risk.

To prepare for this onslaught, Cullerot suggested upgrading defenses. For smaller organizations like his, external suppliers help increase the capabilities of the in-house defense team.

“It’s not a one-size-fits-all. It depends on kind of your own situation, your own organization, the size, the resources at your disposal,” Cullerot told SIGNAL Media in an interview.

Better defenses are only part of the solution, as this will be a game of Whac-a-Mole iterating at ever faster speeds.

Therefore, AI is also becoming important in defense. According to Pal, a secure-by-design approach is imperative.

“You have to start integrating some of these tools into the automated testing and security, vulnerability testing process, because again, attack vectors that you may know about today may not be the attack vectors that these AI models use in the future and, especially, for these malicious actors,” said Jared Kim, national security and civilian senior vice president at Tyto Athene.

In this case, security must keep pace with the tools that are gathering momentum.

“The amount of time that you have to go in and respond to weaknesses, vulnerabilities, things like that are shrunk—or is shrinking,” Pal told SIGNAL Media in an interview.

Kim identifies a trend that is compounded by complacency or bureaucracy.

“A lot of the defenses that are in place right now, they tend to be static tools, static methodologies,” Kim said. Kim explained that government cybersecurity certifications create these immobile defenses because of long procurement procedures.

Still, cloud infrastructure and the attention paid to cloud migration are other potential lines of defense against upgraded actors.

“Our infrastructure, not just our hardware and software, but collective infrastructure, the cybersecurity infrastructure, has seen a lot of investment over the last, I would say decade, decade and a half. I feel we are in a position to go in and deal with it,” Pal said.
