AI Should Not Replace Tried-and-True Security Practices
Artificial intelligence complements traditional cybersecurity measures.
In the federal government space, the machines have risen, but they’re not here to threaten us. Instead, agencies are turning to artificial intelligence (AI) and machine learning to bolster the U.S.’s cybersecurity posture.
There are many reasons for this emergent interest. Agencies are dealing with enormous amounts of data and network traffic from many different sources, including on-premises and hosted infrastructures, and sometimes a combination of both. No human team can sift through this massive volume of information by hand, which means security can no longer be managed exclusively through manual effort.
AI alleviates many of these challenges. Machines can automatically comb through millions of packets of information and detect suspicious behavior. The more data these machines analyze, the more intelligent they become, and the better they are at noticing, predicting and preventing security breaches. Meanwhile, network administrators are freed up to manage other mission-critical tasks and to develop and implement innovative technologies that will help advance their agencies' agendas.
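As a toy illustration (not drawn from the article), one of the simplest forms of the automated detection described above is flagging traffic that deviates sharply from a historical baseline. The function, sample values, and threshold below are hypothetical, chosen only to sketch the idea:

```python
# Hypothetical sketch: flag a traffic sample as suspicious when it sits
# far above the statistical baseline built from recent history.
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Return True if `value` is more than `threshold` standard
    deviations above the mean of `history` (a list of past counts)."""
    mu = mean(history)
    sigma = stdev(history)
    # A flat baseline (sigma == 0) can't support a z-score comparison.
    return sigma > 0 and (value - mu) / sigma > threshold

# Normal traffic hovers near 100 packets per interval.
baseline = [98, 102, 100, 97, 103, 101, 99, 100, 102]
print(is_anomalous(baseline, 450))  # a sudden spike stands out
print(is_anomalous(baseline, 103))  # ordinary variation does not
```

Production systems use far richer models than a single z-score, but the principle is the same: the machine maintains the baseline and raises the alert, while the administrator decides what the alert means.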
But while AI offers many great benefits, it should not be considered a replacement for human intervention or existing network monitoring tools. Instead, AI should complement and support the people and tools that agencies are already using to keep their networks safe.
The human factor remains critical.
The cyber threat landscape continues to change rapidly, and some aspects of that landscape require human intervention now more than ever before. Respondents to our recent Federal Cybersecurity Survey indicated a wide range of threat sources, from foreign governments to hackers, terrorists and beyond.
The biggest threat, though, appears to come from careless or untrained insiders, with 54 percent of respondents listing them as their top concern. This point exemplifies why people still very much matter when it comes to cybersecurity.
Even though machines and systems can be highly effective at preventing suspicious behavior, they are not great at training staff to adhere to agency policies or practice strong overall security hygiene. Agencies must still rely on their security managers to train employees on everything from potential attack techniques to simple daily habits that can help protect agency networks.
Of course, AI can certainly help prevent malicious or careless insiders from doing damage. Automatic detection of suspicious activity and immediate alerts can help managers respond more quickly to potential threats. AI can also fill in gaps resulting from a lack of human resources or security training, and significantly decrease the time it takes to analyze data. As such, AI can reduce attack identification and response times from days to hours or even minutes.
Even so, humans will still be needed to react to and implement those responses. They remain a critical piece of the cybersecurity puzzle.
Traditional monitoring solutions are still vital.
Just as humans will continue to play an important role in network security in the age of AI, tools such as security information and event management (SIEM) systems, network configuration management and user device monitoring programs should remain a foundational element of agencies’ initiatives. These solutions supplement AI by extracting information from the constant noise, allowing managers to focus on truly critical issues and pinpoint security threats.
Like AI tools, traditional network monitoring programs have the ability to analyze huge volumes of data. They complement this ability with continuous monitoring of user activity and network devices, and provide automated threat intelligence alerts along with contextual information to help managers act on that information. Indeed, our survey indicated that these tools continue to play a significant role in keeping networks protected; for example, 44 percent of respondents using some form of device protection solution stated they are able to detect rogue devices within minutes.
In short, while AI is extremely useful, it should not be used exclusively. Instead, agencies should plan on augmenting existing best practices and the abilities of their staff with AI. Because although AI is good and here to stay, it’s the use of tried-and-true resources that will continue to lift up the machines as they rise.
Joe Kim is executive vice president, engineering and global chief technology officer for SolarWinds.