Defense agency’s new approach avoids playing into the adversary’s hand.
The defense community is turning to new weapons to wage war against relentless attacks on its computer networks by increasingly sophisticated cyberadversaries. According to Richard Hale, chief information assurance executive for the Defense Information Systems Agency, protecting the U.S. Defense Department's network services has traditionally entailed stopping known adversaries before they damage military systems. But as cyberattackers develop more refined means of avoiding detection, military network defenders are finding new ways to identify events that just do not seem right, even before any malicious intent is discovered.
Called anomaly detection technology, the security tools reportedly help network analysts identify abnormalities and even assist them in learning what a normal network looks like by defining aberrant behavior and unusual occurrences. "It's all about knowing your network," says Greg Stephens, a principal investigator at The MITRE Corporation.
Hale says that traditionally, the Defense Information Systems Agency (DISA) has protected its networks with signature-based detection systems, which recognize the digital fingerprints of known attacks and stop them before they cause damage.
But complete reliance on signature-based detection can leave the agency's systems vulnerable to adversaries who have developed novel ways of attacking the network, Hale says. Anomaly detection technology involves not only identifying network visitors that do not look normal but also learning what normal means. DISA must supplement these signature-based systems with ones that can be programmed to look for deviations from the norm, he says.
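The limitation described above can be illustrated with a minimal sketch: signature-based detection matches traffic against a fixed list of known attack patterns, so anything novel passes unnoticed. The signatures and payloads below are hypothetical illustrations, not real attack data or DISA's actual rules.

```python
# Hypothetical signature list: byte patterns from previously seen attacks.
KNOWN_SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]

def signature_match(payload: bytes) -> bool:
    """Return True only if the payload contains a known attack pattern."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)

print(signature_match(b"GET /index.html"))          # False: benign traffic
print(signature_match(b"...cmd.exe /c whoami"))     # True: known pattern
print(signature_match(b"brand-new exploit bytes"))  # False: novel attack slips by
```

The last call is the problem Hale describes: a genuinely new attack carries no known fingerprint, which is why deviation-from-normal checks are needed as a supplement.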
Bill Neugent, a MITRE fellow and chief engineer for the organization’s Security and Information Operations Division, agrees. “We have to get away from signatures of specific attacks and come up with a simple set of rules to which we can automatically react,” Neugent says.
Hale says that DISA’s experience with a policy-monitoring product over the past three years has helped analysts understand anomaly detection and helped the agency develop procedures for dealing with deviations. Simply put, policy monitoring entails watching over a set of rules and procedures that outline appropriate actions within a network. For instance, Machine A is never to communicate with Machine B. If that action happens, it is anomalous to the policy and potentially indicative of an attack.
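The Machine A/Machine B rule above can be sketched as a simple policy check: encode the "never" rules explicitly, then flag any observed flow that violates one. The addresses and flow records here are hypothetical, and real policy-monitoring products are far richer than this.

```python
# Hypothetical policy: these (source, destination) pairs must never occur.
FORBIDDEN_PAIRS = {
    ("10.0.1.5", "10.0.2.9"),  # "Machine A" must never talk to "Machine B"
}

def check_flow(src: str, dst: str) -> bool:
    """Return True if the flow is anomalous under policy."""
    return (src, dst) in FORBIDDEN_PAIRS

# Example: scan a batch of observed flows and collect policy violations.
observed_flows = [("10.0.1.5", "10.0.2.9"), ("10.0.1.5", "10.0.3.1")]
alerts = [(src, dst) for src, dst in observed_flows if check_flow(src, dst)]
print(alerts)  # only the forbidden pair is flagged
```

Note that no attack signature is involved: the alert fires purely because the site's configuration says this communication should never happen.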
The monitoring has been helpful in identifying internal attacks, Hale says. Policy-savvy analysts know by the way the site is configured that some things should never happen. “So if there’s a deviation from policy, the monitoring device triggers an alarm,” he says.
Knowing what to do in response to the alarms once they are received is the tricky part, Hale says. DISA’s experience with policy monitoring has highlighted the need for an analyst to understand the meaning of each alert before taking action, he adds, because responding too aggressively might result in denying services to DISA network users. In the past, the Defense Department reacted to certain kinds of cyberattacks by tightening security, which sometimes narrowed the range of missions the agency could support, Hale says. “That wasn’t a great model,” he adds.
Now, if the agency is facing increased threats, it focuses on technical and procedural actions that would help it recover from or resist an attack without denying service to whole communities, Hale says. Cutting off services in anticipation of an attack can cause more detriment than the actual attack might cause, he adds.
Stephens says the key is to take a few quick actions to wipe out as much of the alarm stream as possible so more time remains to focus on the complexity of the threat. The goal is not to turn analysts into automatons by having them clear checklists, he warns. “You want them to be thinking and putting on their detective hats,” he adds.
Human analysis of attacks against the network could take a few minutes or a few hours, Hale says. During a cyber-speed virus attack, seconds can seem like an eternity; this lapse of time appears to expose a vulnerability in DISA’s network security. Though automated threat detection software that reacts rapidly to anomaly alerts exists, Hale says he is hesitant to use it. “We’re proceeding very cautiously,” he says. Automatic reaction systems have not yet guaranteed low false alarm rates, Hale says. “We don’t want to play into the adversary’s hand by having a system that reacts automatically and denies service to certain defense department customers,” he adds. DISA is trying to balance fast attack detection with continuity of service, Hale says.
The alternative to automation seems to be well-trained, network-omniscient analysts. Policy monitoring has helped educate analysts about their network, Hale says. "We're getting better at defining policy and at understanding the deviation of standards from that policy," he adds. MITRE's Stephens says anomaly detection allows human analysts to learn about their networks to a degree that could never be attained from signature-based intrusion detection. "The anomaly system becomes the feeder system for building new rules about your network that you didn't know you could apply," he adds.
Hale says the challenge is getting the policy right in the first place and avoiding denial of service when a detected anomaly turns out merely to be an administrator’s mistake or a false alarm. An analyst conducting network defense needs to understand how the network site is configured, what job the site does, what vulnerabilities are involved, and what type of upstream protections and perimeter-defense mechanisms are in place between the Internet and the site, he adds.
DISA has deployed security information management tools at its regional theater network operation centers to help analysts manage data and to understand better the context of incoming alerts. The centers are aimed at ensuring continuity of missions in that geographic location, Hale says. Analysts manning the centers use data from anomaly detectors, vulnerability scanners that look for system weaknesses, intrusion detection systems and packet-sniffing devices that eavesdrop on packets as they crisscross the system.
Hale says he intends to add host-based anomaly detection to the DISA network security mix this year. Host-based systems can monitor the activity of individual computers to determine whether a particular unit is behaving oddly. Neugent says, “You can recognize the different personalities and behaviors of machines.”
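One simple way to "recognize the personalities of machines," as Neugent puts it, is to learn a statistical baseline per host and flag large deviations. The sketch below uses a mean-plus-three-standard-deviations rule on daily outbound traffic volume; the host names, numbers, and threshold are illustrative assumptions, not DISA's method.

```python
import statistics

def flag_anomalous_hosts(history, today, k=3.0):
    """Flag hosts whose activity today deviates from their own baseline.

    history: {host: [daily_outbound_mb, ...]} - each host's normal behavior
    today:   {host: daily_outbound_mb}        - the value to test
    k:       how many standard deviations count as anomalous (assumed 3)
    """
    flagged = []
    for host, samples in history.items():
        mean = statistics.mean(samples)
        spread = statistics.pstdev(samples) or 1.0  # avoid zero-width baseline
        if abs(today.get(host, 0.0) - mean) > k * spread:
            flagged.append(host)
    return flagged

# Hypothetical example: db01 normally ships ~20 MB, then suddenly 400 MB.
history = {"web01": [100, 110, 95, 105], "db01": [20, 22, 19, 21]}
today = {"web01": 104, "db01": 400}
print(flag_anomalous_hosts(history, today))  # ['db01']
```

Each machine is judged against its own history, so a volume that is routine for one host can still be a strong anomaly for another.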
Log analysis monitoring, another form of anomaly detection, also is on DISA’s radar. Akin to credit card monitoring, log analysis technology looks for anomalous behavior, for example, when authorized clients access prohibited Web sites. DISA will monitor the data logs of many sites within the military’s classified and nonclassified networks to uncover misuse. Hale says analyzing logs over an extended time period reveals patterns of bad behavior, often by authorized users. Stephens, who studies log analysis as part of a research effort titled “Detecting Insider Threat Behavior,” says, “I’m not looking for hackers. I’m looking at people who are abusing the privileges that they legitimately have.”
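The misuse Stephens describes, authorized users abusing legitimate privileges, can be sketched as a count of prohibited-site accesses per user across a log window. The site names, log format, and threshold below are hypothetical illustrations of the idea, not the actual research tooling.

```python
from collections import Counter

# Hypothetical list of sites that policy forbids authorized users to visit.
PROHIBITED = {"gambling.example.com", "filesharing.example.net"}

def misuse_report(log_lines, threshold=2):
    """Return users who hit prohibited sites at least `threshold` times.

    log_lines: iterable of 'user site' records (assumed log format).
    """
    hits = Counter()
    for line in log_lines:
        user, site = line.split()
        if site in PROHIBITED:
            hits[user] += 1
    return {user: count for user, count in hits.items() if count >= threshold}

logs = [
    "alice intranet.example.mil",
    "bob gambling.example.com",
    "bob gambling.example.com",
    "carol filesharing.example.net",
]
print(misuse_report(logs))  # {'bob': 2}
```

As with the credit card analogy, a single access may be noise, but a pattern repeated over an extended period is what makes the behavior stand out.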
Incorporating log analysis and the myriad other available and emerging intrusion detection technologies into DISA's network security effort still may not be enough, Neugent warns. "What we need are coherent, integrated management infrastructures," he says. Right now, information assurance managers must maintain several disparate products to provide fairly comprehensive network security. "I think we're five years from seeing the type of integration that we need," Neugent says.
“We’re not there yet,” Hale agrees. The goal is to have an infrastructure that resists most attacks and, as a result, “allows us to have a sensible layered monitoring system in place that can spot the small numbers of attacks that actually get through.”
The challenge will be keeping ahead of increasingly sophisticated adversaries. Neugent says criminal attempts to take control of network systems provide insight into what future attacks might bring. Just as spamming, identity theft and phishing (fraud in which attackers pose as legitimate online financial service providers) were not anticipated five years ago, new types of attacks may be hard to predict, Neugent offers.
But, he says, future attackers will likely train their cyberlasers on specific applications to gain control of them or to take them out entirely. "Mostly, we're going to see the vacuuming up of information. We're going to see espionage," Neugent says. Attackers are not trying to shut down a victim's services but to take control of them so they can send out spam, harvest private information and steal identities. "They will attack commercial applications, not infrastructure," Neugent says. "They aren't trying to shut you down, so you shouldn't shut yourself down."
Cyberterrorists will likely follow the same trend and try to conduct more surgical attacks, Neugent predicts. Fortunately, that threat has not yet materialized, he adds.
The growing number of terrorist organizations popping up online may actually stave off attacks on the commercial Internet, Neugent says. Terrorists are using the Web for recruiting, promoting their causes and promulgating their successes. “None of them could exist without the Internet,” he adds. So any attack terrorists might initiate on the Internet could boomerang and hit their online operations, Neugent says.
Still, the war against cyberfoes is a constant effort. The security equation must include three critical components: people, processes and technology. “You have to fight the war all the time with good people and tools,” Neugent warns.