Extending Cyberdefense To the Fiber Domain
High-speed communications call for multilayered protection.
Today’s enterprise networks, major Internet exchange points and international peering points increasingly are being interconnected by high-speed fiber and gigabit Ethernet facilities. While these next-generation environments provide benefits in terms of speed and throughput, they also are brutally efficient at spreading distributed denial of service attacks, viruses and malicious worms that can disrupt network and application servers. The increase in the number and severity of attacks as well as the massive economic costs of malicious worms over the past three years indicate that defenses against these problems need to be improved.
As the threat of cyberattacks continues to grow, protecting the critical telecommunications infrastructure is of paramount importance. Some forms of malicious traffic must be filtered out at the core-network level before they travel the backbone network and are pumped into customer access links. The processing power needed to perform appropriate security applications on high-speed links requires that application-related processing of packets be loosely coupled to and not embedded in the routing or switching fabric that transports packets. An intelligent network overlay of the core transport infrastructure enables more effective monitoring and management of malicious attacks, improving network assurance characteristics for carriers, service providers and customers.
A range of security point products, including software embedded in routers and special-purpose appliances, protect today’s tail circuit or customer premises environment. However, the core carrier infrastructure of high-speed fiber optic links running at optical carrier (OC)-48 rates and the metropolitan fiber networks running gigabit Ethernet are predominantly unprotected. This places a burden on access-level security applications, resulting in a lower quality of service and degrading the performance of many of the traffic services running through these links.
From a purely technology expansion perspective, there is a growing divergence between Moore’s Law, which describes the growth of microprocessor power, and Gilder’s Law, which describes the even faster growth of network bandwidth. It is conceivable that the power of a silicon chip may never equal the speed with which packets of light travel down a fiber. By decoupling the complex packet processing functions, routers can be optimized for their communications processing functions. This intelligent network overlay approach provides several key benefits. Network-planning engineers gain an increase in router performance and traffic throughput, while network security engineers gain significant opportunity for enhanced capability.
The Code Red worm alone infected more than 350,000 computers in less than 14 hours. At its peak, it was infecting more than 2,000 new hosts per minute. Had serious computing power been focused solely on the packet processing function rather than on routing, the worm’s rate of spread could have been sharply reduced. Systems administrators would have had more time to update their antivirus systems or patch the original operating system vulnerability, mitigating the final damage. Once the identifying data string was known, appropriate filters could have been implemented on high-speed links. This approach detects and blocks distributed denial of service attacks closer to the network core, before they reach their intended targets. However, using router security filters to accomplish this would slow transport performance to the point where the entire network would degrade.
To address these problems as far into the Internet as possible requires a new breed of highly intelligent, packet-based computing platforms that perform a wide range of core-based network security and assurance functions.
The initial step in protecting the telecommunications infrastructure in a more mission-critical fashion requires the design and implementation of assured access to cyberspace. This system involves robust network infrastructure protection utilizing a defense-in-depth strategy and a national security/emergency preparedness approach for Internet bandwidth to ensure access for high-priority Internet protocol traffic in times of crisis or war.
A key technical requirement of a true defense-in-depth strategy is positioning the sensor capability as deep into the Internet as possible. The ideal location is at the high-speed OC-48 and gigabit Ethernet trunks of the network. The core sensor must provide data about threats and respond to those threats based on predefined rules. This is the basis for establishing national security/emergency preparedness-level capabilities for the Internet. Based on threat data from these core sensors, responses can be automatically or manually engaged. For example, once a threat has been identified, increasingly restrictive traffic management can be initiated through predefined access control lists. Packets can be blocked, filtered, selectively rate-limited and prioritized, all with assured transport guaranteed for the highest priority traffic. At the same time, these sensors can protect the infrastructure by line-speed monitoring and dropping distributed denial of service traffic and viruses. Multiple sensors can work in concert to isolate and respond to a threat.
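The selective rate limiting mentioned above is commonly implemented with a token-bucket algorithm. The sketch below is a minimal illustration under assumed names and a priority encoding of the author’s invention, not a description of any particular sensor; note how the highest-priority class bypasses the limiter so that assured transport is preserved.

```python
# Illustrative token-bucket rate limiter with a guaranteed-priority
# bypass. Class names, the priority encoding (0 = highest) and the
# admit() helper are assumptions for this example.

import time

class TokenBucket:
    """allow() returns True while the configured packet rate is not exceeded."""
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps       # tokens (packets) replenished per second
        self.capacity = burst      # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def admit(packet_priority: int, bucket: TokenBucket) -> bool:
    """Highest-priority traffic is always admitted; all other traffic
    consumes tokens and is dropped when the bucket is empty."""
    HIGHEST = 0
    return packet_priority == HIGHEST or bucket.allow()
```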
The ultimate defense-in-depth security posture would tie core and edge sensors to enterprise sensors and even desktop security tools, feeding all appropriate data into a security operations center for enterprise-level activity consolidation. Using new-generation data archiving strategies and sophisticated assimilation and analysis tools, the security dashboard at the security operations center would provide visual indications of threats, network anomalies and dramatic changes in network demographics. With this visible threat information, response teams can implement threat responses effectively and provide networkwide threat mitigation as needed.
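One way to picture the consolidation step is a simple roll-up of per-sensor event counts at the operations center, with a threshold that promotes a local signal to a networkwide alert. The report format, threshold value and function names below are illustrative assumptions.

```python
# Illustrative sketch of operations-center consolidation: sensors
# report (sensor_id, signature, count) tuples per interval, and the
# center sums them to surface networkwide spikes. The threshold is
# an assumed value for this example.

from collections import Counter

SPIKE_THRESHOLD = 1000  # events per interval treated as networkwide

def consolidate(reports):
    """Sum per-signature event counts across all reporting sensors."""
    totals = Counter()
    for _sensor_id, signature, count in reports:
        totals[signature] += count
    return totals

def networkwide_alerts(totals):
    """Signatures whose combined count crosses the spike threshold."""
    return [sig for sig, n in totals.items() if n >= SPIKE_THRESHOLD]
```

The value of the roll-up is that no single sensor need see enough traffic to trip an alarm; the pattern emerges only in the aggregate.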
The assured access to cyberspace sensor platforms must have certain characteristics to operate effectively in the core fiber network environment. The sensor must be capable of matching the speed of the link bandwidth with no user-recognizable latency. The system would have to feature a high-performance, scalable architecture; optical fiber and gigabit Ethernet capability; less than 10-millisecond latency; a hardened, fault-tolerant operating system; and carrier-compliant interfaces and chassis. It must be deployable at one or more traffic concentration sites, capable of strategic and tactical deployment, and noninvasive to normal network operations. The system also must support stealth insertion and operation, feature multiprotocol support, provide 100 percent packet inspection, include programmable real-time response, offer a secure collaborative capability among sensors and run multiple network assurance applications. In addition, it must support rapid prototyping of new information warfare applications or filters as well as rapid porting of legacy information warfare applications and include a multilevel threat condition configuration.
A number of key operational characteristics also are needed: an interface to high-level data analysis packages, real-time alarms, traffic capture and storage, continuous operation and remote management.
With these capabilities, the sensors can be programmed to reflect current defense condition levels. As the threat to the systems increases, the various security applications can be tightened incrementally to minimize the damage done by cyberattacks and ensure that mission-critical traffic has access to available bandwidth.
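The incremental tightening described here can be pictured as a predefined rule table keyed by threat condition level. The level names, traffic classes and actions below are illustrative assumptions, with the highest-priority national security/emergency preparedness ("nsep") class always passed, consistent with the assured-bandwidth goal stated above.

```python
# Illustrative mapping of threat condition levels to per-class
# traffic actions. Level names, class names and the default-deny
# choice for unknown classes are assumptions for this example.

from enum import IntEnum

class Threat(IntEnum):
    NORMAL = 0
    ELEVATED = 1
    SEVERE = 2

# Action per (threat level, traffic class): "pass", "rate_limit" or "block".
# The nsep class is always passed, regardless of level.
RULES = {
    Threat.NORMAL:   {"nsep": "pass", "enterprise": "pass",       "bulk": "pass"},
    Threat.ELEVATED: {"nsep": "pass", "enterprise": "pass",       "bulk": "rate_limit"},
    Threat.SEVERE:   {"nsep": "pass", "enterprise": "rate_limit", "bulk": "block"},
}

def action_for(level: Threat, traffic_class: str) -> str:
    """Look up the action for a traffic class at the current threat level;
    unknown classes are blocked (default deny)."""
    return RULES[level].get(traffic_class, "block")
```

Raising the level then amounts to a single table switch rather than per-filter reconfiguration, which is what makes the tightening incremental and predictable.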
Robert Fish is the vice president of corporate strategy, CloudShield Technologies, San José, California.