
A House Divided Does Not Fall

Network segmentation offers a way to meet burgeoning security needs.

The proliferation of new data sources promises to compound security challenges. Organizations must embrace a new way to protect their valued assets and information, building robust assurances against data leaks, spills and theft as well as any compromise of data integrity. Cross-domain solutions offer protection at the highest levels, and they facilitate secure collaboration at significantly lower costs than other methods.

The defense and intelligence industries are at the forefront of this initiative to better secure information, and they must stay the course. With threats escalating in complexity and severity, it is only a matter of time before their civilian counterparts and commercial industry follow suit. This need has been building over time.

As computers started to multiply, manufacturers did not perceive security as a primary design driver. Organizations stored everything on flat networks and continue to do so, chiefly for reasons of cost and ease of management. The Defense Department and the intelligence community, on the other hand, went through the pain of classifying data and compartmentalizing information and assets across separate physical domains. In many ways, this approach served both communities well, minimizing data compromise and leakage far better than their commercial counterparts managed. As threats become more serious, network segmentation and data classification will emerge as the next frontiers for achieving a higher level of defense and information assurance.

The high-profile attacks reported so far primarily have targeted commercial industry. One of the most highly publicized was the Target breach, in which hackers stole customer data by compromising an HVAC vendor in the retailer’s supply chain. In the government, the U.S. Office of Personnel Management (OPM) breach stands out as the watershed moment, affecting more than 21 million personnel records, including fingerprints. While each of the breached organizations had mature security measures in place, a quick analysis reveals a common thread: All their networks were flat. Once inside a network, an adversary could move laterally and freely to attack critical servers.

If any of these organizations had segmented their networks and compartmentalized their data, the story could have been different. At the very least, the adversary’s job would have been much harder, and security teams might have found the compromise before it was too late. The fundamental concepts of data compartmentalization and network segmentation make sensitive data difficult for adversaries to reach but easy for authorized users to access. Data does not leak, spill or otherwise fall into the wrong hands inadvertently.

As the threat landscape changes, two additional alarming trends loom. One is the rise of insiders, whether malicious or careless, exfiltrating information. The other is attackers shifting away from simple data theft toward data compromise in pursuit of more menacing goals. Many security experts predict that 2016 may very well be the year of ransomware.

Yet segmenting networks and classifying and compartmentalizing data will not stop adversaries from trying to attack, so segmentation must be applied rigorously across agencies. This can grow complicated quickly because agency and program requirements evolve constantly, and network segmentation should be a fundamental design principle, not an afterthought. In many cases, physical infrastructure, not just software, must be taken into account. Also, classified data generally cannot reside on the same domain or network as other data.

While segmentation can offer greater information assurance, it also can create budget burdens. If an agency opts for multiple domains, each application and piece of physical hardware might have to be replicated to provide adequate service across those domains. An agency with 500 users who require three networks with email and print capabilities would need an email client available to staffers on each network, bringing the number of email clients up to 1,500. A printer for every 10 individuals would mean 50 printers on each network and 150 across the three. In addition, if the agency has not acquired what are now readily available cross-domain access technologies, each employee would need three client workstations, along with all the connectivity and power to go with them.
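
The multiplication effect behind these figures is easy to see in a quick back-of-the-envelope calculation. The short Python sketch below simply reruns the article’s hypothetical numbers (500 users, three networks, one printer per 10 users); the constants and the single-workstation cross-domain comparison are illustrative assumptions, not drawn from any specific deployment.

```python
# Back-of-the-envelope sketch of the hardware duplication described above.
# Figures (500 users, 3 networks, 1 printer per 10 users) follow the article's
# hypothetical agency; the cross-domain comparison is an illustrative assumption.

USERS = 500
NETWORKS = 3
USERS_PER_PRINTER = 10

# Without cross-domain access, clients and peripherals are replicated per network.
email_clients = USERS * NETWORKS                      # 1,500 email clients
printers = (USERS // USERS_PER_PRINTER) * NETWORKS    # 50 per network, 150 in all
workstations = USERS * NETWORKS                       # 3 workstations per employee

# With cross-domain access technologies, one endpoint per user reaches every domain.
workstations_cross_domain = USERS

print(f"Email clients: {email_clients}")
print(f"Printers: {printers}")
print(f"Workstations, separate networks: {workstations}")
print(f"Workstations, cross-domain access: {workstations_cross_domain}")
```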

Classified data and segmented networks introduce a new challenge as well. Many missions and functions draw on information from multiple sources, yet security standards do not easily allow data to be commingled, and user rights must be considered down to a granular level. Two major concerns emerge: access to information and applications, and the safe transfer of data between domains. In the former case, information and applications need not be physically moved; the latter requires duplication.

While physically separating data establishes a greater degree of information assurance and confidence, the data still has to transfer across domains as needed for collaboration. Typically, an employee at a lower domain might need a document or multimedia content from a higher domain to meet an objective or perform a function. But consider a soldier who normally has no access to a classified domain yet must listen to a broadcast from an officer residing at the more sensitive level. Instead of granting the soldier authorization to the higher domain, it may be better to stream the broadcast down to users at the lower domain.

These examples illustrate the demand for technologies that not only help reduce costs but also enhance the secure transfer of information on a need-to-share basis between user levels. In some cases, a physical transfer might be necessary; in others, duplication of data at additional levels is prohibited. Cross-domain technologies are built specifically to address some of these difficult issues, leading to collaboration and secure access.

Firewall technology is ubiquitous, and it plays a key role. Although firewalls originally were devised to control access to private or enterprise networks from the outside, their latest incarnations offer more eclectic flavors. The basic kinds deliver some packet-filtering capabilities to ensure that only whitelisted sources are allowed through, but application firewalls or proxy servers bring more sophistication. They can intercept packets and forward them to specific applications inside an organization. Next-generation firewalls establish an integrated platform that combines basic firewall functionality with deep packet inspection, intrusion prevention, SSL and SSH interception, website filtering and anti-virus inspection. 
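
As a rough illustration of the basic packet-filtering behavior described above, in which only whitelisted sources are allowed through, consider the sketch below. The network ranges, ports and packet fields are hypothetical examples, not taken from any particular firewall product.

```python
# Minimal sketch of whitelist-style packet filtering: only packets from
# approved source ranges, bound for approved ports, are allowed through.
# Rules and field names are hypothetical, not drawn from any product.
from ipaddress import ip_address, ip_network

ALLOWED_SOURCES = [ip_network("10.10.0.0/16")]   # whitelisted enclave
ALLOWED_DEST_PORTS = {443, 25}                    # HTTPS and SMTP only

def permit(packet: dict) -> bool:
    src_ok = any(ip_address(packet["src"]) in net for net in ALLOWED_SOURCES)
    port_ok = packet["dst_port"] in ALLOWED_DEST_PORTS
    return src_ok and port_ok

print(permit({"src": "10.10.4.7", "dst_port": 443}))   # True: whitelisted source
print(permit({"src": "192.0.2.50", "dst_port": 443}))  # False: unknown source
```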

The concept of guard technology has been around for a long time. Guards are similar to firewalls in that both are border-protection devices that control entry to assets stored within the enclaves they defend. They differ in their default posture: a firewall generally allows traffic through and is configured to block what is unwanted, whereas a guard operates on a zero-trust model, denying all traffic unless explicitly configured to allow it. Because their applications traditionally run in high-risk, high-assurance environments, guards offer a greater standard of confidence, evaluation and filtering functionality than firewalls.
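
That contrast in posture can be sketched in a few lines. In the hypothetical example below, a guard-style check denies any transfer that lacks an explicit release rule for the source and destination domains and inspects the content of transfers it does allow; the rule structure, size limit and keyword filter are illustrative assumptions rather than the behavior of any real guard product.

```python
# Sketch of a guard-style transfer decision: default deny, explicit release
# rules per source/destination domain, plus simple content inspection.
# Domains, rules and the keyword filter are hypothetical examples.

RELEASE_RULES = {("secret", "unclass"): {"max_bytes": 1_000_000}}
BLOCKED_TERMS = {"NOFORN", "TOP SECRET"}          # crude content filter

def guard_allows(src_domain: str, dst_domain: str, payload: str) -> bool:
    rule = RELEASE_RULES.get((src_domain, dst_domain))
    if rule is None:                               # no rule: deny by default
        return False
    if len(payload.encode()) > rule["max_bytes"]:  # oversized transfer: deny
        return False
    return not any(term in payload.upper() for term in BLOCKED_TERMS)

print(guard_allows("secret", "unclass", "Routine status report"))   # True
print(guard_allows("secret", "unclass", "Contains NOFORN caveat"))  # False: content filter
print(guard_allows("unclass", "secret", "anything"))                # False: no release rule
```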

Whether to apply a next-generation firewall or a guard depends on an organization’s asset protection and assurance needs. The sensitivity of information transferred, the application sought, source and destination assurance levels and a host of additional issues dictate which choice is appropriate.

One example is a utility in the critical infrastructure industry. If a user is trying to access a supervisory control and data acquisition (SCADA) control from a different network, a guard would apply because an elevated degree of assurance is necessary. If status data generated from a controller in a substation is being transmitted to a central SCADA control, a firewall may be the appropriate choice if the substation controller is fairly isolated from untrustworthy networks.

While firewalls and guards control information transfer, not all data has to leave its location to be accessed by a user. With transfer mechanisms, information physically moves from one location to another—either from a place of higher trust to one of lower trust, or vice versa. When data moves, essentially a copy is made. It is up to the custodians of the domain to establish appropriate protection. End users who access this information and store it on an endpoint frequently degrade information assurance. One reason is that smartphones and tablets used as part of bring your own device (BYOD) programs, which often receive overt or tacit approval from employers, are more prone to loss and theft. The upshot is that information assurance is less of a certainty than ever. In many circumstances, organizations should store data in appropriate enclaves. Authorized employees can use that data as necessary without physically transferring it to a second location.

In such cases, access to the various domains can be virtualized. The applications and data physically reside at the sensitivity levels to which they are assigned in a virtualized environment. Users access them through a secure redisplay mechanism. If users require availability from multiple domains, this can be facilitated through secure technologies that keep the separation of the physical enclaves all the way to the endpoint but introduce multiple redisplay windows from which a user can retrieve the information.

In the government, one example of a complex environment requiring robust information security is the Department of Veterans Affairs. The health care ecosystem is defined by an intricate web of providers, patients, payers, pharmaceuticals, suppliers and more. As a result, the department handles a variety of sensitive information, including patient histories, billing details, credit card accounts and medical device data, and it must comply with myriad regulations. In most cases, information does not require transfer to a second location—to do so would degrade the nature of security around it. Should secure access and redisplay technologies be used in this type of environment, users still leverage apps and data that benefit the business and its mission. But the apps and data will not move physically or replicate to enable this flexibility.

In this type of construct, information assurance can generate savings, and agencies and organizations are looking for new ways to lower costs while augmenting security. In the United States, President Barack Obama’s fiscal year 2013 budget called for cuts in unnecessary spending, including printing and supplies. This is understandable when studies show that organizations can cut their total printing budget by 65 percent through printer consolidation. Consolidation saves not only in hardware expenses but also in consumables and administration. When extraneous printers at multiple sensitivity levels are eliminated, organizations reap significant savings from reduced hardware, space, power, support and supplies. The robust defense provided by guard-based systems enables users to print safely to high-side printers from multiple security levels without the risk of transferring malicious or sensitive data from high-trust to low-trust networks.

Guard-based systems also enable the secure, policy-enforced exchange of email and attachments among users on different networks, designating a single inbox for all email activity. A single inbox boosts productivity for those who require access to multiple email clients residing on different networks at varying classification levels. This effort also cuts down on the number of email clients deployed across various domains, adding to budget savings.

Ultimately, the Defense Department and intelligence community agencies can reach a state of optimized protection through analysis. Yet whether it is analysis of environmental data, operational information or security data, analysts face the challenge of deploying tools at each clearance level and then correlating them across domains. This is a tedious task. With cross-domain technologies, analysis tools are deployed at the high side, with protected data shared from other levels to foster a holistic view of information that drives easier analysis and lowers costs.

In multiple clearance-level environments in which analysts and users operate within various domains, separate endpoints typically are required to maintain that separation. Depending on the number of domains involved, an analyst could end up with seven or eight workstations, creating a cramped workspace overwhelmed with heat. Cross-domain technologies avoid this by providing users with secure, simultaneous access to information on any number of networks from a single thin client, cutting expenses and eliminating the degradation of the work environment.

Cloud technologies unveil a new paradigm for network segmentation as well, enhancing cost savings. Applications can reside on one level—such as the high side—and information associated with the applications can exist at other levels. Guard-enabled access to data would ensure that a user does not require multiple instantiations of an application but is allowed access to data from appropriate levels. Guards and next-generation firewalls also facilitate safe and secure transfer of information between multiple clouds, incorporating physical separation while offering a higher level of assurance.

Ashok Sankar is the senior director, cyber strategy, Forcepoint.
