The Other Side of Zero Trust
Application and workload connections can’t be ignored.
The federal government has been taking zero trust more seriously. Some initial work has been completed with zero trust network access, but a significant part of the model has yet to be implemented, and the complexity of the outside-in approach remains. The more important aspect of zero trust concerns application and workload connections, which are what attackers care about and are not being protected today.
This “other side” of zero trust, a host-based microsegmentation approach, will lead to greater security and will stop the lateral movement of malware. Launching multiple pilot projects is the best way forward in the inside-out approach to zero trust.
Progress has been made. The U.S. Defense Department Digital Modernization Strategy (DMS) FY19-23, released in June 2019, specifically called out zero trust security under Appendix A: Technologies Offering Promise. In August 2020, NIST released its latest Zero Trust Special Publication 800-207 with input from the federal CIO Council and the NIST Cyber Center of Excellence.
The publication abstract states, “Zero trust is the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters toward a focus on users, assets and resources.” The federal government also is working on improving user access for zero trust; however, it is moving slowly to protect its assets and resources in a zero trust environment.
The document also specifically defines microsegmentation as placing software agents or firewalls on the endpoint asset or assets. These gateways dynamically grant access to individual requests from a client, asset or service. This is the best way to protect the high-value assets specifically called out in the FY2020 FISMA CIO Metrics report.
Least-privilege access is at the heart of zero trust. It means that agencies and commands deny all traffic by default and allow only predefined access, both from the user perspective and, more importantly, from the application and workload perspective.
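The deny-all, allow-only-predefined posture can be sketched in a few lines. This is a minimal illustration, not any agency's actual policy engine; the service names and ports are invented for the example.

```python
# Minimal sketch of a default-deny, allowlist-only policy check.
# Service names and ports are illustrative assumptions.

ALLOWED_FLOWS = {
    ("web-frontend", "app-server", 8443),
    ("app-server", "database", 5432),
}

def is_allowed(source: str, destination: str, port: int) -> bool:
    """Deny all by default; permit only predefined flows."""
    return (source, destination, port) in ALLOWED_FLOWS

# Anything not explicitly listed is denied, including a direct
# web-frontend-to-database connection that skips the app tier.
assert is_allowed("web-frontend", "app-server", 8443)
assert not is_allowed("web-frontend", "database", 5432)
```

The point of the sketch is the default: absence from the allowlist means denial, the inverse of the trust-all-deny-by-exception model the article describes.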
Properly creating a baseline application and workload dependency map also is extremely important to embedding security throughout an agency’s computing architecture. Users need to see app-to-app and workload traffic to segment it properly.
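Conceptually, such a baseline map is just observed app-to-app traffic grouped by source. The sketch below assumes flow records gathered from host telemetry; the record format and application names are hypothetical.

```python
from collections import defaultdict

# Hypothetical flow records (source app, destination app, port),
# such as might be gathered from host telemetry.
flows = [
    ("web-frontend", "app-server", 8443),
    ("web-frontend", "app-server", 8443),  # repeats collapse into one edge
    ("app-server", "database", 5432),
]

def build_dependency_map(flow_records):
    """Group observed app-to-app traffic into a baseline dependency map."""
    deps = defaultdict(set)
    for src, dst, port in flow_records:
        deps[src].add((dst, port))
    return dict(deps)

baseline = build_dependency_map(flows)
for app, targets in baseline.items():
    print(app, "->", sorted(targets))
```

Once the baseline exists, any flow not in it is a candidate for denial, which is how visibility feeds directly into segmentation policy.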
DMS Appendix A states the actual implementation and operationalization of zero trust is significantly complex because it requires a change in philosophy based on the current model, which is trust all and deny only when a reason exists.
But it is how users implement their zero trust architecture that will determine its ultimate ease or difficulty. When users can create a baseline application and workload map in real time, they can reduce zero trust complexity dramatically.
The Defense Information Systems Agency (DISA) and the National Security Agency (NSA) are working on their own zero trust reference architecture that aims to improve cybersecurity without requiring an agency or command to purchase new equipment. Instead, host-based microsegmentation makes the security method possible by allowing an agency to program native firewalls.
Brandon Iske, chief engineer, DISA, takes the approach of “Never trust, always verify and assume a breach.” Iske is working with Serena Chan, director, DISA Cyber Development Directorate, on the agency’s zero trust reference architecture in conjunction with the NSA. They also are building a new zero trust laboratory at the U.S. Cyber Command Maryland Innovation and Security Institute Dreamport facility in Columbia, Maryland.
Over the past year, there have been some discernible pilot projects focusing on upgrading identity management and user credentials using the zero trust network access (ZTNA) approach. Currently, the U.S. Army, Air Force, Navy and DISA all have pilot programs in flight that focus on the outside-in approach to zero trust, but that only tells half the story.
For example, DISA and the Army are leveraging the Zscaler Private Access service, which brokers access through the Zscaler service and ensures the organization’s security and access policies are enforced before users attempt to securely access enterprise applications from wherever the user may be. With Zscaler, ZTNA is achieved through microsegmentation by application (user to app) as well as microsegmentation for workloads (app to app), preventing illicit lateral movement.
With an inside-out approach, safeguarding high-value assets is the most prudent way to start a zero trust pilot project. This recommendation aligns with the Department of Homeland Security’s release of microsegmentation as a recommended capability under the Continuous Diagnostics and Mitigation Program. The program provides cybersecurity tools to participating agencies to improve their FISMA score.
Zero trust also carries significant dependencies on credential escrow, strong authentication and identity management. These items apply to user- and device-to-application traffic; machine-to-machine, or workload-to-workload, connections often are application programming interface-based and require a different approach. Some engineers say these items must be in place before policy enforcement can begin.
However, credentials depend on network security, while enforcing policy focuses on application security, which doesn’t require the network. In fact, both sides of the zero trust coin can be worked on simultaneously.
Once users are approved inside of the perimeter, they should no longer gain access to the entire network. The way forward with zero trust means that the concept of a strong network perimeter must be replaced with an expanded emphasis on users, data and applications. While multiple projects that are centered around users are in the pilot stage, a much greater emphasis should be placed on the data—or workloads—and applications.
Because the very heart of zero trust is the concept of least privilege, breaches can be locked down to one server, workload or laptop. This is the inside-out approach to zero trust. This approach to zero trust from a system architecture point of view can be implemented in three ways: software-defined networking (SDN), firewalls and host-based microsegmentation.
Using SDN or network virtualization for enforcement is a weak security option because it focuses on network security and utilizes a free-form tagging and labeling construct. This lack of governance in managing the metadata used to identify a workload makes it difficult to manage and provision policies. Keeping track of Internet protocol (IP) addresses adds complexity and prevents scale. It also requires a complete network upgrade and is costly.
A firewall approach requires additional firewalls to control east-west traffic, a model network engineers understand well. It also integrates with many other security technologies and enables the deployment of extra services, including intrusion protection systems and URL filtering.
However, hardware firewalls are rigid. For internal/data center firewalls, keeping track of zones, subnets, IP addresses and the order of the rules can easily become unwieldy when the environment is virtualized and highly automated. The likelihood of breaking applications during a firewall rule change increases as the environment becomes more complex. Just as in the SDN approach, a lack of visibility for app-to-app traffic exists, large deployments can be expensive and, once again, this is a network security-centric approach.
Lastly, a host-based approach to microsegmentation is one that programs the native stateful firewalls that reside in each host. By its very nature, focusing on the applications decouples segmentation from the networking architecture. It is simple to deploy, easily scalable, lower cost and can be rolled out in any architecture, including cloud, containers, hybrid and bare metal. It can work with heterogeneous hardware assets such as firewalls, load balancers and network switches, and a real-time app and workload dependency map is available. For the first time, chief information officers and chief information security officers can see what their applications and workloads are doing.
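The idea of programming a host’s native stateful firewall from an allowlist policy can be illustrated as follows. The rule strings mirror iptables syntax for familiarity, but this is a sketch under assumed inputs; nothing here configures a real firewall, and the CIDRs and ports are invented.

```python
# Illustrative sketch: rendering an allowlist policy into iptables-style
# rules for a host's native stateful firewall. Nothing here talks to a
# real firewall; the policy entries are hypothetical.

policy = [
    # (source CIDR, destination TCP port) pairs permitted to reach this host
    ("10.1.2.0/24", 8443),
    ("10.1.3.5/32", 5432),
]

def render_rules(allowed):
    rules = ["-P INPUT DROP"]  # default-deny posture
    # Keep replies to established connections flowing (stateful behavior).
    rules.append("-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT")
    for cidr, port in allowed:
        rules.append(f"-A INPUT -s {cidr} -p tcp --dport {port} -j ACCEPT")
    return rules

for rule in render_rules(policy):
    print(rule)
```

Because the rules are generated per host from a central policy rather than maintained by hand on a chokepoint appliance, the segmentation travels with the workload regardless of the underlying network, which is the decoupling the host-based approach depends on.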
Individuals in the government’s information security and data center agencies anticipate the DISA/NSA zero trust reference architecture report will help them navigate a better way forward. Nascent action is being taken from the identity management or ZTNA perspective; however, more focus is needed on the inside-out, host-based microsegmentation approach to zero trust.
The time for initiating multiple pilot projects is now. Doing so will help prevent the spread of lateral breaches, improve an agency’s or command’s cybersecurity posture using existing equipment and provide a first-ever real-time visibility map. Taking action now will lock down the application and workload attack vector and allow for a renewed mission focus.
Mark Sincevich is the federal director for Illumio, a cybersecurity software company enabling end-to-end zero trust using host-based microsegmentation.