Zero Trust Is About Priorities and Thinking the Unthinkable

Zero-trust architecture is both elusive and inevitable.
Achieving zero trust is not a one-size-fits-all formula and each organization should set clear goals before starting. Credit: NaMong Productions/Shutterstock

Implementing zero trust cybersecurity principles across the full array of technologies, including legacy systems, next-generation cellular and open-source technology, is necessary but not easy, experts say.

As defense agencies place more operations and assets online, adversaries have more opportunities to exploit vulnerabilities. Thus, cybersecurity efforts have gained attention, especially zero trust.

“[Zero trust is] a cybersecurity strategy based on the idea that no user or asset is to be implicitly trusted. Why? Because trust is a vulnerability,” said Matt Chiodi, chief trust officer at Cerby, a cybersecurity company.

But others consider that to be only a portion of the problem. “As we think about zero trust, I believe that concept needs to be expanded pretty significantly. It began with networking constructs and identity constructs, and now, you’re looking at software supply chain security,” said Varun Badhwar, founder of the cybersecurity company Endor Labs.

Defense connectivity is expected to soar around the world. For 5G alone, the market is predicted to grow roughly ninefold, from $71 million in 2021 to $647 million in 2025, according to estimates by Research and Markets. High-speed wireless is only one part of this.

Security is not only at the network’s entrance or in the software that runs on servers. An overlooked aspect is the infrastructure where data travels. And it is not the obvious physical part of it, as hardware is controlled by software.

“5G core has now moved to a service-based architecture, so you’ve got many different functions that need to interconnect and that needs to be secure. One of the key tenets of zero trust is that you assume there are threats internally, and if you make that assumption, then you need to think about, ‘hey, where I’ve deployed my 5G core, I’ve gotta make sure all those interfaces are secured,’” said Steve Vogelsang, chief technology officer at Nokia Federal.

“The 5G core essentially runs in a data center, so you can think of a bunch of servers with a virtualization or containerization layer, and then the 5G core functions run on top of that,” explained Vogelsang. “So it’s very much like a large-scale enterprise application: the architectures have converged.”

Every step of the process, including infrastructure, must be built with security and resilience in mind. Yet building from scratch is not an option for many critical applications used by the federal government: much of that software was built in the distant past. These legacy systems pose some of the biggest challenges most networks face.

Among the first hurdles a security modernization encounters is the cyber-mosaic of old, new and almost obsolete. “One of the top challenges for military and federal governments is the vast number of legacy applications that they have. It’s an order of magnitude greater than probably anything in the private sector,” Chiodi said.

Some applications cannot be secured with today’s technology. If they are nonetheless necessary to the organization’s functioning, an evaluation process should begin. “First, do you have an idea which are the most significant, important apps and information, and chunks of data?” Chiodi asked.

Once critical attributes are defined, another process begins to estimate update costs and lead times. Still, this will be an additional difficulty. “It will take years to modernize some of these older apps and some of them just will never be modernized,” Chiodi said.

“But for the ones that, let’s say, they’ve got years to go, they’re not going to be sunset, there are emerging solutions that offer a way to connect these legacy environments,” Chiodi explained.

Applying these solutions is imperative, and in January 2022, the White House issued a memorandum, “Moving the U.S. Government Toward Zero Trust Cybersecurity Principles,” requiring all federal agencies to fully implement zero trust by the end of fiscal year 2024.

In this memorandum, the government addressed users and access, with special focus on multifactor authentication and related security protocols.
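The time-based one-time passwords behind much multifactor authentication follow a published standard, RFC 6238. The sketch below is purely illustrative, built on the Python standard library; it is not the memorandum's prescribed mechanism or any agency's implementation.

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int(at if at is not None else time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server and an authenticator app that share the same secret compute the same short-lived code independently, which is why a stolen password alone is not enough to log in.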

Badhwar criticized this document in a recent paper as insufficient. “Open-source software is a huge boost to development velocity, but over the past few years we are seeing productivity stifled by security risk... The concept of Zero Trust is one of the most significant cybersecurity trends of the last decade. We’ve seen it applied to access management, endpoint security and of course network security. But for some reason, when it comes to open-source software, we take an almost opposite approach.”

The Cybersecurity and Infrastructure Security Agency (CISA), along with other government departments, tracks possible open-source vulnerabilities. Still, these agencies tend to arrive late in the game, after an incident has already happened. A bipartisan bill currently in Congress would place responsibility for open-source security with CISA and define it as a national security issue.

But the open-source model’s main strength, its openness, is also its main vulnerability.

On its website, Red Hat, a software company specializing in open-source software, defines the field as follows: “Open-source software is developed in a decentralized and collaborative way, relying on peer review and community production. Open-source software is often cheaper, more flexible, and has more longevity than its proprietary peers because it is developed by communities rather than a single author or company.”

Given these collaborative development procedures, little stands in the way of a malicious actor inserting code that could pose a risk to users, data and infrastructure. And most code used today comes from these sources, because reuse saves production time and increases developers’ productivity.

Many systems in place now were created in a time when computers only shared files through physical means. Credit: iamnoonmai/Shutterstock

In response to this, Google proposed a set of procedures under its Supply-chain Levels for Software Artifacts (SLSA) initiative. This approach combines human and computer resources to approve an application.

One of the keys to ensuring code quality is traceability. Much of this is difficult because the repositories where code is stored are, by nature, open to the whole community. For these communities, security checks amount to a tax on contributors who, in most cases, are donating their work.

“Part of programming is calling external sets of instructions not directly written on the original script. Therefore, to trace the relationships in the code, these must be traced one by one; but these could run into the hundreds in certain code blocks, and this makes it impossible for a human,” Badhwar explained in an interview with SIGNAL Media.

Badhwar described how his company operates a series of algorithms to highlight potential dangers. “We’re scanning code, and we’re building very detailed dependency graphs to understand every component that they’re using,” he said.
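A dependency graph of this kind can be illustrated in miniature. The sketch below is a toy depth-first walk over hypothetical package manifests, not Endor Labs’ implementation; every package name is invented.

```python
# Hypothetical manifest data: package -> list of direct dependencies.
MANIFESTS = {
    "webapp":     ["http-lib", "json-lib"],
    "http-lib":   ["tls-lib", "log-lib"],
    "json-lib":   ["log-lib"],
    "tls-lib":    [],
    "log-lib":    ["format-lib"],
    "format-lib": [],
}

def transitive_deps(pkg, manifests, seen=None):
    """Depth-first walk collecting every component pkg pulls in, directly or not."""
    seen = set() if seen is None else seen
    for dep in manifests.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, manifests, seen)
    return seen
```

Even this six-package example shows the point Badhwar makes: declaring two direct dependencies quietly pulls in five components, and in real projects the count runs into the hundreds.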

But pure code is not the only area where potential “red lights” could appear. Because developers work in communities, on sites like GitHub that share some parallels with Facebook and other social networks, there are also behavioral signals that suggest whether a contributor is suspicious.

Badhwar’s technology also looks at behavioral aspects of a contributor, such as the country the person has logged in from, how old the account is and how active it is. By analogy, anyone approached on Facebook by a person whose account is new, with little activity and few friends, would treat that contact with suspicion.

Building a map of code relationships and analyzing a contributor’s behavior are two potential inputs among several attributes that could give a future user of that work an idea of risk.

Ironically, one of the red flags is a lack of a zero-trust posture from a developer with a high reputation on a code repository such as GitHub.

“One of the most common attack vectors [has] been account takeovers. The reason accounts get taken over from [software] maintainers is because they don’t have two-factor authentication turned on in their GitHub account. That’s a very common scenario, so if we find somebody who doesn’t have that, that’s a problem we flag,” Badhwar explained.

If someone contributing code does not comply with minimal security standards, a malicious actor could use those credentials to insert dangerous content impersonating him or her.
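The heuristics described above (account age, activity level, login geography, missing two-factor authentication) can be sketched as a simple additive score. The thresholds and weights below are invented for illustration; they are not Endor Labs’ model.

```python
def contributor_risk(account):
    """Toy additive risk score over contributor metadata; weights are illustrative."""
    score = 0
    if account["age_days"] < 90:      # brand-new account
        score += 2
    if account["commits"] < 5:        # little track record
        score += 2
    if not account["two_factor"]:     # takeover-prone credentials
        score += 3
    if account["country_changed"]:    # login geography recently shifted
        score += 1
    return score
```

A new, inactive account without two-factor authentication scores high and gets flagged for review, while an established, well-secured maintainer scores zero.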

Improving security has ramifications that lead to vigilance in every aspect of online activity.

“Zero trust is a journey; it’s not an overnight light switch,” Vogelsang said.

Securing a network involves a series of steps that, most in the industry say, will continue over time. Multifactor authentication is among the first, but as vulnerabilities continue to be discovered, the network must evolve to face sustained challenges and creativity from hostile actors and adversarial states.

This journey becomes lengthier each day; still, there is no choice but to soldier on, experts indicate.

Promoting Security Through a U.S. Government Memorandum: Moving the U.S. Government Toward Zero Trust Cybersecurity Principles

On January 26, 2022, the president issued a document ordering all federal agencies, including contractors, to bolster their security.

This document laid out five immediate goals, which are:

  1. Identity: With mandatory multifactor authentication for access, credential theft for malicious use should be stamped out and phishing attacks made less likely to succeed.
  2. Devices: All the equipment used to log into government networks should be in an inventory, and if a breach occurs, should be identified and isolated.
  3. Networks: DNS requests and HTTP traffic should be encrypted, and perimeters should be established externally and internally.
  4. Applications and Workloads: Agencies should treat all applications as internet-connected.
  5. Data: Information should be protected and categorized, especially when hosted in the cloud.
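The device-inventory goal above amounts to a set-membership check: anything authenticating to the network that is absent from the authorized inventory gets flagged for isolation. The sketch below is a minimal illustration with invented device names, not a prescribed federal mechanism.

```python
# Hypothetical records: devices the agency has inventoried as authorized.
INVENTORY = {"laptop-001", "laptop-002", "server-010"}

def unknown_devices(observed):
    """Return, sorted, any observed device absent from the authorized inventory."""
    return sorted(set(observed) - INVENTORY)
```

In practice the inventory would live in an asset-management system and the observed set would come from authentication logs, but the breach-response logic is this comparison.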

There is only one mention in the document of “open source,” which refers to a General Services Administration and Cybersecurity and Infrastructure Security Agency collaboration for a website scanning service. The document does not present any protocols or references for open-source code adoption, despite open-source code making up most of the programming that goes into applications.

Government resources on open-source software can be found at the following link: https://digital.gov/topics/open-source/

 
