Humans May Be the Biggest Hurdle to Zero Trust

Automation is improving, but it’s not there yet.

The human factor looms as the most imposing challenge to implementing zero-trust security, say experts. Aspects of this factor range from cultural acceptance to training, and sub-elements such as organizations and technologies also will play a role. Ultimately, change will have to come from the top of an organization to be truly effective.

All security measures depend to a large degree on human cooperation, but that is only part of the picture for zero trust. Its implementation will entail a massive change in security procedures both for users and for network architects. And, the ability to share information across organizational boundaries will be strongly affected at all government levels.

Government will face a host of challenges implementing zero trust across its ranges of users. Many agencies have a federated security model featuring different products, approaches and architectures, notes John Dvorak, emerging technology specialist at Red Hat. “This is a long-term project, not something that can be done overnight. And, it requires broad acceptance across an organization.” He adds that chief information security officers (CISOs), chief information officers (CIOs), contracting officers and procurement officers all must be on the same page to implement zero trust successfully.

Classifying, categorizing and tagging data are other problems that must be solved. Agencies and companies are challenged by how to categorize data and how to permit access to it. Planners must reconsider how to think in terms of localized security, especially in terms of building future applications—which increasingly will require built-in security. Above all, zero trust requires a highly trustworthy authentication service, Dvorak states.
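By way of illustration only, the minimal Python sketch below shows one way a tag-based access decision might combine data classification labels with a verified caller identity, as Dvorak's comments suggest. The tag names, entitlements and stubbed authentication flag are hypothetical and are not drawn from any agency's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:
    name: str
    tags: set = field(default_factory=set)        # hypothetical labels, e.g. {"pii", "cui"}

@dataclass
class Caller:
    subject: str
    authenticated: bool                           # result of a (stubbed) strong-authentication check
    entitlements: set = field(default_factory=set)

def is_access_allowed(caller: Caller, obj: DataObject) -> bool:
    """Deny by default: grant access only when the caller is authenticated
    and holds an entitlement for every tag on the data object."""
    if not caller.authenticated:
        return False
    return obj.tags.issubset(caller.entitlements)

# Example: a caller cleared for "cui" but not "pii" is denied a mixed-tag record.
record = DataObject("personnel_record", tags={"pii", "cui"})
analyst = Caller("analyst@example.gov", authenticated=True, entitlements={"cui"})
print(is_access_allowed(analyst, record))   # False
```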

James Stanger, chief technology evangelist at CompTIA, emphasizes that visibility will be a major issue with all the people who must interoperate. Dan Schulman, chief technology officer at Mission: Cyber, adds that the cultural differences in each agency will become more apparent with zero-trust implementation. “One of the largest challenges will not be technology; it will be culture in getting people to change and having your user base accept that change,” he declares. Everyday security routines will be discarded in favor of zero-trust methods, and that will require behavioral acceptance. “We’re going to ask these users to completely change their Monday morning process for, in their eyes, no real benefit.”

“You are what you measure, and how do you measure cultural change?” asks Glenn Hernandez, senior consultant at OpEdge Solutions and chair of the Zero Trust Strategies Subcommittee of the AFCEA Cyber Committee. Dvorak, Stanger and Schulman also are members of this subcommittee. The cultural change measurement that Hernandez cites will be a struggle because investment must follow vision. “You can talk about holistically, but it gets down to how you’re going to implement and where’s that investment going to come from and how is it going to be measured,” he offers.

Schulman notes that many smaller government agencies and departments look up to the Department of Homeland Security and the Defense Department as leaders for how to implement zero trust, but “they’re not there yet.” He adds that government needs a workforce that can address some of the zero-trust issues, whether the people are direct hires or contract workers. “We need people to be able to build the rule sets that allow for better decisions for when it comes to whether one system should have access to another system.”
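To make the idea of such rule sets concrete, the following minimal sketch shows a default-deny check of whether one system may reach another. The service names, ports and rule structure are hypothetical, offered only as an illustration of the kind of explicit, reviewable policy Schulman describes.

```python
# Hypothetical system-to-system access rule set: each rule allows a named
# source service to reach a destination service on a specific port.
# Anything not explicitly allowed is denied.
RULES = [
    {"source": "billing-api", "destination": "ledger-db", "port": 5432},
    {"source": "reporting-job", "destination": "ledger-db", "port": 5432},
]

def may_connect(source: str, destination: str, port: int) -> bool:
    """Default-deny check: return True only if an explicit rule matches."""
    return any(
        r["source"] == source and r["destination"] == destination and r["port"] == port
        for r in RULES
    )

print(may_connect("billing-api", "ledger-db", 5432))   # True
print(may_connect("billing-api", "hr-db", 5432))       # False: no rule, so denied
```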

Hernandez points out that some level of automation will be necessary for zero-trust security. The focus is on avoiding having an algorithm alone drive security decisions, which could lead to missteps such as those seen with personal social interaction rankings.

Stanger allows that experts must fine-tune security monitoring, possibly from a red-team/blue-team perspective, based on what the agency is doing. This could enable automation to respond to up to 80 percent of security issues. “If you can get that dynamic going of people simulating what issues can be and then listening very carefully and fine-tuning their systems, then you have a solution,” he states. “That’s where automation and orchestration have to come in.”

Industry currently uses soft interventions on its social media platforms, with automation providing some of those actions, Hernandez says. Hard interventions, by contrast, would remain manual, depending on the action undertaken. The question is whether the algorithms for future engines will be mature enough to identify those soft and hard interventions.

Stanger believes that the algorithms could do a good job of gathering the data. It will be up to the data analyst to remove biases and fine-tune the results. Machine learning is improving and could be proficient if properly primed, but the work will remain largely a manual process, he suggests.

Dvorak notes that many network management personnel have grown up in a castle-and-moat environment. Whether building applications or working as network engineers securing networks, these people tend to think in terms of perimeter firewalls. “[Instead], we’re asking them to go a step further to think about micro-segmentation and putting those perimeters throughout the network and as close to applications and as close to data as possible,” he states. That constitutes a re-training of people who view themselves as being at the top of the security pyramid, including for building future applications.
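The micro-segmentation shift Dvorak describes means enforcement moves from a single perimeter to every application and request. The short Python sketch below illustrates that idea with a per-request identity check wrapped around an application handler; the token value and verification stub are placeholders standing in for a real authentication service, not any specific product.

```python
# Minimal sketch of enforcement pushed next to the application rather than at
# a single perimeter firewall: every handler verifies the caller on each call.
from functools import wraps

def verify_token(token: str) -> bool:
    # Placeholder: a real deployment would validate a signed credential
    # against an authentication service on every request.
    return token == "valid-demo-token"

def zero_trust_required(handler):
    @wraps(handler)
    def wrapper(token: str, *args, **kwargs):
        if not verify_token(token):
            raise PermissionError("request denied: caller not verified")
        return handler(token, *args, **kwargs)
    return wrapper

@zero_trust_required
def read_record(token: str, record_id: str) -> str:
    return f"record {record_id}"

print(read_record("valid-demo-token", "42"))   # allowed; an invalid token raises PermissionError
```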

Over the next 15 years, new cloud-native applications will be developed alongside systems as old as 30 years, Dvorak explains. These new applications will be able to embrace zero trust immediately, but in some cases, they will be communicating with old legacy systems that are not fast enough and will be difficult to modernize. “We’re going to have to have these architects who can think in terms of not just the new but the old at the same time and figure out the best way to secure those older assets until they eventually can go away,” he posits.

This cultural knowledge must start at the top, Stanger offers. The “C” suite of corporate leadership must be aware of zero-trust designs and implementations, and these leaders must ensure that the security architects adhere to zero-trust models. This includes control and auditing, Stanger points out.

He believes that it likely will be too much of a challenge for organizations that are not interested in evaluating their existing approaches to information technology and security. “Most organizations need what I call a ‘zero-trust intervention’ as they move forward,” he declares. “An organization needs to have the appetite to plan and strategize its way into a zero-trust architecture, not simply apply a few tactics to address a ‘lack of zero trust.’

“Once they conduct an accurate evaluation and address their overall security culture, then organizations can identify the skill sets and technologies necessary,” Stanger concludes.

“Zero trust is not a solution for every use case and would be challenging to implement everywhere,” Schulman warrants. “In fact, it may be impossible to do so in many situations.

“While zero-trust initiatives generally focus on user traffic, they are not necessarily aligned with system-to-system traffic,” he continues. “This is especially true for legacy applications.”

Dvorak suggests that the operational technology realm offers some of the biggest challenges. Many of its systems and devices do not have an operating system on them that allows granular access controls. Zero trust does not offer always-on system-level administration accounts; it only allows elevation to those accounts when a need for access appears. With a firmware device that does not have an operating system that natively supports zero trust, remedial approaches must be applied.

Some of these approaches include network micro-segmentation, proxy policy enforcement points, one-way diodes and other ways to isolate devices that cannot accommodate zero trust programmatically, Dvorak continues. Even old operating systems that might be running critical infrastructure can still be protected using a layered approach, although they might not have the account level of security sought through zero trust.
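One of those remedial approaches, a proxy policy enforcement point in front of a device that cannot enforce zero trust itself, is sketched below in Python. The device commands, the verified-caller flag and the stubbed device call are hypothetical, meant only to show how a broker can impose default-deny, read-only access on behalf of a legacy asset.

```python
# Minimal sketch of a proxy policy enforcement point fronting a legacy device
# that cannot enforce zero trust on its own (hypothetical commands and device).
ALLOWED_COMMANDS = {"read_status", "read_temperature"}   # read-only; no writes by default

def send_to_device(command: str) -> str:
    # Stand-in for the real serial or industrial-protocol call to the device.
    return f"device response to {command}"

def proxy_request(caller_verified: bool, command: str) -> str:
    """Only verified callers may reach the device, and only with an
    explicitly allowed, read-only command."""
    if not caller_verified:
        raise PermissionError("caller not verified")
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"command '{command}' not permitted through proxy")
    return send_to_device(command)

print(proxy_request(True, "read_status"))       # allowed
# proxy_request(True, "set_temperature")        # would raise PermissionError
```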

Another issue is when agencies must share data. Schulman notes that current interconnection agreements may have to change with the advent of zero trust. Those changes may prove cumbersome in the years to come, he says.

Hernandez takes a broad view of the challenge. “If you look at it in terms of utilities—if you don’t have your water, if you don’t have your electricity, if you don’t have your gas, that impacts the way an organization operates, because now you don’t have a way to live,” he explains. “If you think of that in terms of your partner organizations that are providing you data, organizations that live off data will be disrupted. If you can’t identify the crucialness or criticality of those supply paths through data operational systems … that in turn has dependencies on your mission.”

Prioritization will be necessary to determine the most critical aspect of legacy systems transitioning to zero trust, he continues. Hernandez describes how an HVAC system, for example, that is not protected sufficiently could be hacked and set to a high temperature, thus cooking computers and servers that already are protected digitally. In zero trust, someone must look at the physical access controls for the HVAC system, he points out.

Schulman offers that difficulties will reign for the foreseeable future. “We’re going to go through a period now, probably the next five or 10 years, where there is going to be some pain,” he says. Yet, he is confident that more zero-trust principles will align system-to-system communications in as little as three to five years.

Stanger says we’ll know when we’ve achieved broad zero trust when “we won’t use the term anymore. It’ll just be part of what we do.”