
Creating a Common Language of Cybersecurity

Standardized terms can produce a comprehensive threat picture.

The Office of the Director of National Intelligence is developing a set of common definitions to unify descriptions of cyberthreats used by different elements of the intelligence community. The effort seeks to bridge differences among various segments of the community when it comes to assessing these threats and reporting them to government organizations and industry. A common vernacular will help generate a common threat picture that can serve government and industry alike, experts agree.

The office’s Cyber Threat Framework serves as the basis for the effort, as it establishes a model for consistent characterization and categorization of cyberthreats. The model has been refined over the past five years, and now the office is championing its use while continuing to hone and simplify it.

The office endeavored to ensure that the model could be populated with objective data, ideally drawn from sensors where it can be directly measured, explains Jim Richberg, national intelligence manager (NIM) for cyber at the Office of the Director of National Intelligence (ODNI).

But this framework is just one step in assembling a unified intelligence strategy for dealing with cyberthreats. Other efforts by the ODNI focus on performance metrics and additional actions. Government and industry also must establish a means of sharing cyber intelligence to ensure that proper measures are taken against threats before and after they strike, Richberg states.

“Cyber is still in its infancy compared to anything else we are dealing with both as an intelligence community and as a society,” he says. “We are dealing with something where we all keep score differently, and we’re still sorting out the roles and responsibilities of government and the private sector.”

Sorting out those roles is the NIM’s responsibility, and Richberg is facing a complicated task. Foremost among the issues is how members of the intelligence community keep different cyber scorebooks. “It’s OK that we all have different missions and different customers, but we take what is by its nature a hard problem and make it artificially harder when we all decide to speak a separate language,” he declares. “I realized in 2012 that no two agencies in my intelligence community of 17 organizations were reporting on cyber the same way.”

Each had its own frame of reference, which meant that reports from Agency A, Agency B and Agency C used different languages, he notes. Many times, it was unclear when the agencies were describing the same threat activity or something entirely different.

“We said, ‘This is crazy,’” he relates. “‘Nobody is doing it wrong, but we need to create the equivalent of Esperanto,’” the 19th-century artificial language, as a way of having each organization use a common approach to reporting. “It’s easier to map 17 models to one than to try to keep a rolling translation matrix of 17 to 17,” he states.
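The arithmetic behind that remark is straightforward: a single shared model needs one mapping per agency, while pairwise translation needs every model mapped to every other. The sketch below only illustrates that scaling difference; the function names are placeholders, and the count of 17 comes from Richberg’s own figure.

```python
# Illustrative arithmetic only: why one shared model ("Esperanto") scales
# better than maintaining pairwise translations among N agency models.

def pairwise_mappings(n: int) -> int:
    """Directed translations needed if every model must map to every other."""
    return n * (n - 1)

def common_model_mappings(n: int) -> int:
    """Mappings needed if every model maps once to a single shared framework."""
    return n

n_agencies = 17  # the 17 organizations Richberg mentions
print(pairwise_mappings(n_agencies))      # 272 mappings to keep current
print(common_model_mappings(n_agencies))  # 17 mappings to keep current
```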

Equally important is the capability to define what is being discussed. Some terms common to multiple organizations may have different meanings in the context of each organization. Sometimes these differences arise among billets within a single organization. “If you don’t actually define what you’re talking about, it’s easy—especially in a crisis situation like incident response—to end up talking right past each other,” Richberg points out.

The private sector is encountering a version of the same problem, he continues. Some firms are building considerable expertise in cyberthreat intelligence, but Richberg says it still is a boutique practice in which companies keep score using their own methods. It is up to the private-sector organizations consuming that data to make it interoperable and sensible, he adds.

“When you get together analysts or investigators with threat data, they’ll normally spend about 90 percent of their time normalizing the data,” Richberg states. “That consumes virtually the totality of any given opportunity they have to work together.”

He points out that some parts of the intelligence community break down malicious activity’s “cycle of evil” into four steps, while other models use as many as 14 steps. Abstracting that data to a common level allows those dissimilar terms to be compared within a shared mental framework, the basis for thought and action, and a shared ontology.
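As a rough illustration of how differently grained models could be reconciled, the sketch below maps two hypothetical lifecycle models, one coarse and one fine, onto a small set of shared stages. The stage and step names are placeholders invented for this example, not the actual lexicon of the Cyber Threat Framework or of any agency model.

```python
# A minimal sketch of normalizing differently grained lifecycle models
# onto one common set of stages. All names here are illustrative placeholders.
COMMON_STAGES = ["preparation", "engagement", "presence", "effect"]

# Hypothetical mapping from a coarse, four-step agency model.
AGENCY_A = {
    "plan": "preparation",
    "intrude": "engagement",
    "persist": "presence",
    "act": "effect",
}

# Hypothetical mapping from a finer-grained model with many more steps.
AGENCY_B = {
    "recon": "preparation",
    "weaponize": "preparation",
    "deliver": "engagement",
    "exploit": "engagement",
    "install": "presence",
    "escalate": "presence",
    "move_laterally": "presence",
    "exfiltrate": "effect",
    # ...remaining steps would map the same way
}

# Every agency-specific step must land in the shared vocabulary.
assert set(AGENCY_A.values()) <= set(COMMON_STAGES)
assert set(AGENCY_B.values()) <= set(COMMON_STAGES)

def normalize(report_step: str, mapping: dict[str, str]) -> str:
    """Translate an agency-specific step into the shared stage vocabulary."""
    return mapping[report_step]

# Two reports that looked incomparable now land in the same bucket.
assert normalize("intrude", AGENCY_A) == normalize("exploit", AGENCY_B)
```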

Richberg heavily favors industry’s endorsement of a common approach to cyberthreat intelligence. That does not necessarily mean that everyone must use the same model, but if everyone could explicitly recognize each other’s work, that would help reverse the analysts’ 90-10 ratio characterizing today’s interactions. “That truly is something where we are still handicapped by large parts of the private sector wanting to keep score in their own boutique fashion,” he states. “It’s actually OK to do that for proprietary reasons, but [companies should] be able to readily export their data to other people in a format that already makes sense to them.”

He compares this to the hope that Esperanto would help unify Europe. If the intelligence community could just establish a form of Esperanto-type metadata, then a common operating picture could be created without requiring extensive translation.

This new approach would allow tracking “from the very granular,” Richberg says, all the way up to the global-view information consumed by senior policy makers and decision makers. That material would need to make sense to upper-level consumers without losing fidelity to the original data, he adds.

He describes how the effect of actionable cyberthreat intelligence differs among different levels of customers. If, for example, the government were to tell companies’ chief information security officers (CISOs) that a certain country was trying to penetrate their networks to steal data, they would accept that information without hesitation or emergency action, and perhaps would already be aware of the threat. However, if the same information about a persistent threat were presented to a CEO or a chief operating officer (COO), then the leader would want to take immediate action in a number of ways.

The information given to each customer level would need to be different as well. The CISOs would benefit from actionable digital signatures, but giving the same information to the executives would be a waste, Richberg says. Meanwhile, those who are neither CISOs nor C-suite executives would benefit from information about the cyberthreat group’s methods of operation and suggested risk-reduction measures. These messages would constitute actionable threat intelligence for this group of people, Richberg notes.

“We have to break apart the idea that cyberthreat intelligence is a monolithic good,” he claims. “You have different messages that truly resonate and are useful to different functional levels and parts of an organization.”
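As a rough sketch of what that non-monolithic view might look like in practice, the snippet below routes a single hypothetical threat report into different products for the three audiences described above: indicators for CISOs, tradecraft and mitigations for operational staff, and business impact for executives. The report structure, field names and sample values are assumptions made purely for illustration.

```python
# A minimal sketch of tailoring one threat report to different audiences.
# The ThreatReport fields and audience labels are hypothetical.
from dataclasses import dataclass

@dataclass
class ThreatReport:
    actor: str
    indicators: list[str]    # e.g., hashes, domains: actionable for a CISO
    tradecraft: list[str]    # how the group operates
    mitigations: list[str]   # suggested risk-reduction measures
    business_impact: str     # what is at stake, in plain language

def product_for(audience: str, r: ThreatReport) -> dict:
    """Return only the slice of the report that is actionable for this audience."""
    if audience == "ciso":
        return {"actor": r.actor, "indicators": r.indicators}
    if audience == "operations":
        return {"actor": r.actor, "tradecraft": r.tradecraft,
                "mitigations": r.mitigations}
    if audience == "executive":
        return {"actor": r.actor, "impact": r.business_impact}
    raise ValueError(f"unknown audience: {audience}")

report = ThreatReport(
    actor="hypothetical_actor",
    indicators=["bad-domain.example", "example-hash-0123"],
    tradecraft=["spearphishing", "credential theft"],
    mitigations=["enforce MFA", "patch exposed services"],
    business_impact="Theft of design data could erode competitive position.",
)
print(product_for("executive", report))
```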

Even with message and language commonality, metrics also occupy an important place in threat assessment, Richberg says. Measuring cyber activity is important for grasping its implications. “If you can’t measure cause and effect on cybersecurity, then it’s somewhere between faith-based activity and voodoo science,” he declares. “If you can’t measure it, then you can’t tell when you’re getting better or direct how you’re getting better.” This includes calculating return on investment in cybersecurity, he adds.
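One common, textbook-style way to put a number on that return on investment, offered here only as an illustration and not as a metric attributed to ODNI, is to compare annualized loss expectancy before and after a control against the control’s cost.

```python
# A textbook-style return-on-security-investment (ROSI) sketch with
# hypothetical numbers; not a metric attributed to ODNI or Richberg.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = expected loss per incident x expected incidents per year."""
    return single_loss * annual_rate

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """(risk reduction - cost of control) / cost of control."""
    return (ale_before - ale_after - control_cost) / control_cost

# Hypothetical: a $120,000 control cuts expected incidents from 2.0 to 0.5 per year.
ale_before = annualized_loss_expectancy(200_000, 2.0)  # $400,000
ale_after = annualized_loss_expectancy(200_000, 0.5)   # $100,000
print(f"ROSI: {rosi(ale_before, ale_after, 120_000):.2f}")  # 1.50
```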

These measures are necessary in the operational arena as well. Richberg explains that the intelligence community focuses on malicious activity by adversaries. This entails observing foreign powers of interest to the community as they engage in such activity so that analysts can describe it and, where possible, the capabilities being wielded. In a few cases, experts can report on intent or plans, he adds.

Yet if the intelligence community reports only that a country has a certain capability and is trying to use it, the customer would not consider this actionable information. If that knowledge were paired with the identity of an intended victim, and the victim’s degree of vulnerability were severe, then the information would raise the proper alarm, Richberg says.

The problem with this approach is that the information about the threat, the red side, is held by the government, which sits on the blue side, Richberg says. But the consequence belongs to the victim, so that red and blue information must be fused into purple knowledge for the victim, which often is in the private sector.
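A minimal sketch of that fusion, with hypothetical fields and an arbitrary scoring rule, might combine the government-held view of an actor’s capability and targeting with the victim-held view of vulnerability and consequence:

```python
# A minimal sketch of fusing government-held threat ("red") data with
# victim-held exposure ("blue") data into a combined ("purple") picture.
# The fields and the scoring rule are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class RedData:                 # what the government side can see
    actor: str
    capability: int            # 0-5
    targeting_observed: bool

@dataclass
class BlueData:                # what the victim organization knows
    vulnerability: int         # 0-5, exposure to this kind of capability
    consequence: int           # 0-5, business impact if the attack succeeds

def fuse(red: RedData, blue: BlueData) -> dict:
    """Combine threat and exposure into a single, roughly prioritized picture."""
    score = red.capability * blue.vulnerability * blue.consequence
    if red.targeting_observed:
        score *= 2
    return {"actor": red.actor, "priority": score}

print(fuse(RedData("hypothetical_actor", 4, True), BlueData(3, 5)))  # priority 120
```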

The intelligence community must present threat information in a context that describes its nature, its capability and the measures that can counter it, he concludes. “This is why cyber, more so than just about anything else we do, is inherently a team sport activity between government and the private sector,” he declares. “We [in government] don’t own all that data, the private sector probably doesn’t see all the threat data, and we need to find ways of being able to put this composite picture together that makes it truly actionable.”