Incoming: The Lessons of Y2K for Cybersecurity

June 1, 2017
By Maj. Gen. Earl D. Matthews, USAF (Ret.)

This article is the second and final part of a two-part series on what Y2K can teach the world about cybersecurity. Read the first part here.

The Y2K event went out with a whimper and not a bang, but not because the issue wasn’t serious. The potential for massive data disruption was there, but government and industry rallied to address it before the January 1, 2000, deadline. The millennium bug was squashed because stakeholders with a lot to lose attacked it in a coordinated effort. That approach can serve as both a lesson and a model for the latest security challenge: the cyber bug.

Today, the dynamic nature of cyberspace is a result of rapid advancements in computer and communication technologies as well as the tight coupling of the cyber domain with physical operations. Military organizations have embedded cyberspace assets—their information technology—into mission processes to increase operational efficiency, improve decision-making quality and shorten the sensor-to-shooter cycle. But this cyberspace asset-to-mission dependency can put a mission at risk when a cyber incident occurs, such as the loss or manipulation of a critical information resource. 

Nonmilitary organizations typically address this type of cybersecurity risk through an introspective, enterprisewide program that continuously identifies, prioritizes and documents risks. This allows for selection of an economical set of control measures—people, processes and technology—to mitigate risks to an acceptable level. The explicit valuation of information and cyber resources, in terms of their ability to support the organizational mission, enables the creation of a continuity of operations plan and an incident recovery plan. 

But above all, cyber response demands the same sense of urgency as Y2K. In addition, information technology/operational technology (IT/OT) risk must be aligned with real-world risk. I have not seen the same rigor about IT/OT risk since Y2K. Unfortunately, what followed Y2K was a steep decline in information technology spending and a reversal to weaker governance of IT/OT portfolios. This increased risk by allowing technology to lapse naturally after a big investment and by permitting the regrowth of shadow information technology. According to a MarketsandMarkets report, global spending on cybersecurity will reach nearly $170 billion by 2020, and that figure does not include all other information technology spending, which analysts estimate is approaching $4 trillion. The money is there, so why not spend it through a structured framework to address the cyber bug today?

The millennium bug was considered a once-in-a-lifetime opportunity to clean up and standardize information technology. Now we need to do it again. We did not learn our lesson after the turn of the century as we relegated information technology back to a supporting role. Operational technology continues in its own lane instead of being incorporated into the overall business and mission risk equation. But for business, government and nearly every person, technology is part of the fabric of everyday life. The Internet of Things (IoT) promises to advance that principle even further.

The approach for solving the millennium bug challenge should serve as a framework for stopping the cyber bug. The need for a solution is becoming even more urgent with the explosion of IoT devices. We have succeeded with this approach before, and we can do so again. But this time, we must be sure to learn our lesson.

Maj. Gen. Earl D. Matthews, USAF (Ret.), the former director of cyberspace operations in the Air Force’s Office of Information Dominance and Chief Information Officer, is vice president of the Enterprise Security Solutions Group for DXC Technology (formerly known as Hewlett Packard Enterprise Services), U.S. Public Sector. The views expressed here are his own.



Share Your Thoughts:

I was a member of the JCS Y2K team. We set up a contingency facility at Site R, in cooperation with the Continuity of Operations Planning (COOP) office. We approached the SecDef's office and asked if they would like to test the COOP, since we had so much time and money invested. Their negative response, in that pre-9/11 era, represented what I feel was a lost opportunity.
Let's not lose this opportunity to test and improve our cyber bug defense, as Maj. Gen. Matthews pointed out.

Gen. Matthews is spot on in using the Y2K example as a model for systemic commitment and for marshaling resources to stop the cyber bug as he describes. However, I think the analogy of a single cyber flag event that solves today's known problems once and for all (as Y2K did) is seriously flawed.

Cyber bugs and major vulnerabilities each present a Y2K problem several times every year. We search, patch, fix, and repeat the drill continuously. Some vulnerabilities have dire consequences and wide prevalence (such as the recent SMB vulnerability and WannaCry), and others fester under the surface not getting much attention until they are exploited. Some are simply unknown and small until one day discovered on a critical system. To take an approach that we are going to "fix all this once and for all" by upgrading to the latest HW/SW/versions like we did for Y2K is too simple an approach.

What we do need to do first, with Y2K urgency, is change how we design and build computer systems and networks, and pay much more attention to deploying secure systems. Then we must proceed to get out of the discover-exploit-patch cycle once and for all by doing away with pervasive non-secure systems and protocols (even, for example, the entire TCP/IP stack) and replacing them with secure technology. Any non-secure legacy technology should simply be dropped from standards. None of this is trivial, and it will take a commitment more like a moon landing (and will probably take as many years).

If we need a date, then let's use no later than Y2K38 when the Unix clock rolls over.
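(The "Y2K38" deadline the commenter refers to is the Year 2038 problem: systems that store Unix time as a signed 32-bit integer run out of range at 2^31 − 1 seconds after the epoch. A minimal Python sketch of that rollover moment:)

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the Unix epoch
# (1970-01-01 00:00:00 UTC) and cannot represent values past 2**31 - 1.
MAX_32BIT_SECONDS = 2**31 - 1  # 2,147,483,647

rollover = datetime.fromtimestamp(MAX_32BIT_SECONDS, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later, a signed 32-bit counter wraps to a negative value, which naive code interprets as a date in 1901.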
