Future Network Defense Needs Self-Healing AI Systems
Data accuracy and integrity will be crucial to effective cybersecurity.
Artificial intelligence can analyze vast amounts of information, identifying patterns and anomalies at a speed and scale beyond human capacity. To make it an invaluable part of defense, the goal is to create cybersecurity systems that can anticipate national security threats. Once systems can automatically reconfigure themselves and their security controls to prevent potential breaches, the next step will be machines with the power to make their own decisions.
With the escalation in automated cyber attacks, autonomous self-healing systems may be the only viable defense. However, artificial intelligence (AI) systems are entirely dependent on the quality of the data they receive. For machines to be entrusted with key decisions that affect critical national infrastructure security, organizations will need to be able to trust and verify the accuracy of the data guiding those decisions.
This is one of the challenges that the U.S. military’s new $29.8 million Unified Platform—the future engine room of AI-driven cyber defense—could face. It will offer a central, 24-hour, real-time window into the military services’ international cybersecurity risk profile and warfighting operations, enabling U.S. Cyber Command to switch seamlessly between defensive and offensive capabilities. It may even defend forward, going behind enemy lines in cyberspace. By providing command and control, decision support capabilities, operational-to-tactical cyber mission planning, large data ingestion and enhanced analytics, the Unified Platform will significantly advance the command’s ability to function with the speed, agility and precision required to successfully counter threats.
Today, AI mostly augments human decisions through high-speed analysis of data. Examples of AI already in action include image-recognition algorithms that aid autonomous navigation and identify enemy targets in real time, as well as audio analysis systems that can detect gunshots. AI also has helped develop self-driving military vehicles and provided virtual border security kiosks with lie-detecting capabilities.
However, in all cases, a lack of data accuracy and integrity can have devastating consequences. For example, Microsoft’s Tay, an AI-driven self-learning social media chatbot, demonstrated how quickly AI-driven decisions can fail catastrophically under the influence of bad data. Within 16 hours of launch, malicious actors had fed Tay enough bad data to turn its AI-driven persona into a hate-mongering, racist, pro-Hitler neo-Nazi.
Self-healing networks and systems hinge on the ability of machines to produce trustworthy, accurate and reliable cybersecurity data and feed it into security information and event management (SIEM) platforms. SIEM systems centralize, combine and refine data from multiple sources. If those sources are inaccurate or flawed, the AI platforms will make equally flawed decisions that, combined with the power to take corrective action, could prove fatal to defense.
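To make the stakes concrete, the SIEM's core job can be sketched as a small pipeline: heterogeneous sources are mapped into a common record, and any event whose integrity check fails never reaches the decision layer. The field names and the checksum scheme below are hypothetical illustrations, not any vendor's actual format.

```python
import hashlib
import json

# Minimal sketch of SIEM-style event normalization (hypothetical schema).
# Each source reports in its own format; the SIEM maps both into a common
# record and rejects events whose integrity checksum does not match.

COMMON_FIELDS = ("timestamp", "source", "severity", "message")

def checksum(payload: dict) -> str:
    """Integrity digest over the raw payload (illustrative scheme)."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def normalize_firewall(raw: dict) -> dict:
    return {"timestamp": raw["ts"], "source": "firewall",
            "severity": raw["level"], "message": raw["msg"]}

def normalize_ids(raw: dict) -> dict:
    return {"timestamp": raw["time"], "source": "ids",
            "severity": raw["priority"], "message": raw["alert"]}

def ingest(events):
    """Normalize events; drop any whose checksum fails verification."""
    accepted = []
    for raw, claimed_sum, normalizer in events:
        if checksum(raw) != claimed_sum:
            continue  # flawed or tampered source data never reaches the AI
        accepted.append(normalizer(raw))
    return accepted

fw = {"ts": "2019-01-01T00:00:00Z", "level": "high", "msg": "port scan"}
ids = {"time": "2019-01-01T00:00:05Z", "priority": "low", "alert": "probe"}
good = ingest([(fw, checksum(fw), normalize_firewall),
               (ids, "tampered", normalize_ids)])  # second event rejected
```

The point of the sketch is the `continue` branch: garbage that is caught at ingestion cannot cascade into an automated corrective action downstream.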
This is a real and major threat to cybersecurity. For example, military drones have gone rogue for hundreds of miles, Amazon Rekognition facial-ID cameras misidentified members of Congress as criminals and AI-based driverless vehicles have caused accidents. These errors occurred because, in many cases, the training data sets used to educate AI were not properly contextualized or curated, or the sensory data fed to the algorithms was unreliable.
Cybersecurity SIEM and AI platforms will be highly dependent on the performance of an array of automated cybersecurity tools, just as self-driving car safety relies heavily on the quality of data from on-board sensors. If similar data problems arise, machines could make erroneous security decisions based on inaccurate or deliberately manipulated input. Any bad information coming from these tools could produce a devastating domino effect on the AI-based tools guarding future infrastructure.
For example, if an electricity smart grid network monitoring device were manipulated to produce spoofed results, it could cause the AI to shut down the grid, leading to a blackout. Similarly, some cybersecurity tools are prone to producing false positives and negatives and could dupe an AI-driven system into believing a network is secure when it is actually vulnerable to attack.
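One mitigation implied here is corroboration: before an autonomous system takes a drastic corrective action such as a grid shutdown, the triggering reading can be cross-checked against independent sensors. A minimal sketch follows; the quorum rule, thresholds and sensor names are illustrative assumptions, not from the article.

```python
def corroborated_alarm(readings, threshold=0.5, quorum=2):
    """Trigger only if at least `quorum` independent sensors agree.

    `readings` maps sensor name -> anomaly score in [0, 1]. Requiring
    agreement means a single spoofed device cannot force a shutdown
    on its own. (Threshold and quorum values are illustrative.)
    """
    agreeing = [name for name, score in readings.items()
                if score >= threshold]
    return len(agreeing) >= quorum

# One manipulated monitor reporting an extreme anomaly is not enough:
spoofed = {"monitor_a": 0.99, "monitor_b": 0.05, "monitor_c": 0.02}
# But genuine agreement across independent monitors does trigger:
real = {"monitor_a": 0.90, "monitor_b": 0.85, "monitor_c": 0.10}
```

The design choice is the same one redundant avionics use: no single data source is trusted with an irreversible action.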
The threat is compounded because many traditional automated cybersecurity tools lack sufficient accuracy and reliability to produce a truly trustworthy self-healing system. However, the current go-to tools such as network scanners and monitoring systems are likely to continue as components of future solutions.
For example, scanners commonly are used to seek vulnerabilities by simulating a cyber attack and bombarding systems with network traffic from the outside. This is the equivalent of shelling friendly frontlines to find weak points. Like shelling, the approach has wide dispersal: it is a great way to get an eagle-eye view, but it often delivers a low accuracy rate.
Independent lab tests of one market-leading scanner showed it was only 16.4 percent accurate when assessing for specific compliance to security technical implementation guidelines. This is far too low to form the sole basis of any trustworthy AI-driven security system.
Network monitoring tools that sit on networks and monitor live traffic for malicious activity are useful for identifying and protecting against cyber attacks in real time. The equivalent of a closed-circuit TV system or burglar alarm, they can make users aware of people casing the location, but they are not set up to defend and protect it, and they can be misled with false data.
Legacy tools are useful but can generate data inaccuracies and leave holes in defenses, so security experts must constantly verify the information coming from their own systems. In effect, they operate a flawed radar system supplemented by the constant vigilance of dedicated security teams. And while these battle-fatigued veterans valiantly defend against automated attacks, the ranks of cybersecurity professionals are running out of new recruits.
In cyber warfare, the speed, scale and even enemy in an attack may be unknown, but the groundwork that must be laid is not dissimilar from traditional warfare. Organizations need a well-prepared battle plan, a defined field of engagement, the ability to know with some certainty what defenses are in play, a resilient response to anticipated attacks and the ability to defend against new forms of attack rapidly.
Security experts configure network infrastructure and defenses to define the field of engagement and prepare the tools that defend it. These weapons—firewalls, switches, routers, servers and other network infrastructure devices—each contain their own set of battle plans, including configurations or operating systems that define their actions under attack. However, the ability to independently interrogate these battle plans accurately and at scale is missing from current cyber defense platforms. It’s a gap that needs to be filled.
The model must change from scanning, monitoring and extrapolating, methods that work well at scale, to incorporating technologies that traditionally have not scaled well. This includes auditing internal system data at a highly granular and accurate level. The solution is challenging, but not impossible.
Configuration and operating system auditing may be in relative infancy for many enterprise providers, but red teams are using at least one mature and secure solution at scale to automate compliance audits and vulnerability discovery. Deployed as an autonomous auditor, it works by virtualizing the configuration or operating instructions of a device—its battle plans. It then combines deep analysis with intelligent modeling to create accurate streamlined information on structural vulnerabilities and security weaknesses along with remediation recommendations. This pre-refined data is exactly the sort of information crucial for future AI-driven self-healing systems.
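The auditing idea can be illustrated as an offline pass over a device's own configuration, its "battle plan," matching it against known-weak settings and attaching a remediation to each finding. The rule set and config syntax below are toy examples invented for illustration, not the checks any real audit tool performs.

```python
import re

# Toy configuration audit: inspect a device's config text offline for
# structural weaknesses, without sending any network traffic at all.
# These rules are illustrative examples, not a vendor's real rule set.

RULES = [
    (re.compile(r"^service telnet", re.M),
     "Telnet enabled", "Disable telnet; require SSH for management."),
    (re.compile(r"^snmp-server community public", re.M),
     "Default SNMP community", "Set a non-default SNMP community string."),
    (re.compile(r"^no service password-encryption", re.M),
     "Plaintext passwords", "Enable password encryption."),
]

def audit(config_text: str):
    """Return (finding, remediation) pairs for every rule that matches."""
    return [(finding, fix) for pattern, finding, fix in RULES
            if pattern.search(config_text)]

sample = """service telnet
snmp-server community public RO
service password-encryption
"""
findings = audit(sample)
```

Because the audit reads the configuration directly rather than inferring weaknesses from external probes, its output is exactly the kind of pre-refined, high-accuracy data the paragraph above describes feeding into a self-healing system.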
Future security operations and network operations centers could ensure security on deployment and rapidly conduct virtually modeled intelligent security audits to ensure continual readiness in many systems, from power plants to military bases, without disrupting the vital networks or systems.
Combining automated scanners and live monitoring systems with configuration auditing tools would create the basis for an AI platform that draws on real-time external data, scanning and stress-testing systems with simulated attacks to define the field of engagement. Such a platform could perform live monitoring of network traffic for attack discovery while maintaining a unified store of accurate system information, including current defenses and their resilience, readiness and response.
Over time, as the self-healing infrastructure continuously strengthens, AI could be trained to predict and prevent a variety of potential cyber attacks, similar to how a self-driving vehicle learns to improve navigation every time it encounters a new object and environment. With accurate data and this self-healing capability, the U.S. military’s Unified Platform could provide the speed, agility and precision of threat detection and mitigation to tip the balance of power away from autonomous attackers and maintain cyberspace superiority.
The Unified Platform endeavor and others like it will require unprecedented collaboration between clients, manufacturers and innovators in cybersecurity. Manufacturers need to make tools that are interoperable to support system flexibility and the addition of new, more accurate data sources. Clients also deserve the clarity of common gold standards so they can fairly assess the trustworthiness, traceability and accuracy of all cybersecurity data.
Nicola Whiting is chief strategy officer at Titania Group. She is a cyber strategy expert, and SC Magazine named her one of the top 20 most influential women working in cybersecurity in the United Kingdom.