
Is AI Resilient Enough for Security?

Machines need to be hard to fool and reliable under pressure.

Artificial intelligence can be surprisingly fragile. This is especially true in cybersecurity, where AI is touted as the solution to our chronic staffing shortage.

It seems logical. Cybersecurity is awash in data, as our sensors pump facts into our data lakes at staggering rates, while wily adversaries have learned how to hide in plain sight. We have to filter the signal from all that noise. Security has the trifecta of too few people, too much data and a need to find things in that vast data lake. This sounds ideal for AI.

A new field known as adversarial AI asks how easy it is to fool a smart computer. It turns out to be very easy. Take, for example, the widely reported result that adding stickers to a roadside stop sign can make an autonomous car misread it completely. This ease of fooling AI is a real problem for anyone hoping big data plus machine learning will solve our predicament.

There is some hope. It’s the deep learning systems that are easily fooled this way. “Learning” in AI refers to a specific method that combines a large body of data with known outputs and trains the computer to build a matching algorithm. This works well when you have clear results and enough training data. Speech recognition is a great example: We have lots of recorded speech, we know which words are present and we can let the computer figure out how to pick them out of a recording. Importantly, human language doesn’t change very fast.
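To make that idea concrete, here is a minimal sketch of the training step, with feature names and numbers invented purely for illustration: labeled examples go in, and the computer fits a matching function it can then apply to new inputs.

```python
# A minimal sketch of supervised machine learning: inputs paired with known
# labels go in, and the computer fits a boundary that matches them.
# The features and numbers below are hypothetical, chosen only to illustrate.
from sklearn.linear_model import LogisticRegression

# Toy training data: [login_failures_per_hour, megabytes_transferred_out]
X_train = [[1, 0], [2, 1], [40, 500], [55, 800], [3, 0], [60, 950]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = benign activity, 1 = suspicious activity

model = LogisticRegression()
model.fit(X_train, y_train)        # "learning": fit a rule to the labeled examples

print(model.predict([[50, 700]]))  # resembles past attacks, so likely [1]
print(model.predict([[2, 0]]))     # resembles past benign traffic, so likely [0]
```

The catch, as the next paragraph notes, is that the fitted rule only reflects the examples it was shown.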

Machine learning fails when the rules change rapidly or when the problem is new or even slightly different. It isn’t resilient and can’t generalize. Indeed, inflexible technology that isn’t ready for real-world surprises seems to be a big part of why self-driving cars are taking so long to arrive.

In security, we value resilience above everything else. We can’t expect perfect protection, but we can harden our defenses and learn to detect and respond rapidly as attacks change. Machine learning is not the right tool to deliver resilience.

There is another type of AI, machine reasoning, that provides an alternative approach for security. The key difference between machine learning and machine reasoning is the input. Machine learning takes in a ton of raw data and a set of known good answers, then asks the computer to come up with an algorithm. Machine reasoning adds a whole new ingredient, combining complex data with domain expertise captured in software. We can encode rules we understand, such as “vendor default passwords are bad,” and let machine reasoning chain them together far more thoroughly than we can, exposing defensive gaps and realistic attack chains. We can also use customized rules during incident response to speed up human analysis and triage.
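As a rough illustration only, the sketch below pairs a tiny, hypothetical network model with one encoded rule about weak hosts and lets the computer exhaustively enumerate attack chains. The host names, weaknesses and rule are assumptions made for the example, not any particular product’s model.

```python
# A minimal sketch of rule-driven machine reasoning: encoded expertise
# ("vendor default passwords are bad") plus exhaustive search enumerates
# attack chains more thoroughly than a human could trace by hand.
# The network model, host names and weaknesses here are hypothetical.
from collections import deque

# Hypothetical network model: which hosts can reach which, and known weaknesses.
reachable = {
    "internet": ["web01"],
    "web01": ["app01", "db01"],
    "app01": ["db01"],
    "db01": [],
}
weaknesses = {
    "web01": ["unpatched_cms"],
    "app01": ["default_password"],
    "db01": ["default_password"],
}

def exploitable(host):
    # Encoded domain rule: a host with any known weakness can be compromised.
    return bool(weaknesses.get(host))

def attack_chains(start, target):
    """Exhaustively enumerate paths from start to target through exploitable hosts."""
    chains, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in reachable.get(path[-1], []):
            if nxt in path or not exploitable(nxt):
                continue
            if nxt == target:
                chains.append(path + [nxt])
            else:
                queue.append(path + [nxt])
    return chains

for chain in attack_chains("internet", "db01"):
    print(" -> ".join(chain))  # e.g. internet -> web01 -> db01
```

Every chain the search prints is a defensive gap a human can then prioritize and close.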

Even so, we can’t take humans out of the loop. It takes a person to play cat-and-mouse and think about the motivations, politics and economics of human adversaries. Computers still can’t do that.

Machine reasoning is the way forward for security. We need to build resilient systems that are hard to fool and reliable under pressure. The best strategy is to divide the work correctly between the computers and the people. Computers are good at exhaustive calculation and searching huge data spaces, while people are far better at policy, intuition and psychology. Teams that combine human strategists with deep machine reasoning will produce the resilience we’re looking for.

Dr. Mike Lloyd is the chief technology officer, RedSeal.