On Point: Q&A With Dr. James Stanger
Dr. James Stanger has consulted with government and industry worldwide about security, data analytics, open source and workforce development for over 25 years. Organizations include Northrop Grumman, the U.S. Defense Department, the United Kingdom Royal Army, Mandiant, Amazon Web Services and Thailand’s National Security Agency. He is a member of the Forbes Technology Council and the AFCEA Cybersecurity Committee. He is currently CompTIA’s chief technology evangelist.
What is machine-assisted decision-making, and why is it important?
I see it as leveraging information and automation. There’s way too much data out there for a human—or any collection of humans—to crunch. We need well-trained artificial intelligence (AI) co-workers to do at least 80% of that crunching and sifting for us. Then people can make great decisions.
What useful AI implementations have you seen for cybersecurity?
Solutions that improve existing technologies and advance process maturity are where it’s at. I love AI implementations that enable better human-to-machine dialogue and that leverage existing best practices. Most established security tools, from CrowdStrike to SentinelOne and Red Hat’s Ansible Lightspeed, do both well. The Defense Department’s 8140 program created a tool to help sift through courseware submissions.
What creative AI solutions have you seen for cybersecurity?
I focus on implementations that accelerate decisions in the OODA [observe, orient, decide and act] loop. I recently spoke with an analyst who processes terabytes of maps, photographs and data every day with her AI co-worker. She has become very good at an old idea: the Socratic method, a creative dialogue between two interested parties. Useful AI involves going “back to the future” to dialogue-based critical thinking. Humans can actually do that, with help.
What frightens you about AI?
First, the AI hype cycle. Suddenly, everyone is an expert in everything, including generative AI. But we can use these resources intelligently if we’re humble. Second, folks seem willing to believe anything generated by their AI friend. Third, I worry about responsible AI use and “guard rails.” It comes down to how well, or how cynically, humans use or misuse it. If AI causes serious problems, it will be a failure of the human element, where we didn’t set and enforce the right behavioral expectations.
Finally, I worry about bias.
We’ve seen egregious examples of bias regarding race, class and gender. So many exist; some are even documented. There’s another bias issue: Each AI tool has a particular origin, and it’s hard to escape that DNA. Some might remember the decades-old submarine movie “The Hunt for Red October.” In one scene, the AI software in the American submarine ingests some data, crunches it and then tells its co-workers that they are chasing a volcano, not a sub. Seaman Jonesy, the human element in the conversation, doesn’t believe that conclusion. He knows that the machine learning engine was first created to detect seismic anomalies, so he concludes it has “run home to Mama,” interpreting the data as a “seismic disturbance,” when in fact they have found a new top-secret sub. My point? It (still) takes humans to guide the conversation to a useful conclusion. We need to delve into that dynamic. Humans will be in the OODA loop indefinitely. It’s just a question of where and how.
In your view, does AI most benefit attackers or defenders in the cyber domain?
It all depends upon how we train it. The most mature solutions, right now, favor the defensive side. But it doesn’t have to remain that way. I’m intrigued by machine learning-enabled efforts to speed up pivot time. Enhancing proactive and detective threat modeling will allow us to gain superiority in any industry or theater of operations.
We need to train our AI like we train our children: Provide guidelines, intervene with corrections and wean them on the right data.