The Dark Art of Artificial Intelligence

National security depends on gaining an AI edge.

Asked which technology will be most critical to national security in the coming years, experts agree: artificial intelligence, hands down.

Two experts from academia and industry—Matthew Gaston, director of the Emerging Technology Center at the Carnegie Mellon University Software Engineering Institute, and Fletcher Previn, chief information officer at IBM Corporation—participated in a fireside chat at the AFCEA TechNet Cyber 2019 conference, where both predicted that artificial intelligence will be the technology most critical to national security in the next several years.

“I don’t think there’s as much hype as you would expect around AI in the cyber world,” Previn suggested. “If you ask any cybersecurity professional what keeps them up at night, it’s AI-augmented threats, because if you can iterate millions of times a second on a piece of malware, it’s very hard to defend against. The only thing that can combat AI is more AI.”

Gaston agreed. “As these other nation-states are investing heavily in these types of technologies, we need to be making similar investments,” he said. “AI is very much a craft right now, almost a black art, to build one of these deep learning systems, computer vision systems, those sorts of things.”

Previn offered two perspectives on AI. “As an IBM employee, I’m optimistic about the future of AI, but as an American, we cannot lose the AI advantage,” he said. “It’s not like traditional compute where you only have to be a little bit better than somebody to [avoid having] a really big problem on your hands.”

Gaston suggested the United States create a professional AI engineering discipline to help ensure national security. He noted that adversarial machine learning, which started out as a way of training neural networks, can be used to spoof machine learning systems. He suggested it also can be used for military testing and evaluation. “This adversarial machine learning is a great way to get at testing of machine learning systems. We’ve got to build up the discipline for doing that kind of thing,” he said. “I think AI engineering is a difference maker, not just the rapid adoption of AI and machine learning, but doing it in a smart way where you know where the vulnerabilities of it are and you know where it might fail.”
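To make that concrete, here is a minimal, illustrative sketch, in PyTorch, of the fast gradient sign method, one standard adversarial-machine-learning technique a test-and-evaluation team might use to probe where a classifier fails. The model and input below are placeholders, not anything Gaston or the SEI described; a real evaluation would run this against the deployed model and its data.

```python
import torch
import torch.nn as nn

# Toy stand-in for a deployed image classifier (hypothetical;
# any differentiable model is attacked the same way).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Fast gradient sign method: nudge each pixel by +/- epsilon
    in the direction that most increases the loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    # Against a trained model, one small, near-imperceptible step
    # per pixel is often enough to flip the prediction.
    return (x + epsilon * x.grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
label = torch.tensor([3])      # its true class
adversarial = fgsm_attack(x, label)
print(model(x).argmax(1), model(adversarial).argmax(1))
```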

Previn responded by predicting the very nature of malware could soon change. “In the near future, malware is not going to be about getting you to click on a link in an email to install a piece of code on your computer. It’s going to be about using the minimum amount of data required to corrupt a data set that something like a vision recognition system is using.”

He added that AI systems can be easily fooled. “With a surprisingly small amount of data, you can get an autonomous driving system to think that a stop sign is a green light,” Previn said. “That’s a fairly benign use case. Now imagine financial systems, medical systems, critical infrastructure. AI is very closely related to the big data problem and data integrity.”
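Previn’s point about a surprisingly small amount of data is easy to demonstrate with a data-poisoning sketch. The toy experiment below, a hypothetical illustration rather than anything IBM described, flips the labels on a small fraction of a synthetic training set and measures the damage to a scikit-learn classifier on clean test data; accuracy degrades as the poisoned fraction grows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training pipeline (hypothetical data).
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(fraction):
    """Flip the labels of a small fraction of training rows,
    then measure the damage on a clean test set."""
    rng = np.random.default_rng(0)
    y_bad = y_tr.copy()
    idx = rng.choice(len(y_bad), int(fraction * len(y_bad)), replace=False)
    y_bad[idx] = 1 - y_bad[idx]  # corrupt only the chosen rows
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)
    return clf.score(X_te, y_te)

for f in (0.0, 0.05, 0.20, 0.40):
    print(f"{f:.0%} poisoned -> test accuracy {accuracy_with_poisoning(f):.3f}")
```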

The next great leap in AI technology may be comprehension of language, Previn predicted. “I think mastery of language will be the shortest path to close the gap between narrow and general AI. That will really be when you see that hype machine start to spin up.”

He went on to explain that current AI systems require the answer to a question to exist in the data the system is examining. “That’s not the case if you can reason with it. That then becomes the equivalent of reading and writing, where you can then answer questions to which there was no previously known answer. That will be when we really start to see some exciting things happen with AI.”