
AI Offers Intelligence Pitfalls as Well as Advantages

The key technology the community is counting on could be turned against it.

At the top of the list of tools the U.S. intelligence community expects to help accomplish its future mission is artificial intelligence, or AI. It is being counted on to help collect and sort amounts of data that are growing exponentially. However, like many of these tools, AI can be co-opted or adopted by adversaries well-schooled in basic scientific disciplines. As a result, AI can be a trap for unwitting intelligence officials, offers Bob Gourley, co-founder and chief technology officer of OODA LLC.

For example, machine learning algorithms can teach themselves as new data comes in. This activity is taking place today, but problems already have arisen that bode ill for future applications lacking appropriate safeguards. Machine learning algorithms can be deceived by adversaries in ways that affect the fidelity of the information gleaned from the data, Gourley says. And AI can be self-deceiving. “Any algorithm that can change itself can corrupt itself,” he states.
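
The self-corruption risk is easy to illustrate. The sketch below is a hypothetical example, not any real intelligence system: it uses the open-source scikit-learn library and synthetic data to show how a model that keeps retraining itself on incoming batches can be degraded by an adversary who controls part of the data stream and simply mislabels what the model is fed.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; nothing here represents real intelligence holdings.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An online model that keeps updating itself as new batches arrive.
model = SGDClassifier(random_state=0)
model.partial_fit(X_train[:500], y_train[:500], classes=np.array([0, 1]))
print("accuracy before poisoning:", round(model.score(X_test, y_test), 2))

# An adversary controlling part of the stream flips most labels in later batches.
rng = np.random.default_rng(0)
for start in range(500, len(X_train), 500):
    Xb = X_train[start:start + 500]
    yb = y_train[start:start + 500].copy()
    flip = rng.random(len(yb)) < 0.8       # 80 percent of the batch is mislabeled
    yb[flip] = 1 - yb[flip]
    model.partial_fit(Xb, yb)              # the model dutifully "learns" the bad labels

print("accuracy after poisoning: ", round(model.score(X_test, y_test), 2))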

The hope behind using AI is that an AI algorithm will think for itself and learn from experience as it serves its human masters. However, Gourley warns that machine learning can lead an algorithm astray to the point where its original mission is corrupted by its learning.

He cites the example, originally reported by Reuters, of Amazon’s use of AI to screen resumes from job applicants. The self-learning algorithms were designed to scan for the best potential recruits and pass the top resumes up the chain for human review. The computer models vetted applicants by observing patterns in resumes submitted over a 10-year period.

However, this approach inadvertently enabled the AI to become misogynistic. Basing its learning experience on traditional resume patterns amassed over 10 years—which largely represented men—the AI taught itself that male resumes were preferable and overwhelmingly rejected resumes from women. Elements of women’s resumes that did not appear on men’s were flagged as undesirable even though they did not indicate any lack of ability or poor work habits. Amazon worked to fix the recruiting engine but determined that it could not be assured that the AI would not discriminate in other ways. Ultimately, it abandoned the AI-driven screening process.
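
The underlying mechanism is simple to demonstrate. The sketch below is not Amazon’s system; it is a hypothetical model trained on invented, historically skewed hiring outcomes, and it shows how a feature that merely correlates with gender ends up penalized even though it says nothing about ability.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
skill = rng.normal(size=n)                               # genuinely predictive feature
is_woman = rng.random(n) < 0.3
club = (is_woman & (rng.random(n) < 0.7)).astype(float)  # proxy feature, e.g. a women's club on the resume

# The bias lives in the historical labels: past hiring favored men regardless of skill.
hired = (skill + (~is_woman) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, club])
model = LogisticRegression().fit(X, hired)
print("weight on skill:         %+.2f" % model.coef_[0][0])
print("weight on women's club:  %+.2f" % model.coef_[0][1])  # a learned penalty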

Another commercial AI setback occurred in 2016 with Microsoft’s Tay, an AI chatbot released through Twitter. Tay quickly became the target of trolls who bombarded it with dialogue that turned it into a sex-crazed racial supremacist. Designed to mimic the language patterns of a 19-year-old woman, Tay learned the wrong lessons from these miscreants, developing Nazi tendencies, and Microsoft had to take it down within hours of its introduction.

Other examples of AI distortion abound, but these two cases highlight pitfalls of AI in intelligence applications. In the first, a well-meaning AI algorithm drew the wrong conclusions as it pursued its mission, which did its users no favors. In the second, outside influences conspired to corrupt the algorithm and transform it into something that ran counter to everything its owners hoped. The intelligence community must guard against these types of outcomes, as either could be devastating.

Outsiders can manipulate AI to the same effect. “Machine learning algorithms can be deceived,” Gourley declares. In national security applications, an adversary could plant or manipulate data to steer AI into generating misleading information, and the intelligence community must take pains to avoid this scenario.
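
One form such manipulation can take is an evasion attack, in which an input is nudged just enough to flip a model’s decision. The brief sketch below uses an illustrative linear classifier on synthetic data, not any operational system, to show how small the manipulation can be.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]

# Nudge the sample against the model's weight vector until the label flips.
w = model.coef_[0]
direction = -np.sign(model.decision_function([x])[0]) * w / np.linalg.norm(w)
x_adv = x.copy()
while model.predict([x_adv])[0] == original:
    x_adv = x_adv + 0.1 * direction

print("original label:   ", original)
print("manipulated label:", model.predict([x_adv])[0])
print("size of change:   ", round(float(np.linalg.norm(x_adv - x)), 2))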

Many commercial firms have discovered that their AI algorithms or data are vulnerable to attacks from outsiders, which can lead to thoroughly corrupted information. Gourley describes how businesses red-team their AI constructs with outside experts who try to manipulate the algorithm and its data. But even with this red-team validation, companies must maintain scrutiny of their AI to ensure it is generating expected results. Having corroborating information may be necessary for decision making, he says. “These kinds of lessons will apply to the intelligence community. But the intelligence community is operating at a much greater scale and will need much more well-engineered solutions than commercial industry will,” he emphasizes.
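
That ongoing scrutiny can be as simple as periodically re-scoring a deployed model against a trusted, held-back validation set and flagging any drift from the red-team-approved baseline. The helper below is a hypothetical sketch of such a check, with an illustrative threshold.

def check_model_health(model, X_holdout, y_holdout, baseline_accuracy, tolerance=0.05):
    """Alert if the model has drifted from its red-team-validated baseline."""
    current = model.score(X_holdout, y_holdout)
    if current < baseline_accuracy - tolerance:
        print(f"ALERT: accuracy fell from {baseline_accuracy:.2f} to {current:.2f}")
        return False
    return True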

“Some of these machine learning algorithms change themselves so much that no human can understand how they work,” Gourley declares.

The community must incorporate a means of explaining AI findings if it is to avoid deliberate or accidental deception, he says. “We need to know what made [AI] come up with this conclusion,” Gourley says of a hypothetical AI-driven report. He adds that many major academic institutions that teach AI are addressing this issue, and this should help the intelligence community in the future.

Startup companies also are being formed to address this need. One approach is to examine the millions of variables that machine learning systems have taught themselves and then have the AI report the top 10 variables driving a result, he relates. “There is a lot of innovation in AI protection and explainability.”
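
A rough sketch of that “top variables” idea follows. It uses scikit-learn’s permutation importance on a small synthetic model to rank features and report the ten that most influence the output; real explainability tooling goes considerably further.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A synthetic model standing in for one with "millions of variables."
X, y = make_classification(n_samples=2000, n_features=50, n_informative=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by how much shuffling each one hurts the model, then report the top 10.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for rank, idx in enumerate(np.argsort(result.importances_mean)[::-1][:10], start=1):
    print(f"{rank:2d}. feature_{idx}  importance={result.importances_mean[idx]:.3f}")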

Bob Gourley is moderating a panel on technology futures at the Intelligence and National Security Summit being held at National Harbor, Maryland, September 4-5. His views on key intelligence technology issues can be found in an expanded version of this article in the September issue of SIGNAL Magazine, available online September 1.