Intelligence Analysis Needs Course Change
Legacy methods and arcane rules are hamstringing U.S. intelligence analysis at a time when it should be innovating. From training, which needs to shift its emphasis back to basic skills, to collection and processing, which must branch into nontraditional areas, intelligence must make course corrections to overcome its inflexibility, according to a onetime intelligence official.
Previous shortcomings led to the establishment of analysis standards that have become too rigid for today’s intelligence needs, says Mark Lowenthal, former assistant director of central intelligence for analysis and production. Lowenthal, who teaches an AFCEA PDC onsite course on U.S. intelligence that is also available in a virtual format, states that training must change to compel the type of analysis necessary to meet the new threat picture.
Intelligence analysis changed substantially in the wake of the 9/11 attacks and the onset of the Iraq War, Lowenthal says. Recognizing that analytic deficiencies existed then, Congress and the intelligence community created the Director of National Intelligence (DNI) position. Along with this came congressional requirements for what analysis should look like. These evolved into analytical standards that emphasized analytic training, but the attempt to implement that training on a community-wide basis largely has not succeeded, Lowenthal states. Analytic training is still stovepiped by agency.
These analytical standards are the new battleground in analysis training, he continues. The conflict lies in how the standards are being interpreted, as many analytical managers have taken to using the standards as a checklist rather than as guidelines. The result is to subject analyses to an acid test. “If your analysis doesn’t check off every one of these boxes, it cannot go forward,” Lowenthal reports. “The standards are a guideline—what good analysis should look like, and these are the things you need to think about. But to argue that you have to do each and every one of them, or the analysis can’t go forward, I’m quite convinced is not what the DNI had in mind [when formulating these standards],” he declares.
What makes this problem worse is that the standards are now written in law. “So, if an analyst doesn’t check off all the standards, is he in violation of the law?” Lowenthal questions. The argument that you can perfect analysis is illusory, he claims. “It misunderstands the inherent nature of intelligence analysis.
“You rarely have complete intelligence,” he continues. “You are going to have different points of view as to how good the intelligence is … there are subtleties, there are uncertainties, and we rarely have absolute truth.” At the end of the day, it is up to the policymaker to determine how to view the opinions expressed in an analysis, he adds.
Fixing intelligence analysis can come down to basics, Lowenthal offers. One skill sorely lacking among many intelligence trainees is the ability to write effectively. “They literally can’t write well,” he charges. “I’ve joked that this is because they spend all their day texting and they sound like a character out of a 1950s western … but I find that my students really can’t write well, and that’s a major issue. You have to be able to write well, quickly and to the point.
“If you don’t like to write, then [analysis] is a really bad job,” he continues. “You really need to think about another profession.”
He also expresses concern over these students’ ability to discriminate among sources. They tend to spend too much time on social media, which lacks an intermediating authority that judges the worth of each piece of information. This may have led to a decline in students’ critical skills, he suggests.
And social media itself poses different challenges. Pursuing relevant information on social media is, for the most part, a waste of time because of the large amount of chaff. Lowenthal notes that Sir David Omand, the former director of the Government Communications Headquarters in the United Kingdom, has suggested that social media should be treated as a separate -INT known as SOCMINT because it’s a separate open-source stream.
But these increasingly large amounts of data can be overwhelming to the decision maker. In many cases, the customer cannot understand it in its raw form. “Data are like a foreign language,” Lowenthal offers. “Maybe we should be thinking about data as a language set, where we need a bunch of people who can interpret data and then give it to the analyst.”
The value of data depends on the algorithm behind it, he notes, referring to a former colleague who used to say that an algorithm is just an opinion written in code. “Data is not ground truth,” Lowenthal states. “It’s an interpretation of number sets, and different types of data are analyzed and look different. I need a set of people who can interpret the data for me just like I need someone who can translate Chinese for me.” This interpreter would provide the translation to the analyst, saying what he or she believes the data is, its reliability and the problems with the data set—just as is done with other -INTs, he says.
One lesson educators can take from the military is to “train the way we fight,” he says. The intelligence community does not do that, he charges, opting instead to train as individual agencies instead of as a community. “We don’t put enough emphasis on training and education the way the military does. It’s a career-long activity; it’s not just checking a bunch of boxes before we put you at your desk after six weeks.”
Students also should be taught to deal with uncertainty; knowing how to handle it and how to express it are needed skills, he adds. This is especially important with cyber, where information often defies easy attribution.
Experts who focus on the future of analysis often cite artificial intelligence (AI) as essential to managing large amounts of data, but Lowenthal warns against excessive expectations. “We’re not at a point yet where AI is going to achieve what we want it to do,” he states. “It’s very important in terms of pattern recognition and working through sets of data, but again we need a person to interpret it for us.” AI is not likely to change analysis in the near term, he concludes.
But improvements in the intelligence community need not wait for technology to advance. For example, one thing the pandemic has shown is that the United States needs better medical intelligence, Lowenthal says. “We are not well-structured right now for medical intelligence,” he states. Existing assets include the National Center for Medical Intelligence at Fort Detrick, Maryland, and the Centers for Disease Control and Prevention, which includes people cleared for Top Secret/Sensitive Compartmented Information (TS/SCI). But the United States should start thinking about a national intelligence officer for health, he says. The country also needs offices dedicated to epidemiology and pandemics.
Lowenthal offers that terrorism, crime and narcotics are all issues similar to national health. At one level, they are foreign intelligence issues, but at another, they are domestic issues. The United States has figured out how to parse those three topics; it can easily do the same with national health, he says. “We are not well structured for it, and we haven’t paid enough attention to it, and that’s going to have to change.”
This will require hiring people with biological and medical expertise, such as epidemiologists and doctors, and then translating their findings and analyses so that government officials can understand the intelligence. “We need to put more emphasis on this because it’s going to keep happening,” he predicts. “This is a major warning shot, and we need to be better structured to do it.”
Mark Lowenthal teaches the AFCEA online/virtual course, U.S. Intelligence: An Introduction.
Iraq War Intelligence Failures Led to Current Limitations
The incorrect intelligence assessment of Iraq’s weapons of mass destruction capabilities was one of the two main causes of the new analysis standards, but that assessment may not have been the primary driver behind the U.S. decision to invade Iraq, says Mark Lowenthal, former assistant director of central intelligence for analysis and production.
“The estimate did not cause the war,” Lowenthal maintains.
“Clearly, there were flaws in the Iraq experience,” he states. “Number one, we were wrong.” Yet, he charges, the decision makers who led the United States into that war did not read that estimate.
Lowenthal emphasizes that the estimate was not written for President George W. Bush or Vice President Richard B. Cheney. The request for the estimate came from the U.S. Senate, for which it was written. The Senate wanted it for the vote to go to war, he says.
Lowenthal relates that he was the officer in charge at the CIA when the request for the estimate came in one Sunday afternoon in October 2002. While 9/11 provided the emotional spark for intelligence reform, most of the legislation that directed it had the Iraq estimate mistake in mind.
He offers that the biggest flaw that doomed the estimate was that analysts did not think of Iraq as a place. “We thought about Iraq as a bunch of weapons systems. What does their nuclear program look like, what does their missile program look like, what does their CW/BW [chemical warfare/biological warfare] look like? We didn’t think about the overarching picture of Iraq.
“And there was really no ground truth available in Iraq,” he continues. “Everybody in the Iraqi government lied to everybody else, up and down the line. And so, it would have been incredibly difficult to have established ground truth in Iraq.
“Some of our sources turned out to be totally unreliable, which is another issue … but the main issue was we ended up not thinking about the place in which this was happening, what is it like. Had we thought about that … I’m not sure it would have changed anything.”