
Man vs. Machine

As AI rapidly advances, the debate resurfaces on whether computer processing can match human cognition.
By Capt. William R. Bray, USN (Ret.)

One year ago, scientists announced that they had designed artificial intelligence that displayed a humanlike ability to learn on its own. The breakthrough raised the possibility that machines could one day replace human intelligence analysts. 

That day will not come.

To date, analytical software has significantly aided but not supplanted human analysis. If the analytical process is viewed as a relay race, then the better the software, the closer the analyst is to the finish line when the machine passes the baton. The analyst supplies the vast contextual understanding of the entire problem necessary even to grasp the baton.

But could there be a time in the future when a complete analysis is delivered directly from machine to man with full confidence that man could not have done better? Perhaps, but this would call for a true unsupervised learning program that also accounts for the hard problem of human consciousness, or subjective experience.

Despite advances in artificial intelligence (AI), language—both speaking it and understanding it—remains a central problem. And the language of intelligence analysis is complex. Analysts often carelessly use words such as information, processing, analysis and thinking.

In addition, even seemingly straightforward human analysis is actually the fruit of an elaborate cognitive process rooted in deep subjective experience. Analysts never come to a new problem with a blank slate. Intelligence analysis also is inductive: a probability is inferred by examining distinct parts, or fragments of parts, of a whole picture.

It is tempting to view human cognition and computer processing as essentially the same thing. Yet, as any analyst knows, even fairly simple analytical problems require significant background knowledge to discern patterns and trends and, more important, to imagine alternative possibilities. If intelligence analysis were a discrete set of tasks needing only data inputs, then the thinking done by human analysts would be a reachable target for AI researchers.

Full automation of intelligence analysis requires AI that actually is indistinguishable from the human mind. The search for strong AI that duplicates human intellectual abilities has pushed, in parallel, for a more complete understanding of the mind itself. In the 1950s, early AI pioneers such as Alan Turing and Marvin Minsky were optimistic that strong AI was not only possible but also would be realized before the turn of the century. This optimism rested in part on many erroneous assumptions about how the mind works. One important assumption was that human intelligence could be understood as something other than an embodied intelligence that stems from DNA, a position not popular today in cognitive science. Sixty years of research have produced some critical advances in the development of weak AI, or machine learning, that have aided human cognition and enabled a better understanding of how the mind works, but strong AI remains elusive.

Perhaps it remains so because of continued ignorance about the human mind. Scientists cannot replicate what they do not completely understand. If this is the case, advanced learning machines of the future will only ever aid human analysis and never completely replace it.

Human analysis also relies heavily on subjective experience, which too often is taken for granted but is central to understanding anything. Every piece of data perceived is understood only in the context of everything a person has experienced. The analyst cannot set that experience aside to analyze data or to imagine alternative possibilities. Furthermore, even if exquisite programming could someday entirely remove human analysts, with all their biases, in favor of a machine-generated solution, the intelligence consumer, assumed to be human, would only understand the analysis through his or her subjective experiences.

The most powerful philosophical argument against even the possibility of strong AI is John Searle’s semantics versus syntax argument, first presented in 1980 in the journal Behavioral and Brain Sciences and updated a decade later in Scientific American. Searle contends that all computer programming, whether the traditional step-by-step serial processing described by John von Neumann or the more complex parallel processing programs that resemble the neural nets of a human brain, is nothing more than formal logic through symbol manipulation. By themselves, symbols have no meaning. They are syntax without semantics, and only human minds have mental contents, or semantics.
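The distinction can be made concrete with a toy sketch, offered here only as an illustration of purely formal symbol manipulation, not as Searle's own example; the symbols and rule table below are invented. The program returns well-formed answers by following rules, yet nothing in it represents what any symbol means.

# A toy illustration of syntax without semantics: rules map incoming
# symbol strings to outgoing ones, and the program applies them
# mechanically. Nothing here represents what any symbol means.
# (The symbols and rule table are invented for this illustration.)
RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE",
    ("SQUAGGLE", "SQUIGGLE"): "SQUOGGLE",
}

def respond(symbols):
    """Return the output symbol the rule table dictates, or a default."""
    return RULES.get(tuple(symbols), "SQUIGGLE")

print(respond(["SQUIGGLE", "SQUOGGLE"]))  # prints SQUAGGLE
print(respond(["SQUAGGLE", "SQUIGGLE"]))  # prints SQUOGGLE

To an outside observer the exchange can look like understanding; inside there is only rule-following over uninterpreted tokens, which is Searle's point about any program, however large.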

Additionally, Searle asserts that strong AI proponents, beginning with Turing, are forever confusing simulation with duplication. Computer programs have always simulated many aspects of human thinking, and at far greater speeds, but that hardly is the same as duplicating human thinking. In Turing’s seminal 1950 essay “Computing Machinery and Intelligence,” the computer scientist addresses the definition of thinking, claiming that if people cannot tell the difference between human thinking and machine thinking, then the machine is effectively thinking, and no human has the right to deny the machine that achievement. As Searle points out, a computer program can simulate the human digestive system, too, but it is obviously not digesting anything.

The human brain is part of a complex biological organism and cannot operate independently. Through complex and still-mysterious neurobiological processes, the brain has remarkable powers to produce mental states, such as phenomenal consciousness or subjective experience. In emphasizing this distinction, Searle provided a deep philosophical challenge to even the possibility of strong AI as opposed to simply claiming computer science has not gotten there yet.

Thirty-six years and an entire generation of AI research later, Searle’s detractors have never sufficiently countered his argument. David Rosenthal’s higher-order-thought theory might be the best one offered, but it will forever fail to be proved until a functional system with the same causal powers as a human brain can be built.

Better predictive analysis tools, often using big data aggregation, can potentially yield better predictions. But this is better weak AI. How good it will ultimately be remains to be seen, although intelligence analysts everywhere certainly welcome anything that improves the accuracy of predictions. Fully automated intelligence analysis almost certainly will never happen. For if strong AI proves impossible, then fully automating intelligence analysis is impossible.
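For a sense of what such weak AI amounts to, a minimal sketch appears below. It is a toy example, not any particular intelligence tool: the feature names and data are invented, and scikit-learn's logistic regression stands in for whatever predictive model a production system might actually use. The model finds a statistical pattern and reports a probability; it supplies no context and imagines no alternatives.

# A minimal weak-AI sketch: a statistical model fit to historical,
# labeled observations that outputs a probability for new ones.
# Feature names and data are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [ship movements per week, encrypted traffic volume];
# label 1 means an exercise followed, 0 means it did not.
X_train = [[12, 300], [4, 80], [15, 420], [3, 60], [10, 250], [5, 90]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Estimated probability that each new observation precedes an exercise.
new_obs = [[11, 310], [4, 70]]
print(model.predict_proba(new_obs)[:, 1])

Everything the model "knows" is in those six rows; deciding what the numbers mean, whether the pattern still holds and what else might explain it remains the analyst's work.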

Capt. William R. Bray, USN (Ret.), served as a U.S. Navy intelligence officer on the staff of the Office of the Chief of Naval Operations in Washington, D.C. He is now a managing director at the Ankura Consulting Group. The views expressed here are his alone.