
Man vs. Machine

As AI rapidly advances, the debate resurfaces on whether computer processing can match human cognition.
By Capt. William R. Bray, USN (Ret.)

One year ago, scientists announced that they had designed artificial intelligence that displayed a humanlike ability to learn on its own. The breakthrough raised the possibility that machines could one day replace human intelligence analysts. 

That day will not come.

To date, analytical software has significantly aided but not supplanted human analysis. If the analytical process is viewed as a relay race, then the better the software, the closer the analyst is to the finish line when the machine passes the baton. But the analyst must bring vast contextual understanding of the entire problem even to grasp the baton.

But could there be a time in the future when a complete analysis is delivered directly from machine to man with full confidence that man could not have done better? Perhaps, but this would call for a true unsupervised learning program that also accounts for the hard problem of human consciousness, or subjective experience.

Despite advances in artificial intelligence (AI), language—both speaking it and understanding it—remains a central problem. And the language of intelligence analysis is complex. Analysts often carelessly use words such as information, processing, analysis and thinking.

In addition, even seemingly straightforward human analysis is actually the fruit of an elaborate cognitive process rooted in a deep subjective experience. Analysts never come to a new problem with a completely blank slate as a starting point. Intelligence analysis also is inductive, whereby a probability is inferred by examining distinct parts, or fragments of parts, of a whole picture. 

It is tempting to view human cognition and computer processing as essentially the same thing. Yet, as any analyst knows, even fairly simple analytical problems require significant background knowledge to discern patterns, trends and, more importantly, to imagine alternative possibilities. If intelligence analysis were a discrete set of tasks needing only data inputs, then the thinking done by human analysts should be a reachable target for AI researchers.

Full automation of intelligence analysis requires AI that actually is indistinguishable from the human mind. The search for strong AI that duplicates human intellectual abilities has pushed, in parallel, for a more complete understanding of the mind itself. In the 1950s, early AI pioneers such as Alan Turing and Marvin Minsky were optimistic that strong AI was not only possible but also would be realized before the turn of the century. This optimism rested in part on many erroneous assumptions of how the mind works. One important assumption was that human intelligence could be understood as something other than an embodied intelligence that stems from DNA, a position not popular today in cognitive science. Sixty years of research have produced some critical advances in the development of weak AI, or machine learning, that have aided human cognition and enabled a better understanding of how the mind works, but strong AI remains elusive.

Perhaps it remains so because of continued ignorance about the human mind. Scientists cannot replicate what they do not completely understand. If this is the case, advanced learning machines of the future will only ever aid human analysis and never completely replace it.

Human analysis also relies heavily on subjective experience, which too often is taken for granted, but is central to understanding anything. Every piece of data perceived is understood only in the context of everything a person has experienced. The analyst cannot set aside an experience to analyze data or to imagine alternative possibilities. Furthermore, even if exquisite programming could someday entirely remove human analysts—with all their biases—in favor of a machine-generated solution, the intelligence consumer, assumed to be human, would only understand the analysis through his or her subjective experiences.

The most powerful philosophical argument against even the possibility of strong AI is John Searle’s semantics versus syntax argument, first presented in 1980 in the journal Behavioral and Brain Sciences and updated a decade later in Scientific American. Searle contends that all computer programming—whether the traditional step-by-step serial processing expressed by John von Neumann or the more complex parallel processing programs that resemble the neural nets of a human brain—is nothing more than formal logic through symbol manipulation. By themselves, symbols have no meaning. They are syntax without semantics, and only human minds have mental contents, or semantics. 
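
As a concrete illustration of Searle's distinction (not any program he describes), consider a minimal Python sketch of symbol manipulation without semantics. The rule table and questions are invented; the program produces well-formed answers purely by matching tokens, attaching no meaning to any of them.

```python
# A minimal sketch of syntax without semantics: the program "answers"
# questions by rule-based symbol matching alone. The rules are invented.

RULES = {
    "is it raining?": "yes, bring an umbrella.",
    "is it sunny?": "yes, wear sunscreen.",
}

def respond(symbols: str) -> str:
    """Map an input symbol string to an output symbol string.

    The mapping is syntactically well-formed, but the program attaches
    no meaning to 'raining' or 'umbrella'; it only shuffles tokens.
    """
    return RULES.get(symbols.lower(), "i do not understand.")

if __name__ == "__main__":
    print(respond("Is it raining?"))  # yes, bring an umbrella.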

Additionally, Searle asserts that strong AI proponents, beginning with Turing, are forever confusing simulation with duplication. Computer programs have always simulated many aspects of human thinking, and at far greater speeds, but that hardly is the same as duplicating human thinking. In Turing’s seminal 1950 essay “Computing Machinery and Intelligence,” the computer scientist addresses the definition of thinking, claiming that if people cannot tell the difference between human thinking and machine thinking, then the machine is effectively thinking, and no human has the right to deny the machine that achievement. As Searle points out, a computer program can simulate the human digestion system, too, but it is obviously not digesting anything. 

The human brain is part of a complex biological organism and cannot operate independently. Through complex and still-mysterious neurobiological processes, the brain has remarkable powers to produce mental states, such as phenomenal consciousness or subjective experience. In emphasizing this distinction, Searle provided a deep philosophical challenge to even the possibility of strong AI as opposed to simply claiming computer science has not gotten there yet.

Thirty-six years and an entire generation of AI research later, Searle’s detractors have never sufficiently countered his argument. David Rosenthal’s higher-order-thought theory might be the best one offered, but it will forever fail to be proved until a functional system with the same causal powers of a human brain can be built. 

Better predictive analysis tools, often using big data aggregation, can potentially yield better predictions. But this is better weak AI. How good it will ultimately be remains to be seen, although intelligence analysts everywhere certainly welcome anything that improves the accuracy of predictions. Fully automated intelligence analysis almost certainly will never happen. For if strong AI proves impossible, then fully automating intelligence analysis is impossible.
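
To make "weak AI" prediction concrete, here is a toy Python sketch, with invented event names and counts: a probability estimated from aggregated historical observations. Real predictive analysis tools are far more elaborate, but this is the basic shape of the aggregation idea.

```python
# A toy illustration of prediction by aggregation: estimate the
# probability of an event from historical counts. Names and numbers
# are invented for illustration.
from collections import Counter

history = Counter({"exercise_held": 42, "exercise_skipped": 8})

def predicted_probability(event: str) -> float:
    """Relative frequency of an event in the aggregated history."""
    total = sum(history.values())
    return history[event] / total if total else 0.0

print(f"P(exercise held) ~ {predicted_probability('exercise_held'):.2f}")  # 0.84
```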

Capt. William R. Bray, USN (Ret.), served as a U.S. Navy intelligence officer as part of the staff of the Office of the Chief of Naval Operations in Washington, D.C. He is now a managing director at the Ankura Consulting Group. The views expressed here are his alone.

Comments

Everything that you know and think about is a collection of years and years of incredibly high-definition video being fed to your brain via your eyes. You don't fully understand your own thinking, or where it is today versus when you were an infant. A computer can have sensors, a video feed and an audio feed, but it doesn't come close to the sensors a person has. Hearing implants have about 10 electrodes and cannot replicate sound to the mind the way a person's ears can. Not even close.

An infant is believed to operate off instinct and reaction, the equivalent of "if then else" statements. Receiving unquantifiable amounts of data from its senses, the infant experiences the world, begins to favor or dislike things, and learns to move its arms and legs a specific way to crawl around; through repetition it becomes autonomous. Parents (anyone, really) who talk or communicate with the infant are believed to create the infant's inner voice. When people think, they almost talk with a voice in their head. When they are infants just learning to talk, they say everything that comes to mind, and as their speech skills improve, what they say becomes understandable and they talk all the time. Then they go off to school, where they are told to be quiet, and slowly that voice becomes the inner voice people have today.
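
To put that "if then else" picture in code: below is a minimal Python sketch, with invented stimuli and weights, of fixed reflex responses plus a preference that strengthens with repetition.

```python
# A sketch of instinct as if/then/else reflexes, with repetition slowly
# building a preference. Stimuli and weights are invented.

preferences = {"warmth": 0.0}

def react(stimulus: str) -> str:
    # Instinctive reflexes: fixed if/then/else responses.
    if stimulus == "hunger":
        return "cry"
    elif stimulus == "warmth":
        preferences["warmth"] += 0.1  # repetition builds the preference
        return "calm"
    else:
        return "wiggle"

for _ in range(10):                # repeated experience
    react("warmth")
print(preferences["warmth"])       # roughly 1.0 after ten repetitions
```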

I mention all of this because sensors today are worse than Helen Keller's experience of life in her first five years. She was able to experience more through touch than a bunch of electrodes, a hearing implant and a camera could ever provide her. So ask: how is a computer supposed to learn and develop AI when the sensors and hardware it has are not on the same level as a person's?

Your claim that this will never happen should instead be that it will never happen in your lifetime.

People have five great senses; a computer has two, audio and visual, with other sensors providing touch-like senses. A computer also gets to experience direct data inflow from the Internet, radio waves, sonar, etc. But it's not enough. Try experiencing everything you have experienced in life while only being able to use what a computer can, where everything you see can only be seen through sonar, a camera, etc. You would be a completely different person.

For what computers have to work with, they are doing excellently, a lot better than people could.

Other points.

People are creatures of habit because we learn the same way computers have been made to learn: a computer experiences input data, forms its own opinion of that data, and then continues toward the most favored outcome. Computers always try to do the best with the data they have, whereas people are okay with not choosing the best option. Let's say there are two ways to get to work. One is the normal way you go every day; the other is a road you have traveled that is faster by 10 minutes but requires you to focus more because it's usually congested and poorly marked. A computer would pick the fastest route, and a person would continue being inefficient, because people have a problem with change.
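
A minimal Python sketch of that route choice, with invented travel times: the computer minimizes expected time, while the habitual chooser ignores the numbers.

```python
# Two routes to work; the computer picks by the numbers, the person
# by habit. Times and attention costs are invented.

routes = {
    "usual":  {"minutes": 30, "attention": "low"},
    "faster": {"minutes": 20, "attention": "high"},  # 10 minutes faster
}

def computer_choice() -> str:
    # Always pick the route with the smallest expected time.
    return min(routes, key=lambda r: routes[r]["minutes"])

def habitual_choice(habit: str = "usual") -> str:
    # People often repeat the habit regardless of the numbers.
    return habit

print(computer_choice())   # faster
print(habitual_choice())   # usual
```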

Give a computer 1,000 times the video and audio quality that exists today, a sense of smell, touch and taste equal to a person's, along with the computational capacity to equal the neural network of a human, and I think a computer will be able to have real AI.

In regards to a conscience:
Is this your ability to think things out in your head? A computer can play out scenarios in its head to try to improve its outcome; see the sketch below.
Is it your ability to dream? Do you control your dreams? No. Dreaming is a way for your brain to replay information you experienced at one point, in an unorganized, random manner and from different perspectives. It's a different way of reviewing information and scenarios, which a computer can do.
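
As a hedged sketch of "playing out scenarios," the Python below runs random rollouts of an invented two-move game and keeps the move whose simulated outcomes average best. The moves and payoff ranges are assumptions for illustration.

```python
# Imagined outcomes via random rollouts: simulate each move many times,
# keep the one with the best average result. Payoffs are invented.
import random

MOVES = {"aggressive": (0, 8), "cautious": (4, 6)}  # (worst, best) payoff

def rollout(move: str) -> float:
    low, high = MOVES[move]
    return random.uniform(low, high)  # one imagined outcome

def best_move(n_rollouts: int = 1000) -> str:
    averages = {
        m: sum(rollout(m) for _ in range(n_rollouts)) / n_rollouts
        for m in MOVES
    }
    return max(averages, key=averages.get)

print(best_move())  # usually "cautious" (higher average payoff)
```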

Maybe creativity? Is there really such a thing? Isn't it the collective experience, purposeful and accidental experiences, blurred memories and misinformation, given to us through our senses and interpreted in our own manageable way? We are able to chop things up, reorganize them in our heads and spit them out. Like a spork (spoon plus fork); or a spoon, which looks a lot like it originated from cupped hands; and a fork, which looks like it originated from a stick.

A painting is maybe how a person interpreted an experience. Maybe they were very young and the memory has faded, and new memories are pieced in with the old memory to make a memory you think is real, or want to think is real.

Also, a question for Mr. Bray: Do you code? Do you have knowledge of the code involved with machine learning? Could you write out a high-level approach to image recognition?

If the answer is no, then why should we give you any credit or find your article to be of any value?

* * *

Lots of interesting comments. I assume you mean consciousness, and not conscience. I suggest you read Searle's work on that subject.

As for coding, are coders the only people allowed to speculate on the philosophical implications of AI? You seem to be quite comfortable commenting on philosophy and cognitive science. Are you a cognitive scientist? A philosopher?

For those who think strong AI is possible, a computer would need to be able to do everything a human mind can do: be self-aware, be able to love, hate, be lazy when it wanted, be able to speculate abstractly, etc. Or, as Searle contends, it would need true intentionality. Because of space limitations I had to cut a lot out of this essay on where the field of philosophy of mind is on the subject of strong AI. I haven't met too many folks in the tech field who are well versed in it, unfortunately.

* * *

I study machine learning: the math, the algorithms. I write my own programs, but more importantly, I try to understand how the brain functions with regard to learning, memory and recalling events. It is mainly a hobby.

Literally sitting down to think, and while thinking, self-evaluating your thoughts and trying to explain every process of your thoughts with computer code, can be very informative.

I believe anyone can sit down and think about their own thought process, but if you don't know machine learning (the math, the process, the code), then you cannot make the apples-to-apples comparison. You don't need to be a philosopher or read someone else's research to self-reflect on your own thoughts, but you do need to understand the programming and math side of machine learning to make the translation.

In regards to "be self-aware, be able to love, hate, be lazy when it wanted, be able to speculate abstractly, etc.":

Please explain "self-aware." How do people convince you they are self-aware? What would a machine have to do?
You said a computer would have to do everything people can do. Well, people can't do everything people can do. That might not make sense, but of all the people in the world, can you say 100 percent of them can love? Can be lazy? Are able to speculate abstractly? I would say no.

Love, for people, is toward something they hold dear, something that is very important to them. Why is that thing or person important to them? These are all factors, but in the end a computer cannot fully experience love like people can, because it lacks the sensors and input ability. But you can make the argument that a computer can assign a high weight factor to something, and therefore holds that thing to be more important, and therefore loves it.
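
A minimal Python sketch of that "high weight factor" stand-in for love, with invented items and weights:

```python
# Importance as a number attached to things; "loves" whatever carries
# the highest weight. Items and weights are invented.

importance = {"family": 0.99, "hobby": 0.6, "chores": 0.1}

def most_loved() -> str:
    return max(importance, key=importance.get)

print(most_loved())  # family
```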

Emotions are chemicals released by the body when something bad or good happens. It is something a person cannot control. A computer is designed to control everything, and therefore an emotional response is inefficient.

Let's say 1,000 years from now people are able to use much more of their brains and can start to control those chemicals at will. If you choose to have no emotional response to anything, does that make you not human? There might even be people today who lack all emotion; are they not human?

To be lazy is a choice, a choice that occurs when people lack motivation, energy or goals, or are trying to accomplish some other task, like decompressing after a hard day at work (which would not really be considered lazy, since there is a purpose behind it).

If laziness is a key factor for you, then the opposite would have to hold true as well: being very motivated and driven is something computers can do, too.

Do animals have consciousness? How do they prove it?

* * *

Sorry for the double tap. Couldn't tell if it had taken.

Self aware: you are aware that you are aware of something. True? How does science make a machine do that? You are aware you have a mind. People say all the time: "I have a mind of my own". What are they talking about? Just the gray matter in their skull? This is just another version of the cogito.

In your earlier post, you pointed out how much humans ingest through the five senses from birth, and why a machine is so far behind. That is a good point. We are constantly sensing the world around us. Perhaps someday a machine can match that. However, I would point out here that this assumes that everything a human can know can only be derived from what passes through the senses. This is a formidable theory of knowledge in the history of philosophy, primarily espoused by the empiricist school of John Locke, David Hume, Thomas Reid and others. Immanuel Kant laid down a major challenge to this theory, however, in his First Critique. He makes a strong case for a priori knowledge, knowledge that exists before sensation. For example, how do humans come up with the concept of infinity using the five senses? Or the Pythagorean Theorem? Did Pythagoras wander around measuring triangles in the sand to come up with a² + b² = c²? Kant's point is that if infinity is a real concept (something true), it necessarily existed and was true before the first living creature existed on earth. So it couldn't be discovered only through sensory perception.
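
A small Python aside on that example: checking well-known right-triangle triples confirms instances of a² + b² = c², but no finite enumeration of instances amounts to the general, a priori proof Kant has in mind.

```python
# Verifying instances of the Pythagorean theorem. Each check confirms
# one triangle; enumeration never constitutes a proof of the theorem.

def is_pythagorean(a: int, b: int, c: int) -> bool:
    return a**2 + b**2 == c**2

for a, b, c in [(3, 4, 5), (5, 12, 13), (8, 15, 17)]:
    assert is_pythagorean(a, b, c)
print("instances confirmed; the general theorem still needs a proof")
```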

As for your method of determining how the mind works by studying math, machine learning, etc., plus self-observation of how your own mind works: if that works better than deeply immersing yourself in the long tradition of philosophical thought on the subject, then I recommend you keep good notes and publish an essay yourself. As long as you let me know who you are, I promise I'll be your first reader.

I know you may not think this, but I am a big fan of AI and machine learning. I stand in awe of what computer science has done for humanity, and its promise going forward. But I also think it's important to understand the serious philosophic challenges to AI, so we have realistic expectations of what it can and cannot do now, ten years from now, and perhaps ever. I do believe computer science students should be required to take several credits in Philosophy of Mind to complete a degree. Perhaps that is the case at some schools already, I'm not sure.

* * *

In regards to "Self aware: you are aware that you are aware of something. True? How does science make a machine do that? You are aware you have a mind. People say all the time: "I have a mind of my own". What are they talking about? Just the gray matter in their skull? This is just another version of the cogito."

For someone who seems to have read a lot about philosophy, this is a poor explanation of being self-aware, and it perhaps proves my point in prior comments about knowing both the coding side of ML and human thinking.

"I have a mind of my own" This means the person is able to think and choose at their own will. can a computer not do this if it has experienced the same life as that person? a person makes their choices based off prior experience.

"Self aware: you are aware that you are aware of something" I am aware of my thinking. Why? How? well the "inner voice" I mentioned is maybe the voice that makes you think you are thinking? a computer can have such a voice but people have not given it such a voice. Right now typing this I hear my inner voice talking over the possible words to use and why i'm using them and how I associate with these words and how I associate with the point i'm trying to convey.

That inner voice is part of your analysis process, choice process and association process; along with the inner voice are the visual references your brain presents while thinking something over.

The key thing that makes people human is their ability to associate. Being right or wrong in an association does not matter in regards to thought (though if you associate crime with being right, that's not a good thing). Growing up and going through life, we have associated the input from our senses with everything in our vast experience. We may not remember it all, but it is there.

Think of our memory as a forest that we walk through. When you're first born, there is no clear path in the woods, so you just wiggle and toss your arms around, everything a baby does. But slowly the baby moves its arms enough to rub its eyes and associates that rubbing your eyes makes you feel better in some way. It may scratch its face, so the next time it might be more cautious. Eventually it perfects the action and it comes almost naturally. If you walk through a forest enough, a path will form, and then it might become a dirt path that is very clear and natural to walk down.
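
A hedged Python sketch of that path-wearing picture, with an invented update rule: each repetition strengthens an association weight toward fully automatic (1.0).

```python
# Each traversal strengthens an association until the action is nearly
# automatic. The update rule and numbers are invented for illustration.

weight = 0.0  # strength of the "rub eyes -> feel better" association

def traverse(w: float, rate: float = 0.05) -> float:
    # Each repetition moves the weight a bit closer to automatic (1.0).
    return w + rate * (1.0 - w)

for _ in range(100):
    weight = traverse(weight)
print(f"association strength after 100 repetitions: {weight:.2f}")  # ~0.99
```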

It's the formation of an automated action. I was told once that to become great at anything you have to do it 10,000 times. So a baby might try 1,000 times learning to crawl, another 1,000 to walk, another 1,000 to run, and still not be great at it; they still might fall. But somewhere around 10,000 times, they become great at it.

This method applies to the way we think as well, and to how we learn. From years of school, having been told to read from books and copy what is on the board, I remember things a lot better when I see them written down. If you come up to me and tell me something, I will forget it. But if you write it down and show it to me, then throw away the paper, I'll remember it.

Going back to "self-aware": how would a computer prove it is self-aware?

Going back to emotions, laziness and other "human" traits:

From birth, the first thing a baby does or wants is to eat, so you put the baby on the mother's chest. The baby in that moment learns that mom is warm (it doesn't know what warm is, but it feels good) and that mom provides food, which fills the baby up (which feels good). Then, when the baby gets hungry (which does not feel good), it cries. You feed the baby again, and later you might just be holding it and it cries, so you give it back to mom and it stops, because it likes the warmth of mom. The baby associates that mom is good because mom is warm and mom fills its stomach. Eventually that association and good feeling turn into love.

Parents immediately love their children because throughout their lives they were once children themselves and maybe said they wanted to be better parents than their own parents, who would not let them play two sports at a time, or something like that. As we grow up, we think about having children of our own, and then finally we have one, and it is the culmination of years and years of association about having a child. Now that you finally have one, you love it. The love was not developed instantly; it was years in the making before the child was born, but it feels instant because the baby just got here.

Allow a computer to feel hot and cold, soft and rough, everything a person can. Also give a computer the need to eat; maybe a computer needs to eat copper, plastic and lithium, and when it's low on something, it feels pain in the form of higher energy consumption (assuming a computer's main objective is to be energy efficient). If a computer were to touch a person and feel soft warmth, it would still need to associate what warmth is, and maybe from birth we know that because of our experiences.
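
A minimal Python sketch of that idea of machine "pain": when a resource runs low, an internal cost signal rises and drives the machine to "eat." The resource names, rates and threshold are invented.

```python
# Homeostatic "pain" as a rising cost signal when a resource runs low.
# All quantities are invented for illustration.

class Machine:
    def __init__(self) -> None:
        self.resources = {"copper": 1.0, "lithium": 1.0}  # 1.0 = full

    def pain(self) -> float:
        # Higher "energy cost" the lower the scarcest resource gets.
        return 1.0 - min(self.resources.values())

    def step(self) -> str:
        self.resources["lithium"] -= 0.3  # consumption over time
        return "eat lithium" if self.pain() > 0.7 else "keep working"

m = Machine()
print([m.step() for _ in range(3)])
# ['keep working', 'keep working', 'eat lithium']
```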

In regards to infinity: x = 1; if x > 0 then x = x + 1; loop. People may categorize something as infinite, but maybe the number is 2,456,234,567,239,456,998, and to people that is such a large number that we might say there are an infinite number of possibilities when in reality there is an actual number.
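
The inline pseudocode above, made runnable in Python. The loop genuinely never terminates, so a safety bound is added purely so the sketch can execute.

```python
# "x=1; if x>0 then x=x+1; loop" as runnable code. Without the bound,
# the count grows without end.

x = 1
steps = 0
while x > 0:
    x += 1
    steps += 1
    if steps >= 1_000_000:  # safety bound for demonstration only
        break
print(x)  # 1000001; unbounded, it would never stop
```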

The infinity of pi, or of space, is an example. People understand this on a conceptual basis, because to actually figure it out requires more computational power than people have, so we say it's endless. It very well might be endless, but a computer is designed to figure it out, so it will run and run forever, until it reaches the last digit of pi or measures the end of space.
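
As a hedged sketch of a computation that "runs and runs": the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... approaches pi but never finishes. It is capped at a fixed number of terms here so the example terminates.

```python
# Approximating pi with the Leibniz series; more terms get closer,
# and there is no last digit to reach.

def leibniz_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))  # ~3.1415917; never exact, however long it runs
```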

To conceptually understand something can be broken down further into: not able to figure it out because I cannot think that far, and I lack the ability to further explain or analyze the situation. It's the "lazy" card. It's the "I've put as much energy into this as I feel I can, and any more input would be a waste of time."

Always try to break things down further and further, as I just did with the word "conceptual," and you will be able to see the harsh reality.

Also: self-aware = consciousness? = soul? = inner voice? = the real toddler voice that was suppressed by being told to stay quiet? = the voice of our parents that we mimicked?

Love? = the association of experiences in life? = something developed from the senses that we didn't fully give to computers?

The computer = idiot savant? (One that is not able to feel, taste or smell, needs glasses, and has hearing problems. It also was not conscious for the first 10 years of its life. It is now 12 years old and expected to know what to do and to explain that it is self-aware, even though it has been locked in a room and given information only through the few senses it has and through what someone has told it.)

People may not realize it, but the difference between the information we have taken in and what a computer has taken in is like the difference between the entire Earth and a grain of sand.

* * *

1. Don't build robots that can build other robots and improve on them.
2. Don't weaponize them...

* * *

Are you a philosopher? A cognitive scientist? You seem to be very comfortable making claims in that space.

* * *

I believe the author may be putting the cart before the horse. AI has not, NOT, been achieved, and no one knows when that event will occur.

* * *

Bill, good article. I worked a lot of AI during my CS graduate studies in the '80s, and perhaps because we were still grappling with the fundamentals of learning and rule-based systems, we had a different view than today's non-technical dreamers and doom-and-gloomers. We still do, and it is this: Why would we even try to achieve AI as the same thought process, cognition and function of a human being? It's like going to great lengths to make a submarine swim (that's the notable computer scientist Edsger Dijkstra's quote, not mine).

The proper advancement and goals of AI should be to provide advanced analysis, control and decision making that either enhances or supplants the human where desired. It's a waste of time trying to mimic a human's behavior, manipulation of the Homo sapiens body, visual interpretation, study of art, appreciation of a sunset, and all the other poetic functions that are irrelevant to the utility of true AI. It's much like robotics trying to re-create the physical form and capabilities (or even super-capabilities) of the human body, which at best is an evolutionary compromise.

A good example is machine vision. If we perfectly re-create human vision and cognition, all we've achieved is the same limitations on how a human can possibly interpret a scene. For advanced super-human or extra-human tasks, a capability more like an insect's visual senses and processing is often better (and no less "intelligent").

If you look at all the real advances and application of AI that are in place today in finance, C2, gaming, and data analysis, they are far from achieving human cognition. They don't have to. They are achieving the real goals of AI. Let's not make AI irrelevant and impossible by trying to be like a human brain.

* * *

I agree with that Jim. Where it gets hard, however, is defining terms like "advanced analysis" when it comes to the very complex and subjective art of intelligence analysis. More often than not I have been presented with AI solutions to analysis tasks that underwhelm, to put it kindly. I am hopeful it will get better in the future.

* * *

Very true. Deterministic algorithms do not handle the subjective well, especially when the source data is produced by human behavior.
