• The AlphaGo Zero initiative exceeded human level of play in the game of Go, with the machine playing itself and learning how to win from its own experience. Credit: Saran Poroong, Shutterstock
• Experts discuss AI and other disruptive technologies at a panel at West 2018 sponsored by AFCEA’s Young AFCEANs.

Constrained Brains No Match for Machines

February 8, 2018
By Beverly Cooper

AI and Machine Learning Make Powerful Force Multiplier

Artificial intelligence, machine learning and neural networks are already influencing decision-making processes for both the military and business, yet their benefits and consequences are far from fully understood. The way these technologies are applied will have a profound effect on service personnel as well as civilians, and the timeline is accelerating, driven by the exponential growth in sensors, big data and simulation algorithms.

Artificial intelligence (AI) “is a high-risk but high-payoff technology, which through research and development will lead to a decisive advantage in combat,” explained Stephen Winchell, Presidential Innovation Fellow, Intelligence Advanced Research Projects Activity (IARPA). “We will see the same AI being used in strategy as well as in warfare,” he added.

Winchell was part of a group of experts participating in a panel on AI and other disruptive technologies at West 2018 sponsored by AFCEA’s Young AFCEANs.

“Our brains evolved to solve certain problems, and the way we think is constrained in certain ways,” Winchell acknowledged. But robots are not constrained, and they can understand and compare vectorized data and processes in ways that will have significant impact on the military.

Because human knowledge and decision making can be unreliable, AI research looks to bypass the human step and create algorithms that will excel in challenging situations with no human input. This is happening through machine learning and simulations.

AlphaGo was the first computer program to defeat a world champion at the ancient Chinese game of Go. This version of AlphaGo learned first from observing human games and then through self-play. A new version of the AI, AlphaGo Zero, trained itself from scratch, and in so doing, it exceeded not only the human level of play but also that of the original AlphaGo, defeating it 100-0. Human training and knowledge, it seems, had actually constrained AlphaGo.
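The self-play idea behind AlphaGo Zero can be illustrated at toy scale. The sketch below is not AlphaGo Zero’s actual algorithm, which combines deep neural networks with Monte Carlo tree search; it is a minimal, hypothetical example in which a tabular agent learns the stick game Nim purely by playing against itself, with no human game records:

```python
import random
from collections import defaultdict

random.seed(0)

N_START = 10         # sticks on the table at the start of each game
ACTIONS = (1, 2, 3)  # a player may take 1-3 sticks; taking the last stick wins
EPSILON = 0.2        # exploration rate during self-play
ALPHA = 0.1          # learning rate

Q = defaultdict(float)  # Q[(sticks_left, action)] -> learned value estimate

def choose(sticks):
    """Epsilon-greedy move selection from the shared value table."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def play_episode():
    """Both sides use and update the same Q table -- pure self-play."""
    sticks, player = N_START, 0
    history = {0: [], 1: []}
    while sticks > 0:
        action = choose(sticks)
        history[player].append((sticks, action))
        sticks -= action
        if sticks == 0:
            winner = player  # took the last stick
        player = 1 - player
    # Monte Carlo update: nudge the winner's moves toward +1, the loser's toward -1
    for p in (0, 1):
        target = 1.0 if p == winner else -1.0
        for sa in history[p]:
            Q[sa] += ALPHA * (target - Q[sa])

for _ in range(30000):
    play_episode()

def best_move(sticks):
    """Greedy move after training."""
    return max((a for a in ACTIONS if a <= sticks), key=lambda a: Q[(sticks, a)])

print(best_move(5))  # optimal play is to take 1, leaving the opponent 4 sticks
```

After enough self-play episodes, the agent discovers on its own the optimal strategy of always leaving its opponent a multiple of four sticks, a tactic no human ever showed it.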

In training AI systems how to beat humans in games such as Go, new moves come to light from simulations, Winchell concluded.

People do not make the best decisions, even with all the data they have, so humans are predictably wrong, he allowed. That weakness could lead to AI systems that use entirely new tactics to confuse people and take advantage of the biases in human thinking. Some of this research is likely already being integrated into tactics that our adversaries are using, he explained. 

AI is an extraordinary capability because it can learn and teach itself, agreed Ryan Tseng, CEO and cofounder of Shield AI. With self-directed learning, AI can understand where it is strong and behave optimally in those circumstances. And it can determine where it is weak and choose to become stronger in those areas.

Globally, AI can connect to all sensors and all feeds and perform strategic and tactical command and control functions, far exceeding the capabilities of humans. Giving AI access to what people have access to opens up vast computational ability and the ability to learn in real time. In this way, AI can turn its weaknesses into powerful strengths.

The characteristics of AI make it fundamentally available to all, explained Tseng, and it is not easy to trace where source code goes as a result. That makes it difficult to know what capabilities are being developed and how adversaries will apply them.

Nikolay Atanasov, assistant professor, Department of Electrical and Computer Engineering, University of California, San Diego, also acknowledged that many advances in AI are no longer classified. “If we want to keep up, there has to be intersection between things that have to be classified and the theory and software,” he suggested. A simulator can use a classified hardware system that is not available to the public, yet that hardware can still run a simulation that keeps details of the system classified while leaving the algorithms and capabilities open source. Being able to work at a large scale where many people can contribute is important, he added.

AI is a strategic capability, Tseng stressed. Peers are building programs and making investments, and it is important that the United States recognize how powerful this is. “Our military success in the future is going to depend on our ability to lead in AI,” he suggested.

While nation-states are exploring the same technologies as the United States, in an AI race, Tseng still believes the United States can win. The country has the technology, but right now it is located in business, rather than in defense. “The best AI talent in the world goes to work for large tech companies, and these individuals do not currently concern themselves with defense,” he explained. Steps need to be taken to reach out to these experts to entice them to apply their skills to solving defense and national security challenges, he recommended.

“Fortunately, there are people in U.S. agencies who are thinking strategically,” he said. “Investment is a piece of this, and hopefully our intelligence and defense agencies will fund technology to the level it needs to be funded,” Tseng stressed.

Atanasov agreed that the United States could be a leader, but he reported that “our adversaries are investing in all parts, from gathering data, to higher fidelity systems, to the ability to train with noisy and incomplete data.”

From his perspective, Atanasov sees a gap between the research and new ideas that come out of academia and the transition of that knowledge into industry and military applications. “The push in academia is to develop and move on because there is no real reward to implement in real systems.” More emphasis needs to be placed on transitioning academic efforts into practical options, he stated. Complicating the problem, academic projects are funded for only two or three years, so continuity is rare, he explained.

Other research is being advanced into applications. The IARPA portfolio includes many such examples, three of which Winchell discussed.

The first effort is the $100,000 Functional Map of the World challenge, which calls upon deep learning and other automation to identify interesting features in satellite images. The effort includes assessing the angle of shadows and changes in the data over time, and it also looks at ways to automate the analysis. IARPA has received more than 600 applications and is working to determine the winner.

Another effort centers on unconstrained facial recognition. “This is varsity-level facial recognition,” he explained. It seeks to identify faces that are occluded or affected by noise, unusual lighting conditions or emotional expressions that make identification difficult. The work involves feature vectors for matching faces in 3-D.
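As a hypothetical illustration of the feature-vector matching Winchell described, the sketch below compares a probe face’s embedding against an enrolled gallery using cosine similarity. The vectors, names and threshold are invented for the example; a real system would obtain much higher-dimensional embeddings from a trained deep network:

```python
import math

def cosine_similarity(u, v):
    """Similarity of two embedding vectors based on the angle between them."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_faces(probe, gallery, threshold=0.8):
    """Return the gallery identities whose embedding is close enough to the probe."""
    return [name for name, vec in gallery.items()
            if cosine_similarity(probe, vec) >= threshold]

# Toy 3-D embeddings; real face embeddings typically have hundreds of dimensions.
gallery = {"subject_a": [1.0, 0.0, 0.0], "subject_b": [0.0, 1.0, 0.0]}
print(match_faces([0.9, 0.1, 0.0], gallery))  # -> ['subject_a']
```

A probe embedding close in angle to an enrolled one matches even when the raw pixel values differ, which is what lets such systems tolerate occlusion, noise and lighting changes to a degree.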

The third effort connects machine intelligence with the neural sciences. It involves understanding and modeling different aspects of brain connectivity, collecting structural and functional data from different sites in the brain, and using those data to develop and implement algorithms, he concluded.

Robots will have to coexist with other robots and humans. The challenge will be determining how they will share with and receive knowledge from humans, stated Atanasov as the panel concluded.

Josh Harguess, research scientist, SPAWAR Systems Center, moderated the panel.

