A groundbreaking artificial intelligence (AI) project seeks to have sensors accept and transmit information in a manner similar to the human brain. Developed by the Defense Advanced Research Projects Agency (DARPA) along with SRI International, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project moves beyond traditional mathematical algorithms, aiming to process information in complex environments autonomously by learning relevant, probabilistically stable features and associations on its own.
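The article does not detail how such associations are learned, but one classic illustration of "learning stable associations automatically" is a Hebbian-style rule that strengthens the link between features that repeatedly co-occur. The toy update rule and sensor features below are illustrative assumptions, not the SyNAPSE architecture:

```python
# Hebbian-style association learning: a minimal sketch, assuming a
# simple co-occurrence update. Feature names are hypothetical.
import itertools

def hebbian_weights(patterns, rate=0.1):
    """Strengthen the link between each pair of features that co-occur."""
    weights = {}
    for pattern in patterns:
        for a, b in itertools.combinations(sorted(pattern), 2):
            weights[(a, b)] = weights.get((a, b), 0.0) + rate
    return weights

# Hypothetical sensor features observed together over time.
observations = [
    {"engine_noise", "heat_signature"},
    {"engine_noise", "heat_signature"},
    {"engine_noise", "birdsong"},
]
w = hebbian_weights(observations)
# The pairing that recurs most, (engine_noise, heat_signature),
# ends up with the strongest weight.
```

The point is only that the association emerges from repeated exposure rather than from a hand-written rule, which is the behavior the project is after.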
Today, the military primarily uses AI to process large amounts of information, including videos, signals and intelligence that must be deciphered and analyzed quickly. As AI develops further, such tasks may come to seem almost menial. “A robot would be able to recognize, from the activities its video cameras capture, what the people it’s observing are doing,” explains Dr. Raymond Perrault, director of SRI International’s Artificial Intelligence Center.
These intelligent systems can perceive their environment and adjust to it. “They manage to do [their mission] while the world changes around them,” Perrault says. To accomplish this, they organize ideas using mathematical logic: the programs feed sensory data into their algorithms and prove simple theorems, with each proof yielding a solution and a consequent action.
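The article does not name the inference machinery, but the pattern it describes, deriving an action from sensory facts by logical rules, can be sketched with simple forward chaining. The facts, rules and strategy below are illustrative assumptions, not SRI's implementation:

```python
# Forward-chaining inference: a minimal sketch, assuming facts are
# strings and rules are (premises -> conclusion) pairs. All names
# are hypothetical.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical sensory observations and mission rules.
observations = {"object_detected", "object_moving"}
rules = [
    ({"object_detected", "object_moving"}, "vehicle_candidate"),
    ({"vehicle_candidate"}, "track_object"),
]

derived = forward_chain(observations, rules)
# "track_object" is derived from the observations, which would then
# trigger the consequent action.
```

The derived conclusion plays the role of the "solution and consequent action" described above: the system acts only on what it can prove from its current observations.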
An alternative U.S. Defense Department use for AI involves virtual personal assistants that respond to voice commands and complete complex tasks on the user’s behalf. These AI programs analyze enormous amounts of data and make decisions. They interact and learn but don’t necessarily need to perceive the environment. “You communicate what you would like to have done, and they have a dialogue with you to get the job done,” he explains.
DARPA and SRI demonstrated this kind of application in the Personalized Assistant that Learns (PAL) program, completed in 2009; SRI calls its application the Cognitive Assistant that Learns and Organizes (CALO). The software program can reason, learn, understand instructions, explain what it is doing and react. It responds to unknown situations and can debrief after an operation is complete.
CALO takes information from a user’s e-mails, contacts, tasks and projects. It then creates a relational model of the user’s world and learns from it. From there, it can manage scheduling constraints, handle conflicts and negotiate with other parties on the user’s behalf.
Current AI technology does not come in the form of state-of-the-art robot companions; most of it still runs on standard computer hardware. What is changing, however, is how the technology helps people: through programs that learn and, eventually, hardware that imitates the human brain, Perrault notes.
Because AI is so varied, many unforeseen uses could be right around the corner. Perrault observes that interactive computers already mirror human nature so well that the incorporation of AI, though impressive, won’t look like a sci-fi movie; it will simply integrate into everyday life.

A supercomputer that runs operations like the HAL 9000 from 2001: A Space Odyssey does not yet exist, Perrault confirms. “The things that are being done are progress and are useful, but they will not allow machines to take over the world. When it comes to real tasks in the real world like dealing with perception and language, we still have a long way to go,” he emphasizes.