Intelligent Agents Get Smarter
Self-learning system offers potential knowledge transfer, training and education applications.
Prototype technology could someday help exhausted or stressed front-line officers make sound critical decisions by providing advice based on their own career experiences. The software program can create a database of an individual’s professional knowledge that can be expanded and modified throughout the person’s career.
Technologies such as expert systems, designed to act as organizational knowledge repositories, have had an uneven track record. Yet the military, government and business need systems that can draw in data and make conclusions within given parameters. This requirement is especially relevant in the military where commanders must often make critical decisions under stressful, constantly changing conditions. An aid that understands how a particular officer might react in a given situation and provides advice based on that knowledge without being clouded by fatigue or other distractions would be a valuable tool, experts say.
Assisting decision makers is the primary objective of Disciple, a computer program developed at George Mason University’s (GMU’s) Learning Agents Laboratory in Fairfax, Virginia, and supported by the Defense Advanced Research Projects Agency (DARPA), Arlington, Virginia, the U.S. Air Force and the U.S. Army. According to Dr. Gheorghe Tecuci, professor of computer science at GMU’s School of Information Technology and Engineering, the idea is to use intelligent software agents to solve complex military problems. A major part of this effort is the development of tools and methods that allow users with minimal computer skills to build, teach and maintain these systems.
Traditionally, the creation of expert systems incorporating specialized information requires the pairing of a subject matter expert (SME) in the given field with a programmer, or knowledge engineer, who designs and builds the system. This information sharing is an important issue in artificial intelligence because the knowledge engineer must spend time learning about the subject from the expert before any rules can be constructed, Tecuci says. What ensues is frequently a time-consuming and inefficient process as the programmer builds the database, confirms the rules with the expert and makes the necessary corrections. SMEs often are unfamiliar with how engineers encode rules in a computer language. “Experts are generally not used to expressing knowledge in a form needed to build a knowledge base. They use abstract or informal language and concepts or visual representation to convey information,” Tecuci explains. The engineer then takes this information and converts it into a knowledge base expressed in a precise, formal language.
GMU’s Learning Agents Laboratory takes a different approach to this task. “We want to develop a method that enables SMEs to build a knowledge base or intelligent agent by themselves without receiving much support from a knowledge engineer and without requiring them to become knowledge engineers,” Tecuci says.
The intent was to develop software learning agents that could be taught directly by users. The SME would instruct the agent in a manner similar to teaching a human student or apprentice. Tecuci describes one possible learning scenario in which the user and the agent collaborate in solving a problem. The SME would help the software agent solve the problem and add the lesson learned to its knowledge base.
This approach requires an agent that can carry out a number of the design and rule-building operations normally done by a programmer to represent the problem-solving process formally. This frees experts to input their knowledge without learning complicated coding procedures.
Traditionally, when a database for an expert system is developed, the programmer must create an ontology—a collection of terms to be used by the agent. Problem-solving rules then are defined using this terminology. Defining each problem-solving rule is a small problem in itself, and a knowledge base usually contains thousands of them. All of these actions require the creation of formal sentences and explanations in a well-defined, formal language, Tecuci says. “A subject matter expert cannot do this. So if we want an expert to build a knowledge base by himself, he really needs to do much less than this while still producing a knowledge base,” he explains.
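To make that workload concrete, the sketch below (in Python, with invented terms; Disciple itself is written in LISP) illustrates the kind of artifacts a knowledge engineer normally hand-builds: an ontology of terms plus a formal problem-solving rule expressed against those terms.

```python
# A minimal sketch, with invented names, of what a knowledge engineer normally
# hand-builds: an ontology (a hierarchy of terms) and a problem-solving rule
# expressed against those terms. This is illustrative, not Disciple's code.

# Ontology: each term maps to its more general parent concept ("is-a" links).
ONTOLOGY = {
    "armored-battalion": "battalion",
    "battalion": "military-unit",
    "recon-outpost": "enemy-asset",
    "enemy-asset": "object",
    "military-unit": "object",
}

def is_a(term, concept):
    """Return True if `term` is `concept` or a descendant of it."""
    while term is not None:
        if term == concept:
            return True
        term = ONTOLOGY.get(term)
    return False

# A formal rule defined over ontology terms: IF the goal is to enhance security
# and the threat is (a kind of) reconnaissance outpost, THEN propose destroying it.
def rule_enhance_security(task):
    if task["goal"] == "enhance-security" and is_a(task["threat"], "recon-outpost"):
        return {"action": "destroy", "target": task["threat"]}
    return None

print(rule_enhance_security({"goal": "enhance-security", "threat": "recon-outpost"}))
# -> {'action': 'destroy', 'target': 'recon-outpost'}
```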
Disciple is a family of intelligent systems designed to help users by doing the programmer’s work. To avoid creating an ontology for every agent, tools have been developed that can import ontological information from other related databases. “Instead of defining complex problem-solving rules, the SME just analyzes concrete issues and explains to the agent how to solve a particular problem,” Tecuci contends.
This process expands the agent’s knowledge base because the software learns from each example and creates a general rule for it. The agent then applies these rules to solve other problems. Like a student, Disciple will make some mistakes during the learning process. The software program proposes a way to solve a problem, and the expert analyzes the agent’s work. If an error is made, the SME corrects it and explains the reasons for the mistake, mimicking the way human students are taught, Tecuci explains.
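The teach-and-correct cycle Tecuci describes can be caricatured in a few lines of Python. The sketch below borrows the reconnaissance-outpost example mentioned later in the article; the generalization step (climbing an assumed concept hierarchy and recording exceptions) is a deliberate simplification, not Disciple’s actual learning algorithm.

```python
# A toy version of learning from an example plus corrections (a simplification,
# not Disciple's algorithm): one worked example becomes a general rule, and the
# rule is tightened whenever the expert flags a wrong answer.

PARENT = {
    "recon-outpost": "enemy-asset",
    "supply-depot": "enemy-asset",
    "enemy-asset": "object",
}

def is_a(term, concept):
    while term is not None:
        if term == concept:
            return True
        term = PARENT.get(term)
    return False

class LearnedRule:
    def __init__(self, example_target, conclusion):
        # Generalize the single positive example one level up the hierarchy.
        self.target_concept = PARENT.get(example_target, example_target)
        self.conclusion = conclusion
        self.exceptions = set()

    def propose(self, target):
        if is_a(target, self.target_concept) and target not in self.exceptions:
            return self.conclusion
        return None

    def correct(self, target):
        """Expert rejects the proposal for this target: specialize the rule."""
        self.exceptions.add(target)

# Expert shows one example: destroying a recon outpost enhances security.
rule = LearnedRule("recon-outpost", "enhances-security")
print(rule.propose("supply-depot"))   # over-generalized: 'enhances-security'
rule.correct("supply-depot")          # expert corrects the mistake
print(rule.propose("supply-depot"))   # None -- the rule has been refined
```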
Disciple is a general-purpose tool that can be taught to solve problems in predetermined subject areas. Once the expert trains the agent, it can continue to be used as an assistant by that expert or by other experts or nonexperts. The program also can teach other students to solve problems. “Basically, the application is unlimited,” Tecuci says.
Once an initial problem is formulated, the system tries to solve it through reduction. By asking a series of questions, Disciple breaks a complex task into simpler ones until enough information is available to draw conclusions. The agent learns from each problem-solving episode, allowing it to generalize to new situations. “The strength of Disciple is that it can generate a rule from a specific example,” Tecuci contends. For example, when the system is taught that destroying an enemy reconnaissance outpost enhances security, it will apply this rule in similar situations.
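A rough sketch of the reduction idea, with invented task names and questions, shows how a complex task can be narrowed one question at a time until only a directly actionable subtask remains.

```python
# Illustrative sketch of problem reduction (task names and questions invented):
# a task is repeatedly reduced to a simpler subtask by answering a guiding
# question, until the remaining subtask can be acted on directly.

REDUCTIONS = {
    "defend-objective": ("What threatens the objective?",
                         lambda answer: "neutralize-" + answer),
    "neutralize-recon-outpost": ("How can the outpost be neutralized?",
                                 lambda answer: answer),
}

def solve(task, ask):
    """Reduce `task` step by step; `ask` supplies the answer to each question."""
    while task in REDUCTIONS:
        question, reduce_step = REDUCTIONS[task]
        task = reduce_step(ask(question))
    return task  # an elementary task that needs no further reduction

answers = {
    "What threatens the objective?": "recon-outpost",
    "How can the outpost be neutralized?": "destroy-recon-outpost",
}
print(solve("defend-objective", answers.get))  # -> 'destroy-recon-outpost'
```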
In its current form, Disciple is written in the List Processing (LISP) language with a Java interface. According to Lt. Col. Michael Bowman, USA, a doctoral student working with Tecuci, the program runs on Windows 2000 or NT and operates on laptops and desktop computers such as Macintoshes and Sun system servers. If a fully field-capable application were developed, it would be written in a more powerful computer language such as C, he says.
Disciple’s development began in 1996 as a system that could be taught to locate artillery positions. The project’s early goals focused on high-performance machine learning, but in the past few years dramatic improvements have been made with respect to the ease of use and the interface, Tecuci notes.
Although Disciple is only a research prototype, it has already been used in DARPA programs such as the High Performance Knowledge Base (HPKB) project. In this study, the software worked in a military domain to solve tactical ground combat problems at the battalion and brigade levels. Disciple operated in a command post environment where a battalion commander and his staff plan operations. The scenarios were set up as missions sent from division headquarters to the commander. Col. Bowman explains that in a real operational situation, the staff officers usually have several hours to plan the next operation and develop two or three scenarios about how to attack or defend an objective. The operations officer then picks the new mission’s most likely courses of action, highlights their strengths and weaknesses, and presents them to the battalion commander.
This was the setting for the challenge problem scenarios in the HPKB program, Col. Bowman says. The courses of action and the critiques were set within the context of Army doctrine and tenets of war. On the basis of the data it learned from participating officers, Disciple analyzed proposed courses of action and listed their strengths and weaknesses.
The colonel adds that this type of system should not be used in a tank or on the fly while engaged in combat; it is intended for command center use. An officer would develop the agent over the course of his or her career. “It would remember all of the things that you teach it. The strength of an automated system is that it does not forget or get tired. So when you are in the middle of a fight, and you can’t remember what the principles of war and the tenets of operations are because you haven’t slept in 72 hours or eaten in 24 hours, the agent will remember,” he says.
The software’s strength is its ability to dive into details and present the facts that need to be considered. “We are really looking for a system that picks out one or two key things the commander needs to think about and be aware of when he makes a decision,” the colonel explains.
In experiments with the Army, the decision-making process was presented as a series of questions and answers. Disciple provided a reference source for each answer. The software offered varying levels of justification, which allowed commanders or their technical staff to troubleshoot the system. “Disciple, like any other computer, does what you tell it to do. If you leave out a step or key aspects of how you reached a decision, like a student, Disciple will try to follow instructions and work that same way,” Col. Bowman says. Because mistakes often occur in the first stages of the training process, the system allows the user to analyze answers and retrain the agent by correcting its responses, he adds.
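Loosely modeled in the sketch below (all field names and values are invented), each exchange in such a transcript pairs an answer with a justification and a cited reference, and retains any correction the expert makes; that correction is what would drive retraining.

```python
# A loose model (field names and values invented) of the question-and-answer
# transcript: each answer carries a justification and a reference source, and
# an expert's correction is recorded as the starting point for retraining.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    question: str
    answer: str
    justification: str
    reference: str                    # doctrinal source cited for the answer
    corrected: Optional[str] = None   # filled in if the expert overrides it

    def final(self):
        return self.corrected if self.corrected is not None else self.answer

entry = Answer(
    question="Does the course of action provide security for the main effort?",
    answer="Yes",
    justification="A screening unit covers the main effort's exposed flank.",
    reference="principles of war (security)",
)

entry.corrected = "No"   # the expert disagrees; this correction drives retraining
print(entry.final(), "-", entry.reference)
```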
A script was used to train the agent during the Army tests. While the system accepted answers in standard English, a specialized vocabulary was employed to name specific military objects and actions. One of the goals of the DARPA program was the reuse of knowledge and information. To accomplish this task, much of the terminology was reused among similar systems.
In the DARPA experiment, feedback from participants indicated that important aspects, such as the use of surprise in a tactical situation, had been overlooked. The process of inputting this new information, training the software and removing inconsistencies took 90 minutes. When it was over, the agent had a new set of solutions that it understood and could apply to multiple scenarios. Col. Bowman says this is a relatively simple process that can be used to input after-action reports and personal experience.
One of the goals for Disciple is to make it easier to use, the colonel says. The system currently uses stand-alone agents, but DARPA plans to call for the development of collaborating intelligent agents that would share information. For example, two SMEs could create agents that produce partial answers. The agents would then collaborate to derive a better answer to the question.
Tecuci says the first practical application for Disciple will be with the Army War College’s center of gravity program. Center of gravity studies examine the political, military and economic aspects of global affairs and their relation to U.S. foreign policy. Besides initially helping students in their course work, the software could become a decision-making aid in crisis situations because the War College staff consists of retired Army officers, history professors and strategic leader instructors who are frequently tapped to participate in real-world crises.