
Development Program Etches Future for Mobile Robots

Inexpensive processors and sensors coupled with improved computer codes lead to smart machines.

In the coming decades, autonomous robotic devices will patrol battlefields and vacuum the floors in homes. Recent advances in software and hardware are preparing the way for a generation of vehicles and tools able to operate with minimal human supervision for prolonged periods of time.

Fully independent robots have been promised for more than 20 years, but their arrival has been delayed by design, cost and development hurdles. Now, new software development methods and inexpensive, powerful processors are allowing researchers to create prototypes that will pave the way for robotic devices to appear in the next decade.

The key to fully autonomous robots will be the development of software that binds together data from servo actuators and sensors as well as learned reactions. The Defense Advanced Research Projects Agency (DARPA) is developing the software for these next-generation robots through two programs: the mobile autonomous robot software (MARS) project and the software for distributed robotics (SDR) project.

MARS seeks to develop and transition missing software technologies needed to program independent mobile robots that can respond appropriately to changing and unpredictable situations in their environment. These systems will rely neither on synchronous commands from human operators nor on high-quality, real-time or near-real-time datalink connectivity. To revolutionize the programming and utility of autonomous mobile robots, engineers are working to extend robot learning and control ideas through soft computing, robot shaping and imitation, and other interactive software development approaches.

Like the MARS project, SDR aims to develop the code necessary to program a class of extremely small and resource-constrained microrobots. Important research goals include creating control, networking and computing technologies to permit a large number of extremely resource-constrained robots to work collectively. Miniature robots would achieve large results through collective, cooperative behaviors in the same way that ants and honeybees operate in the natural world.
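
The sort of collective behavior SDR envisions can be sketched in a few lines. The following is a generic aggregation rule, invented for illustration rather than drawn from the program: each simulated microrobot senses only nearby neighbors and drifts toward their centroid, yet the swarm clusters globally with no central controller. All names and parameters are assumptions.

```python
# Minimal swarm-aggregation sketch: purely local rules, global clustering.
import random

NUM_ROBOTS = 50
SENSE_RADIUS = 5.0   # assumed local sensing range
STEP = 0.1           # fraction of the distance covered per tick

robots = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(NUM_ROBOTS)]

def neighbors(i):
    """Indices of robots within SENSE_RADIUS of robot i (local information only)."""
    xi, yi = robots[i]
    return [j for j, (xj, yj) in enumerate(robots)
            if j != i and (xj - xi) ** 2 + (yj - yi) ** 2 <= SENSE_RADIUS ** 2]

for tick in range(500):
    moves = []
    for i, (x, y) in enumerate(robots):
        near = neighbors(i)
        if not near:
            moves.append((0.0, 0.0))   # no one in range; hold position
            continue
        # Steer toward the centroid of visible neighbors.
        cx = sum(robots[j][0] for j in near) / len(near)
        cy = sum(robots[j][1] for j in near) / len(near)
        moves.append((STEP * (cx - x), STEP * (cy - y)))
    for i, (dx, dy) in enumerate(moves):
        robots[i][0] += dx
        robots[i][1] += dy
```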

According to Col. Mark Swinson, USA, MARS and SDR program manager, such autonomous systems do not yet exist; however, researchers today can build sophisticated servo-actuated mechanical devices, what he describes as mechatronic shells. Despite their sophistication, these devices lack the appropriate information technology, he says. This is the embedded control software, which has four dimensions: perception, reasoning, action and the human interface.

The development of software to facilitate machine learning is central to DARPA's robotics program. Col. Swinson notes several approaches, including reinforcement learning, Q-learning techniques, soft computing and fuzzy neural applications. These span a spectrum from strictly parametric learning, where an action is repeated several times and the robot zeroes in on the parametric value, to machine learning techniques, where the device learns from models. Machine learning can be applied to vision processing because a robot can be trained to learn patterns. Once it can recognize them, the amount of computation required to process the image collapses dramatically, the colonel explains.
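
For readers unfamiliar with the reinforcement learning end of that spectrum, the sketch below is a textbook tabular Q-learning loop on a toy corridor world. The environment, rewards and parameters are invented for illustration and are not drawn from the DARPA programs.

```python
# Tabular Q-learning on a 1-D corridor: reward waits at the right end.
import random

N_STATES = 6          # corridor cells 0..5
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit learned values, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action in the successor state.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next
```

After training, the greedy action in every cell is to step right; the robot has derived competent behavior from reward alone rather than from explicit programming.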

Propagating information is another part of machine learning. Once a robot has learned something, the data is manifested in a fairly conventional way, regardless of how the information got there. The circuit patterns are learned and the software is essentially created, Col. Swinson says. Once created, the learned material never has to be lost. Humans, whose time is limited, struggle to acquire knowledge and then pass it along through books and other means, he observes. By contrast, information gathered through machine learning can be shared directly with other robots and devices as capabilities are refined and learned. Work is being done with distributed databases to study this type of transfer.
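
The propagation point can be made concrete: once learning has produced an ordinary data structure, such as the Q table in the previous sketch, handing it to another robot is plain data transfer. The file name and format below are invented for illustration.

```python
# Export a learned Q table so another robot can start from this experience.
import json

def export_policy(Q, path="learned_policy.json"):
    # Keys are (state, action) tuples; JSON requires string keys.
    with open(path, "w") as f:
        json.dump({f"{s},{a}": v for (s, a), v in Q.items()}, f)

def import_policy(path="learned_policy.json"):
    with open(path) as f:
        raw = json.load(f)
    # Restore the (state, action) -> value mapping; the receiving robot
    # begins with the donor's learned values instead of starting from zero.
    return {tuple(map(int, k.split(","))): v for k, v in raw.items()}
```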

Another dimension is temporal, or the ability of a robot to continue learning over time. By extension, even if an individual robot's learning capabilities are grossly inferior to those identified with biological organisms, the device can learn in perpetuity; it never has to stop learning, he says.

Several different schools of thought exist concerning robot development. DARPA is pursuing all of them to give each idea a chance to prove its value. One approach looks at pure subsumption of sensor-motor couplings and deals with robots behaving like insects. As new capabilities are added onto this base behavior, the machine becomes more competent. It is a close analogy to the process of evolution, Col. Swinson observes. At the other end of the spectrum is an attempt to jump straight to human-like learning and to try to understand how the physical manifestation of the robot affects the way it learns.
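
The insect-like end of that spectrum is the subsumption style of control, which can be illustrated generically; the layers, sensor fields and commands below are hypothetical, not DARPA code. Fixed-priority behavior layers run in order, and the highest layer that fires suppresses, or subsumes, everything beneath it.

```python
# Minimal subsumption-style controller: ordered behavior layers.
def avoid(sensors):
    """Highest priority: turn away from an imminent obstacle."""
    if sensors["range_cm"] < 20:
        return "turn_left"
    return None  # layer does not fire; defer to the layers below

def seek_light(sensors):
    """Middle layer: steer toward the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    """Base competence: keep moving."""
    return "forward"

LAYERS = [avoid, seek_light, wander]  # highest priority first

def control(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:   # first layer to fire subsumes the rest
            return command

# An obstacle at 15 cm overrides the light-seeking layer: 'turn_left'.
print(control({"range_cm": 15, "light_left": 3, "light_right": 7}))
```

Adding a new capability means adding a layer; the base behaviors keep working underneath it, which is what makes the evolutionary analogy apt.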

"There are a number of people who believe that the ways we learn and the things that we learn are partly driven by the manifestation of a specific physical device [our bodies and sensory organs]. Think about trying to train something to do a physical act. For a machine to be able to map that action onto itself, there has to be some physical corollary. For example, if you are playing ball with a robot, it is rather difficult for the robot to play with you if it does not have an arm and a hand. One of the things we are trying to explore is exactly the importance of the way in which a robot is physically instantiated and the role that plays in learning--especially high-level learning," Col. Swinson says.

This research turns on four points of study--perception, reasoning, action and human interaction--how these are coupled, and how much weight is placed on each aspect. The colonel observes that perception and reasoning tend to fit into the traditional artificial intelligence domain of symbolic reasoning and symbolic data sets. However, perception, action and tight sensor-motor couplings tend to be more sensor-mediated and more numeric. Understanding the knowledge representations and capabilities necessary to accommodate both kinds of facilities is at the edge of research, he says.

Part of the machine learning aspect involves methods to formulate sensor-motor coupling--for example, industrial assembly line robotics. Contrary to common perception, programmers do not explicitly program actuators to conduct individual tasks. Typically, these systems employ teach-playback methods, which are simple to program because the robot is manually walked through all of its motions.

"If you look at the kinematics of the problem, they are extremely complex and do not offer closed-form solutions," Col. Swinson notes. "Yet, a fellow with a high-school education and some experience on the assembly line can literally walk a robot through it. All of that complex, low-level programming is abstracted out," he says.

Launched in the summer of 1998, MARS explores software technologies that enable sensory-driven autonomous navigation. The platforms benefiting most from this research would be those likely to encounter obstacles in a high-bandwidth manner. From a military perspective, this includes ground vehicles and probably small rotorcraft, Col. Swinson maintains. Researchers will examine the range of options from explicit programming techniques to domain-specific approaches such as domain-specific languages.

MARS involves a mix of these various programming options and machine learning that forms a kind of progression. One extreme is almost predominantly explicit programming, either through soft computing technologies or fuzzy neural types of systems. At the other extreme, only the most basic competencies are programmed, and developers rely on machine learning to completely fill out all of the competencies that are expected to be derived from the software. Scientists will attempt to populate this spectrum with material from both ends, from predominantly explicitly coded to predominantly learned programming. According to Col. Swinson, the goal is to pursue these technologies for several years, compete them off to see which ones offer the most promise, and then further amplify the work.

Robotics development has been delayed for a variety of reasons over the years. Many of the challenges originate within the science of robotics because it is inherently multidisciplinary, Col. Swinson says. There are issues of servo control, mechatronics, sensors, processing and the software to glue it all together. All of these systems must work cooperatively for a robot to be truly autonomous. Much progress has been made in the last 20 years, however. Experience with a variety of unmanned vehicles has brought robotics to its current levels. The colonel cites the example of bomb disposal robots, which today are used in virtually every major police department in the country.

Progress in mechatronics over the last 10 years has provided scientists with a much clearer understanding of the hurdles they currently face. A decade ago, researchers underestimated the amount of processing necessary for mobile robotics and did not realize the level of difficulty of the software issue. The current level of understanding is helping to develop tools to solve these problems, Col. Swinson says.

While software is beginning to catch up with the hardware, inexpensive data processing and sensors have made much of the current development possible. Processors in the 1,000 to 10,000 million instructions per second (MIPS) range are now becoming widely available to researchers. Previously, the 1,000 MIPS level had been a barrier to real-time vision processing for robots, and scientists did not have access to faster processors, Col. Swinson explains.
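
A back-of-envelope calculation, not from the article, suggests why roughly 1,000 MIPS marks the threshold: even a modest frame size and per-pixel processing cost consume nearly the whole budget. The frame size and operation count below are assumptions.

```python
# Rough compute budget for real-time robot vision.
width, height = 640, 480       # assumed camera frame size
fps = 30                       # real-time video rate
ops_per_pixel = 100            # assumed cost of filtering and feature extraction

ops_per_second = width * height * fps * ops_per_pixel
print(f"{ops_per_second / 1e6:.0f} MIPS needed")   # ~922 MIPS, near the barrier
```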

Within the next five years, the colonel believes prototype autonomous robots will begin performing a variety of functions. He sees many uses for small, distributed robots, especially in the area of land mine removal and other work in hazardous environments. Robots will provide military forces with reconnaissance capabilities they do not have today, and those systems will allow commanders to extend their influence beyond what can currently be done with strictly manned systems. He notes that reconnaissance unmanned aerial vehicles (UAVs) have already provided a vision of that future. Today, it is almost inconceivable for commanders to conduct military missions without the use of UAVs, he observes.

By 2010, the colonel predicts, the first commercial products featuring mobile robotics will become available. Ultimately, what will determine the success of autonomous robots in government and civilian applications is reliability, affordability and sufficient utility to be cost effective, he notes.