Mobile Robots Learn Self-Reliance

Navigation, mapping system free machines for independent, unattended commercial work.

A new generation of autonomous, problem-solving robots will soon be entering commercial service. Recent advances in computer processing power have allowed researchers to design prototype machines that can navigate in unfamiliar surroundings unassisted. Using a variety of sensors, the robot creates a constantly updated three-dimensional map as it goes through its routine. It is this self-navigation that is finally placing mobile robotic systems on the verge of commercial viability, scientists say.

Government and commercial laboratories have pursued the goal of independent mobile robots for nearly 40 years. But making a machine successfully navigate across a room or factory floor proved to be more difficult than originally envisioned. Taking advantage of gains in computer processing power and memory, researchers at Carnegie Mellon University’s Robotics Institute, Pittsburgh, have developed a grid-based mapping system that permits reliable travel in cluttered environments.

According to Hans Moravec, principal research scientist and head of the institute’s mobile robot laboratory, modern computers are roughly 1,000 times more powerful than those used on robots in the 1970s and 1980s. Citing his own 30-year research experience, Moravec notes that, in the 1980s, it took a robot up to five hours to cross a 30-meter room. Using a mainframe computer operating at 50 million instructions per second (MIPS), these early robots took 10 to 15 minutes to process the mapping data for every meter they advanced.

Navigational errors made these mapping programs brittle, he explains. A robot would typically become confused in one out of every 100 runs because the early point-based mapping methods carried too much ambiguity to be practical. In the 1980s, a new grid-based approach was developed in which a robot generates a field of cells. Data about detected objects are placed in individual cells, creating a map of the machine’s surroundings. This method is more error tolerant and permits the use of a variety of sensors, some of which produce a high degree of noise in their data, Moravec explains.
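
The grid idea can be illustrated with a small sketch. The snippet below is a minimal evidence-grid toy, assuming a simple log-odds update rule; the constants and function names are illustrative, not the actual CMU formulation. Noisy sensors simply make smaller nudges toward "occupied" or "free," which is why the method tolerates noisy data.

```python
import math

# Minimal evidence-grid sketch: each cell holds the log-odds that it is
# occupied. Sensor readings nudge cells toward "occupied" or "free."
# All constants here are illustrative, not drawn from the CMU system.

GRID_SIZE = 16               # cells per side (tiny, for clarity)
L_HIT, L_MISS = 0.9, -0.4    # log-odds increments for hit / miss readings

def make_grid():
    """Start every cell at log-odds 0, i.e. 50 percent occupied."""
    return [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def update(grid, x, y, hit):
    """Fold one sensor reading into cell (x, y)."""
    grid[y][x] += L_HIT if hit else L_MISS

def occupancy(grid, x, y):
    """Convert the cell's log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-grid[y][x]))

grid = make_grid()
# Three noisy "object detected" readings at (5, 5) plus one spurious miss:
for reading in (True, True, False, True):
    update(grid, 5, 5, reading)
print(round(occupancy(grid, 5, 5), 3))  # → 0.909
```

Even with one contradictory reading, the cell settles near "occupied," which is the error tolerance the article describes.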

The first grid-mapping system operated in two dimensions. This system was more efficient than the point-mapping methods it replaced, but it missed data about heights and overhangs. It was not until the mid-1990s that the third dimension was added. Moravec describes how scientists at the institute built several robots that navigated with sonar. Using the grid-mapping system, they were able to roam the research center’s halls and offices at night without supervision. But because they could not process data about heights and overhangs, they often got caught under a potted palm tree.

Three-dimensional mapping was originally considered too expensive for the system, but by 1996, researchers had developed a program that incorporated it. In the past three years, a visual system using three cameras to provide depth and horizontal features has been developed. The cameras remove depth and range ambiguity about objects, Moravec explains. By 2002, his team created a fully operational three-dimensional grid-mapping system of 512 by 512 by 128 cells. A robot using a 1,000 MIPS processor now can update information about its surroundings about once per second, which is at the low end of the scale for practical applications, he says.
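
The quoted grid size invites a quick back-of-envelope check. The figures below follow directly from the numbers in the article; the one-byte-per-cell storage figure is an assumption for illustration, since the article does not say how each cell was encoded.

```python
# Back-of-envelope figures for the 512 x 512 x 128 grid described above.
cells = 512 * 512 * 128
print(cells)                              # 33,554,432 cells

BYTES_PER_CELL = 1                        # assumption: compact one-byte cells
print(cells * BYTES_PER_CELL // 2**20)    # 32 MiB for the whole grid

# If all 1,000 MIPS went to one full map update per second, that leaves
# only about 30 instructions per cell -- a hint of why earlier, slower
# processors could not keep up.
robot_ips = 1_000 * 1_000_000
print(robot_ips // cells)                 # → 29
```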

Other laboratories and companies have recently announced mobile robot systems, but they employ simpler methods using less computer power. These systems often use point-tracking systems and are limited in their applications because they need to follow buried wires or specially placed range markers for their sensors. Moravec claims that his computing-rich approach has several important advantages. As processing speed increases and prices drop, his machines become more capable while point-tracking systems are essentially limited by the data provided for them. Grid-mapping robots can create dense three-dimensional maps with a one-centimeter resolution, and while this leaves a certain amount of noise, it is sufficient for industrial navigation, he says.

Competing laboratories and firms also are using relatively inexpensive scanning laser range finders for their robotics projects. Laser mapping requires less computer power because the beam does most of the work, Moravec says. However, laser prices are dropping more slowly than computer costs, and a single scanner can map only in two dimensions. Some designs use one laser to scan vertically, or two lasers to sweep both the horizontal and the vertical, to map in three dimensions. But this approach is time-consuming, and the robot does not see very far ahead of its current location, he observes. In addition, the lasers map only the area a robot has just passed through, not the space ahead of it.

However, if prices continue to drop, Carnegie Mellon scientists may incorporate lasers into their work. One major advantage of the grid approach is that the statistical model on which it is based permits a high degree of noise in the data. Lasers provide very clean data and can be incorporated into the grid-mapping system because it can support a variety of sensors, he says.

Sensor input for the grid system is very important. A major factor in the current research was to place the data into simple templates to avoid confusing the robot. Some groups attempted to abstract visual data to create three-dimensional images, but these images had limited perspective and depth, Moravec says. Because the grid system is scaled, commonly encountered objects can be turned into templates that the robot’s computer can recognize. For example, the robot can detect floors, walls and other common objects by comparing them to a template. He notes that this approach dates back to the 1970s when researchers created robots that could recognize parked cars. A car-shaped template registered whenever their sensors detected a spike in data. If the object fit the template, its grid location would be mapped and recorded.
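
The template comparison can be sketched in a few lines. Here a small binary template is scored against an occupancy grid by counting agreeing cells; the shapes, names, and scoring rule are illustrative, since the article only says that objects in the scaled grid are compared to templates.

```python
# Hypothetical template match against an occupancy grid: slide a small
# binary template over the grid and score the overlap (1 = occupied).

def match_score(grid, template, ox, oy):
    """Fraction of template cells whose occupied/free state agrees
    with the grid at offset (ox, oy)."""
    th, tw = len(template), len(template[0])
    agree = sum(
        1
        for y in range(th)
        for x in range(tw)
        if grid[oy + y][ox + x] == template[y][x]
    )
    return agree / (th * tw)

# A 2 x 3 "wall segment" template and a toy 4 x 5 grid.
wall = [[1, 1, 1],
        [0, 0, 0]]
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(match_score(grid, wall, 1, 1))  # → 1.0: perfect fit at (1, 1)
```

A real system would sweep the template over the grid and record the locations where the score clears a threshold, much as the 1970s car-recognition robots registered a car-shaped template on a data spike.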

Because of memory limitations, grid-mapping systems are more suited for indoor use. This includes industrial, office and hospital work such as floor cleaning and security. Security robots probably would use infrared and microwave motion detectors to locate human-shaped obstacles. But motion detectors do not work very well on moving platforms. The current generation of robot guards is usually linked to a command station. Moravec speculates that a commercial sentry unit also would have a built-in video camera and speakers permitting human operators to override the machine and directly control it to investigate an alarm or confront intruders.

Although mobile autonomous robots are on the verge of commercial use, they still have a long way to go developmentally. Comparing the human eye to current machine systems, Moravec notes that the human retina watches for edges and detects motion across four layers of processing. “To do the same job with computers takes one billion calculations for each layer. Extrapolating to other parts of the nervous system, simulating a human brain would take 100 trillion calculations per second to emulate,” he says.
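
A line of arithmetic on the figures Moravec quotes makes the gap concrete:

```python
# Pure arithmetic on the numbers quoted above -- no new estimates.
retina_per_layer = 1_000_000_000        # calculations per retinal layer
retina_total = retina_per_layer * 4     # four processing layers
brain = 100 * 10**12                    # 100 trillion calculations/second
robot = 1_000 * 1_000_000               # a 1,000 MIPS robot processor

print(retina_total)     # 4,000,000,000 calc/s just to match the retina
print(brain // robot)   # → 100000: brain emulation is ~100,000x today's robots
```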

On an evolutionary scale, current processing speeds of 1,000 MIPS place robots at the small vertebrate level. “A guppy,” Moravec says, adding that besides carrying out their specific functions, autonomous robots are only aware of their immediate surroundings. However, he predicts that increasing processing speeds will bring more capable systems within a decade. Once robots are commercially available in large numbers, many solutions for issues such as hazard recognition will arrive through incremental use and modification. “There is no substitute for field use for learning about problems and solving them,” he says.

Research in Japan also may influence robotics in coming decades. Japanese scientists are designing bipedal robots that closely mimic human movement and shape. Moravec notes that a major challenge is making motors that are small and strong enough to move a machine and allow it to stand upright. This development work also extends to the arms and manipulators. Although skeptical about the practicability of bipedal machines, he observes that there are many uses for light, strong arms and dexterous manipulators.

In a related application, Carnegie Mellon researchers have developed autonomous vehicle navigation systems for cars (SIGNAL, July 2001, page 17). A modified Plymouth van traveled from the East to the West Coast with minimal human supervision to demonstrate the technology. Produced by the Robotics Institute’s Navigation Laboratory, the adaptive system requires human intervention only at crossings and turns; otherwise, it can use cameras to follow the road even in heavy rain and snow conditions. Under these circumstances, the system could follow the disturbance in the water film on the road left by other cars and their tracks in the snow, Moravec says.

Using a neural network software model, scientists taught the system to ignore distractions on the side of the road and concentrate on events directly in front of the vehicle. The system operates by recording the road ahead of it as a ribbon read from left to right. The vehicle’s navigation system maintains a library of more than 100 possible actions based on road conditions. By constantly comparing the road ahead to this library, the autonomous system determines whether the car needs to stay straight or turn with the road.
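
The compare-to-a-library loop can be sketched as a nearest-neighbor lookup. The ribbons and steering labels below are invented toy data, and the actual system used a trained neural network over camera images rather than explicit distance comparisons.

```python
# Toy version of "compare the road ahead to a library of actions":
# pick the steering command whose stored ribbon best matches the
# current left-to-right view of the road.

def nearest_action(ribbon, library):
    """Return the steering command whose stored ribbon is closest
    (sum of squared differences) to the current one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda entry: dist(ribbon, entry[0]))[1]

# Each entry: (left-to-right brightness ribbon, steering command).
library = [
    ((0.1, 0.9, 0.9, 0.1), "straight"),  # road centered ahead
    ((0.9, 0.9, 0.1, 0.1), "left"),      # road bending left
    ((0.1, 0.1, 0.9, 0.9), "right"),     # road bending right
]
print(nearest_action((0.2, 0.8, 0.9, 0.2), library))  # → straight
```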

Moravec maintains that the system can be taught to drive in a few minutes under a human driver’s control. If it encounters new types of road, such as a tunnel, the system will record it while determining the road’s cross section to remain in its lane. However, while the work has successfully allowed autonomous road travel, the system is currently not sophisticated enough to interact with other vehicles, he says.

Additional information on the Carnegie Mellon University Robotics Institute is available on the World Wide Web at http://www.ri.cmu.edu.