Perceptive Underwater Robots
Researchers at Carnegie Mellon University’s School of Computer Science in Pittsburgh are examining how to create systems that can perform autonomously underwater and provide a clearer view of the subsurface environment. Such capabilities offer important applications for the U.S. sea services, the Navy, Coast Guard and Marine Corps, as well as for the commercial shipping industry for ship and harbor inspections, among other activities.
While some Carnegie Mellon University (CMU) researchers are looking at the autonomy side, or decision-making abilities, of underwater robots, Michael Kaess, CMU assistant research professor, Field Robotics Center, Robotics Institute; CMU students; and other researchers are investigating how autonomous robots view and map their surroundings. Kaess, who is also the director of the school’s Robot Perception Lab (RPL), is working to improve robot perception using advanced sensors, complex algorithms and mapping tools.
The CMU researchers are conducting basic research for the U.S. Navy, through the Office of Naval Research (ONR), to be applied to ship inspection and other capabilities—mostly in harbor environments or for underwater infrastructure. The National Science Foundation also helped fund some of the efforts.
The professor explains that autonomous underwater vehicles (AUVs) can be more forgiving to work with than unmanned aerial vehicles. “If something fails, the vehicle isn’t falling out of the sky; it’s just floating there in the water,” he notes. “The worst case is that you don’t find it anymore. That is really the biggest risk.”
Kaess, his group of students and other researchers rely on so-called hovering, multipropeller AUVs rather than torpedo-shaped AUVs. The torpedo-shaped AUVs have only a single propeller at the back, which limits the vehicle mostly to forward motion. “They have to go at a certain minimum speed, otherwise they sink,” Kaess adds. “They can’t just stop and look at things sideways.” The design of a hovering AUV, by contrast, allows researchers to control most degrees of freedom of the vehicle. That kind of maneuverability is important for hull or harbor infrastructure inspections. “With the ‘hover’ type of AUVs, we can go into some opening and back out again, or stop the vehicle in a certain place, or go up and down to take a closer look at whatever you are interested in,” Kaess notes.
The complicated part of operating subsurface autonomous machines, however, is compensating for the fact that sensors traditionally used in the air do not work in the water. “Above ground, with computer vision and LIDAR [light detection and ranging], we’re constructing 3D models as accurately as possible,” he says. “Underwater we are mostly limited to sonar [sound navigation and ranging]. At very short range, we can use cameras if we get very close to the surface.”
Researchers consider two types of mapping, Kaess explains. “One is the mapping that we do to keep something localized [located or fixed in a certain place], and that might not be a map that’s very useful to look at,” he says. “With the other, once we have the AUV localized, we can use sensor data to create additional capabilities, such as dense surface reconstructions.”
CMU’s basic research for the Navy seeks to determine how to keep an AUV localized underwater; how to ensure that an AUV has completely inspected a ship, covering all the necessary parts or areas; and how to return to a certain location underwater if an operator wants additional images of that area.
The professor notes that these AUV capabilities are especially important after an accident. Here, a robot can help inform the decision-making process of what to do with a damaged ship—whether the ship needs to be dry-docked, for example. An AUV provides a safe alternative to sending a dive team to perform the inspection.
“If a ship is damaged, there may be a hole, and it’s dangerous for a diver to go near there,” Kaess says. “If the robot gets pulled in, it’s not that bad. At the worst, it is some loss of equipment, which is nothing, compared to the diver.”
CMU’s researchers conducted their latest field trials in San Diego in November, working to inspect large naval ships 180 meters long, and in the past have done test inspections of older aircraft carriers more than 300 meters long, Kaess mentions. The Navy is looking for AUVs to provide operational flexibility not offered by traditional inspection or mapping infrastructure.
“It is possible to set up acoustic localization infrastructure with multiple surface vessels or buoys,” the professor says. “That is a typical way of operating in the open ocean, but in more cluttered environments, like a harbor, those systems don’t work very well. It’s a bit similar to GPS in the middle of a city, where the reception is pretty poor because the signal bounces off of buildings. The same problem happens to the acoustic localization underwater.”
In addition, simple sonar sensors produce only single-range measurements, such as approximately how far away an object is in a certain direction. “But if you try to do mapping with that technology, it results in very poor information,” Kaess says. Employing sonar usually involves a trade-off between the range of perception and the resolution, or image quality. Instead, the researchers use sonars with multiple transducers, multiple senders and receivers, which can form a beam that indicates more accurately the direction from which the sound waves come. “We typically use higher frequency sonars that give us higher resolution, but it is a shorter range, roughly 5 to 10 meters,” Kaess says.
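To illustrate the beamforming idea behind multi-transducer sonars, the sketch below scans candidate directions with a simple delay-and-sum beamformer; the array geometry, operating frequency and noise level are made-up example values and are not a description of the hardware CMU uses.

# Minimal delay-and-sum beamforming sketch (illustrative only).
# Assumes a uniform linear array of transducers and a narrowband signal;
# the frequency, spacing and noise level below are invented example values.
import numpy as np

c = 1500.0          # speed of sound in water, m/s
f = 900e3           # sonar frequency, Hz (example value)
lam = c / f         # wavelength
n_elems = 16        # number of transducer elements
d = lam / 2         # element spacing (half a wavelength)

def steering_vector(angle_rad):
    """Phase delays seen across the array for a plane wave arriving from angle_rad."""
    n = np.arange(n_elems)
    return np.exp(-2j * np.pi * d * n * np.sin(angle_rad) / lam)

# Simulate one narrowband snapshot arriving from 20 degrees, plus noise.
true_angle = np.deg2rad(20.0)
rng = np.random.default_rng(0)
snapshot = steering_vector(true_angle) + 0.1 * (rng.standard_normal(n_elems)
                                                + 1j * rng.standard_normal(n_elems))

# Scan candidate directions; the beamformer output power peaks near the true
# arrival direction, which is how the formed beam indicates where sound comes from.
angles = np.deg2rad(np.linspace(-60, 60, 241))
power = [np.abs(np.vdot(steering_vector(a), snapshot))**2 for a in angles]
print("estimated bearing (deg):", np.rad2deg(angles[int(np.argmax(power))]))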
To improve underwater mapping with multibeam sonar processing, CMU researchers are focusing on better image segmentation performance. Better segmentation improves the classification of sonar images into object and free-space regions, which in turn yields an estimate of the range to an object underwater.
The researchers combine “the use of intensity distribution estimation techniques with more robust formulations of the problem of segmentation of multibeam images, while also addressing important preprocessing steps that improve performance of both fixed-threshold and probabilistic classifiers,” according to a recent paper by Kaess and Massachusetts Institute of Technology researchers Pedro Teixeira, Franz Hover and John Leonard.
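As a rough illustration of the fixed-threshold case only, and not the classifiers described in the paper, the toy sketch below labels the intensity bins of a single sonar beam as object or free space and reads off the range to the first object bin; the intensities, threshold and bin size are invented example numbers.

# Toy sketch of segmenting one sonar beam into free-space and object bins.
# Illustrative only; the intensity values, threshold and bin size are invented.
import numpy as np

bin_size_m = 0.05                      # range spanned by one intensity bin
intensities = np.array(
    [0.05, 0.04, 0.06, 0.05, 0.07, 0.06, 0.05,   # background returns
     0.55, 0.80, 0.62,                            # strong return from a surface
     0.10, 0.08, 0.06])                           # shadow / background again

# Fixed-threshold classifier: a bin is "object" if its intensity exceeds tau.
tau = 0.3
is_object = intensities > tau

# The range estimate for this beam is the range of the first object bin.
if is_object.any():
    first_hit = int(np.argmax(is_object))
    print("range to object: %.2f m" % (first_hit * bin_size_m))
else:
    print("beam classified entirely as free space")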
Another technique, virtual occupancy grid mapping, is meant to increase an AUV’s autonomous maneuverability by building on simultaneous localization and mapping, known as SLAM, which generates maps, Kaess says. “SLAM essentially answers the problem of not having any external references underwater—such as GPS—for localization,” he explains. “If your localization is wrong underwater, then the question is how can you fix this problem, and fix it over time.”
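A minimal sketch of that correction idea, assuming a one-dimensional out-and-back survey with drifting odometry and a single loop-closure measurement (all numbers invented, and far simpler than the six-degree-of-freedom factor-graph solvers used in practice):

# Minimal 1D pose-graph sketch of how SLAM corrects accumulated drift.
# Illustrative only: poses, odometry values and noise levels are made up.
import numpy as np

odom = [1.1, 1.1, -0.9, -0.9]       # drifting odometry for an out-and-back run
n = len(odom) + 1                   # poses x0..x4 along one axis

rows, rhs = [], []
def add(constraint_row, value, sigma):
    # Weight each constraint by 1/sigma before stacking into least squares.
    rows.append(np.array(constraint_row, dtype=float) / sigma)
    rhs.append(value / sigma)

# Prior pinning the first pose at the start of the survey.
add([1, 0, 0, 0, 0], 0.0, 0.01)
# Odometry constraints x_{i+1} - x_i = odom_i (noisy, sigma = 0.1 m).
for i, o in enumerate(odom):
    row = [0] * n
    row[i], row[i + 1] = -1, 1
    add(row, o, 0.1)
# Loop closure: recognizing the starting point again says x4 - x0 = 0.
add([-1, 0, 0, 0, 1], 0.0, 0.02)

A = np.vstack(rows)
b = np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("dead-reckoned end position: %.2f m" % sum(odom))   # 0.40, drifted
print("SLAM-corrected end position: %.2f m" % x[-1])      # close to 0.00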
However, because SLAM doesn’t help with planning for autonomous maneuverability, the researchers have to find a way for the underwater robot to determine where to go. “If you want to do planning for the AUV, so the vehicle can figure out what to do next, a lot of maps traditionally generated in navigation techniques such as SLAM are actually not useful in that context,” the professor says. “The problem is they don’t represent free space. They only represent surfaces. The AUV has to determine if it can safely go in a certain direction, and it also needs to know where the map ends, which areas have not been seen, so that it can go to the right places to complete the map, to cover an area.”
Virtual occupancy grid mapping essentially provides submaps, partitioning space into small cubes. “For each cube, based on the sensor information, it calculates the probability of an area being occupied or not,” Kaess shares. “Then we create small local occupancy grids and rely on the sensor data to be accurate enough so we never have to correct this. We then can move these multiple maps with respect to each other, depending on what the SLAM algorithm tells us on how we need to correct the map.”
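The sketch below illustrates that per-cube probability bookkeeping on a small two-dimensional grid (the cubes described above are the three-dimensional analogue); the inverse sensor model probabilities are invented example values, not CMU's implementation.

# Toy sketch of per-cell occupancy updates in a small local grid (2D here for
# brevity). Sensor model probabilities are made-up example values.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

grid = np.zeros((5, 5))                      # log-odds of occupancy, 0 means unknown (p = 0.5)
L_OCC, L_FREE = logodds(0.7), logodds(0.4)   # inverse sensor model (example values)

def integrate_beam(grid, cells_traversed, cell_hit):
    """Mark cells the beam passed through as more likely free, and the cell
    where the return came from as more likely occupied."""
    for c in cells_traversed:
        grid[c] += L_FREE
    if cell_hit is not None:
        grid[cell_hit] += L_OCC

# Two simulated sonar returns: the beam travels along row 2 and hits column 3.
integrate_beam(grid, cells_traversed=[(2, 0), (2, 1), (2, 2)], cell_hit=(2, 3))
integrate_beam(grid, cells_traversed=[(2, 0), (2, 1), (2, 2)], cell_hit=(2, 3))

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))      # convert log-odds back to probability
print(np.round(prob, 2))
# Cells never observed stay at 0.5, traversed cells drop toward "free", and the
# hit cell rises toward "occupied" -- exactly the distinction a planner needs to
# tell safe directions from unexplored ones.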
Another area of effort involves a new way to bring together visual and inertial measurements in what is known as a visual-inertial odometry platform that uses information sparsification. Kaess and CMU researchers Jerry Hsiung, Ming Hsiao, Eric Westman and Rafael Valencia are working to find an efficient way for an unmanned aerial vehicle to perform localization using that kind of platform. The method is meant to address the aerial vehicle’s limited onboard energy and computing power for localization. That work includes taking an inertial sensor, a gyroscope and a barometer and trying to keep track of the aerial vehicle in a GPS-denied environment. “That is really important for mobile applications,” Kaess relates.
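In general, such methods keep computation bounded by repeatedly marginalizing old states out of the estimate, which densifies the remaining information matrix; sparsification then approximates that dense result with a sparser one. The sketch below shows only the standard Schur-complement marginalization step on an invented three-state information matrix, not the CMU group's sparsification algorithm.

# Minimal sketch of marginalizing an old state out of an information matrix via
# the Schur complement -- the step whose dense result sparsification methods
# then approximate. The matrix values are arbitrary examples.
import numpy as np

# Information (inverse covariance) over three scalar states [x_old, x1, x2].
Lam = np.array([[4.0, 1.0, 1.0],
                [1.0, 3.0, 0.0],
                [1.0, 0.0, 2.0]])

# Partition: keep x1 and x2; drop the old state x_old.
keep, drop = [1, 2], [0]
Laa = Lam[np.ix_(keep, keep)]
Lab = Lam[np.ix_(keep, drop)]
Lbb = Lam[np.ix_(drop, drop)]

# Schur complement: information remaining after x_old is marginalized out.
Lam_marg = Laa - Lab @ np.linalg.inv(Lbb) @ Lab.T
print(Lam_marg)
# x_old was linked to both remaining states, so removing it creates a new direct
# coupling between x1 and x2 (the off-diagonal entry goes from 0 to -0.25).
# Keeping the matrix sparse despite this fill-in is what sparsification targets.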
In addition, the researchers are considering how to take techniques commonly used with cameras and apply them to sonar, using the underwater sensors to construct 3D models from multiple views. Kaess refers to this as acoustic structure from motion. “If you take an image with a camera, you lose some information in the projection, so you don’t have the depth information.” Instead, the researchers use an imaging sonar that has a wide opening angle, so it sees a larger volume of water at once. “The math is quite different for that application, so this is where the research is going on,” he says.
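A common way to model that difference, assumed here for illustration, is that an imaging sonar observes a 3D point only as a range and a bearing, losing the elevation angle within its wide vertical opening, much as a camera loses depth; the point coordinates below are invented examples.

# Sketch of a common imaging-sonar measurement model: a 3D point in the sonar
# frame is observed only as (range, bearing); its elevation within the wide
# vertical opening is not measured. Point coordinates are example values.
import numpy as np

def sonar_project(p):
    """Map a 3D point p = (x, y, z) in the sonar frame to (range, bearing)."""
    x, y, z = p
    rng = np.sqrt(x**2 + y**2 + z**2)
    bearing = np.arctan2(y, x)
    return rng, bearing          # elevation = arcsin(z / rng) is not observed

p1 = np.array([3.00, 1.00, 0.50])
p2 = np.array([2.93, 0.98, 0.83])   # roughly the same range and bearing, but higher up
print(sonar_project(p1))
print(sonar_project(p2))
# Both points produce nearly the same measurement, so a single view cannot
# recover a point's height. Observing the same point from several sonar poses,
# as in acoustic structure from motion, constrains the missing elevation.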
Kaess notes that the researchers are also looking at the orchestration of multiple vehicles in the underwater domain, examining how to coordinate the vehicles, how to match data from one type of sensor on one vehicle with data from a completely different type of sensor on another vehicle, and how to harness different types of sonars. “It also would apply to situations where a user is performing mapping and inspection over time,” Kaess says. “So if you come back later to the same ship, how you can reuse the old maps.”