The Army Research Laboratory’s (ARL’s) robotic manipulator, or RoMan, autonomously moves debris as part of a field exercise at Camp Lejeune, North Carolina, earlier this year. Advances emerging from the ARL’s Robotics Collaborative Technology Alliance have come together to produce self-determining robots modeled after Army battlefield needs. CCDC Army Research Laboratory Public Affairs

Technologies Come Together for Army Robots

December 1, 2019
By Robert K. Ackerman

The devices are more autonomous with a smattering of human-like aptitudes.

Years of experimentation by Army scientists and academic laboratories have led to a new generation of robots whose advanced capabilities border on human reasoning. These machines can autonomously perform complex tasks, in part by learning to mimic human behavior. Scientists have generated algorithms that teach robots both to perform complex functions and to learn from humans as they evolve digitally.

Distributed research performed by government, industry and academia partners under the auspices of the Army Research Laboratory’s (ARL’s) Robotics Collaborative Technology Alliance (RCTA) has come together to produce the first vehicles in a line that lab officials hope ultimately will be deployed under battlefield conditions. These rudimentary robots are designed around predicted Army needs, but they also are opening the door to other potential applications as robotic capabilities grow more sophisticated.

“The world is very complex,” declares Stuart Young, ARL program manager for the artificial intelligence (AI) for maneuver and mobility essential research program. “One of the secrets to addressing [robot] challenges is to have systems that can learn rapidly from human demonstration and be able to either repeat those capabilities or be able to generalize those capabilities the way humans can.”

Before the RCTA began, the world was “very metric” with an environment full of objects that lacked any meaning, he relates. Applying a semantic understanding of the environment allows a robot to reason about that environment. It also permits using natural language for humans to interact with these robots. “We can use natural language to describe things in human-understandable terms, and then get the robots to actually do those things in a human-understandable way,” he posits.

Young notes that when the RCTA research began, scientists were aiming at a “super-fancy cognitive architecture” that would connect traditional AI with the metric-based world of robots. It would have been a perfect holistic model, he recalls, except that the goal proved too difficult. While the goal remains elusive, experimenters learned from that effort and were able to use natural language to ground symbols in what was happening in the environment. “I don’t know if it was a blessing in disguise, but we were able to de-couple the original goals and show these really nice capabilities,” he relates.

Over the past year, Young notes, the lab has brought together a lot of foundational research from several years of investigation and experimentation. Young, who also is the collaborative alliance manager for the RCTA, says the result is demonstrable capabilities for robot behaviors that the ARL wants to implement for the Army.

These capabilities must be robust and resilient to be deployed among soldiers, Young points out. Both military roboticists and the autonomous car industry are learning similar lessons. “You can’t deploy systems that are too brittle,” he says. Many of the approaches that the ARL is invoking are allowing it to understand this factor better, which improves its ability to move these technologies to the battlefield.

He admits “there is still a lot of brittleness” to this new consolidated capability, but the ARL has demonstrated “it fundamentally works.” Research continues to make the defining capabilities more robust and resilient, he adds.

“As researchers, we have a long-term vision where we would like the robots to be able to imagine and infer what their tasks should be in a broad, general AI sense,” Young offers. “But there are a lot of off-ramps that we’ve learned we can exploit along the way to give them capabilities.”

These off-ramps include teaming with humans so that the people perform the hard reasoning aspect of a problem—the “why,” Young continues. Robots would be left to focus on their prime abilities, which would be lower-level tasks. Human adaptability would complement less adaptable robots in human-robot teaming. Young describes this approach as exciting, and it offers a better near-term opportunity to transition robot technologies to the warfighter.

“As we pursue these capabilities to address the Army’s challenges that it puts forth before us, we uncover a lot of new ground that we need to discover,” Young states.

Maneuvering is one discipline that is a key to effective autonomous operation. Young notes that, unlike commercial robots that operate in a fixed environment amid known objects, Army robots must be able to manipulate unknown objects in the wild. To do that, they must reason about the object and how they will grasp it—and this must be done in a timely manner, he points out. Combat waits for no one, and robots must be able to keep up with their human partners.

ARL researchers are designing these robots to be able to perform tasks that otherwise might put humans in danger—clearing a road of solid obstacles, removing concertina wire or moving mines out of the way, Young explains. These tasks would be done autonomously, so soldiers don’t have to be a part of the effort.

John Rogers, a computer engineer with the ARL who works on the T1C1 robot’s dynamic mission execution tasks, says the Army has extracted the parts of the RCTA work that it considered important for robotics research. These successes have been highlighted so that other scientists can view them and potentially incorporate them into their own research.

Maggie Wigness is a computer scientist at the ARL and government lead for the RCTA T1C1 capability, which entails operation in a dynamic environment. She explains that learning robotic traversal behaviors from human demonstrations departed in part from state-of-the-art commercial approaches to autonomous navigation, which leverage large amounts of training data.

This data includes millions of images and annotations, she observes. So the laboratory focused on learning with small amounts of data from human demonstrations. Researchers focused on different traversal behaviors, one of which was to extend the robot’s capabilities from simply avoiding obstacles. Instead, the robot would use terrain information in its environment, she relates. This included, for example, being able to classify gravel, concrete, asphalt and grass and then have the robot traverse in a manner specific to those ground surfaces.
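The terrain-aware traversal Wigness describes can be pictured as a semantic cost map: each classified surface is assigned a traversal cost, and the planner prefers cheaper ground even when that means a detour. The sketch below is a minimal illustration under assumed, invented costs—neither the class weights nor the planner reflect the ARL's actual system:

```python
import heapq

# Illustrative traversal costs per semantic terrain class.
# These values are assumptions for the sketch, not ARL's.
TERRAIN_COST = {"asphalt": 1.0, "concrete": 1.1, "gravel": 1.5, "grass": 5.0}

def plan_path(grid, start, goal):
    """Dijkstra search over a grid of terrain labels, weighting each
    move by the cost of the terrain class being entered."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + TERRAIN_COST[grid[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # Walk back from the goal to recover the chosen route.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1], dist[goal]
```

With grass priced high, a robot asked to cross a grass strip will route around it on asphalt—the same kind of surface-specific behavior the paragraph describes.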

Only a handful of human teleoperations were needed to model specific behavior, she notes. The test robot was able to learn how to drive near the edge of a road as well as to traverse in a covert manner, in which it used obstacles to attain a degree of concealment.

Rogers describes how robot training has changed in just the past year. Researchers used to collect training examples and then process them offline to generate new models for evaluation in the robot. But this past year, the scientists were able to train models online so humans could see the robot’s behavior in real time.

He continues that this real-time approach allowed researchers to deliberately give the robot bad instructions. Then, humans would intervene and correct the robot’s behavior, and the vehicle would learn from that online demonstration and adapt its model to engage in the desired behavior immediately, he says.
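The intervene-and-correct loop Rogers describes resembles interactive imitation learning: the robot proposes an action, a human overrides bad proposals, and the model updates immediately from the correction. Below is a toy sketch of that loop with a linear policy and an online least-squares update—the features, learning rate, and demonstration data are all invented for illustration, not drawn from the ARL's implementation:

```python
# Toy interactive-imitation loop: the robot proposes an action from its
# current model, a human supplies a corrected action, and the model
# adapts online from each correction. (Illustrative sketch only.)

def propose(weights, features):
    """Linear policy: proposed action is a weighted sum of state features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, corrected, predicted, lr=0.1):
    """One online least-squares step toward the human-corrected action."""
    err = corrected - predicted
    return [w + lr * err * f for w, f in zip(weights, features)]

weights = [0.0, 0.0]            # deliberately bad initial model
demo = [([1.0, 0.5], 1.0),      # (state features, human-corrected action)
        ([0.5, 1.0], 0.5),
        ([1.0, 0.0], 1.0)]

for _ in range(200):            # robot acts, human corrects, model adapts
    for features, corrected in demo:
        predicted = propose(weights, features)
        weights = update(weights, features, corrected, predicted)
```

After the loop, the policy reproduces the corrected actions on the demonstrated states—the "learn from that online demonstration and adapt its model immediately" behavior the paragraph describes, in miniature.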

Wigness explains that much of this robotics technology is based on inverse reinforcement learning, especially maximum entropy inverse reinforcement learning. This was a product that emerged from the RCTA about five years ago, she says, and it was extremely novel—“state of the art.” This approach worked well for the robot scenario given limited training examples, and it was a key to many of the robot advances.
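The core idea of maximum entropy inverse reinforcement learning is to learn reward weights so that the model's expected feature counts match those of the human demonstrations, with trajectory probabilities proportional to the exponentiated reward. The sketch below shows that gradient at trajectory level over a tiny enumerable set of candidate paths—the paths and features are invented for illustration and are not the RCTA's formulation:

```python
import math

# Trajectory-level maximum-entropy IRL sketch: with a small, enumerable
# set of candidate paths, P(path) is proportional to exp(theta · features),
# and theta is nudged until expected feature counts match the demo's.
# (Toy paths and features are illustrative, not from the RCTA work.)

paths = {                      # feature vectors: [distance, grass_cells]
    "road":  [4.0, 0.0],
    "short": [2.0, 2.0],       # shortcut across grass
    "mixed": [3.0, 1.0],
}
demo_features = paths["road"]  # the human demonstrator stayed on the road

def expected_features(theta):
    """Softmax distribution over paths and its expected feature counts."""
    scores = {k: math.exp(sum(t * f for t, f in zip(theta, v)))
              for k, v in paths.items()}
    z = sum(scores.values())
    probs = {k: s / z for k, s in scores.items()}
    exp_f = [sum(probs[k] * paths[k][i] for k in paths) for i in range(2)]
    return exp_f, probs

theta = [0.0, 0.0]
for _ in range(500):
    exp_f, probs = expected_features(theta)
    # Gradient of the demo log-likelihood: demonstrated minus expected features.
    theta = [t + 0.1 * (d - e) for t, d, e in zip(theta, demo_features, exp_f)]
```

After training, the learned reward makes the demonstrated road route by far the most probable path—recovering the demonstrator's preference from very few examples, which is the property Wigness highlights.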

Young allows that the maximum entropy inverse reinforcement learning approach emerged from 6.1, or basic, research. Wigness and Rogers were able to work with other RCTA partners, so the ARL could undertake applied work to mature it into 6.2 research. “It’s a fundamental idea that was a breakthrough, and now we’re [looking at] how that fundamental breakthrough allows us to actually do something,” he says. He continues that Rogers and Wigness have applied it, so now it can help train algorithms to learn more quickly and also to learn from soldiers. It does not require a large amount of offline training.

Wigness has been working on semantic classification, also called semantic perception. Most military robots lack an understanding of their environment from a semantic perspective, Young explains, so the ARL has transitioned her work to improve navigation. With this approach, semantic perception of the environment leads to newer capabilities for semantic navigation, and Young adds that it is transitioning to the Army Ground Vehicle Systems Center as advanced technology development, or 6.3 research. He predicts that some element of this capability will be in the 2028 deployment of the first robotics combat vehicle (RCV) and the optionally manned fighting vehicle.

The ARL is working closely with the Next-Generation Combat Vehicle cross-functional team, Young reports. This work by the ARL is the foundation for the autonomy in those vehicles—the RCV and the optionally manned fighting vehicle. As the Army deploys robots, they initially will be teleoperated, he says, but the ARL and its research partners are aiming for the autonomy that the Army seeks for battlefield robots in the next decade.

Rogers says the ARL is trying to show the Next-Generation Combat Vehicle cross-functional team “the state of the possible.” This covers the next 3 to 5 years, and that team will have new capabilities because of the lab’s robotics research. Two-way dialogue between the two groups will help generate advances that will work in these new vehicles.

For a quadruped robot, the focus has been on deliberative planning and reactive planning. A deliberative planner determines specifically where the robot’s foot will fall, while a reactive planner adjusts for the environment. If the quadruped robot loses its “balance,” it would react and alter its foot placement to compensate. In a rubble-strewn building or a pond with only a few stepping stones, the robot must engage in deliberative planning yet be able to autonomously engage in reactive planning.
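The split the paragraph describes can be sketched as two layers: a deliberative planner that commits in advance to a sequence of footholds, and a reactive layer that snaps a perturbed foot landing back to the nearest safe foothold. The stone coordinates, tolerance, and slip scenario below are all invented for illustration:

```python
# Sketch of deliberative vs. reactive footstep planning (illustrative only):
# the deliberative layer picks the stone sequence toward the goal ahead of
# time; the reactive layer corrects each landing to the nearest safe stone
# if the foot comes down off-target, e.g., after a loss of balance.

STONES = [(0.0, 0.0), (0.4, 0.1), (0.8, -0.1), (1.2, 0.0)]  # safe footholds

def deliberative_plan(stones, goal_x):
    """Commit to the foothold sequence, ordered along x, up to the goal."""
    return [s for s in sorted(stones) if s[0] <= goal_x]

def reactive_correct(landing, stones, tol=0.05):
    """If the foot lands off any stone, snap to the nearest safe foothold."""
    nearest = min(stones,
                  key=lambda s: (s[0] - landing[0])**2 + (s[1] - landing[1])**2)
    dist2 = (nearest[0] - landing[0])**2 + (nearest[1] - landing[1])**2
    return landing if dist2 <= tol**2 else nearest

plan = deliberative_plan(STONES, goal_x=1.2)   # planned before moving
slip = (0.45, 0.25)                            # foot perturbed off stone two
corrected = reactive_correct(slip, STONES)     # reactive adjustment
```

A real quadruped controller would reason over kinematics, timing, and contact forces; the point of the sketch is only the division of labor—plan deliberately, then adjust reactively when the world intervenes.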

“We want to be able to deploy robots in an environment that you can’t get to with wheels and tracks,” Young says. “So, legs might be the way to go. Instead of sending dismounts, which is our only option right now, we could send a robot that could have the ability to negotiate a three-dimensional environment like an urban or subterranean space.

“Or, you could deploy a robot up a mountainside like a mountain goat to get you an observation point over the enemy, where you wouldn’t be able to traditionally send a current legged or wheeled platform,” Young adds.
