
Machine Learning Enables Improved Radar Performance

A unique algorithm incorporates data about a radar’s sensing environment.

Guided by the U.S. Army Research Laboratory, researchers from the Bradley Department of Electrical and Computer Engineering at Virginia Polytechnic Institute and State University are examining the application of machine learning to radar. They have found that memory-based machine learning algorithms work well when applied to simulated radar operations. The researchers’ system provides an effective radar waveform selection capability that iteratively learns and adjusts its waveform choices over time. Their early findings demonstrate improved radar sensing and the capability’s effectiveness across different environments.

Naturally, a radar’s sensing environment changes significantly over time due to shifting dynamic interference and a tracked element’s trajectory. By adapting a radar’s waveform based on data about that changing sensing environment—by incorporating “state of the scene” information—the researchers achieved notable radar tracking performance improvements when compared to two state-of-the-art waveform selection schemes, reports Charles Thornton, the lead researcher on the project and doctoral student at Virginia Tech, guided by Michael Buehrer, professor at Virginia Tech, and Anthony Martone, a senior scientist in the Army Research Laboratory’s Radio Frequency Signal Processing and Modeling division.

“On a basic level, what a radar is trying to do is measure things about an environment,” Thornton explains. “Based on the fact that it is trying to take measurements, it doesn’t know certain things about the environment. The radar might have a model for how the target that it is trying to track is behaving. It might have a model for how other systems are behaving, and it might have a model for the physical conditions, but it won’t have certainty about these things. There will be a best waveform for a particular type of environment, but the radar is not going to know what that is before it starts taking any measurements. So, the idea of using machine learning for waveform selection is that we’ll send out some waveforms, get some information back, and we’ll try to use that data to optimize the next waveform that we send out for the kind of model we have of the environment.”
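
The closed loop Thornton describes (transmit, measure, update a model of the environment, then pick the next waveform) can be sketched in a few lines of Python. Everything below, from the candidate waveform labels to the reward signal, is an illustrative assumption and not the team’s code.

    # Minimal sketch of a measure-and-adapt waveform selection loop.
    # Waveform labels, scores and the reward signal are placeholders.
    import random

    WAVEFORMS = ["A", "B", "C"]            # candidate waveform library (illustrative)

    def select_waveform(model):
        """Pick the waveform the current environment model scores highest."""
        return max(WAVEFORMS, key=lambda w: model.get(w, 0.0))

    def update_model(model, waveform, reward):
        """Fold the measured return quality back into the model (running average)."""
        old = model.get(waveform, 0.0)
        model[waveform] = 0.9 * old + 0.1 * reward

    model = {}
    for step in range(100):
        # occasionally explore, otherwise exploit what has been learned so far
        w = random.choice(WAVEFORMS) if random.random() < 0.1 else select_waveform(model)
        reward = random.random()           # stand-in for measured tracking quality
        update_model(model, w, reward)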

One of the key aspects of the research is the ability to build the machine-learning models without having to know a lot about a radar’s particular sensing environment beforehand, such as information about the reflection characteristics of a target, the frequency response of the target or the interference in the environment, i.e., what radar experts call clutter, Thornton explains.

“The essential idea of the research is to build these models without making too many assumptions about what particular things are going on in the environment,” he notes. “It’s actually not too difficult to think of some very simple rules for how you can vary the waveform. But the idea is that we don’t necessarily know whether these rules are going to hold up for a broad class of environments. We could make some simple rules like, ‘if the target is this far away, we want to use waveform A.’ Or if it’s close, and we expect the channel [or interference] to be not so severe, we can use a different waveform, say ‘waveform B.’ But how you switch between those waveforms is going to depend on a lot of different things that you might not be able to model very well. So, that’s where the value of the algorithms and more sophisticated approaches starts to come in.”
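
For comparison, the kind of simple hand-written rule Thornton mentions might look like the sketch below. The thresholds and waveform labels are invented for illustration only; the point is that such fixed rules become brittle once the environment departs from the assumptions baked into them.

    # Deliberately naive rule-based selector; thresholds are hypothetical.
    def rule_based_waveform(target_range_m, clutter_severity):
        if target_range_m > 10_000:
            return "A"   # e.g., a longer waveform for distant targets
        if clutter_severity < 0.3:
            return "B"   # e.g., a shorter waveform when interference is mild
        return "C"       # fallback when neither rule clearly applies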

That approach allows the model to be used in a wide range of unexpected conditions and frees users from having to make painstakingly specific calculations.

“The memory-based learning approach is really all geared on this idea of temporal correlation,” Thornton states. “If you take some number of measurements, you’ll be able to predict something about what the environment is going to look like when you take the next measurement. You would store some number of measurements in the radar’s memory, and you would use these to make a prediction about what the best next waveform to send is. But the problem is that a lot of the conventional ways of doing that assume that you take some fixed number of measurements, and then you use those number of measurements to predict the next waveform. But knowing how many measurements you need to take to make these predictions is not so obvious. And it can also be computationally costly if you have to make a ton of them.”
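
A conventional fixed-memory predictor of the sort Thornton contrasts with his approach could be sketched as follows: the last K quantized channel states index a lookup of which waveform has worked best from that history. The depth K, exactly the parameter that is hard to choose in advance, and all names here are hypothetical.

    # Fixed-memory lookup predictor (illustrative, not the team's method).
    from collections import defaultdict, deque

    K = 3                                   # fixed memory depth, chosen in advance
    history = deque(maxlen=K)               # last K observed channel states
    stats = defaultdict(lambda: defaultdict(float))   # history -> waveform -> score

    def predict(waveforms):
        """Return the best-scoring waveform for the current K-state history."""
        table = stats[tuple(history)]
        return max(waveforms, key=lambda w: table[w]) if table else waveforms[0]

    def record(state, waveform, reward):
        """Credit the chosen waveform for this history, then extend the history."""
        stats[tuple(history)][waveform] += reward
        history.append(state)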

“The idea is to try to make it as universal as possible so that you don’t need to have an expert that has specific models for how some of these things behave, and yet still have it perform optimally,” he adds.

To build the radar-specific interface, the researchers relied on a Lempel-Ziv (LZ) memory-based learning algorithm, which is commonly used in developing universal prediction and active learning schemes, as well as in universal data compression akin to the well-known zip file compression formats. The researchers’ waveform selection algorithm was further developed from an LZ version created by the Massachusetts Institute of Technology’s Vivek Farias and other scientists. The Virginia Tech team’s resulting algorithm and data compression approach not only learns the length of the memory process describing how things change over time in the radar’s environment but also learns how to act on that memory process, Thornton notes.

“The Lempel-Ziv algorithm is something that is really famous in data compression,” he offers. “It’s something that people use to compress files, like the zip format and certain other types of compression where you have this large file and you’re trying to store it in a very small space and then reconstruct it later on. It’s been used in a lot of different contexts, for compressing data and for prediction problems, but it has never been used in this case of making decisions for wireless systems like a radar communication system.”
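
The incremental parsing at the heart of LZ78-style prediction can be illustrated with a short, generic textbook sketch (not the team’s algorithm): phrases of growing length are added to a dictionary as patterns repeat, so the effective memory depth is learned from the data rather than fixed beforehand.

    # Generic LZ78 incremental parsing over a sequence of channel states.
    def lz78_parse(sequence):
        phrases = {(): 0}                 # dictionary of phrases seen so far
        current = ()
        for symbol in sequence:
            candidate = current + (symbol,)
            if candidate in phrases:      # keep extending the current phrase
                current = candidate
            else:                         # new phrase: add it, restart parsing
                phrases[candidate] = len(phrases)
                current = ()
        return phrases

    states = "ABABABCABABABC"
    print(lz78_parse(tuple(states)))      # phrases grow longer as patterns repeat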

The researchers also employed a so-called context tree, a weighting method that further enhances the selection of waveforms. “The radar system is a compressed model of the radar-environment interface in the form of a context tree,” Thornton specifies. “The radar uses this context tree-based model to select waveforms in a signal-dependent target channel. In the case of an adversary’s jammer that looks at what are the past five waveforms the radar sends out and then picks an interference behavior based on knowledge of the radar’s waveforms, the radar is still able to respond to that because it takes all this into account in its model. When it is building this context tree, it’s not just looking at the states of the target channel; it is also looking at the waveforms that it sent out from each state. So, it’s a very large model, but it ends up being feasible to represent at least for smaller problems in a real computer.”
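
A toy version of such a context tree might be organized as below, with each path from the root representing a recent history of channel states and each node tallying how well each waveform performed after that history. The structure and field names are illustrative assumptions, not the researchers’ implementation.

    # Toy context tree: histories of channel states map to per-waveform tallies.
    class ContextNode:
        def __init__(self):
            self.children = {}            # preceding state -> child node
            self.waveform_counts = {}     # waveform -> accumulated tracking score

    def update(root, history, waveform, score, max_depth=4):
        """Credit the waveform along every suffix of the recent history."""
        node = root
        node.waveform_counts[waveform] = node.waveform_counts.get(waveform, 0.0) + score
        for state in reversed(history[-max_depth:]):
            node = node.children.setdefault(state, ContextNode())
            node.waveform_counts[waveform] = node.waveform_counts.get(waveform, 0.0) + score

    root = ContextNode()
    update(root, ["clear", "jammed", "jammed"], waveform="B", score=0.8)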

He explains that the target channel represents any way the radar signal might lose energy from interference in the radar environment, including entities in the channel that may respond adversarially to the radar’s strategy, in which case the radar needs to adjust.

“Waveforms are limited,” Thornton suggests. “They can either measure how far away a target is, the range or the velocity information, but there’s always a trade-off between how precisely you can measure one or the other. And the exact signal you get back is going to depend very much on the electronic characteristics of the target and how many paths the signal takes and how it bounces off things. There’s all sorts of different considerations as to which signal is going to give you the best response at the return. So, you more or less have to try to match the waveform to the channel in some way to how things are evolving in space.”
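
The trade-off Thornton describes follows from standard radar relations: range resolution improves with waveform bandwidth, velocity resolution improves with a longer dwell, and the pulse repetition frequency trades unambiguous range against unambiguous velocity. The back-of-the-envelope numbers below use those textbook formulas, not figures from the research.

    # Standard radar resolution and ambiguity relations (illustrative values).
    C = 3.0e8                      # speed of light, m/s

    def range_resolution(bandwidth_hz):
        return C / (2.0 * bandwidth_hz)          # finer range needs more bandwidth

    def velocity_resolution(wavelength_m, dwell_s):
        return wavelength_m / (2.0 * dwell_s)    # finer velocity needs a longer dwell

    def unambiguous_range(prf_hz):
        return C / (2.0 * prf_hz)                # higher PRF shrinks unambiguous range

    def unambiguous_velocity(wavelength_m, prf_hz):
        return wavelength_m * prf_hz / 4.0       # ...but extends unambiguous velocity

    print(range_resolution(50e6))                           # 50 MHz bandwidth -> 3 m range cells
    print(unambiguous_range(10e3))                          # 10 kHz PRF -> 15 km unambiguous range
    print(unambiguous_velocity(0.03, 10e3))                 # 3 cm wavelength, 10 kHz PRF -> 75 m/s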

Thornton presented the findings at AFCEA and IEEE’s MILCOM 2021 conference in San Diego in late November/early December in the associated paper, “Waveform Selection for Radar Tracking in Target Channels With Memory via Universal Learning,” by Thornton, Buehrer and Martone. Thornton was also selected for a student grant award for the conference.

Dynamically varying the radar’s waveform to match the behavior of the sensing environment and radar objective improves both the target detection and tracking capabilities, the researchers attest.

See: Considerations of Machine Learning at the Tactical Edge