Accelerator Offers Artificial Intelligence Cookies

Air and Space Forces hunger for AI education.

Providing Air Force personnel with artificial intelligence (AI) knowledge sparks a desire for more, says Capt. Victor Lopez, an AI researcher helping to advance the technology within the Air Force through a cooperative agreement with the Massachusetts Institute of Technology (MIT).

Capt. Lopez works with the Department of the Air Force-MIT AI Accelerator, which is designed to advance AI to improve Department of the Air Force operations while also addressing broader societal needs, according to the accelerator website. The effort focuses on education for military personnel and MIT students, technology research, and product delivery for both the Air Force and the Space Force.

The captain compares the AI cravings to the children’s book, If You Give a Mouse a Cookie by Laura Numeroff. The book’s premise is that if you give a mouse a cookie, it will want a glass of milk, a mirror to check for a milk mustache and on and on. “We’ve found that when you give airmen and guardians AI education, they’re going to want to know how to use it. After you teach them how to use it, they’re going to want to know how to go out and buy it and put it into production, put it into their units,” he says.

So, accelerator personnel “flipped the script” with a concept known as ideation. “It’s a computational thinking system of engineering that combines human-centered design, artificial intelligence education, and contracting and program management education to go from an initial problem or challenge that these airmen and guardians face in their day-to-day jobs all the way to execution and solution,” Capt. Lopez explains.  

Each quarter, the accelerator brings in a new group of airmen and guardians with problems that need solving. They work through self-paced education for a month on AI and human-centered design. They then continue for an additional three months learning from a group of experts who help address their operational challenges. Those experts come mostly from Cyberworx, a human-centered design team at the Air Force Academy; from MIT Lincoln Laboratory; and from the 10-12 personnel assigned to the accelerator.

Personnel brought into the project in the spring were expected to have working prototypes by September. “They brought everything from how to make better schedules to making robots work better together to improving the debriefing process after a mission,” Capt. Lopez reports. “All of these challenges have a data-centric problem to them and have the potential for machine learning to help their workflow to save time, save people and give the warfighters more flexibility in their operations.”

The accelerator assistance goes beyond prototype development. “We’re going to teach them how to make a pitch, teach them how to write requirements, to put some of these ideas on a contract, to actually go and execute them, whether that is a government solution that we’re able to do internally, a federal lab solution we need to pay a federal lab to do some work on and then the government owns that information, or maybe it’s ready to be spun out into a company.”

Although commercial products may be the solution, that often is not the case. “Unfortunately, too many times we expect our commercial businesses to know all of the nitty-gritty details and solve all of our problems for us, but the reality is no one knows those problems better than the airmen and guardians. We’ve got to teach them how to work through and get to the proper solution,” Capt. Lopez offers.

One research effort, the Multi-Modal Vision for Synthetic Aperture Radar project, provided an AI model to aid humanitarian response efforts. The AI used machine learning techniques to train a model that generates synthetic color images from black-and-white radar images. Adding color to water and other features helps nonradar experts more easily understand the images. So, for example, rather than a local police chief or firefighter having to decode a raw radar image for the first time, they can easily identify floodwaters and other features and “do what they do and start acting and initiating those rescues,” the captain offers.

“That is huge. That’s a proof of concept. Our next test is how to get this out to the rest of our Guard units and take this academic paper and make it into a humanitarian aid and disaster response tool that our governors across the United States can have access to using our military assets from their Air National Guard bases,” he states. “If we can take that same technology and give it to those governors, we’re going to be able, hopefully, to get access and figure out what’s happening on the ground much faster, especially if there’s cloud cover.”  

For the project, researchers used Pix2Pix, a machine learning algorithm that relies on a conditional generative adversarial network and can, for example, convert simple line drawings into more realistic images or black-and-white photographs into color photographs. They downloaded publicly available satellite photos and electro-optical images from the Internet and trained the system to generate color images from the radar images.

“Over millions of these iterations, we train it and train it, and it tries to generate this color image. Every time it doesn’t get it, we say, ‘Hey, bad robot. Try again,’” Capt. Lopez reports. “And it will rework its math, and it’s going to try to generate it again.”
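The training loop the captain describes — generate a color image, compare it with the real one, adjust, and try again — can be sketched with a toy example. The snippet below is a minimal illustration, not the accelerator's code: it stands in for Pix2Pix's U-Net generator with a simple per-channel linear map and uses a plain reconstruction loss (the real Pix2Pix pairs an L1 loss with an adversarial discriminator). The image sizes, the hidden gains and offsets, and the learning rate are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired dataset standing in for (radar image, color image) pairs:
# a single-channel "radar" image and the color target the model should learn.
H, W = 16, 16
gray = rng.random((H, W))
true_w = np.array([0.2, 0.9, 0.5])         # hidden per-channel gains (invented)
true_b = np.array([0.1, 0.0, 0.3])         # hidden per-channel offsets (invented)
color = gray[..., None] * true_w + true_b  # paired ground-truth color image

# "Generator": one linear map per color channel, a stand-in for the U-Net.
w = np.zeros(3)
b = np.zeros(3)
lr = 0.3

def generate(g, w, b):
    """Colorize a grayscale image: (H, W) -> (H, W, 3)."""
    return g[..., None] * w + b

losses = []
for step in range(200):
    pred = generate(gray, w, b)
    err = pred - color                      # per-pixel, per-channel error
    losses.append((err ** 2).mean())        # reconstruction loss
    # "It will rework its math and try again": follow the loss gradient
    w -= lr * 2 * (err * gray[..., None]).mean(axis=(0, 1))
    b -= lr * 2 * err.mean(axis=(0, 1))

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Each pass plays the role of one "bad robot, try again" iteration: the mismatch between the generated and real color image drives the parameter update, and over many iterations the generated output converges toward the paired ground truth.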

Once the system was properly trained, the researchers used radar images taken during a pre-scheduled training mission within the United States and tested the technology’s ability to provide the colored images. “And lo and behold, it did,” the captain says.

“We took radar images of coastlines and urban areas. Using those radar images, we passed them through the model, tested it, and we actually got no-kidding results on the backend of a coastline and buildings and beaches and ocean and fields and fences and roads, and we could hardly contain our excitement,” he states.

The system also inadvertently taught pilots a lesson on taking in-flight images, Capt. Lopez offers. “It also showed us where pilots can do a better job of taking pictures. Sometimes when you take radar pictures, they have artifacts where pilots, like myself, decided to turn in the middle of taking the picture. It highlights some feedback for the operators, too. We’ve never had that before.”

“Every time it doesn’t get it, we say, ‘Hey, bad robot. Try again.’”
Capt. Victor Lopez, USAF
AI Researcher, Department of the Air Force-Massachusetts Institute of Technology Artificial Intelligence Accelerator

Other research projects include Guardian Autonomy for Safe Decision Making; Fast AI: Data Center and Edge Computing; and AI-Enhanced Spectral Awareness and Interference Rejection.

The AI Guardian project aims to advance AI and autonomy by developing algorithms and tools for augmenting and amplifying human decision-making. “The AI Guardian assists humans by suggesting actions using data from the past and fusing inputs from sensors and information sources,” according to the accelerator website.

The Fast AI data center project “focuses on developing a foundation for quickly building AI solutions, enabling performance and portability on both modern and legacy hardware platforms.” And the spectral awareness effort seeks to apply AI to enhance the Air Force’s ability to detect, identify and geolocate unknown radiofrequency signals, while providing “tools for adaptive interference mitigation and smart spectrum analysis” to enhance intelligence, surveillance and reconnaissance missions, communications, signals intelligence and electronic warfare, the website says.

Accelerator personnel select research projects every contract cycle; the program is now in its fifth year. “For those projects, we looked for a Department of Defense-relevant need, a program office stakeholder—which means the people who write requirements and put money to paper, if you will, to get things done—and what we call a tactical stakeholder, somebody on the ground who would actually be able to use the technology,” Capt. Lopez explains.

The Air Force and MIT signed an agreement in 2019 to create the accelerator effort to “conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems,” according to an MIT press release. The Air Force invests approximately $15 million per year. The collaboration is expected to support at least 10 MIT research projects addressing challenges that are important to the Air Force and society more broadly, including disaster response and medical readiness.

“This collaboration is very much in line with MIT’s core value of service to the nation,” Maria Zuber, MIT’s vice president for research and the E.A. Griswold Professor of Geophysics, says in the press release announcing the collaboration. “MIT researchers who choose to participate will bring state-of-the-art expertise in AI to advance Air Force mission areas and help train Air Force personnel in applications of AI.”

Capt. Lopez says the collaboration benefits both the Air Force and MIT. “The first benefit, selfishly, is going to be to the Department of the Air Force and the guardians that we have assigned to us. It gives us the opportunity to walk hand in hand with some of the best minds in artificial intelligence and machine learning across academia and gives us the ability to begin speaking the same language as the rest of the academic world. And it brings the Department of the Air Force closer to relevancy in this field,” he says.

And MIT students know their ideas will be used, the captain notes. “For the graduate students, that means that they know that their work isn’t going to go into the void. They’re not going to write a paper that’s going to go on a shelf somewhere, and there it sits, and they’re just going to go on with life. I, and the rest of the team here at the accelerator, will take their ideas and run with them and see how we can actually implement them.”