
Targeting Tomorrow: The Intersection of AI and Military Operations

Artificial intelligence targets Hamas in Israel, testing the limits of military intelligence and algorithms.

The conflict in the Middle East has revealed the use of the most sophisticated target selection systems in the world, combining cutting-edge artificial intelligence (AI) with advanced sensors.

“When we were deployed, when we attempted to validate targets, I was seeing a person with something over their shoulder. It was hard for me to know if that was an RPG [rocket-propelled grenade] or a farming tool,” said Col. Gary Beckett, a retired F-15 pilot who served in Iraq and Afghanistan.

During the conflict, the Israeli Air Force has been accused of poor target selection in its campaign; at the same time, the international media has reported on a novel AI tool that helps process intelligence and identify areas of concentrated terrorist activity.

After an investigation by the Israeli magazine “+972,” the system that finds targets for the Israel Defense Forces (IDF) was identified as “Habsora,” Hebrew for “gospel.” The report generated international controversy, and several anonymous sources offered comments in the press.

“There is constant collection of information, and this is connected also to the different applications that deal with targeting,” said Tal Mimran, manager of the International Law Forum of the Hebrew University of Jerusalem.
Mimran explained the different systems that progressively identify and classify potential targets once data is gathered.

“We have The Alchemist, which collects the information,” explained Mimran, who added, “We have the Fire Factory, which evaluates, categorizes the information into possible relevant categories from which you can derive, maybe, possible threats to the soldiers or targets for attack.”

The final stage of this process is the system identified as Gospel.

This capability is where “everything crystallizes into possible military targets for attack, and those systems—like every AI—work on data, and those systems—like every application—the more data you have, the better they will come, the more accurate they will be,” Mimran explained.

The combination of tools includes different types of software, ranging from programmed procedures and exceptions to a decision process that may take multiple factors into account.

“There are fundamental rules and principles which the system follows: there are junctures in which sometimes it is as simple as if/then, and even if it is more complicated than that, then you still have principles which are guiding the system in its decision-making,” Mimran told SIGNAL Media in an interview.
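Mimran’s description suggests a layered, rule-guided screening logic. The sketch below is purely illustrative, written to show what an if/then screening step can look like in general; every field name, threshold and label is hypothetical, and the internals of the Israeli systems are not public.

```python
from dataclasses import dataclass

# Every field, threshold and label here is hypothetical; the internals of the
# systems described in this article are not public.

@dataclass
class CandidateTarget:
    confidence: float             # model/analyst agreement that this is a military object
    civilian_proximity_m: float   # distance to the nearest known civilian structure
    category: str                 # e.g. "launch_site", "command_post"

REVIEWABLE_CATEGORIES = {"launch_site", "command_post", "weapons_cache"}

def screen(c: CandidateTarget) -> str:
    """Simple if/then screening; a real pipeline would layer many more rules."""
    if c.category not in REVIEWABLE_CATEGORIES:
        return "reject: category not on the approved review list"
    if c.confidence < 0.9:
        return "reject: insufficient intelligence confidence"
    if c.civilian_proximity_m < 100:
        return "escalate: requires senior legal and command review"
    return "forward: pass to the human targeting cell for validation"

print(screen(CandidateTarget(confidence=0.95, civilian_proximity_m=250, category="launch_site")))
```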

Mimran clarified that his expertise lies at the intersection of military technology and law, and as such he has participated in the deliberations that follow when a system locates a target and a team of experts weighs the value of the intelligence, the risks, the military rewards and the legality of the attack under both Israeli and international law.

For international humanitarian law, Gospel is “neither a weapon nor a decision-making system. Rather, it is a decision-support tool for commanders who may choose to disregard the recommendations, and as such it should be considered as a means of warfare as it forms a military system, or platform, that is being used to facilitate military operations,” Mimran laid out in a recent article published by a research center at the U.S. Military Academy at West Point.

While this system has been called a “murder factory” in the international media, its proponents stress two positive implications.

First, it aids decision-making while human involvement remains part of the process.

“When Israel decides to require from itself more in terms of [target] distinction and precaution, it is risking its own nationals, and states have a right under international law for force protection. The right to protect ‘my’ soldiers is a real legitimate consideration, but Israel time and again is risking its own nationals in order to safeguard its opponents,” Mimran explained.

Part of the collateral damage included in the equation is the result of deliberate action by Hamas to place military facilities or combatants close to civilian infrastructure to present humanitarian dilemmas to those seeking to limit the terrorist group’s capabilities.

“On the technical level, it is definitely possible to integrate into the system principles such as the distinction principle, such as proportionality,” Mimran explained.
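As a toy illustration of Mimran’s point, and not a description of any fielded system, the principles of distinction and proportionality could in principle be expressed as explicit checks that still leave the final judgment to humans. Every function and value below is hypothetical, and the numeric comparison stands in for what is, in practice, a qualitative legal judgment.

```python
# Illustrative only: how legal principles might be encoded as explicit checks.
# No fielded system is described here; the comparison below stands in for a
# qualitative judgment made by commanders and legal advisers.

def passes_distinction(is_military_objective: bool) -> bool:
    """Distinction: only objects qualifying as military objectives may be attacked."""
    return is_military_objective

def passes_proportionality(expected_civilian_harm: float,
                           anticipated_military_advantage: float) -> bool:
    """Proportionality: expected incidental harm must not be excessive relative
    to the concrete and direct military advantage anticipated."""
    return expected_civilian_harm <= anticipated_military_advantage

def recommendation(is_military_objective: bool,
                   expected_civilian_harm: float,
                   anticipated_military_advantage: float) -> str:
    if not passes_distinction(is_military_objective):
        return "do not recommend"
    if not passes_proportionality(expected_civilian_harm, anticipated_military_advantage):
        return "flag for commander and legal review"
    return "recommend for human decision"

print(recommendation(True, expected_civilian_harm=2.0, anticipated_military_advantage=5.0))
```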

Second, producing a steady flow of targets, at a scale that human teams would be unable to match, benefits both military outcomes and civilians on the ground.

Commanders receive options at an unprecedented scale. “Gospel will provide you with around 200 targets within between a week to 10 days, but usually the rate is one that a team of between 20 to 30 intelligence officers will need to work for around 300 days to reach the same amount of targets,” Mimran explained.
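Taken at face value, Mimran’s figures imply a throughput gap of more than an order of magnitude. The short calculation below simply restates the numbers he cited.

```python
# Restating the figures Mimran cited as a rough rate comparison.
gospel_targets = 200
gospel_days = (7, 10)        # "between a week to 10 days"
human_team_days = 300        # 20 to 30 officers working around 300 days

gospel_rate = [gospel_targets / d for d in gospel_days]   # ~29 and ~20 targets/day
human_rate = gospel_targets / human_team_days             # ~0.67 targets/day

print(f"Gospel: {gospel_rate[1]:.0f}-{gospel_rate[0]:.0f} targets per day")
print(f"Human team: {human_rate:.2f} targets per day, i.e. roughly "
      f"{human_team_days / gospel_days[1]:.0f}-{human_team_days / gospel_days[0]:.0f}x slower")
```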

AI helps execution, but one of the limits for an air campaign in a target-rich environment comes after the initial stage, according to Mimran.

“The quality of the targets is also rising, and this really helps in a phenomenon that we usually saw it in the beginning of the military conflict with [Hamas]. You exhaust in the first three or four weeks all of the high-value targets, all of the real golden targets, which we worked on for a long time,” Mimran offered.

As the air campaign continued, AI provided an invaluable military edge.

“The longer the conflict goes, you need to come up with new targets and human teams are limited in their ability to provide the needed amount of targets in such a short time. And then you settle—you settle in terms of intelligence,” Mimran said.

Settling for lower intelligence standards has a cost for all parties involved.

“You completely ruin the entire purpose of the operation because you lose the legitimacy,” Mimran explained, noting that in past conflicts lower-quality targets have led to violations of international humanitarian law.

“The AI, when it comes up with the Gospel, the first and the 200th targets are essentially of the same level of quality in terms of intelligence and other aspects,” Mimran said.

Despite the visible empowerment AI provides to intelligence teams, there is one underlying problem.

“AI is very good, but it’s very good on things with enough examples of,” said Steven Rogers, a U.S. Air Force senior scientist focused on artificial intelligence-enabled autonomy at the Air Force Research Laboratory.

Rogers stressed that systems excel at comparing images of known features, but when adversaries act with creativity, they can use asymmetrical capabilities to expose the system’s weaknesses.

The path to catastrophe is clear for this Air Force veteran.

“If a human has too much confidence in the AI doing the front-end work, it might miss anomalies that they would normally have found,” Rogers explained and laid out the importance of teaming systems and well-trained individuals.

“It turns out these AI solutions are notoriously bad at knowing when they’re bad,” Rogers said.
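Rogers’ observation touches on what researchers call calibration: whether a model’s stated confidence matches how often it is actually right. The generic diagnostic sketched below is a common research technique, not the Air Force Research Laboratory’s method, and the toy data are invented.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compare stated confidence with observed accuracy, bin by bin.
    A well-calibrated model that says it is 90% sure should be right ~90% of the time."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a model that is confident far more often than it is correct.
conf = [0.99, 0.97, 0.95, 0.96, 0.98, 0.94]
hit = [1, 0, 0, 1, 0, 1]
print(f"Calibration gap: {expected_calibration_error(conf, hit):.2f}")  # large gap = overconfident
```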

An Israeli F-35I taxis out for a mission at Nellis Air Force Base, Nevada. Credit: William R. Lewis, U.S. Air Force.

For a country at war for its survival, there is little time to think about the specifics.

By comparison, the thousands of civilian deaths reported by the terrorist group and the international media could suggest the system scores poorly on collateral damage. Proponents counter that the efficiencies it brings to strikes reduce collateral damage relative to the scale of the campaign.

“When you calculate the amount of explosives and bombs that have been deployed against Gaza, the numbers are not that high, when you evaluate the overall amount of operations on behalf of the IDF,” Mimran said.

Other analysts do not share this point of view.

“The satellite coverage of the bombing indicates many strikes did exceptionally large damage to civilians; and this means whole buildings will have to be demolished, and new structures will have to be built up from the ground up,” said Anthony Cordesman in an article published 26 days after the start of the war. Cordesman is part of the strategy program of the Center for Strategic and International Studies, a think-tank.

A strong system that produces success on the battlefield can also create problems of its own.

Overreliance happens when operators presume that the AI’s assessment must be followed every time. This problem expresses itself as “you need to follow the system, and if you don’t follow the system, you need to explain why,” Mimran said. If a mistake made while departing from the software is criticized more severely than one made while complying with it, the team may see a reduction in its own critical thinking, according to Mimran.

Moreover, the nature of military operations may encourage complacent thinking.

“To gain trust in an AI system, that is first going to have to happen, which will naturally occur by how militaries train. At the tactical level, the operator, as long as the machine is to perform as expected as in training, will have a fairly high level of trust in the system,” Beckett explained.

To mitigate the impact of excessive trust, the U.S. Air Force seeks to strike a balance.

“We work very hard on how to integrate, based on missions and applications, and the majority of the AI we’re using to know when to trust and what is the appropriate amount of trust,” Rogers told SIGNAL Media in an interview.

People cannot be replaced by AI. And these capabilities create knock-on effects with global consequences among those who respect the rules-based order.

“You don’t always understand how the system reaches certain decisions, and then if you rely on the system and make a mistake, who is to blame? The programmer? The person who activates the system? The person who relied on the system?” Mimran questioned. International norms currently offer no criteria for assigning that responsibility.

To solve many of these issues, the U.S. Air Force Research Laboratory has brought several parties into the production process.

“When I’m talking to an airman or guardian, in every case, they have concerns that are on the top of their mind. I have to capture those in the producer co-creation,” Rogers said.

By bringing users with real-world experience together with engineers, many concerns can be addressed from the outset of a capability’s development, though some of these reservations remain beyond the power of the technology employed.

“The maturity of the AI is not good enough,” Rogers said.

The Israeli armed forces and Israeli defense contractors declined to comment for this report.

U.S. Air Force airmen load cargo onto a C-17 Globemaster III at Ramstein Air Base, Germany, to provide the Israel Defense Forces with munitions. Credit: Senior Airman Edgar Grimaldo, U.S. Air Force.