Autonomous Weapons Vs. Humanitarian Law Accountability
Weapons powered by artificial intelligence (AI) could carry algorithms with orders to destroy another system or, potentially, enemy forces, raising questions for states about how to regulate AI-enabled systems.
In defining these weapons, the International Committee of the Red Cross, an independent humanitarian organization, said that “once activated by a human, an autonomous weapon self-initiates a strike in response to information about its environment received through sensors and on the basis of a generalized target profile.”
The prospect of an algorithm with the capacity to kill has sparked many concerns, especially in relation to the law of armed conflict.
This field challenges the prevailing technology development model of businesses that have thrived by launching products and letting the market shape improved versions.
“We’ve had companies with mottos like ‘move fast and break things,’ where it’s a totally different mindset,” said André Oboler, honorary associate professor at La Trobe University’s School of Law in Melbourne, Australia.
“Quality control in terms of thinking about consequences, in terms of legal implications, in the private sector, it’s very much been, ‘Let’s try it and see if it makes money and let’s alter things on the fly,’” Oboler added.
This approach, common for consumer electronics, is unacceptable when developing weapons, and it challenges decades, if not centuries, of military technology development.
“A lot of the drivers of technology, at least traditionally, have really come from the military, and it’s only in the last few years that we’ve had tech giants investing in leading-edge technologies at a level that is, you know, potentially competing within, if not in some niche cases surpassing, what’s happening within the military,” Oboler added.
When a government entity controls a novel design, regulation is closely observed, whereas when private companies develop a product or service, relevant laws are considered afterward, according to Oboler.
These business principles applied to military innovation directly conflict with what human rights organizations posit as necessary to uphold current international regulations.
“Any new technology of warfare must be used, and must be capable of being used, in compliance with existing rules of international humanitarian law,” said the International Committee of the Red Cross in a position paper on the use of artificial intelligence on the battlefield.

Human Rights Watch, a nongovernmental organization, has a campaign against these machines. Its position is that lethal autonomous machines dehumanize the use of force and are prone to algorithmic bias. The organization warns against taking humans out of the kill chain because “machines don’t understand context or consequences: understanding is a human capability—and without that understanding we lose moral responsibility and we undermine existing legal rules,” it said in a campaign publication.
While authorities in most democracies have advocated for “keeping a human in the loop,” arguments are not uniform beyond that point.
In the United States, the discussion is shifting, though mostly in semantics.
“Weapons systems currently have human operators ‘in the loop,’ but as they grow more sophisticated, it will be possible to shift to ‘on the loop’ operation,” wrote Capt. George Galdorisi, USN (Ret.). The shift is from a human operator deciding in real time to a human merely supervising the action. Pinning down this last level of human interaction in a regulatory framework is tricky, and it is nearly impossible to define with clarity using current legal tools.
“The military knows what it wants to achieve, but often not what technologies or even capabilities it needs to field [autonomous systems] with the right balance of autonomy and human interaction,” Capt. Galdorisi stated.
Many experts and activists in the human rights field agree that most delays in international negotiations are caused by questions of wording and definition. These may seem trivial issues to those outside the legal field, but the resulting texts will be used to measure warfighters’ conduct by lawyers and, most importantly, judges.
Another sticking point is proportionality. “Under the law, military commanders must be able to judge the necessity and proportionality of an attack and to distinguish between civilians and legitimate military targets,” said Human Rights Watch.
International discussions started 10 years ago. Christof Heyns, then-United Nations special rapporteur on extrajudicial, summary or arbitrary executions, is widely credited with sparking this debate when he warned countries at the U.N. Human Rights Council about autonomous weapons systems.
Since then, negotiations have dragged on, and some experts suggest the Convention on Cluster Munitions, adopted in 2008 as a legally binding instrument that bans the use, stockpiling, production and transfer of this ordnance, should serve as a model.
As these technologies advance in countries from Iran to Estonia and from China to Brazil, the need to place limits on what these weapons can do becomes more pressing. The controversy strikes at the foundation of how technology is developed, as trial and error is not acceptable when lives hang in the balance.
Future legal frameworks should include “support for requiring elements like predictability and understandability in the use of force,” wrote Bonnie Docherty, senior researcher at Human Rights Watch.
These two attributes grow more desirable as artificial intelligence models become increasingly complex and their creators lose a full grasp of how they function.
“There is a positive side to it potentially, that more precision means less harm to nonmilitary targets,” Oboler said.
This angle is also part of the debate among those observing the development of this field.
“The ICRC [International Committee of the Red Cross] is not opposed to new technologies of warfare per se. Certain military technologies—such as those enabling greater precision in attacks—may assist conflict parties in minimizing the humanitarian consequences of war, in particular on civilians, and in ensuring respect for the rules of war,” the organization said in a position paper.
All participants agree that liability is nevertheless unavoidable. If an autonomous system violates international humanitarian law, responsibility will be assigned much as it is for other breaches: states, commanders and operators, as well as programmers and manufacturers, could be held accountable.
Still, this debate lags well behind the pace of technology, and some participants see opportunities to gain an advantage over others. Therefore, efforts to regulate a diverse sector, with many subsectors still to be developed, would benefit from less ambitious goals, according to most experts.
“I think the requirement to have a human in the loop in some situations is going to be critical. The requirement to actually be able to audit, to know how automated decisions are being made, which is one of the general challenges in AI, to be able to have the AI be able to explain how did it reach the conclusion it reached is going to be critical as well,” said Oboler.
