In our previous article we examined the legal review of autonomous weapon systems (AWS), including but not limited to “lethal autonomous weapon systems” (LAWS). While such reviews are indispensable to ensure that states employ AWS in compliance with international humanitarian law (IHL), also commonly known as “the law of armed conflict” (LOAC), some activists and non-governmental organizations (NGOs) have called for the regulation of LAWS by new international treaty law.
This article examines the rationale and merits of this call for regulation. It also discusses alternative approaches that are easier to implement and more effective in addressing the dynamic nature of pairing AI with weapon systems. Due to the complexity of AWS regulation, we have divided this article into two parts. Readers who are new to the topic of autonomous weapon systems may wish to read our previous post on legal reviews of AWS first to familiarize themselves with the terminology.
To enable readers to quickly grasp the essential phrasings in treaty provisions and academic papers, we have once again highlighted key passages.
Estimated reading time: 9 minutes
I. Are Autonomous Weapon Systems different from Other Weapon Systems?
Autonomous weapon systems do not differ from other weapon systems in their payload, but rather in their ability to select and engage military targets without further human intervention once activated. This characteristic sets them apart from weapons which lack it and has broad military and legal implications.
Exploring all of these implications would far exceed the scope of this article. Accordingly, we will focus on the most salient concerns regarding autonomy in weapon systems hereinafter and clarify some frequent misunderstandings. This article is based on the assumption that artificial intelligence (AI) will enable the autonomy of weapon systems unless and until a different innovation replaces it.
The short answer: AWS are different from other weapon systems due to their defining feature, namely their relative autonomy in the selection and engagement of military targets.
II. Is there an Urgent Need for a New International Treaty on AWS?
This question is likely the most controversial in the current discussion of emerging disruptive technologies (EDT) in the military domain and lies at the center of this article. It hinges on one of the most complex topics in modern warfare. In its position and background paper of 12 May 2021, the International Committee of the Red Cross (ICRC) has expressed its view that
“new legally binding rules are urgently needed to address the humanitarian, legal and ethical concerns raised by AWS that have been highlighted by many States, civil society and the ICRC”.
Researchers in papers published by Human Rights Watch and IHRC, anti-AI activists and other non-governmental organizations (NGOs) have promoted this assessment and continue to dominate the discussion on autonomous weapon systems. Others have viewed the effort to regulate military AI as misguided. In the following, we will examine the most prevalent concerns raised against AWS in support of the call for their regulation in new international treaty law. We then address the perceived accountability gap in the use of AWS and explore alternative approaches to provide guidance on the employment of AWS in part 2 of this article.
1. Does Current IHL Apply to Autonomous Weapon Systems?
If current international humanitarian law (IHL) did not apply to autonomous weapon systems and their employment, closing this regulatory gap by new international treaty law would be necessary to maintain the safeguards of Additional Protocol I to the Geneva Conventions of 1949 and customary international law (CIL).
However, we consider this concern unfounded. In its Nuclear Weapons Advisory Opinion the International Court of Justice (ICJ) has confirmed the broad scope of IHL by highlighting
“the intrinsically humanitarian character of the legal principles in question which permeates the entire law of armed conflict and applies to all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future.”
As we elaborated in our previous article, Article 36 of Additional Protocol I to the Geneva Conventions of 1949 (“AP I”) expressly requires legal reviews of “any new weapon, means or method of warfare” before states can employ them in military missions. Even states that have not ratified Additional Protocol I remain bound by the rules of customary international law.
The short answer: Current international humanitarian law governs AWS, like all other weapons or weapon systems. Consequently, we dare to opine that there is no general regulatory gap that requires new international law to govern AWS.
2. Do AWS Come with Novel Risks That Current IHL Cannot Accommodate?
Despite the applicability of current IHL to any new weapons like AWS, there could still be a need for new regulatory measures if Additional Protocol I and customary IHL did not adequately protect civilians and/or combatants from new risks arising from the defining feature of all (L)AWS – their autonomy. Various activists and NGOs have argued that AWS entail such risks. We address the most frequently discussed concerns below.
2.1 Is there a Serious Risk of AI Turning Against Humanity?
Depictions of artificial intelligence in science fiction largely shape this concern. Perhaps the most seminal example is Stanley Kubrick’s “2001: A Space Odyssey”, which portrays a spaceship crew confronted by a “Heuristically programmed ALgorithmic computer” (HAL 9000) that takes control of their ship to prioritize mission success over the commander’s intentions and human decisions.
The narrative of robots that act independently of human control also inspired the popular Terminator movies and similar works. Initiatives advocating for the prohibition of killer robots have leveraged this narrative to amplify fears of seemingly uncontrollable AWS.
We would misjudge this concern if we dismissed it as mere popular culture. The idea of an artificial intelligence turning against humanity continues to shape public perception. Some AI experts also reflect this concern in their “Statement on AI Risk” that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
However, this fear is based on a form of AI that does not currently exist: Artificial General Intelligence (AGI) or “superintelligence”. While some predict that AGI will arrive as early as 2027, the majority of experts consider its development using current AI approaches to be “unlikely” or “very unlikely”. Yann LeCun, Meta’s former head of AI and founder of AMI Labs, has stated that “L.L.M.s are not a path to superintelligence or even human-level intelligence.”
Whether AGI or even superintelligence will emerge remains uncertain and depends on breakthroughs yet to come. Regulating such hypothetical capabilities through a new international treaty would therefore be premature and ineffective.
The short answer: Experts refer to current AI as “narrow AI” or “weak AI” for a reason, and it does not pose an existential threat. Human-like AGI or “superintelligence” is not imminent.
2.2 Do AWS Select Targets Independently from Human Command and Control?
The concern of a loss of control extends beyond AGI scenarios. It also arises from the defining feature of current AWS: their ability to select and engage targets without further human intervention once activated.
This characterization can lead to the misconception that AWS operate independently from human command and control (C2). However, this assumption overlooks the complexity of the targeting process.
NATO’s Allied Joint Doctrine for Joint Targeting sets out a six-phase targeting cycle. Phase 2 (target development) ensures that commanders only select lawful targets. As Mark Roorda has demonstrated, the execution of force in phase 5 is the outcome of extensive planning. This process allows commanders to define operational parameters within which AWS function in compliance with their intent and applicable IHL/LOAC.
Accordingly, AWS do not operate outside human command structures but within carefully defined operational constraints.
2.3 Do We Face Unmanageable Risks from Malfunctioning AWS?
Even the most sophisticated conventional weapons can fail, and AWS are no exception. Scholars and researchers have widely discussed the possibility of malfunction and system errors due to design flaws or unforeseen battlefield conditions. While such risks must be addressed, current debates often suffer from several analytical shortcomings:
The focus on potential technical shortcomings
Position papers frequently emphasize weaknesses in current AI models and AWS. They often suggest that states lack adequate risk management processes. This assumption is incorrect. As previously discussed, domestic frameworks such as U.S. DoD Directive 3000.09 already implement comprehensive safeguards. Other states have adopted or are developing similar measures.
The speculative nature of malfunction and system-error scenarios
Much of the literature is inherently speculative. It highlights what might go wrong without providing empirical or observational evidence from real battlefield deployments. Reliable data on AWS use remains limited.
The misconception of unpreventable design flaws
The risk of unexpected AI behavior is often portrayed as inevitable. In reality, it is a design challenge that can largely be mitigated through appropriate system parameters and safeguards.
The presumed brittleness of AI systems
Critics argue that current AI lacks adaptability in dynamic environments. While battlefield complexity presents challenges, this does not justify the assumption that AWS are inherently unreliable or uncontrollable.
2.4 Do AWS “exacerbate” the Risk of Unintended Targeting of Civilians?
- Targeting errors have occurred—and will continue to occur—in armed conflicts, regardless of AWS use. However, AI has the potential to reduce the fog of war and improve situational awareness. For example, AWS may detect outdated targeting data more quickly by cross-referencing ISR inputs from multiple sources and adjusting their trajectory in real time.
- IHL does not require the elimination of all collateral damage. It prohibits only incidental harm that would be excessive in relation to the concrete and direct military advantage anticipated. Demanding a zero-failure rate for AWS would impose an unrealistic standard. Even the strict liability regime of the EU Product Liability Directive provides exceptions where manufacturers cannot discover defects of their products given the state of scientific knowledge. There is no justification for imposing a stricter standard on AWS.
2.5 Are AWS More Prone to Adversarial Attacks That Make them Unsafe?
AWS rely on the electromagnetic spectrum for communication and data exchange. This makes them vulnerable to jamming and spoofing. However, this vulnerability is not unique to AWS. It affects most modern weapon systems.
Engineers and operators must address such risks through resilient system design and continuous lifecycle management. If operators cannot maintain communication integrity, designers can configure AWS to abort missions before executing force.
3. Are There Moral Concerns Against AWS That Current IHL Does Not Address?
Moral objections to AWS often rest on the assumption that these systems make life-and-death decisions. In reality, human commanders rather than machines make such decisions within a structured targeting process.
Another known concern is the lack of compassion in AWS. Compassion is “the feeling that arises in witnessing another’s suffering and that motivates a subsequent desire to help.” However, IHL does not require compassion as such. It prohibits weapons that cause superfluous injury or unnecessary suffering. This rule applies equally to AWS.
Accordingly, there is no regulatory gap concerning compassion in armed conflicts. The principle of military necessity permits measures necessary to achieve a legitimate military purpose, provided IHL does not otherwise prohibit them. Combatants cannot avoid all suffering in armed conflicts and do not need to do so.
4. Does the Use of AWS Provide an Unfair Advantage?
The argument that technologically advanced states gain an unfair advantage does not justify a new international treaty on AWS. Principles of equality in capabilities do not govern war. As Marco Sassoli has observed, requiring belligerents to use only mutually available weapons reflects a misconception of IHL.
Moreover, any serious discussion must also consider the potential humanitarian benefits of AWS, which many commentators often overlook.
Coming up next week: In Part 2 of this article, we will address another key concern raised in support of a new international treaty on AWS: the alleged accountability gap. We will then conclude with our notes on other instruments that can be used to govern the responsible use of AWS as an alternative to new international treaty law.