In the previous instalments of this series, we provided a general introduction to legal weapon reviews with particular reference to Article 36 of Additional Protocol I (“AP I”). We also addressed the legal review of “cyber weapons”. This article examines the legal review of autonomous weapon systems (AWS), including but not limited to “lethal autonomous weapon systems” (“LAWS”).
Readers who are new to the topic of legal weapon reviews may want to read our introduction first in order to build on the insights presented there.
To enable readers to quickly grasp the essential phrasings in treaty provisions, academic papers and military manuals, we have once again highlighted key passages.
Estimated reading time: 15 minutes
I. What Are Autonomous Weapon Systems?
The GGE’s Working Characterization of AWS
Although the Group of Governmental Experts (GGE) at the United Nations (UN) has been discussing Autonomous Weapon Systems (AWS) within the framework of the Convention on Certain Conventional Weapons (CCW) since 2017, it has not yet reached a final consensus on a definition.
However, in its rolling text of 12 May 2025, the GGE adhered to its prior working characterization of “Lethal Autonomous Weapon Systems” (“LAWS”), which states that
“A lethal autonomous weapon system can be characterized as an integrated combination of one or more weapons and technological components that enable the system to identify and/or select, and engage a target, without intervention by a human user in the execution of these tasks.”
At the same time, the GGE noted that “the above description is without prejudice to any future understanding and the potential modification of this characterization, as well as the possible exclusion of certain types of systems.”
On 3 September 2025, the UNODA published a revised redline version of this definition but not an updated consolidated version of the rolling text.
Definition of LAWS by the U.S. Department of Defense
The GGE working characterization of LAWS essentially concurs with the definition of AWS in the U.S. Department of Defense RAI Strategy and Implementation Pathway and in DoD Directive 3000.09, which define an AWS as
“a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”
It also corresponds to the non-binding definitions of AWS developed by various non-governmental organisations (NGOs) such as the Asia-Pacific Institute for Law and Security (apils) and —to a degree— the International Committee of the Red Cross (ICRC).
Definition of “Weapon Systems”
As with the notion of AWS, there is no binding international agreement defining what the term “weapon system” encompasses. However, the U.S. DoD Dictionary of Military and Associated Terms defines a “weapon system” as
“A combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment (if applicable) required for self-sufficiency.”
Automated Versus Autonomous Weapon Systems
In this article, we view the ability to select and engage targets without further human intervention as the decisive criterion for distinguishing autonomous from merely automated weapon systems. According to the Chair of the GGE, the term “select” encompasses “identification”.
While both can operate without human intervention once activated, automated weapon systems, such as simple anti-personnel land mines (APL), activate whenever any person, friend or foe, triggers them. Although such mines are emplaced against enemy forces, they lack the functionality to select specific targets.
Although some authors and NGOs have suggested that such “unattended” weapons are autonomous, we do not adopt this view. Instead, we adhere to the insight that automation is not autonomy.
II. Does International Law Prohibit AWS?
As of March 2026 there is no international convention or treaty that prohibits autonomous weapon systems.
Some voices have demanded a total ban on AWS. Others, like the ICRC, have called for the prohibition of AWS that do not meet certain requirements.
Even so, AWS —whether lethal or non-lethal— are not unlawful per se if states can employ them in a manner that complies with current international law, in particular Additional Protocol I to the Geneva Conventions of 1949.
Whether the international community should adopt new international treaty rules to address the perceived risks of AI in the military domain remains highly controversial.
We will not expand on this issue here. Given its complexity and its potential impact on defence readiness, it deserves separate treatment.
III. Are AWS Subject to Legal Review?
States that have become parties to Additional Protocol I must conduct legal reviews of new weapons. This obligation also extends to autonomous weapons.
As the GGE confirmed in May 2025:
“IHL applies to armed conflict, whether international or non-international, and governs the use of all weapons, means and methods of warfare, those of the past, those of the present and those of the future, and is, consequently, applicable regardless of the military technology used.”
Weapon systems generally fall within the notion of ‘means of warfare’ under Article 36 AP I. We suggest that the scope of the review may also comprise certain subsystems, such as fire-control systems, targeting sensors and autonomous engagement functions, that influence the application of force. However, purely enabling functions, such as navigation software or propulsion systems, may fall outside the review unless they materially affect the conduct of hostilities. Readers who wish to explore these and other aspects of legal AWS reviews beyond this article may find the contribution of Copeland, Liivoja and Sanders particularly informative.
IV. How Should States Carry Out Legal Reviews of AWS?
At present, neither Article 36 AP I nor any other international treaty law requires the consideration of particular criteria or processes that apply only to AWS.
As there is currently no ban on AWS, states must first ensure that their employment does not cause superfluous injury or unnecessary suffering. They must also ensure that the design as well as the intended and expected operation of AWS allow for their use in compliance with other core principles of international humanitarian law, notably:
- distinction
- proportionality
- precautions in attack.
In addition, states must consider the Martens Clause and foreseeable developments in IHL.
1. Domestic Regulations and Current State Practice
A growing number of states are already addressing AWS in domestic directives, guides and policies, and this trend will likely continue. These frameworks give special consideration to the unique features of autonomy and the challenges they entail.
Such domestic frameworks—rather than new international treaties—will likely continue to shape the legal review and employment of AWS in the foreseeable future.
In this article we therefore look at two primary examples:
- Australia, which has ratified Additional Protocol I
- the United States, which conducted legal reviews even before the High Contracting Parties conceived Article 36 AP I.
Australia
Australia’s Guide first addresses AWS in Part II, Step 3 of the regular Article 36 review process. In this step, the Guide notably refers legal reviewers to United Nations General Assembly resolutions, including the Resolution on Lethal Autonomous Weapons Systems (A/78/L.56), and to Australia’s AI Ethics Principles.
Australia’s AI Ethics Principles
The AI Ethics Principles do not provide a sector-specific framework for the use of AI in the military domain. However, they offer guidance for businesses and governments alike on the responsible design, development and implementation of AI.
Some of these principles better suit civilian uses of AI. For instance, maintaining privacy for legitimate military targets through appropriate data anonymization may not be a primary concern for military commanders.
However, most of these principles are equally relevant in the military domain—or even more so. By way of example, these include:
- User-centric design: AWS should augment human cognitive processes in system operation
- Transparency and explainability: Commanders and operators must be able to understand “what the system is doing and why”
- System reliability and safety: Although often regarded as technical requirements, they also affect system legality. For AWS, thorough testing and validation must ensure that systems meet the commander’s intentions in legitimate military operations.
Australia’s Functional Review Step for AWS
Moreover, Part III of Australia’s Guide provides for a distinct functional review step for AWS. This additional step requires AWS reviewers to:
- “identify the AWS’ functions and the governing LOAC rules;
- identify the standard of legal compliance required to determine legality;
- identify the operational context in which the AWS will perform its functions;
- obtain the AWS’ technical, functional and performance data relevant to the identified AWS functions;
- conduct a legal risk analysis for each AWS function governed by LOAC; and,
- identify appropriate risk mitigation (e.g. human control) necessary to ensure the AWS function is performed lawfully.”
Part III of Australia’s Guide provides further notes and instructions on these particular elements of AWS reviews.
United States of America
The approach of the United States to AWS reviews is unique. DoD Directive 3000.09 Autonomy in Weapon Systems requires that AWS undergo two separate reviews: (1) A review before formal development and (2) A review before fielding.
(1) Pre-Development Review
The first review focuses on system design. It must ensure that the system allows for the exercise of appropriate human judgment over the use of force. Moreover, this first review must inter alia establish that
- the system is designed to complete engagements within a defined time frame and geographic area and other parameters consistent with the commander’s or operator’s intentions; if the AWS is unable to comply with this requirement, it must terminate the engagement or obtain additional operator input before continuing the engagement (see the illustrative sketch after this list)
- the combination of the system’s design and concept of employment account for risks to non-targets
- the system design, including system safety, anti-tamper mechanisms, and cybersecurity, minimizes the probability and consequences of system failures.
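To make this design requirement more tangible, the following minimal Python sketch shows how a system’s control logic could enforce commander-defined engagement parameters, reduced here to a time window and a geographic bounding box, and fall back to terminating the engagement or requesting operator input when those parameters are no longer satisfied. All names, parameters and values are hypothetical illustrations; the sketch is not drawn from DoD Directive 3000.09 or any fielded system.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class EngagementParameters:
    """Commander-defined limits for an engagement (illustrative only)."""
    start: datetime
    end: datetime
    bounding_box: tuple  # (lat_min, lat_max, lon_min, lon_max)


def within_parameters(params: EngagementParameters, now: datetime,
                      lat: float, lon: float) -> bool:
    """Check whether the current time and position remain inside the
    commander-defined time window and geographic area."""
    lat_min, lat_max, lon_min, lon_max = params.bounding_box
    in_time = params.start <= now <= params.end
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return in_time and in_area


def engagement_step(params: EngagementParameters, now: datetime,
                    lat: float, lon: float, operator_input: bool) -> str:
    """Return the required system behaviour for one decision cycle
    (hypothetical decision logic, not taken from the Directive)."""
    if within_parameters(params, now, lat, lon):
        return "continue engagement"
    if operator_input:
        return "continue with additional operator input"
    return "terminate engagement and request operator input"


# Example: outside the commander's time window, the system must not continue on its own.
params = EngagementParameters(start=datetime(2026, 3, 1, 6, 0),
                              end=datetime(2026, 3, 1, 8, 0),
                              bounding_box=(48.0, 48.5, 11.0, 11.5))
print(engagement_step(params, datetime(2026, 3, 1, 9, 0), 48.2, 11.2, False))
```

Whether such checks actually behave as intended under realistic conditions is precisely what the verification, validation, test and evaluation plans mentioned below are meant to establish.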
In addition, plans must exist for verification and validation (V&V) as well as Test and Evaluation (T&E) to ensure system reliability, effectiveness and suitability. This includes considering:
- adversarial actions
- unintended engagements
- unauthorized parties’ interference with system operation.
(2) Pre-Fielding Review
The second review ensures that system capabilities, human-machine interfaces, doctrine, tactics, techniques and procedures (TTPs), and training allow commanders to apply appropriate human judgment over the use of force.
They must also allow commanders to employ the system in compliance with:
- The law of armed conflict (LOAC)
- Rules of engagement (ROE)
- Other regulatory requirements such as weapon safety provisions.
Finally, the second review addresses issues such as:
- system safety
- anti-tampering mechanisms
- cybersecurity.
It also calls for a monitoring regime that detects and addresses changes in the operational environment, data inputs and system use that could contribute to system failures.
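As a purely illustrative example of one technical building block of such a monitoring regime, the following Python sketch flags drift in incoming sensor data relative to a baseline recorded during test and evaluation; significant drift would then prompt a closer look at whether the operational environment has changed. The function name, threshold and data are assumptions made for illustration, not requirements of the Directive.

```python
import statistics


def detect_input_drift(baseline: list[float], current: list[float],
                       threshold: float = 3.0) -> bool:
    """Flag a potential change in the operational environment by checking how
    far the mean of current sensor readings deviates from the baseline recorded
    during test and evaluation (illustrative heuristic only)."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    if base_std == 0:
        return statistics.mean(current) != base_mean
    z_score = abs(statistics.mean(current) - base_mean) / base_std
    return z_score > threshold


# Example: readings drifting well outside the baseline raise a review flag.
baseline_readings = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
current_readings = [2.4, 2.6, 2.5, 2.7]
if detect_input_drift(baseline_readings, current_readings):
    print("Input drift detected: escalate for technical and, if needed, legal re-assessment.")
```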
However, the Directive exempts certain systems from this review process. These include:
- semi-autonomous weapon systems
- operator-supervised AWS
- AWS used to apply non-lethal, non-kinetic force against materiel targets.
As DoD Directive 3000.09 is one of the most comprehensive frameworks on autonomy in weapon systems, a more detailed examination of the directive would exceed the scope of this article. However, it is worth considering as a model for other states developing processes for AWS reviews. Readers who wish to take a deep dive into this Directive and how it complements the weapon review processes of the U.S. Army, U.S. Navy and U.S. Air Force may read the instructive contribution of Ryan Poitras.
2. The Rolling Text of the GGE
National policy makers and practitioners should also monitor the evolving work of the Group of Governmental Experts on LAWS. States and reviewers of AWS should give particular attention to revisions of the rolling text of 12 May 2025. Although the text is not legally binding and reiterates many already established principles of IHL, it reflects the concerns of the Group’s members and therefore deserves careful consideration.
3. Other Initiatives
Apart from existing state directives and the work of the GGE, two initiatives currently seek to address the unique features of AWS in legal reviews. We consider them particularly noteworthy:
- Propositions of elements of good practice for conducting legal reviews of AWS
- A new framework for employment principles and human judgment.
These initiatives approach the challenges of AWS reviews from different perspectives. Nevertheless, both constitute constructive and balanced contributions to the discussion.
(1) Elements of Good Practice
Following an expert meeting in 2024, the Asia-Pacific Institute for Law and Security (apils) published a report proposing a set of good practices for legal reviews of AWS. It also expressly stated that it does not aim to interpret existing legal obligations. This early version also contained questions for discussion which, in our opinion, have not lost their relevance.
The current version of this set of good practices structures the discussion around several measures that states could take to “enhance the efficacy of legal reviews as a mechanism for implementing international legal obligations relevant to AWS and increase transparency.” Instead of reiterating these measures, we encourage readers of this article to access them directly and keep up to date with any updates to these good practices.
(2) The Proposal of a New Framework for LAWS Employment Principles and Human Judgment
In January 2026, Major Brennan Deveraux, a U.S. Army strategist and national security researcher, proposed an innovative human-centric framework for LAWS.
A Cursory Overview of the Framework
The framework rests on four pillars:
Certification of military personnel: Formal recognition of adequate training for commanders and operators of AWS. This ensures they understand how AWS function and what their limitations are.
Authority: A policy defining the requirements that must be met before deploying LAWS and allocating employment authority to designated commanders.
Restrictions: Limitations on the use of LAWS in certain theaters of operation and on the duration of AWS deployments. The framework also considers the restriction of objectives to military targets that can be clearly identified as such.
Accountability: Accountability mechanisms for unintended engagements and incident reviews. These provide a basis for continuous learning, similar to safety practices in aviation.
Our Opinion on the Framework
The framework provides a fresh perspective on the lawful and responsible use of AWS. We consider it a particularly valuable contribution for three reasons:
- It focuses on the legal review of the employment of LAWS, which also lies at the core of the obligation imposed by Article 36 AP I.
- It goes beyond existing regulations that only address development stages.
- It applies to all deployment phases of AWS. It thus avoids the fallacy of demanding “meaningful human control” or “context-appropriate human judgment and control” only at the final stage of employment while ignoring earlier stages.
This framework should prompt not only the U.S. Department of War to consider adopting it; states that are parties to Additional Protocol I should also examine it carefully.
In our view, the good practices suggested by apils and Major Deveraux’s framework are complementary rather than mutually exclusive.
Both initiatives can support a productive and objective discussion on the lawful and ethical use of LAWS under existing IHL.
V. Must States Re-Assess AWS Continuously During Their Entire Lifecycle?
The Challenge of Continuous Legal Re-Examinations
Article 36 AP I requires states to conduct legal reviews of new weapons during their “study, development, acquisition or adoption.”
However, the ICRC Guide suggests that a review should also be carried out if technical or field modifications are made to a weapon. This suggestion has particular relevance for AWS. Such systems may rely on machine learning to adapt to changing operational environments and situations not considered during system training.
Depending on the use case, this capability can be indispensable. Battlefields are highly dynamic environments.
One might therefore be tempted to demand continuous re-examinations of AWS to ensure their compliance with international law. However, such an approach could require repeated legal reviews during ongoing military missions and again after their conclusion. This would neither be reasonable nor practicable. A more balanced approach is therefore required.
The key question is not whether AWS must be reviewed again, but under what circumstances such reassessments become necessary.
A Viable Solution: Scheduled Re-Assessments and Two-Tiered Ad Hoc Reviews
Rather than conducting legal reviews at regular intervals irrespective of system changes, we consider it more appropriate to conduct scheduled re-examinations only when planned modifications materially affect the conduct of hostilities. These scheduled re-examinations should then be complemented by technical and legal ad hoc reviews throughout the lifecycle of AWS to account for unplanned system changes. Such ad hoc reviews require the implementation of post-deployment feedback mechanisms and may be triggered notably by:
- system anomalies and/or malfunctions
- new risks for civilians or combatants that have become known in prior or current missions
- adversarial actions like novel ways of system tampering
- other unexpected events that can adversely affect system functionality and legal compliance.
These ad hoc reviews should proceed in two stages:
(1) Technical review: determining whether a trigger event may adversely affect system functionality, safety or security.
(2) Legal review (if necessary): conducted only if the technical review raises concerns. This review assesses whether the system can still be employed in compliance with international law, with or without modifications to its design or operating modes.
This three-tier model essentially concurs with the view of Dr. Renato Wolf in the apils Report of An Expert Meeting on Progressing The Legal Review of Autonomous Weapon Systems and with Section 4.1 of DoD Directive 3000.09.
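For readers who prefer a schematic view, the following Python sketch models this decision flow in simplified form: planned modifications lead to a scheduled re-examination only if they materially affect the conduct of hostilities, while the ad hoc trigger events listed above start with a technical review and escalate to a legal review only where concerns arise. All names are hypothetical; the sketch is not taken from the apils Report or from DoD Directive 3000.09.

```python
from enum import Enum, auto


class Trigger(Enum):
    PLANNED_MODIFICATION = auto()   # scheduled re-examination
    SYSTEM_ANOMALY = auto()         # ad hoc
    NEW_RISK_TO_PERSONS = auto()    # ad hoc
    ADVERSARIAL_ACTION = auto()     # ad hoc
    UNEXPECTED_EVENT = auto()       # ad hoc


def review_path(trigger: Trigger,
                materially_affects_hostilities: bool,
                technical_concerns_found: bool) -> str:
    """Return the review steps required for a trigger event
    (simplified, illustrative decision flow)."""
    if trigger is Trigger.PLANNED_MODIFICATION:
        # Tier one of the model: a scheduled re-examination, but only for
        # modifications that materially affect the conduct of hostilities.
        if materially_affects_hostilities:
            return "scheduled legal re-examination"
        return "no re-examination required"
    # Tier two: every ad hoc trigger first undergoes a technical review ...
    if technical_concerns_found:
        # ... and tier three, the legal review, follows only if the
        # technical review raises concerns.
        return "technical review, then legal review"
    return "technical review only"


# Example: a system anomaly that raises functional concerns requires both ad hoc stages.
print(review_path(Trigger.SYSTEM_ANOMALY, False, True))
```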
VI. How Can Defense Contractors Support the Legal Review of AWS?
The obligation to conduct legal reviews of AWS rests exclusively with the state that intends to acquire and employ them.
However, such reviews can benefit significantly from technical information that only system developers possess. To ensure that this information is available when needed:
- procurement authorities and defense contractors should conclude appropriate data use and information exchange agreements before system development is commissioned
- developers should facilitate the early involvement of military commanders, future operators and judge advocates during the requirements definition and design phase.
As a rule, this involvement is indispensable. It provides developers with the necessary military domain expertise and helps ensure that the system can ultimately be employed in compliance with international law and domestic regulations.
VII. Operational and Implementation Implications
For defence planners and judge advocates, the legal review of autonomous weapon systems raises several practical and procedural considerations:
- Legal reviews must assess not only system design but also foreseeable operational modes.
- Verification and validation (V&V) as well as comprehensive Test and Evaluation (T&E) procedures must ensure that AWS can be operated effectively and in compliance with IHL.
- System design must allow commanders and system operators to reliably understand and control system behaviour within its intended operational parameters, even in complex environments.
- Human judgment remains central to compliance with the law of armed conflict.
- Commanders must understand the capabilities and limitations of autonomous systems before authorizing their deployment. This requires appropriate doctrine, instructions and training.
- Operational use of AWS should be accompanied by monitoring and feedback mechanisms that allow states to detect unexpected system behaviour and initiate technical or legal re-assessments where necessary.
- Procurement authorities should ensure that developers provide sufficient technical documentation to support legal reviews.
VIII. Conclusion and Outlook
This article does not aim to provide a comprehensive overview of legal reviews of AWS, but rather offers a snapshot of ongoing discussions in academia, expert groups, and other forums. However, if it offers practitioners, policy makers and interested readers useful food for thought, it will have achieved its purpose.
What is urgently needed is a more objective discussion on AWS, whether lethal or not. Such a discussion should avoid maximalist positions and refrain from stigmatizing LAWS as out-of-control “killer machines”. Instead, it is necessary to recognize that various forms of AWS will likely become an established part of modern warfare, much like other weapon systems before the age of AI.
In our next instalment of this series, we will examine whether there is an urgent need for new international law to regulate autonomous weapon systems or even military use of AI in general.