Autonomous Weapons: Dangerous, Unregulated and Out of Control? (Part 2)

In Part 1 of this article, we looked at the defining features of autonomous weapon systems (AWS) and asked whether there is an urgent need for a new international treaty on such weapon systems.

In this second part, we examine whether regulatory action must address the much-discussed “accountability gap” which critics associate with the employment of AWS in armed conflicts.


Estimated reading time: 11 minutes


I. Does the Use of AWS Create an “Accountability Gap”?

Human Rights Watch was among the first to suggest that the use of AWS creates an “accountability gap”. More recently, the pertinent literature has reframed this concern as an “accountability vacuum”. However, depending on the context of use, the term “accountability” can have very different meanings even in the military domain. Practitioners and scholars often use “accountability” as an umbrella term and/or as synonymous with criminal responsibility. From a legal perspective, “accountability” for artificial intelligence and autonomous weapon systems comes in many forms. In what follows, we will first highlight some general observations of legal scholars on the accountability of military commanders for AWS. Next, we will address more specific types of accountability and add our own thoughts on them. We will then conclude Part 2 of this article with a summary of our findings.

1. High-Level Observations on the Lack of an Accountability Gap

  • Michael N. Schmitt had no doubts about the matter and stated that “Clearly, any commander who decides to launch AWS into a particular environment is, as with any other weapon systems, accountable under international criminal law for that decision. Nor will developers escape accountability if they design systems, autonomous or not, meant to conduct operations that are not IHL compliant. And States can be held accountable under the laws of State responsibility should their armed forces use AWS in an unlawful manner.”
  • Charles J. Dunlap Jr. has contributed the most detailed and distinct rebuttal to the Human Rights Watch report. While he has acknowledged the legitimacy of questions about AWS, he has concluded that “the notion that there is something intrinsic about them that bars accountability is simply untrue.”
  • Similarly, Tim McFarland has found that placing a more complex computer-based control system in direct control of a weapon does not relieve the entities which deployed it of accountability for the harm it causes. Taking this view one step further, he posited that “this is so regardless of the complexity of the control system's operation, the foreseeability of its actions, or the ability or inability of a human to intervene in the operation of the weapon after an attack commences.”

2. A Closer Look at Specific Forms of Accountability for AWS

Discussions on a possible accountability gap for the use of military AI and AWS often focus on criminal responsibility. We will hence address it first. However, we will also illustrate that there are still other ways to ensure accountability for military AI and AWS.

2.1 Criminal Responsibility: The Challenge of Blessed Ignorance

Criminal responsibility essentially takes two forms: international and domestic. International criminal responsibility can notably arise from war crimes, such as grave breaches of the Geneva Conventions of 1949. War crimes fall under the jurisdiction of the International Criminal Court (ICC) in The Hague under the Rome Statute. Such crimes are also punishable under domestic codes of crimes against international law.

The Mental Element: Intent and Knowledge

Criminal responsibility and liability for punishment for crimes within the jurisdiction of the ICC require a particular mental element. Under Article 30 of the Rome Statute, the perpetrator must have committed the crime with intent and with knowledge of its material elements. “Knowledge” means awareness that a circumstance exists or that a consequence will occur in the ordinary course of events.

In some AWS use cases, this intent and knowledge will be missing, while in others it will not. Most recently, Jonathan Kwik has revisited the issue of “autonomous weapons and criminal liability for not knowing the knowable”. He explores cases in which commanders decide to re-deploy an AWS after an incident that calls the system's safety into question. His analysis distinguishes various hypothetical scenarios in which the commanders have varying degrees of knowledge about the incident and decide to re-deploy the AWS anyway. As Kwik demonstrates, commanders may unduly avoid awareness of specific AWS deficiencies in order to escape criminal liability under the Rome Statute. This raises the question whether we must reckon with such behaviour in practice, and what we can do about it.

Organisational Measures: Ways to Prevent Blessed Ignorance

Pleading ignorance to escape criminal liability can be a real concern in states that promote or fail to prevent such behaviour. In states operating under the rule of law, however, such cases are much less likely to occur, if at all. Even where they do, a lack of direct intent need not result in impunity. A commander must still expect prosecution if domestic criminal law requires only conditional intent for a punishable crime. As a rule, courts can infer such intent if two conditions are met. First, the perpetrator must recognise that the outcome constituting a criminal offence is possible and not entirely remote. Second, the perpetrator must either accept that outcome or resign himself to it in order to achieve his objective.

Turning a blind eye to AWS incidents must therefore carry the risk that prosecutors can demonstrate such conditional intent. To ensure this, states should take adequate organisational measures to prevent blessed ignorance of consequential AWS deficiencies. Such measures should notably include appropriate weapon reviews and (re-)evaluations. States should also implement reporting processes for AWS defects and the known risks they entail.

If ministries of defence forward such reports to all commanders and operators of AWS, “pleading ignorance” will become less attractive. Moreover, continuous mandatory training for commanders and operators of AWS can enhance IHL compliance and effectively prevent such ignorance.

Even if these measures prove unsuccessful, one or more of the following accountability frameworks remain available.

2.2 State Responsibility: No Intent Required

In its simplest definition, by Lassa Oppenheim, state responsibility is

“the external responsibility of a State to fulfill its international legal duties.”

Modern state responsibility builds on the work of the United Nations' International Law Commission (ILC), which we outline hereinafter.

IHL Violations as Wrongful Acts of States

In 2001, the UN International Law Commission adopted its draft articles on the “Responsibility of States for Internationally Wrongful Acts” (“ARSIWA”). States have not yet adopted ARSIWA as a legally binding international treaty. Even so, international courts have referenced it in many cases, and legal scholars have acknowledged it as “one of the founding pillars of the international legal order”.

According to Article 1 ARSIWA, every internationally wrongful act of a state entails the international responsibility of that state. Violations of IHL by members of a state's armed forces acting in their military capacity are such “wrongful acts”. AWS themselves, however, cannot commit war crimes or violate IHL, which primarily binds states.

However, this does not entail a “state responsibility gap”, as states are responsible for the actions of their armed forces. Courts can attribute such acts to the state that commands these forces, because they are “organs” of that state.

Unlike international criminal responsibility, state responsibility for violations of IHL does not require intent or negligence on the part of military commanders. As Daniel L. Hammond has observed, states also have an incentive to avoid IHL violations resulting from AWS employments in order to avoid such responsibility. Under ARSIWA, the legal consequences of an internationally wrongful act for the responsible state notably include:

  • the obligation to cease the wrongful act and offer appropriate assurances and guarantees of non-repetition (Article 30),
  • the obligation to make full reparation for the injury caused by the internationally wrongful act (Article 31). This obligation comprises restitution, compensation, and satisfaction. Restitution restores the situation that existed before the wrongful act (Article 35). If this is not feasible, Article 36 provides for the payment of compensation.

Responsible states owe these obligations not to individual victims but to other states or the international community as a whole.

For serious breaches of peremptory norms of international law, Article 41 provides that states shall cooperate to bring such breaches to an end.

Further Reading on State Responsibility

Readers wishing to take a deeper dive into ARSIWA will find the contributions of Maurizio Arcari and Marco Sassoli instructive.

2.3 Administrative Accountability: Often Neglected but Far from Irrelevant

Laura A. Dickinson has explored the largely neglected administrative accountability for AWS employment that does not amount to war crimes. Article 87 of Additional Protocol I supports this type of accountability as it obliges states to require their military commanders

“to prevent and, where necessary, to suppress and report to competent authorities breaches of the (Geneva) Conventions and (Additional) Protocol (I).”

Dickinson provides a comparative law analysis of several instruments which can serve to enforce administrative accountability. These include administrative inquiries, investigations and the imposition of non-criminal sanctions. In addition, corrective actions can prevent further AWS incidents from occurring.

2.4 Military Accountability: Stricter than You Might Think

James Kraska argues that the military doctrine of command responsibility “holds the commander liable for ‘willful blindness’ for failing to prevent or stop the illegal acts of his or her subordinates”. He also observes that “if an autonomous system acts beyond its programmed limitations, the military system holds commanders accountable for failing to anticipate or guard against the danger.”

As Kraska emphasizes, this accountability is complete “(…) even if the commander had no way of personally intervening to ensure a better outcome, indeed even if the commander optimized training and preparation of his or her forces to avoid such an outcome.”

2.5 Product Liability: The Blind Spot in the Discourse on Accountability for AWS

The statutory liability of manufacturers of AWS covers personal injury, death or damage caused by defects in their products.

The new EU Directive 2024/2853 provides for strict liability of manufacturers for damage caused by their defective products.

Importantly, this strict (no-fault) liability also applies to software, including but not limited to AI systems. Manufacturers cannot exclude or limit it by contractual provisions. It is a sharp sword and a significant incentive for designers and manufacturers of AWS to comply with the Directive.

Article 11 of the Directive provides for certain exemptions from this strict liability. One of these is of particular practical relevance: it applies if the state of scientific and technical knowledge was such that the manufacturer could not discover the product defect. Mere compliance with the state of the art, however, is not sufficient for this exemption to apply.

The Directive applies, in principle, to all products placed on the Union market or put into service after 9 December 2026. It does not affect the fault-based tort liability of manufacturers under national laws, which hence applies in addition to the strict liability imposed by the Directive.

3. Summing Up: Why We Do Not Need New International Law to Govern AWS

Laurie R. Blank has convincingly demonstrated that an international treaty would not be the right legal framework for governing AWS. First and foremost, it would fail to address the many other military applications of artificial intelligence. Second, compliance with IHL will largely depend on how belligerents actually use AWS. Given the stalling discussions in the Group of Governmental Experts (GGE), a new international treaty is also unlikely to come to pass. Even if it did, the fast-paced trajectory of AI research and development would very likely outpace it soon enough. An international treaty cannot adequately reflect this dynamic. But which instrument can? In the final section of this article, we address recent insights of the Asser Institute on this question.

II. Are Rules of Engagement (ROE) a Better Way to Govern the Employment of AWS?

In 2024, Tobias Vestner proposed ROE as a potentially more effective alternative to international treaty law. However, the outcome of a recent multidisciplinary workshop organized by the Asser Institute under the guidance of Jonathan Kwik suggests a more nuanced approach.

The policy brief on the workshop findings concluded that fragmentary orders (FRAGOs) or special instructions are the method of choice when specific military units are using AI systems and/or when such systems require rapid modifications. For AI systems that serve a wider variety of users, such as multinational forces, or that require strategic approval, the brief found that AI-specific rules in ROE would work best. In our opinion, this differentiation highlights that flexible, context-dependent instruments are a better way to govern AWS than rigid treaty-based approaches.


Conclusion

No Regulatory Gap Under Existing IHL

In Part 1 of this article, we demonstrated that AWS do not fall outside the scope of existing international humanitarian law. We further showed that many of the risks commonly invoked to justify new treaty law, such as loss of control, technical malfunction, or increased targeting errors, either overstate the problem or fall within the scope of the current legal framework.

No Accountability Gap in Practice

In Part 2, we addressed the accountability gap that critics link to the use of AWS. We consider this concern legitimate but unfounded, at least for states operating under the rule of law. Other states may fail to prosecute war crimes or other violations of IHL, or may even promote them. International courts, however, will not leave such behaviour unsanctioned. If states investigate AWS incidents and keep commanders informed of system shortcomings, claiming ignorance will become exceedingly difficult. Even so, we think it is time to agree on an expanded standard of international criminal responsibility. In our opinion, this would require a revision of the Rome Statute and the acknowledgement of conditional intent by the ICC.

Alternative Accountability Frameworks Already Exist

Even if states cannot agree on this overdue step, plenty of alternative legal concepts prevent an accountability gap. State responsibility, military accountability and European product liability do not require that military commanders intended to violate international law.

Why a New Treaty Is Neither Necessary nor Effective

An international treaty on AWS is unlikely to come to pass and is also unnecessary. Moreover, it would not be flexible enough to accommodate the current speed of AI research and development. More flexible instruments, such as ROE, FRAGOs and military directives, are better suited to govern the use of AWS in armed conflicts.

While ROE and similar instruments provide a flexible framework for governing AWS in practice, an equally important question remains: what level of human involvement do such systems require? We will address this question in Part 3.

About the author

With more than 25 years of experience, Andreas Leupold is a lawyer trusted by German, European, US and UK clients.

He specializes in intellectual property (IP) and IT law and the law of armed conflict (LOAC). Andreas advises clients in the industrial and defense sectors on how to address the unique legal challenges posed by artificial intelligence and emerging technologies.

A recognized thought leader, he has edited and co-authored several handbooks on IT law and the legal dimensions of 3D printing/Additive Manufacturing, which he also examined in a landmark study for NATO/NSPA.

Connect with Andreas on LinkedIn