The International Committee of the Red Cross (ICRC) has submitted its recommendations on military AI to the UN (Part 1: AWS)

Photo: UN Photo/Manuel Elias

AI in the military domain is a particularly sensitive issue, with none other than UN Secretary-General António Guterres himself denouncing autonomous weapons systems (AWS) as “repugnant” and “politically unacceptable”. Read on to find out what the ICRC has proposed, and whether we really need new rules to govern AWS at a time when profound geopolitical challenges are shaking Western democracies and Europe needs to stand on its own military feet.

The ICRC’s reaffirmation of existing international humanitarian law

Given the often negative coverage of military AI, which tends to exploit the fear of the unknown and foster the misconception that AI operates in a legal vacuum, the ICRC deserves praise for pointing out that the use of AI in the military domain is already governed by international humanitarian law (IHL), which provides important guardrails, particularly for the protection of civilians in armed conflict. Some of the most important rules of IHL laid down in Additional Protocol I of 1977 to the Geneva Conventions of 1949 (hereinafter “AP I”) are:

  • the prohibition of the use of weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering, or widespread, long-term and severe damage to the natural environment (Article 35 AP I)

  • the obligation to review new weapons, means and methods of warfare to determine whether their employment would be prohibited, in some or all circumstances, by AP I or any other rule of international law (Article 36 AP I)

  • the principle of distinction between the civilian population and combatants, and between civilian objects and military objectives, which implies that military operations shall only be directed against military objectives (Article 48 AP I)

  • the prohibition of indiscriminate attacks, i.e. attacks which strike military objectives and civilians or civilian objects without distinction (Article 51 AP I – e.g. carpet bombing of areas that are also inhabited by civilians, and any attack that is excessive in relation to the concrete and direct military advantage expected)

  • the obligation to take precautions in the attack to spare the civilian population, civilians and civilian objects (Article 57 AP I).

This short list is not exhaustive, as AP I provides, among other things, for special protection for children and women as well as for persons out of the fight (“hors de combat”). Moreover, it affirms that civilians and combatants remain protected by established custom, the principles of humanity and the dictates of public conscience (the so-called “Martens clause”).

However, as will be seen below, the ICRC considers current IHL to be only a starting point for more stringent regulation of AWS, and to this end it has proposed strict regulatory measures, which are examined in more detail below.

Human accountability

In its submission to the UN, the ICRC dispelled the widespread belief that AI creates an unacceptable “accountability gap” because AI systems have no legal personality and therefore cannot be held accountable for the results they produce. As the ICRC has so aptly pointed out, this is untrue because “it is not the system itself that must comply with the law, but the humans using it”. Military commanders make the decision to use AI systems in military missions, and they must ensure that this is done in accordance with international humanitarian law and the rules of engagement. War crimes, such as deliberate attacks on residential buildings that make no effective contribution to the military action, are committed by humans, not robots, and human perpetrators can and must be punished under international criminal law.

Photo: Resource-database on Unsplash.com

Do we need a ban on unpredictable military AI?

While the ICRC also rightly calls for users of AWS to be able to predict their effects with a reasonable degree of certainty and to limit those effects in accordance with IHL, there is actually no need for a ban on unpredictable AWS. Like any other weapon system, AWS must be assessed for compliance with applicable IHL prior to their use (Art. 36 AP I). Since such compliance cannot be confirmed in a weapon review for AWS that produce unpredictable results, such unreliable weapon systems cannot pass this test anyway and cannot be used. For the same reason, they are unfit for their military purpose and not even marketable: no member of the armed forces will want to use a weapon system whose effects in the theater of operations cannot be reliably predicted. And if AWS that produce unpredictable effects are indeed inherently indiscriminate, any attack executed with such AWS cannot be directed against a specific military objective and violates Article 51 para. 4 lit. a) AP I. There would be no additional benefit in banning such weapons. Nevertheless, an understanding of the operation of AWS and of the risk of malfunction inherent in all modern weapon systems is of course essential to their safe and effective use.
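To make the weapon-review argument concrete, here is a minimal sketch, assuming a hypothetical simulation harness: an Article 36-style acceptance check in which a system whose simulated effects cannot be predicted with reasonable certainty simply fails the review. All function names, the behaviour model and the threshold are illustrative assumptions, not established review criteria.

```python
import random

# Assumed acceptance threshold for illustration -- not a legal standard.
REQUIRED_PREDICTABILITY = 0.99

def simulated_effect_matches_prediction(seed: int) -> bool:
    """Stand-in for one full engagement simulation (hypothetical).

    Returns True if the simulated effect matched the predicted one:
    correct target, no effects beyond the intended military objective.
    """
    rng = random.Random(seed)
    return rng.random() < 0.995  # placeholder model of system behaviour

def passes_weapon_review(trials: int = 10_000) -> bool:
    """An AWS that cannot demonstrate predictable effects fails the review."""
    matches = sum(simulated_effect_matches_prediction(s) for s in range(trials))
    observed = matches / trials
    print(f"predicted-effect match rate: {observed:.4f}")
    return observed >= REQUIRED_PREDICTABILITY

if __name__ == "__main__":
    print("review passed" if passes_weapon_review() else "review failed")
```

The point of the sketch is simply that unpredictability is self-defeating: a system that cannot clear such a bar never reaches the battlefield, which is why a separate ban adds nothing.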

Is it justified to ban AWS that are designed or used to directly target humans?

Photo created with Dall-E 3

The ICRC has argued that AWS designed or used to directly target humans must be banned “because of the significant risk of IHL violations and the unacceptability of anti-personnel autonomous weapons from an ethical perspective”.

In assessing the merits of such a ban, it is worth noting that it is not generally illegal or unethical to direct against a combatant any weapon system that is not prohibited as such (as, for example, blinding laser weapons are) and whose effects do not extend uncontrollably beyond legitimate military targets. If it were, the Allies could not have won World War II, and any weapon that can be used to directly target a combatant, such as a sniper rifle, could no longer be used in combat, unless one considers it more ethical than an AWS with the same functionality. While certain anti-personnel weapons such as landmines have been banned for good reasons, a blanket ban on AWS that can be used to target human combatants would clearly be disproportionate and is not necessary to protect civilians from the use of force in armed conflict. Put simply, the mere fact that a weapon can be used in a manner that violates humanitarian law is not a legitimate reason to ban it outright, and this simple principle applies equally to AWS.

The effect of using force against a specific (human) target with an AWS is not inherently different from the effect of using force with a non-autonomous weapon. Active participation in combat is a high-risk activity that may result in personal injury or death, and targeting a combatant for a lawful, legitimate purpose does not become illegal or unethical simply because it is done with artificial intelligence. As noted above, the basic principles of international humanitarian law designed to limit the effects of military attacks to what is strictly necessary and proportionate to achieve a specific military advantage must of course be respected, regardless of the degree of autonomy a weapon may have. But as long as these and other basic rules of humanitarian law are properly respected, enemy combatants can be attacked without violating their human dignity, precisely because they can be considered combatants, regardless of whether or not the attack is assisted or carried out by artificial intelligence (Leveringhaus in The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 475–487).

Is there a need for new rules to restrict the development and use of AWS?

Although current IHL already applies to new weapon systems, the ICRC calls for the creation of new rules to restrict the design and use of AWS. This is not surprising: it is a well-known phenomenon that new technologies trigger a desire to create new guardrails for their use, and AWS are no exception. However, such calls should only be heeded if there is a real need for new regulation. With this sound principle in mind, the following should be noted:

  • “restricting targets of the AWS to only those that are military objectives by nature” is an important principle but does not require new regulation, as the postulate that attacks shall be limited strictly to military objectives has already been codified in Article 52 of AP I, which also applies to new weapon systems such as AWS. It is also dealt with in Rule 8 of customary international humanitarian law, which limits military objectives “to those objects which by their nature, location, purpose or use make an effective contribution to military action and whose partial or total destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.”
  • “limiting the geographic scope of the operation of the AWS” does not require new regulatory measures either, as Article 51 para. 5 lit. a) AP I already sets this limit. Insofar as the ICRC also calls for new rules limiting the duration of AWS deployments, it should be considered that the principle of proportionality in attack, enshrined in Article 57 para. 2 lit. a) iii) AP I and Rule 14 of customary international humanitarian law, does not provide for any general temporal limitation of attacks against combatants. Instead, Article 57 para. 2 lit. b) AP I only calls for the cancellation or suspension of an attack in specific cases that affect the civilian population or an objective that is either not a military one or subject to special protection. Beyond these cases, the duration of an attack against a military target using AWS has its limits, but these must be determined by the commander in charge, taking into account all the information available in each individual case, and not by a rigid rule.

  • “limiting the scale of use, including the number of engagements that the AWS can undertake” is a risky proposition. Scaling back the use of AWS to a predetermined maximum number of deployments, without regard to strategic requirements and the specific military mission, may adversely affect the capability to meet such requirements and accomplish the mission. For this reason, a fixed maximum number of AWS deployments is not a viable option. It would also be overshooting the target (no pun intended) if such a limitation had to be applied in geographical areas where there are no civilian objects (such as schools or churches) and no civilians at all (e.g. in no-fly zones, deep-sea environments or space), as any number of engagements in such areas can only affect combatants.

  • “limiting the situations of use, namely constraining them to situations where civilians or civilian objects are not present” is already ensured by Article 58 of AP I. Therefore, the hypothetical scenarios that have been raised against AWS in which they are used in densely populated areas such as city centers are unrealistic and do not require additional regulatory measures.

  • The ICRC’s requirement to ensure “to the maximum extent feasible the ability for a human user to effectively supervise and in a timely manner to intervene and, where appropriate, deactivate operation of the AWS” combines multiple requirements into one and has its own limitations. On the positive side, it does not require a “human in the loop” controlling every single operation of an AWS. However, the perceived need for supervision still seems to imply that even systems with superhuman OODA (observe, orient, decide, act) capabilities must be controlled by humans with lesser capabilities, which would be a dangerous proposition.

    The feasibility caveat is an obvious albeit important clarification. Even so, always extending human oversight to the maximum extent feasible may stifle the effectiveness of AWS by denying them even a moderate degree of autonomy: not everything that is feasible is also practical or even necessary, and maximum oversight may not leave enough room for preventing or responding to an adversary attack in a timely manner. Instead, a more flexible level of appropriate oversight of AWS is needed, determined on a sliding scale that takes into account the demonstrated reliability and safety of an autonomous weapon system as well as the intended operational environment and other risk factors (a minimal illustration of such a sliding scale follows after this list). Oversight can also be provided in regular weapon reviews, mission rehearsals (with suitable simulation software) and post-deployment reviews that use the lessons learned to improve the accuracy of the system where this has not already been achieved through machine learning. In addition, once AWS have been deployed, human intervention should in principle be limited to cases where unforeseen events, new information or a system malfunction require it. While other cases are conceivable, it is important to ensure that human intervention remains the exception and is not exercised on the mistaken assumption that the human mind or judgement is always superior to artificial intelligence.

  • Equipping AWS “with an effective mechanism for self-destruction or self-neutralization so that the AWS will no longer function as an AWS when it no longer serves the military purpose for which it was launched” is an appropriate measure to avoid disasters. However, it goes without saying that the trigger function must be designed with the utmost care to avoid premature or delayed initiation of the self-destruction or self-neutralization sequence (see the second sketch after this list).
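To illustrate the sliding-scale idea from the oversight bullet above, here is a minimal sketch, assuming hypothetical inputs: oversight tightens as demonstrated reliability falls and as the risk to civilians rises. The oversight levels, context fields and the 0.95 threshold are all assumptions made for illustration, not figures drawn from the ICRC submission.

```python
from dataclasses import dataclass
from enum import Enum

class OversightLevel(Enum):
    CONTINUOUS_SUPERVISION = "human monitors every engagement"
    ON_CALL_INTERVENTION = "human can intervene or deactivate at any time"
    PERIODIC_REVIEW = "weapon reviews, mission rehearsals, post-deployment audits"

@dataclass
class MissionContext:
    demonstrated_reliability: float  # 0.0-1.0, from weapon reviews and testing
    civilians_present: bool          # civilians or civilian objects in the area
    time_critical_defence: bool      # e.g. responding to a saturation attack

def required_oversight(ctx: MissionContext) -> OversightLevel:
    """Sliding scale: the less proven the system and the higher the risk
    to civilians, the tighter the human supervision."""
    if ctx.civilians_present or ctx.demonstrated_reliability < 0.95:
        return OversightLevel.CONTINUOUS_SUPERVISION
    if not ctx.time_critical_defence:
        return OversightLevel.ON_CALL_INTERVENTION
    return OversightLevel.PERIODIC_REVIEW

# Example: a well-proven system defending a no-fly zone under time pressure
print(required_oversight(MissionContext(0.99, False, True)).value)
```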
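And for the self-neutralization bullet, a minimal sketch of the trigger logic, assuming an injected mission-state check and actuator: a short confirmation window guards against premature initiation on a transient sensor glitch, while a hard mission deadline guards against delayed initiation. Everything here (names, timings, the control loop itself) is a hypothetical illustration, not a description of any fielded system.

```python
import time

CONFIRMATION_WINDOW_S = 2.0   # assumed debounce against premature triggering
MISSION_DEADLINE_S = 3600.0   # assumed hard time-out against delayed triggering

def self_neutralization_loop(purpose_still_served, neutralize,
                             now=time.monotonic, poll_s=0.1):
    """Run until the military purpose lapses, then neutralize the system.

    `purpose_still_served` and `neutralize` are injected callables standing
    in for real mission-state checks and the neutralization actuator.
    """
    start = now()
    lapse_seen_at = None
    while True:
        t = now() - start
        if t >= MISSION_DEADLINE_S:
            neutralize("mission deadline reached")  # never later than this
            return
        if purpose_still_served():
            lapse_seen_at = None                    # reset debounce on recovery
        elif lapse_seen_at is None:
            lapse_seen_at = t                       # open confirmation window
        elif t - lapse_seen_at >= CONFIRMATION_WINDOW_S:
            neutralize("purpose lapsed")            # confirmed, not a glitch
            return
        time.sleep(poll_s)

# Example: the (simulated) purpose lapses after one second
deadline = time.monotonic() + 1.0
self_neutralization_loop(lambda: time.monotonic() < deadline,
                         lambda reason: print("neutralized:", reason))
```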

Roundup and the road ahead

Photo created with Dall-E 3

Countries outside the West are less concerned about AWS and are accelerating their development and deployment with massive investment and political cover on an unprecedented scale. What is really urgently needed is therefore not more regulation, but a clear mandate for AI-native defence companies to develop the defence systems of the future for all NATO allies as quickly as possible, and to make them so effective, explainable, reliable and secure that there is nothing to fear and much to gain, both in effectively defending our democratic freedoms and in safeguarding basic humanitarian principles better than ever thought possible. Admittedly a tall order, but not an impossible one, so let’s get started.

If you’ve read this far, bear with me and don’t miss part 2 of this article, which will be published next week and will cover the ICRC’s recommendations on artificial intelligence decision support systems (AI-DSS) in the military domain.

About the author

With more than 25 years of experience, Andreas Leupold is a lawyer trusted by German, European, US and UK clients.

He specializes in intellectual property (IP) and IT law and the law of armed conflict (LOAC). Andreas advises clients in the industrial and defense sectors on how to address the unique legal challenges posed by artificial intelligence and emerging technologies.

A recognized thought leader, he has edited and co-authored several handbooks on IT law and the legal dimensions of 3D printing/Additive Manufacturing, which he also examined in a landmark study for NATO/NSPA.

Connect with Andreas on LinkedIn