The ICRC’s recommendations on Artificial Intelligence in the military domain (Part 2)

UN Photo/Loey Felipe

Following on from Part 1 of this article, which dealt with the International Committee of the Red Cross (ICRC)’s observations on autonomous weapon systems (AWS) in its submission to the UN Secretary-General, this Part 2 deals with the ICRC’s views and recommendations on military artificial intelligence decision support systems (AI-DSS) and the use of AI in information and communication technologies (ICT).

What are Decision Support Systems (DSS)?

Decision support systems (DSS) were first conceptualized in the early 1970s and have since been used in a wide range of fields. Today, clinical decision support systems (CDSS) help healthcare professionals like Dr House diagnose and treat patients, while management decision support systems provide valuable information for growing businesses and increasing profitability. If your car has a navigation system, you may be unknowingly using a decision support system to help you take the quickest route home.

In the military domain, decision support systems have been used for personnel planning, supply chain management and operational-level command and control, to name but a few applications. While there is no universally accepted definition of modern military decision support systems, the ICRC has defined them in its recommendations to the UN Secretary-General as

computerized tools that bring together data sources – such as satellite imagery, sensor data, social media feeds or mobile phone signals – and draw on them to present analyses, recommendations and predictions to decision makers.

For a proper understanding of military decision support systems, it is important to note that such systems do not make decisions themselves, but only provide their users with the information they need to make informed decisions. While this may seem obvious, it is sometimes misunderstood in discussions of the merits and perceived dangers of such systems, so it is worth clarifying up front.
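To make this division of labour concrete, the minimal sketch below (class and field names are hypothetical and not taken from any fielded system) shows how a decision support pipeline might fuse several sources into a scored recommendation while leaving the decision itself to a human operator.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str          # e.g. "satellite", "ground_sensor", "sigint"
    threat_score: float  # normalized assessment, 0.0 (benign) to 1.0 (hostile)
    reliability: float   # how much this source is trusted, 0.0 to 1.0

def recommend(observations: list[Observation]) -> dict:
    """Fuse observations into a recommendation; the system only advises."""
    if not observations:
        return {"recommendation": "insufficient data", "confidence": 0.0}
    weighted = sum(o.threat_score * o.reliability for o in observations)
    total_weight = sum(o.reliability for o in observations)
    confidence = weighted / total_weight
    label = "likely hostile" if confidence > 0.7 else "inconclusive"
    return {"recommendation": label, "confidence": round(confidence, 2)}

# The human operator receives the recommendation and decides;
# nothing is acted upon automatically.
report = recommend([
    Observation("satellite", 0.9, 0.8),
    Observation("ground_sensor", 0.4, 0.6),
])
print(report)  # {'recommendation': 'inconclusive', 'confidence': 0.69}
```

However the weighting and thresholds are chosen, the output remains an input to human judgement, not a decision.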

What are the concerns raised against military AI-DSS?

Most recently, decision support systems have been enhanced by artificial intelligence. As expected, this has raised concerns about the legal and ethical implications of military AI-DSS. While it is acknowledged that AI-DSS can significantly improve the quality and speed of military planning and decision-making, the ICRC has suggested that “increasing the speed of military operations can create additional risks for both civilians and combatants, including the risks of miscalculation and escalation”. This concern is not new: it was the subject of popular movies and much discussion during the so-called “Cold War” between the U.S. and the Soviet Union from 1947 to 1991. In recent years, the narrative about the alleged dangers of military AI has mostly been based on a single incident that occurred during that post-war stand-off between the two superpowers.

28 Minutes to counterstrike

In 1983, the Soviet early warning system Oko (“Eye”) indicated an incoming intercontinental ballistic missile (ICBM) attack from the United States. Although Oko gave not just a warning but an order to launch a counterstrike, the officer in charge of the system, Stanislav Petrov, kept his cool and did not report an attack to his superiors, suspecting that the order had been caused by a technical glitch. As it turned out, he was right: what the early warning system had mistaken for launch flashes were in fact reflections of sunlight on clouds over Montana, USA. The man who prevented a nuclear third world war knew that a counterstrike had to be carried out within just 28 minutes of receiving the automatic launch order, and yet he did not waver. You would think that Stanislav would have received much praise from his country for this exemplary courage, but his superiors preferred to keep the failure of the early warning system under wraps, considering it too embarrassing.

Stanislav Petrov in 2016, Photo: Opencooper licensed under the Creative Commons Attribution-Share Alike 4.0 International license via Wikimedia Commons

Over-trust in technology?

A common argument against AI decision support systems is the presumed over-trust in such systems. While this phenomenon cannot be dismissed as unfounded, the Oko incident of 1983 demonstrated impressively that overconfidence in technical systems can be avoided if they are operated by engineers like Stanislav, who are intimately familiar with a system’s shortcomings and not easily led to take all of its output at face value. And although the Soviets did not recognise Stanislav’s great deed, they had the wisdom to let him improve their flawed system in the years following the incident.

Overconfidence in AI-DSS is something to be reckoned with, but it can be addressed by selecting knowledgeable and well-trained operators who can be trusted to make the right decision even under severe time pressure. Like it or not, the acceleration of warfare to “hyperwar” is a reality, and OODA (Observe, Orient, Decide, Act) cycles will become shorter and shorter in the coming years. Democratic nations must ensure that they do not fall behind in this race, which will be crucial for effective defence and a peaceful future. Crippling AI-DSS by deliberately slowing them down to scrutinise every output will not solve this challenge but exacerbate it, as it could lead to asynchronous warfare without a balance of power comparable to the nuclear stalemate that became the basis of what the American historian John Lewis Gaddis called the “Long Peace” after World War II.

Do AI-DSS promote biased decision-making?

AI bias, also known as algorithmic bias (and not to be confused with automation bias, the human tendency to over-rely on automated output), has become a popular argument raised by AI sceptics. It is also a recognised challenge. In its submission to the UN Secretary-General, the ICRC expressed concern about the potential for AI-DSS to “perpetuate or amplify problematic biases – particularly those based on gender, ethnicity or disability”. Addressing these concerns is a complex endeavour. While there is no silver bullet for curing AI bias, research has tackled the problem and come up with a variety of countermeasures. Discussing them all, even briefly, would go far beyond the scope of this article, but a few observations on the causes and effects of AI bias and possible remedies are worth making before concluding on this topic.

First and foremost, to understand AI bias, it is important to recognise that it is not just a mathematical challenge but comes in three types: systemic bias, human bias, and statistical/computational bias. These categories of AI bias have been explored and explained in many papers, among which the one published by NIST stands out, as it can be recommended both to readers with no prior knowledge of the subject and to experts.

AI is a mirror of our own shortcomings and both can be improved.

Building on this understanding, we cannot avoid accepting the uncomfortable truth that human bias was the root cause of atrocities and war crimes long before the rise of AI. This means that the real culprits are not AI systems, but humans themselves, whose prejudices, misconceptions, and racist or misogynist views become part of the training data of AI systems if left unaddressed. The old adage of computer science, “garbage in, garbage out”, still holds true and has taken on a whole new meaning in AI, which becomes a mirror of our own shortcomings.

Every picture tells a story but, as in this case, not necessarily the truth. Ironically created with DALL-E 3.

Can AI-DSS output reinforce human biases? Yes, but this is not a drawback unique to such systems: the same reinforcement process occurred when the archetypal influencer was born and search algorithms started showing users more of the same content. And it can happen with any information collected with or without AI.

But how can this human bias be eliminated, or at least reduced, in AI systems? Looking at the big picture, one could try to combat human bias so that it does not find its way into AI training data in the first place. A more practical approach is to generate and use fair data sets for AI training that do not contain discriminatory data. The use of synthetic training data can also reduce bias in AI systems. As research into trustworthy and explainable AI continues to yield promising results, AI-DSS will become more reliable and safer to use. As the creation of the image above shows, current AI models are already fairly good at avoiding biased output. My first prompt asked DALL-E 3 to show a person of a different skin colour in the mirror, shocked to see a white man. Although I explained in my prompt that the image was to illustrate prejudice against people from a different geographical region, DALL-E 3 declined to render the request as stated in order to avoid biased output, and offered to show only an older version of the man looking in the mirror. This demonstrates that the challenge of human bias in AI can be solved, or at least reduced, by technical means.
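As a minimal illustration of the “fair data set” idea (the record format, group names and threshold below are invented for the example), a training pipeline can at least audit how outcome labels are distributed across sensitive attributes and flag large gaps for human review before the data is used:

```python
from collections import Counter, defaultdict

def audit_label_balance(records, group_key, label_key, max_gap=0.1):
    """Compare positive-label rates across groups and flag large gaps."""
    positives = defaultdict(int)
    totals = Counter()
    for r in records:
        totals[r[group_key]] += 1
        if r[label_key] == 1:
            positives[r[group_key]] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > max_gap}

# Hypothetical training records: a large gap is surfaced for review
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
print(audit_label_balance(data, "group", "label"))
# {'rates': {'A': 0.5, 'B': 1.0}, 'gap': 0.5, 'flagged': True}
```

Such an audit does not remove bias by itself, but it makes imbalances visible before they are baked into a model.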

What is largely ignored in the current discussion of automation bias is that AI can actually help detect and prevent war crimes caused by human bias. Done right, AI will make it much harder to cover up genocide and atrocities against civilians, and will increase the risk of prosecution and imprisonment to a degree that is likely to deter potential perpetrators from even considering illegal actions.

No battle plan survives contact with the enemy

Another concern raised by the ICRC is the observation that the usefulness of training data can diminish rapidly once a conflict begins. The fact that armed conflicts are highly dynamic and constantly evolving was noted by Helmuth von Moltke the Elder, who opined that “no battle plan survives contact with the enemy”. But AI systems based on machine learning are anything but static, as they can take account of these dynamics and adapt on the fly – much faster than any human could.

Helmuth Bernhard Graf von Moltke, oil on canvas after Lenbach / Heinrich Lessing, 1898; Deutsches Historisches Museum, Berlin, Germany

While the potential perpetuation of human bias by AI systems is a serious challenge, it is already being addressed and must not be used as a reason to abandon the deployment of decision support systems for military purposes. The benefits of properly designed AI-DSS far outweigh the hypothetical risks, which can be further mitigated by continuously cross-referencing ISR data from multiple sources to verify its accuracy before a decision is made.
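As a rough sketch of that cross-referencing step (object identifiers, source names and the threshold are invented for the example), a target nomination might only be forwarded for human decision once a minimum number of independent ISR sources have reported the same object:

```python
def corroborated(nomination_id, reports, min_sources=2):
    """Forward a nomination for human decision only if it is confirmed
    by at least `min_sources` independent ISR sources."""
    sources = {r["source"] for r in reports if r["object_id"] == nomination_id}
    return len(sources) >= min_sources

reports = [
    {"object_id": "T-17", "source": "satellite"},
    {"object_id": "T-17", "source": "uav_video"},
    {"object_id": "T-42", "source": "sigint"},
]
print(corroborated("T-17", reports))  # True: two independent sources agree
print(corroborated("T-42", reports))  # False: single-source, withheld pending further ISR
```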

To some extent this has been recognised by the ICRC, which has acknowledged that “Indeed, the careful use of AI-based systems may facilitate quicker and more comprehensive information analysis, which can support decisions in a way that enhances IHL compliance and minimizes risks for civilians.”

Are AI-DSS turning users into human rubber stamps?

As shown earlier in this article, overconfidence in technical systems is not as widespread as one might think and can be effectively mitigated by a variety of measures. The ICRC’s concern that “someone may plan, decide or launch an attack based solely on the output of an AI-DSS, effectively acting as a human rubber stamp rather than assessing the legality of the attack by considering all reasonably available information, including the output of the AI-DSS” is certainly valid. But NATO’s multinational Intelligence, Surveillance and Reconnaissance Force (NISRF) receives data from a wide variety of sources and analyses it extensively to ensure that it is correctly interpreted and transformed into valuable information and intelligence. For the nations of the Alliance, intelligence is a complex product derived from satellite imagery, an ever-increasing number of ground sensors and many systems in between that pierce the “fog of war” and provide insights that give a much more complete and accurate picture than ever before. AI-DSS will further enhance this process, and quantum computing may eventually enable real-time processing of large volumes of unstructured data, further improving the accuracy and reliability of military intelligence.

What are the recommendations of the ICRC on military AI-DSS?

As the ICRC has pointed out correctly,

  • armed forces should consider how AI-DSS can be designed and used in a way that protects civilians, e.g. through tools for tracking movements of the civilian population or sensors that recognize distinctive emblems or signals indicating protected status (see the sketch after this list)

  • the design process for AI-DSS must anticipate human interaction with such systems

  • the use of AI-DSS for detention operations can improve IHL compliance but should not replace human reviews of detentions entirely.
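To illustrate the first of these points (the emblem labels and function below are hypothetical and do not describe an existing capability), a nomination list could be screened against objects on which a recognition model has detected a distinctive emblem, so that protected objects never reach the operator as recommendations:

```python
PROTECTED_EMBLEMS = {"red_cross", "red_crescent", "red_crystal", "blue_shield"}

def screen_nominations(nominations, detections):
    """Withhold any nominated object on which a protected emblem was detected.
    `detections` maps object IDs to the set of emblems a vision model found."""
    cleared, withheld = [], []
    for obj_id in nominations:
        if detections.get(obj_id, set()) & PROTECTED_EMBLEMS:
            withheld.append(obj_id)  # protected status indicated, never recommended
        else:
            cleared.append(obj_id)
    return cleared, withheld

cleared, withheld = screen_nominations(
    ["veh-01", "veh-02"],
    {"veh-02": {"red_cross"}},
)
print(cleared)   # ['veh-01']
print(withheld)  # ['veh-02'] carries a protective emblem and is excluded
```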


The ICRC’s other guiding principles for AI-DSS in Annex I of its submission to the UN Secretary-General are mostly reasonable. This applies in particular to the ICRC’s recommendations that

  • Datasets that are processed with AI-DSS shall be reliable and validated,

  • Developers and users of AI-DSS should take appropriate measures to mitigate gender, racial, ethnic, disability and other similar forms of bias in the design and use of AI-DSS, including in the underlying datasets and training methods,

  • Developers should ensure that AI-DSS undergo rigorous testing and validation in environments that simulate the complexity of armed conflict,

  • AI-DSS shall be re-tested whenever their intended use changes or when they are modified in a way that affects their functions and/or effects,

  • AI-DSS that form part of weapon systems should be subject to a legal review to establish their compliance with international humanitarian law (IHL),

  • States should develop and adapt appropriate doctrinal frameworks, standard operating procedures (SOPs), tactics, techniques and procedures (TTPs) and other guidance to ensure compliance with IHL and other applicable laws and policies,

  • Users of AI-DSS must receive adequate training on the functions and vulnerabilities of AI-DSS as well as human tendencies to interpret their output,

  • Targeting processes must consider information from all sources reasonably available,

  • Users of AI-DSS should conduct timely and objective after-action reviews to ensure their proper functioning.

The ICRC’s other recommendations will require careful consideration and differentiation.

No AI-DSS in nuclear weapon command-and-control systems?

The ICRC’s call for AI-DSS to be categorically excluded from integration into nuclear weapon command-and-control systems still seems to be based on the malfunctioning Soviet early warning system of more than 40 years ago. Comparing today’s advanced AI-DSS to that first version of the early warning system is not only like comparing apples to oranges, it also ignores the fact that AI-DSS, if used properly, can actually significantly reduce the risk of nuclear war. Admittedly, recognizing this fact requires more effort than adhering to Hollywood’s popular “WarGames” and “Skynet” narratives, but the stakes are far too high to be guided by science fiction rather than science alone.

Science, not fiction. Image by cash-macanaya for Unsplash+

Do AI decision support systems always have to be predictable?

Requiring developers to “ensure that AI-DSS produce outputs that are sufficiently predictable” is only reasonable to a certain extent. The purpose of AI-DSS is not limited to the representation of collected data, but also includes the discovery of unknown, and therefore unexpected, connections between such data. Connecting the dots can thus provide a better understanding of complex relationships between data sets, which in turn can help to identify potential threats and opportunities earlier. Limiting AI-DSS to producing only predictable outcomes might maximize their security, but would make them much less useful and even redundant, as the mere collection and output of data does not necessarily require a sophisticated AI system. In future warfare, the warring party that is able to analyse and extract the most useful information from the big data collected on the battlefield is likely to win.
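A toy example of what “connecting the dots” can mean in practice (entity names are invented): even a simple co-occurrence analysis across intelligence reports can surface links between entities that no single report states explicitly, which is exactly the kind of unplanned output a strict predictability requirement might rule out:

```python
from collections import Counter
from itertools import combinations

def surface_links(reports, min_cooccurrence=2):
    """Count how often pairs of entities appear in the same report and
    return the pairs seen together often enough to merit analyst attention."""
    pair_counts = Counter()
    for entities in reports:
        for pair in combinations(sorted(set(entities)), 2):
            pair_counts[pair] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_cooccurrence]

reports = [
    {"group_x", "port_a", "vessel_k"},
    {"group_x", "vessel_k"},
    {"port_a", "broker_m"},
]
print(surface_links(reports))  # [('group_x', 'vessel_k')]: a link no analyst asked for
```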

Should military decision-making be slowed down to allow for human evaluation of AI output?

The deliberate slowing down of military decision-making should be limited to cases where sufficient ISR (Intelligence, Surveillance, Reconnaissance) data indicates that there is no imminent threat requiring a rapid decision and response.
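A hedged sketch of that principle (threshold and field names are purely illustrative): the workflow compresses human review only when the fused ISR picture indicates an imminent threat, and otherwise enforces the full deliberate process:

```python
def review_mode(isr_picture, imminence_threshold=0.8):
    """Choose the review process based on how imminent the fused ISR picture
    rates the threat; the default is the full deliberate review."""
    if isr_picture.get("threat_imminence", 0.0) >= imminence_threshold:
        return "time-sensitive review"   # compressed, but still human-approved
    return "full deliberate review"      # AI output scrutinised step by step

print(review_mode({"threat_imminence": 0.95}))  # time-sensitive review
print(review_mode({"threat_imminence": 0.30}))  # full deliberate review
```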

Should we rule out the use of AI decision support systems to target individuals?

As explained in Part 1 of this article with regard to AWS, it would be unwise to categorically exclude the use of AI in anti-personnel targeting. Doing so would make AI-generated intelligence from reliable sources and satellite imagery inaccessible for targeting known terrorists and warlords committing genocide, even when such intelligence is unquestionably accurate and acting on it would save countless lives.

Risks arising from hackers, influencers and other ICT users

Last but not least, in its submission to the Secretary-General of the UN, the ICRC also identified risks related to the use of AI in information and communication technologies (ICT) that have raised concerns among states. While these concerns are valid in both war and peacetime, this use of AI can also have beneficial effects:

In cyber warfare, compromising data integrity through malware such as trojans or viruses can disable enemy facilities used for military purposes. The Stuxnet network worm, which sabotaged an Iranian military uranium enrichment facility, was the most prominent example of such malware and may have prevented a nuclear attack. However, data poisoning attacks on military AI systems and autonomous weapons systems could also affect the legitimate defence of states against aggressors.

The use of kinetic force may be replaced or complemented by targeted cyber attacks on civilian infrastructure. If entire cities or even states are crippled by such attacks, for example on public transport, which I discussed in an earlier article, the need to resort to kinetic force may disappear altogether. On the face of it, this can be seen as beneficial, but when power outages cut off life support for patients in hospitals, not so much.

AI can also be used in peacetime for industrial espionage, ransomware attacks or identity theft, all of which are of course illegal.

International humanitarian law is unlikely to be able to prevent the risks posed by such uses of AI, and the ICRC has not proposed any specific measures to address them. In the European Union, the NIS 2 Directive and the Cyber Resilience Act will hopefully do the job.

Conclusion and outlook

Although autonomous weapon systems (AWS) are often the first to come to mind when talking about military artificial intelligence, AI decision support systems may be even more important for future warfare, as they may have a greater strategic impact. It is likely that they will quickly become indispensable for predicting and effectively countering attacks, while protecting civilians from the use of kinetic force. As their name suggests, AI-DSS do not make life-and-death decisions, but simply support the complex military decision-making process, now and for the foreseeable future. They will clear the fog of war and allow better, more informed decisions to be made, making attempts to change borders by force pointless, because AI-DSS could effectively stop them in their tracks before they could unfold. And that will be a good thing, won’t it?

About the author

With more than 25 years of experience, Andreas Leupold is a lawyer trusted by German, European, US and UK clients.

He specializes in intellectual property (IP) and IT law and the law of armed conflict (LOAC). Andreas advises clients in the industrial and defense sectors on how to address the unique legal challenges posed by artificial intelligence and emerging technologies.

A recognized thought leader, he has edited and co-authored several handbooks on IT law and the legal dimensions of 3D printing/Additive Manufacturing, which he also examined in a landmark study for NATO/NSPA.

Connect with Andreas on LinkedIn