Welcome to the first instalment of our series on selected essential principles of international humanitarian law (IHL). Over the coming posts we will explore key rules governing the conduct of hostilities: rules that are vital both to armed forces and defense contractors and to the broader public seeking to understand how the law of armed conflict (“LOAC” or “ius in bello”) works in practice. In this first article, we turn to a foundational norm: the principle of distinction as codified in Article 48 of Additional Protocol I to the Geneva Conventions of 12 August 1949 (hereinafter: “Additional Protocol 1” or “AP1”) and reflected in customary international humanitarian law.
Estimated reading time: 11 minutes
Introduction: What is the Principle of Distinction?
As the International Committee of the Red Cross (ICRC) commentary puts it, this is the “basic rule” of protection and distinction—“the foundation on which the codification of the laws and customs of war rests.”
The principle of distinction requires that, in the conduct of hostilities, the parties to a conflict identify and target only legitimate military objectives and at all times refrain from directing operations at civilians or civilian objects. It is recognized not only in treaty law but is also restated almost verbatim as Rule 1 of the ICRC’s study on customary IHL.
Given its fundamental nature, this principle permeates the entire conduct-of-hostilities regime: the definition of “attack”, the prohibition of indiscriminate attacks, the rules on proportionality and precautions, and the obligations to verify targets all flow from or are shaped by the distinction obligation.
Why the Principle of Distinction Is Increasingly Challenged
Yet, despite its centrality, the principle is under pressure in contemporary armed conflicts. Several factors contribute:
- Attrition and a total-warfare mindset: In long, attritional conflicts, civilian infrastructure is increasingly targeted deliberately to break morale, degrade logistics bases or disrupt support networks. Such deliberate targeting is prohibited and signals an erosion of respect for the distinction rule in practice.
- Blurred battlefield lines: Modern conflicts increasingly involve non-state armed groups, urban warfare, insurgencies and hybrid forms of warfare. Civilians may be intermingled with fighters, military objectives may be embedded in civilian infrastructure, and non-combatant support functions may themselves be militarized. This makes the required dynamic distinction more challenging.
- Indiscriminate means and methods of warfare: The use of weapons or tactics that cannot be directed precisely at military objectives (cluster munitions, large-area bombardment) undermines the distinction obligation and results in indiscriminate attacks, which we will return to below. In short, while the principle of distinction remains mandatory, the changing character of warfare subjects its implementation to new stress tests.
The Principle of Distinction in Cyber Warfare
While the classic model of armed conflicts features kinetic weapon-systems, the modern battlefield increasingly incorporates cyber-attacks, electronic warfare, information operations and hybrid campaigns. Some of the key issues are:
- What constitutes a “military objective” in cyberspace? A server, data center or communication hub may serve both civilian and military functions. Under the logic of distinction, a cyber target must make an effective contribution to military action, and its neutralization must offer a definite military advantage, for it to qualify as a legitimate military objective (Article 52(2) AP1). If, as is common in cyberspace, civilian functions predominate, establishing such a contribution requires close inspection and verification.
- “Operations” in the cyber domain: Article 48 AP1 requires that operations be directed only at military objectives. Cyber operations must therefore be planned and executed with targeting processes that filter out civilian functions or civilian users as far as feasible. This requires a verification process that adequately reflects the substantial differences between cyber operations and the application of kinetic force.
- Collateral effects and cascading dependencies: A cyber-attack on infrastructure may disable a dual-use asset with significant civilian dependency (e.g., water-supply SCADA systems). Even if the initial target is a military objective, the indirect effects on civilians must be foreseen and mitigated. Strictly speaking, however, the principle of distinction asks first whether the object is a military objective; the downstream civilian effects fall more aptly under the principle of proportionality.
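The cascading-dependency problem can be made concrete with a small sketch. The following Python snippet (a toy illustration with entirely hypothetical asset names and a made-up dependency graph, not any real planning tool) traverses the graph to enumerate civilian assets that rely, directly or indirectly, on a prospective cyber target:

```python
from collections import deque

# Hypothetical dependency graph: edges point from an asset to the
# assets that rely on it (directly).
DEPENDENTS = {
    "military_logistics_server": ["regional_scada"],
    "regional_scada": ["water_treatment_plant", "base_fuel_depot"],
    "water_treatment_plant": ["civilian_water_supply"],
    "base_fuel_depot": [],
    "civilian_water_supply": [],
}

# Assets designated as civilian in this toy example.
CIVILIAN_ASSETS = {"water_treatment_plant", "civilian_water_supply"}

def downstream_civilian_effects(target: str) -> set:
    """Breadth-first traversal listing civilian assets that depend,
    directly or indirectly, on the targeted asset."""
    affected, queue, seen = set(), deque([target]), {target}
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                if dep in CIVILIAN_ASSETS:
                    affected.add(dep)
                queue.append(dep)
    return affected
```

On this toy graph, disabling the logistics server would ripple through the SCADA system to the water-treatment plant and the civilian water supply, which is exactly the kind of second-order effect a planner would need to surface before, not after, an operation.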
- Attribution and temporal dynamics: Cyber operations often lack the spatial and temporal clarity of kinetic strikes: the adversary may be diffuse, the target may be virtual. Ensuring distinction under these conditions demands robust intelligence, verification, and the capacity to discriminate among users and functions. However, since cyber attacks are often designed to conceal their perpetrator(s) and the states or organizations responsible for them, it would be misguided to demand certainty about their attribution before cyber defense measures can be taken.
Artificial Intelligence as an Enabler of Distinction and Protection of Civilians
The potential use of artificial intelligence in the military domain has drawn great concern and much criticism from non-governmental organizations (NGOs) and some AI scholars, who have argued against autonomous weapon systems (AWS) and even AI-supported decision support systems (DSS).
What has drawn much less attention, however, is the fact that AI can, if designed and governed properly, become a powerful tool to improve compliance with the principle of distinction in armed conflicts.
Even the International Committee of the Red Cross (ICRC), which is highly critical of military uses of AI, has repeatedly conceded that AI and machine-learning-based decision-support systems (AI DSS) “may enable better decisions by humans in conducting hostilities in compliance with international humanitarian law and minimizing risks for civilians by facilitating quicker and more widespread collection and analysis of available information.”
Moreover, a growing body of defense-oriented and academic work argues that AI, especially in intelligence, surveillance and reconnaissance (ISR) and targeting support, can help reduce misidentification, improve the accuracy of collateral damage estimates, and support more robust precautions in attack.
1. AI-enhanced ISR and target verification
AI-enabled ISR systems can process large volumes of imagery and sensor data from multiple sources, highlighting patterns and anomalies that might be missed by humans working under time pressure. Roberts and Venables, writing for NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCOE), argue that advances in AI-driven image recognition and object detection “will provide greater certainty in distinction, positive target identification and more precise collateral damage estimates.” (CyCon 2021, p. 10).
In practical terms, this means:
- More reliable identification of military objectives: AI can help discriminate between, for example, a weapons cache and a purely civilian warehouse, or between an improvised explosive device and a harmless roadside object, by fusing visual, signals and contextual data.
- Fewer false positives: By cross-checking multiple data streams (ISR feeds, open-source intelligence, prior pattern-of-life analysis), AI-supported systems can lower the risk that civilians or civilian objects are mistakenly classified as military objectives.
From a distinction perspective, these capabilities can enable commanders to satisfy Article 48’s requirement to direct operations only against military objectives on the basis of a better, more up-to-date intelligence picture.
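The cross-checking of multiple data streams described above can be illustrated with a minimal corroboration rule (a hypothetical sketch; the function name, stream labels and thresholds are invented for illustration and do not reflect any fielded system): a target is only classified as a candidate military objective when several independent sources agree, and otherwise defaults to protected status for human review.

```python
def corroborated_classification(stream_scores: dict,
                                threshold: float = 0.8,
                                min_sources: int = 2) -> str:
    """Classify as a candidate military objective only when at least
    `min_sources` independent streams each score above `threshold`;
    otherwise presume civilian status pending human review."""
    corroborating = [s for s, p in stream_scores.items() if p >= threshold]
    if len(corroborating) >= min_sources:
        return "candidate_military_objective"
    return "presume_civilian"

# A single confident stream is not enough under this rule:
single = corroborated_classification({"isr_feed": 0.95})
# Two corroborating streams cross the bar:
multi = corroborated_classification(
    {"isr_feed": 0.90, "sigint": 0.85, "osint": 0.40})
```

The design choice here mirrors the legal logic: in case of doubt, Article 50(1) AP1 presumes civilian status, so the default output of such a filter should err on the side of protection.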
2. Using AI to detect civilians, protected objects and collateral-damage risks
One of the most concrete, empirically grounded lines of work is the use of AI to detect civilians and protected objects in or near a target area and to highlight changes that could affect the lawfulness of an attack. A detailed 2022 CNA study on Leveraging AI to Mitigate Civilian Harm maps specific AI applications to real-world patterns of civilian harm observed in Afghanistan and Iraq. It identifies, among others, the following use cases:
- Detecting transient civilians: AI-based change-detection algorithms compare imagery used in a collateral damage estimate (CDE) with more recent imagery, automatically flagging new vehicles or people that may indicate unexpected civilian presence near a target.
- Recognizing protected symbols and sites: Computer-vision models trained to detect emblems like the red cross or red crescent, as well as other indicators of medical or religious sites, can alert operators when such symbols appear in the target area—providing a “safety net” where human observers might miss them.
- De-conflicting with critical civilian infrastructure: AI tools can map and update the proximity of planned strikes to power grids, water systems, hospitals or other objects essential to the civilian population, thereby supporting both distinction (by identifying civilian objects) and proportionality assessments.
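The de-confliction use case in particular lends itself to a simple geometric sketch. The snippet below (illustrative only; the site names, coordinates and review radius are hypothetical) checks a planned aimpoint against a registry of protected and civilian objects and flags anything within a review distance:

```python
import math

# Hypothetical registry of protected / civilian objects (lat, lon in degrees).
PROTECTED_SITES = {
    "district_hospital": (33.3125, 44.3615),
    "water_pumping_station": (33.3050, 44.3700),
}

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def deconflict(aimpoint, review_radius_m=500.0):
    """List registered protected sites within the review radius
    of a planned aimpoint, sorted by name."""
    return sorted(name for name, pos in PROTECTED_SITES.items()
                  if haversine_m(aimpoint, pos) <= review_radius_m)
```

A real system would of course work from authoritative no-strike lists and account for blast radii and weapon effects; the point of the sketch is only that proximity checks of this kind are cheap to automate and can run continuously as the target picture updates.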
The same CNA study, and subsequent commentary in outlets such as Defense One, have influenced ongoing discussions within the US Department of Defense about using AI to “reduce uncertainty about targets” so that commanders can better identify which targets to strike—and which to hold or abort because of civilian risk.
3. AI DSS across the targeting cycle and training
AI can also support compliance with distinction before and after the “attack” phase:
- Planning and weaponeering: Peter Margulies argues that AI-driven situational awareness technology can make target selection and CDE more accurate, thereby reducing harm to civilians. This includes recommending means and methods of attack that best avoid or minimize incidental civilian harm when several lawful options exist. The ICRC’s 2024 blog likewise notes that AI DSS can help identify the “presence of civilians and civilian objects” and suggest attack options that minimize incidental harm, as long as human operators critically interrogate the outputs.
- Urban operations training: For dense urban environments, AI-based simulations can expose units to realistic patterns of civilian movement, dual-use infrastructure and complex rules of engagement. An analysis published by the Lieber Institute on AI and civilian protection in urban warfare argues that AI-enhanced simulations can “better reflect the multidimensional structure of cities as well as the civilian dimension of operations in urban environments” and provide “new solutions in response to the enemy’s tactics that endanger the civilian population and their livelihoods.”
- After-action review and learning: AI can analyze incident and battle-damage assessment data at scale to identify systemic causes of civilian casualties (e.g. recurring misidentifications in a particular environment or pattern of life), feeding lessons back into doctrine, training and technical design. CNA’s “civilian protection life cycle” model explicitly frames AI as a tool to identify and address recurring drivers of harm across the entire mission cycle, not just at the moment of engagement.
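The after-action use case boils down to aggregation at scale. As a minimal sketch (with invented record labels; real battle-damage assessment data would be far richer), the snippet below counts environment/cause pairs across incident records and surfaces those that recur, as candidates for doctrinal or technical review:

```python
from collections import Counter

# Hypothetical after-action records: (environment, assessed cause) per incident.
incidents = [
    ("urban_night", "misidentified_vehicle"),
    ("urban_night", "misidentified_vehicle"),
    ("rural_day", "stale_imagery"),
    ("urban_night", "pattern_of_life_gap"),
]

def recurring_drivers(records, min_count=2):
    """Count (environment, cause) pairs and keep those occurring at
    least `min_count` times, as systemic drivers worth reviewing."""
    counts = Counter(records)
    return {pair: n for pair, n in counts.items() if n >= min_count}
```

Run over the toy data, only the repeated night-time vehicle misidentification survives the threshold, which is precisely the kind of recurring driver CNA’s “civilian protection life cycle” model would feed back into training and design.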
These uses speak directly to the obligations of constant care and feasible precautions (Articles 57 and 58 AP1) that derive from the principle of distinction.
4. AI for mapping patterns of violence and supporting accountability
AI is also being used on the humanitarian and monitoring side in ways that can feed back into better respect for distinction.
In a 2023 project with the Swiss Data Science Center, the ICRC developed an AI-based tool to reclassify large open-source conflict datasets (ACLED) according to IHL and human-rights categories and to extract information about who did what to whom, and whether victims were civilians or combatants. This allows much faster and more granular mapping of:
- patterns of violence against civilians;
- episodes of restraint by armed forces and non-state armed groups;
- correlations between specific policies or dialogues and changes in the intensity/type of violence.
While this tool is aimed at humanitarian analysis rather than targeting, it illustrates how AI can clarify the factual picture necessary for both accountability and engagement with parties to a conflict on their IHL obligations, including their practice regarding distinction.
Similarly, work on AI-enabled ISR for the protection of humanitarian relief operations suggests that AI can help monitor threats to humanitarian corridors and facilities and improve respect for their protected status by making it easier to detect and attribute attacks on such objects.
5. Design requirements: turning potential into actual compliance gains
The potential for AI to improve distinction is not automatic. It depends on how systems are designed, trained and governed. The most serious empirical and policy work points towards several conditions under which AI is most likely to enhance IHL compliance rather than undermine it:
- Datasets and objectives oriented to civilian protection: Recent SIPRI analysis stresses that bias and poor training data are central risks in military AI, but it also outlines technical and institutional measures (dataset curation, independent testing, red-teaming) to reduce those risks and thereby strengthen compliance with distinction, proportionality and precautions.
- Integration into legal review and doctrine: ICRC guidance on AI DSS and UNODA’s 2024 study on AI governance in the military domain both emphasize that AI systems used in targeting must be subject to rigorous legal review—ideally under Article 36 AP1—and embedded in doctrine, training and rules of engagement that keep humans accountable for compliance with IHL.
- Explainability and auditability for accountability: It has also been noted that AI decision-support tools should be designed to generate understandable, traceable outputs so that commanders and legal advisers can reconstruct why a potential target was flagged and how civilian-risk indicators were weighed. This is critical both for real-time decision-making and for ex post investigations of alleged violations.
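What a traceable, auditable DSS output might look like can be sketched as a simple structured record (hypothetical field names and weights, chosen for illustration): each flag carries the indicators that produced it, the model version, and a timestamp, so that a legal adviser or investigator can reconstruct the reasoning afterwards.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TargetFlagRecord:
    """Traceable record of why a decision-support tool flagged a target,
    supporting both real-time review and ex post investigation."""
    target_id: str
    classification: str
    indicators: dict   # indicator name -> weight that contributed to the flag
    model_version: str
    timestamp: str = field(default_factory=lambda:
                           datetime.now(timezone.utc).isoformat())

    def explain(self) -> str:
        """Human-readable summary, strongest indicator first."""
        ranked = sorted(self.indicators.items(), key=lambda kv: -kv[1])
        lines = [f"{self.target_id} -> {self.classification} "
                 f"(model {self.model_version})"]
        lines += [f"  {name}: weight {w:.2f}" for name, w in ranked]
        return "\n".join(lines)
```

Persisting such records alongside the targeting decision gives commanders something to interrogate in real time and gives investigators an evidentiary trail afterwards, which is what the auditability requirement is really asking for.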
When these conditions are met, AI does not replace the principle of distinction; it becomes part of the toolkit by which armed forces and defense contractors can operationalize distinction in highly complex environments—especially data-rich, time-compressed theatres that would otherwise overwhelm human analysis.
Conclusion
In this first post of our series on essential IHL principles, we have examined the principle of distinction as enshrined in Article 48 of Additional Protocol I and embedded in customary international humanitarian law.
As warfare evolves, with cyber operations, autonomous systems, AI-driven decision support and mixed combatant–civilian environments, the principle of distinction remains unchanged in its core but must be applied with due regard to the particular challenges of all-domain warfare and emerging disruptive technologies (EDT). It demands not only doctrinal awareness but technical, operational and organizational adaptation. Ensuring that military operations are directed only against military objectives is not simply a legal tick-box: it is a vital guarantee of humanity in war, a protection for civilians, and a compass for armed forces operating in complex theaters.
Defense contractors and armed forces must take the basic rule of distinction into account as early as the requirements-definition and design phases of military systems that can affect compliance with this elementary principle of international humanitarian law. Failure to do so can result in IHL violations, undesirable media attention and information warfare with a substantially negative impact on public opinion about new technologies that are becoming indispensable for effective defense across all domains.
In the next instalment of this series, we will turn to another cornerstone of the conduct of hostilities—precautions in attack (Article 57 AP1) and how emerging disruptive technologies affect that rule in concert with distinction.
If you have any questions, or would like to discuss particular aspects like targeting systems auditing, dual-use infrastructure, or cyber defense applications, get in touch with me by email or just give me a call.