In a previous article of this series on select principles of international humanitarian law (“IHL”), also referred to as the law of armed conflict (“LOAC”), we examined the protection of combatants from unnecessary suffering.
We also devoted three separate posts to the obligation of states to conduct legal reviews under Article 36 AP I. These posts addressed weapons, cyber capabilities and autonomous weapon systems (parts 1, 2 and 3).
In this article, we turn to one of the most fundamental rules governing the conduct of hostilities: the definition of military objectives under Article 52 of Additional Protocol I (AP I).
To enable readers to quickly grasp the essential phrasings in treaty provisions and academic sources, we have once again highlighted key passages.
Estimated reading time: 19 minutes
I. Why Does Article 52 AP I Still Matter Today?
Few provisions of LOAC have aged as well as Article 52 AP I. Drafted in 1977, the article still supplies the legal architecture for targeting decisions involving objects its drafters could not have imagined: cloud regions, fibre landing stations, commercial satellite constellations, and AI-enabled sensor networks that now feed the joint targeting cycle in real time.
Recent conflicts have placed Article 52 back at the centre of operational and academic debate. They have not, however, exposed any fundamental defect in the rule. What they have exposed is the difficulty of disciplined application. Legal practitioners struggle to distinguish genuine military objectives from civilian objects when the two are physically intertwined, or technologically fused in ways the drafters of Article 52 AP I could not have anticipated.
This article offers a practitioner-focused refresher on the doctrinal core of Article 52. It then turns to four contemporary challenges: dual-use objects, complex environments, the doubt rule under Article 52(3), and the role of AI-enabled object recognition in meeting the distinction obligation.
II. What Does Article 52(2) AP I Actually Require?
Article 52(2) AP I defines military objectives through two cumulative tests:
“military objectives are limited to those objects which by their nature, location, purpose or use make an effective contribution to military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.”
The attacker must satisfy both tests. Neither is sufficient alone.
The Four Alternative Criteria: Nature, Location, Purpose and Use
The four alternative criteria are not rhetorical flourishes. Each captures a distinct way in which an object can contribute to military action:
- Nature covers objects that are inherently military, such as tanks, command bunkers and weapons depots.
- Location captures objects whose position alone makes them militarily significant. For example, this includes a bridge controlling a chokepoint or a hilltop dominating an axis of advance.
- Use covers civilian objects that belligerents employ for military purposes at the time of the attack.
- Purpose is the most analytically demanding criterion. It refers to an object’s expected future role in military operations, assessed before any actual military use begins. As Yoram Dinstein has explained, purpose turns on known enemy intentions, not on hypothetical worst-case assumptions. This constraint is one that intelligence gathering and pattern-of-life analysis can increasingly satisfy (at pp. 148–149).
What is a “Definite Military Advantage”?
The “definite military advantage” requirement excludes attacks justified by speculative or merely potential gains. Yoram Dinstein’s analysis of Article 52(2) AP I emphasises that the military advantage must be concrete, not hypothetical. Its assessment depends on the circumstances ruling at the time of the attack (at pp. 143–144). This is the so-called Rendulic standard. It protects commanders from assessments based on information that became available only after the fact.
Where Does State Practice Diverge?
State practice broadly confirms the customary status of Article 52(2) AP I. Two divergences are nevertheless worth flagging.
First, several AP I ratifying states — including the United Kingdom, Germany and France — entered interpretive declarations specifying that “military advantage” refers to the advantage anticipated from “the attack as a whole,” not from isolated parts of it. The UK Joint Service Manual of the Law of Armed Conflict reproduces the UK declaration and explains that commanders must assess proportionality against the overall operation rather than against its individual components (at paras 5.20.5 and 5.33.5).
Second, the United States accepts as military objectives objects that contribute to the enemy’s “war-fighting or war-sustaining capability”. This formulation appears in the DoD Law of War Manual at § 5.6.6.2 (at p. 222). Most other states and the Report of the International Law Association Study Group on the Conduct of Hostilities (at p. 341) have rejected the war-sustaining extension as going beyond what Article 52(2) AP I supports. Michael Schmitt has likewise endorsed that conclusion.
This article takes no position on the merits of that divergence. For present purposes, it is enough to note that practitioners advising forces operating in coalition need to know which reading their partners apply.
III. What Does the Doubt Rule under Article 52(3) AP I Demand of Commanders?
Article 52(3) AP I provides that
“in case of doubt whether an object which is normally dedicated to civilian purposes, such as a place of worship, a house or other dwelling or a school, is being used to make an effective contribution to military action, it shall be presumed not to be so used.”
The provision is frequently misunderstood at both extremes. Some readings treat it as a tie-breaker that mechanically resolves marginal cases in favour of civilian status. Others read it as a demand for absolute certainty before an object can be attacked. Both readings miss the mark. The operative threshold is reasonable doubt that a commander must assess on the basis of information available at the time. NATO doctrine tracks this standard in parallel terms: it holds that those executing engagements are accountable both for the information they actually possess and for the information they should reasonably have gathered (AJP-3.9, at § 1.7.b).
How Do States Currently Read the Doubt Rule?
State practice on the customary status and operational meaning of Article 52(3) diverges in instructive ways.
The 2015 edition of the US DoD Law of War Manual stated flatly that “under customary international law, no legal presumption of civilian status exists for persons or objects” (at § 5.5.3.2, p. 197). The July 2023 revision substantially reversed that position. Under § 5.4.3.2 of the current Manual, commanders and decision-makers must now presume that persons and objects are protected unless contemporaneous information identifies them as military objectives. Section 5.4.3.4 makes explicit that this discussion reflects the DoD view of customary international law applicable to assessing whether persons or objects are military objectives, including in cases of doubt (at p. 206).
The French position, articulated in the 2019 paper on International Law Applied to Operations in Cyberspace (at p. 15), is more demanding still. France interprets Article 52(3) AP I as requiring states, in case of doubt, to presume the civilian nature of an object normally dedicated to civilian purposes, rather than to make a fresh determination of whether it makes an effective contribution to military action. France explicitly rejects the contrary approach taken by the Tallinn Manual 2.0 at Rule 102 (at pp. 13–14, footnote 57).
Germany’s 2021 cyber position paper (at pp. 8–9) explicitly endorses the Article 52(3) AP I presumption for objects. It states that where substantive doubts remain as to the military use of an object after a careful assessment, it shall be presumed not to be so used. Germany’s reference to “substantive doubts” places its position between the UK’s “substantial doubt” reading and France’s more demanding approach, which does not condition the presumption on any particular threshold of doubt.
How Does NATO Doctrine Read the Doubt Rule?
NATO’s own multilateral doctrine tracks the same presumption. AJP-3.9 instructs that, where doubt remains as to whether a normally civilian object is contributing to military action, decision-makers must treat that object as civilian (at § 1.7.a).
What This Means in Practice
For practitioners, the operational implication is clear. Article 52(3) AP I operates as a discipline that shapes how practitioners collect, corroborate and weigh intelligence. It does not operate as a procedural box they must tick.
AI-enabled multi-source fusion is directly relevant here. Where a single sensor produces an ambiguous reading, the doubt rule does not prohibit attack. It requires those with access to available means to use them to resolve the ambiguity before forces execute the attack. We return to this point in Section VI.
IV. How Does Article 52 AP I Apply to Dual-Use Objects in Contemporary Warfare?
The term “dual-use object” has become standard shorthand in the operational and academic literature for objects that serve both civilian and military functions. It does not, however, appear in Additional Protocol I. The drafters knew that civilian objects can become military objectives through use. They wrote Article 52(2) to capture that reality without creating a separate legal category. As Oona A. Hathaway and others have emphasised, treating “dual-use” as a legal category rather than as descriptive shorthand risks blurring the line the article was designed to draw. The DoD Law of War Manual makes this point at § 5.6.1.2 (p. 217). We retain the term in this article for ease of reference, while applying the doctrinal analysis Article 52(2) actually requires.
From Bridges to Cloud Regions
The classic dual-use problem concerned infrastructure like bridges, power generation, transport hubs, fuel depots and telecommunications. The contemporary problem extends to:
- Data centres hosting both civilian and military workloads
- Cloud regions serving defence and commercial customers from the same hardware
- Undersea cables carrying mixed traffic
- Commercial satellite services providing imagery and connectivity to armed forces
France’s 2019 cyber paper (at pp. 13–14) confirms that ICT equipment, systems, data, processes and flows can all qualify as military objectives under Article 52(2) AP I where they meet the nature, location, purpose or use criteria.
Doctrine Has Not Changed — But Analysis Requires Discipline
Doctrinally, the analysis remains the one set out in Article 52(2).
When does an object become a military objective?
An object becomes a military objective when its use makes an effective contribution to military action and its destruction offers a definite military advantage. The presence of substantial civilian use does not by itself prevent qualification as a military objective. As Michael N. Schmitt has observed, an object can qualify under the “use” criterion “even when the extent of civilian use far outweighs military reliance on it.”
Yoram Dinstein puts the point in categorical terms. As he observes, almost any civilian object can become a military objective through military use, even though civilian objects are inherently protected under the jus in bello (at p. 150). The proposition is uncomfortable but doctrinally correct. It is precisely why the proportionality and precautions obligations in Articles 51(5)(b) and 57 AP I carry the weight they do.
What does civilian use trigger?
What civilian use does trigger is a separate and demanding obligation. Articles 51(5)(b) and 57 AP I require an attacker to weigh proportionality and to take all feasible precautions in attack. Decision-makers must satisfy these obligations before any attack on a dual-use object becomes lawful. The indirect effects on civilians of an attack on dual-use infrastructure form part of the proportionality test, not a separate distinction question. As Michael N. Schmitt has explained in the context of Russian attacks on Ukrainian power infrastructure, proportionality assessments must account for both the direct effects of an attack and its indirect, “reverberating” effects on the civilian population. These include effects on essential services and infrastructure.
The DoD Law of War Manual reaches the same two conclusions at § 5.6.1.2 (at p. 217): it rejects “dual-use” as an intermediate legal category between military objective and civilian object. It also confirms that once an object has been classified as a military objective, the proportionality analysis must still account for the foreseeable civilian harm flowing from attacking it.
NATO’s AJP-3.9 reaches the same conclusion from the operational side. It notes that objects with mixed civilian and military use raise heightened proportionality concerns compared with purely military targets. This is due to the prolonged civilian harm that destroying such infrastructure tends to produce (at § 1.7.a).
A Brief Note on Data as a Military Objective
State practice on whether data can constitute a military objective remains genuinely divided. Most experts who drafted the Tallinn Manual 2.0 concluded that data, being intangible, does not qualify as an “object” under the law of targeting. It therefore does not enjoy the protection of distinction in its own right (as reported by the ILA Study Group at pp. 338–340 without reaching a consensus on this issue).
The French practice
France, by contrast, has taken a clear state-level position in favour of affirmative protection. In its 2019 cyber paper (at pp. 14–15), France considers that “civilian content data may be deemed protected objects” under the principle of distinction. It also states that “the special protection afforded to certain objects extends to systems and the data that enable them to operate.”
The German view
Germany’s 2021 cyber paper (at p. 8) states that data stocks can become a military target when put to military use, whether exclusively or alongside civilian use. This formulation implicitly treats data as capable of being an “object” under the law of targeting. It does not, however, go as far as France in affirmatively protecting civilian content data.
Denmark’s position
Denmark’s 2023 position paper (at p. 455) aligns with the Tallinn Manual majority on the threshold question. It takes the view that digital data generally does not qualify as an object under IHL. It qualifies that position with a secondary-effects test. Under the test, the destruction of data may still amount to an attack where it foreseeably results in injury, death or physical damage to individuals or physical objects.
The United States has not taken an explicit public position on this point, but its general approach is consistent with the Tallinn Manual majority.
The legal status of data as a target is genuinely unsettled and varies meaningfully by state. Judge advocates advising on cyber and information operations need to know this.
V. How Does Article 52 AP I Apply to Targeting in Complex Environments?
Modern targeting rarely happens in the open. Urban terrain, civilian shielding, mixed-use infrastructure and the deliberate co-location of military assets with protected sites have changed the Article 52 AP I determination: it is now an intelligence-intensive process rather than a purely legal one.
Article 52 in the Joint Targeting Cycle
NATO doctrine reflects this reality. AJP-3.9, the Allied Joint Doctrine for Joint Targeting, structures the process as a six-phase joint targeting cycle. It operationalises the Article 52(2) determination long before forces release any weapon (at § 1.5.1). Target development, at Phase 2, explicitly requires that legal advisors work alongside targeteers to confirm the lawfulness of each candidate target. It ends with the placement of validated targets on the Joint Target List, the Restricted Target List, or the No-Strike List. AJP-3.9 integrates legal review, collateral damage estimation, and the maintenance of no-strike and restricted target lists as standing features of the cycle. These are not ad hoc add-ons (at §§ 1.6, 1.7.i and 4.5).
The Joint Targeting Cycle is, in operational practice, the architecture through which practitioners apply Article 52 AP I. Most of its work happens before anyone in the cockpit or the operations centre sees a target on a screen.
Distinction is an Intelligence Burden, Not Only a Legal One
Two practical points follow.
First, rules of engagement and commander’s intent are not extraneous to Article 52. They are operational expressions of it. AJP-3.9 is explicit that policy considerations may justify imposing tighter constraints on targeting than IHL/LOAC itself requires. However, such considerations “may never be more permissive” (at § 1.6). Mission parameters that constrain target sets, geographic scope, and authorised effects translate the legal determination into executable orders. Target list management provides the operational architecture through which practitioners maintain and amend protected entities, including no-strike, restricted, and joint target lists (at § 4.5).
Second, the burden of distinction is as much an intelligence burden as a legal one. The lawful target is the one the commander reasonably believes to be a military objective. That belief must rest on all reasonably available information. As the volume and quality of that information increases, so does what reasonableness requires. This is the central insight that connects Article 52 to the role of AI in modern targeting. We now address that role.
VI. Can AI-Enabled Targeting Meet the Distinction Obligation?
A significant strand of recent commentary, drawing on contested reporting about AI-assisted targeting in Gaza, argues that AI-enabled decision-support systems are eroding compliance with core targeting obligations under international humanitarian law. On this view, the prevailing framing of these systems as “merely” support tools fails to capture that erosion. The most developed version of this argument, by Jessica Dorsey and Marta Bo, focuses specifically on the principle of precautions in attack under Article 57 AP I. It contends that the “speed and scale” of AI-generated targeting, the limited explainability of these systems, and embedded “automation and action biases” collectively risk shifting how practitioners operationalise the obligation to take all feasible precautions within the joint targeting cycle. The argument merits serious consideration and a direct response.
Reframing the Legal Question
Two preliminary observations are in order.
First, the underlying legal questions about how AI-enabled decision-support systems affect compliance with Articles 52 and 57 AP I are real and worth engaging with on their merits, regardless of whether the specific reporting about targeting in Gaza turns out to be accurate. Dorsey and Bo themselves are explicit that their case study is illustrative rather than evidentiary.
Second, the legal question is not whether AI object recognition is perfect. Nothing in Article 52 AP I or Article 57 AP I requires perfect identification or perfect precautions. The legal question is whether AI-assisted targeting can meet or exceed the reasonable commander standard under Article 52(2) AP I read with the precautions obligation in Article 57 AP I. Reframed this way, the answer is much more interesting than the dominant narrative suggests.
The Real Question Is About ISR Quality
Here is the point that the dominant debate consistently misses. Compliance with Article 52 AP I has always depended less on the means of attack than on the quality of information on which the targeting decision rests. AI-enabled object recognition, decision-support systems, and even autonomous weapon systems are downstream of that information. They process, prioritise and act on data collected by other systems: sensors, signals intelligence, human intelligence, multi-source fusion.
If the ISR upstream is poor, no amount of sophistication in the AI layer can rescue the targeting decision. If the ISR upstream is good, even comparatively modest AI tooling can elevate compliance above the historical baseline. The honest debate about AI in targeting is therefore, at bottom, a debate about intelligence architecture.
This reframing also exposes a weakness in some of the more critical commentary on AI decision-support systems. The argument that fusing multiple ISR sources into a single processed output deprives operators of information they should have access to deserves separate treatment in a future post. For present purposes, it is enough to note that cross-referenced and validated data from multiple sources is not, on its face, less reliable than raw single-source feeds. We will return to this question.
The Empirical Record on Object Recognition
The empirical record on automated target recognition is candid about both capabilities and limits.
The RAND Study
A 2020 RAND study by Gavin Hartnett and colleagues tested whether ATR models could be trained on artificial imagery generated from a commercial video game engine. The aim was to bypass the acute scarcity of labelled military training data. The pure-synthetic approach failed: models trained on artificial images alone transferred essentially not at all to real-world test images. A hybrid approach, combining real and synthetic images, produced a statistically significant performance boost, most pronounced in severely data-limited conditions. Even so, the authors were explicit that the resulting models would “likely not be immediately useful in a real operational scenario” and might at best serve as a “second set of eyes” for a human analyst.
The SIPRI Report
A 2025 SIPRI research report by Laura Bruun and Marta Bo reaches a complementary conclusion: AI-enabled targeting systems can encode and amplify existing biases in ways that complicate compliance with the principle of distinction. The report observes that bias in military AI cannot be entirely removed, but its consequences can be mitigated through requirements in the development and by applying operational limits to the use of such systems. Most of the experts consulted for the report’s workshop concluded that AI-enabled autonomous weapon systems should be limited to attacks on military objectives by nature, and should not be used against persons or military objectives whose status as lawful targets depends on assessments of human behaviour (p. 21) — precisely because such behaviour-dependent classifications are particularly vulnerable to bias-driven error.
That recommendation, while well-reasoned on its own policy terms, is more restrictive than what Articles 52 and 57 AP I strictly require, and practitioners advising forces operating with AI-enabled systems should keep the distinction between legal floor and prudential ceiling clearly in view.
Neither finding supports the conclusion that decision-makers cannot lawfully use AI in targeting. Both support the conclusion that lawful use depends on disciplined design, testing, operational constraint, and continued reassessment of how such systems perform once fielded. This is exactly the lifecycle approach we identified in our earlier post on weapon reviews under Article 36 AP I.
The Affirmative Case: AI as a Feasible Precaution
The affirmative case has begun to emerge in practitioner-oriented scholarship. AI-enabled decision support, properly designed and properly used, can elevate the level of care that a commander is reasonably expected to exercise. In a recent treatment of AI in US Army counterfire operations, Megan Ezekannagha argues that AI systems do not lower the bar that distinction and proportionality set in time-constrained engagements; instead, they can themselves function as a feasible precaution, by raising the level of care that a commander can reasonably bring to bear on targeting decisions.
The same logic applies to Article 52 AP I. Where multi-source sensor fusion, pattern-of-life analysis and real-time corroboration are available, the reasonable commander standard requires decision-makers to use them. Failing to use available AI tools to verify a target is, on this view, a failure of feasible precautions, not a virtue.
NATO doctrine operationalises this understanding directly. AJP-3.9 defines “feasible” by reference to what is practically achievable in the circumstances at hand, given the information reasonably available to the decision-maker (at § 1.5.1, fn. 28). On that standard, the reasonable commander cannot decline to use AI-enabled tools that are practically available and that would meaningfully improve target verification.
A Necessary Qualification
Klaudia Klonowska’s conceptually rigorous work adds an important qualification. Reasonableness is not a fixed standard that AI either meets or fails. It is shaped by tool design, the interfaces commanders use, the time pressures the system imposes, and the institutional policies that govern its use. Poorly designed AI can compress the time available for human judgement and degrade reasonableness rather than enhance it.
These findings point in a different direction. The legal review obligation under Article 36 AP I, the procurement standards and operator training requirements we discussed in our earlier post on AWS reviews, and the doctrinal frameworks set out in NATO and US guidance all form part of how practitioners achieve compliance with Article 52 AP I when AI sits in the targeting loop. This framework operates in practice, not in abstraction.
The Structural Answer to the Critics
The structural answer to the critics’ challenge is therefore not that decision-makers can trust AI in the abstract. It is that institutional and procedural disciplines have always structured how practitioners apply Article 52 AP I. These include the collection of intelligence, the corroboration of identifying signatures, the validation of targets, the legal review of proposed strikes, and the command authorisation of engagements. AI tools are now part of those disciplines.
What matters is whether the remaining human judgement draws on the best available means of identifying military objectives. Where it does, it strengthens distinction rather than weakening it. Where poorly designed or inadequately tested tools compress human decision-making rather than inform it, the failure lies in implementation, not in the rule itself.
VII. Conclusion and Outlook
Article 52 AP I has weathered nearly half a century of doctrinal stress-testing without any serious challenge to its core architecture. The provision remains sufficient to govern targeting decisions in the contemporary conflicts that have brought it back into the spotlight.
The contemporary environment demands disciplined application of the existing rule: rigorous intelligence, honest assessment of doubt, careful handling of dual-use objects, and the integration of AI tools as instruments of distinction. The rule itself is sufficient; the work is in applying it well.
Compliance with the law of armed conflict is not in tension with operational effectiveness. In properly designed systems and properly trained forces, the two converge. The states and armed forces that will navigate the contemporary targeting environment most successfully are those that treat Article 52 AP I as a working discipline embedded in design, doctrine, training, procurement and command, rather than as a slogan invoked after the fact.
In our next article of this series, we will turn to one of the most pressing questions raised but not resolved by Article 52 AP I in modern warfare: whether and when data itself can constitute a military objective.