Thunderforge: Advancing AI in Defense

The U.S. Defense Innovation Unit (DIU) has announced the deployment of Scale AI’s Thunderforge Decision Support System for U.S. Indo-Pacific Command (INDOPACOM) and U.S. European Command (EUCOM) to enhance mission-critical planning.

What is Thunderforge? Thunderforge is an advanced AI-powered decision support system (DSS) that integrates AI with human oversight. Developed by Scale AI and supported by Microsoft, Anduril, and Google, this system is a significant technological and strategic move—especially during times of heightened conflict in Europe. It is hoped that U.S. allied forces will also benefit from Thunderforge’s insights.

But why are AI DSS solutions crucial for European defense readiness? This article explores the importance of AI in military decision-making and the challenges surrounding this technology.

A Short History of Destructive Testing and Simulation

In the civil sector, destructive testing has long been used to assess materials under stress. For example, Mercedes-Benz conducted its first crash test in 1959, which contributed to a drastic reduction in traffic fatalities (source). Computer crash test simulations had a rocky start in the 1980s due to the limited computing power of the time, but they made inroads in the 1990s and were soon adopted by all major carmakers for all models.

Why Destructive Testing Isn’t Always Feasible in Defense

Military crash tests, like the 1978 simulation of a fighter jet crashing into a nuclear power plant (source), highlight why real-world destructive testing is often impractical. Similarly, testing the effectiveness of weapons in actual combat would not only be more time-consuming and impossible in peacetime, it could also raise ethical concerns when computer simulations offer a better alternative.

AI-Powered Decision Support Systems: A Solution

AI Decision Support Systems (DSS) are primarily used for military modeling and simulation. According to NATO, AI-assisted simulations can play a key role in improving military decision-making (source). These systems allow for safe and precise analysis of battlefield scenarios, minimizing casualties while improving operational outcomes.

Why Military AI Decision Support Systems Are Crucial for Defense

By 2025, the world is expected to generate 181 zettabytes of data (source). This includes vast amounts of military data, which exceed human processing capabilities.

For example, a modern military section of 8 soldiers has been estimated to generate 270 gigabytes per hour and 6.5 terabytes per day during missions (source). AI-driven DSS like Thunderforge can process and analyze this data effectively, integrating intelligence, surveillance, and reconnaissance (ISR) information collected by satellites and other means.
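
As a quick plausibility check of these figures, the short Python sketch below simply extrapolates the quoted hourly rate to a daily volume, assuming continuous collection and decimal (SI) units; it is an illustration of the arithmetic, not part of any real system.

```python
# Sanity check of the data volumes quoted above: the per-day figure is the
# hourly estimate extrapolated over 24 hours of continuous collection.
GB_PER_HOUR_PER_SECTION = 270        # estimated output of an 8-soldier section
HOURS_PER_DAY = 24

gb_per_day = GB_PER_HOUR_PER_SECTION * HOURS_PER_DAY
tb_per_day = gb_per_day / 1000       # decimal units: 1 TB = 1000 GB

print(f"{gb_per_day} GB/day ≈ {tb_per_day:.1f} TB/day")  # 6480 GB/day ≈ 6.5 TB/day
```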

Concerns About the Use of AI DSS in Military Operations

Do AI DSS kill people? No. AI DSS do not make autonomous “life-or-death” decisions. Instead, they provide intelligence and target identification, which are ultimately reviewed by military personnel. This has also been acknowledged by organizations such as the Geneva Academy and the International Committee of the Red Cross, which nevertheless remain critical of military DSS (source).

Several concerns have been voiced against military AI DSS, of which two shall be addressed here:

  1. Automation Bias – Over-reliance on AI recommendations, assuming they are always correct.
  2. Acceleration of Warfare – The fear that AI-driven decisions could escalate conflicts faster than human oversight allows.

Mitigating Automation Bias in Military AI

Research suggests that automation bias is most common among individuals with low AI literacy (source), which indicates that such bias can be effectively counteracted by adequately training the military personnel who use these systems. Other best practices include the following (a minimal code sketch of the human-in-the-loop and verification points follows the list):

  • Using Explainable AI – Systems must provide transparency about how decisions are made.
  • Maintaining a Human-in-the-Loop – AI recommendations should be reviewed by trained personnel.
  • Regular Audits – Frequent evaluations ensure AI accuracy and reliability.
  • High-Quality Training Data – Reducing biases through better algorithm training.
  • Verification Processes – AI outputs should be cross-checked with multiple sources.
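
The sketch below illustrates how human-in-the-loop review and multi-source verification can be enforced in software. All names, data structures, and thresholds are hypothetical and invented for illustration; they do not reflect Thunderforge's actual interfaces or any real military system.

```python
from dataclasses import dataclass

# Hypothetical structures for illustration only.

@dataclass
class Recommendation:
    target_id: str
    confidence: float      # model confidence in [0, 1]
    rationale: str         # explanation supplied by an explainable-AI component

def human_review(rec: Recommendation) -> bool:
    """Placeholder for a trained operator's decision; always required here."""
    print(f"Review {rec.target_id}: confidence={rec.confidence:.2f}, rationale={rec.rationale}")
    answer = input("Approve? [y/N] ")
    return answer.strip().lower() == "y"

def decide(rec: Recommendation, corroborating_sources: int) -> str:
    # Verification: require corroboration by multiple independent sources.
    if corroborating_sources < 2:
        return "rejected: insufficient corroboration"
    # Human-in-the-loop: no action proceeds without explicit human approval,
    # regardless of model confidence (this counteracts automation bias).
    if not human_review(rec):
        return "rejected by operator"
    return "forwarded for further planning"
```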

However, speed is critical in combat. In time-sensitive operations, slowing decision-making to involve humans could endanger lives.

Addressing the “Black Box” Problem in AI DSS

A common criticism of AI DSS is the “black box” problem: a lack of transparency in how the AI generates its output. However, Explainable AI (XAI) is already addressing this issue. Newer AI models increase the transparency and comprehensibility of their outputs, reducing concerns about unverifiable results.
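
To make the idea of an explainable output concrete, the toy sketch below scores a scenario with a simple linear model and decomposes the score into per-feature contributions. Real XAI techniques are far more sophisticated, and the feature names and weights here are invented purely for illustration.

```python
# Toy illustration of explainability: a linear scoring model whose output can
# be decomposed into per-feature contributions. Feature names and weights are
# invented and have no connection to any real system.
WEIGHTS = {"sensor_coverage": 0.5, "terrain_difficulty": -0.3, "weather_risk": -0.2}

def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"sensor_coverage": 0.9, "terrain_difficulty": 0.4, "weather_risk": 0.2}
)
print(f"score = {total:.2f}")
for name, value in sorted(parts.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")   # largest contributors shown first
```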

Conclusion: The Future of AI DSS in Defense

AI is not perfect, but neither are human decision-makers. Military AI DSS should not be dismissed simply because they can make mistakes—after all, humans do too.

NATO proposes a practical approach: AI DSS should be used only when they improve decision quality or streamline processes. Under this approach, AI should not be deployed if it degrades decision-making or fails to alleviate the complexity of combat analysis.

To ensure AI DSS aligns with international humanitarian law (IHL), adequate testing before deployment and continuous improvement thereafter will remain essential. However, as AI technology evolves, so too will its role in military operations.

Final Thoughts

Thunderforge represents a significant step forward in military AI. While challenges exist, proper safeguards, transparency, compliance with IHL, and ethical considerations can ensure AI-powered decision support systems remain a force for strategic advantage rather than a source of controversy.

About the Author

Dr. Andreas Leupold is an industry lawyer with more than 25 years of experience in advising and litigating cases for German, US, and UK clients.

He serves on the advisory board of mga, the leading international network for industrial additive manufacturing, and is a member of the legal working group of the Platform Industrie 4.0 established by the German Federal Ministry for Economic Affairs.

Andreas is a published author of various handbooks on industrial 3D printing and IT law, and most recently covered the legal aspects of 3D printing in a study for the NATO/NSPA.

Connect with Andreas on LinkedIn