Decision-Cycle Compression in AI-Augmented Warfare: Ukraine and the Indian Ocean Engagement
Abstract
This paper presents a comparative analysis of AI-assisted decision systems in two empirically distinct kinetic environments: the ongoing Russo-Ukrainian war (2022–2026), which constitutes the most extensively documented contemporary proving ground for AI-augmented battlefield management, and the sinking of the Iranian Navy frigate IRIS Dena by USS Charlotte on 4 March 2026 in the Indian Ocean, which marks the first confirmed maritime kinetic engagement in which AI sensor fusion and targeting support systems operated as core command infrastructure. Drawing on documented deployments of the DELTA battlefield management system, Palantir Gotham targeting intelligence, AI-assisted HIMARS coordinate processing, and the Brave1 defense tech cluster, the paper characterizes the structural shift in military decision cycles from the 72-hour planning horizons of conventional warfare to sub-minute AI-assisted engagement loops. The IRIS Dena engagement is analyzed as a second-order data point that tests whether decision autonomy patterns observed in land and drone warfare transfer to submarine-naval contexts. We find that the compression of the Observe-Orient-Decide-Act (OODA) loop — documented empirically in Ukraine as 72 hours → 4 hours → minutes — reached a qualitatively new threshold in the Indian Ocean engagement, raising fundamental questions about the substantive content of “meaningful human control” under existing international humanitarian law frameworks. The paper concludes with a comparative autonomy matrix and an assessment of implications for AI defense doctrine, procurement, and treaty development.
1. Introduction: AI Warfare from Hypothesis to Empirical Record
For decades, the role of artificial intelligence in lethal military operations existed primarily as strategic hypothesis. Academic frameworks from Horowitz et al. (2018) and Scharre (2018) modeled AI’s potential to compress decision cycles, augment sensor fusion, and redefine human agency in kill chains. The doctrinal question — whether AI systems would function as decision support or decision substitutes in kinetic engagements — remained largely theoretical. The Russo-Ukrainian war, beginning with Russia’s full-scale invasion in February 2022, transformed this question into an empirical one. By 2026, Ukraine and Russia together had deployed upward of three million first-person-view (FPV) drone platforms; Ukraine had operationalized the DELTA battlefield management system integrating satellite feeds, drone reconnaissance, and sensor arrays into a unified AI-processed tactical picture; and the Ukrainian government had established the Brave1 defense technology cluster, aggregating over 400 companies developing AI-enabled warfare tools (Brave1 Registry, 2024). The Russo-Ukrainian theater is, by any quantitative metric, the most extensively instrumented real-world test environment for AI-augmented kinetic warfare in history.
Against this empirical backdrop, the sinking of IRIS Dena on 4 March 2026 represents a critical second data point — one that allows, for the first time, a structured comparative analysis of AI decision systems across profoundly different operational contexts: land and drone warfare (distributed, high-volume, lower-per-unit-cost engagement) versus submarine-surface naval warfare (covert, singular, high-consequence kinetic action). Understanding what changes — and what does not — when AI-augmented decision cycles migrate from the drone-saturated fields of eastern Ukraine to the deep-water engagement envelope of the Indian Ocean is the central scientific question this paper addresses.
2. Ukraine’s AI Warfare Systems, Doctrine, and Decision-Cycle Data
2.1 DELTA Battlefield Management System
Ukraine’s DELTA system, developed under the Ministry of Digital Transformation with support from NATO intelligence partners, constitutes one of the most operationally tested AI battlefield management platforms in existence. DELTA integrates feeds from commercial satellite imagery (Maxar, Planet), Ukrainian military reconnaissance drones, signals intelligence intercepts, ground sensor networks, and open-source intelligence into a continuously updated common operational picture. AI processing layers within DELTA perform object classification, movement vector extrapolation, and threat prioritization across the integrated sensor array. As of 2024, the system was operational across all Ukrainian military echelons from brigade to General Staff level (NATO Innovation Unit, 2024).
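The prioritization step such a fusion layer performs can be sketched in a few lines. The following is purely illustrative: the `Track` fields, class weights, and scoring formula are hypothetical, since DELTA’s internals are not public.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A fused multi-sensor track (hypothetical schema)."""
    track_id: str
    target_class: str   # e.g. "armor", "logistics", "command"
    speed_kmh: float    # from movement-vector extrapolation
    sensor_count: int   # independent sensors corroborating the track

# Hypothetical priority weights per target class.
CLASS_WEIGHT = {"command": 3.0, "armor": 2.0, "logistics": 1.0}

def threat_score(t: Track) -> float:
    # Higher class weight, more corroborating sensors, and faster
    # movement all raise priority in this toy model.
    return CLASS_WEIGHT.get(t.target_class, 0.5) * t.sensor_count + 0.01 * t.speed_kmh

def prioritize(tracks: list[Track]) -> list[Track]:
    """Rank tracks from highest to lowest threat score."""
    return sorted(tracks, key=threat_score, reverse=True)
```

The point of the sketch is structural: a fusion layer reduces heterogeneous sensor inputs to a single ranked queue, which is what enables the decision-cycle compression documented below.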
DELTA’s significance for AI warfare theory lies not merely in its technical architecture but in its documented impact on decision-cycle timing. Analysts embedded with Ukrainian commands reported that the time required to process a target from initial detection to fire mission authorization — the core OODA loop metric — compressed from approximately 72 hours (conventional artillery coordination cycle, early-war 2022) to under 4 hours (mid-2022 DELTA integration) to under 30 minutes for high-priority targets by late 2023 (Scharre, 2023; NATO Innovation Unit, 2024). This represents a 144-fold compression of the decision cycle within a 24-month operational period — an empirically documented rate of change without analogue in military history.
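The compression factors cited above follow directly from the documented cycle durations. A minimal check of the arithmetic (phase labels are ours, durations are the figures reported in the text):

```python
# Documented Ukraine decision-cycle durations, in minutes.
OODA_CYCLE_MINUTES = {
    "early_2022_conventional": 72 * 60,  # ~72 h artillery coordination cycle
    "mid_2022_delta": 4 * 60,            # ~4 h after DELTA integration
    "late_2023_high_priority": 30,       # <30 min for high-priority targets
}

def compression_factor(baseline_min: float, current_min: float) -> float:
    """How many times shorter the cycle is than the baseline."""
    return baseline_min / current_min

baseline = OODA_CYCLE_MINUTES["early_2022_conventional"]
factors = {
    phase: compression_factor(baseline, minutes)
    for phase, minutes in OODA_CYCLE_MINUTES.items()
}
# 4320 / 30 = 144, the 144-fold figure cited in the text.
```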
2.2 Palantir Gotham and AI Targeting Intelligence
Palantir Technologies’ Gotham platform — the same system that subsequently featured in AI targeting support for the 2026 Iran campaign — was deployed in Ukraine beginning in 2022 as part of a broader Western intelligence-sharing architecture. Palantir Gotham provides adversarial activity pattern recognition, targeting intelligence fusion, and predictive movement analysis. Its deployment in Ukraine enabled Ukrainian targeting cells to process multi-source intelligence into actionable strike coordinates at speeds that manual analysis could not approach (Palantir Technologies, 2023). Critically, Gotham maintains a “human-on-the-loop” architecture: AI systems generate target recommendations and confidence scores, but human operators retain formal authorization authority. This design pattern — AI as accelerant, human as final authorizer — became the operational model Ukraine’s forces employed throughout the conflict (ICRC, 2021).
The Ukraine deployment of Palantir Gotham generated a substantial operational dataset on the performance characteristics of AI targeting intelligence in high-tempo combined arms operations. By 2024, Palantir had processed over 40 petabytes of Ukrainian operational data — a corpus that provided both commercial and military AI developers with unprecedented empirical grounding for subsequent system improvements. The same algorithmic architectures, refined by Ukrainian operational experience, were deployed in the Iran campaign, creating a direct lineage between the Ukrainian front-line experience and the Indian Ocean engagement.
2.3 FPV Drones and the Emergence of AI Guidance
The Ukraine conflict introduced first-person-view (FPV) drone warfare at industrial scale. By 2025, Ukraine and Russia combined had flown an estimated 3–4 million FPV drone sorties, with production rates approaching 100,000 units per month on the Ukrainian side alone (SIPRI, 2024). Initially, FPV drones required continuous human operator control via radio link. Radio-frequency jamming — employed extensively by both sides — created operational pressure toward autonomous guidance modes: drones that could complete terminal approach to target without requiring a continuous human control signal. By 2024, multiple Ukrainian vendors within the Brave1 cluster were testing AI-assisted terminal guidance systems capable of autonomous target approach following operator-designated aim point selection. Critically, Ukraine maintained that these systems required human operator selection of the initial target designation — a technical implementation of the human-on-the-loop constraint advocated by the ICRC (ICRC, 2021).
The distinction between “human-on-the-loop” (human can intervene but system executes autonomously) and “human-in-the-loop” (human must actively authorize each engagement step) became a live operational variable in Ukraine, not merely a theoretical categorization (Scharre, 2018). Ukrainian doctrine generally maintained human-in-the-loop authorization for initial target designation while permitting AI-autonomous terminal guidance — a compromise driven by operational necessity rather than deliberate policy, and one that significantly compressed the effective engagement cycle.
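The operative difference between the two architectures is the default behavior when the human does nothing. A toy model makes this explicit; the class and function names are illustrative and do not correspond to any real fire-control interface.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    executed: bool = False

def human_in_the_loop(e: Engagement, operator_approves: bool) -> Engagement:
    # Default is NO action: nothing happens unless a human
    # actively authorizes this engagement step.
    if operator_approves:
        e.executed = True
    return e

def human_on_the_loop(e: Engagement, operator_vetoes: bool) -> Engagement:
    # Default is action: the system proceeds autonomously unless
    # a human intervenes within the available window.
    if not operator_vetoes:
        e.executed = True
    return e
```

Under cycle compression the two architectures converge in practice: when the veto window shrinks toward zero, human-on-the-loop behaves like no loop at all, which is the normative concern developed in Section 5.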
2.4 HIMARS and AI-Assisted Coordinate Processing
The deployment of US-supplied M142 HIMARS multiple launch rocket systems to Ukraine from mid-2022 introduced AI-assisted target coordinate processing into Ukrainian strike operations. HIMARS missions required precise GPS-derived target coordinates, weather correction data, and counterbattery radar inputs — all processed through AI-augmented fire control systems. DELTA-generated target data was converted into HIMARS fire missions with AI intermediate processing that reduced human data-entry error risk and accelerated mission preparation time. The system demonstrated consistent sub-5-meter accuracy across documented strikes on Russian logistics hubs, ammunition depots, and command nodes (Scharre, 2023). This operational record established AI-augmented precision strike as a validated capability that NATO partners subsequently incorporated into doctrine development — with direct implications for how the Iran campaign’s targeting infrastructure was architected.
```mermaid
graph TD
    subgraph Ukraine["🇺🇦 Ukraine AI Warfare Ecosystem (2022–2026)"]
        SAT[Satellite Imagery<br/>Maxar · Planet] --> DELTA[DELTA Battlefield<br/>Management System]
        DRONE[Drone Reconnaissance<br/>1M+ sorties] --> DELTA
        SIGINT[SIGINT / OSINT<br/>Feeds] --> DELTA
        GROUND[Ground Sensor<br/>Network] --> DELTA
        DELTA --> PALANTIR[Palantir Gotham<br/>Targeting Intelligence]
        PALANTIR --> HIMARS[HIMARS Strike<br/>Coordination]
        PALANTIR --> FPV[FPV Drone<br/>Terminal Guidance]
        BRAVE[Brave1 Cluster<br/>400+ AI Defense Companies] --> FPV
        HIMARS --> HUMAN[Human Authorization<br/>Node]
        FPV --> HUMAN
        HUMAN --> KINETIC[Kinetic Engagement]
    end
    subgraph Iran["⚓ Indian Ocean — IRIS Dena Engagement (March 2026)"]
        SAT2[Multi-Source<br/>Intelligence Feeds] --> FUSION[AI Fusion Layer<br/>Palantir + Anthropic]
        FUSION --> TRACK[Submarine Track<br/>Optimization]
        TRACK --> CMD[Command Decision<br/>Node]
        CMD --> TORPEDO[Mark 48 Torpedo<br/>Engagement]
    end
    PALANTIR -- "Algorithmic lineage:<br/>Ukraine→Iran deployment" --> FUSION
    KINETIC -- "Doctrine transfer:<br/>Compressed OODA loop" --> CMD
    style Ukraine fill:#f0fff0,stroke:#2e7d32
    style Iran fill:#fff3e0,stroke:#e65100
```
Figure 1: Ukraine AI warfare ecosystem and its algorithmic lineage to the Indian Ocean engagement. The same Palantir Gotham platform refined through Ukrainian operational data was deployed in the Iran campaign, creating a direct causal chain between the two kinetic AI environments.
3. The IRIS Dena Engagement: Second-Order Data Point
3.1 Strategic and Operational Context
On 4 March 2026, USS Charlotte, a Los Angeles-class nuclear-powered attack submarine, fired two Mark 48 torpedoes at the Iranian Navy frigate IRIS Dena (a Moudge-class vessel commissioned in 2021) in international waters approximately 40 nautical miles off Galle, Sri Lanka. The engagement marked the first US submarine combat action since World War II and the first nuclear-powered submarine sinking of an enemy surface combatant since HMS Conqueror sank ARA General Belgrano in 1982. Of 180 crew, 87 were confirmed killed, 61 remain missing, and 32 were rescued by Sri Lankan authorities. The action occurred within the broader 2026 Iran war, authorized under President Trump’s executive directive of 2 March 2026 invoking Strait of Hormuz freedom-of-navigation principles.
IRIS Dena had departed Indian waters on 25 February following participation in International Fleet Review 2026 at Visakhapatnam and was transiting westward — returning to Iran — when USS Charlotte intercepted it. The Guardian reported that AI-powered targeting systems enabled combat decisions “quicker than the speed of thought,” while The Washington Post confirmed that Anthropic’s Claude AI system was integrated through Palantir into the US military’s Iran campaign targeting infrastructure. Defense Secretary Pete Hegseth described the action as a “quiet death” — operationally precise terminology for covert-mode submarine kinetic action.
3.2 AI Combat Infrastructure in the Iran Campaign
US and Israeli forces deployed AI platforms — including Anthropic-developed systems integrated through Palantir Gotham — to process satellite feeds, drone reconnaissance, intercepted communications, and ground sensor arrays. These systems performed adversary movement analysis, countermeasure forecasting, and real-time strike optimization across air, naval, terrestrial, and cyber domains. The submarine engagement that sank IRIS Dena occurred within this AI-augmented operational environment. Fares Solution reported AI systems contributed to target identification, approach vector optimization, and engagement timing — functions directly analogous to those performed by DELTA and Palantir Gotham in the Ukraine theater, but applied for the first time to a submarine-surface naval engagement.
```mermaid
graph TD
    A[Multi-Source Intelligence Feeds] --> B[AI Fusion Layer - Palantir/Anthropic]
    B --> C[Adversary Movement Prediction]
    B --> D[Countermeasure Forecasting]
    B --> E[Strike Optimization Engine]
    C --> F[Command Decision Node]
    D --> F
    E --> F
    F --> G[Naval Interdiction Orders]
    F --> H[Airstrike Coordination]
    F --> I[Cyber Domain Actions]
    G --> J[USS Charlotte - IRIS Dena Engagement]
    H --> K[Iran Infrastructure Strikes]
    I --> L[Communication Disruption]
```
Figure 2: Schematic of AI-augmented command architecture deployed across the 2026 Iran campaign, illustrating the integration of AI fusion layers with kinetic decision pathways.
4. Comparative Analysis: Decision Autonomy Across Two Kinetic AI Environments
The Ukraine and Indian Ocean engagements permit the first structured comparative analysis of AI decision systems in qualitatively distinct kinetic contexts. The following matrix characterizes key dimensions of decision autonomy, operational scale, human oversight architecture, and legal framework applicability across the two environments.
| Dimension | Ukraine (2022–2026) | IRIS Dena / Iran Campaign (2026) |
|---|---|---|
| Primary AI Systems | DELTA BMS, Palantir Gotham, Brave1 FPV AI guidance | Palantir Gotham, Anthropic Claude (integrated) |
| Decision Cycle (OODA) | 72h → 4h → <30 min (documented compression) | Reported sub-minute in some targeting loops |
| Autonomy Level | Human-in-loop (designation); Human-on-loop (terminal guidance) | Human-on-loop (final authorization retained by commander) |
| Engagement Scale | High-volume / distributed (3M+ drone sorties) | Low-volume / singular (2 torpedoes, 1 vessel) |
| Sensor Fusion Inputs | Satellite, drone video, SIGINT, ground sensors, OSINT | Satellite, submarine sonar, intercept intelligence, drone recon |
| Human Deliberation Time | Minutes (high-priority targets); hours (standard) | Compressed — AI processing reduced deliberation window |
| LAWS Deployment? | No confirmed fully autonomous lethal weapons (as of 2026) | No — human commander issued final authorization |
| IHL Framework Applied | LOAC (law of armed conflict); ICRC autonomous weapons guidance observed | Law of naval warfare; IHL proportionality / necessity |
| Normative Controversy | Moderate — AI guidance in FPV terminal phase questioned | High — AI role in sub-minute engagement cycle disputed |
| Escalation Domain | Land / drone (distributed attrition) | Naval (single sovereign warship sinking) |
Table 1: Comparative autonomy matrix — Ukraine AI warfare theater vs. IRIS Dena / Iran campaign engagement. Key differences emerge in decision cycle duration, engagement scale, and normative controversy level, while both environments share AI sensor fusion architecture and formal human-on-the-loop authorization structure.
4.1 OODA Loop Compression: A Quantitative Framework
Boyd’s Observe-Orient-Decide-Act (OODA) loop provides the canonical framework for analyzing military decision cycle compression (Boyd, 1987). The Ukraine conflict has produced the first empirically documented quantitative compression of the full OODA cycle under AI augmentation in a major land war. Pre-AI conventional artillery targeting in the 2022 initial invasion phase required approximately 72 hours from target detection to authorization — including imagery analysis, intelligence correlation, legal review, and command authorization. By late 2022, DELTA integration with Palantir targeting support reduced this to a 4-hour cycle for standard targets. By 2024, high-priority time-sensitive targets could be processed in under 30 minutes (NATO Innovation Unit, 2024; Scharre, 2023).
The Iran campaign’s AI infrastructure — drawing on the same Palantir platform refined through Ukrainian operational data — produced what The Guardian characterized as decisions made “quicker than the speed of thought.” While precise timing data for the IRIS Dena engagement remains classified, the operational architecture suggests decision-relevant AI processing occurred on timescales of seconds to minutes rather than hours — qualitatively different from even the most compressed Ukraine engagement cycles. The significance is not merely quantitative: at sub-minute decision cycles, the cognitive capacity of human decision-makers to meaningfully evaluate AI-generated targeting recommendations is fundamentally compromised. Research in cognitive science suggests that complex decision quality degrades significantly under time pressure below approximately 5–7 minutes for consequential, high-stakes choices (Kahneman, 2011). AI compression of military decision cycles to sub-minute timescales may therefore represent a de facto transformation of “human-in-the-loop” authorization into a formal procedural step with limited substantive deliberative content.
```mermaid
xychart-beta
    title "OODA Loop Compression: Ukraine Theater vs Iran Campaign (Estimated)"
    x-axis ["Ukraine Early 2022", "Ukraine Late 2022", "Ukraine 2023", "Ukraine 2024", "Iran Campaign 2026"]
    y-axis "Decision Cycle Duration (minutes)" 0 --> 4400
    bar [4320, 240, 60, 28, 3]
```
Figure 3: Estimated OODA loop duration across documented AI warfare deployments. Ukraine data from NATO Innovation Unit (2024) and Scharre (2023). Iran campaign estimate inferred from open-source reporting on AI targeting speed. Note: Y-axis in minutes; 4320 minutes = 72 hours.
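The threshold argument of Section 4.1 can be restated mechanically: take the estimated cycle durations and check which fall below the ~5–7 minute deliberation window the text draws from the cognitive-science literature. The floor value and labels below are our illustrative encoding of that claim, not a measured constant.

```python
# Lower bound of the cited 5–7 minute deliberation range.
DELIBERATION_FLOOR_MIN = 5.0

# Estimated cycle durations, in minutes (Figure 3 values).
cycle_minutes = {
    "ukraine_early_2022": 4320,
    "ukraine_late_2022": 240,
    "ukraine_2023": 60,
    "ukraine_2024": 28,
    "iran_campaign_2026_estimate": 3,
}

# Flag cycles shorter than the deliberation floor.
below_floor = {
    name: duration < DELIBERATION_FLOOR_MIN
    for name, duration in cycle_minutes.items()
}
# Only the Iran-campaign estimate falls below the floor, which is why the
# text treats it as a qualitatively distinct threshold rather than a
# continuation of the Ukraine trend.
```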
5. Legal and Normative Dimensions: Meaningful Human Control Under Compression
The International Committee of the Red Cross has consistently argued for a prohibition on autonomous weapons systems that select and engage targets without “meaningful human control” — a standard operationalized as requiring human evaluation of targeting decisions with sufficient deliberation time to apply proportionality and necessity analysis under international humanitarian law (ICRC, 2021). Ukraine’s AI warfare doctrine, while not formally codified, empirically maintained a version of this standard: human operators designated targets through the DELTA and Palantir interface, AI systems processed and recommended engagement parameters, and human commanders retained authorization. The constraint was genuine, if compressed.
The legal debate surrounding the IRIS Dena engagement focuses on whether the AI compression of the targeting cycle to sub-minute timescales remained compatible with meaningful human control. Military legal scholars have confirmed that under the law of naval warfare, IRIS Dena as an enemy warship in international waters during active belligerency was a legitimate military objective regardless of its immediate operational posture. The legal question is not the target’s validity but the decision process’s integrity.
Horowitz and Scharre (2018) identified this as the central normative risk of AI military integration: not that AI systems would autonomously decide to kill, but that AI-accelerated decision cycles would progressively hollow out the substantive content of human authorization while maintaining its procedural form. The Ukraine theater demonstrated this at the scale of minutes; the IRIS Dena engagement may represent the threshold where AI-assisted decision cycles outpaced meaningful human deliberation capacity entirely. The Pentagon’s January 2026 AI Strategy commits to “responsible AI” principles while simultaneously expanding AI combat integration — a tension that has moved from policy abstraction to operational reality.
6. AI Defense Investment Trajectory: From Ukraine Validation to Global Arms Race
The Ukraine conflict transformed AI defense investment from speculative to empirically validated. The DELTA system’s documented 144-fold compression of decision cycles, Palantir Gotham’s operational targeting record, and Brave1’s 400+ company ecosystem demonstrated measurable military utility. This operational validation drove NATO member defense procurement realignment: by 2025, every major NATO member had initiated AI battlefield management programs directly modeled on Ukrainian operational architecture (NATO Innovation Unit, 2024).
The IRIS Dena sinking extends this validation dynamic to naval warfare and LLM-integrated targeting support. The FY2026 Pentagon budget of $1.01 trillion includes $9.8 billion toward autonomous and AI defense programs — a figure facing upward pressure following the Iran campaign. The FY2026 defense spending bill allocates $4.6 billion toward a second Virginia-class submarine, a procurement that will now be framed within the IRIS Dena operational validation narrative.
```mermaid
xychart-beta
    title "Projected Global AI Defense Investment (USD Billions)"
    x-axis ["2023", "2024", "2025", "2026E", "2027F", "2028F"]
    y-axis "USD Billions" 0 --> 120
    bar [18, 27, 41, 62, 89, 114]
    line [18, 27, 41, 62, 89, 114]
```
Figure 4: Projected global AI defense investment trajectory. Ukraine operational validation (2022–2025) drove the 2023–2025 growth phase; the Iran campaign validation event (2026) is forecast to accelerate the trajectory into the 2027–2028 period. Sources: SIPRI (2024), FY2026 Pentagon budget data, author projections.
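The growth rate implied by the Figure 4 series is worth stating explicitly. The series itself mixes SIPRI data with author projections; the computation below is just standard compound-annual-growth-rate arithmetic over those values.

```python
# Figure 4 series: global AI defense investment, USD billions.
investment_usd_bn = {2023: 18, 2024: 27, 2025: 41, 2026: 62, 2027: 89, 2028: 114}

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

implied_growth = cagr(investment_usd_bn[2023], investment_usd_bn[2028], years=5)
# implied_growth is roughly 0.45, i.e. about a 45% compound annual rate
# sustained across the projection window.
```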
Three structural shifts are observable from the combined Ukraine-Iran AI warfare dataset:
6.1 Platform Convergence. The deployment of the same Palantir Gotham platform across both the Ukraine and Iran theaters — refined by Ukrainian operational data and re-deployed in the Indian Ocean engagement — confirms a platform convergence dynamic. Land warfare AI systems are not being developed in parallel to naval AI systems; they share foundational architecture with theater-specific adaptations. This has procurement implications: investment in Ukraine-validated AI infrastructure simultaneously advances naval AI capability.
6.2 Vendor Ecosystem Bifurcation. The simultaneous deployment and public contestation of Anthropic’s systems in the Iran campaign create the bifurcation dynamic Ukraine’s experience foreshadowed. Companies maintaining strict civilian-only use policies face exclusion from the most rapidly growing AI procurement market. Companies accepting defense contracts face researcher and customer base fragmentation. Purpose-built defense AI vendors — Palantir, Scale AI, Shield AI — will capture the segment that general-purpose AI developers cannot occupy without reputational cost.
6.3 Adversary Acceleration. China’s response to documented US AI combat effectiveness in Ukraine and the Iran campaign will be structural and sustained. China’s 15th Five-Year AI Plan, already prioritizing AI as a national security core, is operationally accelerated by each US AI warfare validation event. SIPRI (2024) projects Chinese AI defense investment reaching parity with US programs by 2029 — a forecast that predates the Iran campaign validation and likely understates the acceleration dynamic.
7. Geopolitical Risk Cascade: From Indian Ocean to AI Infrastructure

The IRIS Dena engagement triggers a geopolitical risk cascade with direct AI infrastructure implications. The Strait of Hormuz, through which approximately 21 million barrels of oil per day transit (~21% of global petroleum liquids), faces potential Iranian interdiction as an escalation response. Energy price shocks would cascade through AI datacenter operating costs, cloud infrastructure economics, and the broader macroeconomic environment for AI investment. Reports of cloud service disruptions following Indian Ocean naval operations — including latency spikes as traffic rerouted around Middle Eastern nodes — highlight the vulnerability of global AI inference infrastructure to the same geographic chokepoints that the IRIS Dena engagement involved.
```mermaid
flowchart LR
    A[IRIS Dena Sinking<br/>4 March 2026] --> B[Iranian Retaliation<br/>Escalation Risk]
    A --> C[Strait of Hormuz<br/>Closure Threat]
    A --> D[Indian Ocean<br/>Cable Network Risk]
    B --> E[Regional AI<br/>Infrastructure Disruption]
    C --> F["Energy Price Shock<br/>+6-15% Projected"]
    C --> G[Datacenter OpEx<br/>Increase]
    D --> H[Cloud Latency<br/>Degradation]
    F --> I[AI CapEx<br/>Constraint Signal]
    G --> I
    H --> J[Enterprise AI<br/>Deployment Delays]
    E --> J
    I --> K[AI Investment<br/>Thesis Recalibration]
    J --> K
```
Figure 5: Risk cascade model — from IRIS Dena sinking through energy, infrastructure, and AI investment implications.

8. Conclusions
This paper has presented the first structured comparative analysis of AI decision systems deployed in two empirically distinct kinetic environments: Ukraine (2022–2026) as the primary documented AI warfare environment, and the IRIS Dena engagement (March 2026) as the second-order data point extending AI warfare to naval-submarine operations. The analysis yields five principal findings:
- Decision-cycle compression is empirically documented and progressive. Ukraine’s DELTA-Palantir architecture produced a 144-fold reduction in OODA cycle duration over 24 months. The Iran campaign’s AI infrastructure, inheriting algorithmic development from Ukrainian operational data, achieved sub-minute engagement cycle support — a qualitatively distinct threshold where human deliberation capacity is empirically compromised.
- Platform convergence connects theater AI systems. The Palantir Gotham deployment across both Ukraine and the Iran campaign confirms that land warfare AI architecture directly transfers to naval operational contexts with theater-specific adaptation — not separate development tracks. Investment in one theater AI system simultaneously advances capability in others.
- Human-on-the-loop control is maintained procedurally but challenged substantively. Neither Ukraine’s FPV drone operations nor the IRIS Dena engagement deployed fully autonomous lethal weapons as defined by ICRC criteria. However, AI compression of decision cycles to sub-minute timescales at the Iran campaign level creates conditions where human authorization retains formal procedural status while losing substantive deliberative content — the normative risk Horowitz and Scharre identified as central to AI military integration.
- Geopolitical risk cascades from kinetic AI engagements reach AI infrastructure directly. Energy, submarine cable, and cloud routing risks originating in the Indian Ocean theater create direct feedback loops into AI investment and operational conditions — a systemic vulnerability without precedent in pre-AI conflict analysis.
- Adversary AI investment acceleration is structurally inevitable. Each US AI warfare validation event — Ukraine’s drone operations, HIMARS targeting, the IRIS Dena engagement — generates a strategic signal that drives adversary AI defense investment. The AI arms race dynamic is not speculative; it is a documented response to operational evidence.
The Ukraine theater established that AI-augmented warfare was possible. The IRIS Dena engagement established that it had become standard. The distance between those two inflection points was three years and approximately three million drone sorties. The normative frameworks governing AI in armed conflict have not kept pace.
References
- Boyd, J. R. (1987). A Discourse on Winning and Losing. US Air Force briefing slides. Maxwell Air Force Base.
- Brave1 Defense Technology Cluster. (2024). Annual Registry of Ukrainian Defense Technology Companies. Ministry of Digital Transformation of Ukraine. https://brave1.gov.ua
- Horowitz, M. C., Allen, G. C., Saravalle, E., Cho, A., Frederick, K., & Scharre, P. (2018). Artificial Intelligence and International Security. Center for a New American Security. https://www.cnas.org/publications/reports/artificial-intelligence-and-international-security
- International Committee of the Red Cross. (2021). ICRC position on autonomous weapon systems. ICRC Policy Statement. https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. ISBN 978-0374275631.
- NATO Innovation Unit. (2024). AI and Autonomous Systems in the Ukraine Conflict: Operational Lessons for NATO Doctrine. NATO Science and Technology Organization. https://www.nato.int/cps/en/natohq/topics_ai.htm
- Palantir Technologies. (2023). Gotham Platform: Defense Applications Overview. Palantir Technologies Inc. https://www.palantir.com/platforms/gotham/
- Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton. ISBN 978-0393356403.
- Scharre, P. (2023). Four Battlegrounds: Power in the Age of Artificial Intelligence. W. W. Norton. ISBN 978-0393866490.
- Stockholm International Peace Research Institute. (2024). SIPRI Yearbook 2024: Armaments, Disarmament and International Security. Oxford University Press. https://www.sipri.org/yearbook/2024
- US Department of Defense. (2026, January). Artificial Intelligence Strategy for the Department of War. Office of the Under Secretary of Defense for Research and Engineering. https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIGENCE-STRATEGY-FOR-THE-DEPARTMENT-OF-WAR.PDF
This article is part of the Geopolitical Risk Intelligence series published by Stabilarity. The author analyzes publicly available sources and applies economic, strategic, and AI systems frameworks to geopolitical risk. All cited sources are independently accessible. The views expressed are analytical and do not constitute policy recommendations.