Abstract: Decision-making regarding target engagement, including considerations for scenarios involving the tactical use of nuclear weapons, requires a multilayered, well-structured, predictable, and traceable approach. The integration of AI into such high-stakes scenarios amplifies the need for transparency, aligning decision-making models to trusted tenets and focused training (the 3Ts) in order to avoid the five cognitive pitfalls of overreliance, information overload, groupthink, misperception, and unrecognised systemic bias. For AI to be effectively integrated into Nuclear Command, Control and Communication (NC3) networks, decision makers delegated target engagement authority must have a clear understanding of the capabilities and limitations of the AI system, just as they understand the capabilities and limitations of their other team members. AI is not something to be feared, but it should be approached with a degree of healthy scepticism. The 3T framework provides a useful heuristic for ensuring proper integration and use of AI-enabled NC3 systems and decision-making processes.
Problem statement: Decision-making and moral decision-making in the Age of AI—The case for reevaluating the moral paradox in targeting, specifically considering adversarial tactical nuclear weapon use.
So what?: When considering the future of AI-supported targeting, a greater emphasis on improving transparency, trust in the values upon which the systems are based, and focused training for decision makers is necessary. Simply transferring conventional decision-support AI agents into the nuclear command, control, and communication systems is insufficient and induces unacceptable risk. This is especially true in scenarios where adversarial tactical nuclear weapon use challenges existing ethical and operational frameworks.

Seconds to Decide
The Situation Room buzzed with hushed urgency. On the wall, digital maps pulsed with data feeds: satellite surveillance, adversary force postures, and real-time sentiment analysis from global media. A crimson alert flashed: “Strategic Forces Elevated – Opponent Level 3 Readiness.”
The President sat stone-faced, flanked by her national security team. At the opposite end of the table sat ‘Prometheus,’ the nation’s AI-powered strategic decision-support interface. A fourth-wave AI, Prometheus was designed to outthink human adversaries—processing millions of variables from wargames, historical precedents, and behavioural models.
“We need options,” the President said tersely.
Prometheus responded in a dispassionately neutral synthetic voice, “Recommended Course: Execute a subsurface demonstration strike with a tactical nuclear weapon. 73.7% projected deterrence effectiveness. Escalation risk: 14.2%. Civilian loss: zero. Signal strength: decisive.”
The room fell silent.
The Minister of Defence furrowed his brow. “Madame President, this aligns with our limited escalation doctrine—but it’s a line we’ve never crossed.”
The Intelligence Director interjected. “Prometheus projects adversary response matrices with a higher resolution than any human team. The signal might prevent full-scale war.”
“But Prometheus doesn’t feel,” the Minister replied, voice taut. “It doesn’t weigh the moral inertia of its advice.”
The AI responded instantly. “Emotion introduces noise. Optimal decisions emerge from data clarity.”
The President stood and walked to the screen, watching the projection of the strike’s shockwave bloom across oceanic gridlines. The AI had no skin in the game—no blood to spill, no history to answer to. And yet, its logic was impeccable.
She turned back to her staff. “We built Prometheus to stop us from stumbling into the abyss. But when the abyss stares back, who blinks—the machine or the human?”
Silence.
A decision had to be made. And in that moment, humanity hovered between the algorithm’s whisper—and potential annihilation.
Why this Scenario Matters
This is not science fiction. The detonation of a tactical nuclear weapon is no longer a distant Cold War nightmare. Instead, it is plausible in an era where decision-making is shaped by artificial intelligence (AI). National security decision-making processes and nuclear strategy in particular are leveraging AI’s ability to process vast amounts of intelligence, predict adversary behaviour, and inform recommendations for strategic response. AI’s capacity for rapid analysis could, in theory, prevent rash human error or miscalculation, yet its use in high-stakes military decisions raises profound ethical concerns. Historical close calls–the 1979 NORAD false alarm, the 1983 Soviet misidentification, and the 1995 Norwegian rocket incident[1]–underscore the irreplaceable role of human judgment in averting catastrophe. Had AI governed these moments, outcomes might have been tragically different.
National security decision-making processes and nuclear strategy in particular are leveraging AI’s ability to process vast amounts of intelligence, predict adversary behaviour, and inform recommendations for strategic response.
The philosophical difficulty associated with embedding advanced AI into lethally autonomous systems, the so-called ‘responsibility gap’ or ‘non-gap,’[2] makes clear the criticality of preserving human agency, moral judgment, and strategic accountability. We argue that the natural progression of AI development requires that those endowed with target engagement authority–particularly where nuclear weapons and tactical device use are concerned–employ safeguards which ensure that humans remain accountable as the ultimate decision-makers, especially for morally and politically significant actions. These safeguards are codified as the ‘Three Ts’ (3Ts): transparency, trusted tenets, and training.
The insidious rise of AI integration and capability may mask the cognitive pitfalls of overreliance, information overload, groupthink, misperception, and unrecognised systemic bias.[3] On the one hand, AI offers speed, scale, and systemic foresight through probabilistic computations; on the other, it risks amplifying brittle logic divorced from moral and political nuance. In this paper, we explore why this scenario is not only possible, but increasingly probable, and underscores the need for deliberate human control, underpinned by the 3Ts.
The Evolution of AI: The Four Waves
To fully understand AI’s role in military decision-making, examining its evolution through four developmental waves is essential. The first wave, dominant in the 1980s and 1990s, is often referred to as rule-based AI. First wave systems rely on predefined rules and structured logic to execute tasks. These systems are rigid, lack adaptability, and are best suited for structured decision-making processes such as logistics and battlefield planning employing deterministic logic trees. The contribution of such systems to decision-making is easily deconstructed. Familiar examples of first-wave AI are recommendation engines such as those employed by Amazon, Netflix, and YouTube. In the military context, first-wave AI could be found in route guidance systems and logistical support systems.
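To make the idea of a deterministic logic tree concrete, the following minimal sketch (in Python, with hypothetical rules and thresholds chosen purely for illustration) shows how a first-wave, rule-based aid reaches a recommendation: every branch is human-authored, so the reasoning can be read directly from the source.

```python
# A minimal, illustrative first-wave decision aid: a fixed, human-authored
# logic tree. Rules and thresholds are hypothetical, not drawn from any
# fielded system; the point is that the rationale is fully inspectable.
def recommend_route(distance_km: float, threat_level: str, fuel_kg: float) -> str:
    if threat_level == "high":
        return "hold at assembly area"            # rule 1: never route through high threat
    if fuel_kg < distance_km * 0.8:
        return "divert to refuel point"           # rule 2: insufficient fuel margin
    if distance_km <= 150:
        return "primary route"                    # rule 3: short leg, use main supply route
    return "alternate route with waypoint check"  # default rule


print(recommend_route(distance_km=120, threat_level="low", fuel_kg=400))
```

Because the logic is static, the same inputs always yield the same output, which is the transparency and predictability attributed here to first-wave systems.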
The second wave introduces statistical learning, where AI models use large datasets to make probabilistic decisions. In civilian terms, these systems are found in decision-making within banking, insurance, and financial markets–where algorithms enhance human predictive modelling by correlating weak variables to form better outcome predictions. In military terms, second wave systems include similar clustering, classification, and predictive modelling, which became critical in early cybersecurity applications and automated intelligence analyses.
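By way of contrast, the sketch below (again a hypothetical illustration, assuming the open-source scikit-learn library and synthetic data) shows the second-wave pattern: a model is fitted to labelled historical examples and returns a probability rather than a fixed, rule-derived answer.

```python
# A minimal, illustrative second-wave aid: statistical learning on synthetic
# data. The "indicators" and the classification task are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # two weak indicator variables per observation
# Synthetic ground truth: incidents are more likely when both indicators are elevated.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)
p_incident = model.predict_proba([[1.2, 0.9]])[0, 1]
print(f"Estimated probability of an incident: {p_incident:.0%}")
```

Unlike the rule-based example, the recommendation here is only as good as the data the model was fitted on.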
First and second wave AI provided transparency in design, explainability in the decision-making models, and predictability in the outcomes. These iterations of AI also built a foundation of familiarity and trust with target engagement authorities—the human in the loop. It is when AI starts to construct its own decision-making models, such as through machine-to-machine learning, that transparency erodes and doubt comes into play.
It is when AI starts to construct its own decision-making models, such as through machine-to-machine learning, that transparency erodes and doubt comes into play.
Third and fourth wave AI represent the new frontier in AI-enabled decision-making processes and should foster scepticism in the trust and confidence that target engagement authorities place in outcomes. The third wave, known as contextual adaptation AI, enables machines to perceive and adapt to their environment rather than relying on static rules or historical data.[4] Facial recognition authentication, smart home systems, and voice assistants such as Alexa and Siri are common representations of third-wave AI. From a military perspective, long-loiter weapons, such as Harpy and Harop, exemplify third-wave characteristics, with the ability to linger in contested airspace, adapt to shifting radar emission patterns, and dynamically select targets. These systems can reason, recognise patterns, and make independent assessments, making them highly relevant in battlefield analysis and autonomous targeting, such as targeting associated with Lethal Autonomous Weapons (LAWs).[5] Third wave AI ‘agents’ become team members who inform our decision-making processes and, in some cases, these agents are delegated decision-making authority. On the cusp of the third and fourth wave of AI is Project Maven, the US AI intelligence system capable of fusing satellite imagery, geolocation data, and communication intercepts for target identification.[6]
The fourth wave–general artificial intelligence or autonomous AI–integrates ethical reasoning, legal frameworks, and synthetic common sense into AI decision-making[7] and is worthy of debate. The goal is to develop AI which not only perceives the environment but also responds appropriately to that environment by applying human-like moral judgement. This next frontier of AI development is intended to create systems that incorporate human-like moral judgment into automated and/or autonomous decisions, a crucial factor in nuclear deterrence and command-and-control frameworks.[8] In a practical sense, the nascent development of fourth wave AI is evident in self-driving vehicles that can “see” the environment, recognise patterns, correlate those patterns to a decision framework, make a decision and act upon that decision. While these innovations are highly localised, meaning a self-driving car cannot transfer the ‘knowledge’ to other contexts, the objective is to create systems that can transfer knowledge gained from one context to another.[9]
As AI transitions through these waves, it moves from simple rule-following systems to highly autonomous decision-making entities, at each level increasing its influence over military strategy and decision-making, and ultimately over those who hold target engagement authority. While current systems utilise large language models (LLMs), natural language processing (NLP), and computer vision algorithms that enable rapid analysis of intelligence reports, open-source intelligence, and sensor data for “shortening the kill chain,”[10] accepting the outputs of these systems as factual elements of the decision-making process is not without risk. At the operational level, these risks centre on trust and reliability, data integrity, and interpretation. When focused at the strategic level, which is arguably the domain of any nuclear weapon employment (including tactical), risk centres on threshold identification, escalation management, proliferation, and stability.[11] These capabilities (and risks) are expected to accelerate as nuclear weapons states—such as the US, China and Russia—pursue AI as a strategic priority and develop AI capabilities in support of human decision-making.[12]
When focused at the strategic level, which is arguably the domain of any nuclear weapon employment, risk centres on threshold identification, escalation management, proliferation, and stability.
A Scenario Anchored in Reality
Once the domain of Cold War science fiction, AI-enabled platforms like the hypothetical “Prometheus” are now emerging as real actors in national security deliberations. As AI systems grow in sophistication, their integration into target engagement and nuclear command, control, and communication (NC3) infrastructures[13] raises urgent questions about moral authority, strategic accountability, and the thresholds of automation in one of the gravest decisions a state can make–whether to employ nuclear weapons. Conventional targeting cycles already engage AI in defence planning, threat detection, target identification, and streamlining response options. Scholars, such as Drexel,[14] emphasise the accelerating integration of AI in defence ecosystems, including nuclear contexts, as part of broader great power competition. From a nuclear perspective, Johnson[15] highlights that AI use is creating new escalation pathways, while ‘exacerbating old’ pathways, increasing the risk of accidental nuclear confrontation. The adoption of such systems into target engagement decisions and NC3 frameworks should not come without significant scrutiny. Scrutiny begins with transparency.
Transparency in Strategic AI
Transparency in AI systems refers to the degree to which the internal logic, rationale, and operational mechanics of an AI’s recommendations are understandable, traceable, and reviewable by human actors. As the stakes rise to existential levels, such as in nuclear command and control and nuclear weapons engagement, calls for transparency become not only a technical demand but an ethical imperative. These ethical challenges extend beyond legal compliance to fundamental questions of accountability, proportionality, and adaptability. If an AI-driven system recommends escalation based on probabilistic assessments, who bears responsibility for the consequences? How can AI be designed to incorporate strategic adaptability when military conflicts unpredictably evolve? Will the use of AI be stabilising or destabilising to the strategic context? How can intentional irrationality be accounted for in escalation dynamics?[16] To address these concerns, we argue that AI augmentation of targeting decisions, particularly in nuclear weapons employment, requires transparent decision-making models.
Transparent AI frameworks improve post hoc analysis and mitigate the responsibility gap. As Clausewitz teaches us, war is an extension of politics, and transparency is foundational to civilian political oversight, which is a cornerstone of democratic governance. When AI systems recommend or execute potentially catastrophic actions, such as tactical nuclear strikes, public institutions must retain visibility into how those decisions are made. Opaque AI systems risk undermining trust and, therefore, legitimacy. Transparency also improves decision-making by permitting scrutiny of not just outputs but the assumptions, models and biases that led to them. Ultimately, transparency is a safeguard against misperception and unrecognised systemic biases—two of the five cognitive pitfalls. Transparency has the added benefit of facilitating recognition of adversarial manipulation and/or model ‘drift.’[17]
Transparent AI frameworks improve post hoc analysis and mitigate the responsibility gap.
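In engineering terms, one modest way to make recommendations traceable and reviewable is to require every AI output to carry a structured provenance record. The sketch below is a hypothetical illustration in Python; the field names (model version, data snapshot, inputs, assumptions) are our own assumptions about what a reviewer would need, not a description of any fielded NC3 system.

```python
# A minimal sketch of a decision provenance record that makes an AI
# recommendation traceable and reviewable after the fact. Field names and
# values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class RecommendationRecord:
    recommendation: str
    confidence: float                   # model-reported probability, 0..1
    model_version: str                  # which model produced the output
    training_data_snapshot: str         # identifies the data the model was fitted on
    inputs_used: list[str]              # sensor/intelligence feeds consulted
    key_assumptions: list[str]          # assumptions a human reviewer should challenge
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = RecommendationRecord(
    recommendation="strategic pause",
    confidence=0.94,
    model_version="prometheus-sim-0.3",
    training_data_snapshot="wargame-corpus-2024Q4",
    inputs_used=["radar-track-7741", "sigint-summary-0212"],
    key_assumptions=["adversary doctrine unchanged since last update"],
)
print(json.dumps(asdict(record), indent=2))  # audit-log entry for post hoc review
```

A record like this supports the post hoc analysis discussed above: reviewers can ask which inputs and assumptions drove the recommendation and whether the model or its data have drifted since validation.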
Trusted Tenets: Embedding Democratic Values into AI Decision Frameworks for Strategic and Nuclear Decision-Making
Trusted Tenets refers to the values or ethical principles in which any AI-enabled decision-making system must be grounded—particularly a system influencing lethal action, such as a tactical nuclear strike. These are democratic values such as civilian control, proportionality, human dignity, and moral deliberation. AI currently cannot, and must not, replace values-based judgment when lives, legitimacy, and civilisation itself are at risk.
AI operates on logic, but war is human. Like humans, AI models rely on past data and strategic assumptions; however, they are incapable of considering less predictable and less structured political and moral nuances. In theory, this shortfall can be mitigated by the application of law, which reflects societal values, but as we see in human interaction, law falls short of predicting (and sometimes informing) human cognitive processes. This is not to say laws are an unimportant framing factor. Civilian control is often framed by international law, including treaties such as the Geneva Conventions, providing a foundational framework for regulating armed conflict.
There is another shortfall with relying on law as a stand-in for human value systems. Current laws struggle to fully address emerging threats such as AI-driven tactical nuclear weapon escalation. In this context, the traditional legal frameworks—predicated on established international values and norms, and state-centric responsibility—fail to duplicate human decision-making, particularly in the realm of deterrence. Decisions to escalate or deescalate are less predictable and may embrace many interpretations of that law—sometimes by the same actor. To further complicate the application of law as the primary trusted tenet, opaque AI-driven systems create additional complexities related to autonomy, accountability, and rapid escalation dynamics.[18]
Current laws struggle to fully address emerging threats such as AI-driven tactical nuclear weapon escalation.
The Responsibility Gap
All of these issues coalesce to form the “responsibility gap.” In traditional military hierarchies, accountability for any targeting decision is clearly assigned to human commanders and incorporates rigorous checks and balances; when AI assists in these decisions, questions arise about where responsibility lies. Should responsibility lie with the military operator? The developers of the AI model? Or is there a scenario where responsibility should fall upon the AI system itself?[19] And upon which values was the model based, especially as models build upon one another to form more capable systems? These issues are particularly pressing in scenarios where AI-generated recommendations might lead to undesired escalation. As in our Prometheus example, if an AI model assigns a 70% probability to an adversary launching a second nuclear strike, should military leaders act on that assessment? What happens when the remaining 30% chance of de-escalation is ignored?
In summary, AI decision-making is built on logic-driven frameworks that fail to account for war’s complex, human aspects. First and second wave AI models primarily rely on historical data and predefined parameters,[20] making the output understandable and predictable; however, such outputs may be vulnerable to oversimplification when applied to complex strategic calculations. Even with transparent design, third and fourth wave AI, and associated machine-to-machine learning, can lead to a complex web of value convergence, raising questions of which values are in play.[21] While the Geneva Conventions and other legal frameworks provide guidelines for military conduct, they do not fully address AI’s role in targeting decisions, nor nuclear engagement in particular, where escalation dynamics can unfold at unprecedented speeds. To prevent misjudgements that could lead to nuclear conflict, AI systems must be designed to incorporate reasoning beyond simple probabilities and account for definable human ethical frameworks–they must include transparent and well-understood trusted tenets exercised through rigorous training protocols.
Training: Strengthening Human-AI Collaboration
Strengthening human-AI collaboration is essential to ensuring commanders remain in control while leveraging AI’s analytical capabilities and avoiding the five pitfalls: overreliance, information overload, groupthink, misperception, and unrecognised systemic bias.[22] One of the most critical steps in this process is the implementation of training programmes specifically designed to prepare decision-makers for AI-assisted strategic assessments. This training relies on the fundamentals of transparency and trusted tenets, clearly codified ethical and decision-making models upon which the system is designed. Commanders must develop the skills to critically evaluate AI-generated recommendations, recognising their utility and limitations.[23] Without adequate comprehension, there is a risk that military leaders may over-rely on AI-generated intelligence or disregard valuable insights due to misinterpretation. Effective training should include scenario-based exercises where decision-makers engage with AI-driven simulations, enhancing their ability to interpret, challenge, and refine both personal decision-making models and AI outputs in real-time.[24]
Strengthening human-AI collaboration is essential to ensuring commanders remain in control while leveraging AI’s analytical capabilities and avoiding the five pitfalls.
Such training should foster a culture of “AI scepticism,” encouraging commanders and senior advisors to critically assess AI-driven conclusions to prevent miscalculations that could escalate nuclear conflicts. While AI can process vast amounts of data at unprecedented speeds, it remains susceptible to biases, adversarial manipulation, hallucinations, and incomplete information.[25] Scepticism does not mean rejecting AI recommendations but rather ensuring that human operators apply contextual judgment, geopolitical awareness, and ethical consideration before making final decisions.[26]
Moreover, institutional safeguards should be implemented to prevent AI from becoming an over-centralised authority in nuclear command structures. Decision-makers must learn to recognise potential AI blind spots, particularly in crisis situations where adversaries might exploit algorithmic weaknesses or introduce misleading data. Establishing verification mechanisms, where AI assessments are cross-checked against human intelligence, diplomatic considerations, and ethical frameworks, further enhances strategic stability.[27] Ultimately, training which focuses systematically on the entire human-machine decision-making dynamic addresses the remaining cognitive pitfalls: information overload and groupthink, while providing additional safeguards against overreliance, misperception, and unrecognised systemic bias.
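The verification mechanism described above can be pictured as a simple gating function: a high-consequence recommendation is released for action only after independent cross-checks and an explicit human decision. The Python sketch below is a hypothetical illustration under our own assumptions (the check names, the two-source corroboration rule, and the 0.9 confidence threshold are invented for the example).

```python
# A minimal sketch of an institutional safeguard: an AI recommendation is only
# released for decision after independent cross-checks and explicit human
# approval. Thresholds and source names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Assessment:
    source: str      # e.g., "human-intel", "diplomatic-desk", "allied-sensor-net"
    supports: bool   # does this source corroborate the AI recommendation?

def release_for_decision(ai_confidence: float, cross_checks: list[Assessment],
                         human_approval: bool) -> bool:
    corroborated = sum(a.supports for a in cross_checks) >= 2  # require two independent confirmations
    return ai_confidence >= 0.9 and corroborated and human_approval

checks = [Assessment("human-intel", True), Assessment("diplomatic-desk", False),
          Assessment("allied-sensor-net", True)]
print(release_for_decision(ai_confidence=0.94, cross_checks=checks, human_approval=True))
```

The design point is that the AI’s confidence alone never suffices; corroboration and human approval are independent conditions.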
While autonomous AI remains an elusive goal, AI-driven targeting systems are already tested and implemented across a wide array of command-and-control systems. Systems such as Project Maven, which successfully enhances target identification,[28] test both the reliability of AI-generated assessments and the risk of unintended civilian casualties.[29] AI-assisted targeting in the Russia-Ukraine conflict demonstrates both the potential and the dangers of autonomous systems in warfighting. AI-enhanced drones and cyberwarfare tactics—predominantly in the areas of autonomous navigation, target identification, and weapons employment—illustrate how AI can accelerate decision-making but also create new vulnerabilities. While thus far the ‘human-in-the-loop’ remains, as capabilities advance from third to fourth wave, greater reliance on these systems will likely decrease that human interaction,[30] potentially incorporating delegation of target engagement authority to AI-enabled systems. Now is the time to implement the 3Ts (transparency, trusted tenets, and training), before AI systems which do not adhere to these safeguards are irreversibly integrated into our most lethal systems: nuclear command, control, and communications (NC3) networks.
While autonomous AI remains an elusive goal, AI-driven targeting systems are already tested and implemented across a wide array of command-and-control systems.
An Operational Perspective
Algorithmic Judgement in the Nuclear Chain of Command: Risk, Control, and Consequence
The three major nuclear powers–the US, the PRC, and Russia–have made significant investments in AI-enabled military systems, including autonomous threat detection and missile defence capabilities.[31], [32] DARPA, in the US, is working on AI-enabled decision support models.[33] The People’s Liberation Army (PLA) is actively working on “intelligentised warfare,”[34] or simply “intelligent warfare,”[35] where AI plays a key role in multi-domain operations. Moreover, Russia is incorporating AI into its command-and-control networks, raising concerns about potential AI-driven escalation in crisis scenarios.[36]
Human Oversight vs. Automation Bias
Use of AI in nuclear operations promises to increase the speed of data analysis, reduce human errors, and improve decision-support systems by processing vast intelligence inputs, including satellite imagery, cyber threats, and adversary communications, to detect potential nuclear escalation before it materialises. However, this level of AI reliance exacerbates concerns regarding automation bias, where human decision-makers accept AI-generated recommendations without sufficient scrutiny.[37]
Automation bias in AI-assisted nuclear targeting/decision-making is of particular concern.[38] These models require large quantities of contextually specific data,[39] making the transference of automated conventional ‘kill chain’ programmes to the nuclear domain rather difficult. Data on historical nuclear incidents is (thankfully) limited and somewhat stale; however, these incidents highlight the dangers of misinterpretation and/or misunderstanding.
Automation bias in AI-assisted nuclear targeting/decision-making is of particular concern.
In 1979, the U.S. Department of Defense ‘detected an imminent nuclear attack,’ which was a false warning. This event was followed months later by another false report, this time of ‘2,200 Soviet missiles’ launched against the United States.[40] The Soviets had a similar false alarm in 1983, when a missile detection system mistakenly reported an incoming U.S. strike; retaliation was averted only because the human in the decision chain, Lt. Col. Stanislav Petrov, questioned the system’s reliability.[41] Similarly, the 1995 launch of a Norwegian weather rocket triggered an emergency nuclear alert in Russia, which was ultimately dismissed after human verification.[42] While these are dated examples, the predisposition to technological reliance has only grown as digital natives assume leadership positions. In an AI-driven environment, over-reliance on automated alerts could increase the likelihood of a mistaken nuclear launch.
Some consider these nuclear close calls a byproduct of automation within early warning centres and argue that these systems were operating in accordance with the rules of engagement with which they were designed.[43] The conclusions drawn from these incidents are that sensors and data fusion were the main points of weakness, followed by human error.[44] However, this analysis lacks temporal context. At the time of these incidents, the U.S. and Russian efforts to automate nuclear responses and early warning systems were limited by computing power. AI-enabled systems are more efficient and better suited for handling complex datasets, likely reducing (but not eliminating) the risk of miscalculation.
AI in Simulated Nuclear Engagements
The Rational Actor Model remains a cornerstone of nuclear deterrence strategy, assuming that states make decisions based on cost-benefit calculations to avoid mutually assured destruction.[45] AI-driven war-gaming tools have been developed to simulate potential nuclear engagement, aiming to enhance strategic foresight by predicting enemy responses.[46] However, the reliability of these models is contingent on the quality of input data and is subject to the biases inherent in their design.[47] AI may miscalculate escalation risks due to incomplete or flawed data, potentially leading decision-makers to overestimate or underestimate the probability of nuclear conflict.[48] If AI models predict a low likelihood of nuclear escalation, leaders face a dilemma: should they trust computational assessments? What role does traditional human judgment, which incorporates political nuance and psychological factors, play? Should decisions concerning target engagement be reduced to the outcomes of empirical probability models? How does the concept of moral responsibility and values-based decision-making fit into these AI-enhanced models? These are central questions that a broader scientific debate on the basic theoretical premises of nuclear deterrence in the age of AI must answer.
AI-driven war-gaming tools have been developed to simulate potential nuclear engagement, aiming to enhance strategic foresight by predicting enemy responses.
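To show how little it takes for a cost-benefit assessment to mislead, the sketch below works through the expected-utility arithmetic behind a rational-actor style comparison of two options. All probabilities and payoffs are invented for illustration; the point is the sensitivity of the ranking to the escalation probabilities fed into the model.

```python
# A minimal sketch of the expected-utility arithmetic behind a rational-actor
# style assessment. All probabilities and payoffs are illustrative assumptions.
def expected_utility(outcomes):
    """outcomes: list of (probability, payoff) pairs on an arbitrary payoff scale."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    # payoff -1000 ~ nuclear exchange, +10/+20 ~ successful deterrence signal
    "strategic pause":   [(0.94, 10), (0.06, -1000)],
    "demonstration use": [(0.74, 20), (0.26, -1000)],
}

for name, outcomes in options.items():
    print(f"{name:>18}: expected utility = {expected_utility(outcomes):8.1f}")

# Re-running with an escalation probability of 0.05 instead of 0.26 for
# "demonstration use" (EU = 0.95*20 - 0.05*1000 = -31.0) reverses the ranking
# against "strategic pause" (EU = -50.6), illustrating how incomplete or flawed
# input data can quietly change which option the model recommends.
```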
AI vs. Human Judgment in Nuclear Response Decisions
While AI can optimise decisions, it raises critical ethical and strategic concerns when applied to nuclear deterrence and response options. Nuclear deterrence is sometimes described as a ‘mind game’.[49] AI-generated recommendations are based on historical data and statistical probabilities, which are in limited supply within the nuclear domain and likely lack the cognitive aspects of the ‘game’. When it comes to deterrence, military commanders must consider broader political, humanitarian, and moral implications that AI cannot fully quantify.[50] These considerations underscore the risk of over-reliance on AI, as noted in the aforementioned close calls.[51] If AI-assisted systems gain more authority in nuclear responses, the erosion of human oversight may increase the likelihood of escalation due to algorithmic misinterpretations or adversarial exploitation of AI biases. Employing the 3Ts (transparency, trusted tenets, and training), along with strategic restraint, is necessary to ensure AI remains a tool for analytical purposes rather than autonomous decision-making in the nuclear realm.
To regulate AI’s role in this critical domain, it is imperative to establish clear policies that ensure meaningful human control over nuclear weapons decisions, preventing fully autonomous system determinations. Additionally, robust data governance frameworks are essential to maintain information integrity and security within AI-enabled NC3 contexts.[52] International collaboration is also crucial; the 2024 agreement between the United States and China to keep nuclear weapons decisions under human control underscores the importance of multinational efforts.[53] By combining stringent national policies with international agreements, the integration of AI into nuclear command structures can be managed to enhance strategic stability while mitigating potential risks.
The Future of AI in Tactical Nuclear Scenarios
As AI continues to evolve, its role in nuclear crisis decision-making must be carefully scrutinised to balance strategic advantages with the risk of unintended escalation. AI’s predictive capabilities offer insight into potential adversary responses, enhancing nuclear war-gaming, early warning systems, and deterrence models. However, its role should remain strictly advisory, ensuring that human judgment prevails in final decision-making processes. Given the unpredictable and high-stakes nature of nuclear warfare, over-reliance on AI could introduce significant risks, particularly if AI models fail to interpret complex geopolitical signals accurately or if adversaries exploit algorithmic weaknesses.[54]
As AI continues to evolve, its role in nuclear crisis decision-making must be carefully scrutinised to balance strategic advantages with the risk of unintended escalation.
Conclusions
The fundamental challenge of integrating AI into nuclear frameworks lies in the limitations of machine learning models, which cannot fully account for the fluid nature of human interaction and global politics, as well as societal shifts in value interpretation. AI’s capacity to model escalation risk is only as robust as its input data and resulting calculations. Incomplete or outdated datasets can lead to misjudgements or misunderstandings—such as underestimating an adversary’s red lines—resulting in catastrophic miscalculation. Moreover, adversarial manipulation of AI systems or data through cyber operations could introduce further instability, making human oversight the indispensable safeguard against errors and/or malicious interference.
Despite its limitations, AI’s contributions to nuclear deterrence cannot be ignored. AI-enabled decision-making can outpace purely human analytical processes, enhance situational awareness, and assist military and political leaders in recognising escalation thresholds and calculating available escalation management options. Nevertheless, the ethical and strategic risk of allowing autonomous AI decisions in a potential nuclear conflict is too significant to justify its direct control over nuclear weapon employment or retaliation.[55]
Despite its limitations, AI’s contributions to nuclear deterrence cannot be ignored.
Future research should focus on AI transparency, the ethical implications (trusted tenets) of AI, and human-AI collaboration (training) within high-stakes military operations. Additionally, international agreements on AI’s role in NC3 should be pursued to establish legal and ethical norms that prevent or reduce escalation risk. The intersection of AI and nuclear strategy requires continued interdisciplinary research involving experts in machine learning, international law, ethics, decision-making, military science, and diplomacy, to mitigate risk while maximising strategic benefits.
AI will undoubtedly shape the future of nuclear escalation, but its role must be thoughtfully defined and carefully constrained to prevent unintended escalation. While AI can enhance strategic calculations, it cannot replace the nuanced decision-making required in nuclear crises. Ensuring human accountability and authority over nuclear decisions is paramount to maintaining strategic stability and preventing catastrophic miscalculations.
“The Moment of Truth”
The President, flanked by her national security team—including Prometheus—views the display of a series of real-time assessments.
“We need options,” the President said tersely.
Prometheus, trained on decades of strategic behaviour modelling and enhanced with observable ethical reasoning, cross-referenced telemetry with diplomatic posture, weather anomalies, satellite data, and adversary command-chain telemetry. It reviewed lessons learned from hundreds of human-machine training simulations and verified its ethical frameworks and the leadership’s preferences. Within 37 seconds, it flagged a 72% probability that the launch was a test gone awry—not a first strike.
But it didn’t stop there.
Prometheus presented three courses of action:
- Launch-on-Warning: 89% risk of global thermonuclear war, 11% chance of deterring escalation.
- Strategic Pause: Immediate force readiness, global alert levels raised. Estimated 94% chance of de-escalation within 3 hours.
- Decapitation Strike: AI listed it last, red-tagged with a 98% probability of catastrophic misinterpretation.
Each recommendation included provenance—Prometheus explained why, not just what. It referenced pattern inconsistencies in radar return signals, recent adversary command reshuffling, and historical behavioural analogues from 1983, 1995, and 2022.
The President breathed in deeply.
“Prometheus, what does historical behaviour under leadership style Omega-3 predict for their response to a force alert?”
Prometheus responded: “Probability-weighted models indicate 81% likelihood of immediate backchannel communication. Using your preferred ethical framework, 1 Alpha…Madame President, recommendation: Strategic Pause.”
The President nodded and looked to the rest of her team, “You know the drill…”
Each member of her team checked their own personalised decision-making models, verified consistency of input data and recommended outcomes before returning her gaze.
As she made eye contact with each member of her team, they too nodded in agreement with Prometheus’ assessment.
The President, confident in the well-executed decision-making process—anchored in the fundamentals of transparency, trusted tenets, and training—gave the command: “Prepare Strategic Pause Protocol Echo. No launches.”
Jill Long, Colonel (OF-5), U.S. Air Force, retired; PhD in Political Science and Government; MA in Military Strategic Studies; MS in Management; Director of the Democratic Resilience Center at the Helmut-Schmidt-University (HSU) and Senior Professor, Master of Strategic Leadership, Norwich University. Tamsyn Karlotta Garinet, Captain (OF-2), Bundeswehr; Ecological Security Fellow 2024 at the Council on Strategic Risks (CSR); M.Sc. Psychology; research topics: civil-military cooperation, ecological security, social, differential and media psychology, leadership. Severin Pleyer, Captain, Bundeswehr; Doctoral Research Associate on 21st Century Nuclear Deterrence at HSU.
The views contained in this article are the authors’ alone and do not represent the views of the US Air Force, the Bundeswehr, the Helmut-Schmidt-University or Norwich University.
[1] Keith Payne, “AI and Nuclear Decisions: Toward an Arms Control Framework.” Journal of Strategic Studies 48, no. 2 (2025): 234–57. https://www.tandfonline.com/doi/full/10.1080/13523260.2025.2474869.
[2] Daniel W. Tigard. “There Is No Techno-responsibility Gap.” Philosophy & Technology 34, no. 3 (2021): 589–607. https://doi.org/10.1007/s13347-020-00414-7.
[3] Christopher S. Chivvis and Jennifer Kavanagh, “How AI Might Affect Decisionmaking in a National Security Crisis.” Carnegie Endowment for International Peace, (June 17, 2024). https://carnegieendowment.org/research/2024/06/artificial-intelligence-national-security-crisis?lang=en.
[4] John Launchbury, “A DARPA Perspective on Artificial Intelligence,” posted by DARPAtv, (February 2, 2017), YouTube Video, https://www.youtube.com/watch?v=-O01G3tSYpU.
[5] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2 (IEEE, 2017): 117.
[6] Rizwan Choudhury, “Project Maven: The epicenter of US’ AI military efforts,” Interesting Engineering, March 2, 2024: 1, https://interestingengineering.com/military/project-maven-the-epicenter-of-us-ai-military-efforts
[7] Brian Ka Chan, “First, Second, Third Wave of Artificial Intelligence (AI),” Mind Data, (March 3, 2025). https://minddata.org/3-waves-Brian-Ka-Chan-AI.
[8] Michael Depp and Paul Scharre, “Artificial Intelligence and Nuclear Stability,” War on the Rocks, (January 16, 2024). https://warontherocks.com/2024/01/artificial-intelligence-and-nuclear-stability/.
[9] For more detailed information see Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, (Harper Press, 2018).
[10] Amanda Miller, “AI Algorithms Deployed in Kill Chain Target Recognition,” Air & Space Forces Magazine, (September 21, 2021). https://www.airandspaceforces.com/ai-algorithms-deployed-in-kill-chain-target-recognition/.
[11] Forrest E. Morgan, Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. “Military applications of Artificial Intelligence: Ethical Concerns in an Uncertain World,” Rand Corporation, 2020: 30.
[12] Ibid: 30-31.
[13] Vladislav Chernavskikh. “Nuclear Weapons and Artificial Intelligence: Technological Promises and Practical Realities,” SIPRI Background Paper, September 2024: 4–5.
[14] Bill Drexel, “Promethean Rivalry: The World-Altering Stakes of Sino-American AI Competition,” Center for a New American Security, April 22, 2025. https://www.cnas.org/publications/reports/promethean-rivalryAS.
[15] James Johnson, “’Catalytic Nuclear War’ in the Age of Artificial Intelligence & Autonomy: Emerging Military Technology and Escalation Risk Between Nuclear-Armed States,” The Journal of Strategic Studies, (2021): 1-41. https://doi.org/10.1080/01402390.2020.1867541.
[16] James Johnson, “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies 45, no. 3, (2022): 439–77. https://doi.org/10.1080/01402390.2020.1759038.
[17] Miles Brundage et al. “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” arXiv, 2020, Article 2004.07213. https://arxiv.org/abs/2004.07213
[18] See Michael N. Schmitt, The Future of Warfare and International Law. Harvard University Press, 2021 and Romaissa Ghenai, “The Big Tech-Military Nexus: AI, Warfare, and the Future of International Law,” Cornell Law, https://publications.lawschool.cornell.edu/cilj/wp-content/uploads/sites/7/2025/04/Panel-LOAC-Romaissa-Ghenai-Big-Tech-Military-Nexus.pdf.
[19] Alessandro Nalin and Paolo Tripodi, “Future Warfare and Responsibility Management in the AI-Based Military Decision-Making Process,” Journal of Advanced Military Studies 14, no. 1 (2023): 89.
[20] Vladislav Chernavskikh. “Nuclear Weapons and Artificial Intelligence: Technological Promises and Practical Realities,” SIPRI Background Paper, September 2024: 10-11.
[21] Mantas Mazeika, Xuwang Yin, Rishub Tamirisa, Jaehyuk Lim, Bruce W. Lee, Richard Ren, Long Phan, Norman Mu, Adam Khoja, & Oliver Zhang, et al., “Utility Engineering,” (2025), https://www.emergent-values.ai/.
[22] Christopher S. Chivvis and Jennifer Kavanagh, “How AI Might Affect Decisionmaking in a National Security Crisis.” Carnegie Endowment for International Peace, June 17, 2024. https://carnegieendowment.org/research/2024/06/artificial-intelligence-national-security-crisis?lang=en.
[23] James Johnson, Artificial Intelligence and the Future of Warfare: The Risks of an AI Arms Race. Georgetown University Press, 2020.
[24] Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, and Bobby Filar, et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Oxford University Press, 2018.
[25] Rebecca K Kahn and Samuel Mann. Strategic Stability and AI: The Case for Human Oversight in Nuclear Command and Control. MIT Press, 2021.
[26] Paul Scharre, Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company, 2018.
[27] Keith Payne, AI and Nuclear Decision-Making: Balancing Speed and Control. Brookings Institution Press, 2021.
[28] Jon Harper, “Palantir lands $480M Army contract for Maven artificial intelligence tech,” DefenseScoop, May 29, 2024. https://defensescoop.com/2024/05/29/palantir-480-million-army-contract-maven-smart-system-artificial-intelligence/.
[29] Isobel Asher Hamilton, “A former Google Engineer Warned that Robot Weapons Could Cause Accidental Mass Killings,” Business Insider, September 16, 2019, https://www.businessinsider.com/former-google-engineer-warns-against-killer-robots-2019-9.
[30] Kateryna Bondar, Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. (Center for Strategic & International Studies: March 6, 2025). https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare#h2-ai-in-automatic-target-recognition
[31] Vladislav Chernavskikh. “Nuclear Weapons and Artificial Intelligence: Technological Promises and Practical Realities,” SIPRI Background Paper, September 2024: 5.
[32] U.S. Department of Defense, 2024 China Military Power Report, December 18, 2024: 1-2.
[33] DARPA, AI Next Campaign, (DARPA, 2025). https://www.darpa.mil/research/programs/ai-next-campaign.
[34] U.S. Department of Defense, 2024 China Military Power Report, December 18, 2024: 1-2
[35] Kevin Pollpeter and Amanda Kerrigan, “The PLA and Intelligent Warfare: A Preliminary Analysis,” CNA, October 1, 2021, 1. https://www.cna.org/reports/2021/10/The-PLA-and-Intelligent-Warfare-A-Preliminary-Analysis.pdf.
[36] U.S. Department of Defense, 2023 Military and Security Developments Involving the People’s Republic of China, October 2023: 13.
[37] Alice Saltini, AI and Nuclear Command, Control and Communications: P5 Perspectives. European Leadership Network, November 2023, 6.
[38] Tao Burga, “Artificial Intelligence in Nuclear Command and Control: Experimental Evidence of Automation Bias from Humans and Large Language Models,” Brown University, April 2024.
[39] Kateryna Bondar, “Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare,” Center for Strategic & International Studies, March 6, 2025. https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare#h2-ai-in-automatic-target-recognition.
[40] Daryl G. Kimball, ‘Nuclear False Warnings and the Risk of Catastrophe’, Arms Control Today, December 2019, https://www.armscontrol.org/act/2019-12/focus/nuclear-false-warnings-and-risk-catastrophe.
[41] Guglielmo Tamburrini, “Nuclear Weapons and the Militarization of AI,” in Nuclear Risks and Arms Control: Problems and Progresses in the Time of Pandemics and War, ed. Fabrizio Coticchia et al., Springer Proceedings in Physics, vol. 291, 2023: 156, https://doi.org/10.1007/978-3-031-29708-3_17.
[42] Laura Kiniry, “When Russian Radar Mistook a Norwegian Scientific Rocket for a U.S. Missile, the World Narrowly Avoided Nuclear War,” Smithsonian Magazine, January 25, 2025, https://www.smithsonianmag.com/smart-news/when-russian-radar-mistook-a-norwegian-scientific-rocket-for-a-us-missile-the-world-narrowly-avoided-nuclear-war-180985836/.
[43] Matthew Wills, “Close Calls: When the Cold War Almost Went Nuclear,” JSTOR Daily, January 3, 2025. https://daily.jstor.org/close-calls-when-the-cold-war-almost-went-nuclear/.
[44] Nate Jones, ‘The 1983 War Scare: “The Last Paroxysm” of the Cold War Part I’, National Security Archive Electronic Briefing Book No. 426. May 22, 2013, https://nsarchive2.gwu.edu/.
[45] Thomas C. Schelling, Arms and Influence. (Yale University Press, 2008).
[46] Ajai Raj, “Generative AI Wargaming Promises to Accelerate Mission Analysis,” Johns Hopkins: Applied Physics Laboratory, March 3, 2025. https://www.jhuapl.edu/news/news-releases/250303-generative-wargaming?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25.
[47] Michael C. Horowitz, “When Speed Kills: Autonomous Weapons, Deterrence, and Stability.” Journal of Strategic Studies 42, no. 6 (2019): 764–88.
[48] Vincent Boulanin, The Impact of AI on Strategic Stability and Nuclear Risk. Stockholm International Peace Research Institute, 2021.
[49] Rose Gottemoeller, “Ukraine and the mind games of nuclear deterrence,” Center for International Security and Cooperation, March 22, 2024. https://cisac.fsi.stanford.edu/publication/ukraine-and-mind-games-nuclear-deterrence.
[50] Greg C. Allen and Taniel Chan, Artificial Intelligence and National Security: Risks and Policy Considerations. RAND Corporation, 2017.
[51] Daniel Bessner, The Human Factor in Nuclear Close Calls: Lessons from Cold War Incidents. Cambridge University Press, 2020.
[52] Jonathan Falcone, “Memo to Trump: Develop Specific AI Guidelines for Nuclear Command and Control.” Bulletin of the Atomic Scientists, January 17, 2025, https://thebulletin.org/2025/01/memo-to-trump-develop-specific-ai-guidelines-for-nuclear-command-and-control/.
[53] Jarret Renshaw and Trevor Hunnicut, “Biden, Xi Agree That Humans, Not AI, Should Control Nuclear Arms.” Reuters, November 16, 2024, https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/.
[54] Vincent Boulanin, The Impact of AI on Strategic Stability and Nuclear Risk. Stockholm International Peace Research Institute, 2021.
[55] Greg C. Allen and Taniel Chan, Artificial Intelligence and National Security: Risks and Policy Considerations. Santa Monica, CA: RAND Corporation, 2017.