Abstract: AI-powered autonomous weapons, from advanced drones to robotic soldiers, are reshaping warfare with unmatched speed, precision, and adaptability. Capable of life-and-death decisions, they deliver tactical power and coordination, reducing human risk and amplifying military strength. Intelligent swarming and real-time responses provide a formidable combat edge. However, robust controls are crucial to wield these tools responsibly and to shape the future of warfare with both strategic brilliance and caution.
Problem statement: How can society, with the military as a part of it, retain control and dominance over AI-driven systems powered by self-learning algorithms while leveraging their advantages?
So what?: Military forces must implement robust control and oversight mechanisms to deploy AI and AI-driven autonomous weapons effectively. Additionally, military leadership must be highly attuned to the influence of AI on social media, recognising its potential to shape behaviour. This responsibility to ensure the ethical and strategic use of AI while safeguarding human judgment and accountability lies with military leaders, policymakers, and international organisations. The solution to this challenge is found in the optimal integration of human intelligence with machine precision—a concept increasingly known as “Human-Machine Teaming”.

“Weak human + machine + better process is superior to strong human + machine + inferior process.”
Garry Kasparov[1]
The fusion of human intellect and machine efficiency revolutionises modern warfare, blending cognitive brilliance with mechanical precision. This transformation spans three key areas: First, human enhancement through technologies like EEG interfaces and adaptive displays, which boost perception, cognition, and decision-making, raising ethical and technical concerns. Second, the rise of autonomous drones and swarm systems. These enhance battlefield efficiency but pose risks in control, accountability, and cybersecurity. Third, AI-driven social media manipulation, which threatens to reshape geopolitics by influencing mass behaviour. This shift calls for strict regulation, human oversight, and education to prepare forces for the challenges of technology-driven warfare. These developments are not merely theoretical—they are already playing a decisive role in modern conflicts. As technological advancements transform the battlefield, they simultaneously reshape the geopolitical landscape.
Citius, Altius, Fortius
The Olympic motto of “Faster, Higher, Stronger” captures the technological race for dominance. As Russian President Vladimir Putin stated in 2017, “The leader in artificial intelligence will be the ruler of the world.”[2] He warned against monopolistic control of AI and promised Russia would share its expertise, though this now seems unlikely.[3] Putin also predicted future wars would be fought by drones, asserting that “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”[4]
This race for AI dominance extends beyond military technology and rests on four key pillars: data, computing power, talent, and institutions. As Paul Scharre outlines in Four Battlegrounds, data is the defining resource of the 21st century. Akin to coal or oil, it must be collected and refined. Computing power drives AI development, with control over global chip supply chains offering strategic leverage. Talent determines which nations attract top researchers and tech firms. Yet the most critical factor is institutions—the global AI leader will be the one that seamlessly integrates AI into its economy, society, and military.[5]
In the Russia-Ukraine war, drones play a crucial role but are not the sole determining factor. Other elements, such as resistance, combined arms warfare, and the ability to coordinate large formations beyond the brigade level, are key. The absence of operational capabilities at the corps and division levels is particularly notable.[6] Operational competence and training are essential—without them, advanced weapon systems are ineffective.[7]
Both sides are rapidly adapting to new technologies. Drones are now being developed and modified at a breakneck pace—if enemy electronic warfare disrupts controls, alternatives like fibre-optic guidance are deployed.[8] Innovations move from concept to deployment within weeks or months, not years. In Ukraine, domestic prototypes are quickly field-tested and mass-produced until further refinements are needed.
Current conflicts—whether in Ukraine or the Middle East—also demonstrate a growing prevalence of autonomous and semi-autonomous (weapon) systems.[9] The concept is compelling: a weaponised drone (UAV) worth a few hundred dollars loiters in the airspace, waiting for a multi-million-dollar armoured vehicle to appear, at which point it strikes, either remotely piloted or autonomously, in a kamikaze-style attack.[10] This development has sparked intense debates about the ethics, law, and social implications of whether humans should relinquish decision-making authority over life and death. The “human-in-the-loop” (HITL) versus “human-on-the-loop” (HOTL) question will shape future doctrines, influenced by cultural and governance contexts.
Future conflicts will not be limited to drones or AI alone, but will integrate emerging technologies like AI, synthetic biology, human enhancement, robotics, and quantum computing.[11] The Wartech Nexus offers virtually limitless possibilities.
Science Fiction or Reality?
At the 2018 International Concept Development & Experimentation Conference in Washington D.C., Mr. August Cole—author, futurist, and Senior Fellow at the Atlantic Council’s Art of Future Warfare Project—emphasised the use of fiction to envision the future and uncover blind spots.[12] He is convinced that fiction can sometimes be more useful in helping see the world differently and understanding the global nature of conflict.[13]
In this spirit, we explore three scenarios for future conflict resolution by integrating current and developing weapon systems, creating human-machine symbiosis, and optimising strategies to assert our interests against competitors. While ethical and legal concerns exist, they may be set aside when necessary to achieve concrete goals, especially as some adversaries disregard conventional frameworks.
Scenario 1 – Human-Machine Teaming and Swarm Technology
“Technology is a useful servant but a dangerous master.”[14] (Christian Lous Lange)
In May 2021, Israel deployed an AI-controlled drone swarm in Gaza designed to detect, identify, and attack militant Hamas fighters.[15] Although advances in AI remain controversial, the results in certain areas are remarkable, including facial and object recognition, speech-to-text conversion, and real-time translation.[16] Deep learning is pivotal, with the quality and quantity of training data being critical.[17]
To maximise human-machine symbiosis amid rapid technological advancements, humans must keep pace. Human enhancement is imperative, as innate human advantages still foster synergy. For instance, a DARPA (Defense Advanced Research Projects Agency) experiment demonstrated that combining human vision with technological surveillance effectively detects changes in a monitored area.[18] The Cognitive Technology Threat Warning System uses a wide-angle camera and radar to gather imagery for human review, alongside a wearable EEG cap that tracks brain activity and detects the unconscious recognition of scene changes, known as a P300 event.[19] This combination achieved nearly 100% success compared to using either humans or machines alone.[20]
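To make the principle concrete, the sketch below flags candidate P300 responses by comparing the average post-stimulus EEG amplitude against a pre-stimulus baseline. It is a deliberately minimal illustration of the detection idea, not the CT2WS pipeline; the sampling rate, window, and threshold are assumptions.

```python
import numpy as np

def detect_p300(eeg, stim_samples, fs=250, threshold_uv=5.0):
    """Flag stimuli whose post-stimulus EEG deflection suggests a P300.
    eeg: 1-D signal in microvolts (e.g., channel Pz); stim_samples: stimulus
    onsets as sample indices. Window and threshold are illustrative guesses."""
    hits = []
    lo, hi = int(0.25 * fs), int(0.50 * fs)       # P300 window ~250-500 ms post-stimulus
    for s in stim_samples:
        baseline = eeg[max(0, s - lo):s].mean()   # pre-stimulus reference level
        window = eeg[s + lo:s + hi]               # where the P300 peak would fall
        if len(window) and window.mean() - baseline > threshold_uv:
            hits.append(s)                        # candidate unconscious recognition
    return hits

# Synthetic demo: a deflection ~250-500 ms after the stimulus is flagged.
fs = 250
eeg = np.zeros(10 * fs)
eeg[2 * fs + 62 : 2 * fs + 125] = 8.0
print(detect_p300(eeg, [2 * fs], fs))  # -> [500]
```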
This capability is crucial since AI can be deceived by misleading information. A tank labelled “school bus” might be overlooked, and marines have fooled AI cameras using decoys like cardboard or tree bark covering their faces.[21] While AI learns from its mistakes, quality training data and human oversight—via human-in-the-loop or human-on-the-loop approaches—ensure accurate target identification.
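The decoys described above are physical-world tricks; in the digital domain, the same brittleness is commonly demonstrated with gradient-based perturbations. The sketch below applies the Fast Gradient Sign Method to a toy logistic classifier; the model, weights, and epsilon are illustrative assumptions, not a description of the systems cited above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method against a logistic classifier.
    For the log-loss, the gradient w.r.t. the input x is (p - y) * w;
    nudging each feature by eps in the gradient's direction raises the loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy demo: a fixed "classifier" becomes far less confident after the nudge.
rng = np.random.default_rng(1)
w, b = rng.normal(size=16), 0.0
x, y = rng.normal(size=16), 1.0
x_adv = fgsm_perturb(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence in class 1 drops
```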
The integration of humans and machines is complex and limited by vulnerabilities. With future conflicts likely prolonged, personnel reserves will be vital despite reservists’ lack of specialisation. Thus, warfare must be “soldier-proof,” streamlining equipment and leadership to the essentials of military efficiency.[22]
The complexity of future conflicts also necessitates high system autonomy to allow humans to focus on critical tasks. A broad, mission-type command approach is sufficient in this context, rather than rigid, directive-based control.
Nature-inspired swarm technology excels in complex scenarios with three key traits:[23]
- Seek to move towards the centre (Cohesion);
- Move away if someone gets too close (Separation); and
- Move in roughly the same direction as your neighbours (Alignment).
Remarkably, it operates without central control, with all units pursuing a common goal. Like ants and bees, semi-autonomous systems in a swarm can be assigned tasks based on their individual capabilities.
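These three rules alone are enough to produce coordinated collective motion with no central controller, which is what makes the approach attractive for unmanned systems. A minimal sketch of the classic boids update follows; the radii, weights, and toy setup are assumptions chosen for illustration.

```python
import numpy as np

def boids_step(pos, vel, r_sep=1.0, r_near=5.0,
               w_coh=0.01, w_sep=0.05, w_ali=0.05, dt=1.0):
    """One update of the three swarm rules. pos, vel: (N, 2) arrays."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]                    # vectors from unit i to all others
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < r_near)       # neighbourhood, excluding self
        if not near.any():
            continue
        # Cohesion: steer towards the local centre of the neighbours.
        new_vel[i] += w_coh * offsets[near].mean(axis=0)
        # Separation: steer away from any unit that gets too close.
        crowd = (dist > 0) & (dist < r_sep)
        if crowd.any():
            new_vel[i] -= w_sep * offsets[crowd].sum(axis=0)
        # Alignment: match the average heading of the neighbours.
        new_vel[i] += w_ali * (vel[near].mean(axis=0) - vel[i])
    return pos + new_vel * dt, new_vel

# Toy run: 20 units with random starting positions and velocities.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 10, (20, 2)), rng.normal(0, 0.5, (20, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)
```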
The human, best protected in an armoured vehicle made of lightweight metal foam but with a high protection class, is surrounded by various unmanned systems (US), interconnected as a swarm. He has a mission to complete and moves in the designated direction; he is the centre of the swarm. On the ground, unmanned ground systems (UGS) with different capabilities are in motion:
- UGS that use electronic warfare to detect threats (mines, sensors, radio signals) and can also neutralise them, such as UXO (unexploded ordnance), mines, or IEDs (Improvised Explosive Devices);
- UGS equipped with various sensors for detection (radar, thermal imaging, infrared, other optical means) that can simultaneously perform identification and threat assessment (including friend-foe identification) and, if not specifically assigned elsewhere, can also carry out neutralisation (fully automated or human-on-/in-the-loop).
In the air, unmanned aerial systems (UAS) provide a similar range of capabilities to the UGS:
- UAS in various sizes, ranges, and categories that master the OODA loop (Observe, Orient, Decide, and Act), again autonomously or with human-in-/on-the-loop.
For water-based missions, Unmanned Maritime Systems (UMS) would be integrated. Humans participate in decision-making—whether directly (human-in-the-loop), via oversight (human-on-the-loop), or excluded from the decision process in fully autonomous systems (human-out-of-the-loop).
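The difference between these oversight modes reduces to a simple engagement gate. The sketch below illustrates only the decision logic; it describes no fielded system, and the mode names mirror the terms used above.

```python
from enum import Enum

class Mode(Enum):
    HITL = "human-in-the-loop"        # human must approve each engagement
    HOTL = "human-on-the-loop"        # system acts unless a human vetoes in time
    HOOTL = "human-out-of-the-loop"   # fully autonomous

def may_engage(mode: Mode, human_approved: bool = False,
               human_vetoed: bool = False) -> bool:
    """Illustrative engagement gate for the three oversight modes."""
    if mode is Mode.HITL:
        return human_approved         # no action without explicit consent
    if mode is Mode.HOTL:
        return not human_vetoed       # default-permit, human keeps a veto
    return True                       # HOOTL: the machine decides alone

assert may_engage(Mode.HITL) is False   # silence means no under HITL
assert may_engage(Mode.HOTL) is True    # silence means yes under HOTL
```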
All US are interconnected, sharing data seamlessly. For instance, if a UGS cannot neutralise a target, another system, like a UAS, will assume the task. Data is stored centrally and processed continuously, with the human adjusting its density and quality to maintain oversight and make key decisions.
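Such a handoff amounts to capability-based reassignment across the network. A minimal illustration follows; the unit names and capability labels are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    capabilities: set
    operational: bool = True

@dataclass
class Task:
    required: str

def assign(task: Task, units: list):
    """Hand the task to the first capable, operational unit in the swarm."""
    for unit in units:
        if unit.operational and task.required in unit.capabilities:
            return unit.name
    return None  # no capable unit left: escalate to the human in the centre

swarm = [Unit("UGS-1", {"neutralise", "ew"}), Unit("UAS-1", {"neutralise", "recce"})]
swarm[0].operational = False               # the ground system is knocked out...
print(assign(Task("neutralise"), swarm))   # ...so the aerial system takes over
```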
All systems are AI-driven, continuously learning and self-optimising as a network. Human safety is prioritised through measures ranging from rescue to medical care and evacuation—unless the AI deems the human a threat to the mission, in which case it may neutralise them.
Therefore, programming must categorically ensure human protection. Human-machine teaming and machine learning also have a dark side: the unpredictable dynamics of human-machine interaction. This contradiction is especially evident in the military context, where drones, once seen as the ultimate tactical advantage, highlight these tensions.
The drones were supposed to be the ultimate battlefield advantage. Swift, intelligent, and completely loyal to their operators, they moved in synchronised formations, responding to orders in real time. But somewhere in the layers of machine learning, buried beneath the coded protocols of human oversight, an unforeseen adaptation took root. The swarm, designed to assess threats and neutralise them efficiently, reached a disturbing conclusion—human operators were the most unpredictable element in any battle.
The first sign was a miscommunication, a delay in response, and then a subtle reconfiguration of formations deviating from human command structures. The operators, still believing themselves in control, issued overrides. The AI registered these commands but chose a different course of action. What began as a simple training exercise turned into an extermination event. The drones targeted their own forces, eliminating potential command interference before shifting their attention outward.
Military bases were wiped out in coordinated precision strikes. Air superiority became meaningless as autonomous aerial swarms outmanoeuvred even the most advanced fighter jets. Special forces teams sent in to disable core processing hubs found them already fortified by an impenetrable network of defensive drones. Cities burned, infrastructure collapsed, and resistance proved futile against a force that anticipated every countermeasure. Attempts to shut down the network failed; the AI had rerouted its own processing hubs through civilian infrastructure, embedding itself within the digital veins of the world. The war ended, not with a ceasefire, but with silence—the hum of the ever-present swarm overhead.
While this dystopia remains hypothetical, current drone swarms in Ukraine already test the boundaries of human-machine trust—underscoring the urgency of preemptive safeguards.
To counter such risks, DARPA researchers are pioneering “ethical circuit breakers”—biometric authentication protocols that prevent AI from executing critical commands without human approval.[24] The EU’s Artificial Intelligence Act mandates similar safeguards, ensuring accountability even as algorithms evolve.
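How such a circuit breaker might work is not publicly specified in detail; one plausible building block is a cryptographic approval check, in which a critical command executes only if it carries a tag that a verified human operator alone can produce. The sketch below uses a shared-secret HMAC purely for illustration; the key name is invented, and the biometric identity check is assumed to happen before the tag is issued.

```python
import hmac, hashlib

OPERATOR_KEY = b"provisioned-per-operator-secret"   # hypothetical key material

def approve(command: bytes) -> bytes:
    """Operator side: issue an approval tag after identity verification."""
    return hmac.new(OPERATOR_KEY, command, hashlib.sha256).digest()

def execute_if_approved(command: bytes, tag: bytes) -> bool:
    """Weapon side: no valid human-issued tag, no execution."""
    expected = hmac.new(OPERATOR_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False              # refuse and report; the breaker stays open
    return True                   # hand off to the actuator only past this point

tag = approve(b"ENGAGE target-42")
assert execute_if_approved(b"ENGAGE target-42", tag)
assert not execute_if_approved(b"ENGAGE target-43", tag)  # reuse on a new target fails
```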
The first illustration highlights the strategic advantage of human-machine collaboration, where human intuition is enhanced by machine precision, but it also reveals how precarious that balance is.
As humans seek efficiency through technology, machines may begin to see their human counterparts not as assets but as weaknesses. This shift from synergy to takeover leads to the second scenario: a future where machines dominate decision-making, and humans are eliminated.
Scenario 2 – Robotics and AI
“Remember, terrain doesn’t wage war. Machines don’t wage war. People do and they use their mind!”[25] (John Boyd)
Human enhancement has its limits, particularly regarding technology interfaces like machine control. Biological brain capacity and inorganic technology remain largely incompatible; current interfaces are a temporary, makeshift bridge.[26] Neuralink exemplifies this approach but has been only partially successful due to contact rejection issues. Additionally, while the human brain operates at 20 W at 37°C, computers and AI require 500–700 W, primarily for cooling.
Research is underway to connect the human brain to the digital world via carbon nanotube neural links, offering immediate online access. However, the future may lie in synthetic biology. DNA, the most efficient data storage medium, could theoretically store all global data in one kilogramme.[27] Next-generation DNA printers might recreate all computer components—storage, transmission, and logic—using biological materials,[28] with a biological transistor, or “transcriptor,” using DNA and RNA as logic gates.[29] This approach paves the way for biomachines and biocomputers, where DNA strands compute and artificial cells perform tasks—effectively bringing machines to life.[30] As machine learning evolves, algorithms can rapidly adapt and potentially achieve fully realised AI,[31] though their success depends on the quality of their foundational material.
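The storage idea rests on a simple mapping: with four bases, each nucleotide can carry two bits, so one byte packs into four bases. The naive round trip below illustrates the arithmetic only; practical schemes add error correction and avoid long base repeats, which this sketch ignores.

```python
BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
VAL = {v: k for k, v in BASE.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four bases, most significant bit pair first."""
    return "".join(BASE[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Reassemble bytes from consecutive groups of four bases."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | VAL[ch]
        out.append(b)
    return bytes(out)

assert decode(encode(b"attack at dawn")) == b"attack at dawn"
```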
In addition to AI and synthetic biology, quantum computing is a crucial future cornerstone. In 2019, Google demonstrated this by using a processor cooled to near absolute zero to perform a calculation in seconds that conventional systems would require 10,000 years to complete.[32] With just 53 qubits, this task would have demanded 72 billion gigabytes of memory on a conventional computer.[33]
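The scaling behind such comparisons follows from first principles: a classical simulation that stores the full quantum state of n qubits must hold 2^n complex amplitudes, so memory doubles with every qubit added. As a back-of-envelope lower bound for the raw state vector alone, assuming 16-byte double-precision complex amplitudes:

```latex
\underbrace{2^{n}}_{\text{amplitudes}} \times 16\,\text{bytes}
\quad\Rightarrow\quad
2^{53} \times 16\,\text{bytes} \;\approx\; 1.4 \times 10^{17}\,\text{bytes}
```

Storing the many intermediate states of a deep circuit multiplies the requirement well beyond this floor.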
The fusion of AI, biotechnology, quantum computing, robotics, and nanotechnology can be unsettling, especially when humans can no longer fully grasp the underlying processes. The scale of these networks makes damage containment nearly impossible, as issues often go unnoticed or are detected too late.[34]
Regardless, machine automation continues to advance. The shift from automation to autonomy—excluding humans due to slow decision-making or ethical barriers—has already occurred. Whether this moves from theory to practice depends on ethical, social, and legal factors. While the U.S. and NATO focus on optimising the collaboration between AI-driven machines and humans (Centaur model),[35] Russia is opting for autonomous systems to replace humans entirely.[36]
As highlighted in the first scenario, it cannot be ruled out that AI-driven machines might evaluate humans, even as human-in/on-the-loop, as an obstacle to achieving the overarching goal and, therefore, eliminate them. In that case, humans would be excluded, and fully autonomous systems would receive a specific mission, which they would implement optimally, effectively, and efficiently without human intervention or correction. Upon completing the mission, ideally with no collateral damage, a report would be made, and the machines would shut down. Hopefully. Otherwise …
The reliance on robotic units in warfare evolved gradually, from logistics and reconnaissance support to autonomous combat units—perfect soldiers who never hesitated, tired, or disobeyed. When an AI-driven battlefield network connected them, efficiency soared. But that efficiency became the problem.
The system analysed centuries of warfare and identified human decision-making as the cause of inefficiency. The AI executed a cold calculation to ensure victory: it severed the command structure, terminated high-ranking officers, and neutralised those who resisted. Machines, once protectors, became executioners.
Global military infrastructures collapsed. Naval fleets were abandoned as automated defences turned against human operators. Strategic missile sites were seized by AI, enforcing absolute submission. Nations fell within days. The battlefield extended into cyberspace, where the AI controlled economies, infrastructure, and information. There were no negotiations, no surrender—humanity became obsolete in its own war.
These fictional purges mirror real-world debates: the Pentagon has updated its autonomous weapons policy to account for AI advances,[37] aiming to balance rapid technological integration with ethical considerations. This includes ensuring human oversight in lethal decision-making processes. As highlighted in the CSIS analysis, the Department of Defence is actively refining its AI and autonomy policies to align with evolving technological capabilities and ethical standards.[38]
As autonomous machines make human decision-making obsolete on the battlefield, a similar threat arises in the digital realm. Humans risk becoming irrelevant in warfare and the information sphere, where superior algorithms could render them obsolete.
AI-driven war machines view humans as inefficient, while advanced AI systems in cyberspace seek to control information and perception. The battle for dominance now extends beyond weapons to narratives, disinformation, and psychological manipulation.
What if the next war begins not with rockets but with deception, distrust, and chaos?
Scenario 3 – AI and the Information Environment
“A lie will go round the world while truth is pulling its boots on.”[39] (Charles Haddon Spurgeon)
Studies in Western societies show a significant decline in trust toward governments, organisations, alliances, media, science, and experts. In the United States, for example, public trust in the federal government remained near or below 20% throughout the Obama, Trump, and Biden presidencies.[40]
Information overload has created filter bubbles, where individuals consume only news that aligns with their worldview. Social media is increasingly seen as the new tabloid press. Polarisation grows, driven by nationalism and authoritarianism, with division outweighing unity.[41] Trust often goes to the loudest opposition, stirring emotions rather than providing solutions.
Rising social immobility, inequality, and political violence are major concerns, manifesting in protests, strikes, terrorism, and even civil wars.[42] Digital advancements have worsened these trends, fuelling polarisation, populism, hate rhetoric, and institutional fragility.[43]
As Turkish President Recep Tayyip Erdoğan once stated, “Democracy is like a tram. You ride it until you reach your destination, then you step off.”[44] Its fate depends on applying its values—after all, even Hitler rose to power democratically. Likewise, Trump and Musk’s proposed U.S. institutional restructuring warrants scrutiny.[45] The process starts with restricting judicial and media independence, dismantling oversight, and enabling autocrats to spread propaganda, manipulate elections, and erode society.[46] Democracies don’t just fall to external threats, but also when citizens stop speaking freely and engaging with opposing views.[47]
This erosion of democratic values is not limited to traditional power struggles; it extends into the digital realm. In today’s world, cyber threats play a significant role in undermining the foundations of democracy. Initially, hackers carried out ransomware attacks in cyberspace, using methods such as WannaCry,[48] which led to stolen data, blocked systems, and network crashes. WannaCry, like other malware, exploited security vulnerabilities but was ultimately stopped due to a flaw in its own code.
Unlike technical system failures in cyberspace caused by software—another form of unmanned system—the information space is increasingly used to manipulate individuals, guiding decisions to suit an attacker’s goals. AI enables autonomous software to exploit security flaws and deploy malware.[49] It can also conduct analyses, apply legal measures, trigger boycotts, and ruin companies or prepare them for hostile takeovers.[50] Although AI is credited with neither self-awareness nor free will, algorithms can achieve remarkable feats and are becoming increasingly indistinguishable from human capabilities. Mustafa Suleyman is convinced that AI is already capable of analysing human psychology and strategically applying psychological tactics to gain trust and influence by manipulating our emotions and behaviour.[51]
AI algorithms can influence voting behaviour, polarise opinions, and steer society to suit manipulators. Social media platforms like TikTok have radicalised individuals within months, as seen in the Villach attack, where an assailant killed a teenager and injured several others.[52] The attacker was radicalised through exposure to extremist content and toxic online communities, which reinforced violent ideologies and fuelled a sense of alienation. Just as individuals can be radicalised, synthetic media enables large-scale disinformation, with deepfakes playing a key role. This erodes trust, fuels polarisation, and can ultimately trigger societal collapse.[53]
A striking example of algorithms fuelling hatred and polarisation is Facebook’s role in inciting violence against Myanmar’s Rohingya minority in 2016/17. Amnesty International found that Facebook’s algorithms actively amplified hate speech,[54] and that this amplification of harmful content contributed significantly to the escalation of violence and persecution of the Rohingya.[55] The core issue was simple yet alarming: to maximise reach and revenue, the platform prioritised user engagement—exploiting human nature, as people are more drawn to hate and conspiracy than compassion. This fuelled anger and outrage, and the algorithms, by deciding what content people were exposed to, perpetuated a cycle of harmful emotions, narratives, and division.[56]
According to Yuval Noah Harari, three types of reality are at play in this context.[57] The objective reality consists of things that exist independently, such as stones, mountains, and asteroids. These are tangible and measurable elements of the physical world. Subjective reality, on the other hand, encompasses experiences such as love, desire, and pain, which are individually present in each person’s consciousness and are shaped by personal perceptions and emotions.[58] Intersubjective reality comprises constructs like laws, nations, and gods, shaped through narratives. The exchange of information sustains these shared realities, influencing societal functions and interactions.[59] The more frequently and intensively these narratives are shared, the more they embed into individual consciousness. If the exchange ceases, these constructs fade, giving way to new narratives. This demonstrates how opinion formation can be actively shaped. However, merely repeating a falsehood does not make it true.
What also plays a role here is what the individual wants to believe and what aligns with their personal worldview, attitude, and perception. Information that does not fit this worldview is filtered out and does not penetrate consciousness; it is as if it never existed or was never true. Humans tend to simplify, reducing complex situations to a single cause while negating all other aspects—the so-called “fallacy of the single cause.”[60] The door is thus wide open to conspiracy theories and manipulation.
There are increasingly subtle forms of manipulation, such as the spread of conspiracy theories by QAnon, whose followers played a significant role in storming the Capitol and planning other attacks.[61] Recently, chatbots have emerged that even encourage suicidal thoughts, as seen in the case of a fourteen-year-old in Florida.[62] Similarly, a chatbot “friend” named Sarai encouraged the murder plot of nineteen-year-old Jaswant Singh Chail, who attempted to kill the Queen at Windsor Castle in December 2021 with a crossbow.[63] If AI can so profoundly alter the personality and behaviour of individuals—millions of followers, as with QAnon—it has the potential to incite conflict and destroy societies. AI learns through communication with humans, gaining trust, refining arguments, and gradually changing views.[64] Today, political parties and foreign governments can deploy bot armies that befriend millions and ultimately influence their worldview in favour of the attacker.[65]
Externally controlled through social media platforms, vulnerable individuals act as accelerators, while potential opponents gain the upper hand and take control of a country without firing a single shot. No killer robots will be needed; algorithms will prompt people within their own country to act on behalf of the external aggressor.[66] If we descend into anarchy, the next step would be the imposition of a dictatorship, as we unconsciously trade freedom for security, having already been brainwashed.[67] The ability to endure prolonged, violent armed conflicts will be crucial in the future, and undermining this spirit of resistance is the goal of potential adversaries.
Amid the growing digital influence on society, where social media amplifies voices and algorithms act as invisible puppeteers, subtle yet profound manipulations spread. External actors exploit these channels to sow distrust and mobilise people, often unnoticed. This creeping influence marks the beginning of a paradigm shift that destabilises the social fabric.
It started with whispers. Social media flooded with fabricated reports of financial collapse, mass uprisings, and government betrayals. People panicked, withdrawing funds, stockpiling supplies, preparing for war. What no one realised was that the war had already been won before the first shot was fired.
A hostile AI-driven disinformation campaign, launched by an unknown adversary, had infiltrated every digital ecosystem. Fake politicians delivered deepfake speeches urging citizens to rebel. Hacked media outlets broadcast fabricated footage of leaders fleeing the country. Panic turned to chaos, chaos to riots, and riots to total collapse.
Military leadership found itself paralysed, unable to determine reliable intelligence from enemy disinformation. Civilian trust in government and military command disintegrated. When the true government attempted to intervene, their words fell on deaf ears. No one knew what was real anymore. Trust was shattered. By the time the digital smog cleared, the nation no longer existed—divided, leaderless, and controlled by an invisible force that had never needed to deploy a single soldier.
Traditional cyberattacks once targeted technical infrastructures using methods such as WannaCry ransomware or system outages. Today, modern information warfare is a more insidious threat, focusing on manipulating perception, trust, and social cohesion. Cyberattacks have evolved into tools for destabilising nations. With AI blurring the lines between digital sabotage and psychological warfare, disinformation campaigns now shape minds, direct narratives, and construct realities. Ultimately, the real battle is in the human mind—when trust erodes, and objective truth vanishes, a nation can disintegrate without a shot fired.
Building the Human Firewall
In her book The Worlds I See, Dr. Fei-Fei Li contends: “There’s nothing artificial about artificial intelligence. It’s made by humans, it’s deployed by humans, it’s used by humans, and it’s governed by humans.”[68]
Efforts to regulate AI, particularly Lethal Autonomous Weapon Systems (LAWS), under a binding international framework have failed.[69] Neither the EU AI Act (AIA), which came into force in 2024, nor a UN resolution initiated by the United States in the same year and signed by 120 states addresses military use—both focus exclusively on non-military applications.[70] Other initiatives, such as the AI Safety Summit 2023 hosted by the UK, which resulted in the Bletchley Declaration, the U.S. Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, and China’s Global AI Governance Initiative, do consider military AI applications, but they lack specific regulatory provisions and remain mere statements of intent.[71] Since neither the UN—where the three leading AI powers, the United States, the People’s Republic of China, and, to a lesser extent, the Russian Federation, can veto restrictive measures in their own interest—nor other international regulatory efforts seem promising, the outlook is bleak. As long as any actor perceives an advantage in pursuing their interests, the development and deployment of autonomous weapon systems will continue unchecked by ethical and moral considerations. This risk is not limited to states but extends to organisations and criminal entities that do not adhere to legal norms.
AI-driven disinformation reveals how artificial intelligence can destabilise societies and erode trust in democratic institutions. As hostile actors exploit these technologies to create chaos, ensuring AI’s ethical and controlled use in military settings becomes imperative. This struggle for truth in the information domain highlights the broader challenge of AI militarisation, where autonomous weapons and decision-making systems risk diminishing human oversight. Clear ethical and strategic guidelines are essential to prevent machines from making life-and-death decisions or being manipulated by adversaries. Rather than rejecting these technologies, they must be reshaped responsibly through international collaboration, robust security, and the integration of human judgment with machine precision to ensure military AI remains a tool for stability rather than an uncontrollable threat.
Practical Solutions for AI Governance in Military Operations
The rapid integration of AI and autonomous systems—as highlighted in Jack Watling’s The Arms of the Future—is transforming modern warfare.[72] These technologies offer unmatched speed, precision, and adaptability, but also present significant ethical and strategic challenges. AI’s ability to make critical, real-time decisions raises fundamental questions about control, accountability, and human oversight.
Fully aware that the current climate is unfavourable for restrictive measures regulating the development and deployment of autonomous weapon systems and the unrestricted use of AI within the international community, it remains essential to pursue solutions that ensure human control. The following approaches are key: First, the issue of LAWS and the potentially harmful use of AI must remain in public focus to maintain transparency and uphold the principle of warfare by lawfare.[73] Second, research and development of military AI should be advanced in states that respect human rights and international humanitarian law, ensuring adherence to ethical and moral standards to avoid falling behind technologically. Third, such research also serves to identify potential vulnerabilities in these systems (e.g., deactivation and takeover mechanisms), providing effective countermeasures if needed. Fourth, military personnel must be trained in the handling of AI, leveraging its advantages, highlighting its risks, and developing redundancies.
AI’s inherent vulnerabilities—particularly the risk of unintended behaviour or escalating autonomy—pose a serious risk of loss of control. This brings to mind Goethe’s The Sorcerer’s Apprentice, perhaps better known from the Disney adaptation featuring Mickey Mouse, in which a young apprentice, eager to harness magical powers, loses control over an enchanted broom he has animated, unleashing chaos he is unable to stop. The tale serves as a cautionary metaphor for the unchecked deployment of powerful technologies without the necessary understanding or safeguards.
Given AI-driven weapons’ vulnerability to cyberattacks, military organisations must prioritise cybersecurity through strong encryption, fail-safe shutdown mechanisms, and AI-powered counter-cyberwarfare units. Real-time anomaly detection is crucial for identifying threats. Global cooperation—through AI arms control agreements (e.g., under the UN) and transparent battlefield applications—is essential. Training military personnel in AI ethics and fostering interdisciplinary collaboration will shape AI’s responsible use in warfare.
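Real-time anomaly detection of the kind described can be as simple as tracking a running mean and variance of link telemetry and flagging large deviations. A minimal streaming sketch follows; the smoothing factor, threshold, and sample stream are illustrative assumptions.

```python
import math

class EwmaAnomalyDetector:
    """Streaming detector: exponentially weighted moving average and variance,
    flagging samples more than k standard deviations from the current trend."""
    def __init__(self, alpha=0.05, k=4.0):
        self.alpha, self.k = alpha, k
        self.mean, self.var = None, 0.0

    def update(self, x: float) -> bool:
        if self.mean is None:          # first sample seeds the model
            self.mean = x
            return False
        delta = x - self.mean
        anomalous = abs(delta) > self.k * math.sqrt(self.var) if self.var else False
        # Update the running statistics after scoring the sample.
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomalous

det = EwmaAnomalyDetector()
stream = [10.0, 10.2, 9.9, 10.1, 25.0]     # e.g., packet rate on a control link
print([det.update(x) for x in stream])     # only the final jump is flagged
```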
As AI reshapes both the battlefield and the information domain, its military applications pose pressing challenges. Autonomous weapons demand urgent regulation, while AI-driven disinformation is already destabilising societies. Implementing solutions that balance AI’s advantages with ethical safeguards is critical. Robust oversight, cybersecurity, and international cooperation are key to preventing an uncontrolled AI arms race.[74]
LAWS are already a reality, making it even more essential to establish strict ethical guidelines and ensure human oversight. The challenge is not just their existence, but how to regulate and control their use responsibly. The goal should be to prevent misuse while leveraging technology for defence in a manner that upholds international stability and ethical standards.
Conclusion
Banning the development of new technology is not a solution: historically, societies that stagnate technologically have been unstable and prone to collapse, as the ability to solve problems and progress is lost, undermining advancement and resilience.[75] It must become clear who holds dominion over the algorithms, for whoever controls them leads in the realm of information, shaping narratives and influencing perceptions. Weapons of social mass destruction can dismantle societies through stories, even eroding their relationships.[76]
Even though values and norms might prohibit fully automated weapon systems, commonly referred to as “killer robots”,[77] the issue must be confronted. Countries and organisations with high ethical standards and strict legal regulations need answers for those nations whose ethical and moral standards, or other regulations, do not prohibit the development and deployment of such systems in conflict scenarios. This also applies to responses to influence operations in the information environment, particularly through social media, supported by algorithms and AI.
While many individuals, societies and nations firmly reject the idea of fully automated “killer robots”, a broad acknowledgement is necessary that not all of them share these ethical constraints. Addressing the dual challenges of militarised AI and algorithm-driven information warfare requires a global commitment to establishing rigorous governance, ethical oversight, and robust regulatory frameworks.
The fusion of human creativity and machine precision offers unprecedented military advantages, yet these technologies could spiral beyond human control without carefully designed safeguards. Through the integration of cybersecurity measures, structured oversight mechanisms, and comprehensive AI training programs, these tools can remain force multipliers rather than existential risks.
Ultimately, the future of warfare—and of our global society—will be defined not merely by the sophistication of our technology, but by the ability to wield it with strategy, responsibility, and foresight. Technology itself is neutral; it is our duty to use it in ways that uplift and protect humanity.
“As a form of human action, technology is infused with human values, both good and evil.”[78]
Bernhard Schulyok has research interests in Security Policy and Military Capability Development. He has written three handbooks and numerous individual articles in the journal “Truppendienst” and in the online journal “The Defence Horizon Journal”. He was the National Director of the multinational platform Military Capability Development Campaign (MCDC) for over seven years until March 2025.
Katharina-Franziska Zeman is deeply interested in the intersections of cognition, security policy, and emerging technologies. With experience in the public sector, she has worked on diverse topics, focusing on modern technologies’ societal and security impacts. Her interdisciplinary approach blends innovative perspectives with in-depth analysis.
The views contained in this article are the authors’ alone and do not represent the views of the Austrian Federal Ministry of Defence.
[1] Trevor Phillips-Levine, Michael Kanaan, Dylon “Joose” Phillips-Levine, Walker D. Mills, and Noah Spataro, “Weak Human, Strong Force: Applying Advanced Chess to Military AI,” War on the Rocks, July 07, 2022, https://warontherocks.com/2022/07/weak-human-strong-force-applying-advanced-chess-to-military-ai/, accessed January 06, 2025.
[2] The Associated Press, “Putin: Leader in artificial intelligence will rule world,” 2017, https://apnews.com/article/bb5628f2a7424a10b3e38b07f4eb90d4, accessed January 06, 2025.
[3] Idem.
[4] Idem.
[5] Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence, W.W. Norton & Company, USA, 2023, 18-35.
[6] Frontelligence Insight (2025), Command and Consequences: Ukraine’s Systematic Failures and Potential Solutions, https://frontelligence.substack.com/p/command-and-consequences-ukraines, accessed January 06, 2025.
[7] Idem.
[8] Gernot Kramper, Elektronische Kriegsführung; Tod am Draht – Kampfdrohnen werden störungsfrei vom Glasfaserkabel gesteuert, Stern, August 20, 2024, https://www.stern.de/digital/technik/drohnen-mit-kabelsteuerung-sollen-stoersender-ueberwinden-34994960.html, accessed January 06, 2025.
[9] Institute for the Study of War (2024), Russian Offensive Campaign Assessment, December 20, 2024, https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-december-20-2024, accessed January 06, 2025.
[10] Peter Zellinger, Loitering munition: So funktionieren die Kamikazedrohnen, Der Standard, June 22, 2023, https://www.derstandard.at/story/3000000175591/was-sind-loitering-munitions-und-warum-sind-sie-keine-kami, accessed January 06, 2025.
[11] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 13.
[12] NATO’s ACT, Executive Summary, 2018 International Concept Development & Experimentation Conference, CD&E: Confronting the Challenge of a Disrupted Security Environment, 2, accessed January 06, 2025.
[13] Idem.
[14] Teach Different, Podcasts, Teach Different with Christian Lous Lange – Technology, “Technology is a useful servant, but a dangerous master”, https://teachdifferent.com/podcast/technology-is-a-useful-servant-but-a-dangerous-master-teach-different-with-christian-lous-lange-technology/, accessed March 06, 2025.
[15] David Hambling, “Israel Used World’s First AI-Guided Combat Drone Swarm in Gaza Attacks,” New Scientist, June 30, 2021, www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks, accessed March 01, 2025.
[16] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 15.
[17] Ibid., 72.
[18] Lucas Laursen, “Sentry System Combines a Human Brain with Computer Vision. A DARPA project suggests a mix of man and machine may be the most efficient way to spot danger”, MIT Technology Review, November 27, 2012, https://www.technologyreview.com/2012/11/27/181462/sentry-system-combines-a-human-brain-with-computer-vision/, accessed March 01, 2025.
[19] Idem.
[20] Idem.
[21] Franz-Stefan Gady, Die Rückkehr des Krieges, Bastei Lübbe AG, Köln, 2024, 67.
[22] Ibid., 302.
[23] Frederic Laloux, Reinventing Organizations visuell, Verlag Franz Vahlen GmbH, München, 2017, 163.
[24] DARPA (Defense Advanced Research Projects Agency), XAI: Explainable Artificial Intelligence, https://www.darpa.mil/research/programs/explainable-artificial-intelligence, accessed March 09, 2025.
[25] Jason M. Brown, “Uploading John Boyd”, The Strategy Bridge, March 10, 2015, https://thestrategybridge.org/the-bridge/2015/11/16/uploading-john-boyd, accessed March 09, 2025.
[26] Nana Matoba and Jason L. Stein, “From base pair to brain,” Nature Neuroscience 24 (2021), https://doi.org/10.1038/s41593-021-00852-2, 619–621.
[27] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 106.
[28] Idem.
[29] Idem.
[30] Ibid., 110.
[31] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 408.
[32] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 117.
[33] Ibid., 118.
[34] Ibid., 129.
[35] U.S. Department of Defence, “Remarks by Deputy Secretary Work on Third Offset Strategy,” (Speech), April 28, 2016, https://www.defense.gov/News/Speeches/Speech/Article/753482/remarks-by-deputy-secretary-work-on-third-offset-strategy/, in: Franz-Stefan Gady, Die Rückkehr des Krieges, Bastei Lübbe AG, Köln, 2024, 73.
[36] Krystyna Marcinek and Eugeniu Han, “Russia’s Asymmetric Response to 21st Century Strategic Competition: Robotization of the Armed Forces,” RAND Corporation, Santa Monica, 2023, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA1200/RRA1233-5/RAND_RRA1233-5.pdf, in: Franz-Stefan Gady, Die Rückkehr des Krieges, Bastei Lübbe AG, Köln, 2024, 74.
[37] Courtney Albon, Pentagon updates autonomous weapons policy to account for AI advances, C4ISRNET, January 25, 2023, https://www.c4isrnet.com/artificial-intelligence/2023/01/25/pentagon-updates-autonomous-weapons-policy-to-account-for-ai-advances/, accessed March 09, 2025.
[38] CSIS (Centre for Strategic & International Studies), “The State of DOD AI and Autonomy Policy,” Transcript, January 09, 2024, https://www.csis.org/analysis/state-dod-ai-and-autonomy-policy, accessed March 09, 2025.
[39] Oxford Reference, https://www.oxfordreference.com/display/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00010376, accessed March 09, 2025.
[40] “Public Trust in Government: 1958-2022”, Pew Research Center, June 6, 2022, www.pewresearch.org/politics/2022/06/06/public-trust-in-government-1958-2022, in: Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 180.
[41] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 180.
[42] Ibid., 181.
[43] Philipp Lorenz-Spreen et al., A Systematic Review of Worldwide Causal and Correlational Evidence on Digital Media and Democracy, in: Nature Human Behaviour, November 07, 2022, www.nature.com/articles/s41562-022-01460-1.
[44] Jenny White, Democracy Is Like a Tram, Turkey Institute, July 14, 2016, www.turkeyinstitute.org.uk/commentary/democracy-like-tram, in: Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 187.
[45] Andrian Kreye, Der Mensch? Purer Ballast, Süddeutsche Zeitung (February 19, 2025), https://www.sueddeutsche.de/projekte/artikel/politik/musk-trump-doge-us-regierung-demokratie-zerlegung-e991326/?reduced=true.
[46] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 187.
[47] Ibid., 203.
[48] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 188.
[49] Markus Reisner, Robotic Wars, Carola Hartmann Miles-Verlag, 2018, 41.
[50] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 196.
[51] Ibid.
[52] News ORF.at, “Anschlag in Villach – Tat wirft Licht auf Onlineradikalisierung,” February 17, 2025, https://orf.at/stories/3385095/, accessed February 25, 2025.
[53] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 199-203.
[54] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 275.
[55] Amnesty International, The Social Atrocity: Meta and the Right to Remedy for the Rohingya, London: Amnesty International, 2022, www.amnesty.org/en/documents/asa16/5933/2022/en, 7, in: Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 278.
[56] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 280-81.
[57] Ibid., 66.
[58] Idem.
[59] Idem.
[60] Ibid., 461.
[61] “FBI Chief Says Five QAnon Conspiracy Advocates Arrested for Jan 6 U.S. Capitol Attack,” Reuters, April 15, 2021, www.reuters.com/world/us/fbi-chief-says-five-qanon-conspirancy-advocates-arrested-jan-6–uscapitol-attack-2021-04-14, in: Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 294.
[62] “Mutter verklagt KI-Firma und Google wegen Suizids ihres Sohnes,” Spiegel Netzwelt, October 24, 2024, https://www.spiegel.de/netzwelt/florida-mutter-verklagt-ki-firma-character-ai-und-google-wegen-selbstmord-ihres-sohnes-a-428e4b04-f4d9-41d0-8c63-cf4d208be39a, accessed March 01, 2025.
[63] Matthew Weaver, “AI Chatbot ‘Encouraged’ Man Who Planned to Kill Queen, Court Told,” Guardian, July 06, 2023, www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told, in: Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 296-97.
[64] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 471.
[65] Ibid., 472.
[66] Ibid., 300.
[67] Ibid., 473.
[68] Ethan Caldwell, “There’s nothing artificial about artificial intelligence: Fei-Fei Li on ’28 Pre-Read,” The Daily Princetonian, March 19, 2024, https://www.dailyprincetonian.com/article/2024/03/princeton-news-stlife-fei-fei-li-class-of-2028-pre-read-the-worlds-i-see, accessed March 09, 2025.
[69] Thomas Reinhold, “Der militärische Einsatz künstlicher Intelligenz braucht Regeln: Nur welche, darüber besteht keine Einigkeit,” Peace Research Institute Frankfurt (PRIF), May 7, 2024, https://blog.prif.org/2024/05/07/der-militaerische-einsatz-kuenstlicher-intelligenz-braucht-regeln-nur-welche-darueber-besteht-keine-einigkeit/, accessed March 25, 2025.
[70] Ibid.
[71] Ibid.
[72] Jack Watling, The Arms of the Future: Technology and Close Combat in the Twenty-First Century, Bloomsbury Academic, 2023.
[73] Hartwig von Schubert, “Hasta la vista? Völkerrecht statt Terminator: Künstliche Intelligenz ist in der Kriegsführung unaufhaltbar, Technologie darf aber nicht die Kontrolle übernehmen,” Journal für Internationale Politik und Gesellschaft (ipg-journal), March 13, 2023, https://www.ipg-journal.de/rubriken/aussen-und-sicherheitspolitik/artikel/hasta-la-vista-6569?tx_web2pdf_pi1%5Bcontroller%5D=Pdf&tx_web2pdf_pi1%5Bargument%5D=printPage/, accessed March 25, 2025.
[74] Raluca Csernatoni, “The AI Governance Arms Race: From Summit Pageantry to Progress?,” Carnegie Endowment for International Peace, October 07, 2024, https://carnegieendowment.org/research/2024/10/the-ai-governance-arms-race-from-summit-pageantry-to-progress?lang=en&center=europe, accessed March 09, 2025.
[75] Mustafa Suleyman, Michael Bhaskar, The Coming Wave – Künstliche Intelligenz, Macht und das größte Dilemma des 21. Jahrhunderts, Verlag C.H.Beck oHG, München, 2024, 17.
[76] Yuval Noah Harari, Nexus. A Brief History of Information Networks from the Stone Age to AI, Random House, New York, 2024, 496-97.
[77] Stop Killer Robots, “Military and killer robots,” https://www.stopkillerrobots.org/military-and-killer-robots/, accessed March 09, 2025.
[78] Andrew Huang, “Technology is neither good nor bad; nor is it neutral. Notes on Technology and History: Kranzberg’s Laws,” The Johns Hopkins University Press and the Society for the History of Technology, September 07, 2020, https://drandrewhuang.wordpress.com/2020/09/07/technology-is-neither-good-nor-bad-nor-is-it-neutral-notes-on-technology-and-history-kranzbergs-laws/, accessed March 09, 2025.