Source: shutterstock.com/PHOTOCREO Michal Bednarek

The Integration Of AI-Empowered Autonomous Weapon Systems In European Defence

Abstract: As AI technologies advance, they alter military strategies and raise critical issues, including legal adherence to international humanitarian law and ethical concerns such as accountability and the potential for dehumanised warfare. The integration of AI-enhanced Autonomous Weapon Systems (AWS) presents both strategic opportunities and profound legal and ethical challenges for European defence.

Problem statement: What are the strategic, legal, and ethical implications of integrating AI-enhanced Autonomous Weapon Systems (AWS) in European defence strategies?

So what?: European policymakers, defence institutions, and legal experts must collaborate to create unified, enforceable standards for AWS development and deployment, striking a balance between innovation and accountability, as well as compliance with international humanitarian law.


The Current Status of AWS in Europe

Autonomous Weapon Systems (AWS) enhanced with Artificial Intelligence mark an important development in contemporary military technologies, allowing for partially independent operations across surveillance, targeting, and engagement due to AI’s ability to analyse vast amounts of data, make decisions, and execute actions with speed and precision that may go beyond human capabilities.[1] A notable early example is the 2020 Nagorno-Karabakh conflict, which saw the extensive use of autonomous and semi-autonomous systems, particularly loitering munitions, to conduct targeted strikes with limited human oversight. A more advanced and widespread application has emerged in the ongoing war in Ukraine, often described as the “first drone war.”[2] While AI integration remains partial, it is increasingly being incorporated into various systems to address specific operational challenges rather than enabling full autonomy.[3] These systems include diverse technologies, such as drone swarms capable of synchronised operations and advanced missile defence systems that autonomously detect and engage threats based on predefined criteria.[4]

While AI integration remains partial, it is increasingly being incorporated into various systems to address specific operational challenges rather than enabling full autonomy.

Some EU and NATO member states are actively developing and deploying AI-enhanced AWS, reflecting a global trend toward increasing autonomy in defence technologies.[5] France and the United Kingdom are among the frontrunners in Europe. France released its national AI strategy in 2018 and has since launched several initiatives, including ARTEMIS.IA and a national AI training programme, while also reaffirming its ambition in 2024 through substantial new funding and strategic recommendations from the French AI Commission. The UK began earlier, with foundational work outlined in the 2017 Digital Strategy. It further advanced through its 2022 Defence Artificial Intelligence Strategy, which aims to transform its armed forces into an “AI-ready” organisation. Both countries have embedded AI in their broader defence transformations through ethical frameworks, infrastructure investments, and partnerships across government, industry, and academia.[6] These developments reflect internal technological ambition and the strategic necessity of keeping pace with major powers such as Russia and the People’s Republic of China (PRC).[7]

Strategic Implications

The integration of AI-enhanced AWS is likely to influence the European strategic environment by altering how NATO and EU member states conduct surveillance, targeting, and operational planning.[8] These systems can process large volumes of sensor data and assist in identifying threats more quickly and consistently than manual methods.[9] In practice, AI is already being used in ISR platforms and loitering munitions to support target recognition and navigation in contested environments, as seen in Ukraine. Rather than replacing human decision-making, such systems tend to function as force multipliers, improving situational awareness and response coordination under time constraints. As such, these technologies can enhance strategic planning and defence capabilities, providing an edge in warfare.

However, these strategic advantages come with operational and ethical vulnerabilities. AI systems are susceptible to issues such as data poisoning, spoofing, and algorithmic bias, which adversaries may exploit to degrade system performance or manipulate outcomes.[10] While human operators retain the ability to intervene and slow down decision cycles, the integration of AI can reduce the window for reflection and oversight, particularly in high-tempo scenarios where systems are expected to operate at machine speed. That said, the risk of miscalculation or unintended engagement is not unique to AI. In traditional settings, staff officers may fail to fully assess situations or present incomplete options under pressure, leading to flawed decisions. AI does not create this risk, but it may magnify it by accelerating the pace at which decisions are made and by adding complexity to the information being processed, especially under tight constraints.[11]

AI systems are susceptible to issues such as data poisoning, spoofing, and algorithmic bias, which adversaries may exploit to degrade system performance or manipulate outcomes.

Legal Challenges

The legal environment for AI-enhanced AWS is complex, shaped by international frameworks that aim to regulate such systems’ use in warfare. Central among these is the United Nations Convention on Certain Conventional Weapons (CCW), which seeks to restrict technologies deemed to cause undue harm to combatants or civilians.[12] The CCW has established the Group of Governmental Experts (GGE) to specifically address Lethal AWS (LAWS).[13] However, states have not reached a consensus on the definition of these systems and the level of human control over critical combat decisions, with different states’ approaches reflecting different strategic considerations.[14]

This lack of consensus complicates the enforcement of International Humanitarian Law (IHL) and raises questions about the legality of autonomous operations that select and engage targets without meaningful human intervention.[15] While IHL prohibits certain weapons and sets general restrictions on the use of all others,[16] in the absence of a specific international agreement states can interpret these general rules differently.[17] This situation creates uncertainty about whether AWS are compatible with the principles of distinction, proportionality, and precaution.

Within the EU, the legal challenges are compounded by the EU normative-strategic dilemma: the need to keep pace with global military advancements while adhering to its foundational values, such as human dignity and the rule of law.[18] These are values that geopolitical adversaries such as Russia, the People’s Republic of China, and North Korea do not uphold. This divergence in values creates an asymmetry in the strategic environment, where the EU’s normative commitments may translate into operational constraints, constraints that its adversaries are not only free from but may actively seek to exploit to their advantage. This normative-strategic dilemma is evident in the divergent perspectives within the EU. While the European Parliament pushes for regulations that would oversee the development and deployment of AWS to ensure they adhere to ethical standards and operational safety, internal discrepancies within European institutions and between member states persist.[19]

While the European Parliament pushes for regulations that would oversee the development and deployment of AWS to ensure they adhere to ethical standards and operational safety, internal discrepancies within European institutions and between member states persist.

For instance, in international discussions, especially at the CCW, France, while pursuing AI advancements for defence, calls for a flexible approach, proposing a political declaration that emphasises adherence to ethical norms and international law.[20] Germany and, even more firmly, Austria push for stringent regulations demanding significant human control over AWS and advocate legally binding rules to prohibit fully autonomous systems.[21], [22] These differing views within the EU complicate efforts to present a united front in international discussions, impeding the bloc’s ability to lead in establishing global norms for emerging military technologies.

Ethical Dilemmas

In addition to strategic and legal issues, the deployment of AI-enhanced AWS introduces profound ethical challenges, particularly concerning accountability in lethal scenarios. These systems can be configured with varying degrees of autonomy, and any significant reduction in human oversight, up to the point of fully autonomous operations, amplifies the challenge of assigning responsibility for life-and-death decisions. While the notion of granting full autonomy to AI systems may appear unrealistic at present, this possibility becomes more plausible when factoring in well-documented human cognitive biases toward automation.

A key concern is automation bias, or automation complacency, whereby human operators tend to over-trust AI outputs and disregard contradictory information, even when they retain some level of control in the decision-making loop. The presence of a human operator alone does not guarantee meaningful oversight, as excessive reliance on AI-generated recommendations can lead to passive acceptance of flawed or incomplete decisions. An operator who holds the final decision over a strike, for example, may in practice almost always accept the system’s recommendation and authorise the engagement. Trust in automation can cause users to defer judgment, increasing the risk that decisions are made by algorithms rather than critically reviewed by humans.[23], [24] As a result, the ethical dilemma extends beyond the simple question of whether humans are “in the loop”; it also concerns the quality and assertiveness of human oversight. Delegating lethal decision-making to AI systems, even implicitly, challenges foundational moral and legal principles about agency, accountability, and the sanctity of human life.

Trust in automation can cause users to defer judgment, increasing the risk that decisions are made by algorithms rather than critically reviewed by humans.

The potential for autonomous killings marks a significant evolution in the conduct of warfare, raising concerns about the progressive dehumanisation of combat.[25] The deployment of AWS may reduce combatants to mere targets managed by algorithms, eroding the human empathy and discretion traditionally involved in combat decision-making.[26] While some have argued that removing soldiers from direct engagement could lower the human and political costs of war and the threshold for initiating conflict,[27] this remains theoretical at present. Current conflicts, including the war in Ukraine, demonstrate that even the widespread use of unmanned systems has not reduced the need for human soldiers or the intensity of conventional combat operations. On the contrary, Ukraine’s ongoing mobilisation efforts highlight the continued reliance on human personnel across all fronts. Moreover, there are to date few, if any, confirmed cases in which autonomous systems have independently and successfully executed military missions without human involvement. For instance, while the Iranian missile and drone attack on Israel in April 2024 was largely unmanned, it served primarily as a strategic signalling operation rather than an example of autonomous warfare. These cases suggest that, although autonomy can enhance existing capabilities, it has not yet supplanted the central role of human presence in high-intensity conflict. That said, while speculative at this stage, future technological developments could shift this balance, potentially enabling more autonomous operations with reduced human involvement.

NATO has responded to these ethical challenges by establishing Principles of Responsible Use (PRUs) of AI in defence.[28] These principles are crucial in shaping how AI-enhanced AWS are developed and deployed, emphasising human control and ethical responsibility. The application of these principles, which NATO’s Data and Artificial Intelligence Review Board (DARB) oversees,[29] is intended to mitigate risks associated with AI autonomy in military operations, ensuring that actions taken by automated systems are understandable, predictable, and aligned with the strategic intentions of human commanders.[30]

Case Studies: Ukraine and Gaza

The recent conflicts in Ukraine and Gaza offer stark illustrations of the dual-edged nature of AI-enhanced AWS. In the ongoing conflict in Ukraine, the use of AI has allowed the extension of drone capabilities for reconnaissance and targeted strikes deep into enemy territory, demonstrating the tactical flexibility provided by AI in countering electronic jamming and enhancing target acquisition.[31] This situation urges NATO and EU member states to reconsider their defensive (and offensive) strategies in light of AI’s potential to reshape combat dynamics.[32]

In the ongoing conflict in Ukraine, the use of AI has allowed the extension of drone capabilities for reconnaissance and targeted strikes deep into enemy territory, demonstrating the tactical flexibility provided by AI in countering electronic jamming and enhancing target acquisition.

In contrast, the scenario in Gaza illustrates the darker side of AI applications in military settings. The Israeli use of the “Lavender” AI system to identify and target alleged militants, reportedly resulting in high civilian casualties, raises profound ethical and legal concerns, particularly regarding the principles of proportionality and distinction under international humanitarian law.[33] The reported lack of meaningful human control over the system illustrates the danger of over-reliance on AWS decision-making algorithms, which may not always align with international and ethical standards. Notably, these standards are internationally recognised but not globally accepted or uniformly enforced, allowing for divergent interpretations and selective adherence by states. This case underscores the pressing need for clear international legal frameworks to govern the use of AI-enhanced AWS.

Policy Options and Recommendations

Policy Option 1: Establishment of a Unified European AI-AWS Framework

This policy option proposes the establishment of a clear and uniform framework within NATO and the EU to regulate AI-enhanced AWS. This new normative architecture could build on existing instruments, such as the EU’s Common Position on arms exports, and follow the principles of international humanitarian law, particularly distinction, proportionality, and precaution. Doctrinally, NATO’s Principles of Responsible Use for AI in Defence can provide a foundation for operational, ethical, and technical standards. These existing tools could be adapted and integrated into a single instrument or set of common operational guidelines that clarify acceptable levels of autonomy, human control, and accountability mechanisms in AWS. This could involve a code of conduct, common certification mechanisms, and joint testing protocols for AI-enabled systems to ensure interoperability and compliance. This approach would guide the development, deployment, and oversight of these systems, ensuring they comply with international humanitarian law.

Benefits and Drawbacks

The adoption of a unified framework offers the advantage of standardisation and interoperability, which would improve the effectiveness of joint military operations and reinforce ethical and legal compliance by stressing human oversight and accountability of AI-AWS. This would potentially increase public trust and strengthen the EU and NATO’s stance in global discussions on military AI technology. While global adoption of such standards cannot be guaranteed, it is plausible that like-minded, norm-driven actors would follow suit, contributing to the gradual shaping of global expectations. Even a partial diffusion of these standards could help mitigate the misuse of AI technologies and reduce the risk of escalation or arms races. Although this approach may appear overly optimistic, the EU and NATO, by virtue of their influence, can play a meaningful role in promoting alignment among partners and exerting normative pressure on less cooperative states, thereby contributing to a more responsible and restrained trajectory in the military application of AI.

The adoption of a unified framework offers the advantage of standardisation and interoperability, which would improve the effectiveness of joint military operations and reinforce ethical and legal compliance by stressing human oversight and accountability of AI-AWS.

The main challenge lies in accommodating diverse military protocols and national laws. The controls required might discourage rapid technological advancements and put NATO and EU forces at a strategic disadvantage compared to less-regulated opponents, such as the PRC and Russia. Additionally, standardisation can be perceived as an infringement on states’ national sovereignty or an impediment to their technological advancements.

Recommendations

  • Develop strong ethical AI standards by working with lawyers, ethicists, military strategists, and technology experts, ensuring these standards include human oversight and clear accountability;
  • Establish an oversight body with authority to monitor and enforce these standards through regular audits and managing violations through sanctions to maintain alignment with ethical and legal norms;
  • Encourage innovation within regulatory limits by funding AI research that meets ethical standards; and
  • Regularly conduct war games and simulations to verify the effectiveness and ethical implications of AI-AWS, adapting and adjusting the technologies based on these results.

Policy Option 2: Development of Defensive AI Capabilities

This policy alternative proposes that the EU and NATO prioritise developing and deploying AI-enhanced AWS strictly for defensive applications. By focusing on areas such as missile defence and AI-assisted early warning systems, this approach seeks to strengthen member states’ defensive postures while minimising the ethical and legal controversies associated with offensive autonomy. Focusing on defensive applications allows for a clearer implementation of meaningful human control, reinforces compliance with international humanitarian law, and may improve public acceptability. Moreover, it presents fewer risks of escalation or misuse, as such systems are designed to react to incoming threats rather than initiate hostilities.

Focusing on defensive applications allows for a clearer implementation of meaningful human control, reinforces compliance with international humanitarian law, and may improve public acceptability.

Benefits and Drawbacks

The primary advantage of focusing on defensive AI systems lies in their alignment with ethical considerations, potentially alleviating public concerns about the militarisation of AI technology. Such systems, inherently designed to protect and deter rather than project power, may facilitate broader international acceptance and compliance with humanitarian norms.

However, this focus on defensive technology might limit strategic capabilities, particularly where offensive actions are necessary. Concentrating solely on defensive AI could also induce an arms race, as adversaries might develop more advanced offensive capabilities to overcome enhanced defences.

Recommendations

  • Develop AI systems as decision-support tools that provide situational analysis and threat detection, ensuring that human operators make informed, final decisions on targeting. This approach reinforces ethical and legal oversight, particularly in densely populated civilian areas, and aligns with international humanitarian law by aiming to minimise civilian harm. While human involvement does not inherently guarantee ethical outcomes, as historical atrocities have demonstrated, it remains a critical safeguard within existing legal and normative frameworks. Strengthening meaningful human control can also help avoid cases like the Israeli “Lavender” system, where lethal decisions were reportedly driven primarily by automated processes with minimal human scrutiny.
  • Enhance transparency by actively engaging with the public and regularly demonstrating the functioning of defensive AI systems. This approach helps build trust by clarifying how these technologies operate and emphasising their protective roles, thereby demystifying the systems and highlighting their safety benefits.

Conclusion

The integration of AI technologies into AWS presents significant opportunities and challenges for European security governance. As these technologies become increasingly central to military operations, the strategic, legal, and ethical implications for NATO and the EU grow more complex and pressing. European policymakers must undertake a proactive approach to address these challenges, developing robust frameworks that ensure the responsible use of AI in military settings while fostering an environment of continuous dialogue and research.

The integration of AI technologies into AWS presents significant opportunities and challenges for European security governance.

Engagement across various domains—including military, legal, ethical, and public sectors—is essential to navigate these complexities effectively. This comprehensive approach will enable European and like-minded nations to capitalise on the benefits of AI-enhanced AWS and ensure that their deployment is consistent with international legal standards and ethical norms. By leading in establishing global norms for using AI in military operations, Europe can set a benchmark for responsible conduct that contributes to global stability and peace.

 


Nicola Bonsegna is a Security and Defence Researcher at FINABEL and a Master’s student with research interests in defence technologies, cyber, and military strategy. He has published several analyses on topics including the role of space in modern military operations, the integration of unmanned systems in US strategy, and the use of decoys in the war in Ukraine. The views contained in this article are the author’s own and do not represent the views of FINABEL.


[1] Sandra M. Pedron and João A. da Cruz, The Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (LAWS), 2020, https://core.ac.uk/download/pdf/328003419.pdf.

[2] Seth Cropsey, Drone Warfare in Ukraine: Historical Context and Implications for the Future, Strategika, no. 91 (March 14, 2024), Hoover Institution, https://www.hoover.org/research/drone-warfare-ukraine-historical-context-and-implications-future.

[3] K. Bondar, “Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare,” Centre for Strategic and International Studies (CSIS), Wadhwani AI Centre, https://csis-website-prod.s3.amazonaws.com/s3fs-public/2025-03/250306_Bondar_Autonomy_AI.pdf.

[4] UNODA, “Lethal Autonomous Weapon Systems (LAWS),” n.d., https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.

[5] Samuel Clement, NATO and Artificial Intelligence: Navigating the Challenges and Opportunities, NATO Parliamentary Assembly Report, 2024, https://www.nato-pa.int/download-file?filename=/sites/default/files/2024-04/058%20STC%2024%20E%20-%20NATO%20AI%20-%20CLEMENT%20REPORT.pdf.

[6] Idem.

[7] Idem.

[8] NATO, Summary of NATO’s Autonomy Implementation Plan, 2022, https://www.nato.int/cps/en/natohq/official_texts_208376.htm.

[9] Sandra M. Pedron and João A. da Cruz, The Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (LAWS), 2020, https://core.ac.uk/download/pdf/328003419.pdf.

[10] Jürgen Altmann and Frank Sauer, “Autonomous Weapon Systems and Strategic Stability,” Survival 59, no. 5 (2017): 117–142, https://doi.org/10.1080/00396338.2017.1375263.

[11] Michael C. Horowitz, “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability,” Journal of Strategic Studies 42, no. 6 (2019): 764–788, https://doi.org/10.1080/01402390.2019.1621174.

[12] United Nations, Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems, 2024, https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2024.

[13] United Nations, Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems, 2024, https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2024.

[14] Dario Badell and Lena Schmitt, “Contested Views? Tracing European Positions on Lethal Autonomous Weapon Systems,” European Security 31, no. 2 (2022): 242–261, https://doi.org/10.1080/09662839.2021.2007476.

[15] Anna Solovyeva and Nikola Hynek, “Going Beyond the ‘Killer Robots’ Debate,” European Parliament Document, 2018, https://www.europarl.europa.eu/doceo/document/TA-9-2021-0009_EN.pdf.

[16] Federal Department of Foreign Affairs (FDFA), “Prohibition and Restriction of Weapons,” n.d., https://www.eda.admin.ch/eda/en/fdfa/foreign-policy/international-law/international-humanitarian-law/prohibition-restriction-conventional-weapons.html.

[17] United Nations Secretary-General, “Note to Correspondents: Joint Call by the United Nations Secretary-General and the President of the International Committee of the Red Cross for States to Establish New Prohibitions and Restrictions on Autonomous Weapon Systems,” October 5, 2023, https://www.un.org/sg/en/content/sg/note-correspondents/2023-10-05/note-correspondents-joint-call-the-united-nations-secretary-general-and-the-president-of-the-international-committee-of-the-red-cross-for-states-establish-new.

[18] Dario Badell and Lena Schmitt, “Contested Views? Tracing European Positions on Lethal Autonomous Weapon Systems,” European Security 31, no. 2 (2022): 242–261, https://doi.org/10.1080/09662839.2021.2007476.

[19] Idem.

[20] Jean-Baptiste Jeangène Vilmer, “A French Opinion on the Ethics of Autonomous Weapons,” War on the Rocks, June 2021, https://warontherocks.com/2021/06/the-french-defense-ethics-committees-opinion-on-autonomous-weapons/.

[21] Dario Badell and Lena Schmitt, “Contested Views? Tracing European Positions on Lethal Autonomous Weapon Systems,” European Security 31, no. 2 (2022): 242–261, https://doi.org/10.1080/09662839.2021.2007476.

[22] Austrian Federal Ministry for European and International Affairs, “Autonomous Weapons Systems,” accessed April 14, 2025, https://www.bmeia.gv.at/en/european-foreign-policy/disarmament/conventional-arms/autonomous-weapons-systems.

[23] Joint Air Power Competence Centre, “Is Human-On-the-Loop the Best Answer for Rapid Relevant Responses?,” Joint Air & Space Power Conference 2021 Read Ahead, accessed April 14, 2025, https://www.japcc.org/essays/is-human-on-the-loop-the-best-answer-for-rapid-relevant-responses.

[24] Lucía Vicente and Helena Matute, “Humans Inherit Artificial Intelligence Biases,” Scientific Reports 13 (2023): Article 15737, https://doi.org/10.1038/s41598-023-42384-8.

[25] Anna Solovyeva and Nikola Hynek, “Going Beyond the ‘Killer Robots’ Debate,” European Parliament Document, 2018, https://www.europarl.europa.eu/doceo/document/TA-9-2021-0009_EN.pdf.

[26] Sandra M. Pedron and João A. da Cruz, The Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (LAWS), 2020, https://core.ac.uk/download/pdf/328003419.pdf.

[27] Jürgen Altmann and Frank Sauer, “Autonomous Weapon Systems and Strategic Stability,” Survival 59, no. 5 (2017): 117–142, https://doi.org/10.1080/00396338.2017.1375263.

[28] NATO, Summary of the NATO Artificial Intelligence Strategy, 2021, https://www.nato.int/cps/en/natohq/official_texts_187617.htm.

[29] NATO, NATO’s Data and Artificial Intelligence Review Board, 2022, https://www.nato.int/cps/en/natohq/official_texts_208374.htm.

[30] NATO, “Emerging and Disruptive Technologies,” last updated August 8, 2024, https://www.nato.int/cps/en/natohq/topics_184303.htm.

[31] Manuel Zafra et al., “How Drone Combat in Ukraine Is Changing Warfare,” Reuters, 2024, https://www.reuters.com/graphics/UKRAINE-CRISIS/DRONES/dwpkeyjwkpm/.

[32] Fabrice Diaz-Maurin, “AI and the Future of Warfare: The Troubling Evidence from the US Military,” Bulletin of the Atomic Scientists, November 2023, https://thebulletin.org/2023/11/ai-and-the-future-of-warfare-the-troubling-evidence-from-the-us-military/.

[33] Amjad Iraqi, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza,” +972 Magazine, 2024, https://www.972mag.com/lavender-ai-israeli-army-gaza/.
