
Contestable Military AI

Abstract: In the race for technological supremacy, the legal and ethical accountability frameworks governing how technology is constructed and deployed in warfare remain unsettled. In the case of key enabling technologies such as AI, several challenges have to be addressed simultaneously: the technology must combine product safety with ethical and human rights considerations and, when deployed for military purposes, must also answer to international law. As current legislation lags behind technological development, legal and ethical contestation will follow, and testing AI development and deployment against current or initial AI legislation will most likely require case-by-case court interpretation. In the case of military AI, this contestation will be shaped by just war and humanitarian reasoning.

Problem statement: How should the foreseeable contesting narrative of the development and deployment of AI in the military be integrated with its original technological, social and ethical narrative?

So what?: States and defence-oriented international alliances need to negotiate the values expressed in deployed AI systems and the ethical use of AI, so as to build a coherent narrative of the choices made during the development and deployment of AI systems, one that will stand the test of contestation.

Constructing, Deconstructing and Reconstructing Narratives of Technological Development and Deployment

Times of conflict stimulate technological development, and at present, technology develops at tremendous speed. Key enabling technologies are being deployed for offensive and defensive purposes on the physical battlefield and in the information war, as several latent and active conflicts and hybrid threats drive technological development. AI applications in defence include use in active combat and the creation of digital twins of the entire battlefield for training purposes. AI has also been integrated into deterrence capabilities as part of a "build up to build down" strategy (being the first to invest in advanced systems). Deterrence is thus achieved by providing accuracy and shifting the focus from numerous warheads to precision targeting, with geopolitical implications.[1] In intelligence, AI is used in and for psychological monitoring systems, seamless cognitive cooperation systems, cognitive warfare, Foreign Information Manipulation and Interference (FIMI), and hybrid threats. The deployment of AI-integrating systems in active warfare can be doubly challenged: first, due to the black box nature of the technology, and second, due to the contested, conflictual environment in which it is used – war – given that states have different or opposing views on the (in)appropriateness and desirability of the deployment and effects of these systems.

In intelligence, AI is used in and for psychological monitoring systems, seamless cognitive cooperation systems, cognitive warfare, Foreign Information Manipulation and Interference, and hybrid threats.

Contesting Narratives

Military initiatives thus deploy cutting-edge technology and intelligence to secure strategic advantage while minimising civilian losses. The application range of AI systems in the defence and intelligence fields is expanding, with uses for both offensive and defensive purposes. As a result of these novel applications, debates abound on what trustworthy and ethical AI is, with academics and military specialists addressing the topic. The development of legislative and ethical frameworks faces the challenge of first defining what an AI system is, a conceptualisation that often lags behind the development of the technology. Most approaches have a unidirectional focus, looking at the ethical aspects from the deployer's angle. At the moment, however, there is a gap in addressing ethical AI deployment from the perspective of contested narratives during combat and opposing value systems. This has several causes: the development of this particular technology, and the understanding of its complex implications, are still in their early phases, and technological design cannot foresee all the complexities of the real-world deployment setting. Additionally, there is the double black box nature of the AI system, which ultimately leaves more room for ethical interpretation than in the case of other technologies. Whether this effect will become fully explainable in time remains to be seen. As ethical contestations of technology happen post factum, the deployment of AI in real-life situations will, in time, bring these contestations to light.

Narratives on socio-technical ethics and technology development have long shaped political support for technology development and social acceptance. Studying narratives around technology development is important for understanding how international politics, security, and technological achievements are intertwined.[2] While the universality of legal frameworks is challenging, considerations on using AI in the military should not be restricted to ascertaining legality. Under international humanitarian law, an AI system should not be used if its legality cannot be ascertained; whether ascertaining legality is both a prerequisite and a sufficient condition remains a complex discussion. With AI being anthropomorphised, the contestation of the dual human-machine decision-making process will remain open to scrutiny. The contestation will be reflected in a (perceived) non-performance of an obligation, doubled by contestation of the (human) cognitive agency behind the AI's deployment. When constructing the narrative of the technological, humanitarian and ethical need for the use of AI in the military, the ethical aspects are the most challenging to define in universally acceptable terms due to differing value systems and the conflictual nature of war. The contestability of the chosen ethical framework is heightened by the conflictual nature of the setting in which the system is deployed and by the fact that technological advantage and obfuscation of capabilities represent strategic advantage on the battlefield, making outside investigation of legality and ethics difficult. In the case of AI use in the military, constructing, deconstructing and reconstructing the technological, humanitarian and ethical narrative so that it passes the test of contestation is imperative.

Legal and Ethical Challenges of Military Use of AI

There are countless studies on the ethics of novel technologies used in the military, and the addition of AI-focused ethical approaches has enriched the field. The black box effect of AI systems has also been extensively analysed in the literature. Because the technology is difficult to standardise in a comprehensive, explainable framework, its use remains open to contestation. When used in the military domain, AI has a double black box effect: one from the AI system itself and one from the secretive, non-disclosure nature of the military domain, which impedes external verification of norm compliance. Thus, a double contestation of AI systems manifests itself in the military domain, with a possible third element to be considered depending on the developer–deployer relationship. This volatile character of the technology is doubled by the intrinsically conflictual nature of war, where adversaries contest the legality and ethics of their opponents' moves and actions. It is thus expected that in warfare, the development, deployment and effects of systems, including AI, will be doubly challenged: first, due to the currently not fully explainable nature of the technology and, second, due to the politically charged environment in which it is used – war – given that states have different or opposing views on (in)appropriateness and desirability. Experts also identify a responsibility gap, given the active debate on responsibility for war conduct and the fact that states do not share internal analyses of others' behaviour.[3]

It is thus expected that in warfare, the development, deployment and effects of systems, including AI, will be doubly challenged due to the currently not fully explainable nature of the technology.

The chances that developed and deployed military systems embedding AI will be contested post factum are increased by several factors. These include active conflicts in which military forces with state-of-the-art weapons capacity are present, and latent conflicts becoming active. Additionally, a worldwide increase in military expenditure will expand AI development and deployment, while technology development far outpaces the development of legislative frameworks. In parallel, clashing value systems and views on what a just war is will continue to exist. Active conflict, in which improved weaponry is wanted to reach objectives faster, puts even more pressure on the speed with which these systems are put into active combat. Most likely, this will widen the gap between deployment and explainability frameworks, leading to further contestation of acceptance.

For example, the use in active combat of targeting systems embedding AI is often motivated by improved target identification rates. Still, the correct classification of targets can be contested by the other party. Anticipated contesting narratives inherent to active or latent combat should be included in the judgement of ethical AI and ethically used AI in the defence and intelligence domains. An AI-based target selection system in an active war that claims overall efficiency in target identification can be contested by the adverse party in terms of the correct representativeness of the selected targets under the Geneva Conventions, owing to the claimed discriminatory character of the AI system. Such contestation could draw on situational awareness or on-the-ground observations, which could themselves be subject to interpretation or alteration. However, even if AI possesses the agential capacity to determine what constitutes a threat and how to neutralise it, states only consider human agency responsible before the law. Post factum, these forms of contestation can require states to take public responsibility and can trigger demands for reparation.[4]

Negotiating and Embedding Values

Regarding the co-production of technology and ethics in cases of military–civilian collaboration, things become even more contested. This is due to a clash of value systems and to the ambiguity surrounding the purpose for which the technology is deployed – saving or ending life, depending on the dominant lens used to look at the problem. A representative example of this clash is Project Maven, in which Google employees strongly resisted participating in a military-initiated project.

Based on previous studies of ethical contestation of technological use in the military, approaches addressing the ethical use of AI in the military must incorporate technological, geopolitical and societal considerations and principles. How and what kind of intelligence is used in the defence field has been a matter of constant debate and contestation. One famous example is the Human Terrain System (HTS), an experimental Pentagon programme developed to address cultural awareness shortcomings at the operational and tactical level by embedding social, cultural, political, and ethnographic knowledge of the local population in each forward-deployed brigade or regimental staff.[5] Anthropologists were embedded in U.S. military teams deployed on the ground in Afghanistan and Iraq to give the military forces an operational and tactical advantage based on their cultural knowledge of the local population. This use of cultural knowledge by the U.S. military and intelligence agencies was considered unethical by the civilian side and a breach of the professional code of anthropologists by the American Anthropological Association.[6] That assessment was supported by narrative analysis of the project documentation. Ethical mishaps are not uncommon in extreme scenarios, given that real-life contexts exceed the initial expectations formed in lab settings and produce unforeseen effects. Another example of unforeseen ethical effects is the Stanford prison experiment, which had been green-lighted by the university's ethical review board. Returning to the case of AI used in conflict or for defence and intelligence purposes, such narrative analysis can occur much later, when the value system has shifted between the moment of the event and the moment of judgement. It is therefore all the more important to make an explainable choice about the embedded values and chosen priorities, considering the likelihood of contestation.

This use of cultural knowledge by the U.S. military and intelligence agencies was considered unethical by the civilian side and a breach of the professional code of anthropologists by the American Anthropological Association.

Adding to the abovementioned complex issues are the questions of who developed the system, how the developers' values were embedded in it, and how developer–deployer values are negotiated, irrespective of where and by whom systems are deployed. Because key national values inform decisions about which norms are embraced or rejected, and assuming that these values differ across states,[7] the negotiation over which values to embed in deployed weapon systems is particularly susceptible to international contestation, both ex ante and ex post.

In the civilian sector, AI market domination by a few (Western) hyperscalers has raised concerns that AI systems used worldwide reflect and therefore transmit the values, cultures and goals of these hyperscalers.[8] In so-called "systems of reliance", military allies need to negotiate the values they hold paramount within the alliance and also need the leverage to negotiate with AI developers when these are third parties. The perception that AI is crucial to attaining competitive advantage puts more pressure on deploying the systems in real-life situations. In the competition for strategic advantage, reaching consensus through negotiations on ways to regulate AI for military use seems to lack incentives beyond agreeing on lowest common denominators,[9] as going further would mean losing leverage in interstate competition. Additionally, even like-minded democracies will derive their own interpretations of how principles are best employed.[10] The EU has its own Open Strategic Autonomy agenda, implementing protective and promotional measures to stimulate the civilian AI sector, investing in AI factories and promoting a European approach focused on "trustworthy AI". How this translates to the military use of AI remains open, given the question of whose values are embedded within the AI system. While individual states with an aggressive posture may decide solely on the basis of self-interest which principles to prioritise and which strategy to adopt for military AI, within large strategic partnerships such as NATO the values around deployed AI systems need to be negotiated. This is then a matter of military law and ethics, as well as of the power matrices within the alliance. Since ethics is often culture- or system-bound, the power matrix within the alliance and between developers and deployers shapes the values that are ultimately embedded within the system. The fact that these value systems differ is exemplified by the European Parliament's 2014 call for a ban on the use of so-called "killer robots" while such systems remain in use by allied NATO forces in war theatres elsewhere.

Analyse, Reconcile, Address

Given the speed with which technology develops, regulatory and explainability frameworks developed now will most likely become outdated before they are implemented. The foreseen contestation of the development and deployment of AI systems should therefore be integrated into the initial narrative construction of the technology. Ethical AI in the military thus means both ethical AI and the ethical use of AI. For the former, frameworks that explicitly include prioritised values can be developed. Addressing the complexity of the geopolitical context in which the system is deployed is more challenging: such an analysis must consider multilevel variables and the different stakeholders in the system, reconcile competing agendas, and address conflicting narratives, both current and anticipated.

 


Diana M. Popa, AI, key enabling technologies, technological autonomy. https://orcid.org/0000-0001-8529-9416. The views contained in this article are the author’s alone and do not represent the views of Delft University of Technology.


[1] Steffan Puwal, “Should AI be banned from nuclear weapon systems?,” NATO Review, 2024.

[2] Irja Malmio, “Ethics as an enabler and a constraint – Narratives on technology development and artificial intelligence in military affairs through the case of Project Maven,” Technology in Society 72 (2023): 102193.

[3] Dustin Lewis, “Natural and Artificial Intelligence in Armed Conflict: Exploring Settled and Open Legal Questions,” Asser Institute reading, 2024.

[4] Tim Sweijs and Sofia Romansky, “International Norms Development and AI in the Military Domain,” CIGI Papers No. 300, 2024.

[5] Jacob Kipp, Lester Grau, Karl Prinslow, and Don Smith, “The Human Terrain System: A CORDS for the 21st Century,” Military Review, September–October 2006.

[6] David Rohde, “Army Enlists Anthropology in War Zones,” The New York Times, October 5, 2007.

[7] Alexander Wendt, “Collective Identity Formation and the International State,” American Political Science Review 88, no. 2 (1994): 384–96, https://doi.org/10.2307/2944711.

[8] Yoshua Bengio et al., “International Scientific Report on the Safety of Advanced AI,” AI Seoul Summit, 2024.

[9] Christine Hunter, A. E. Edward, A. L. Laurynas, and M. Klaus, “Regulating lethal autonomous weapon systems: exploring the challenges of explainability and traceability,” AI and Ethics 4 (2023): 229–45, https://doi.org/10.1007/s43681-023-00261-0.

[10] Idem.
