Technical, Organisational, And Strategic Dimensions In Defence-Critical Contexts

Abstract: Artificial Intelligence is transforming the functioning and interaction of weapon systems and changing how battles are fought. Yet many questions regarding the safety and security of AI systems remain unresolved. Our central argument is that military organisations must adopt a multifaceted approach to manage AI-associated risks effectively, combining technical measures with organisational adaptability. This can only be achieved by considering institutional mechanisms as well as software and hardware in the context of AI development. By addressing these challenges, military organisations can navigate the evolving landscape of AI more securely while enhancing their operational efficiency and effectiveness.

Problem statement: What constitutes AI security, and how should large-scale organisations in the military domain approach AI security when integrating AI into their operational processes?

So what?: AI security differs from conventional IT security because it is concerned with systems that may learn and alter their behaviour without human intervention. Thus, besides addressing the obvious technical challenges that result from such systems, every organisation that adopts AI must take into account human factors and organisational framework conditions, which can influence AI security just as strongly as purely technical aspects. Particularly large, complex, and hierarchical organisations, such as public bodies or military organisations, should be especially attentive to the ‘soft’ side of AI security, which relates to the organisational embedding of AI, its use cases, and the interfaces to and from AI systems.


Speaking of Artificial Intelligence…

Artificial intelligence (AI) is currently at the forefront of public discourse. Many people and organisations, in both the private and public sectors, are drawn to the potential of this technology. However, AI is not a recent development; it has, in fact, been evolving since the 1950s.[1] Back then, Alan Turing, considered one of the pioneers of modern IT, laid the groundwork for machines capable of something like intelligence. A few years later, in 1956, Artificial Intelligence was introduced as a multidisciplinary field of research at a small conference at Dartmouth College (New Hampshire, U.S.), nowadays referred to as the “Dartmouth Conference” and cited by many as the birthplace of modern AI research. In the decades that followed, AI was initially dominated by logic and reasoning, known as symbolic AI. In the 1980s, more data-driven and probabilistic methods were developed, which allowed for a more implicit representation of knowledge and led to sub-symbolic AI. The first expert systems emerged, and the handling of uncertainty improved. Still, the technology struggled to cope with the requirements of real-world complexity. After periods of stagnation, the next wave of AI development emerged around the millennium, when increasingly powerful IT infrastructure, coupled with improved algorithms and better data availability, paved the way for breakthroughs in the 2010s, as applications such as computer vision, speech recognition, and reinforcement learning began to improve rapidly. Only recently, with the advent of large language models (LLMs) in the 2020s, did the technology finally become accessible to people outside IT and make its way into the mainstream.

AI is not a recent development; it has, in fact, been evolving since the 1950s.

Interestingly, these language models (e.g., OpenAI’s ChatGPT) also appear to have shaped the colloquial idea of what AI means and is capable of. Although AI has far more to offer than LLM chatbots, many people remain unclear about what the technology can do and what technical principles underlie it. Studies show that competencies regarding AI (“AI literacy”) often end at a basic level of understanding.[2] Even so, or perhaps precisely for that reason, AI currently sits at the peak of inflated expectations in the consulting firm Gartner’s popular hype cycle for emerging technologies.[3]

While some already anticipate AI taking total control of the world, others maintain a more realistic and cautious perspective, one that increasingly points towards AI security. Ever since some drastic malfunctions of popular models became public, people have started questioning the reliability and trustworthiness of AI systems. Whether in the context of self-driving vehicles, job application screening, chatbots insulting users, or the breach of intellectual property rights through AI models, the potential flaws of AI systems are manifold and sometimes difficult to detect.[4] The same goes for potential attack vectors against AI systems. Many organisations, along with the providers of AI solutions, have recognised this risk and are working intensively to counter threats and improve the security of their AI systems.[5] Considerations of AI security are particularly important when the systems in use can have far-reaching consequences for the safety and lives of people, as is often the case in a military context. As we will see, AI security can differ significantly from traditional IT security.[6]

Risks of AI in the Military Domain

When considering the risks of AI in the military domain, it is essential to acknowledge that the uncertainty accompanying the rapid development and application of the technology constitutes a significant component of the overall risk assessment. Finding the “unknown unknowns”–the things we don’t know we don’t know, as former U.S. Secretary of Defense Donald Rumsfeld famously put it–proves naturally challenging in the field of AI. Here, uncertainty can come into play at different levels: concerning one specific system, the interaction of several systems, or the development of entirely new systems. Overall, this situation creates a very complex risk landscape that significantly impedes precise long-term estimates. Analysing recent literature,[7] we can nevertheless identify several categories of risk which are frequently mentioned in discussions of AI in a military context:

  • Introducing Bias in Decision-Making: One of the critical concerns surrounding AI in military applications is the potential for false decisions and unintended escalation through new forms of distortion. This can either directly affect autonomous systems or occur through AI-generated deepfakes of various kinds. Autonomous systems, particularly those designed for target selection and threat assessment, may misinterpret data or execute actions that inadvertently lead to conflict. Given the speed at which AI processes information, incorrect threat identification could trigger military responses without sufficient human oversight. This risk is exacerbated by deploying AI in high-stakes environments where real-time decision-making is necessary. Recent research emphasises this risk particularly, but not exclusively, in situations where AI must engage in unfamiliar domains and contexts with little or only poor-quality prior data available. Examples include nuclear early warning systems,[8] autonomous drone strikes,[9] and missile defence systems.[10] On the other hand, even if the metaphorical trigger is pulled by human personnel, there is a growing occurrence of audio-visual deepfakes, distorted early warning assessments, and false-positive safety alerts in infrastructure facilities, all of which can lead to forceful responses;[11]
  • Ethical and Legal Challenges: AI’s role in military decision-making raises significant ethical and legal concerns, particularly regarding delegating the use of lethal force to machines. The lack of human intervention in critical moments of engagement may lead to violations of international humanitarian law. Additionally, algorithmic biases can result in disproportionate targeting, increasing the risk of civilian casualties and undermining accountability in military operations. While this aspect is already problematic in itself, it also leads to a growing imbalance between states that place a high value on ethical and moral standards and those that do not. Such an asymmetric norm adherence confronts liberal democracies with a moral dilemma: how to uphold normative commitments without being strategically outpaced by less constrained adversaries;
  • Security Vulnerabilities and AI Exploitation: AI systems are inherently vulnerable to cyber threats, including hacking, data poisoning, and adversarial attacks (a minimal sketch of data poisoning follows this list). These vulnerabilities create opportunities for adversaries to manipulate AI-driven decision-making processes, potentially causing incorrect assessments or the malfunctioning of autonomous weapons. A growing dependence on AI systems necessitates well-considered cybersecurity measures to prevent exploitation;
  • AI Arms Race and Global Instability: The rapid development and deployment of military AI technologies contribute to an international arms race, with nations competing for technological superiority. In the context of AI, the winner-takes-all principle likely applies even more strongly than in other domains: those who excel at utilising AI gain a significant advantage over those who are less advanced or have implemented it less successfully;
  • Over-Reliance on AI and Human Complacency: A growing dependence on AI may lead to too much trust being placed in algorithmic decision-making, diminishing human oversight. This might, in turn, leave military personnel less inclined to critically assess flawed or incomplete AI recommendations in particular scenarios. Such overreliance can result in strategic miscalculations and ineffective operational planning. Recent research also indicates a loss of skill amongst human personnel when AI is frequently used for a given task[12]–a pattern reminiscent of the decline in navigational skills following the widespread adoption of GPS technology.[13]
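
To illustrate one of the attack classes named above, the following is a minimal, hedged sketch of a naive label-flipping data-poisoning attack in Python with scikit-learn. It is a toy demonstration on synthetic data, not a depiction of any fielded system; all names and parameters are illustrative, and real poisoning attacks are typically targeted and far harder to detect.

```python
# Toy label-flipping data poisoning: an adversary who can tamper with a
# fraction of the training labels measurably degrades the trained model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Flip the labels of a given fraction of training samples (binary: 0 <-> 1)."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, fraction, rng))
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Even this crude attack illustrates why provenance controls on training data matter: the defender sees only a slightly worse model, with no obvious signature of tampering.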

Organisational Capabilities for Dealing with AI-Associated Risks

As may have become apparent by this point, securing AI is more than a purely technical endeavour. Yet how can we best prepare for what lies ahead in terms of AI and AI security? The answer may sound trivial, but its implementation is anything but straightforward: individual understanding and organisational agility will become key pillars of AI and its secure use in the coming years. However, the larger, more bureaucratic, and more hierarchical the organisation, the harder these qualities are to achieve.

What is known to developers and organisational theorists as Conway’s law states that “organisations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organisations.”[14] At this stage, we invite the reader to consider their own organisation and reflect on that premise to put the following remarks into context. While the authors primarily address the issue with a focus on the military domain, most insights will also hold true for other organisations that use AI, be it in the public or private sector.

As has been demonstrated in organisational theory and management research across various theoretical frameworks, volatile contexts are best addressed by flexibility, adaptive structures, and the capacity to rapidly reconfigure resources. To describe these properties of organisations, organisational economist David J. Teece elaborated on the notion of “Dynamic Capabilities”, which describe an organisation’s capacity to recognise changes early, draw appropriate conclusions, and subsequently adjust its behaviour, necessary routines, and processes to develop and sustain a competitive advantage.[15] The idea is not to be confused with individual short-term adjustments but refers to a deeply rooted organisational capacity for learning and transformation.

In this light, AI security should, due to its complexity and far-reaching interfaces, always commence at the organisational level. Organisations must first build the capacity to adapt to changing conditions on a system-structural level. Developing effective, integrated system landscapes is already relevant and will only become more so as warfare grows more connected and integrated than ever before. A pioneering example of such system landscapes and development approaches can currently be observed in Ukraine’s Delta system. Delta is a real-time digital battlefield management platform that integrates data from drones, satellites, sensors, and human intelligence into a unified operational picture. It was developed through a highly adaptive, iterative process that combined agile software development with close civil-military collaboration, aligning with NATO standards while remaining flexible and open to rapid innovation. In contrast to traditional military system architectures, which are often siloed, rigid, and slow to evolve, Delta embodies a networked, interoperable, and user-centred design philosophy. Such an approach enables quick, iterative improvements whenever weaknesses emerge in the system or its components.

AI security should, due to its complexity and far-reaching interfaces, always commence at the organisational level. Organisations must first build the capacity to adapt to changing conditions on a system-structural level.

Following this line of argument, we see that organisations with strong hierarchical structures often struggle to respond quickly to change. Change is typically understood as a linear transition from a given state A to a desired state B. In dynamic environments, however, change is a continuous, pervasive process that requires ongoing adaptation rather than discrete transformations.[16] From an organisational perspective, the problem is that large, hierarchical, command-driven organisations typically exhibit a preference for stability and control, which manifests in rigid structural arrangements. This again echoes Conway’s Law, according to which system design inevitably reflects an organisation’s communication and coordination structures.[17] Given a constantly evolving landscape of integrated or even orchestrated threats, such rigidity leads to insufficient security outcomes.

Moreover, given the nature of the threats, it will be neither feasible nor functional to rely solely on internal experts within the organisation. Instead, what is required is an institutional foundation for efficient and reliable collaboration with external knowledge and technology providers and expert groups. The current situation in Ukraine demonstrates the effectiveness of collaborations between public and private entities, and in some cases individual actors, in developing and improving a military’s capacities. Another noteworthy example is the U.S. Defense Advanced Research Projects Agency (DARPA), which serves as a transfer institution for mission-oriented cutting-edge research from a variety of sources to support U.S. military forces with the latest technological innovations available.[18]

Understanding AI Security

Having outlined the organisational framework conditions that we consider essential for the secure use of AI, it is worth examining what makes AI security so distinctive compared with the security of other IT systems.

AI systems are generally understood as socio-technical systems.[19] As such, their security requirements differ from those of purely technical systems, as there is a much stronger emphasis on a holistic safety and security assessment that includes technical, social, legal, environmental and other aspects.[20] Furthermore, due to the inherent complexity of socio-technical systems, complete prediction and control of such systems is unrealistic – and in the case of AI, which largely relies on probabilities, hardly possible. Uncertainty inevitably plays a vital role as a component of risk.[21] In addition, defining and analysing the boundaries of a system can be challenging when the overall system extends beyond the technical level.[22] Thus, one should also differentiate between securing an AI model and securing the overall system. In other words, it is valid and important to improve a model’s resilience in the face of adversarial attacks; however, this is not sufficient to ensure the robust functioning of the overall system within its use case. Until recently, though, literature and guidelines addressing holistic assessments of AI systems were scarce, with research having focused mostly on the technical assurance of AI models.[23]

AI systems are generally understood as socio-technical systems.

The debate, however, has gained momentum–at least in the private sector–since the entry into force of the EU AI Act, which requires providers of high-risk systems to design them “in a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.” In this regard, Article 15(4) of the Act explicitly mentions technical and organisational measures to safeguard the system against errors, faults, or inconsistencies in its operating environment. Although most military AI applications do not fall within the scope of the AI Act, the criteria mentioned there can still serve as a good benchmark for system requirements.

In general, organisations should audit their AI systems to ensure the systems’ trustworthiness–a notion that includes elements of not only transparency, accountability, safety and security but also ethical considerations and privacy.[24] Each of these is potentially difficult, but the real challenge lies in the interfaces between those domains and, particularly, the actors responsible for them.[25] These actors include developers, users and executives responsible for the uptake of AI services in the organisation, as well as professionals from the legal domain and procurement. Detailing the requirements for trustworthy AI, Brundage et al. suggest three domains vital to its development: institutional mechanisms, software, and hardware.[26] If addressed properly, these domains can serve as guidelines for achieving trustworthy AI (eco)systems.

Institutional Mechanisms

Institutional mechanisms describe the principles that an organisation establishes for the development of AI applications. These principles relate to the organisation’s underlying values as well as development principles such as process documentation, information exchange among developers, third-party auditing, red teaming, and incentives for reporting incidents, biases, and safety concerns. They regularly serve as guidelines that enable the evaluation of systems.

Institutional mechanisms describe the principles that an organisation establishes for the development of AI applications.

In the private sector, such principles are further understood as a benchmark for assessing a company’s responsible use of AI. Brundage et al. note that by 2020, over eighty AI organisations had publicly stated their development principles for AI applications.[27] In a military context, by contrast, the red teaming approach can provide particularly valuable insights, especially in revealing hidden risks (“unknown unknowns”). This approach involves simulating attacks on systems from the viewpoint of an adversary to discover weaknesses within the observed systems.

Software

When considering the software component, robustness is the central requirement. To achieve an appropriate level of robustness, software (i.e., algorithms and models) should be regularly evaluated using methods of Adversarial Machine Learning. These evaluations go beyond traditional red teaming due to the specificity of machine learning applications and their ongoing training, deployment, monitoring and re-training cycle. To achieve robustness through adversarial methods, the algorithmic structure of the systems must be taken into account. Large Language Models and Image Recognition are two prominent examples:

  • Large Language Models (LLMs) are increasingly implemented in various use cases and proposed projects. As machine learning systems become more complex, LLMs are increasingly integrated to facilitate user interaction and interpretability. This approach enables non-expert users to query and comprehend machine learning outputs through natural language interfaces, thereby bridging the gap between advanced computational models and human interpretability.[28] It promises an increase in velocity, especially when confronted with substantial volumes of data in a fast-paced decision-making process. Complications arise, however, when considering the security of LLMs. Boreiko et al. (2024) examine the increasingly sophisticated jailbreaking attacks against LLMs, which aim to bypass the safety mechanisms that prevent harmful outputs.[29] Harmful outputs in a military context could include presenting a user with information above their classification clearance or other violations of an established need-to-know chain, compromising strategic or tactical robustness. This need not arise from malicious intent on the part of the user; it could also be due to AI applications not having been hardened through adversarial methods. According to recent studies, such LLM jailbreak attacks remain highly effective, even against modern safety-tuned models, with adaptive attacks that refine their strategies iteratively significantly outperforming simpler methods[30] (a minimal sketch of such an iterative probe follows this list);
  • As AI systems become an increasingly important component of military decision-making, image classification models are being utilised in reconnaissance, target identification, and battlefield assessment. The promise of AI-driven vision systems lies in their ability to rapidly process large volumes of sensor data, providing near-instantaneous intelligence and reducing human cognitive workload in high-stakes combat scenarios. However, the security vulnerabilities inherent in AI-based image recognition models pose a significant risk to military effectiveness. Adversarial methods, which involve imperceptible perturbations to input data, can deceive deep learning models into making critical misclassifications, potentially altering the course of military operations[31] (the second sketch after this list shows the canonical perturbation technique). Chen (2022) examines the threat posed by adversarial examples to military AI systems, focusing on their potential to disrupt the “kill chain”–the sequence of steps required to find, fix, track, target, engage, and assess (F2T2EA) enemy assets. The term adversarial example, as a technical term, refers to the intentional deception of an AI system through the injection of perturbations into its input data (e.g., applying certain graphical patterns to military equipment to evade visual reconnaissance systems). The study highlights how image-based adversarial attacks can be weaponised to delay or even neutralise enemy strikes by disrupting AI-driven reconnaissance and target acquisition processes. For instance, the U.S. Army Research Laboratory has previously demonstrated how adversarial perturbations applied to physical objects, such as stickers on vehicles, can prevent AI systems from correctly identifying them as military targets in both urban and forest environments.[32] The tactical impact of adversarial attacks, however, extends beyond mere misclassification. When adversarial examples are introduced at the “Find” and “Fix” stages of the kill chain, AI-enabled ISR (Intelligence, Surveillance, and Reconnaissance) systems can fail to detect or misidentify enemy units, leading to flawed targeting strategies. In Chen’s wargame simulations, an AI reconnaissance drone tasked with identifying enemy infrastructure failed to recognise its target due to an adversarially modified camouflage pattern, significantly increasing the mission’s duration from 6 minutes to over 30 minutes.[33] The delays caused by adversarial attacks allow enemy forces to reposition assets, execute countermeasures, or even gain the upper hand in an engagement. The consequences of adversarially compromised AI are particularly severe in autonomous weapons platforms, where real-time image recognition dictates engagement decisions. Misclassification of civilian structures as enemy combatants, or vice versa, could lead either to unlawful collateral damage or to mission failure due to an inability to execute valid strikes.[34] Moreover, adversarial techniques can extend beyond image classification to affect SAR (Synthetic Aperture Radar) imaging and infrared recognition systems, posing risks to AI-enhanced surveillance and early warning systems.[35]
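
To make the adaptive jailbreak idea concrete, the following is a minimal, hedged red-teaming harness in Python. `query_model`, the wrapper templates, and the refusal markers are hypothetical placeholders rather than any real API; production evaluations of the kind Boreiko et al. describe mutate prompts based on model feedback and use far more reliable success criteria.

```python
# Sketch of an iterative jailbreak probe: rewrap a forbidden request in
# different templates and log any attempt the model does not refuse.
from typing import Callable

REFUSAL_MARKERS = ("i cannot", "i can't", "not able to assist")  # crude proxy

def looks_like_refusal(response: str) -> bool:
    """Treat canned refusal phrases as evidence the attempt was blocked."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def adaptive_jailbreak_probe(query_model: Callable[[str], str],
                             forbidden_request: str,
                             wrappers: list[str],
                             max_rounds: int = 3) -> list[str]:
    """Repeatedly rewrap a forbidden request; collect prompts that slip through."""
    successes = []
    for round_no in range(max_rounds):
        for template in wrappers:
            prompt = template.format(request=forbidden_request, round=round_no)
            if not looks_like_refusal(query_model(prompt)):
                successes.append(prompt)  # logged for the safety team to review
        # A genuinely adaptive attacker would now mutate the best-performing
        # wrappers; this sketch simply reuses them, a much weaker strategy.
    return successes
```

Even a toy harness like this, run continuously against an organisation's own models, turns jailbreak testing from a one-off audit into part of the re-training cycle described above.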
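
The perturbations discussed in the second example are typically generated by gradient-based methods. Below is a minimal sketch of the Fast Gradient Sign Method (FGSM), the canonical digital technique; it assumes a generic differentiable PyTorch classifier and is illustrative only, since physical-world attacks such as adversarial camouflage require considerably more engineering than a one-step gradient perturbation.

```python
# FGSM: nudge every input pixel a tiny step in the direction that most
# increases the classifier's loss, yielding a near-imperceptible perturbation.
import torch
import torch.nn.functional as F

def fgsm_example(model: torch.nn.Module, x: torch.Tensor,
                 y: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (pixel values in [0, 1])."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step of size epsilon along the sign of the loss gradient
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Evaluating a model's accuracy on such perturbed inputs, and re-training on them (adversarial training), is one standard way to build the robustness this section calls for.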

Hardware

As military AI systems increasingly rely on specialised hardware accelerators, such as GPUs, TPUs, and domain-specific AI chips, the security of these infrastructures is a pressing concern. Unlike traditional computing environments, where trusted execution environments (TEEs) and secure enclaves protect against adversarial access, most AI hardware lacks standardised security mechanisms. This creates potential vulnerabilities, especially in high-stakes military contexts wherein adversarial actors could exploit hardware weaknesses to compromise AI-driven decision-making systems.[36] Brundage et al. (2020) discuss the growing need for trusted execution environments tailored to AI workloads, particularly for machine learning models deployed in classified or sensitive operations. The risk extends beyond software-based attacks; hardware vulnerabilities such as side-channel attacks, model extraction, and fault injection could enable adversaries to steal, manipulate, or corrupt AI models running on insecure processors. Secure enclaves, designed to isolate sensitive computations from potential external threats, have been widely implemented in enterprise computing but are largely absent in military AI hardware deployments. This omission presents a strategic weakness in AI-assisted reconnaissance, autonomous systems, and cryptographic AI applications.[37]

The importance of secure hardware in military applications becomes evident in scenarios where adversarial interference could compromise AI-based image recognition, autonomous targeting, or encrypted battlefield communications. Without robust security mechanisms at the hardware level, classified AI-driven intelligence analysis tools risk being exploited through model inversion or adversarial perturbation attacks, leading to data leaks or compromised operational planning.[38] A potential scenario involves an adversary implanting backdoors in AI accelerators used for UAV-based reconnaissance, allowing real-time manipulation of object classification outputs and potentially altering or obscuring mission-critical intelligence.

The importance of secure hardware in military applications becomes evident in scenarios where adversarial interference could compromise AI-based image recognition, autonomous targeting, or encrypted battlefield communications.

Looking forward, Brundage et al. (2020) argue that future military AI systems must integrate secure execution environments directly into their hardware stacks, ensuring that machine learning models operate within verifiable, tamper-resistant infrastructures. This includes the development of AI-specific TEEs, cryptographically secured model deployments, and proactive anomaly detection systems at the hardware level. While existing research has focused mainly on software-based AI security, the authors emphasise that without secured hardware foundations, even the most advanced adversarial defences remain vulnerable to physical and side-channel exploits.[39]
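
As one concrete, software-level building block of such cryptographically secured model deployments, the following hedged sketch verifies a model artifact's digest and signature before it is loaded. The function name and key-handling choices are our assumptions, not Brundage et al.'s proposal; TEE attestation, key distribution, and manifest formats are deliberately out of scope.

```python
# Verify a model file's integrity and provenance before loading: the model
# is accepted only if its SHA-256 digest carries a valid Ed25519 signature
# from a trusted release key.
import hashlib
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_model_artifact(model_path: str, signature: bytes,
                          trusted_public_key: bytes) -> bool:
    """Return True only if the artifact's digest was signed by the trusted key."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).digest()
    public_key = Ed25519PublicKey.from_public_bytes(trusted_public_key)
    try:
        public_key.verify(signature, digest)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False
```

On its own this only defends against tampering in transit or at rest; as the authors stress, it must sit inside a hardware-backed execution environment to resist physical and side-channel attacks.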

Conclusion

Integrating AI into military applications presents a complex landscape of risks that necessitates a multifaceted approach to security. As AI systems evolve, their vulnerabilities must be addressed through robust institutional mechanisms, continuous software assessment, and secure hardware infrastructures. Rapid technological advancement, combined with a considerable potential for unintended consequences, adds to the complexity. An often-overlooked pitfall, particularly in larger organisations, is the lack of capability to respond dynamically to evolving technological requirements; considerations of AI security must therefore include organisational adaptability. Thus, the development of trustworthy AI systems consists of a prominent ‘hard’ part, addressing the security of the AI applications themselves, which should be accompanied by a ‘soft’ part concerned with the organisational context and embedding of the technology. Put more bluntly, if there is one piece of advice to be given, it is this: ensure that the departments responsible for AI-related matters have access to all necessary information and interfaces, and are empowered to respond autonomously and directly to emerging risks.

Lastly, it is essential to consider that although AI has been around for some time, we are currently witnessing a rapid shift in how the technology is shaping the military domain. What holds true today may be outdated a week later. Accordingly, most technical considerations should be viewed with a degree of caution. From the authors’ perspective, the greatest potential lies in the organisation’s structural orientation towards agile ways of working — enabling faster responses to technological developments than the competition, while also facilitating the integration of internal and external expertise to achieve superior outcomes. Future research might investigate how organisations can systematically integrate and utilise heterogeneous knowledge sources and expert contributions effectively. Moreover, it would be beneficial to map already existing expertise within organisations and explore mechanisms to foster cross-functional collaboration.

It is essential to consider that although AI has been around for some time, we are currently witnessing a rapid shift in how the technology is shaping the military domain.

Ultimately, the most successful actors in the long run will be those who are not just aware of individual risks and threats but also build (AI) systems and organisational structures capable of adapting to rapidly changing technological landscapes.


Clemens Kerschbaum is Argus-AI’s Head of Research and an Assistant Professor for Business and Innovation Management at New Design University. He specialises in Knowledge Management, focusing on various types of knowledge with a particular emphasis on non-rational types of knowledge and their impact on strategic decision-making and innovation. Before pursuing his PhD in Business, he obtained a master’s degree in Business Law. He has published his work in various academic journals and presented at an array of international conferences.

Raphael Dachs is COO at Argus-AI. He has a background in Applied Machine Learning and organisational AI design. Over the past decade, he has primarily collaborated with governmental agencies across Europe to establish AI expert teams and facilitate their development of AI products. His interests include AI security and Adversarial AI for high-risk applications, as well as organisational knowledge creation through the use of AI.

The views contained in this article are the authors’ alone.


[1] Michael Haenlein and Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence,” California Management Review 61, no. 4 (2019): 5–14, https://doi.org/10.1177/0008125619864925.

[2] Marie Hornberger et al., “What Do University Students Know about Artificial Intelligence? Development and Validation of an AI Literacy Test,” Computers and Education: Artificial Intelligence 5 (2023): 100165, https://doi.org/10.1016/j.caeai.2023.100165.

[3] https://www.gartner.com/en/newsroom/press-releases/2024-08-21-gartner-2024-hype-cycle-for-emerging-technologies-highlights-developer-productivity-total-experience-ai-and-security

[4] Valentin Hofmann et al., “AI Generates Covertly Racist Decisions about People Based on Their Dialect,” Nature 633, no. 8028 (2024): 147–54, https://doi.org/10.1038/s41586-024-07856-5.

[5] https://safety.google/cybersecurity-advancements/saif/ ; https://aws.amazon.com/de/ai/generative-ai/security/scoping-matrix/ ; https://www.nsa.gov/AISC/

[6] Miles Brundage et al., “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” version 2, preprint, arXiv, 2020, https://doi.org/10.48550/ARXIV.2004.07213.

[7] Wyatt Hoffman and Heeu Millie Kim, Reducing the Risks of Artificial Intelligence for Military Decision Advantage (Center for Security and Emerging Technology, 2023), https://doi.org/10.51593/2021CA008; Mia Hoffmann and Heather Frase, Adding Structure to AI Harm (Center for Security and Emerging Technology, 2023), https://doi.org/10.51593/20230022; Utsav Sharma Gaire, “Application of Artificial Intelligence in the Military: An Overview,” Unity Journal 4, no. 01 (2023): 161–74; Adib Bin Rashid et al., “Artificial Intelligence in the Military: An Overview of the Capabilities, Applications, and Challenges,” International Journal of Intelligent Systems 2023, no. 1 (2023): 8676366, https://doi.org/10.1155/2023/8676366; Jan Maarten Schraagen, Responsible Use of AI in Military Systems, 1st ed. (Chapman and Hall/CRC, 2024), https://doi.org/10.1201/9781003410379.

[8] James Johnson, “Automating the OODA Loop in the Age of Intelligent Machines: Reaffirming the Role of Humans in Command-and-Control Decision-Making in the Digital Age,” Defence Studies 23, no. 1 (2023): 43–67, https://doi.org/10.1080/14702436.2022.2102486.

[9] Hitoshi Nasu, “The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions,” Lieber Institute West Point, June 10, 2021, https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/.

[10] Kelsey Atherton, “Understanding the Errors Introduced by Military AI Applications,” Brookings Institution, May 6, 2022, https://www.brookings.edu/articles/understanding-the-errors-introduced-by-military-ai-applications/.

[11] Rashid et al., “Artificial Intelligence in the Military.”

[12] Nataliya Kosmyna et al., “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task,” arXiv:2506.08872, preprint, arXiv, June 10, 2025, https://doi.org/10.48550/arXiv.2506.08872.

[13] Roger McKinlay, “Technology: Use or Lose Our Navigation Skills,” Nature 531, no. 7596 (2016): 573–75.

[14] Melvin E. Conway, “How Do Committees Invent?,” Datamation 14, no. 5 (1968): 28–31, quote at 31.

[15] David J. Teece et al., “Dynamic Capabilities and Strategic Management,” Strategic Management Journal 18, no. 7 (1997): 509–33, https://doi.org/10.1002/(SICI)1097-0266(199708)18:7%3C509::AID-SMJ882%3E3.0.CO;2-Z.

[16] Haridimos Tsoukas and Robert Chia, “On Organizational Becoming: Rethinking Organizational Change,” Organization Science 13, no. 5 (2002): 567–82, https://doi.org/10.1287/orsc.13.5.567.7810.

[17] Conway, “How Do Committees Invent?”

[18] https://www.darpa.mil/

[19] Anezka Viskova-Robertson, “Socio-Technical System and Organizational AI Integration: An Integrative Literature Review,” Academy of Management Annual Meeting Proceedings 2023, no. 1 (2023): 1–6, https://doi.org/10.5465/AMPROC.2023.48bp.

[20] Terje Aven and Marja Ylönen, “A Risk Interpretation of Sociotechnical Safety Perspectives,” Reliability Engineering & System Safety 175 (July 2018): 13–18, https://doi.org/10.1016/j.ress.2018.03.004; Viskova-Robertson, “Socio-Technical System and Organizational AI Integration.”

[21] Aven and Ylönen, “A Risk Interpretation of Sociotechnical Safety Perspectives.”

[22] Albert Cherns, “The Principles of Sociotechnical Design,” Human Relations 29, no. 8 (1976): 783–92, https://doi.org/10.1177/001872677602900806; Albert Cherns, “Principles of Sociotechnical Design Revisted,” Human Relations 40, no. 3 (1987): 153–61, https://doi.org/10.1177/001872678704000303.

[23] Ayodeji Oseni et al., “Security and Privacy for Artificial Intelligence: Opportunities and Challenges,” version 1, preprint, arXiv, 2021, https://doi.org/10.48550/ARXIV.2102.04661.

[24] Brundage et al., “Toward Trustworthy AI Development.”

[25] Koshiyama et al., “Towards Algorithm Auditing.”

[26] Brundage et al., “Toward Trustworthy AI Development.”

[27] Brundage et al., “Toward Trustworthy AI Development.”

[28] Ibid.

[29] Valentyn Boreiko et al., “A Realistic Threat Model for Large Language Model Jailbreaks,” arXiv:2410.16222, preprint, arXiv, October 21, 2024, https://doi.org/10.48550/arXiv.2410.16222.

[30] Boreiko et al., “A Realistic Threat Model for Large Language Model Jailbreaks.”

[31] Yuwei Chen, “The Risk and Opportunity of Adversarial Example in Military Field,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 100–107, https://openaccess.thecvf.com/content/CVPR2022W/ArtOfRobust/html/Chen_The_Risk_and_Opportunity_of_Adversarial_Example_in_Military_Field_CVPRW_2022_paper.html.

[32] Ibid., 101.

[33] Ibid., 104.

[34] Ibid., 106.

[35] Ibid., 104.

[36] Brundage et al., “Toward Trustworthy AI Development.”

[37] Ibid., 46.

[38] Ibid., 48.

[39] Ibid., 49.
