
The Illusion of Transparency: Data vs. Intelligence on the Battlefield

Abstract: Hybrid warfare integrates military and non-military tactics, leveraging AI, cyberspace, and the information realm as force multipliers. Cyberspace and the information realm serve as both domain and battlefield, while AI accelerates decision-making and disinformation. Modern surveillance fosters a perception of a “transparent battlefield,” yet raw data is not actionable intelligence, and understanding adversarial intent remains the challenge. The West faces a dilemma: adversaries exploit AI without restraint, while ethical and legal limits constrain the West’s own capabilities. Regardless, AI’s use in hybrid warfare is inevitable and will require strategic adaptation to maintain a competitive edge.

Problem statement: How is the claim of a transparent battlefield to be understood in a digitised world?

So what?: Western military and political leadership must prioritise the integration of AI, not merely for data gathering, but for producing and exploiting actionable Intel in real-time—including understanding adversarial intent. This requires collaboration with private tech companies under strict ethical and legal frameworks to ensure strategic advantage without sacrificing accountability. Simultaneously, analogue systems must be maintained to safeguard against technological failure or compromise.


Modern Conflicts, Data and Intel

21st-century conflict is increasingly shaped by the dynamics of hybrid warfare, an approach that integrates conventional military operations with non-military means.[1] Hybrid warfare employs all instruments of national power, including economic, diplomatic, informational, and military capabilities, alongside semi-governmental and private entities that act as complementary effectors. In this complex environment, artificial intelligence (AI), cyberspace, and the information domain emerge as critical enablers and force multipliers, fostering the perception of a so-called “transparent battlefield.” Most recently, the employment of Unmanned Aerial Vehicles (UAVs) on both sides in Russia’s war against Ukraine has evoked claims of a vitreous battlefield.[2] Likewise, similar conclusions were drawn following the 2020 Nagorno-Karabakh war between Armenia and Azerbaijan. What was overlooked in this case was the fact that Türkiye provided Azerbaijan with a “Turkish All-Inclusive” package, covering training, education, materiel, intelligence (Intel), and battle damage assessment capabilities.[3]

The employment of Unmanned Aerial Vehicles on both sides in Russia’s war against Ukraine has evoked claims of a vitreous battlefield.

However, the reality is that transparency remains largely illusory, and such claims are nothing new. More than 100 years ago, Giulio Douhet forecast in his renowned opus “Command of the Air” that there would be no more hiding on the battlefield, no more distinction between civilian and military targets, and no more covert movement of forces, due to air power’s overwhelming impact.[4] Similarly, cyber warfare and spectrum warfare–the control, development, and use of advanced electromagnetic (EM) spectrum technologies for strategic advantage and mission success in military engagements and intelligence gathering–tempted humankind to assume battlefield omniscience.[5]

The assumption is that modern reconnaissance capabilities have rendered covert movement impossible. Yet this perspective overlooks a fundamental distinction between raw data and actionable Intel. In none of these examples did a belligerent achieve decision-making superiority through enhanced data collection alone–which is, ultimately, merely the product of more numerous, better, or novel sensors. Raw data does not automatically confer comprehension of an opposing force’s intent. Such comprehension comes from superiority in Intel production, dissemination, and exploitation, which in turn yields the superiority in decision-making that makes a difference on the battlefield. Raw data is the foundation of Intel, not the basis for decisive action.[6]

While AI, cyberspace, and information technologies excel in collecting and processing vast amounts of data, they often fall short in interpreting the intentions of the actors. The ability to gather data is not synonymous with understanding the strategic objectives, motivations, or psychological drivers behind adversarial actions. As a result, the so-called transparent battlefield is, in reality, an incomplete and sometimes misleading construct.[7]

The Ukrainian Armed Forces’ so-called Kursk incursion in 2024 illustrates this. The Russian Armed Forces had reams of raw data; nevertheless, the operation’s purpose caught them off guard. Ukrainian forces were not invisible to Russian sensors, yet because the Russians failed to grasp their purpose and intent, the Ukrainian Armed Forces penetrated Russian territory and seized, to Vladimir Putin’s embarrassment, a swathe of the Kursk region. This operation not only attracted attention in the information domain; it fixed Russian forces for months and even led to the deployment of approximately 12,000 North Korean soldiers in the region.

Reinforcing the fallacy of the transparent battlefield: at the time of writing, and despite Russian claims, it had remained unclear for months whether the Russian Armed Forces had managed to fully clear the incursion.[8]

Intel as Foundation for Decision-Making

Understanding an opponent’s intent, within the context of its means and capabilities, is vital for planning at all levels of command, within and beyond the military realm. Indeed, the peculiarity of human decision-makers makes it almost impossible to fully understand, let alone predict, an actor’s intent with certainty. Humankind is often unpredictable. Consequently, all Intel is, to a certain extent, a best guess, based on historical data, best (or worst) practices, patterns, and probabilities. It is an assumption that has to be validated perpetually.[9] However, this best guess provides planners in all realms and at all levels of command with a foundation for developing appropriate measures. Intel is, therefore, a vital means to situational understanding.

All Intel is, to a certain extent, a best guess, based on historical data, best (or worst) practices, patterns, and probabilities. It is an assumption that has to be validated perpetually.

In this context, cyberspace plays a dual role in modern conflict: it is both an artificially created domain and an active battlefield. When combined with the information realm, cyberspace becomes a crucial factor in all phases of hybrid warfare. AI, in particular, accelerates the spread of information and disinformation, enabling rapid decision-making and influencing adversarial strategies.

Since human processing capabilities are limited by nature, both AI and cyberspace have the potential to alter the quality and velocity of Intel development. So far, in the Western world, both are mainly employed to gather information and data.[10] They set the scene for Intel development or the visualisation of collected data rather than creating an exploitable understanding of opposing forces’ intent. Without this critical understanding, AI and cyberspace will drive evolution rather than revolution in warfare; in other words, they may provide more of the same, at greater volume and velocity, instead of changing the rules of the game.

In this regard, cyberspace and AI might yet be game-changers. Suppose raw data could be processed more accurately to develop and disseminate an operational understanding of opposing forces’ intent in real time; suppose Intel could be developed not only from historical data but in correlation with an actor’s most recent deeds and most likely reading of its own necessities and urgencies; and suppose AI that duly considers ethical, moral, and legal standards could be embedded in mission command principles. Then the scene would be set for superiority in decision-making; indeed, potentially for all warring parties.

“I Need Some Intel, and I Need It Fast”

Technological advances, such as uncrewed systems, satellites, and cyberspace, mean that decision-makers and modern deployed armed forces are well-supplied with data. The challenge nowadays is to filter the relevant from the irrelevant within Big Data in a timely manner. Collected data must be processed to gain valuable information, which in turn has to be evaluated and turned into intelligence and knowledge. This theoretically simple process presents profound challenges. Processing data requires analytical and technical support to ensure currency, and human information-processing capability is the limiting factor.[11] Consequently, AI may mitigate this human limitation or even render it obsolete.

Technological advances, such as uncrewed systems, satellites, and cyberspace, mean that decision-makers and modern deployed armed forces are well-supplied with data.
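To make the distinction concrete, consider a deliberately minimal sketch, in Python, of the processing and evaluation steps just described. Nothing here reflects a fielded system; the Report schema, the reliability scale, and the threshold are illustrative assumptions. The point is structural: most collected data is discarded against current requirements and source credibility before a human analyst ever sees it.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A single piece of collected raw data (hypothetical schema)."""
    source: str         # e.g., "UAV", "SIGINT", "OSINT"
    topic: str          # what the report concerns
    reliability: float  # assessed source reliability, 0.0-1.0

def intel_cycle(raw_reports: list[Report], priority_topics: set[str],
                min_reliability: float = 0.6) -> list[Report]:
    """Toy illustration of the processing step of the Intel cycle:
    filter the relevant from the irrelevant, then rank what remains.
    Real systems fuse multi-source data and grade it more finely
    (e.g., reliability/credibility schemes such as NATO's A1-F6);
    this sketch reduces that to two simple rules.
    """
    # 1. Processing: discard data that answers no current requirement.
    relevant = [r for r in raw_reports if r.topic in priority_topics]
    # 2. Evaluation: discard low-confidence sources, rank the rest so
    #    the scarce human analyst sees the most credible reports first.
    credible = [r for r in relevant if r.reliability >= min_reliability]
    return sorted(credible, key=lambda r: r.reliability, reverse=True)

reports = [
    Report("UAV", "armour movement", 0.9),
    Report("OSINT", "weather", 0.8),           # irrelevant to the question
    Report("SIGINT", "armour movement", 0.4),  # relevant but weak source
]
print(intel_cycle(reports, {"armour movement"}))
```

Even this toy version shows why collection superiority alone guarantees nothing: the value lies in the filtering and grading rules, and in who sets them.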

Regarding AI, for example, the People’s Republic of China (PRC) is ahead of the U.S. in setting the benchmark for quantum supremacy by combining AI and quantum computing into a quantum intelligence service.[12] Processing and evaluating information and disseminating intelligence on time are preconditions for decision-making superiority; they allow the creation of multiple dilemmas for an opponent. However, technology is both an opportunity for, and a potential threat to, the respective superior power. It presupposes a capable industrial and technological base, and it bears the risk of creating dependence on technology and its weak points.[13]

Threat

Relying on high technology risks creating a kind of defence-industrial dependence on other global powers. To a certain extent, this applies to all major powers–never mind small states. The PRC, for example, is attempting to address this issue through its “Made in China 2025” state plan.[14] Whereas substantial progress toward self-reliance has been made in shipbuilding, aerospace, and missile technology, the PRC still falls short in, for example, producing semiconductors. Since semiconductors are vital, especially for computing, AI, and cyber operations, this is a serious shortfall when preparing for war or ensuring deterrence; in a (sustained) war, it becomes an existential threat.

Even though the U.S. has built the most developed defence-industrial base, it shares, interestingly, the same shortfall, namely dependence on semiconductor imports.[15] Measures like the CHIPS and Science Act have been taken; however, it will take years to achieve full self-reliance.[16] In consequence, the U.S., as the leading Western military power, runs the same potential risk as its main global adversary, the PRC. This is particularly striking since the so-called Western way of warfare (in fact, the U.S. way) is based on possessing the technological edge, especially in the long run.

Even though the U.S. has built the most developed defence-industrial base, it shares, interestingly, the same shortfall, namely dependence on semiconductor imports.

Losing its strategic technological advantage would inevitably force the U.S. to fundamentally adapt its doctrinal approach to warfare. This stands in stark contrast to the PRC, which might simply fall back on its default doctrines and strategic approaches to warfare, based on quantitatively overwhelming an opponent combined with a willingness to bear substantial human and materiel losses.[17]

Besides–and this is valid for all powers–high technology is a vulnerability in itself. AI systems can be hacked, influenced, and even turned against their human masters. There may well be technical solutions to these issues; nevertheless, a certain risk will remain, despite defence-industry claims of game-changing AI security.[18]

Consequently, and as with many developments in military technology throughout history, AI bears risks for all belligerents. Contrary to previous developments, however, AI is not solely a military asset that dwells in the military domain. Private corporations are critical players in modern conflict, often connected to battlefields via cyberspace. In times when global companies’ annual financial surplus exceeds the (defence) budgets of major European countries, one must not neglect these “independent non-state” actors.[19] There are several reasons why US President Trump surrounds himself with so-called “TechBros”.[20] Elon Musk, Mark Zuckerberg, and Jeff Bezos undoubtedly supported his campaign financially. Even more importantly, their respective companies collect vast amounts of global data and wield profound influence over public opinion.

Like mercenaries in past times, these actors might go (or defect) to the highest bidder, pose an inherent risk to data and AI security, and, ultimately, further monetise warfare. Those who own cyberspace as a means of transporting and storing data, possess the data itself, and have access to AI-assisted decision-making systems will eventually influence (or dictate) the course and outcome of wars. Tech giants could conceivably transform into warlords, and war could become a means to an economic end.

Opportunity

On the flip side, embracing emerging and disruptive technologies offers several undeniable advantages. Algorithms and machines can work almost incessantly and thus replace the limiting (human) factor in running the Intel cycle and the decision-making process. Military approaches and doctrines can be automatically assessed, patterns in human decision-making understood via automated Operations Research (OR),[21] likelihood and probability recognised, and plausible decisions forecasted. AI can increase the speed of these processes and optimise those that are, as things stand, restricted by limited human processing capacity.

Military approaches and doctrines can be automatically assessed, patterns in human decision-making understood via automated Operations Research, likelihood and probability recognised, and plausible decisions forecasted.
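What “patterns recognised, plausible decisions forecasted” can mean at its very smallest scale is sketched below. Real OR models are game-theoretic or simulation-based and vastly richer; this fragment, with fictional data and hypothetical option names, merely shows the principle of deriving course-of-action probabilities from an actor’s observed record.

```python
from collections import Counter

def forecast_course_of_action(history: list[str],
                              options: list[str]) -> dict[str, float]:
    """Hypothetical sketch of one automated-OR building block: estimate
    how likely each course of action is, given an actor's past choices.
    Uses plain frequency counting with Laplace (add-one) smoothing so
    options never observed before still keep a non-zero probability --
    a nod to the fact that actors can always do something new.
    """
    counts = Counter(history)
    total = len(history) + len(options)  # one pseudo-count per option
    return {o: (counts[o] + 1) / total for o in options}

# Observed past behaviour of a (fictional) actor at comparable junctures:
past = ["escalate", "escalate", "negotiate", "escalate", "freeze"]
probs = forecast_course_of_action(
    past, ["escalate", "negotiate", "freeze", "withdraw"])
for option, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{option:>10}: {p:.2f}")
```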

A full and unrestricted employment of AI could upend the strategic paradigm, much as the advent of nuclear weapons once did: a technological leap forward that, according to the U.S., saved hundreds of thousands of lives while still killing tens of thousands.[22] Like the nuclear bomb, AI could contribute to a strategic parity of the sort that once led to relative peace and stability. Additionally, if handled appropriately, AI has the potential to add sanity to the (irrational) human endeavour of war.[23]

Indeed, there will still be no certainty. Human behaviour is unpredictable; it is influenced by very human characteristics such as pride, fear, fury, and anxiety. Humanity is full of irrationality. Whereas European leaders, wedded to the policy of “change through rapprochement”, assessed Russian aggression as improbable because it would be irrational, Russia and President Vladimir Putin see themselves in an inevitable struggle against Western imperialism and for national autonomy.[24] The so-called global West’s assumptions, however, were emotional rather than rational.

AI follows the rational. Russia, and especially Vladimir Putin, knows and follows the drill: Russia’s policymakers sit down every time the West stands up.[25] AI registers Russia’s seemingly endless record of broken treaties and agreements–including, but not limited to, the Helsinki Accords, the UN Charter, the Intermediate-Range Nuclear Forces Treaty, the Budapest Memorandum, and the Minsk agreements. Contrary to human beings, AI acknowledges and considers this proven record of broken accords.[26] AI concludes without emotion or hope; it applies Operations Research and best practice at high speed. It does so on the strategic level, where the employable means are far broader than on the operational or tactical level. The lower the level of command, the more limited the field of options becomes; yet this constrains human assessments even more than AI-facilitated decision-making. Consequently, AI has the potential to enable superiority in decision-making at all levels of command.

AI concludes without emotion or hope; it applies Operations Research and best practice at high speed.

So, if commanders and decision-makers require Intel quickly, AI-supported processes will be key. Their speed, adaptability, and capacity to process vast amounts of data in real time may, if harnessed systematically, provide modern forces with a significant operational edge. However, the fundamental nature of warfare–its uncertainty, unpredictability, and potential for disruption–demands built-in redundancies.[27] Just as human forces can be misled or overwhelmed, technological systems can be neutralised, compromised, or exploited by adversaries. Therefore, analogue fallback options will always be necessary to ensure resilience in the face of digital failure. While timely and accurate Intel is vital, once the fog of war descends, even delayed or partial intelligence is preferable to none at all. Consequently, even if AI supports decision-making, there will be an inherent need for trained and educated staff officers capable of operating when technology fails.
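The redundancy argument can also be expressed as a simple architectural pattern: try the fast, AI-supported path, and degrade gracefully to the analogue process rather than producing no picture at all. The sketch below is purely illustrative; the function names and placeholder arithmetic stand in for processes that are, in reality, organisational rather than computational.

```python
import random

def ai_estimate(sensor_feed: list[float]) -> float:
    """Stand-in for an AI-supported fusion system; may fail or be jammed."""
    if random.random() < 0.2:  # simulated outage, jamming, or compromise
        raise ConnectionError("data link lost")
    return sum(sensor_feed) / len(sensor_feed)

def staff_estimate(last_known: float) -> float:
    """Stand-in for the analogue fallback: a trained staff officer's
    appreciation based on the last confirmed picture. Slower and
    coarser, but available when the technology is not."""
    return last_known

def situation_picture(sensor_feed: list[float],
                      last_known: float) -> tuple[float, str]:
    """Prefer the fast AI estimate; degrade gracefully to the analogue
    process instead of producing no picture at all."""
    try:
        return ai_estimate(sensor_feed), "AI-fused, real-time"
    except ConnectionError:
        return staff_estimate(last_known), "manual appreciation, delayed"

value, source = situation_picture([0.7, 0.8, 0.75], last_known=0.6)
print(f"estimate={value:.2f} ({source})")
```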

The Art of the Deal…

Policies for AI employment are often concern-driven rather than oriented toward technological (r)evolution–especially in the global West. One of the main concerns revolves around the ethical contemplation that AI lacks morality and might therefore act against so-called Western standards. Most European nations are members of the European Union (EU), which defines itself as a union of values: human dignity, liberty, democracy, equality, the rule of law, and adherence to human rights are enshrined in Article 2 of the Treaty on European Union.[28] Whereas this treaty binds only EU member states, one may assume that most Western nations subscribe to these values, at least verbally. European nations, then, have to adhere to these values, especially in warfare. This is particularly true if one accepts that military employment is ultimately meant to project a nation’s or community’s values and morals.[29]

The argument centres on the ethical consideration that AI lacks moral judgment. AI has no soul, no consciousness, and no sense of (moral) right or wrong. An algorithm follows pre-set rules within a defined framework. However, AI is more complex than commonly assumed. Decisions are not taken binarily; AI works with assessed probabilities. Moreover, it operates within a framework and rules that humans set. In short, the expected and acceptable outcome depends on the quality, conciseness, and clarity of the framework and the rules. Finally, there is the question of accountability for machine failure and violations of the laws of war.

AI works with assessed probabilities. Moreover, it operates within a framework and rules that humans set.
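How human-set rules can wrap a probabilistic output is easy to illustrate. In the hypothetical sketch below, the AI contributes only an assessed probability; which actions are permissible at which confidence, and where a human must remain in the loop, is fixed in advance by humans. All target classes and thresholds are invented for illustration.

```python
def dispatch(target_class: str, p_hostile: float) -> str:
    """Hypothetical rule framework around a probabilistic AI output.
    The model only supplies `p_hostile`, an assessed probability; what
    may happen at which confidence, and when a human must decide, is
    fixed in advance by humans -- the 'framework and rules' the text
    refers to. All classes and thresholds here are illustrative.
    """
    RULES = {
        # class:          (autonomous-action threshold, human gate?)
        "radar emitter":  (0.95, False),  # machine-speed response allowed
        "vehicle":        (0.90, True),   # human confirmation required
        "person":         (1.01, True),   # never autonomous: unreachable
    }
    threshold, needs_human = RULES[target_class]
    if p_hostile < 0.5:
        return "no action"
    if needs_human or p_hostile < threshold:
        return "refer to human operator"
    return "autonomous engagement permitted"

for case in [("radar emitter", 0.97), ("vehicle", 0.97), ("person", 0.99)]:
    print(case, "->", dispatch(*case))
```

The design point is that the quality of the outcome hinges on the clarity of the human-written rule table, not on the algorithm’s inner workings.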

Nevertheless, as superficially reasonable as it seems, the argument is out of context. It assumes that human beings take conscious, reasonable, ethically and morally correct decisions. As a matter of fact, we humans have an abysmal record of adhering to rules, agreements, and values developed by and for ourselves. Humankind defines ethics and morals; and humankind abandons, disregards, and eventually redefines ethics and morals as it deems appropriate or, rather, useful. Merely within the last 100 years, and without the support of AI, humankind initiated and, at times, morally justified colonisation, slavery, human trafficking, child abuse, genocide, ethnic cleansing, two world wars, more than 100 further wars,[30] and the invention plus the (twofold) employment of nuclear weapons. As empiricism illustrates, and given this (even tightly framed!) historical record, it appears hypocritical to reject AI on grounds of ethical and moral apprehension.

Additionally, there is the question of accountability. Human-made law applies to human beings, and legal accountability in war matters because it protects lives, delivers justice, upholds international norms, and helps prevent future violence.[31] Yet algorithms cannot be held accountable for violations of (international) law. From the author’s point of view, what appears to be a show-stopper for AI employment in warfare is another fig leaf for not employing contemporary technology. Indeed, there is accountability. Certainly, AI might decide on life or death on a human’s behalf; however, so does, at the end of the day, a smart bomb. No doubt, the shooter/operator/employer is to be held accountable for the effect caused. Voices in the Western world that blame the defence industry or weapon-delivering states for harm done in war are both irrational and a minority. Yet there is a connection from arms production to deaths on the ground, and, unlike industrialists, commanders and soldiers in the field are often legitimately held accountable for their deeds. One can contextualise this divergence in accountability through the humble landmine: mines do not care about ethics and morals. Moreover, not all countries have even signed the related Ottawa Convention, and some have since left it. Nevertheless, neither mines nor mine producers are held legally accountable for more than 4,500 annual casualties worldwide; the forces emplacing the mines are those held liable.

Moreover, even if AI makes decisions, it does so within a framework set by humans. Whether the human is in or out of the loop, there is still, at a certain point, human oversight and “rule setting.” AI employment does not automatically mean we will see a Terminator-style “Skynet” scenario.[32] Consequently, it remains questionable whether the probability of losing control over an algorithm is higher than that of suffering from ungovernable subordinates.

Even if AI makes decisions, it does so within a framework set by humans.

Furthermore, belligerent powers’ comparatively low adherence to Western-style standards must evoke considerable concern. The aforementioned values and concerns do not pre-emptively constrain their thoughts or actions regarding AI. Both the PRC and Russia have nevertheless introduced ethical rules for AI employment, yet from a different perspective and with a different purpose: these antagonists set rules to align AI with political purposes and to keep it from endangering the political leadership. Ethics and morals are used to ensure a regime’s survivability.[33],[34] The wording might be the same; the rationale it follows is not.

Use It or Lose It

The integration of AI into warfare presents both strategic opportunities and existential risks. While adversaries like the PRC and Russia adopt AI with few (or simply different) ethical reservations, the Western community of nations faces constraints rooted in legal and moral principles, potentially leading to operational disadvantages. Despite advances in reconnaissance and AI, the notion of a fully “transparent battlefield” remains largely illusory–data alone does not equate to actionable intelligence. Human intent, shaped by unpredictable behaviour, still eludes even the most advanced systems. Yet from today’s vantage point, the battlefield increasingly appears transparent; this perception will be challenged anew as technology evolves.

Human intent, shaped by unpredictable behaviour, still eludes even the most advanced systems.

Intel–not raw data–remains the foundation for decision-making, underscoring the continuing importance of human interpretation. AI can accelerate data processing, but it must be framed within human-made ethical and accountability boundaries. Furthermore, reliance on high-tech systems introduces vulnerabilities; they can fail, be compromised, or be exploited. Analogue fallback options therefore remain vital to operational resilience. As AI reshapes command and control, its success will depend on how well humans define its rules and integrate it into established military doctrines. Ultimately, AI may offer an edge–but only when balanced with enduring principles of human oversight, ethical governance, and redundancy in system design.

 


Matthias Wasinger is a Colonel (GS) in the Austrian Armed Forces. He holds a Magister in Military Leadership (Theresan Military Academy), a master’s degree in Operational Studies (US Army Command and General Staff College), and a PhD in Interdisciplinary Studies (University of Vienna). He has served both internationally and nationally at all levels of command. He is also the founder and editor-in-chief of The Defence Horizon Journal. The views expressed in this paper are the author’s alone and do not reflect or relate to any of the abovementioned organisations.


[1] Hybrid CoE – The European Centre of Excellence for Countering Hybrid Threats, “Hybrid Warfare – Hybrid CoE – the European Centre of Excellence for Countering Hybrid Threats,” last modified April 08, 2025, https://www.hybridcoe.fi/hybrid-warfare/.

[2] The Economist, “The Added Dangers of Fighting in Ukraine When Everything Is Visible,” The Economist, February 06, 2025, accessed May 23, 2025, https://www.economist.com/europe/2025/02/06/the-added-dangers-of-fighting-in-ukraine-when-everything-is-visible.

[3] Botakoz Kazbek, “The Turkish “All-Inclusive” Package of Military Service and the Nagorny Karabakh Case 2020,” TDHJ.org, January 10, 2022, accessed May 23, 2025, https://tdhj.org/blog/post/turkish-all-inclusive-military-service/.

[4] Giulio Douhet, The Command of the Air, USAF Warrior Studies (Washington, D.C.: Air Force History and Museums Program, 1942), 10.

[5] AUSA, “The Transparent Battlefield: Combat Training Centres Sharpen Unit Tactics for High-Tech Fight,” last modified June 25, 2024, https://www.ausa.org/articles/transparent-battlefield-combat-training-centers-sharpen-unit-tactics-high-tech-fight.

[6] Thierry Balzacq and Ronald R. Krebs, The Oxford Handbook of Grand Strategy, 1st ed. (Oxford: Oxford University Press, 2021).

[7] US Naval War College, “LibGuides: Intelligence Studies: Types of Intelligence Collection,” last modified May 23, 2025, https://usnwc.libguides.com/c.php?g=494120&p=3381426&utm.

[8] Gareth Jones, “Russia Says Last Ukrainian Troops Expelled from Kursk Region, Kyiv Denies Assertion,” April 26, 2025, accessed April 27, 2025, https://www.reuters.com/world/europe/putin-hails-end-ukraines-kursk-incursion-with-expulsion-last-ukrainian-troops-2025-04-26/.

[9] Stig K. Andersen, “Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference,” Artificial Intelligence 48, no. 1 (1991), https://doi.org/10.1016/0004-3702(91)90084-W.

[10] Christian Nitzl et al., “The Use of Artificial Intelligence in Military Intelligence: An Experimental Investigation of Added Value in the Analysis Process” (2024).

[11] Antonella C. Vilasi, “The Intelligence Cycle,” Open Journal of Political Science 08, no. 01 (2018), https://doi.org/10.4236/ojps.2018.81003.

[12] Jonathan Ward, China’s Vision of Victory (The Atlas Publishing and Media Company, 2019), 86.

[13] Richard A. Poisel, Information Warfare and Electronic Warfare Systems (Artech House, 2013), 26-50.

[14] Vladimir Zamorin, “Contribution of the Chinese Military-Industrial Complex to the “Made in China 2025” State Plan,” Far Eastern Affairs 51, no. 002 (2023), https://doi.org/10.21557/FEA.86159334.

[15] Sujai Shivakumar and Charles Wessner, Semiconductors and National Defence: What Are the Stakes? (2022), https://www.csis.org/analysis/semiconductors-and-national-defense-what-are-stakes.

[16] Kashif Anwar, “The Battle for Global Chip Dominance: The U.S. “CHIPS and Science Act”,” TDHJ.org, August 22, 2022, accessed May 24, 2025, https://tdhj.org/blog/post/usa-chips-science-act/.

[17] US Army War College – Publications, “Adapting US Defence Strategy to Great-Power Competition,” last modified May 24, 2025, https://publications.armywarcollege.edu/News/Display/Article/4129357/adapting-us-defense-strategy-to-great-power-competition/.

[18] Pillar Security, “Pillar Security Raises $9M to Help Enterprises Build and Run Secure AI Software,” Yahoo Finance, April 16, 2025, accessed May 23, 2025, https://finance.yahoo.com/news/pillar-security-raises-9m-help-131500361.html?guccounter=2.

[19] Johns Hopkins SAIS, “How Private Tech Companies Are Reshaping Great Power Competition,” last modified February 21, 2024, https://sais.jhu.edu/kissinger/programs-and-projects/kissinger-center-papers/how-private-tech-companies-are-reshaping-great-power-competition.

[20] Jeffrey Goldfarb, “Breakingviews – Silicon Valley Daredevils Ride or Die with Trump,” Reuters Media, July 18, 2024, accessed May 24, 2025, https://www.reuters.com/breakingviews/silicon-valley-daredevils-ride-or-die-with-trump-2024-07-17/.

[21] “OR is defined as the application of scientific and mathematical methods to provide decision-makers with a quantitative basis for decisions regarding operations under their control. This approach focuses on enhancing the efficiency and performance of manpower, machinery, equipment, and policies within military operations.” A. Lakshani Pramodhya, “The History of Operations Research,” OR Society, June 21, 2022, https://ors.soc.pdn.ac.lk/blog/history-of-operation-research.

[22] Joseph H. Paulin, “America’s Decision to Drop the Atomic Bomb on Japan,” accessed May 24, 2025, https://repository.lsu.edu/cgi/viewcontent.cgi?article=4078&context=gradschool_theses&utm.

[23] Alex Cope, “When AI Meets the Laws of War | IE Insights,” IE Insights, October 03, 2024, accessed May 24, 2025, https://www.ie.edu/insights/articles/when-ai-meets-the-laws-of-war/.

[24] Der Pragmaticus, “Putins Hass Auf Den Westen | Der Pragmaticus,” Der Pragmaticus Verlag AG, February 17, 2023, accessed May 24, 2025, https://www.derpragmaticus.com/r/russland-westen?.

[25] Dermot Nolan, “Through Audacity and Arms: How Europe Can Restrain Russia,” TDHJ.org, March 04, 2024, accessed May 24, 2025, https://tdhj.org/blog/post/audacity-arms-europe-russia/.

[26] Smith K. Khare et al., “Emotion Recognition and Artificial Intelligence: A Systematic Review (2014–2023) And Research Recommendations,” Information Fusion 102 (2024), https://doi.org/10.1016/j.inffus.2023.102019, https://www.sciencedirect.com/science/article/pii/S1566253523003354.

[27] Alexander Schäbler, “1+1 ≠ 2: Digital Friction, Uncertainty, and the Limits of Technological Determinism,” TDHJ.org, May 22, 2025, accessed May 23, 2025, https://tdhj.org/blog/post/digital-friction-technology-determinism/.

[28] European Union Treaties, “Consolidated Version of the Treaty on European Union,” European Union Treaties, last modified May 15, 2025, https://www.legislation.gov.uk/eut/teu/article/2.

[29] Matthias Wasinger, “A Revolution in Military Ideas: The Continuing Importance of the Enlightenment in an Age of Technological Autonomy,” The Strategy Bridge, December 09, 2019, accessed May 24, 2025, https://thestrategybridge.org/the-bridge/2019/12/9/a-revolution-in-military-ideas-the-continuing-importance-of-the-enlightenment-in-an-age-of-technological-autonomy.

[30] Again, depending on the set rules, the numbers range from 100 to 1,000.

[31] International Committee of the Red Cross, “Investigating and Prosecuting Serious Violations: An Important Tool Against Impunity,” last modified October 14, 2024, https://www.icrc.org/en/statement/79-UN-crimes-against-humanity-investigating-and-prosecuting-serious-violations-tool-against-impunity.

[32] James Black et al., “Strategic Competition in the Age of AI: Emerging Risks and Opportunities from Military Use of Artificial Intelligence,” RAND, 2024, 6-8, accessed May 23, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3200/RRA3295-1/RAND_RRA3295-1.pdf.

[33] Masha Borak, “Inside Safe City, Moscow’s AI Surveillance Dystopia,” WIRED, February 6, 2023, accessed May 23, 2025, https://www.wired.com/story/moscow-safe-city-ntechlab/.

[34] Samuel Yang and Chris Fung Bill Zhou, “AI Ethics: Overview (China),” last modified January 20, 2025, https://www.chinalawvision.com/2025/01/digital-economy-ai/ai-ethics-overview-china/?.
