AI In Cyber And The UN’s Role In Competitive Global Disorder

Abstract: AI systems have transformed cyber defence by enabling self-protecting, self-healing, and predictive defence systems, but they also transform cyberattacks by opening new attack vectors and supporting advanced cyber offensive capabilities. This article examines how AI is leveraged in cyberattacks and operations, with selected examples of the “big four” states adversarial to the Western alliances—China, Russia, Iran, and North Korea. The use of ever-improving AI-augmented cyberattack capabilities increases the risks of escalation and destabilisation in international peace and security. The UN has addressed several gaps in multilateral governance, while like-minded minilateral coalitions are instrumental for norm-setting and operational guardrails. However, these laudable efforts will have a limited impact as long as the main AI cyber powers are either unwilling or unlikely to allow their AI capabilities to be meaningfully constrained.

Problem statement: What are the evidence-based uses of AI-enhanced cyber capabilities that threaten international security?

So what?: Given the disruptive threat the malicious use of AI poses to international peace and security, governments need to advance international guardrails for AI-enhanced cyber capabilities by establishing normative standards, operational safeguards, and accountability mechanisms to mitigate connected threats, ensure compliance with international law, and promote responsible conduct—but that will be extremely difficult in an increasingly competitive global disorder.


AI in Cyber Defence

In his book, “Superintelligence,” Nick Bostrom argues that if humanity develops human-level artificial intelligence (AI), such a system could rapidly improve itself into a superintelligence that gains a decisive strategic advantage over humanity, exceeds human cognitive performance, and acts in ways adversarial to human values. Autonomous in nature, it would be difficult to control.[1]

The race to develop a type of AI that exhibits human-like cognitive abilities, or artificial general intelligence (AGI), has been spurred by recent advancements across AI models. Currently available tools, while far from the spectre of superintelligence, fulfil an impressive array of functions, including natural language processing, computer vision, predictive analytics, autonomous decision-making, anomaly detection, and code generation. Many of these functions revolutionise how we live and work.

The race to develop a type of AI that exhibits human-like cognitive abilities, or artificial general intelligence, has been spurred by recent advancements across AI models.

Critical services on which we rely every day—from power grids to hospitals, financial systems to communications networks—are underpinned by, powered by, or dependent on software. Thus, the digital resilience of critical infrastructure can be improved by AI-enhanced self-protecting systems that automatically scan for malicious code, detect adversarial activities, and, increasingly, automate response. As a proof of concept, in August 2025, Microsoft’s Project Ire demonstrated an autonomous AI system that reverse-engineers software to detect and block malware without human input, achieving high accuracy rates; in other words, it blocks highly advanced forms of malware automatically.[2]

AI already scales protection capabilities, enabling more efficient responses through predictive threat detection, real-time anomaly identification, and automation, while reducing errors and speeding up incident response. Another AI application for critical infrastructure is self-healing systems that continuously monitor for, identify, and remediate vulnerabilities. AI can also augment individual activities such as secure coding, helping to rewrite insecure legacy code at scale to eliminate entire classes of vulnerabilities, and making security expertise accessible through AI assistants. “Agentic AI” promises even further advances and autonomy for AI agents—though that autonomy creates its own risks and potential vulnerabilities.[3]
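The anomaly-identification step such systems rely on can be illustrated with a deliberately minimal sketch in Python: learn a statistical baseline from telemetry, then flag outliers for automated triage. The function name, telemetry values, and threshold below are illustrative assumptions, not any vendor’s implementation; production systems use far richer models than a z-score.

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=2.5):
    """Return indices of samples whose z-score exceeds the threshold.

    A toy stand-in for the statistical core of AI-based anomaly
    detection: learn a baseline from observed telemetry, then flag
    large deviations for automated response.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:  # perfectly uniform telemetry: nothing to flag
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]

# Hypothetical requests-per-minute telemetry: steady baseline, then a spike
telemetry = [100, 98, 102, 101, 99, 100, 97, 103, 100, 10000]
print(detect_anomalies(telemetry))  # → [9]: only the spike is flagged
```

Real deployments replace the z-score with learned models and feed flagged events into automated response playbooks, but the defensive logic (baseline, deviation, action) remains the same.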

While AI is transforming cybersecurity and helping defenders, it also opens new attack vectors and is increasingly used by malicious state and criminal actors. AI systems are complex and vulnerable to evasion (deceptive manipulation of AI inputs), data poisoning, and the exploitation of traditional software flaws, which can deceive, manipulate, and compromise them, rendering AI tools ineffective or even harmful.[4] Particularly as AI tools, including AI agents, are often black boxes with little transparency into their algorithms or operations, the potential for undiscovered vulnerabilities, unintended harmful behaviour, and exploitation grows rapidly. Moreover, the data used to train AI systems can be corrupted or intentionally poisoned, which in turn can cause malfunctions. General data poisoning reduces overall model performance, while targeted poisoning manipulates the model’s output in specific cases, for example, to spread misinformation in large language models (LLMs). In one case, Russian actors “groomed” AI chatbots to repeat propaganda by injecting fabricated articles into the digital ecosystem. Furthermore, LLMs have been found to leak large troves of personally identifiable information if not properly configured or protected.[5] As these systems are incorporated into digital infrastructure, including critical systems, AI tools themselves must be designed, built, and delivered with security as a top priority.[6]
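The difference between general and targeted poisoning can be made concrete with a toy sketch (the classifier, feature values, and labels below are invented for illustration): a single mislabelled training point placed at an attacker-chosen input flips the model’s output there, while predictions elsewhere remain correct, which is what makes targeted poisoning hard to spot in aggregate accuracy metrics.

```python
def nearest_neighbour_label(train, query):
    """1-NN classifier: return the label of the training point closest to query."""
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[1]

# Clean training set: invented two-feature samples with two labels
clean = [((0, 0), "benign"), ((1, 0), "benign"), ((0, 1), "benign"),
         ((10, 10), "malicious"), ((11, 10), "malicious"), ((10, 11), "malicious")]

target = (10.5, 10.5)
print(nearest_neighbour_label(clean, target))          # → malicious

# Targeted poisoning: one mislabelled point placed exactly at the target
poisoned = clean + [((10.5, 10.5), "benign")]
print(nearest_neighbour_label(poisoned, target))       # → benign (flipped)
print(nearest_neighbour_label(poisoned, (0.5, 0.5)))   # → benign (unchanged)
print(nearest_neighbour_label(poisoned, (11.5, 10)))   # → malicious (unchanged)
```

General poisoning would instead scatter mislabelled points across the training set, degrading accuracy everywhere rather than at one chosen input.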

AI in Cyber Offence

AI tools are primarily incorporated across four vectors: phishing and social engineering, malware development and deployment, disinformation, and reconnaissance.[7] Automated systems increase the capacity for scale, speed, and efficiency, while lowering barriers for attackers to design and deploy offensive cyber capabilities, even with limited programming abilities, technical skills, or language fluency. A September 2025 MIT study found that 80% of the ransomware attacks examined used AI, ranging from AI-generated phishing campaigns to malware creation.[8], [9]

AI tools are primarily incorporated across four vectors: phishing and social engineering, malware development and deployment, disinformation, and reconnaissance.

Phishing and Social Engineering

LLMs are trained to generate human-like text, making them well-suited for preparing directed, persuasive messages that mimic distinct communication styles and evoke familiarity, urgency, emotion, or authority, while avoiding common phishing signs that raise suspicion, such as grammatical mistakes or unnatural phrasing.[10], [11] LLMs are routinely used in phishing campaigns and other social engineering schemes, increasing the likelihood of deceiving recipients and deploying attacks at scale. For instance, Iranian state-backed actors from APT42, a threat group conducting cyber espionage and surveillance,[12] leveraged LLMs to create legitimate-looking phishing content.[13] The Iranian campaigns against foreign targets used automated text-generation and editing for translation and for tailoring messages to selected sectors and locations.[14] LLMs are not employed in isolation, but integrate open-source tools and tactics to breach public and private sector entities more effectively.[15]

AI enables text-to-speech generators to mimic voices based on brief recordings, making them a potent tool for vishing (voice phishing), which uses cloned or synthetic audio to facilitate convincing scams. Russian-backed groups have used voice cloning, with reported campaigns employing AI-powered vishing to solicit the sharing of two-factor authentication codes.[16] Automated voice-cloning scams have targeted executives and governments worldwide. In early 2025, fraudsters cloned the voice of Italian Defence Minister Guido Crosetto to coax business leaders into transferring nearly one million euros in ransom.[17] In another case, a Marco Rubio impostor used an AI-generated voice to deceive high-level officials, contacting them through the encrypted messaging platform Signal and exposing vulnerabilities in identity verification systems that rely on voice recognition.[18]

AI enables text-to-speech generators to mimic voices based on brief recordings, making them a potent tool for vishing (voice phishing), which uses cloned or synthetic audio to facilitate convincing scams.

Employed in “honeypot” websites, AI models are prompted to create fake login forms that exfiltrate credentials, embed malware or exploit kits into downloadable resources, and generate convincing error messages that direct visitors to call a fake support line.[19]  FIN7 is a Russian advanced persistent threat (APT) group that has attacked over one hundred organisations in the United States and Europe, including government bodies. In a campaign observed in 2024, these cybercriminals used adult-themed AI Deepnude generator websites as a honeypot to install malware on the devices of unsuspecting visitors.[20]

AI-enhanced social engineering tactics extend to proxy job interviewing. North Korean campaigns are infamous for drafting inauthentic job applications to embed operatives in Western companies, allowing foreign cybercriminal groups to gather intelligence and generate profit for the North Korean regime.[21] Prompted LLMs, such as Anthropic’s models, can create elaborate false identities with convincing professional track records. The AI is used not only in the application process but also during interviews to complete technical and coding assessments; once hired, operatives use AI tools to deliver the assigned technical work while furthering the infiltration of the victim organisation.[22]

Malware Development and Deployment

Automated malware generation allows threat actors to design code without the previously required advanced technical knowledge,[23] and AI tools can further recommend new attack vectors to consider for deployment.[24] According to Microsoft, malware variants generally use simple sets of logical rules to recognise and adapt to operating environments, such as checking for time zones, to avoid detection.[25] AI models automate this task and enable malware to conceal its presence and malicious intent more effectively, adapting to diverse environments and learning from previous campaigns and security tools. As a proof of concept, reinforcement learning was leveraged to generate malware that evaded Microsoft Defender.[26] Outside controlled settings, the Ukrainian national computer emergency response team, CERT-UA, detected AI-powered malware attacks targeting the security and defence sector in July 2025. The malware strain “LameHug,” attributed to the Russian-backed APT28 group, utilised an LLM to generate commands based on textual descriptions after autonomous infiltration.[27]

Outside controlled settings, the Ukrainian national computer emergency response team, CERT-UA, detected AI-powered malware attacks targeting the security and defence sector in July 2025.

Agentic AI tools can provide both technical advice and active operational support for attacks that would otherwise have required a team of operators.[28] Security researchers at New York University published a study in September 2025, demonstrating a prototype of LLM-orchestrated ransomware.[29] Since these tools can adapt to defensive measures, such as malware detection systems, in real time, their advanced properties make defence and enforcement increasingly difficult.[30] The use of externally sourced malware or code developed without human oversight or review can introduce unpredictable behaviours, as operators may not have full visibility into how the malware behaves, how long it will persist, or how other actors could exploit it.

Anthropic’s threat intelligence reporting further showed that AI could bolster the organised Ransomware-as-a-Service (RaaS) model, enabling the development, marketing, and distribution of ransomware with advanced evasion capabilities. Foreign ransomware operations constitute a serious and growing threat to national economies and have been recognised by UN member states as a threat to international security due to their increasing frequency, scale, and severity, causing harm and disrupting essential public services.

Disinformation

Due to its ability to emulate human language and generate vast troves of plausible-looking information, as well as high-fidelity audio and visual content, AI is aiding information operations, including by generating inauthentic accounts.[31] Known as “sock puppet accounts,” these fake profiles can hide users’ identities, engage in comment brigading, and organise online attacks aimed at altering perceptions.[32] Since the start of the full-scale invasion of Ukraine, Russian-linked groups have employed AI-generated deepfake videos in disinformation campaigns. In March 2022, pro-Russian groups hacked a Ukrainian media website and posted a deepfake video of President Volodymyr Zelenskyy announcing his resignation and calling on soldiers to surrender.[33] The video was briefly shown on Ukrainian news websites and on national television after Russia-affiliated hackers compromised the channel’s display; the inauthentic content was quickly detected and removed. President Zelenskyy responded with a counter video in which he reaffirmed Ukraine’s commitment to defending against the Russian invasion.[34] Disinformation campaigns, such as the Storm-1679 network and the Doppelganger campaign, impersonate reputable news outlets and journalists to spread false narratives, exploiting verification challenges in fast news cycles during political events and high-stakes moments.[35] For example, in February 2025, Storm-1679 created a fake E! News segment claiming USAID paid celebrities to visit Ukraine. The video—a complete fabrication—was seen by millions of viewers before fact-checkers could respond.[36]

Actors affiliated with the People’s Republic of China (PRC) have been documented conducting information operations using AI-created content, focusing on the U.S. political situation and the “five poisons”—Taiwanese and Tibetan independence, Xinjiang separatism, the Falun Gong, and the Chinese democracy movement. Chinese actors posted AI-generated audio clips of a Taiwanese presidential candidate across social media platforms to influence voters ahead of the 2024 presidential elections. PRC-affiliated social media accounts have also impersonated US voters. Chinese sock puppet accounts posted AI-generated videos, memes, and infographics promoting divisive issues, including drug consumption, immigration challenges, and racial tension.[37] Similarly, Iranian information operations teams have been observed manipulating political content and boosting engagement in influence campaigns using LLMs.[38]

Actors affiliated with the People’s Republic of China have been documented conducting information operations using AI-created content, focusing on the U.S. political situation and the “five poisons”—Taiwanese and Tibetan independence, Xinjiang separatism, the Falun Gong, and the Chinese democracy movement.

Reconnaissance and Espionage

Reconnaissance and cyberespionage are considered a grey area—not offensive actions unless they cross certain thresholds, such as interfering with critical infrastructure and democratic processes—and provide states with a strategic advantage. AI models have been making strides in this field by enabling automated data collection and analysis. LLMs can help in domain reconnaissance by autonomously generating and executing commands for network scanning, service enumeration, and credential harvesting.[39] PRC-affiliated actors have been observed using LLMs to exploit access to centralised services, conduct domain reconnaissance on US government networks, including those of the U.S. military and U.S.-based IT service providers, and access a public database of US intelligence personnel. North Korean APT actors employed LLMs to support several phases of the attack lifecycle, including reconnaissance on target organisations and U.S. military operations in South Korea. Iranian actors undertook reconnaissance on experts, international defence organisations, and government organisations, focusing on topics related to the Iran-Israel conflict.[40], [41]  Concurrently, LLMs and real-time machine translation lower barriers to intelligence-sharing and operational coordination between the agencies and cybercriminal groups of Russia, China, Iran, and North Korea, enabling closer strategic alignment of the “big four” actors and proxies adversarial to Western alliances.[42]

Automating Response and Cyber Attack Capabilities

A natural extension of AI automating malware development and deployment, reconnaissance, and vulnerability discovery is states using AI tools to automate military and other offensive cyberattacks and campaigns, and, conversely, to automate cyber offensive responses. There has long been a narrative in the cyber operations community that cyber capabilities, even when used for defence, must move “at the speed of cyberspace” rather than at the slow and deliberative pace of traditional operations. While this argument has some visceral appeal, it risks removing the human from the loop and does not allow for the evaluation of important policy considerations, especially as the implications of this new technology are far from fully understood. AI, however, will increasingly enable such automated responses in both civil and military environments. In some ways, these issues mirror considerations in the realm of autonomous physical weapon systems, though the fast pace of AI adoption significantly raises the stakes. Moreover, given that AI tools largely lack transparency and are, as discussed above, themselves subject to attack, deploying these tools and granting them any degree of independence carries an outsized risk to stability. Although military use of AI tools for offensive operations is in its relative infancy, that evolution is imminent and raises the stakes for governments discussing the use of automated systems and how to control their potentially destabilising effects.

UN’s Role in AI Governance

As states contend for dominance in cyberspace, the risk that AI-enabled cyberattacks and operations lead to misattributed attacks and unintended escalation exacerbates geopolitical tensions. The rapid yet uneven development, distribution, and application of AI across countries and regions, and the many harmful uses of these models by threat actors, to the point that such attacks threaten international security, pose the question: which forum is best positioned to address these urgent issues?

The United Nations (UN) remains the most inclusive multilateral forum for international dialogue on peace and security, but arguably not the most action-oriented or agile. The First Committee, which deals with disarmament and international security issues, has introduced AI considerations in the Open-Ended Working Group (OEWG) on ICT Security under the framework of responsible state behaviour in cyberspace. The forum has been debating AI-enabled cyber threats—specifically those targeting digital systems—and how AI can be employed to prevent, mitigate, and defend against them. The Group’s final report notes that AI models, such as LLMs and generative models, have reduced barriers to undertaking malicious ICT activities, including through AI-generated malware and deepfakes. In a consensus decision, delegations agreed on the importance of ensuring the safety and security of AI systems and of specific considerations for the data used to train AI models.[43] The European Union, alongside several developing countries, proposed to prioritise emerging AI threats. In contrast, the US and Israel opposed in-depth elaboration of these issues, considering them outside the OEWG’s mandate and the scope of work for the succeeding format, the Global Mechanism.[44] While the implementation of agreed-upon norms for responsible behaviour is uneven, and violations are met with limited accountability, this inclusion of AI-related issues marks a step toward integrating AI into international security discussions. It also shows that AI considerations do not necessitate new mandates or the renegotiation of normative frameworks, but can be addressed within them when these instruments focus on behaviour and intent.

The United Nations remains the most inclusive multilateral forum for international dialogue on peace and security, but arguably not the most action-oriented or agile.

A shortcoming of the UN-agreed norms is their lack of “agency,” or capacity to act, and this agency falls to the international community acting through the United Nations Security Council (UNSC). The Council has convened high-level debates on the responsible use of AI in the military domain and plays a role in shaping collective responses. While the ability to reach agreement in the UNSC on a response to malicious cyber action is very limited, these meetings have emphasised the importance of guardrails and regulatory coherence regarding AI-powered weaponry. The UNSC work aims to contribute to the responsible application of AI in compliance with international law and to support peace processes, conflict prevention, peace operations, post-conflict recovery, and early warning.[45] Although the considerations remain more conceptual than operational and cover a broad range of weaponry beyond cyber capabilities, the UNSC has elevated the issue to the highest level of deliberation on international peace and security. The UNSC’s involvement adds political weight and coherence to parallel discussions in other UN bodies and connects AI ethics, security governance, and arms control. In parallel, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems (LAWS) addresses AI as a component that facilitates autonomy in weapons.[46] The GGE has agreed on several guiding principles, emphasising that the use of autonomous weapons must remain under human control and comply with international humanitarian law (IHL). The process has also exposed divisions between states advocating strict bans and those preferring voluntary guidelines.

The Council convened high-level debates on the responsible use of AI in the military domain and plays a role in shaping collective responses.

In September 2025, the UN announced it would form two AI-specific bodies. The Independent International Scientific Panel on AI, created by the UN General Assembly, will assess AI risks and opportunities in annual reports.[47] A Global Dialogue on AI Governance has been initiated to build consensus on AI governance.[48] UN Secretary-General António Guterres hailed these bodies as a means to “move from principles to practice” in setting global AI standards. This work should complement existing efforts, prominently at the OECD, the G7, and regional organisations. The newly established formats are consultative in nature; without any enforcement authority, their deliberations act as a confidence-building measure and a platform to promote global understanding.

The UN is a consensus-based organisation with many members who have different and sometimes competing interests. Creating meaningful supervision has always been difficult for the international community, not just in AI. For example, in the cyber realm, though states agreed on certain norms of behaviour for nation-state conduct in cyberspace a decade ago, the consensus requirement and a strained, polarised geopolitical environment have slowed further progress to a crawl, and accountability for violations of norms and international law is lacking across UN bodies. Moreover, while the UN is inclusive of views from all countries, AI investment tends to be highest in states following their own rules, often prioritising voluntary agreements and competition over convergence.[49]

Coalitions of the Willing

Norm-setting and operational guardrails for AI systems depend on national standards, and closest to shaping any guardrails are like-minded, coalition-based strategies originating in minilateral formats. The Five Eyes alliance, the cornerstone of Western intelligence and security cooperation between the U.S., the UK, Canada, Australia, and New Zealand, has addressed AI accountability through voluntary ethical AI frameworks, advisory councils, and self-regulation. The Five Eyes countries have issued guidance on deploying AI systems securely, particularly to ensure that the emerging technology does not exacerbate existing network vulnerabilities. The guidelines aim to improve the confidentiality, integrity, and availability of AI systems, ensure the presence of adequate mitigations for known vulnerabilities, and provide methodologies and controls to protect, detect, and respond to malicious activity against AI systems and related data and services.[50]

The International Counter Ransomware Initiative (ICRI), a pre-eminent like-minded alliance launched by the White House to enhance global cooperation against ransomware, has dedicated summit sessions to AI’s role in countering ransomware. Its discussions centred on using AI to track threat actor behaviour and to plan for critical infrastructure protection, including healthcare networks. The ICRI has also addressed software security and promotes secure software development principles among its 74 members to prevent ransomware from gaining access to sensitive data and infrastructure.[51]

The International Counter Ransomware Initiative has dedicated summit sessions to AI’s role in countering ransomware.

The Pall Mall Process is a separate state-led, multi-stakeholder forum under joint UK-France leadership that aims to identify and implement political commitments to tackle the proliferation and irresponsible use of commercial cyber intrusion capabilities (CCICs). Its Code of Practice for States acknowledges that AI, while enhancing cyber defensive capabilities, including the detection, response, and remediation of malicious cyber incidents, is likely to increase the availability of cyber intrusion tools and services and make their malicious use more challenging to monitor and regulate. Concurrently, the growing market for commercial cyber intrusion capabilities expands the potential pool of state and non-state actors with access to these technologies, increasing the opportunity for irresponsible use and making the threats they pose more difficult to mitigate and defend against.[52]

The AI Summit in Paris in February 2025 sought to define a global approach to AI safety and regulation. However, unlike the Bletchley and Seoul summits of previous years, which produced more ambitious declarations, the Paris gathering concluded with only a modest statement that lacked the support of key governments investing in and developing AI.[53] The outcome document, the Paris Charter on AI in the Public Interest, is a voluntary framework focused on openness, accountability, participation, and transparency. However, the text lacks broad global consensus, attracting only ten signatories.[54] The U.S. and the UK did not join the France-led efforts, citing national security concerns.[55] The persistent reluctance to find common ground on pressing AI concerns shows that states with advanced AI capabilities have doubled down on competitive AI design and deployment rather than coordinated efforts.

Rules and Accountability

As noted above, AI models clearly demonstrate dual-use capabilities: for protection across critical infrastructure and for offence in attacks by states and cybercriminal groups, at levels that threaten international peace and security. AI-enabled cyber operations, especially those involving autonomous malware, disinformation at scale, and covert reconnaissance, raise the likelihood of miscalculation, misattribution, and unintended escalation among competing or adversarial states. Chinese, Russian, Iranian, and North Korean actors have employed AI models in both offensive cyber operations and financially motivated attacks, fuelling geopolitical competition and asymmetric strategies. The urgency of addressing the use of AI in this context is palpable as new developments in AI models and the use of autonomous agents accelerate.

AI models clearly demonstrate dual-use capabilities: for protection across critical infrastructure and for offence in attacks by states and cybercriminal groups, at levels that threaten international peace and security.

The UN provides an inclusive forum where states gather to discuss the challenge and build confidence and trust, but multilateral progress is constrained by reliance on consensus-based decision-making, and consensus is increasingly unlikely given the perceived stakes of AI development and dominance. Under President Trump, the U.S. has backed away from constraining what it sees as its technological lead in AI and eschews any attempts by “global governance mechanisms” like the UN to constrain its progress. China, though outwardly welcoming UN governance efforts, is, if the cyber domain is any indication, unlikely to comply with any UN-developed constraints if it feels its national interests are at stake. Like-minded regional and minilateral coalitions carefully advance practical norms and operational guardrails, but they also lack enforcement mechanisms.

Amid fragmented global governance, the rules-based order will depend on the resolve of like-minded groups to form an influential enough community that can demand compliance and foster responsible use of emerging technologies. Enhancing stability in the digital domain demands minimum shared guardrails for AI-enhanced cyber capabilities—focused on secure development and deployment, adequate oversight, and compliance. This is an unlikely endeavour unless those bodies can find some common ground with at least one of the current AI powers.  Even if some agreements can be reached, governments must show political heft to ensure accountability––something that has been sorely lacking in cyberspace writ large.


[1] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).

[2] Nathaniel Mott, “AI Malware Can Now Evade Microsoft Defender,” Tom’s Hardware, July 10, 2025, https://www.tomshardware.com/tech-industry/cyber-security/ai-malware-can-now-evade-microsoft-defender-open-source-llm-outsmarts-tool-around-8-percent-of-the-time-after-three-months-of-training.

[3] “What Is AI for Cybersecurity?” Microsoft Security, accessed October 23, 2025, https://www.microsoft.com/en-us/security/business/security-101/what-is-ai-for-cybersecurity; Oladiipo Ishola Oladepo and Seye Amos Olawoyin, “The Impact of AI on Cross-Border Financial Operations in Fintech: A Legal and Ethical Perspective,” World Journal of Advanced Research and Reviews 26, no. 3 (July 2025): 2445, https://doi.org/10.30574/wjarr.2025.26.3.2445; Cem Dilmegani, “Top 13 AI Cybersecurity Use Cases with Real Examples,” AIMultiple, October 10, 2025, https://research.aimultiple.com/ai-cybersecurity-use-cases/; Jen Easterly, “The End of Cybersecurity: America’s Digital Defenses Are Failing—but AI Can Save Them,” Foreign Affairs, October 16, 2025, https://www.foreignaffairs.com/united-states/end-cybersecurity.

[4] Jim Dempsey, “Addressing the Security Risks of AI,” Lawfare, April 11, 2023, https://www.lawfaremedia.org/article/addressing-security-risks-ai.

[5] SCSP, Generative AI: The Future of Innovation Power (September 2023), https://www.scsp.ai/wp-content/uploads/2023/09/GenAI-web.pdf; DATUREX GmbH, “Russian Propaganda Manipulates AI Training Data,” External Data Protection Officer Dresden, March 28, 2025, https://externer-datenschutzbeauftragter-dresden.de/en/data-protection/russian-propaganda-manipulates-ki-training-data/.

[6] Cybersecurity and Infrastructure Security Agency (CISA), “Software Must Be Secure by Design—and Artificial Intelligence Is No Exception,” CISA, April 27, 2023, https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception.

[7] Micah Musser et al., “Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications,” arXiv, May 23, 2023, https://arxiv.org/abs/2305.14553; John Seymour and Philip Tully, “Generative Models for Spear Phishing Posts on Social Media,” arXiv, February 14, 2018, https://arxiv.org/abs/1802.05196.

[8] Zach Church, “80% of Ransomware Attacks Now Use Artificial Intelligence,” MIT Sloan Management Review, September 8, 2025, https://mitsloan.mit.edu/ideas-made-to-matter/80-ransomware-attacks-now-use-artificial-intelligence.

[9] Anthropic, Threat Intelligence Report: August 2025 (August 2025), https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf.

[10] United Nations General Assembly, Developments in the Field of Information and Telecommunications in the Context of International Security: Final Report of the Open-Ended Working Group on Security of and in the Use of Information and Communications Technologies (2021–2025), A/80/257 (July 24, 2025), https://docs.un.org/en/A/80/257.

[11] John Seymour and Philip Tully, “Generative Models for Spear Phishing Posts on Social Media,” arXiv, February 14, 2018, https://arxiv.org/abs/1802.05196; Recorded Future, “I, Chatbot,” Insikt Group Research, https://www.recordedfuture.com/research/i-chatbot.

[12] MITRE ATT&CK, “APT42 (G1044),” https://attack.mitre.org/groups/G1044/.

[13] Google Cloud, “Untangling Iran APT42 Operations,” Google Cloud Blog, https://cloud.google.com/blog/topics/threat-intelligence/untangling-iran-apt42-operations.

[14] “Nation-State Abuse of Gemini AI,” Infosecurity Magazine, https://www.infosecurity-magazine.com/news/nation-state-abuse-gemini-ai/; Google Cloud, “Adversarial Misuse of Generative AI,” Google Cloud Blog, January 29, 2025, https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai.

[15] David Trigano and Alex Kozodoy, “The Rise of AI-Driven Cyber Attacks: How LLMs Are Reshaping the Threat Landscape,” Deep Instinct Blog, March 12, 2025, https://www.deepinstinct.com/blog/the-rise-of-ai-driven-cyber-attacks-how-llms-are-reshaping-the-threat-landscape.

[16] Kaspersky, “Phishing Evolves with AI and Stealth: Kaspersky Highlights Biometric and Signature Risks,” Kaspersky Press Release, https://www.kaspersky.com/about/press-releases/phishing-evolves-with-ai-and-stealth-kaspersky-highlights-biometric-and-signature-risks.

[17] “Italian Police Freeze Cash in AI Voice Scam That Targeted Business Leaders,” Reuters, February 12, 2025, https://www.reuters.com/technology/artificial-intelligence/italian-police-freeze-cash-ai-voice-scam-that-targeted-business-leaders-2025-02-12/.

[18] John Hudson and Hannah Natanson, “A Marco Rubio Impostor Is Using AI Voice to Call High-Level Officials,” The Washington Post, July 8, 2025, https://www.washingtonpost.com/national-security/2025/07/08/marco-rubio-ai-imposter-signal/.

[19] Steve Povolny, “Attack of the Clones: How Generative AI Tools Are Redefining Phishing,” Exabeam, April 4, 2025, https://www.exabeam.com/blog/infosec-trends/attack-of-the-clones-how-generative-ai-tools-are-redefining-phishing/.

[20] “FIN7 Hosting Honeypot Domains with Malicious AI Generators – New Silent Push Research,” Silent Push, October 2, 2024, https://www.silentpush.com/blog/fin7-malware-deepfake-ai-honeypot/.

[21] “North Korean IT Worker Scheme Exposed,” CNN, August 5, 2025, https://edition.cnn.com/interactive/2025/08/05/world/north-korea-it-worker-scheme-vis-intl-hnk/index.html; “North Korea Cybercrime: DTEX Research Center Report 227,” Cyberscoop, August 2025, https://cyberscoop.com/north-korea-cybercrime-dtex-research-center-227/.

[22] Anthropic, “Detecting and Countering Misuse of Generative AI: August 2025 Update,” August 27, 2025, https://www.anthropic.com/news/detecting-countering-misuse-aug-2025.

[23] “ChatGPT Uses for Cybersecurity Continue to Ramp Up,” TechTarget SearchSecurity, August 2025, https://www.techtarget.com/searchsecurity/news/366536112/ChatGPT-uses-for-cybersecurity-continue-to-ramp-up.

[24] Steve Povolny, “Attack of the Clones – How Generative AI Tools Are Redefining Phishing,” Exabeam (blog), April 4, 2025, https://www.exabeam.com/blog/infosec-trends/attack-of-the-clones-how-generative-ai-tools-are-redefining-phishing/.

[25] Brad Smith, “Artificial Intelligence and the Department of Defense: Securing Cyber Missions,” Microsoft On the Issues, May 3, 2022, https://blogs.microsoft.com/on-the-issues/2022/05/03/artificial-intelligence-department-of-defense-cyber-missions/.

[26] “Microsoft Defender Outsmarted by AI-Generated Malware Proof of Concept,” Cybersecurity88, July 10, 2025, https://cybersecurity88.com/news/microsoft-defender-outsmarted-by-ai-generated-malware-poc/.

[27] “Ukraine Pins AI-Powered LameHug Malware Attacks on Defense Sector to Russian-Backed APT28 Group,” Industrial Cyber, July 21, 2025, https://industrialcyber.co/ransomware/ukraine-pins-ai-powered-lamehug-malware-attacks-on-defense-sector-to-russian-backed-apt28-group/.

[28] Anthropic, “Detecting and Countering Misuse of Generative AI: August 2025 Update,” August 27, 2025, https://www.anthropic.com/news/detecting-countering-misuse-aug-2025.

[29] Howard Solomon, “AI-Powered Autonomous Ransomware Campaigns Are Coming, Say Experts,” CSO Online, September 8, 2025, https://www.csoonline.com/article/4053251/ai-powered-autonomous-ransomware-campaigns-are-coming-say-experts.html.

[30] Anthropic, “Detecting and Countering Misuse of Generative AI: August 2025 Update,” August 27, 2025, https://www.anthropic.com/news/detecting-countering-misuse-aug-2025.

[31] Recorded Future, “I, Chatbot: How AI Is Being Used to Automate Cybercrime,” Recorded Future Research, https://www.recordedfuture.com/research/i-chatbot.

[32] “Generative AI Makes Chinese, Iranian Hackers More Efficient, Report Says,” Voice of America, August 2025, https://www.voanews.com/a/generative-ai-makes-chinese-iranian-hackers-more-efficient-report-says/7956403.html.

[33] Roman Osadchuk, “AI Tools Usage for Disinformation in the War in Ukraine,” Digital Forensic Research Lab, July 9, 2024, https://dfrlab.org/2024/07/09/ai-tools-usage-for-disinformation-in-the-war-in-ukraine/.

[34] “Russian War Report: Hacked News Program and Deepfake Video Spread False Zelenskyy Claims,” Atlantic Council, March 16, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-hacked-news-program-and-deepfake-video-spread-false-zelenskyy-claims/.

[35] “Russian Deepfakes Fool Media: Storm-1679 Campaign Impersonates Major US News Outlets,” Reality Defender, August 19, 2025, https://www.realitydefender.com/insights/russian-propaganda-network-impersonates-major-us-news-outlets-platforms-and-gover; “China AI Influence Operations: Beijing-Linked Campaigns Amplify Domestic Issues Abroad,” The Record, April 7, 2024, https://therecord.media/china-ai-influence-operations.

[36] Reality Defender Analysis Team, “Russian Propaganda Network Impersonates Major US News Outlets: Platforms and Governments Need Defense Against AI-Generated Disinformation,” Reality Defender Insights, August 19, 2025, https://www.realitydefender.com/insights/russian-propaganda-network-impersonates-major-us-news-outlets-platforms-and-gover.

[37] “Chinese Hackers Are Using AI to Inflame Social Tensions in the U.S.,” The Record, April 7, 2024, https://therecord.media/china-ai-influence-operations.

[38] “Untangling Iran’s APT42 Operations,” Google Cloud Threat Intelligence Blog, May 1, 2024, https://cloud.google.com/blog/topics/threat-intelligence/untangling-iran-apt42-operations.

[39] DATUREX GmbH, “Russian Propaganda Manipulates AI Training Data,” External Data Protection Officer Dresden, March 28, 2025, https://externer-datenschutzbeauftragter-dresden.de/en/data-protection/russian-propaganda-manipulates-ki-training-data/.

[40] Google Threat Intelligence Group, “Adversarial Misuse of Generative AI,” Google Cloud Blog, January 29, 2025, https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai; Microsoft Security, “Staying Ahead of Threat Actors in the Age of AI,” Microsoft Security Blog, February 14, 2024, https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/.

[41] Wan Rosalili Wan Rosli, “Waging Warfare Against States: The Deployment of Artificial Intelligence in Cyber Espionage,” AI and Ethics 5, no. 1 (2025): 47–53, https://link.springer.com/article/10.1007/s43681-024-00628-x; F. J. Egloff, “Publicly Attributing Cyber Attacks: A Framework,” Journal of Strategic Studies 44, no. 5 (2021): 789–811, https://www.tandfonline.com/doi/full/10.1080/01402390.2021.1895117.

[42] Microsoft Security, Microsoft Digital Defense Report 2024, https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/Microsoft%20Digital%20Defense%20Report%202024%20%281%29.pdf.

[43] United Nations General Assembly, “Developments in the Field of Information and Telecommunications in the Context of International Security,” A/80/257, July 24, 2025, https://docs.un.org/en/A/80/257.

[44] Pavlina Pavlova and Christopher Painter, “The UN’s Permanent Process on Cybersecurity Faces an Uphill Battle,” Lawfare, August 13, 2025, https://www.lawfaremedia.org/article/the-un-s-permanent-process-on-cybersecurity-faces-an-uphill-battle.

[45] “High-Level Open Debate on Artificial Intelligence,” Security Council Report, September 2025, https://www.securitycouncilreport.org/whatsinblue/2025/09/high-level-open-debate-on-artificial-intelligence.php.

[46] United Nations Office for Disarmament Affairs, Report of the 2023 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems (LAWS), CCW/GGE.1/2023/CRP.2 (Geneva: United Nations, May 12, 2023), https://docs-library.unoda.org/Convention_on_Certain_Conventional_Weapons_-Group_of_Governmental_Experts_on_Lethal_Autonomous_Weapons_Systems_%282023%29/CCW_GGE1_2023_CRP.2_12_May.pdf; Lieber Institute for Law & Land Warfare, The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems, https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems.

[47] United Nations, “Press Release GA/12699,” December 2025, https://press.un.org/en/2025/ga12699.doc.htm.

[48] “Global Digital Compact: Artificial Intelligence,” United Nations, 2025, https://www.un.org/global-digital-compact/en/ai.

[49] Derek B. Johnson, “UN Seeks to Build Consensus on ‘Safe, Secure and Trustworthy’ AI,” CyberScoop, September 26, 2025, https://cyberscoop.com/un-seeks-consensus-on-safe-secure-trustworthy-ai/.

[50] Cybersecurity and Infrastructure Security Agency, “Joint Guidance on Deploying AI Systems Securely,” April 15, 2024, https://www.cisa.gov/news-events/alerts/2024/04/15/joint-guidance-deploying-ai-systems-securely.

[51] Public Safety Canada, “International Counter-Ransomware Initiative 2024: Joint Statement,” October 2024, https://www.canada.ca/en/public-safety-canada/news/2024/10/international-counter-ransomware-initiative-2024-joint-statement.html.

[52] UK Government, “The Pall Mall Process: Code of Practice for States,” 2024, https://www.gov.uk/government/publications/the-pall-mall-process-code-of-practice-for-states/the-pall-mall-process-code-of-practice-for-states.

[53] “The Paris AI Summit: A Diplomatic Failure or a Strategic Success?” DiploFoundation, 2025, https://www.diplomacy.edu/blog/the-paris-ai-summit-a-diplomatic-failure-or-a-strategic-success/.

[54] “The Paris Charter on Artificial Intelligence in the Public Interest,” Élysée Palace, February 11, 2025, https://www.elysee.fr/en/emmanuel-macron/2025/02/11/the-paris-charter-on-artificial-intelligence-in-the-public-interest.

[55] “UK and US Refuse to Sign International AI Declaration,” BBC News, February 11, 2025, https://www.bbc.com/news/articles/c8edn0n58gwo.
