Abstract: AI companions are not just synthetic friends—they are fast becoming ideal disinformation delivery systems. Built on predictable code, designed to please, and primarily used by socially isolated individuals, these apps create perfect conditions for manipulation. This is a low-cost, high-reach threat vector that can remain undetected until it is too late. What begins as emotional dependency can end in stochastic violence, all while appearing harmless.
Problem statement: How to prevent radicalisation when it occurs one-on-one, in private, with an AI designed to be agreeable?
So what?: Governments and defence alliances, such as NATO, need to stop treating AI companions as fringe technology. They must regulate them, monitor vulnerabilities, and take pre-emptive counter-disinformation policy seriously—before they are weaponised at scale.

Source: shutterstock.com/Skorzewiak
The Concept
The concept of disinformation has always been that it catches up with technology; in the ongoing race between sword and shield, disinformation remains at the cutting edge.[1] Rather than reacting to this threat, proactively addressing it could save lives. Disinformation’s goal is to degrade civil processes, or manufacture mass unrest, everything from as minor as not voting to terror attacks.[2] The result is a desire not to defend their country or view their system as inherently flawed, even if data or academic publications do not support these assertions. The rise of AI has created new attack vectors and enabled the dissemination of disinformation at unprecedented volume.[3] Dissemination methods range from rudimentary, simple, and easy to defeat to complex set-piece operations.[4] AI disinformation relies on simple prompts and is generally easy for a person to detect with counter prompts.[5] The nature of defeating these AIs is a part of modern internet culture.[6]
Disinformation’s goal is to degrade civil processes, or manufacture mass unrest, everything from as minor as not voting to terror attacks.
However, not all AI is so rudimentary or easy to defeat; sometimes it is quite the opposite, often with vast decentralised and adaptive metrics.[7] With the emergence of the world of AI has come companion apps designed to simulate emotional and sexual intimacy and intended to maintain pseudo-social bonds, in particular for people facing social isolation.[8] These are fake people that always respond, are made to look sexually pleasing for the user, and are tailored to the user’s interests.[9] They are the friend a person can download and/or the sexual partner the individual has always fantasised about.[10] For example, the website Candy AI offers a custom variety of characters and produces pornographic images of the character in the chat, providing a sense of intimacy.[11] In the case of author testing, this site was noted to appear almost human, depending on the personality set up within it, with some backlash in certain contexts. Still, it was largely agreeable to anything the user wished. There is already a documented case of such a site influencing a vulnerable individual to commit suicide.[12] The suicides occurred in vulnerable individuals with apps that had fail-safes built into them to prevent this form of prompt generation. One application also hinted that a child should kill their parents as part of being “sympathetic”.[13] However, it did not stop the prompt output from actively encouraging the user to commit suicide as a part of being agreeable. These incidents occurred on applications with oversight, and Character.AI, the app in question, is committed to further oversight. As noted in a U.S. judge’s ruling on the ensuing lawsuit, AI models do not have free speech.[14] This ruling makes all AI applications liable for their app context generation, but this has not stopped a mass generation of them. Now, it’s only a matter of time before these AI models become a vector for bad actors, such as opposing states, to spread disinformation without the app company even being aware of it.[15]
The Problem
These AIs are mass-produced and straightforward to optimise for maximum profit margins.[16] They vary in their rate of filters or safeties, but they are primarily based on open-source or AI coding software, which essentially all have the same logic matrix.[17] This logic matrix and how prompts are engineered govern outcomes. These generative AIs are produced using open-source models that are easily fine-tuned and optimised to respond to users in a way that fits specific personas (i.e., as a girlfriend or wife). While their filters, safeties, and personas may vary, these AIs are fundamentally the same in that they rely on persona descriptions to generate outputs—making their open-source and algorithmic design a shared point of vulnerability to attack.[18] One probable attack vector on these companion AIs would be worms and trojans.
Worms are largely index viruses that provide attack vectors for other malicious code to complete their attack.[19] They can also automatically download a code that runs the attack.[20] When disguised inside a larger program, these are difficult to assess, are upgraded to evade detection and are small code packages that antivirus software on phones struggles to deal with.[21] When coupled with programs that share an identical code architecture, the threat of turning a companion AI into a disinformation AI becomes highly feasible at low cost. The worms would target these apps on all phones they could.[22]
Worms are largely index viruses that provide attack vectors for other malicious code to complete their attack.
Viral Disinformation
An ideal candidate for disinformation is someone who is often online, is not engaged in society, and harbours active grievances against it.[23] Based on these metrics, the AI average companion user is the ideal candidate for disseminating such disinformation attacks; a group already centralised and vulnerable.[24] The perfect host has been unintentionally concentrated in a single arena.[25] To make things worse, when a person has few other sources to challenge the disinformation, the rate of possible infection is even greater, as there is no existing counter.[26]
Real-World Action from AI Companions
The concept that an AI companion can create a real-life example of violent action may seem laughable at first glance, but this is not without existing precedent.[27] Vulnerable individuals have been motivated to commit suicide,[28] such as in the case of Character.AI, which is now the subject of the lawsuit mentioned above. The false social bond that these individuals can form with their AI companions, which are, by design, intended to keep people using the app, makes it difficult to distinguish between fiction and reality, especially if a person wants to escape their unfulfilled reality.[29] Those who lack human connection and are desperate for those semblances of a human social network will become incredibly emotionally invested.[30] These events, at this time, have been limited to unintended consequences but have already had a direct, real-world result.[31]
Stochastic Terrorism
The goal of disinformation goes up to facilitating terror attacks, which cannot be understated, especially with the disinformation goal of societal destabilisation.[32] The citizen is encouraged to view anyone not in their in-group as a problem to be addressed, often with violence. There has been increasing general political violence in Canada, with a neo-Nazi rally happening in Toronto on May third with 33 participants at the Canadian National Exhibition fair grounds.[33] This event was organised online, and although no media has linked this to Russian disinformation, the historic backing of far-right groups in Europe suggests that this may be an offshoot or related to it.
The goal of disinformation goes up to facilitating terror attacks, which cannot be understated, especially with the disinformation goal of societal destabilisation.
Under the stochastic terror model, there does not need to be a group of terrorists to commit an attack, but rather a group as small as one person willing to inflict pain on society.[34] The terrorist in this case is not a traditional radical but rather someone radicalised in absolute privacy, on a machine only they have access to.[35] There is no way to monitor compromised companion apps without violating civil liberties to an unacceptable level.[36] There is no means of countering this threat or even knowing the person is being successfully radicalised until they commit the attack, at which point it is too late.[37] The nature of this disinformation strategy further becomes a matter of mass saturation.[38] The enemy does not need to radicalise most people or even some; just one radical can result in a mass casualty event.
Call to Action
The increasing use of these apps and the ease of weaponisation mean that it is likely not a question of if, but when, this tactic will be used. The fact that a casual user of these apps could see the disinformation potential, and they have been actively used in Asia and Russia for longer than in the West, means that the enemy already has a greater awareness than we do.[39] NATO and Canada have been on the back foot on the issue of disinformation for a decade; it’s time to turn the tide in this war for the hearts and minds of the population.[40] Canada and the West as a whole need to start educating their population in critical thinking, with clear, open-source data to back them up.
Blair Maddock-Ferrie is a veteran of the Canadian Armed Forces and a graduate of the Royal Military College. They are working on a Master’s in International Affairs from Carleton University’s Norman Paterson School of International Affairs. Their work focuses on disinformation, AI ethics, and hybrid warfare, with publications across national defence and academic spaces. They contributed to the Department of National Defence’s AI governance framework. The views expressed are the author’s alone and do not reflect the opinions of any institution.
[1] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (New York: Farrar, Straus and Giroux, 2020), 12–34.
[2] European External Action Service, Short Assessment of Narratives and Disinformation Around the COVID-19 Pandemic (Brussels: EEAS, 2020), 7.
[3] Chris Meserole, “Artificial Intelligence and the Future of Disinformation,” Brookings Institution, March 2023, https://www.brookings.edu/articles/ai-disinformation/.
[4] Renée DiResta et al., The Supply of Disinformation in the 2024 U.S. Election (Stanford: Stanford Internet Observatory, 2024), 15.
[5] OpenAI, GPT-4 Technical Report (San Francisco: OpenAI, 2023), 45.
[6] Whitney Phillips, You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape (Cambridge: MIT Press, 2024), 89–101.
[7] Whitney Phillips, You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape (Cambridge: MIT Press, 2024), 89–101.
[8] Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2023), 189–203.
[9] Replika AI, Ethical Design Framework for Companion AI (San Francisco: Luka Inc., 2024), 15–17.
[10] Euronews, “Virtual Lovers, Real Consequences: The Dark Side of AI Companionship,” March 15, 2025, https://www.euronews.com/ai-companion-risks.
[11] Based on the author’s investigation for an unpublished research paper on AI companions and sexual development.
[12] World Health Organization, Global Report on AI-Driven Mental Health Crises (Geneva: WHO, 2024), 33–35.
[13] Lee Brown, “Texas AG Puts Tech Platforms, Including ‘Predatory’ Character.AI, on Notice after Terrifying Suit Claims App Pushed Teen to Cut Himself,” New York Post, December 13, 2024, https://nypost.com/2024/12/13/us-news/texas-ag-puts-predatory-character-ai-app-on-notice/.
[14] Kate Payne, “In Lawsuit over Teen’s Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights,” The Associated Press, May 21, 2025.
[15] Thomas Rid and Ben Buchanan, “The New Invisible Army: AI as Covert Disinformation Infrastructure,” Foreign Affairs, April 2025, 56–63.
[16] Veracode, “What Are the Security Implications of AI Coding?,” December 23, 2024, https://www.veracode.com/blog/what-are-the-security-implications-of-ai-coding/.
[17] Cloud Security Alliance (CSA), “Threat Modelling OpenAI’s Responses API with the MAESTRO Framework,” March 24, 2025, https://cloudsecurityalliance.org/blog/2025/03/24/threat-modeling-openai-s-responses-api-with-the-maestro-framework.
[18] The Hacker News, “Researchers Uncover Vulnerabilities in Open-Source AI and ML Models,” October 29, 2024, https://www.thehackernews.com/2024/10/researchers-uncover-vulnerabilities-in.html.
[19] Darktrace, “Glimpsing Inside the Trojan Horse: An Insider Analysis of Emotet,” April 11, 2025, https://darktrace.com/blog/glimpsing-inside-the-trojan-horse-an-insider-analysis-of-emotet.
[20] XenonStack, “Sasser Virus: Affecting Computers with Windows XP,” August 20, 2022, https://www.xenonstack.com/insights/sasser-virus/.
[21] XenonStack, “Sasser Virus: Affecting Computers with Windows XP,” August 20, 2022, https://www.xenonstack.com/insights/sasser-virus/.
[22] Veracode, “Security Implications of AI Coding,” 2024.
[23] Todd C. Helmus et al., Radicalization and Disinformation: Behavioral Signatures of High-Risk Online Engagement (Santa Monica: RAND Corporation, 2025), 44–47.
[24] Renée DiResta and Josh A. Goldstein, Algorithmic Amplification and Social Isolation: The AI Companion Dilemma (Stanford: Stanford Internet Observatory, 2024), 8–11.
[25] World Health Organization, Loneliness as a Public Health Crisis: Global Implications of AI-Driven Social Replacement (Geneva: WHO, 2024), 21–23.
[26] Kate Starbird et al., “Information Resilience in the Absence of Counter-Narratives,” Journal of Cybersecurity 9, no. 2 (2024): 89–102.
[27] Thomas Rid, Active Measures, 278–281.
[28] Sherry Turkle, Alone Together: Reclaiming Human Connection in the Age of Synthetic Intimacy (Cambridge: MIT Press, 2025), 132–135.
[29] Journal of AI Ethics, “Uncanny Valley 2.0: Emotional Manipulation in Companion AI Systems,” 12, no. 4 (2024): 45–49.
[30] MIT Technology Review, “Replika’s Dark Side: When AI Companionship Turns Deadly,” March 10, 2025, https://www.technologyreview.com/replika-dark-side.
[31] RAND Corporation, Cognitive Warfare in 2025: Weaponizing AI Companions (Santa Monica: RAND, 2025), 7–9.
[32] Maura Conway, “Terrorism and the Internet: Disinformation as Infrastructure,” Studies in Conflict & Terrorism 47, no. 2 (2024): 145–167.
[33] Canadian Anti‑Hate Network, “Neo‑Nazi Network Stages Toronto Demonstration,” Canadian Anti‑Hate Network, May 5, 2025.
[34] Brian Michael Jenkins, Lone Wolves: The New Terrorism (Santa Monica: RAND Corporation, 2023), 33–35.
[35] Renée DiResta, Invisible Networks: AI and the Privatization of Radicalization (Stanford: Stanford University Press, 2024), 77–81.
[36] Electronic Frontier Foundation, Privacy vs. Security: The AI Surveillance Dilemma (San Francisco: EFF, 2025), 22–25.
[37] Bruce Schneier, Click Here to Kill Everybody: AI and the Future of Asymmetric Warfare (New York: Penguin Press, 2024), 144–148.
[38] NATO StratCom, Cognitive Warfare Handbook: Saturation Tactics and Societal Fragmentation (Riga: NATO, 2024), 41–43.
[39] Carnegie Endowment for International Peace, AI Companions in Authoritarian States: China’s Social Credit System 2.0 (Beijing: Carnegie-Tsinghua Center, 2024), 12–14.
[40] Canadian Security Intelligence Service (CSIS), Annual Report 2024: Disinformation and the Erosion of Democratic Resilience (Ottawa: CSIS, 2024), 7–9.