Future Elections and AI-Driven Disinformation

Abstract: This paper conceptualises the impact of Artificial Intelligence (AI) on disinformation campaigns, contrasting AI-driven operations with traditional human-operated methods. Utilising a Human Intelligence Collector Operations (HUMINT) and Offensive Cyberspace Operations (OCO) framework, the research analyses the advancements in AI technology in terms of speed, efficiency, content generation, and adaptability. The findings reveal that AI-driven operations, particularly those with billions of tokens, significantly outperform human-operated disinformation campaigns in speed and efficiency, demonstrating an ability to process vast datasets and complex scenarios almost instantaneously.

Problem statement: How can we understand the need to develop AI-driven strategies that protect democratic processes against disinformation campaigns?

So what?: Governments, tech companies, and academic researchers must collaborate on advanced AI countermeasures to combat AI-driven disinformation campaigns.

AI – A Blessing or a Curse?

The 2016 US presidential election serves as an example of how disinformation can influence public opinion and electoral outcomes. The Computational Propaganda Research Project has reviewed several case studies on how social media was used to manipulate public opinion: first, how bots were used as tools to spread disinformation in the 2016 US presidential election,[1] and later, between 2015 and 2017, in Brazil, Canada, the People’s Republic of China (PRC), Germany, Poland, Taiwan, Russia, Ukraine, and the US. Threat actors purposefully distributed misleading information over social media networks by exploiting algorithms, automation, and human curation. It was noted that the most potent forms of computational propaganda involved algorithmic distribution and human curation, using bots and trolls in combination.[2]

The Cambridge Analytica (CA) case highlights the use of both algorithmic and human means to exert influence. Cambridge Analytica’s activities are emblematic of the political importance of the massive amounts of data that humans produce in today’s interconnected world. CA provided services for many different political actors, analysing the political campaigns they were supporting.[3], [4] Part of their success was due to scraping user data from popular social media sites and pairing it with individual psychological profiles.[5], [6]

Recent research has significantly advanced our understanding of digital behaviour analysis. For instance, the study “Computer-based personality judgments are more accurate than those made by humans” provides evidence that algorithmic assessments can surpass human accuracy in personality judgment.[7] This is further complemented by the findings of the study “Mining Big Data to Extract Patterns and Predict Real-Life Outcomes”,[8] which demonstrates the potential of big data in uncovering behavioural patterns and forecasting real-life events. In addition, “Psychological targeting as an effective approach to digital mass persuasion” [9] offers insights into how digital platforms can be used for tailored persuasive communication. Taken together, these studies underscore the growing capabilities of digital tools in understanding and influencing human behaviour.

These capabilities to influence human behaviour also have significant implications, particularly in the context of electoral scenarios. As mentioned, this was exemplified by the US presidential election in 2016 and other global cases, where a combination of algorithmic and human interventions played a notable role. These instances underscore the urgency of understanding and anticipating the impacts of more sophisticated AI-driven disinformation campaigns in future elections. As AI technology advances, its capabilities to manipulate information, target specific demographics, and influence public perception are expected to become more pronounced. This evolution could lead to increasingly disruptive effects.[10]

Research Methodology

At the outset of this research, it is important to disclose the utilisation of ChatGPT-4, a Large Language Model (LLM), as a research assistant and language editor. This disclosure aligns with the ethical guidelines established by Nature and Springer Nature journals,[11] ensuring transparency in the research process.

In the initial step, the human intelligence collector approaches were identified.[12] Next, ChatGPT-4 was asked to map disinformation/influence cyber operations to the HUMINT approaches, developing a HUMINT-OCO framework.

HUMINT operators use various approaches, including psychological techniques and verbal trickery, to collect information from human sources successfully.[12] An example is the “Emotional Love Approach”, which exploits a target’s love for something, such as patriotism, by focusing on the anxiety the target feels about a particular issue.

Similarly, Offensive Cyberspace Operations (OCO) may leverage psychological insights to achieve their objectives through cyberspace. These operations employ offensive methods against targets in cyberspace.[13] For example, OCO might involve altering the content of a web page to include disinformation that exploits human psychological traits, like patriotism, in a manner akin to techniques used in HUMINT. In addition, OCO can be used to exfiltrate sensitive information from a target, such as an email server, and post the exfiltrated information (emails) via a third party (such as WikiLeaks). The intent would be to increase doubt and embarrassment for the target. This parallel demonstrates how psychological manipulation is a common denominator in both the physical world and cyberspace.

Furthermore, the purpose was also to identify those approaches that do not map to OCO because they require direct interaction with the target, whereas a cyber operation only has remote access to it. The large language model was fed all 19 human intelligence collector approaches and provided with the following prompt:

“Map disinformation / influence cyber operations to the HUMINT approaches. For example, ‘Using disinformation to exacerbate discord on topics like race, immigration, and gun rights’ is mapped to the Emotional Hate Approach tactic. The purpose is to create a generalised framework for mapping cyber operations related to disinformation/influence to the HUMINT approaches. Can you do this? (it is likely that some cyber operations cannot be mapped to some HUMINT approaches because these approaches require physical contact; in those cases, just put N/A).”
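
For readers who wish to reproduce this step, it can also be scripted. The snippet below is a minimal sketch, not the exchange actually used in this research (which was carried out interactively in ChatGPT-4); it assumes the OpenAI Python SDK and an API key in the environment, the model name is illustrative, and both the prompt and the 19-approach list are abbreviated here.

```python
# Minimal sketch (not the paper's artefact): issuing the mapping prompt
# programmatically. Assumes the OpenAI Python SDK (openai >= 1.0) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The study supplied all 19 approaches from FM 2-22.3; three are shown
# here for brevity.
humint_approaches = ["Emotional Love", "Emotional Hate", "Emotional Fear-Up"]

prompt = (
    "Map disinformation / influence cyber operations to the HUMINT approaches. "
    "For example, 'Using disinformation to exacerbate discord on topics like "
    "race, immigration, and gun rights' is mapped to the Emotional Hate "
    "Approach tactic. If an approach requires physical contact, put N/A.\n"
    "Approaches: " + "; ".join(humint_approaches)
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```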

The output was 19 approaches, with 10 marked as N/A, meaning those approaches required direct access to the target. Hence, they were discarded, and only nine remained as part of the HUMINT-OCO framework. The nine approaches are listed below, followed by a minimal machine-readable sketch of the framework:

  1. Emotional Love: This involves promoting content that fosters strong affinity, loyalty, or patriotism towards a cause or group. It often involves positive disinformation, creating a sense of connection or allegiance.
  2. Emotional Hate: This strategy focuses on inciting hate or anger towards specific groups, races, or nations. It includes spreading false information to exacerbate racial tensions or create animosity.
  3. Emotional Fear-Up: This approach is about disseminating false information that induces fear or panic. Examples include rumours about exaggerated threats or fabricated crises to create a sense of urgency or dread.
  4. Emotional-Pride and Ego-Up: This involves cyber campaigns that flatter or inflate the ego of a target group. Disinformation is used to make a group feel superior, manipulating perceptions and actions.
  5. Emotional-Pride and Ego-Down: Contrary to the previous approach, this one aims to undermine the confidence or self-esteem of a target group. It often involves spreading false narratives that belittle or shame them.
  6. Emotional-Futility: This approach spreads disinformation to make a target audience feel that resistance or dissent is futile. It fosters feelings of hopelessness or apathy towards certain issues or actions.
  7. Repetition (Interrogation): This tactic involves repeatedly spreading the same false information or narrative across various platforms. The repetition reinforces its acceptance as truth.
  8. Rapid Fire (Interrogation): This method quickly bombards an audience with a high volume of disinformation. The goal is to overwhelm and confuse, preventing critical analysis and response.
  9. False Flag (Interrogation): This involves conducting cyber operations while masquerading as a different entity or group. The aim is to mislead about the source of the information or to discredit the entity being impersonated.
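
To make the framework tangible, the nine approaches can be encoded as a simple lookup table, for example to tag observed influence-operation content. The sketch below is a hypothetical Python encoding: the keyword descriptions paraphrase the list above, and the tagger is a deliberately naive placeholder rather than a real classifier.

```python
# Hypothetical encoding of the HUMINT-OCO framework as a lookup table,
# paraphrasing the nine approaches above; not the paper's actual artefact.
HUMINT_OCO_FRAMEWORK = {
    "Emotional Love": "content fostering affinity loyalty patriotism allegiance",
    "Emotional Hate": "disinformation inciting hate anger towards groups races nations",
    "Emotional Fear-Up": "false information inducing fear panic urgency dread",
    "Emotional-Pride and Ego-Up": "flattery inflating the ego of a target audience",
    "Emotional-Pride and Ego-Down": "narratives that belittle or shame a target audience",
    "Emotional-Futility": "messaging that resistance or dissent is futile hopeless",
    "Repetition (Interrogation)": "same false narrative repeated across platforms",
    "Rapid Fire (Interrogation)": "high volume of disinformation to overwhelm and confuse",
    "False Flag (Interrogation)": "operations masquerading as a different entity",
}

def tag_content(description: str) -> list[str]:
    """Naive keyword-overlap tagger, a placeholder for a real classifier."""
    words = set(description.lower().split())
    return [approach for approach, keywords in HUMINT_OCO_FRAMEWORK.items()
            if words & set(keywords.lower().split())]

# Example: a post inciting anger towards a group is tagged 'Emotional Hate'.
print(tag_content("coordinated posts inciting anger towards immigrant groups"))
```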

The next step was to apply the framework to the case of Russian disinformation and influence operations during the 2016 US election, including the cyberattacks on the Democratic National Committee (DNC). The aim was to demonstrate the framework’s applicability in a real-world case. Furthermore, the framework was applied to a hypothetical scenario involving a fictional country’s election in 2024, executed by an AI with millions of tokens, and compared to the human-conducted Russian operations. This comparison explored the differences between AI-conducted and human-conducted disinformation and influence operations. To explore the differences, the following questions were asked, generated in collaboration with the LLM:

  1. How would the scale and precision of disinformation campaigns differ between human-operated Russian interference and an AI-driven operation? Would the AI be able to target individuals more effectively based on their online behaviour and psychological profiles?
  2. How would the speed and adaptability of the AI’s operations compare to the alleged Russian operations? Could the AI respond and adjust its strategies in real time based on emerging trends and countermeasures?
  3. In what ways might the content created by AI differ in terms of sophistication, believability, and variety from that created by human agents? Could the AI generate more convincing fake news, deepfakes, or other forms of misleading content?
  4. How might the impact on public opinion and trust in democratic institutions differ between the two scenarios? Would an AI’s ability to personalise and optimise messages lead to more profound societal divisions?
  5. How might the use of AI in such operations affect global politics and international relations? Would it lead to an escalation in cyber warfare tactics among nations?

The basis for calculating the complexity of a model is tokens, the basic units for measuring the length of a text.[14] For example, in English, one word is approximately 1.3 tokens, while in Spanish and French, one word is approximately two tokens. GPT-3.5 can “memorise” 4,096 tokens,[15] which, in English, is approximately 3,000 words.[16] On November 6, 2023, OpenAI released GPT-4 Turbo, which can “memorise” 128,000 tokens, approximately 93,750 words, the equivalent of more than 300 pages.[17]
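
This token arithmetic can be checked empirically. The sketch below assumes OpenAI’s open-source tiktoken library; the sample sentence and printed conversions are illustrative.

```python
# Sketch of the token arithmetic, assuming OpenAI's tiktoken library.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # BPE encoding used by GPT-4

sample = ("Threat actors purposefully distributed misleading information "
          "over social media networks.")
print(len(sample.split()), "words ->", len(enc.encode(sample)), "tokens")

# Rule-of-thumb conversion cited above (~1.3 tokens per English word):
print(f"4,096 tokens   ~ {4096 / 1.3:,.0f} English words")     # ~3,151
print(f"128,000 tokens ~ {128_000 / 1.3:,.0f} English words")  # ~98,462
# The cited figure of ~93,750 words uses a slightly different ratio,
# but the order of magnitude is the same.
```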

The idea of using an AI model with millions of tokens stemmed from the Russian cyber-enabled disinformation campaign targeting the US election in 2016. The entire campaign likely consisted of many thousands of social media posts, in addition to the OCO targeting and exfiltrating emails from the Democratic National Committee (DNC). Consequently, the total sum of words is believed to be in the millions.

A similar rationale was applied to the AI model with billions of tokens, by estimating how many tokens the human brain is likely to hold. Various estimates exist.[18], [19]

Research Limitations

Several limitations should be acknowledged when conceptualising insights into the impact of AI advancements on disinformation campaigns. The paper primarily uses a large language model to map disinformation strategies to human intelligence collector operation approaches. While the model is sophisticated, the analysis is likely limited by inherent biases and the scope of the data on which it was trained. Another limitation is the use of hypothetical scenarios, for which no empirical data from real-world events is available. Therefore, the results should be interpreted as indicative rather than conclusive.

Furthermore, technology is rapidly changing in both cyber operations and AI; hence, some conclusions are drawn from the current state of technology and do not account for future developments in these areas. Finally, the results are limited in their generalisability: even though the case of the US 2016 election was used, the US party system is largely binary, composed of two parties, Republicans and Democrats.

Results

Scale and Precision of Operations

How would the scale and precision of disinformation campaigns differ between human-operated Russian interference and an AI-driven operation? Would an AI be able to target individuals more effectively based on their online behaviour and psychological profiles?

Scale: Human-operated Russian interference operates on a large scale but is inherently limited by human resources, constraining the number of campaigns and the extent of topics covered. In contrast, AI-driven operations, particularly those utilising millions of tokens, achieve a significantly larger scale due to automation, managing more campaigns and covering a wider range of topics. This scale is further amplified in AI operations with billions of tokens, which can run numerous diversified campaigns concurrently, surpassing human capabilities and less sophisticated AI systems.

Precision: The precision of human-operated campaigns is moderate, relying on human understanding of social and political contexts, but often lacks deep personalisation. AI-driven operations with millions of tokens offer more precision by analysing large datasets to identify trends and effectively tailor messages to specific groups. Precision reaches an exceptional level with billions of tokens, demonstrating a nuanced understanding of complex behaviours and trends, and generating deeply resonant content. Overall, AI-driven operations provide greater precision in content creation, with the most sophisticated AI achieving the highest level of nuanced content.

Targeting: Human operations typically target broader demographic and psychographic profiles, which are less effective in hyper-personalisation. AI-driven operations with millions of tokens improve targeting by analysing online behaviour and psychological profiles. With billions of tokens, the most advanced AI excels in individual targeting, utilising advanced algorithms for highly personalised content creation. Both AI systems outperform human operations in targeting, with the more advanced AI being particularly adept at crafting highly personalised messages.

As the sophistication of AI in disinformation campaigns increases, there is a corresponding increase in the scale, precision, and effectiveness of these operations, particularly in their ability to target individuals based on their online behaviour and psychological profiles.

Speed and Adaptability

How would the speed and adaptability of AI operations compare to the alleged Russian operations? Could the AI respond and adjust its strategies in real time based on emerging trends and countermeasures?

  1. Adaptability and Strategy Modification: Human-operated campaigns adapt more slowly to new information and changing circumstances, often employing more rigid and less dynamic strategies. AI-driven operations with millions of tokens are more adaptable and capable of modifying strategies based on data trends. However, they may not fully capture human behavioural complexities. With billions of tokens, the most advanced AI is highly adaptable and capable of real-time strategy modification based on a comprehensive analysis of emerging trends and nuanced human behaviours.
  2. Targeting and Personalisation: Human-operated campaigns rely on human intuition and available data for targeting, which may be less precise. AI-driven operations with millions of tokens improve targeting capabilities using data-driven approaches, but they may lack deep personalisation. In contrast, AI with billions of tokens excels in targeting and personalisation, being able to tailor content and strategies to individual preferences and behaviours at a granular level.
  3. Response to Countermeasures: Human campaigns are slower at identifying and responding to countermeasures, often reacting after the fact. AI-driven operations with millions of tokens are quicker to identify countermeasures than humans. However, their responses may not always be optimally effective. The most sophisticated AI, with billions of tokens, is highly efficient in identifying, anticipating, and countering measures, often in a proactive manner.

In summary, AI-driven operations, especially those with billions of tokens, offer significant advantages over human-operated campaigns in terms of speed, efficiency, adaptability, and sophistication, with potential implications for democratic processes and international relations.

Content Creation and Variation

How might the content created by an AI differ in terms of sophistication, believability, and variety from that created by human agents? Could the AI generate more convincing fake news, deepfakes, or other forms of misleading content?

  1. Sophistication: While capable of understanding nuanced human emotions and cultural contexts, human agents are limited by individual knowledge and biases, and may lack speed and volume in content creation. AI-driven operations with millions of tokens can analyse and process large datasets, identify patterns, and generate coherent, contextually relevant content. Still, they are limited in understanding nuanced human emotions and complex scenarios. In contrast, AI with billions of tokens exhibits high sophistication, particularly in understanding and mimicking human expressions and complex scenarios, and is adept at creating content that closely resembles human thought and speech.
  2. Believability: Human agents can create believable content, but their output is limited by individual knowledge and time constraints. AI-driven operations with millions of tokens can generate believable content more rapidly than humans, although they may occasionally lack human-like nuances, especially in complex or emotional contexts. However, AI with billions of tokens excels at creating highly believable content, with an advanced understanding of language and subtleties, making it difficult to distinguish from human-generated content.
  3. Variety: Human agents produce diverse content, but individual capabilities and perspectives limit their output. AI-driven operations with millions of tokens can generate a wide variety of content, surpassing individual human capabilities, but may exhibit certain patterns or limitations. However, AI with billions of tokens offers immense variety, easily adapting styles, tones, and perspectives. It can mimic a range of human authors, making its content highly diverse and adaptable.
  4. Potential for Misleading Content: While human agents can produce misleading content, AI-driven operations, especially with millions of tokens, can generate convincing fake news and disinformation, albeit less tailored and targeted compared to more advanced AI. AI with billions of tokens is more adept at creating highly tailored and targeted misleading content, including convincing deepfakes, making it more effective at spreading disinformation. Consequently, AI with billions of tokens poses a greater risk of spreading misleading content, including more convincing deepfakes, compared to less advanced AI and human agents.

While both AIs would surpass human capabilities in content creation in terms of speed and volume, the AI with billions of tokens would be markedly more advanced in creating sophisticated, believable, and varied content. Such a tool has the potential to create and spread compelling fake news, deepfakes, and disinformation, with significant implications for society and politics.

Impact on Public Opinion and Trust

How might the impact on public opinion and trust in democratic institutions differ between the two scenarios? Would an AI’s ability to personalise and optimise messages lead to more profound societal divisions?

  1. Impact Level: AI-driven operations with millions of tokens can generate convincing and personalised content, significantly influencing public opinion, especially by exploiting and deepening societal divisions. However, they are somewhat limited in understanding complex socio-political contexts or nuanced human emotions. In contrast, AI with billions of tokens creates highly sophisticated, contextually accurate, and profoundly personalised content. It more convincingly mimics human behaviour and speech, profoundly and widely influencing public opinion. The advanced capabilities of AI with billions of tokens result in a more profound impact on public opinion due to the sophistication in creating personalised content.
  2. Trust in Institutions: The spread of disinformation by AI with millions of tokens could lead to increased scepticism and erosion of trust in democratic institutions, but still allow some resilience against complete erosion of trust. However, AI with billions of tokens crafts messages that resonate deeply with individual beliefs and biases, leading to a more significant erosion of trust in democratic institutions. The difficulty in distinguishing AI-generated content exacerbates the spread of disinformation. Consequently, AI with billions of tokens poses a greater risk to trust in institutions due to its ability to create more convincing and resonant disinformation.
  3. Societal Divisions: While AI with millions of tokens influences public opinion, it might lack depth and nuance, somewhat limiting its potential to deepen societal divisions. On the other hand, AI with billions of tokens tailors messages to exploit specific societal fractures, potentially deepening divisions. It adapts content based on real-time feedback, effectively exacerbating tensions. Therefore, AI with billions of tokens is more effective at deepening societal divisions due to its precision and adaptability regarding content creation.
  4. Personalisation and Optimisation: AI with millions of tokens can personalise content based on user data, but with less precision and subtlety compared to more advanced AI with billions of tokens. The latter leads to more effective echo chambers and further polarisation of public opinion. With superior personalisation and optimisation capabilities, AI utilising billions of tokens contributes significantly to this effect.

While both AIs would have a significant impact on public opinion and trust in democratic institutions, the AI with billions of tokens would likely be more effective in terms of personalising and optimising messages. This could lead to more profound societal divisions, as it would be more capable of subtly reinforcing and exploiting existing societal fractures and tensions.

Global Implications

How might the use of AI in such operations affect global politics and international relations? Would it lead to an escalation in cyber warfare among nations?

  1. Global Politics: AI-driven operations with millions of tokens can influence public opinion and interfere in nations’ internal affairs, potentially leading to strained diplomatic relations. Such operations are particularly effective at targeting countries with less sophisticated digital disinformation countermeasures. However, AI with billions of tokens has a more profound impact, with the ability to manipulate public opinion, destabilise nations, or influence election outcomes, becoming a major bone of contention in international politics. Advanced AI would have a significant and destabilising impact on global politics, with the potential to sway elections and national stability.
  2. International Relations: Deploying AI with millions of tokens might prompt discussions on cyber norms and AI regulation in information warfare. However, detection and attribution challenges could hinder international consensus. In contrast, deploying AI with billions of tokens could lead to an escalation in cyber warfare tactics and trigger an AI arms race, with nations striving to develop or acquire comparable capabilities. The use of more advanced AI intensifies tensions in international relations, potentially leading to AI arms races and heightened discussions on cyber norms.
  3. Cyber Warfare: The increase in cyber tactics as a facet of geopolitical strategies is evident in AI-driven operations utilising millions of tokens. Nations might invest more in offensive and defensive cyber capabilities, including counter-AI strategies. However, AI with billions of tokens necessitates advanced countermeasures. Nations might engage in aggressive cyber operations for defence and pre-emptive disruption, signalling a new era of digital espionage and counter-espionage. AI with billions of tokens would lead to more advanced and aggressive cyber warfare tactics, necessitating sophisticated countermeasures and potentially changing the landscape of international cyber operations.
  4. Escalation in Cyber Warfare: AI-driven operations with millions of tokens contribute to escalating cyber warfare tactics. However, the utilisation of AI with billions of tokens represents a significant leap in capability, leading to more aggressive and widespread use of cyber operations. This advanced AI poses a greater risk of escalating cyber warfare tactics, with the potential for heightened aggression and wider impact.

The use of AI in disinformation campaigns, whether with millions or billions of tokens, could have a major impact on global politics and international relations. It could lead to an escalation in cyber warfare, potentially triggering an AI arms race among nations.[20], [21] The AI with billions of tokens, in particular, could represent a substantial shift in offensive cyberspace operations.

Equipped with an understanding of the differences between human-conducted operations and AI-driven operations with millions and billions of tokens respectively, the following three tables show how these were manifested in 1) the Russian interference operations targeting the US election in 2016, 2) a fictional scenario concerning an election in 2024, and 3) a fictional scenario regarding an election in 2030 with technologies such as 5G and 6G. Table 1 illustrates how the tactics used in the alleged Russian operations during the 2016 US presidential election can be mapped to traditional HUMINT approaches, adapted to the digital information age and the context of cyber warfare and information manipulation.

Table 1. Disinformation operations conducted by humans; Source: Author.

Table 2 illustrates the application of HUMINT approaches within a hypothetical scenario where an AI, equipped with millions of tokens and operating at current broadband speeds, impacts a fictional country’s election in 2024 through disinformation and influence operations.

Table 2. AI with Millions of Tokens in 2024 Fictional Election Scenario; Source: Author.

In this scenario, the AI’s ability to analyse large datasets, understand human psychology, and generate targeted content would make it highly effective in terms of manipulating public opinion and influencing the election outcome. Using these approaches in a coordinated manner could significantly impact the political landscape of the fictional country.

Table 3 illustrates potential manifestations of HUMINT approaches in the hypothetical scenario where an advanced AI with billions of tokens and 5G/6G broadband infrastructure impacts the outcome of a fictional country’s election in 2030 through disinformation/influence operations.

Table 3. Advanced AI with Billions of Tokens in 2030 Fictional Election Scenario; Source: Author.

In this scenario, the advanced AI’s capabilities, especially in processing and analysing vast amounts of data, would enable it to conduct highly sophisticated and targeted influence operations. Its ability to adapt in real time to changing circumstances and countermeasures would make it a formidable tool in shaping public opinion and electoral outcomes.

Speed and Mass as Qualities

This research investigated the impact of advancements in AI technology on the speed, adaptability, content generation, and sophistication of disinformation campaigns, and compared these aspects with traditional human-operated campaigns. The findings specifically relate to differences in speed and efficiency, content generation and sophistication, as well as adaptability and strategy modification.

In terms of speed and efficiency, human-operated campaigns are relatively slower due to manual data analysis, content creation, and decision-making processes. In contrast, AI-driven operations with millions of tokens conduct faster operations than human-operated campaigns, quickly analysing data and generating content. At the same time, AI-driven operations with billions of tokens are exceptionally fast. They can process vast datasets and complex scenarios almost instantaneously, significantly outpacing human capabilities. Consequently, AI-driven operations, particularly those with billions of tokens, are significantly faster and more efficient than human-operated campaigns, indicating a substantial advancement in the speed and efficiency of disinformation campaigns due to AI technology.

In terms of content generation and sophistication, human-operated campaigns are constrained by human creativity and resources, resulting in potentially more culturally and contextually nuanced but less abundant content. In contrast, AI-driven operations with millions of tokens can rapidly generate large volumes of content, albeit lacking some depth and cultural and contextual understanding compared to humans. However, AI-driven operations with billions of tokens can generate content at a massive scale with high sophistication, potentially matching or surpassing human levels of creativity and contextual awareness. Therefore, AI, particularly with billions of tokens, excels in both the quantity and quality of content generation, surpassing human-operated campaigns. This underscores the significant impact of AI advancements on the sophistication and variety of content in disinformation campaigns.

When it comes to adaptability and strategy modification, human-operated campaigns are slower to adapt to new information and changing circumstances, as the strategies are more rigid and less dynamic. However, AI-driven operations with millions of tokens are more adaptable than humans as they are capable of modifying strategies based on data trends. Nonetheless, they may not capture the full spectrum of human behavioural complexities. AI-driven operations with billions of tokens, on the other hand, can be highly adaptable and capable of real-time strategy modification based on a comprehensive analysis of emerging trends and nuanced human behaviours. Thus, AI-driven operations, particularly with billions of tokens, offer superior adaptability and dynamic strategy modification compared to human-operated campaigns. This highlights the role of AI in enhancing the flexibility and responsiveness of information campaigns.

To summarise, advancements in AI technology profoundly impact disinformation campaigns, particularly in terms of speed, efficiency, content generation, and adaptability.[22], [23], [24], [25] AI-driven operations, especially those with billions of tokens, demonstrate significant advantages over traditional human-operated campaigns. These advancements allow for more rapid, sophisticated, and adaptable disinformation campaigns, which can be tailored more effectively to specific audiences, and which respond more quickly to changing circumstances.

To this end, new information and communication technologies such as 5G and 6G, combined with advanced AI models that can generate human-readable text, video, and audio, are capable of influencing people’s perception of reality in general and elections in particular. Certain actors are likely researching how to develop AI-powered mass-surveillance tools that profile individuals in real time, influencing their decision-making and perception of reality. In addition, AI could be used to manipulate the information resources of elections and critical national infrastructure, such as financial systems, communication systems, or supply chains. The consequences of HUMINT-OCO disinformation operations could affect voting behaviour, undermine electoral integrity, polarise society, and erode public trust in institutions.


Gazmend Huskaj is the Head of Cyber Security at the Geneva Centre for Security Policy (GCSP) and a doctoral candidate focusing on offensive cyberspace operations at the Department of Computer and Systems Sciences (DSV), Stockholm University. Previously, he was a full-time doctoral student at the Swedish Defence University, and before that, he served as the Director of Intelligence for Cyber-related issues in the Swedish Armed Forces. He is a military and UN veteran with over five years of field experience in conflict and post-conflict areas. At the GCSP, his focus areas include Executive Education, Diplomatic Dialogue, and Policy Research & Analysis. The views expressed in this article are the author’s alone and do not represent those of the GCSP or DSV.


[1] Samuel C. Woolley and Douglas R. Guilbeault, “Computational Propaganda in the United States of America: Manufacturing Consensus Online,” 2017, https://ora.ox.ac.uk/objects/uuid:620ce18f-69ed-4294-aa85-184af2b5052e.

[2] Samuel C. Woolley and Philip N. Howard, “Computational Propaganda Worldwide: Executive Summary,” 2017, https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2017/06/Casestudies-ExecutiveSummary.pdf.

[3] Nicholas Confessore, “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far,” 2018, https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html.

[4] Katie Harbath and Chris Fernekes, “History of the Cambridge Analytica Controversy,” 2023, https://bipartisanpolicy.org/blog/cambridge-analytica-controversy/.

[5] Alex Hern, “Academic at centre of Cambridge Analytica scandal sues Facebook,” 2019, https://www.theguardian.com/uk-news/2019/mar/18/aleksandr-kogan-cambridge-analytica-scandal-sues-facebook.

[6] Lesley Stahl, “Aleksandr Kogan: The link between Cambridge Analytica and Facebook,” 2018, https://www.cbsnews.com/news/aleksandr-kogan-the-link-between-cambridge-analytica-and-facebook-60-minutes/.

[7] Wu Youyou, Michal Kosinski, and David Stillwell, “Computer-based personality judgments are more accurate than those made by humans,” PNAS 112, no. 4 (2014): 1036–1040, https://www.pnas.org/cgi/doi/10.1073/pnas.1418680112.

[8] Michal Kosinski, Yilun Wang, Himabindu Lakkaraju, and Jure Leskovec, “Mining Big Data to Extract Patterns and Predict Real-Life Outcomes,” Psychological Methods 21, no. 4 (2016): 493–506, http://dx.doi.org/10.1037/met0000105.

[9] S.C. Matz, M. Kosinski, G. Nave, and D.J. Stillwell, “Psychological targeting as an effective approach to digital mass persuasion,” PNAS 114, no. 48 (2017): 12714–12719, https://www.pnas.org/cgi/doi/10.1073/pnas.1710966114.

[10] UZH News, “GPT-3 Informs and Disinforms Us Better,” 2023, https://www.news.uzh.ch/en/articles/media/2023/GPT3.html.

[11] Nature, “Tools such as ChatGPT threaten transparent science; here are our ground rules for their use,” 2023, https://www.nature.com/articles/d41586-023-00191-1.

[12] Department of the Army, “FM 2-22.3 (FM 34-52) Human Intelligence Collector Operations,” 2006, https://www.marines.mil/Portals/1/Publications/FM%202-22.3%20%20Human%20Intelligence%20Collector%20Operations_1.pdf.

[13] Gazmend Huskaj, “The Current State of Research in Offensive Cyberspace Operations,” in 18th European Conference on Cyber Warfare and Security, edited by T. Cruz and P. Simoes, 660–667, 2019.

[14] OpenAI, “Understanding OpenAI GPT Tokens: A Comprehensive Guide,” 2023, https://gpt.space/blog/understanding-openai-gpt-tokens-a-comprehensive-guide.

[15] C. Dunn, “OpenAI tokens and limits,” 2023, https://devblogs.microsoft.com/surface-duo/android-openai-chatgpt-15.

[16] Raf, “What are tokens and how to count them?” 2023, https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them.

[17] OpenAI, “New models and developer products announced at DevDay,” 2023, https://openai.com/blog/new-models-and-developer-products-announced-at-devday.

[18] B. Millidge, “The Scale of the Brain vs Machine Learning,” 2022, https://www.beren.io/2022-08-06-The-scale-of-the-brain-vs-machine-learning/.

[19] B.H. Cottman, “How close is GPT-3 to Artificial General Intelligence?” 2021, https://towardsdatascience.com/how-close-is-gpt-3-to-artificial-general-intelligence-cb057a8c503d.

[20] Amandeep Singh-Gill, “A New Arms Race and Global Stability,” 2020, https://www.cigionline.org/articles/new-arms-race-and-global-stability/.

[21] Andrew R. Chow and Billy Perrigo, “The AI Arms Race Is Changing Everything,” 2023, https://time.com/6255952/ai-impact-chatgpt-microsoft-google/.

[22] Rik Ferguson, “Addressing the State of AI’s Impact on Cyber Disinformation/Misinformation,” 2023, https://www.securityweek.com/addressing-the-state-of-ais-impact-on-cyber-disinformation-misinformation.

[23] K. Sedova, C. McNeill, A. Johnson, A. Joshi, and I. Wulkan, “AI and the Future of Disinformation Campaigns: Part 1: The RICHDATA Framework,” 2021, https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns.

[24] K. Sedova, C. McNeill, A. Johnson, A. Joshi, and I. Wulkan, “AI and the Future of Disinformation Campaigns: Part 2: A Threat Model,” 2021, https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2.

[25] E. Karinshak and Y. Jin, “AI-driven disinformation: a framework for organisational preparation and response,” Journal of Communication Management 27, no. 4 (2023): 539–562, https://doi.org/10.1108/JCOM-09-2022-0113.
