Abstract: Cognitive warfare and especially disinformation now rely heavily on social media platforms, cybertechnologies and AI, with the aim of creating confusion, societal polarization, mistrust, anger, and hatred against governments, organizations, communities or opposing individuals. While disinformation is a global problem, early defences based on censorship also threaten core Western values such as freedom of speech and democracy. Unsurprisingly, when surveyed, EU citizens overwhelmingly consider disinformation a threat to democracy.
Problem statement: How can cybersecurity and AI support defences against cognitive threats while serving democratic values and human rights, and what is the case for transparent, customizable cognitive support tools at endpoints?
So what?: The need to complement existing defences at endpoints is analysed, and indicative functionality is outlined and grouped according to the different response objectives – namely, support and education, threat surface reduction, detection and response, and situational awareness. A conceptual architecture, requirements analysis, use cases and proof-of-concept functionality could extend this work to illustrate its key points.
The rise of social media and demagogues
The rise of demagogues through democracy is not a new phenomenon, nor are their attempts to exploit new communication media to spread propaganda, manipulate the public and eventually lead them to tyranny. As early as the inception of democracy, Plato warned of the danger of demagogues turning democracy’s freedoms against itself. In modern times social media, as a new communication medium, invites many parallels to be drawn with historical examples, albeit now with global reach and amplified consequences.
While studies agree that mainstream media such as newspapers, radio and television remain the most important communication platforms, they also acknowledge the growing popularity of social media as a news and media outlet, especially among younger demographics. As the Flash Eurobarometer 536 survey reveals, a quarter of EU citizens, particularly those among the 15–24 age group,[1] have found data and statistics about their country or Europe on social media. Similarly, almost two in five respondents to the 2023 Media & News Survey (and three in five 15–24-year-olds) used social media to access news.[2] The percentage was even higher in the UK, with almost half of UK adult respondents and 71 per cent of 16–24-year-olds using social media for news.[3] Notably, the rise of TikTok as a news media platform has been steep, with ten per cent of those aged over 16 receiving news through it in 2023, up from one per cent in 2020.[4]
It is therefore understandable that political parties, organizations and individuals use social media to reach their audiences. However, unlike mainstream media, where the same content is transparently available to all who choose to access it, social media content is curated, microtargeted, promoted or suppressed by opaque platform algorithms, often irrespective of user choice.[5] This limits accountability and opens the door for demagogues seeking to use disinformation to manipulate and polarize. Despite the challenges of auditing, disinformation tracking software such as that developed by researchers at Trollrensics has emerged. It has uncovered coordinated networks that flooded social networks with disinformation during the 2024 European elections, particularly in the interests of far-right parties. An analysis of 2.3 million posts in France, Germany, Italy and the Netherlands revealed 50,000 accounts spreading disinformation: one in five posts mentioning far-right French politician Éric Zemmour came from disinformation accounts, as did one in ten German posts about Alternative für Deutschland.[6] With three billion people across the world expected to vote in elections in 2024 and 2025, it is perhaps unsurprising that the World Economic Forum (WEF) has identified disinformation as the most severe global risk over the next two years. The WEF also confirms the strong links between disinformation and societal and political polarization, interstate violence, and the erosion of human rights.[7] Democracy and human rights (including free speech) are particularly important values to Western societies.[8]
From disinformation to polarization and cognitive warfare
In addition to attempting to sway elections in favour of autocratic candidates, the broader role of disinformation in cognitive warfare should be considered. Professor Seumas Miller recognizes disinformation and sophisticated psychological manipulation techniques as key features of cognitive warfare.[9] Relying heavily on social media platforms, cybertechnologies and AI, these techniques remain closely interlinked and aim to cause confusion, societal polarization, mistrust, anger and hatred towards Western governments, organizations, communities or opposing individuals.[10] [11] The war in Ukraine has provided ample examples of the role of disinformation and foreign information manipulation and interference (FIMI) in cognitive warfare, and of how Ukrainian forces have adapted their defences accordingly.[12]
Arguably, allowing these threats to proliferate could lead to the rise of extremist, far-right and misogynistic movements, which could threaten human rights. Some early indications can be seen in a study by King’s College London and Ipsos which showed that younger male participants expressed more negative views of feminism than their older counterparts.[13] Andrew Kaung, a former TikTok analyst, revealed the differences in content recommendations that teenage girls and boys received, irrespective of their choices. Teenage boys were shown violent misogynistic content; girls were shown content related to music or makeup.[14] A further report by the UK’s National Police Chiefs’ Council (NPCC) has indicated a notable rise in the number of crimes against women and girls in the UK, which may be linked to the radicalization of men by social media influencers promoting misogyny. As a result, the NPCC has since upgraded gender-based crimes to a national threat akin to organized crime and terrorism.[15]
An example of disinformation fuelling violence and extremism was seen after the killing of three children in Southport in the UK in July 2024. Despite the UK authorities publishing the details of the suspect, who was born in the UK, the crime had already been attributed to immigrants through disinformation from foreign-owned websites. The false association between immigration and violent crime has had the unfortunate effect of mobilizing far-right groups that have resorted to attacking immigration support structures across the country. There was a particular focus on Muslim and refugee communities, which led to attempts to incite anger, violence, anxiety and fear across society.[16] [17] It would be premature to attribute this disinformation incident to FIMI actors at the time of writing. Nevertheless, whatever the intention or attribution, its effects were real, and this relationship should be acknowledged.
The 2nd EEAS report on FIMI Threats offers an updated overview of the FIMI ecosystem and reveals its global scale and diverse range of targets. Nearly half the analysed cases targeted countries across the globe, 30 per cent targeted organizations (such as the EU, NATO and Euronews), and nearly 20 per cent targeted individuals, including non-political figures. Furthermore, there seems to be an emerging trend of gender-based and anti-LGBTIQ+ FIMI attacks.[18]
It would be remiss not to consider the potential implications of AI-generated fake content, which the WEF identified as a significant risk for 2024.[19] It is noteworthy that AI-generated audio imitating the voices of politicians has already been utilized in a limited capacity in FIMI cases.[20] The relatively low technological barrier to creating fake content, coupled with the speed and volume at which it can reach individuals, causes concern. Notable examples illustrating its impact, besides character assassination, are deepfake pornography and stock market manipulation. Explicit deepfake images of US singer Taylor Swift reached millions of views before eventually being removed. Similarly, the promotion of a deepfake image featuring a Pentagon explosion affected US stock markets before the US authorities countered the rumours.[21]
It is possible that this climate of intimidation, polarization and violence, with FIMI in a featured role, will also lead to self-censorship, apathy or coercion if people fear the unwanted consequences of defamation or violence for speaking up. The 2023 Freedom on the Net report indicates that there have been a significant number of attacks against free speech.[22] In three quarters of the countries surveyed, individuals have faced legal repercussions for expressing themselves online. In four out of every seven countries, this has even resulted in physical assault or loss of life.
Censorship vs free choice
Autocratic regimes have been known to resort to conventional and AI-powered censorship to control the narrative. This can be manifested in several ways, including the blocking of dissenting political, religious or social content, the repression of free speech, and the gradual yet consistent divergence from international human rights conventions.[23] However, censorship could not work in Western societies without eventually opposing their core values and freedoms. The WEF flags the risk that some governments will act too slowly as they weigh the trade-off between preventing disinformation and protecting free speech, while others may erode human rights and increase censorship by adopting authoritarian practices.[24]
EU citizens also recognize these risks and overwhelmingly consider disinformation a threat to democracy.[25] Considerable work is underway to gain a deeper understanding of cognitive warfare and develop collaborative multilevel defences.[26] [27] A noteworthy and comprehensive response framework for FIMI threats is the FIMI Toolbox, which is based on a multilevel, collaborative, multidisciplinary, whole-of-society approach.[28]
When considering who has the right and responsibility to decide on the level of protection, there are several stakeholders, each with distinct responsibilities. While it is within the authorities’ power to define, regulate and block patterns of illegal activity, there is still scope for further protective measures, which individual citizens could choose to adopt and control for themselves.
User susceptibility to fake stories
Maertens et al. designed the Misinformation Susceptibility Test (MIST) to understand the scale of human error in identifying fake stories.[29] A survey of approximately 1,500 US citizens found that two out of three news stories could be correctly identified. Younger adults and those relying on social media for their news, however, were less successful.[30] Meanwhile, the Eurobarometer survey, conducted in the EU, indicated that 30 per cent of surveyed EU citizens were not confident that they could recognize disinformation. Confidence decreased with age and increased with level of education.[31] A UK-based Ofcom survey reported similar levels of uncertainty: one in three UK internet users were found to be unsure or unaware of the truthfulness of online information. It is also noteworthy that a small subset, six per cent, even believed everything online was unquestionably true.[32] It would be fair to say that error or uncertainty levels are high, particularly when the error rates of another human-related threat, phishing, are considered. While not directly comparable threats or studies, the 2024 Verizon Data Breach Investigations report may still merit consideration. It suggests that phishing click rates ranged from three to ten per cent over the last eight years.[33]
To reduce error rates, it may be helpful to consider the potential impact of education. In the case of phishing, Spitzner empirically suggests that initial click rates at the outset of an organization’s journey to raising awareness typically range between 25 and 30 per cent, before eventually dropping to less than five per cent within 18 to 22 months.[34] Awareness and education can highlight cognitive biases and emotional manipulation and encourage critical thinking, allowing humans to spot warning signs of unusual and unexpected attacks. It is also worth acknowledging the wider range of complementary, multilayered technological approaches that can reduce the threat space through automation and, by encouraging users to adhere to security norms, ultimately reduce the likelihood of human error. These could include email content filtering, blacklisting of known accounts, and email origin authentication and validation (in the form of DMARC, DKIM and SPF).
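To make the idea of automated origin checks concrete, the sketch below, a minimal and purely illustrative example rather than any cited tool, reads the RFC 8601 Authentication-Results header of an incoming email to see whether SPF, DKIM and DMARC checks passed; the sample message and quarantine rule are assumptions.

```python
# Minimal sketch: inspecting email authentication results (SPF/DKIM/DMARC)
# before a message reaches the user. Header names follow RFC 8601; the sample
# message and the quarantine rule are illustrative assumptions.
from email import message_from_string

RAW_MESSAGE = """\
Authentication-Results: mx.example.org;
 spf=pass smtp.mailfrom=news@example.com;
 dkim=pass header.d=example.com;
 dmarc=pass header.from=example.com
From: news@example.com
Subject: Weekly briefing

Body text...
"""

def authentication_verdict(raw: str) -> dict:
    """Extract pass/fail results for SPF, DKIM and DMARC from the headers."""
    msg = message_from_string(raw)
    results = " ".join(msg.get_all("Authentication-Results", []))
    return {
        mechanism: (f"{mechanism}=pass" in results)
        for mechanism in ("spf", "dkim", "dmarc")
    }

if __name__ == "__main__":
    verdict = authentication_verdict(RAW_MESSAGE)
    print(verdict)  # e.g. {'spf': True, 'dkim': True, 'dmarc': True}
    if not all(verdict.values()):
        print("Quarantine: message failed origin authentication checks.")
```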
Returning to FIMI and disinformation, it would be useful to consider how AI and human-centric security could help reduce the likelihood of human error (assuming user consent is present). This could involve reducing the threat space, the cognitive load of distinguishing the legitimacy or authenticity of stories, and the technological gap between humans and technological controls.
Disinformation detection
As a preliminary step towards reducing human error and maximizing user support, this section explores disinformation detection approaches, including sentiment analysis, propagation pattern analysis, origin reputation, provenance, deepfake detection, confirmation bias user profiling and factchecking. Rather than an exhaustive list, this represents a selection of approaches that have informed the options presented in this article.
Early approaches focused on signs of emotionally charged manipulative language or discourse patterns featured in news stories and social media reactions. These approaches involved natural language processing and sentiment analysis of social network content, particularly on X/Twitter.[35]
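As a simple illustration of the sentiment signal such approaches build on, the following sketch scores sample posts with NLTK’s VADER analyser; the example posts and the 0.6 “emotionally charged” threshold are assumptions for demonstration only, not validated detection settings.

```python
# Illustrative sketch: scoring emotionally charged language in social media posts
# with NLTK's VADER sentiment analyser. The example posts and the 0.6 threshold
# are assumptions for demonstration, not validated detection settings.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-off lexicon download

posts = [
    "Officials publish the quarterly migration statistics.",
    "OUTRAGEOUS!!! They are LYING to you and destroying our country!!!",
]

sia = SentimentIntensityAnalyzer()
for post in posts:
    scores = sia.polarity_scores(post)       # neg/neu/pos/compound in [-1, 1]
    charged = abs(scores["compound"]) > 0.6  # crude proxy for manipulative tone
    print(f"{scores['compound']:+.2f}  charged={charged}  {post[:50]}")
```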
A prominent indicator worthy of our attention is how these stories spread. Investigations showed that stories aiming to evoke strong reactions were likely to spread faster, or at least differently, than genuine news. Another advantage of identifying anomalous propagation patterns is that it is content-agnostic, making it more easily applicable to multilingual environments. Graph neural networks, or temporal graph networks, can be especially effective at indicating signs of rapidly growing news stories, even adjusting to evolving propagation patterns.[36] [37]
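The sketch below illustrates, on invented data, the kind of content-agnostic propagation features (depth, breadth, share velocity) that such models could build on; it uses networkx on a toy reshare cascade and is not drawn from the cited studies.

```python
# Illustrative sketch: content-agnostic propagation features for a share cascade,
# using networkx. Nodes are accounts, edges are reshares, timestamps are minutes
# since the first post. The cascade data and feature choices are assumptions.
import networkx as nx

# (source, resharer, minutes_since_origin)
reshares = [
    ("origin", "a", 1), ("origin", "b", 2), ("origin", "c", 2),
    ("a", "d", 3), ("a", "e", 3), ("b", "f", 4), ("d", "g", 5),
]

cascade = nx.DiGraph()
for src, dst, t in reshares:
    cascade.add_edge(src, dst, minutes=t)

depth = nx.dag_longest_path_length(cascade)          # how many hops deep
breadth = max(dict(cascade.out_degree()).values())   # widest single fan-out
duration = max(t for _, _, t in reshares)
velocity = cascade.number_of_edges() / duration      # reshares per minute

print(f"depth={depth}, breadth={breadth}, velocity={velocity:.2f} reshares/min")
# Such features could feed a classifier or a (temporal) graph neural network.
```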
Similarly, it may be possible to identify the anomalous behaviour of bot accounts spreading disinformation as a basis for informing their reputation. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) could go even further by cryptographically signing media content to verify its source and editing history. The presence of provenance information, or even its lack, could help improve trust in the authenticity and origin of image, audio or video content.[38]
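The following sketch conveys the provenance idea in simplified form: a publisher signs a hash of the media bytes plus a small manifest, and a verifier checks both. It illustrates the concept only; it does not implement the actual C2PA specification, and the manifest fields are invented.

```python
# Conceptual sketch of provenance signing in the spirit of C2PA: a publisher signs
# a hash of the media bytes plus a small provenance manifest, and anyone holding
# the public key can verify origin and detect tampering. Illustrative only; this
# is not an implementation of the C2PA specification.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

media_bytes = b"...raw image or video bytes..."
manifest = {
    "publisher": "Example Newsroom",          # assumed example values
    "created": "2024-09-01T10:00:00Z",
    "edits": ["crop", "colour-correction"],
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
}
payload = json.dumps(manifest, sort_keys=True).encode()

private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Verification side: recompute the hash, then check the signature.
public_key = private_key.public_key()
assert hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]
public_key.verify(signature, payload)  # raises InvalidSignature if tampered
print("Provenance manifest verified.")
```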
Deepfake detection aims to identify anomalous artefacts introduced by AI content-generation and editing tools. In deepfake videos such inconsistencies may be observed in movement or misalignment of key facial points, and in unusual lighting, shadows and reflections, both within individual frames and across frame sequences. Various methods can be used to detect deepfakes, with deep learning and multimodal deep learning approaches proving particularly effective.[39]
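A schematic sketch of the frame-level, deep learning side of this is given below: a tiny, untrained PyTorch classifier that maps video frames to a manipulation probability. Real detectors are trained on large labelled datasets and often fuse visual, audio and temporal cues; everything here, including the dummy frames, is illustrative.

```python
# Schematic sketch of a frame-level deepfake classifier in PyTorch. The tiny CNN
# below is untrained and purely illustrative; real detectors are trained on large
# labelled datasets and often combine visual, audio and temporal (multimodal) cues.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # single logit: probability the frame is fake

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))

model = FrameClassifier().eval()
frames = torch.rand(8, 3, 224, 224)   # a batch of video frames (dummy data)
with torch.no_grad():
    fake_probability = model(frames).mean().item()   # aggregate frame scores per video
print(f"Estimated probability of manipulation: {fake_probability:.2f}")
```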
Another indicator considers the possibility that an individual is more likely to believe and spread misinformation if it already aligns with their existing beliefs, a phenomenon known as confirmation bias. User profiles built from historical activity could therefore help predict which individuals might unwittingly spread misinformation.[40]
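A minimal sketch of this idea follows: it compares a user’s historical topic profile with the topic distribution of an incoming story using cosine similarity. The topic set, vectors and threshold are invented for illustration and do not come from the cited work.

```python
# Illustrative sketch: estimating how closely a new story aligns with a user's
# historical interests (a crude confirmation-bias signal). Topic vectors and the
# 0.8 threshold are invented assumptions for demonstration purposes.
import numpy as np

TOPICS = ["immigration", "economy", "health", "celebrity"]

# Proportions of each topic in the user's historical engagement.
user_profile = np.array([0.70, 0.15, 0.10, 0.05])

# Topic distribution of an incoming story (e.g. from a topic model).
story_vector = np.array([0.80, 0.10, 0.05, 0.05])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

alignment = cosine(user_profile, story_vector)
print(f"alignment={alignment:.2f}")
if alignment > 0.8:
    print("High alignment: user may be predisposed to believe and reshare this story.")
```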
The above techniques are designed to detect various patterns of anomalous activity by different entities, demonstrating that disinformation detection is indeed possible. There is potential for further improvement by combining these techniques, or even by complementing them with the mapping of the broader characteristics of FIMI and cyber incidents, as defined in the DISARM and ATT&CK frameworks respectively.[41]
Finally, it is important to consider the powerful potential of computer–human teaming methods in the context of factchecking. Communities worldwide collaborate to investigate the accuracy of information based on journalistic standards and to unpack the narrative, intent and potential impact behind disinformation.[42] In the emerging field of generative AI, Large Language Models (LLMs) trained on disinformation datasets can incorporate factchecking functionality, which is particularly relevant to end users. Although LLMs show great promise, it would be prudent to await further evidence of their accuracy and resilience to disinformation attacks.
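The sketch below shows one hedged way such LLM-assisted factchecking might be invoked: the claim is checked only against evidence snippets supplied in the prompt and a constrained verdict is requested. It assumes access to an OpenAI-compatible API; the model name, claim and evidence are illustrative, and any output would still require human review.

```python
# Hedged sketch of LLM-assisted factchecking: the claim is checked only against
# evidence snippets supplied in the prompt, and the model is asked to answer in a
# constrained format. Assumes access to an OpenAI-compatible API; the model name,
# claim and evidence below are illustrative, and outputs still need human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

claim = "A foreign national was arrested for the Southport attack."
evidence = [
    "Police statement: the suspect was born in Cardiff, UK.",
    "Court records list the suspect's place of birth as the United Kingdom.",
]

prompt = (
    "You are a careful factchecker. Using ONLY the evidence provided, label the "
    "claim as SUPPORTED, REFUTED or NOT ENOUGH EVIDENCE, then give one sentence "
    f"of justification.\n\nClaim: {claim}\n\nEvidence:\n- " + "\n- ".join(evidence)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```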
Towards endpoint solutions for FIMI threats
While cybersecurity principles have inspired the FIMI Toolbox, it is important to acknowledge its stronger sociocognitive elements, which extend beyond technical aspects to encompass a broader range of societal considerations. Its collective response protocols involve an extensive network of relevant stakeholders across society, each with distinct responsibilities, ensuring proportional, adaptive, collective, understandable and effective responses.[43]
Users and citizens, as stakeholders, have roles and responsibilities in protecting their own information space, which raises the question of how a response paradigm could be provided transparently and democratically. To this end, it is suggested that protection, detection and support functionality be made available at endpoints, where users can freely decide which options to enable, supported by customizable default settings. Such user-centric functionality would provide the capacity for the greatest possible support, minimize the risk of human error, and accompany each option with the freedom to enable or disable it at the user level. A group of indicative options for users is outlined below: support and education; threat surface reduction; detection and response; and situational awareness.
Support and education
User-initiated support that facilitates factchecking, credibility/reputation scoring, bot detection, disinformation tracking and education could be made available to users through browser extensions, context menu options or LLMs. For example, audio and video verification functionality (akin to solutions such as Microsoft Video Authenticator, Resemble AI, Sensity AI or WeVerify) could be invoked to assess whether a suspected deepfake is authentic. Simplified reports for factchecking, reverse image searches and content verification could also prove useful. Additionally, access to educational training resources could be facilitated to help users recognize warning signs of disinformation and emotional manipulation, operate suitable tools, understand their output, and select suitable and proportionate response options. Support functionality could also facilitate access to disinformation resources and communities for users who wish to volunteer, connect or report suspected threats.[44]
Threat surface reduction
Options for reducing the threat surface could include automated countermeasures for known threats that users would prefer not to see regularly. Several countermeasures could be employed, such as highlighting flagged content, filtering it, replacing it with its authentic alternative or saving it in a secondary location for future review (similar to spam folders for suspected junk email). For example, the default setting might be configured to automatically filter content associated with known disinformation accounts. However, a user might also filter out deepfake political content or content featuring violence and extremism. Another user might want to redirect political content that lacks a verified origin to a secondary location for later review. To avoid undue technological barriers, customizable recommended default settings and user-friendly interfaces that encourage proportionate and appropriate threat reduction would be beneficial, regardless of the social media applications used.
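To illustrate how such user-chosen countermeasures might be expressed at the endpoint, the sketch below maps content labels (assumed to come from upstream detectors) to actions the user has configured; the labels, rules and sample posts are invented.

```python
# Sketch of user-controlled threat-surface reduction at the endpoint: each rule
# maps a content label (assumed to come from upstream detectors) to an action the
# user has chosen. Labels, rules and posts are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    labels: set[str] = field(default_factory=set)   # e.g. {"deepfake", "political"}

# User-customizable defaults: label -> action
RULES = {
    "known_disinfo_account": "filter",     # hide automatically
    "deepfake": "highlight",               # show with a warning banner
    "unverified_political": "quarantine",  # move to a review folder
}

def apply_rules(post: Post) -> str:
    """Return the strongest action triggered by the post's labels."""
    priority = ["filter", "quarantine", "highlight", "show"]
    actions = [RULES.get(label, "show") for label in post.labels] or ["show"]
    return min(actions, key=priority.index)

feed = [
    Post("bot_4821", "SHOCKING truth they hide!", {"known_disinfo_account"}),
    Post("friend", "Lovely walk today.", set()),
    Post("unknown", "Leaked speech video", {"deepfake", "unverified_political"}),
]
for post in feed:
    print(apply_rules(post), "->", post.author)
```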
Detection and response
The detection functionality could focus on identifying residual activity and subtler warning signs of novel disinformation threats. Such instances could be reported to the user, escalated to human-computer teams for analysis, or logged locally for future investigation. For example, it might be possible to identify users who are prone to unwittingly forwarding misinformation to others. A user activity report highlighting the misinformation might lead to useful prompts and guidance to relevant educational content.
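A minimal sketch of such a user activity report is shown below: links the user has reshared are compared locally against a list of flagged domains, and a private prompt towards educational content is generated. The domain list and links are invented.

```python
# Sketch of the user activity report mentioned above: locally comparing links a
# user has reshared against a list of flagged disinformation domains, then
# generating a private prompt towards educational content. All data are invented.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"fake-news.example", "rumour-mill.example"}   # assumed feed

shared_links = [
    "https://fake-news.example/immigrant-crime-wave",
    "https://reputable-paper.example/local-elections",
    "https://rumour-mill.example/deepfake-speech",
]

flagged = [u for u in shared_links if urlparse(u).hostname in FLAGGED_DOMAINS]

if flagged:
    print(f"You recently reshared {len(flagged)} item(s) that factcheckers have flagged:")
    for url in flagged:
        print(f"  - {url}")
    print("Tip: see the 'spotting manipulative content' module in your training library.")
```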
Situational awareness
It may be beneficial to exchange threat intelligence information that aids situational awareness and helps link events with other domains. Post-incident review of user settings could also fall under this functionality group.
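As an illustration of what shared situational-awareness data might look like, the sketch below builds a simplified, STIX-style indicator record for a disinformation campaign; the field names follow STIX 2.1 conventions, but the record is illustrative and not a validated STIX bundle.

```python
# Sketch of sharing a disinformation indicator with other defenders in a
# STIX-like JSON structure (field names follow STIX 2.1 conventions, but this is
# a simplified, illustrative record, not a validated STIX bundle).
import json
import uuid
from datetime import datetime, timezone

indicator = {
    "type": "indicator",
    "id": f"indicator--{uuid.uuid4()}",
    "created": datetime.now(timezone.utc).isoformat(),
    "name": "Coordinated account cluster amplifying election disinformation",
    "pattern": "[url:value = 'https://fake-news.example/']",
    "pattern_type": "stix",
    "labels": ["disinformation", "coordinated-inauthentic-behaviour"],
}

print(json.dumps(indicator, indent=2))  # ready to share with a community (e.g. via TAXII)
```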
Conclusions and future work
The considerable concern about disinformation, the importance of democratic values and the degree of uncertainty users express about their ability to identify disinformation correctly together suggest the need to strengthen protection at endpoints and indicate that users may be willing to adopt the proposed functionality. The technological gap or privacy concerns may prove to be barriers for some people. Generative AI could prove particularly helpful in bridging technological gaps in user support, as could the use of optimal default profile settings. Raising awareness of privacy-enhancing technologies could alleviate fears and assure privacy protection.
Privacy-enhancing technologies such as differential privacy and federated learning could enable the utility of relevant data while assuring their privacy in accordance with data protection principles. This is particularly important for supporting detection and response, situational awareness or user profiling, where the privacy requirements would be higher. As the focus is on examining content rather than user behaviour, it could be argued that the privacy requirements of support, education and threat space reduction functionality would be relatively lower. In any case, the privacy requirements of any endpoint functionality must be determined and justified before seeking user consent.
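As a minimal illustration of the differential-privacy idea, the sketch below releases a noisy count of how many users enabled a particular filter using the Laplace mechanism; the count and the privacy budget (epsilon) are assumptions.

```python
# Minimal differential-privacy sketch: releasing a noisy count of how many users
# enabled a given filter, using the Laplace mechanism. Epsilon and the raw count
# are illustrative; federated learning would similarly keep raw data on-device.
import numpy as np

rng = np.random.default_rng(42)

true_count = 1_337          # users who enabled the "filter deepfakes" option
sensitivity = 1             # one user changes the count by at most 1
epsilon = 0.5               # privacy budget: smaller = noisier = more private

noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
print(f"Reported (privacy-preserving) count: {round(noisy_count)}")
```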
The proposed endpoint functionality aims to complement existing defences and social media controls by democratizing protection. It seeks to empower users with the right and responsibility to control their own information space, irrespective of their social media applications, encouraging transparency. It aims to bridge the technological gap between humans and disinformation controls, maximize support, reduce the likelihood of human error, and promote secure behaviour as the norm. It also strives to offer freedom of choice to individuals in cases where centralized controls could risk eroding democratic values and human rights. A conceptual architecture, requirements analysis, use cases and proof of concept functionality could extend this work in future to illustrate its key points.
Dr Maria Papadaki is an associate professor in Cyber Security at the University of Derby, UK. Her research interests focus on incident response, threat intelligence, maritime cybersecurity, and human-centred security. Her research outputs include more than 70 international peer-reviewed publications in this area. The views contained in this article are the author’s alone and do not represent the views of the University of Derby.
[1] European Commission, ‘Flash Eurobarometer 536 Report: Public awareness and trust in European statistics’, February 2024, https://europa.eu/eurobarometer/surveys/detail/2955.
[2] European Parliament, ‘Media & News Survey 2023’, November 2023, https://europa.eu/eurobarometer/surveys/detail/3153.
[3] OFCOM, ‘News consumption in the UK: 2023, Research findings’, July 2023, https://www.ofcom.org.uk/media-use-and-attitudes/attitudes-to-news/news-consumption/.
[4] Idem.
[5] Urbano Reviglio and Claudio Agosti, ‘Thinking Outside the Black-Box: The Case for “Algorithmic Sovereignty” in Social Media’, Social Media + Society Volume 6 Issue 2 (April 2020), https://doi.org/10.1177/2056305120915613.
[6] Lisa O’Carroll, ‘Disinformation networks “flooded” X before EU elections, report says’, July 2024, https://www.theguardian.com/world/article/2024/jul/12/disinformation-networks-social-media-x-france-germany-italy-eu-elections.
[7] World Economic Forum, ‘The Global Risks Report 2023: 18th Edition Insight Report’, January 2023, https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf.
[8] European Parliament, ‘Eurobarometer: Parlemeter 2022’, January 2023; European Commission, ‘Flash Eurobarometer 536 Report: Public awareness and trust in European statistics’, February 2024, https://europa.eu/eurobarometer/surveys/detail/2955.
[9] Seumas Miller, ‘Cognitive warfare: An ethical analysis’, Ethics and Information Technology Volume 25, Issue 46 (September 2023), https://doi.org/10.1007/s10676-023-09717-7.
[10] Nicolas Hénin, ‘FIMI: Towards a European redefinition of Foreign Interference’, EU Disinfo Lab, April 2023, https://www.disinfo.eu/publications/fimi-towards-a-european-redefinition-of-foreign-interference/.
[11] Erika Magonara and Apostolos Malatras, ‘Foreign Information Manipulation and Interference (FIMI) and Cybersecurity – Threat Landscape’, ENISA, December 2022, https://www.enisa.europa.eu/publications/foreign-information-manipulation-interference-fimi-and-cybersecurity-threat-landscape/.
[12] Jakub Kalenský and Roman Osadchuk, ‘How Ukraine fights Russian disinformation: Beehive vs mammoth’, Hybrid CoE Research Report 11, January 2024.
[13] King’s College London and Ipsos, ‘Emerging tensions? How younger generations are dividing on masculinity and gender equality’, February 2024, https://www.kcl.ac.uk/policy-institute/assets/emerging-tensions.pdf.
[14] Marianna Spring, ‘“It stains your brain”: How social media algorithms show violence to boys’, BBC Panorama, September 2024, https://www.bbc.co.uk/news/articles/c4gdqzxypdzo.
[15] NPCC, ‘Violence Against Women and Girls (VAWG): National Policing Statement 2024’, July 2024, https://cdn.prgloo.com/media/5fc31202dd7e411ba40d29fdca7836fd.pdf.
[16] Mark Easton, ‘Protests reveal deep-rooted anger, but UK is not at boiling point’, August 2024, https://www.bbc.co.uk/news/articles/czx66dkx3wlo.
[17] Marianna Spring, ‘Did social media fan the flames of riot in Southport?’, July 2024, https://www.bbc.co.uk/news/articles/cd1e8d7llg9o.
[18] European External Action Service (EEAS), ‘2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence’, January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[19] World Economic Forum, ‘The Global Risks Report 2023: 18th Edition Insight Report’, January 2023, https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf.
[20] European External Action Service (EEAS), ‘2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence’, January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[21] Luke Hurst, ‘How a fake image of a Pentagon explosion shared on Twitter caused a real dip on Wall Street’, Euronews, May 2023, https://www.euronews.com/next/2023/05/23/fake-news-about-an-explosion-at-the-pentagon-spreads-on-verified-accounts-on-twitter.
[22] Shahbaz, Funk, and Vesteinsson, ‘The Repressive Power of Artificial Intelligence’, in Freedom on the Net 2023, Shahbaz, Funk, Vesteinsson, Brody, Baker, Grothe, Barak, Masinsin, Modi, and Sutterlin, eds (Freedom House, 2023), freedomonthenet.org.
[23] Idem.
[24] World Economic Forum, ‘The Global Risks Report 2023: 18th Edition Insight Report’, January 2023, https://www3.weforum.org/docs/WEF_Global_Risks_Report_2023.pdf.
[25] European Commission, ‘Flash Eurobarometer 464: Fake news and disinformation online’, April 2018, https://europa.eu/eurobarometer/surveys/detail/2183.
[26] European External Action Service (EEAS), ‘2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence’, January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[27] European External Action Service (EEAS), ‘1st EEAS Report on Foreign Information Manipulation and Interference Threats towards a framework for networked defence’, February 2023, https://www.eeas.europa.eu/sites/default/files/documents/2023/EEAS-DataTeam-ThreatReport-2023.pdf.
[28] European External Action Service (EEAS), ‘2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence’, January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[29] Rakoen Maertens, Friedrich M. Götz, Hudson F. Golino, Jon Roozenbeek, Claudia R. Schneider, Yara Kyrychenko, John R. Kerr, Stefan Stieger, William P. McClanahan, Karly Drabot, James He, and Sander van der Linden, ‘The Misinformation Susceptibility Test (MIST): A psychometrically validated measure of news veracity discernment’, Behavior Research Methods Volume 56, 1863–1899 (March 2024), https://doi.org/10.3758/s13428-023-02124-2.
[30] Linley Sanders, ‘How well can Americans distinguish real news headlines from fake ones?’, June 2023, https://today.yougov.com/politics/articles/45855-americans-distinguish-real-fake-news-headline-poll.
[31] European Parliament, ‘Media & News Survey 2022’, July 2022, https://europa.eu/eurobarometer/surveys/detail/2832.
[32] OFCOM, ‘The genuine article? One in three internet users fail to question misinformation’, March 2023, https://www.ofcom.org.uk/media-use-and-attitudes/attitudes-to-news/one-in-three-internet-users-fail-to-question-misinformation/.
[33] Verizon, ‘2024 Data Breach Investigations Report’, May 2024, https://www.verizon.com/business/resources/reports/dbir/.
[34] Lance Spitzner, ‘Why a Phishing Click Rate of 0% is Bad’, November 2017, https://www.sans.org/blog/why-a-phishing-click-rate-of-0-is-bad/.
[35] Shreya Ghosh and Prasenjit Mitra, ‘How Early Can We Detect? Detecting Misinformation on Social Media Using User Profiling and Network Characteristics’, in Lecture Notes in Computer Science, Gianmarco De Francisci Morales, Claudia Perlich, Natali Ruchansky, Nicolas Kourtellis, Elena Baralis, and Francesco Bonchi, eds, Vol. 14174, Springer, 2023, https://doi.org/10.1007/978-3-031-43427-3_11.
[36] Idem.
[37] Federico Monti, Fabrizio Frasca, Davide Eynard, Damon Mannion, and Michael M. Bronstein, ‘Fake News Detection on Social Media Using Geometric Deep Learning’, in Representation Learning on Graphs and Manifolds Workshop, ICLR 2019, May 2019, Ernest N. Morial Convention Center, New Orleans, USA, https://rlgm.github.io/papers/34.pdf.
[38] Microsoft, ‘Microsoft Digital Defense Report: Building and improving cyber resilience’, October 2023, https://www.microsoft.com/en-us/security/security-insider/microsoft-digital-defense-report-2023.
[39] Md Shohel Rana, Mohammad Nur Nobi, Beddhu Murali, and Andrew H. Sung, ‘Deepfake Detection: A Systematic Literature Review’, in IEEE Access, Volume 10, 25494–25513, 2022, doi: 10.1109/ACCESS.2022.3154404.
[40] Yingtong Dou, Kai Shu, Congying Xia, Philip S. Yu, and Lichao Sun, ‘User Preference-aware Fake News Detection’, in Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 2021, 2051–2055.
[41] ‘DISARM Framework Explorer’, DISARM Frameworks, last modified November 2023, https://disarmframework.herokuapp.com/.
[42] EU Disinfo Lab, ‘Tools to monitor disinformation’, 2024, https://www.disinfo.eu/resources/tools-to-monitor-disinformation/.
[43] European External Action Service (EEAS), ‘2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence’, January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[44] EU Disinfo Lab, ‘Tools to monitor disinformation’, 2024, https://www.disinfo.eu/resources/tools-to-monitor-disinformation/.