Abstract: The cyber threat landscape has grown in scale, sophistication and complexity due to advancements in AI. This evolution can be observed across multiple threat vectors, including cyber-enabled fraud, supply chain attacks, malicious insiders, and disinformation. To fully assess their impact, it is essential to consider their synergistic effects. Meanwhile, the opportunities in cybersecurity are both formidable and exciting, provided the associated risks are properly addressed. Beyond the automation, evolution, and scaling of cyber defence, there is also substantial potential for reducing the attack surface and disrupting adversarial behaviour.
Problem statement: What new ambitions exist in the field of cybersecurity and AI?
So what?: Evolving and enacting policies and strategic objectives at a faster pace, while weighing risks and promoting a baseline security posture, will require an emphasis on security and coordination at the sectoral, national, and international levels.

An Evolving and Complex Threat Landscape
“The rapid advancements and increasing adoption of digital platforms globally is matched by an equally evolving cyberthreat landscape. Cybercrime today is increasing not just in scale but also in sophistication. As our digital footprints widen, so does the potential attack surface for nefarious actors. It is essential that we work together to address this growing menace. The borderless nature of the internet necessitates collaboration across various jurisdictional limitations to ensure that threat actors have no safe haven for their evil activities.”
Ivan John E. Uy, Secretary of Information and Communications Technology of the Philippines[1]
According to the World Economic Forum (WEF), the cybersecurity threat landscape is shaped by geopolitical tensions and rapid technological advancements, all contributing to a complex and uncertain environment with significant economic implications.[2] Addressing these challenges is further complicated by the widening cyber skills gap. Ransomware, cyber-enabled fraud and supply-chain disruption represent the top three cyber risks affecting organisations, followed by malicious insiders, disinformation/foreign information manipulation and interference (FIMI), and denial of service.[3]
The complexity of environments can be illustrated by supply chain incidents. For instance, the blockage of the Suez Canal by the container vessel Ever Given for six days in 2021 had a significant cascading impact on global trade, with $9.7 billion worth of goods being held up each day.[4] The Suez incident highlighted the vulnerability of global logistics to a single physical chokepoint, underscoring the need to enhance the resilience and diversity of supply chains. The same argument can be made for digital supply chains, where visibility and governance of security levels across complex supply chain networks can be problematic. Over half of the large enterprises surveyed in the WEF’s Global Cybersecurity Outlook report[5] consider these vulnerabilities to be the top cyber risk in their ecosystem. Yet, the Marsh State of Cyber Resilience Report[6] estimates that only two out of five enterprises have conducted a risk assessment of their vendors and supply chain. As organisations lack visibility into their extended and complex vendor networks, these blind spots provide adversaries with opportunities for exploitation.[7] Notable examples include, but are not limited to, the SolarWinds Orion IT breach in 2020, the Kaseya ransomware incident in 2021,[8] the Airbus attack in 2023,[9] and the Marks & Spencer attack in 2025.[10]
As seen in the examples of Kaseya and Marks & Spencer above, the impact of supply chain attacks can be amplified further if combined with ransomware. According to Mastercard’s RiskRecon report on the impact of ransomware on supply chains, one in every 100 suppliers is expected to succumb to a successful ransomware attack each year. While one per cent may not seem significant in isolation, larger organisations with hundreds of suppliers could be expected to deal with five or more ransomware incidents on an annual basis as a consequence.[11] Gartner had predicted in 2021 that 45% of organisations would have experienced a software supply-chain attack by 2025.[12] While their subsequent findings have not confirmed whether this prediction was accurate, they do show the widespread nature of attacks and their increased scale, as well as the subsequent drive of enterprises to bolster their defences.[13]
Conversely, even when an incident originates within a large organisation, rather than from an insecure supply chain network, its impact on those supply chains can be immense, with significant economic implications, including the risk of bankruptcy, as illustrated by the recent attack on Jaguar Land Rover. Although the full impact of the incident is not yet known at the time of writing, the resulting supply chain risks have prompted calls for the UK government to step in and protect at-risk suppliers.[14] The incident itself is still under investigation and has not been officially linked to ransomware. However, the self-proclaimed threat actor group behind it, “Scattered Lapsus$ Hunters”, is known to have employed ransomware, social engineering, extortion and EDR evasion techniques in the past.[15], [16], [17]
While financial motivation appears to be the primary driver behind cyber threats, it is not mutually exclusive with geopolitical tensions. As geopolitical tensions continue to shape cybersecurity incidents and strategies, and as the financial model behind them proves successful, supply chain attacks are likely to become more prevalent and be integrated into increasingly complex attack chains.
Generative AI in the Hands of Threat Actors
The emergence of generative AI exemplifies the rapid advancement of technology. Unsurprisingly, given its growing popularity, it has become both a prime target and a tool for attacks, including those against software supply chains. Open-source software can be particularly vulnerable to state-backed actors, who are increasingly attempting to infiltrate it as a means of reaching targets globally.[18], [19] Attacks on insecure AI supply chains, in particular, can allow malicious actors to introduce backdoors, malware, or other malicious code by pushing insecurely sourced software into shared libraries and components. Examples include the malware injected in the PyTorch-nightly pre-release version in 2022,[20] which was subsequently removed, as well as PoisonGPT, a poisoned pre-trained Large Language Model (LLM) that was uploaded to HuggingFace under a misleading name and made briefly available to the public in July 2023.[21]
Open-source software can be particularly vulnerable to state-backed actors, who are increasingly attempting to infiltrate it as a means of reaching targets globally.
The innovations in generative AI have also fuelled the evolution of other cyber threats, most notably phishing. Generative AI tools seem to reduce the cost of phishing and social engineering campaigns while increasing their effectiveness. As a consequence, there is a growth in these attacks, which also reveals the enterprising scale behind them.[22], [23] CrowdStrike’s 2025 Global Threat Report reveals that voice phishing (vishing) attacks, where adversaries try to social engineer victims over a phone call, saw a five-fold increase between the first and second half of 2024.[24] Meanwhile, Verizon reports that the number of synthetically generated malicious emails has doubled in the past two years.[25] The scale of operations does not stop at initial access. The average time taken for adversaries to progress through the attack chain once initial access was achieved (the ‘breakout time’) was reduced to 48 minutes, with the fastest breakout observed at just 51 seconds.[26] As a consequence, the time available for detection and response must shrink accordingly.
More concerning is that LLM-generated phishing pages appear to be much more effective in persuading victims than human-written ones, with click-through rates of 54% for the former versus 12% for the latter in the research study by Heiding et al.[27] Meanwhile, detection rates appear to be similar,[28] which further underscores the need for automated filtering mechanisms as a way of reducing the attack surface for users. It is also important to update awareness-raising and educational initiatives to recognise abnormal signs of generative AI in phishing and social engineering, either independently or with the help of digital tools.
Another aspect affected by the malicious use of generative AI is insider threats. IBM warns that GenAI tools, like OpenAI’s ChatGPT, can retain and reproduce sensitive data shared by employees, effectively expanding the insider threat landscape.[29] The option to create proprietary AI applications tailored to a specific business use can also carry risks. For example, malicious internal actors could tamper with training data to skew results, or they could reverse-engineer AI applications under weak monitoring and logging practices.[30]
Moreover, generative AI has been used to exploit insufficient hiring processes by allowing remote IT workers to disguise their true identities and infiltrate victim organisations. As a result, they gain access to sensitive data and equipment and generate revenue for their own benefit. The Democratic People’s Republic of Korea (DPRK) Famous Chollima group is the most sophisticated example. DPRK remote IT workers, who appear to originate from all over the world, use generative AI to alter their voice, use deepfake videos to disguise their appearance in virtual interviews, and have managed to create compelling fake IT job profiles, even on LinkedIn.[31] CrowdStrike threat hunters alone responded to approximately 120 Famous Chollima insider threat operations in 2024.[32] However, the true scale of the problem is expected to be higher, with multiple organisations monitoring and reporting similar findings.[33], [34] To appreciate the additional dimensions of the problem, one also needs to consider the potential for such incidents to be weaponised in future FIMI operations to incite xenophobia and distrust among populations.
Generative AI has been used to exploit insufficient hiring processes by allowing remote IT workers to disguise their true identities and infiltrate victim organisations.
Using generative AI to disguise one’s identity and customise social engineering campaigns could lead to an increase in identity theft cases, particularly against high-profile individuals. According to the WEF, identity theft is emerging as the primary personal cyber risk for both CISOs and CEOs.[35]
In the field of FIMI, generative AI tools are being used to scale the generation of tailored content for information campaigns. In August 2024, Green Cicada, an information operations (IO) network comprising more than 5,000 inauthentic accounts, was identified as being enabled by a Chinese-language LLM system. It was used to amplify politically divisive issues in the lead-up to the 2024 U.S. presidential election. Russia-aligned operators also used LLMs to generate content and automate vast IO campaigns targeting U.S. audiences.[36]
Enterprise-Scale FIMI
The second and third EEAS reports on FIMI operations revealed their global scale: targets have expanded beyond Ukraine and the U.S. to multiple countries in Europe, Africa, Australia, the Middle East and Latin America. In 2024 alone, EEAS documented over 500 incidents involving 38,000 channels across 25 platforms, targeting 90 countries and more than 300 organisations. Moreover, the strategic approach behind FIMI campaigns has revealed linguistic and cultural adaptations tailored to local audiences, further facilitating the advancement of geopolitical objectives among these audiences through elections, high-profile international events, and crises.[37], [38]
The strategic approach behind FIMI campaigns has revealed linguistic and cultural adaptations tailored to local audiences, further facilitating the advancement of geopolitical objectives among these audiences through elections, high-profile international events, and crises.
Operations of such scale depend on sophisticated multi-layered infrastructures, which can be usefully mapped with the EEAS FIMI Exposure Matrix.[39] These include official state channels, state-controlled outlets, covert state-linked entities, and state-aligned networks. Russian ecosystems have adopted a more decentralised and adaptive architecture, whereas the PRC’s are more centralised and synchronised, with channels activated simultaneously on the same topic. Both of these ecosystems rely on extensive covert, state-aligned networks and booster nodes to infiltrate the local information environments of unwitting audiences. They also make extensive use of disposable transient accounts, impersonation of legitimate media, and increasingly generative AI to avoid attribution, evade detection, and ensure the persistence and scalability of influence operations.[40]
Notably, the 3rd EEAS Report has highlighted the monetisation of “influence‑for‑hire” operations, particularly within the Chinese FIMI ecosystem.[41] Specifically, private PR companies and influencers were utilised to pose as independent entities, while covertly promoting state‑aligned narratives in campaigns, such as HaiEnergy,[42] Paperwall,[43] and the “Volume News” network. This outsourcing model extends the reach and resilience of FIMI activities, complicates attribution, and underscores the need for coordinated, evidence-based responses at both national and international levels.
Threat actors often combine multiple tactics and techniques to maximise the impact of their operations. Equally important are those attacks that evade detection and response by staying below conventional thresholds. Therefore, to assess the full impact of these threats, it is important to consider their synergistic effects and the range of coordinated collaborative countermeasures that enhance information sharing and minimise their impact in those grey zones.
Opportunities for Cybersecurity
On the one hand, threat actors have leveraged generative AI to enhance the sophistication, automation, and effectiveness of attacks, and to expand the attack surface. On the other hand, the same innovations can offer formidable and exciting opportunities for cybersecurity. Generative AI has the potential to enhance detection and response, particularly in the analysis of linguistic cues, image and video inconsistencies, and behavioural patterns. The potential for enhancing phishing, deepfake, and disinformation detection is therefore substantial.[44] Training detection models on synthetically produced phishing datasets has already shown promising results.[45], [46] It is possible that incorporating images, audio, and video alongside text with multimodal detection could have additional advantages in phishing detection, as well as in the detection of deepfakes and disinformation.[47] Behavioural analysis can also recognise anomalies in email communications, message tone, and historical usage patterns to identify potential identity theft, spear-phishing attacks, or disinformation.[48], [49], [50]
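To make the idea of training detection models on synthetic phishing data concrete, the following is a minimal, purely illustrative sketch: a from-scratch Naive Bayes text classifier trained on a handful of hypothetical "synthetic" phishing emails. The training strings, class labels, and classifier design are all assumptions for illustration; real systems use far larger datasets and far richer models.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and extract word-like tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesPhishingFilter:
    """Toy bag-of-words Naive Bayes classifier; labels are 'phish'/'ham'."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    def train(self, texts, label):
        for text in texts:
            self.doc_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        """Log-probability of the text under one class, with Laplace smoothing."""
        total_docs = sum(self.doc_counts.values())
        counts = self.word_counts[label]
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        total = sum(counts.values())
        logp = math.log(self.doc_counts[label] / total_docs)
        for word in tokenize(text):
            logp += math.log((counts[word] + 1) / (total + len(vocab)))
        return logp

    def classify(self, text):
        return max(("phish", "ham"), key=lambda lbl: self.score(text, lbl))

# Hypothetical "synthetic" training data standing in for LLM-generated emails.
synthetic_phish = [
    "urgent verify your account password immediately or lose access",
    "click this link now to confirm your bank details",
    "your invoice is overdue pay now to avoid suspension",
]
benign = [
    "meeting notes attached see you at the review tomorrow",
    "thanks for the update the report looks good",
    "lunch on thursday works for me",
]

filter_ = NaiveBayesPhishingFilter()
filter_.train(synthetic_phish, "phish")
filter_.train(benign, "ham")
print(filter_.classify("please verify your password by clicking this link"))
```

The point of the sketch is the pipeline shape, not the model: synthetically generated attack examples can supply labelled training data that is otherwise scarce, which is what the studies cited above exploit at scale.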
Moreover, many opportunities for automation, evolution and scaling of cyber defence workflows exist, and the potential for optimisation is substantial. Routine but critical tasks—such as patching vulnerabilities and managing updates—can now be integrated into security orchestration, automation and response processes. For instance, it may be possible to trigger patching workflows automatically once vulnerabilities have been detected. It is also worth exploring the potential of monitoring to identify any post-patch anomalies.[51] Apart from reducing human error, minimising the attack surface, and improving detection and response times, automation could free up human resources that can then be redirected to higher-value activities, such as threat hunting or optimising existing controls.
Many opportunities for automation, evolution and scaling of cyber defence workflows exist, and the potential for optimisation is substantial.
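As a toy illustration of such an orchestrated workflow, the sketch below chains the steps described above: a vulnerability finding triggers a patch action above a severity threshold, followed by a crude post-patch anomaly check. The `Finding` fields, severity threshold, and anomaly heuristic are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    host: str
    cve: str
    severity: str  # "low" | "medium" | "high" | "critical"

@dataclass
class PatchWorkflow:
    """Sketch of a SOAR-style playbook: finding -> patch -> post-patch watch."""
    audit_log: list = field(default_factory=list)

    def apply_patch(self, finding):
        # Placeholder for a real endpoint-management API call.
        self.audit_log.append(f"PATCH {finding.host} {finding.cve}")
        return True

    def monitor_post_patch(self, finding, error_rates):
        # Flag the host if error rates spike after patching (crude anomaly check).
        baseline = error_rates[0]
        anomalous = any(rate > 2 * baseline for rate in error_rates[1:])
        self.audit_log.append(
            f"MONITOR {finding.host}: {'anomaly' if anomalous else 'ok'}"
        )
        return anomalous

    def handle(self, finding, error_rates):
        # Only auto-patch above a severity threshold; queue the rest for review.
        if finding.severity in ("high", "critical"):
            self.apply_patch(finding)
            return self.monitor_post_patch(finding, error_rates)
        self.audit_log.append(f"QUEUE {finding.host} {finding.cve} for review")
        return False

workflow = PatchWorkflow()
finding = Finding(host="web-01", cve="CVE-2025-0001", severity="critical")
workflow.handle(finding, error_rates=[0.01, 0.011, 0.009])
print(workflow.audit_log)
```

Keeping a human in the loop for lower-severity findings, as the `handle` branch does, is one way to capture the efficiency gains of automation without surrendering judgement on riskier changes.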
Meanwhile, agentic AI has the potential to provide adaptive defence systems that evolve with the threat landscape. AI agents could have the capacity to learn from new attacks and adjust their strategies accordingly, making them more resilient against emerging and diversifying threats.[52] A further extension of this could be their application to honeypots, generating dynamic interactions to collect intelligence and leveraging it to increase adversarial risks and costs.[53]
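The honeypot idea can be sketched in miniature: a fake shell that records attacker commands as intelligence and adapts its behaviour when the same probe recurs, stalling the attacker to raise their costs. Everything here (the canned responses, the repetition threshold, the fake hostname) is an illustrative assumption; a real agentic honeypot would generate interactions dynamically rather than from a lookup table.

```python
from collections import Counter

class AdaptiveHoneypot:
    """Toy honeypot: fakes shell responses, records commands as intelligence,
    and adapts by stalling commands it sees repeatedly."""

    CANNED = {
        "whoami": "root",
        "uname -a": "Linux srv-db-02 5.15.0 x86_64 GNU/Linux",
        "ls": "backup.tar.gz  config.yml  users.db",
    }

    def __init__(self):
        self.intel = Counter()  # observed command frequencies

    def respond(self, command):
        self.intel[command] += 1
        if self.intel[command] > 2:
            # Adaptation: repeated probes get an artificial failure,
            # keeping the attacker engaged while burning their time.
            return "bash: resource temporarily unavailable"
        return self.CANNED.get(command, f"bash: {command}: command not found")

pot = AdaptiveHoneypot()
for cmd in ["whoami", "ls", "whoami", "whoami"]:
    print(pot.respond(cmd))
print(pot.intel.most_common(1))  # harvested intelligence
```

Even this crude version captures the two payoffs the paragraph describes: the `intel` counter is collected threat intelligence, and the adaptive stalling raises the adversary's cost per interaction.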
Another exciting opportunity is the leveraging of agentic and generative AI to reduce the attack surface by modernising legacy software, transforming and migrating it to inherently more secure platforms. Legacy software, which often lacks modern security controls, has frequently been associated with vulnerabilities and poses significant management challenges for defenders, particularly in critical infrastructure. Therefore, accelerating migration to secure platforms could be a game-changer in terms of reducing the attack surface. Early initiatives to that effect include Microsoft’s agentic AI tools, which aim to modernise Java and .NET software, as well as databases.[54] Several other initiatives exist, from vendors such as Kyndryl and Stride.[55], [56]
As the scale of innovation accelerates, so does the need for user-friendly, adaptable education and communication environments to train and support users and the cyber workforce ecosystem. This is particularly important in light of the widely recognised widening cybersecurity skills gap,[57] and to allow faster integration of new practices, alerts, tools, and guidelines. The importance of such initiatives cannot be overstated.
Perhaps the most exciting transformation is found in governance and risk management, where modelling processes, outcomes, and contextual parameters can inform agile, data-driven decision-making and strategy formation, as well as evaluation. AI-driven audits, benchmarking tools, and predictive analytics could inform agile decision-making frameworks, helping cross-functional teams to manage emerging risks and explore different possible outcomes.[58] Intelligent governance and risk management frameworks have the potential to model and benchmark complex supply chain environments, promoting transparency, accountability, optimisation, and foresight. Similarly, extending these innovations at national and international levels could provide more accountable, efficient, sustainable, and transparent citizen-centric services and digital societies.
Final Thoughts
Although large corporations and governments are better placed to adopt AI and manage risks, all these efforts could be undermined if smaller enterprises and countries are unable to keep up. Without adequate support, digital inequality and supply chain security risks could have serious consequences in increasingly complex and connected ecosystems. In this rapidly evolving landscape, collaboration, communication, and coordination are essential at the sectoral, national, and international levels to formulate policies, set strategic objectives, and establish a baseline security posture.
Although large corporations and governments are better placed to adopt AI and manage risks, all these efforts could be undermined if smaller enterprises and countries are unable to keep up.
Ultimately, the future of cybersecurity depends on accepting that every innovation carries both risks and opportunities. This ongoing cycle requires us to understand the short-, medium-, and long-term impacts if we are to turn challenges into opportunities and achieve long-term, sustainable growth. It is our responsibility to leverage the power of cutting-edge technologies and nurture collaborative ecosystems, while recognising and managing the associated risks.
[1] World Economic Forum, “Global Cybersecurity Outlook 2025 Insight Report,” January 2025, https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf.
[2] Idem.
[3] Idem.
[4] David Welch, 2021, “The Suez Canal And Our Fragile Global Supply Chain: How Redundancy In Industrial Development Can Ease The Strain”, Forbes.com, https://www.forbes.com/councils/forbesrealestatecouncil/2021/05/04/the-suez-canal-and-our-fragile-global-supply-chain-how-redundancy-in-industrial-development-can-ease-the-strain/
[5] World Economic Forum, “Global Cybersecurity Outlook 2025 Insight Report,” January 2025, https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf.
[6] Marsh and Microsoft, “The State of Cyber Resilience,” https://www.marsh.com/en/services/cyber-risk/insights/the-state-of-cyber-resilience.html.
[7] World Economic Forum “Global Cybersecurity Outlook 2025 Insight Report,” January 2025, https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf.
[8] ENISA, “ENISA Threat Landscape for Supply Chain attacks,” July 2021, https://www.enisa.europa.eu/sites/default/files/publications/ENISA%20Threat%20Landscape%20for%20Supply%20Chain%20Attacks.pdf.
[9] Aditi Goel, 2024, “Top 10 Supply Chain Attacks that shook the world,” https://www.encryptionconsulting.com/top-10-supply-chain-attacks-that-shook-the-world/.
[10] Shanaz Musafer and Liv McMahon, 2025, “What can I buy online at M&S since the hack?,” https://www.bbc.co.uk/news/articles/c0el31nqnpvo.
[11] Mastercard RiskRecon, 2025, “Ransomware in the supply chain: Six Lessons from 10 Years of Attacks,” https://www.riskrecon.com/hubfs/Ransomware2025/Ransomware StudySeptember2025v4.pdf.
[12] Scott Sellers, 2022, “Filling a critical gap in the software supply chain,” Forbes Technology Council, https://www.forbes.com/councils/forbestechcouncil/2022/11/07/filling-a-critical-gap-in-the-software-supply-chain/.
[13] Manjunath Bhat, Aaron Lord, Jason Gross, “Market Guide for Software Supply Chain Security,” Gartner Research, May 2025, gartner.com.
[14] Theo Leggett, 2025, “JLR could face disruption until November after hack,” https://www.bbc.co.uk/news/articles/czewlj57e24o.
[15] James Coker, “Organisations must update defenses to Scattered Spider Tactics, Experts Urge,” Infosecurity Magazine, September 2025, https://www.infosecurity-magazine.com/news/update-defenses-scattered-spider/.
[16] CISA, “Cybersecurity Advisory: Scattered Spider,” July 2025, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-320a.
[17] Joe Tidy, “M&S hackers claim to be behind Jaguar Land Rover cyber attack,” September 2025, https://www.bbc.co.uk/news/articles/c4gqepe5355o.
[18] Manjunath Bhat, Aaron Lord, Jason Gross, “Market Guide for Software Supply Chain Security,” Gartner Research, May 2025, gartner.com.
[19] Kaseya, “Press Release: Kaseya responds swiftly to sophisticated cyberattack,” July 2021, https://www.kaseya.com/press-release/kaseya-responds-swiftly-to-sophisticated-cyberattack-mitigating-global-disruption-to-customers/.
[20] Mansi B., “PyTorch dependency ‘torchtriton’ on PyPI Supply Chain Attack,” SentinelOne, January 2023, https://www.sentinelone.com/blog/pytorch-dependency-torchtriton-supply-chain-attack/.
[21] Daniel Huynh, and Jade Hardouin, “PoisonGPT: How We Hid a Lobotomized LLM on Hugging Face to Spread Fake News,” Mithril Security, July 2023, https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/.
[22] CrowdStrike, “CrowdStrike 2025 Global Threat Report.”
[23] Marc Schmitt, and Ivan Flechais, “Digital deception: Generative artificial intelligence in social engineering and phishing,” Artificial Intelligence Review 57, no. 12 (2024): 324.
[24] CrowdStrike, “CrowdStrike 2025 Global Threat Report.”
[25] Verizon, “2025 Data Breach Investigations Report,” https://www.verizon.com/business/resources/T5d5/reports/2025-dbir-data-breach-investigations-report.pdf.
[26] CrowdStrike, “CrowdStrike 2025 Global Threat Report.”
[27] Fred Heiding, Simon Lermen, Andrew Kao, Bruce Schneier, and Arun Vishwanath, “Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns,” in: ICML 2025 Workshop on Reliable and Responsible Foundation Models, https://openreview.net/forum?id=f0uFpuea1s.
[28] Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, and Shirin Nilizadeh, “From Chatbots to PhishBots?–Preventing Phishing scams created using ChatGPT, Google Bard and Claude,” arXiv preprint arXiv:2310.19181 (2023). https://arxiv.org/pdf/2310.19181v2.
[29] Charles Owen-Jackson, “How generative AI is expanding the insider threat attack surface,” June 2024, IBM Think Insights, https://www.ibm.com/think/insights/generative-ai-insider-threat-attack-surface.
[30] Idem.
[31] CrowdStrike, “CrowdStrike 2025 Global Threat Report.”
[32] Idem.
[33] U.S. Department of Justice, “Press Release: Justice Department Announces Coordinated, Nationwide Actions to Combat North Korean Remote Information Technology Workers’ Illicit Revenue Generation Schemes,” June 2025, https://www.justice.gov/opa/pr/justice-department-announces-coordinated-nationwide-actions-combat-north-korean-remote.
[34] Microsoft Threat Intelligence, “Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations,” June 2025, https://www.microsoft.com/en-us/security/blog/2025/06/30/jasper-sleet-north-korean-remote-it-workers-evolving-tactics-to-infiltrate-organizations/.
[35] World Economic Forum, “Global Cybersecurity Outlook 2025 Insight Report,” January 2025, https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf.
[36] CrowdStrike, “CrowdStrike 2025 Global Threat Report.”
[37] European External Action Service (EEAS), “2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A framework for networked defence,” January 2024, https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report on FIMI Threats-January-2024_0.pdf.
[38] European External Action Service (EEAS), “3rd EEAS Report on Foreign Information Manipulation and Interference Threats: Exposing the architecture of FIMI operations,” March 2025, https://www.eeas.europa.eu/sites/default/files/documents/2025/EEAS-3nd-ThreatReport-March-2025-05-Digital-HD.pdf.
[39] Idem.
[40] Idem.
[41] Idem.
[42] Google Cloud Threat Intelligence (2024), Pro-PRC “HaiEnergy” Information Operations Campaign Leverages Infrastructure from Public Relations Firm to Disseminate Content on Inauthentic News Sites, https://cloud.google.com/blog/topics/threat-intelligence/pro-prc-information-operations-campaign-haienergy/.
[43] Citizen Lab, “Paperwall: Chinese websites posing as local news outlets target global audiences with pro-Beijing content,” https://citizenlab.ca/2024/02/paperwall-chinese-websites-posing-as-local-news-outlets-with-pro-beijing-content/.
[44] Checkpoint, “Top 5 AI‑Powered Phishing Detection Tools for 2025,” https://www.checkpoint.com/cyber-hub/tools-vendors/top-5-aipowered-phishing-detection-tools-for-2025/.
[45] Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, and Shirin Nilizadeh, “From Chatbots to PhishBots?–Preventing Phishing scams created using ChatGPT, Google Bard and Claude,” arXiv preprint arXiv:2310.19181 (2023), https://arxiv.org/pdf/2310.19181v2.
[46] Checkpoint, “Top 5 AI‑Powered Phishing Detection Tools for 2025,” https://www.checkpoint.com/cyber-hub/tools-vendors/top-5-aipowered-phishing-detection-tools-for-2025/.
[47] Md Shohel Rana, Mohammad Nur Nobi, Beddhu Murali and Andrew H. Sung, “Deepfake Detection: A Systematic Literature Review,” in IEEE Access, vol. 10, pp. 25494-25513, 2022, doi: 10.1109/ACCESS.2022.3154404.
[48] Shreya Ghosh, and Mitra Prasenjit, “Review of How Early Can We Detect? Detecting Misinformation on Social Media Using User Profiling and Network Characteristics. In Lecture Notes in Computer Science,” edited by Gianmarco De Francisci Morales, Claudia Perlich, Natali Ruchansky, Nicolas Kourtellis, Elena Baralis, and Francesco Bonchi, Vol. 14174. Springer, https://doi.org/10.1007/978-3-031-43427-3_11.
[49] Frederico Monti, Fabrizio Frasca, Davide Eynard, Damon Mannion, and Michael M. Bronstein, “Review of Fake News Detection on Social Media Using Geometric Deep Learning,” in: “Representation Learning on Graphs and Manifolds Workshop,” ICLR 2019, May 2019; Ernest N. Morial Convention Center, New Orleans, USA. https://rlgm.github.io/papers/34.pdf.
[50] Yingtong Dou, Kai Shu, Congying Xia, Philip S. Yu, and Lichao Sun, 2021, “User preference-aware fake news detection,” in: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (2051-2055).
[51] CISA, “Guidance for SIEM and SOAR Implementation,” May 2025, https://www.cisa.gov/resources-tools/resources/guidance-siem-and-soar-implementation.
[52] Nataly Kremer, “AI agents could tip the cybersecurity balance towards defenders,” World Economic Forum Stories, June 2025, https://www.weforum.org/stories/2025/06/ai-agents-cybersecurity-defenders-tip-the-scales/.
[53] Luis Noguerol, “AI-Generated honeypots that learn and adapt,” June 2025, https://www.cybersecuritytribe.com/articles/ai-generated-honeypots-that-learn-and-adapt.
[54] Amanda Silver, “Accelerate migration and modernization with agentic AI,” Microsoft, September 2025, https://azure.microsoft.com/en-us/blog/accelerate-migration-and-modernization-with-agentic-ai/.
[55] Stride, “Top 10 AI-Driven Legacy Modernization Platforms of 2025,” https://www.stride.build/thought-leadership/top-10-ai-driven-legacy-modernization-platforms-of-2025.
[56] Lindsey Wilkinson, “How AI can mitigate legacy tech troubles,” CIODIVE, September 2025, https://www.ciodive.com/news/AI-modernization-legacy-tech-strategy/759337/.
[57] World Economic Forum, “Global Cybersecurity Outlook 2025 Insight Report,” January 2025, https://reports.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2025.pdf.
[58] David Rogers, “Rethinking Governance for Digital Innovation,” MIT Sloan Management Review, August 2023, https://sloanreview.mit.edu/article/rethinking-governance-for-digital-innovation/.








