STRATCOM and Planning

AI And Microtargeting Disinformation As A Security Threat To The Protection Of International Forces

Abstract: Disinformation has mainly been viewed as a communication challenge. For entities like the UN, the EU and NATO, it has evolved into a security threat and a Force Protection (FP) challenge, as well as a threat to the well-being of deployed individuals and their families overseas. Feasibly, this threat will only grow with the combination of AI-enabled “deepfakes” and microtargeting.

Problem Statement: What role does strategic communication play in ensuring that peacekeeping and EU missions continue to have enhanced protection of their military forces overseas?

So what?: Strategic, emotive communication must urgently become an integral part of the planning and execution of mission security, which should expand in scope to include civil society organisations in the areas where personnel are deployed.


 “FIMI” as a Security Threat, Not Just a Communication Challenge

The EU has identified Foreign Information Manipulation and Influence (FIMI) as a key hybrid threat, defining it as:

‘A mostly non-illegal pattern of behaviour that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner, by state or non-state actors including their proxies inside and outside of their own territory.’[1]

This serves as an umbrella description of disinformation, malinformation and other forms of malign operation in both the information and cognitive domains. Disinformation is “the creation, presentation and dissemination of verifiably false or misleading information for the purposes of economic gain or intentionally deceiving the public”.[2]

Misinformation is the unintentional creation or spread of such false or misleading content, whilst malinformation entails deliberately designing and employing dis- and misinformation to cause harm to specific individuals and organisations. It is easy, therefore, to see why FIMI has largely been viewed in the first instance as a communications threat, one that undermines the soft cornerstones of democratic order, such as trust, legitimacy and cohesion. While this diagnosis is accurate in a broad sense, it does not treat FIMI as a security threat with consequences on the ground, such as civil disorder and terrorism. These consequences include increased radicalisation, the recruitment of terrorists, and incitement to attack. Examples include accusing Western soldiers in the Sahel of abusing the local population. In one case, false photographs were published in Sahel countries purporting to depict a village destroyed by French Air Force activity. This was deliberate disinformation.

FIMI also has a narrower, more focused and direct military dimension: mission security and force protection in UN operations, EU deployments and NATO missions. It is a force protection threat during deployments, whether civilian or military. In this context, FIMI can take the form of deepfake photos, videos or audio falsely depicting UN peacekeepers or EU or NATO mission personnel torturing or abusing locals. This can have a twofold consequence: increasing radicalisation and the recruitment of terrorists locally, but also creating a backlash in public opinion in the sending countries against deploying troops overseas. Clearly, this is not just a communication challenge to be resolved by correcting the record and addressing communication issues in general. It is a direct force protection and mission security threat. As such, Strategic Communications (STRATCOM) should become an integral and more important component of planning UN, EU and NATO deployments, moving beyond tactical “cultural” Strategic Communications towards formulating and disseminating narratives.

FIMI can take the form of deepfake photos, videos or audio falsely depicting UN peacekeepers, EU or NATO mission personnel torturing or abusing locals.

A related strategic threat resulting from FIMI is the undermining of public and political support in the countries that send personnel on these missions, thereby jeopardising their commencement or continuity.

Well-being of Individuals is the Other Face of Maintaining Political Support

Moreover, fake news and deepfakes are exploited by state and non-state actors to attack the well-being (physical, mental and emotional) and safety of individuals in the field, as well as of their families back in their home countries. An AI-generated deepfake video circulated widely just days after the military junta in Burkina Faso ordered French troops to leave the country in early 2023, following its successful coup. The video urged support for the junta and its leader. A similar video targeting the presence of French troops circulated widely in Mali around the same time. Equally worrying are “cheap fakes”, at the other end of the technical spectrum from AI-enabled deepfakes. “Cheap fakes are quicker and less resource-intensive. They can be similarly misleading, though less realistic. Cheap fakes range from videos taken out of context to simple edits such as speeding up or slowing down video or audio to misrepresent events … In Africa, ultra-cheap fakes are more of a problem for disinformation than deepfakes … It is easier to produce large numbers of quick, cheap fakes.”[3]

This form of FIMI, in which AI-enabled deepfakes built from synthetic data are coupled with microtargeting, is very likely to become a key hybrid tool against individuals participating in overseas missions and in political influence operations, such as elections. In this context, and within the broader hybrid threat framework of FIMI, the combination of AI-enabled deepfakes with bot-driven microtargeting will raise the threat to a totally new and very dangerous level. Essentially, uncertainty not only about “what” is true or false but also about “who” is real, coupled with the emerging phenomenon of people developing emotional dependencies and intimacies with bots, will make the threat extremely challenging to counter without very well-resourced, concrete institutional and expert structures.

A key aim of FIMI operations is to whip up resentment against the mission and the individuals participating in it, both in the recipient and in the sending countries. In the recipient country, this can undermine the physical security of the individuals and their missions as local populations become enraged by fake news and deepfakes. In the home (sending) countries, public and political sentiment can turn hostile against the individuals, their families and the missions, including in the local communities where the individuals reside and their families live. At the conference on “75 years of UN peacekeeping: how can UN peacekeeping missions tackle the challenge of disinformation?”,[4] it became clear from the various contributions that such activities further undermine the safety and mental well-being of the individuals and their families, as well as budgets, recruitment and support for participation in future missions.

A key aim of FIMI operations is to whip up resentment against the mission and the individuals participating in it, both in the recipient and in the sending countries.

The Anatomy of a Hostile FIMI Operation Using Deepfakes

Deepfakes are video and audio clips that depict individuals doing and saying things they never did or said. They were already being deployed when producing them still required actors, specialist software and considerable time. As the technology has rapidly developed, the time, cost and technical skill required to produce convincing deepfakes have shrunk exponentially. This makes deepfakes accessible even to the proverbial lone individual at a home computer. With the emergence of AI, deepfakes are likely to become a key security threat in the hands of malign actors operating in the hybrid domain, particularly given that “within the information environment, the human-centric cognitive dimension remains the most important”.[5] This is mainly because deepfakes can now be produced from completely synthetic data: the faces of people who never existed, speaking with voices that never existed, in any existing language or dialect, doing things that never happened.

A multi-modal operation has the potential to be both cheap and effective. This kind of operation involves the deliberate combination and coordination of several different hybrid tools to cause damage to an individual, state, group or organisation. For example, a deepfake depicting mission personnel torturing a local individual can be combined with social and traditional media campaigns. The dissemination of this deepfake can also target the deployed individual’s family and friends back home.

The deepfake’s effects could then cross from the digital sphere into the physical one, leading to diverse repercussions. These range from force protection threats posed by an outraged local population, to concerns about the physical safety of the individuals involved and of their families. There is also the risk of psychological and mental strain on the families, potentially leading, for example, to social ostracisation in their home communities. A snowball effect of incremental tactical security threats can build into broader malign strategic ones, such as undermining political support for continuing a particular operation.

Developing an Emotive Narrative Key to Defeating FIMI

A key long-term step in preventatively countering FIMI is to stop relying on cold facts alone to defeat emotively formulated FIMI. This became clear during the war against Daesh/ISIS/ISIL. Counter-radicalisation efforts focused on highlighting factual defects in what Daesh was offering. In fact, many would-be recruits to Daesh’s cause were driven by emotive, idealist or romantic motivations, or a mixture of the three. These drivers could not be effectively countered by restating cold, objective facts stripped of their emotive context.

A key long-term step in preventatively countering FIMI is to stop relying on cold facts alone to defeat and counter emotively formulated FIMI.

Europe and the West in general have targeted minds for far too long with blunt facts while, more often than not, ignoring hearts. Europe, in particular, needs to deploy a positive emotive narrative and reclaim dominance in the cognitive domain. It has a great story to tell, but facts alone will not win hearts in many regions of the world where missions are deployed. A large space currently stands open and is being filled with hostile narratives. It is essential to re-occupy this space in the information and cognitive domains through content development and dissemination, which should become an integral part of mission planning.

Superiority is Mandatory

FIMI defines a battlefield on which the West must prevail. This war is fought in two domains, the information and the cognitive, in a quest to influence “what one thinks” and “how one thinks”.

Whereas facts play a key role in the information domain, the battle in the cognitive domain shapes perceptions, involving emotions as much as it does facts. Narratives also play a crucial role in shaping perceptions. Yet efforts to counter emotive narratives and shape perceptions with cold facts have not yielded the desired results.

Whereas facts play a key role in the information domain, the battle in the cognitive domain shapes perceptions, involving emotions as much as it does facts.

It is equally clear that always acting in a defensive mode risks two outcomes. The first is that countering a piece of disinformation requires repeating it in order to refute it. In the majority of cases, this simply contributes to the spread and embellishment of the disinformation, not only undermining the effort to counter it but potentially lending it additional credibility. The second is that, if the current modus operandi continues to prevail, the information and cognitive domains will remain starved of the proactive, preventive saturation with positive emotive narratives that would act as a natural barrier against malign disinformation.

Moreover, from a practical perspective, local civil society organisations can play an important role in countering FIMI and enhancing force protection. To turbo-charge their activities, institutional and structured open-source support (such as a dedicated centre) would enhance their ability, and that of others, to defeat FIMI locally, using locally acceptable cultural methods and the languages of the deployment locale.

The need for overseas missions, civilian as well as military, whether under the EU or the UN, will continue to grow. There is a clear need to develop an effective set of tactical and strategic responses to FIMI, particularly regarding mission security and continued political support for these missions, over and above the generalised responses to FIMI as a communications threat. FIMI is, and will continue to be, a key security threat requiring a commensurate, imaginative and effective set of measures specifically related to STRATCOM.

 


Bernard Siman is a Senior Associate Fellow at Egmont Royal Institute for International Relations in Belgium, where he is responsible for hybrid threats and warfare. He teaches at the Royal Military Academy in Belgium, and the European Security and Defence College. He also heads Cyber Diplomacy at the Brussels Diplomatic Academy of the Vrije Universiteit Brussel (VUB). Geographically, he specialises in the Mediterranean and Black Sea regions, including the Middle East, and in global maritime geopolitics. He has authored various publications on hybrid threats and global geopolitics. The views expressed in this article are the author’s alone.


[1] EEAS, “Strategic Communications, Task Forces and Information Analysis (STRAT.2),” February 2023.

[2] European Court of Auditors, “EU action plans against disinformation,” March 2020.

[3] Kirsten Cosser, “AI-powered disinformation: deepfakes, detection technology and the weaponisation of doubt,” Africa Check, August 7, 2023, https://africacheck.org/fact-checks/blog/ai-powered-disinformation-deepfakes-detection-technology-and-weaponisation-doubt.

[4] “75 years of UN peacekeeping: how can UN peacekeeping missions tackle the challenge of disinformation/misinformation?,” Egmont Institute, June 29, 2023, https://www.egmontinstitute.be/events/75-years-of-un-peacekeeping-how-can-un-peacekeeping-missions-tackle-the-challenge-of-disinformation-misinformation/.

[5] Matthew Fecteau, “The Deep Fakes are coming,” Army War College, The War Room, April 23, 2021, https://warroom.armywarcollege.edu/articles/deep-fakes/.
