Abstract: Machine Learning/Artificial Intelligence (ML/AI) technologies are a double-edged sword. While ML/AI proves beneficial to us in myriad ways, we should remain vigilant about the potential harm it can bring upon our societies. As rapidly evolving, sophisticated ML/AI begins to manipulate both the rate and nature of content creation and distribution, countering disinformation while preserving user freedoms and overall credibility becomes an increasingly difficult task.
Bottom-line-up-front: The more ML/AI technology develops, the more evolved disinformation content becomes. In a sense, this means there may be no definitive way of ensuring with total certainty the authenticity of any content. As tools advance to detect fake content, equally advanced tools emerge to create ever more convincing fakes.
Problem statement: In what ways has ML/AI transformed the art of disinformation campaigning, and what can be done about it?
So what?: The onus lies on big tech leaders to control the growth of their technology by ensuring safe practices, oversight, transparency, and accountability. Moreover, we need to revisit the discussion of how much oversight governments should exercise over these private tech companies under their legislation.
An Overview of Disinformation
Although disinformation has long played a role in our societies – from the Roman Empire to the 46th US presidency in 2021 – it remains a rapidly evolving practice, and it is becoming more sophisticated with the advancement and successful integration of AI technology.[1] “Ten years from now, you won’t be able to tell whether you’re interacting with a human online or not. In the future, most online speech and content will be machines talking to machines.”[2]
There has been a recent rise in the “fake news” disinformation tactic, which has, for example, targeted the current Israeli-Palestinian conflict in Gaza. Although there are legitimate videos showing the recent fighting, some content has been deliberately deceptive. A recently circulated video, which actually showed fighting in Syria, purported to show rockets being fired by Hamas into populated areas in Israel, only to malfunction and fall back on Palestinians instead. Likewise, fake screenshots depicting IDF soldiers sharing troubling tweets such as “Just bombed some kids” circulated as well.[3] During such critical times, the truth can become a needle in a haystack of disinformation, especially when people rely on social media for news instead of reliable and reputable news agencies.
Additionally, Covid-19 anti-vaccination campaigning has been circulating more and more widely on social media. Some of the false claims about the vaccines include, for example, one asserting that vaccines make you infertile. A heavy reliance on social media such as Facebook, Twitter and WhatsApp for news appears to be the main driver behind the reluctance to get vaccinated.[4] In these platforms’ efforts to remove disinforming content, I believe they may end up removing beneficial content as well. For example, a detection algorithm may not be able to distinguish between someone sharing a fake news article claiming that vaccines cause infertility, and someone else sharing that very same article to show how dangerously misleading it is and to warn people not to fall for it. An algorithm can detect certain words that appear together, such as “vaccine” and “infertile,” yet it cannot understand that those two words may be used to educate people about the dangers of fake news rather than to spread it. The main problem is that an algorithm cannot infer human intent.
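To make that limitation concrete, here is a minimal sketch of a purely keyword-based filter. The keyword set and the two sample posts are hypothetical, and real platforms use far richer classifiers, but the blind spot is the same: the post correcting the false claim is flagged just as readily as the false claim itself.

```python
import re

# A deliberately naive, purely keyword-based flagger. The keyword set and the two
# sample posts below are hypothetical; real platforms use far richer classifiers,
# but the underlying blind spot (no notion of intent) is the same.
FLAGGED_KEYWORD_SETS = [
    {"vaccine", "infertile"},  # words that co-occur in the false infertility claim
]

def flags_post(text: str) -> bool:
    """Flag a post if it contains every word in any flagged keyword set."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return any(keyword_set <= words for keyword_set in FLAGGED_KEYWORD_SETS)

false_claim = "Warning: the vaccine will make you infertile. Share before they delete this!"
debunking = "No, the vaccine does not make you infertile. Please stop sharing that claim."

print(flags_post(false_claim))  # True  -- the harmful post is caught
print(flags_post(debunking))    # True  -- but so is the post correcting it
```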
Aside from people directly creating and spreading fake news and articles, AI itself has become rather good at generating entire fake news articles. If you feed a text-generating AI enough text, it will produce its own content, and it can be surprising how much better it becomes as you feed it more text. For example, here is what an AI created after I fed it just a few sentences regarding the subject of fake news:
Social media essentially functions to fortify one’s own perspective, and does not require empirical verification that the equally unique experience of another actually represents every unique viewpoint. In fact, simply suggesting or accepting a point of view is likely to lend the very concept of ideology itself another and more offensive layer of irony. In the context of a series of talking heads on mainstream television, the ‘facts’ of an argumentative campaign are rarely presented with much intellectual rigor – despite the notion among content consumers that they could usually be substantially correct. This means that there is virtually no principle of logical reasoning that can be applied to this rhetorical situation. If a viewer does not believe the repeated assertions, they cannot judge for themselves whether they can understand the underlying context and/or values of the messages. The use of high-brow language and highly compressed images makes any pundit who intends to communicate the content of their message appear stilted and unaware.[5]
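The barrier to producing such text is low. The sketch below shows prompt-based generation with the openly available GPT-2 model via the Hugging Face transformers library; it is an illustration of the general technique, not the exact tool cited in the notes, and the prompt is invented for the example.

```python
# Minimal sketch of prompt-based text generation with an openly available model
# (GPT-2 via the Hugging Face `transformers` library). This is an illustration of
# the general technique, not the exact tool cited in the notes; the prompt is
# invented for the example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Social media essentially functions to fortify one's own perspective, "
    "and fake news spreads because"
)

# Sampling continues the prompt with fluent but entirely unverified prose.
outputs = generator(prompt, max_length=120, do_sample=True, top_p=0.9, temperature=0.8)
print(outputs[0]["generated_text"])
```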
Deepfake Technology
As ML/AI technology rapidly evolves, malignant actors become better equipped to produce content that is almost indistinguishable from real content. One result is the Deepfake, a kind of video synthesis that allows perpetrators to hyper-realistically impersonate others online. A recent example of Deepfake technology is a series of TikTok videos in which a man leverages ML to make himself look almost identical to the Hollywood actor Tom Cruise.[6] His videos garnered millions of views and raised numerous questions about authenticity and the dangers of ML/AI. As ML/AI technology becomes cheaper and easier to use, virtually anyone will be able to create false content that can damage a reputation, incur a substantial financial cost, and even threaten people’s security.
Now, there are many facial transformation apps available for smartphones that anyone can download and play around with: Reface, Avatarify, Wombo, MyHeritage.[7] While these apps may not produce something as uncanny as the viral Tom Cruise videos, they still represent a concerning level of sophistication in Deepfake technology. The challenge lies in being able to constantly tell the difference between what is real and what is fake. The more sophisticated ML/AI technology becomes at distinguishing real from false, the more sophisticated it gets at creating even better fake content. This dilemma risks propagating a profound lack of trust in governments and people, and could potentially erode trust in everything. A recent example of this phenomenon is a stunt by the comedian Jordan Peele, who leveraged Deepfakes to send a warning message advising people not to believe everything they see on the internet. To deliver this message, Peele made it appear as though former US president Barack Obama were speaking these words.[8]
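That arms race is not just a figure of speech: many synthesis systems are trained as generative adversarial networks (GANs), in which a generator learns to fool a discriminator while the discriminator learns to catch it. The toy sketch below (a minimal PyTorch example on one-dimensional data, not any particular deepfake system) shows the loop in which every improvement in the detector directly drives an improvement in the forger.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Forger": turns random noise into a single fake number.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# "Detector": scores how likely a number is to be real (1.0) rather than fake (0.0).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" content: samples near 3.0
    fake = generator(torch.randn(64, 8))     # "fake" content from the forger

    # 1) The detector improves: score real samples high and fake samples low.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The forger improves: produce fakes the (now better) detector calls real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator ends up mimicking the real distribution precisely because the
# discriminator kept getting better at rejecting its earlier attempts.
print("real mean ~ 3.0 | generated mean:", generator(torch.randn(1000, 8)).mean().item())
```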
Deepfakes have commonly been used to manipulate audio, images, and video. Now, however, it appears that Deepfake technology can be used to manipulate even geographic maps and satellite images. Researchers warn that, through Deepfakes, one can create a satellite image of a town, a street, or a bridge that is not real. One can even manipulate existing satellite images by removing or adding entire towns.[9] The security implications of this kind of manipulation can be quite severe, especially considering how authoritarian governments might leverage this technology. A malignant actor or state can make it look as if, for example, an entire town (which does not exist) had been wiped out by airstrikes or a wildfire. Similarly, one can cover up disastrous events that have occurred in order to avoid public embarrassment.
Who Should be Held Accountable?
Another critical aspect of state-of-the-art disinformation’s transformative nature is plausible deniability. Before the integration of ML/AI into disinformation campaigns, plausible deniability was a trickier and more exclusive affair: fake news sources were more easily traceable to a specific individual or group, and the whole process of generating fake news was sometimes considered too difficult and costly to construct and sustain to be worth initiating at all. With today’s more sophisticated ML/AI technology, it has become far easier to hide one’s identity and deny involvement in an incident.
End-to-end encryption and non-indexed websites, for example, make it easier for the manufacturer of fake content to deny involvement due to a lack of traceable digital footprints. Moreover, plausible deniability cuts both ways in the information wars: a targeted person can easily claim that genuine audio or video depicting a world leader in an undesirable light is fake, even when it is not. Given how easy and cost-effective it has become to produce fake audio and video, virtually anyone can deny having said or done something, even if they did.[10] Plausible deniability inevitably undermines the credibility of all players involved, regardless of their status, and this can carry serious political and security consequences if the credibility of a world leader is constantly in question.
In a 2018 survey published by Statista, 29% of respondents were convinced that social media sites (mainly Facebook and YouTube) were primarily responsible for the spread of fake news, while 60% believed that those sites were only partly responsible and that other sites should be held more accountable.[11] Given this discourse on accountability, one must ask who should be held accountable, and how.
One way would be to strengthen the transparency of algorithms’ output data regarding hyper-personalized consumption. However, social media sites tend to be quite protective of their algorithms, and specifically of their output data. Facebook initially refused to share its political campaign data from the 2016 US presidential election but eventually succumbed to mounting pressure and released the information.
The Inherent Logic as a Show-Stopper
Would other major sites naturally follow suit unless they found themselves in a similar Zugzwang? That seems unlikely. For example, even though social media sites strive to detect and remove misinforming and disinforming content, they remain reluctant to be fully transparent about the removed content, the reasons for removal, and the identity of the content producer. All of this unintentionally strengthens plausible deniability. In their attempt to counter disinformation, the sites inadvertently end up, to some extent, protecting content sharers’ identities in order to safeguard revenues and shield management from legitimate criticism of their “black box” algorithms.[12]
Rana Ghalib Sami Al-Nusair has a Bachelor’s in Psychology from the University of Jordan and an international Master’s in Security Intelligence & Strategic Studies from the University of Glasgow with a specialization in Technology Security. Her main research aims to solve the problem of AI Value Alignment. The views contained in this article are the author’s alone.
[1] William Weir, History’s Greatest Lies (Beverly, Massachusetts: Fair Winds Press, 2009), 28–41.
[2] Matt Chessen, “The MACDOM Future: How Artificial Intelligence will Enhance Computational Propaganda,” in Artificial Intelligence Safety and Security, edited by Roman Yampolskiy (New York: CRC Press, 2019), 128.
[3] BBC Monitoring, (May 17, 2021), https://www.bbc.com/news/57111293.
[4] M. Sallam, D. Dababseh, H. Eid, et al., “High rates of COVID-19 vaccine hesitancy and its association with conspiracy beliefs: A study in Jordan and Kuwait and other Arab countries,” Vaccines (2021), https://doi.org/10.3390/vaccines9010042.
[5] Generated by the author using Talk to Transformer (InferKit).
[6] Vecanoi, “Very realistic Tom Cruise Deepfake | AI Tom Cruise,” (2021), https://www.youtube.com/watch?v=iyiOVUbsPcM.
[7] Geoffrey A. Fowler, “Anyone with an iPhone can now make Deepfakes. We aren’t ready for what happens next,” (2021) https://www.washingtonpost.com/technology/2021/03/25/deepfake-video-apps/.
[8] BuzzFeedVideo, “You Won’t Believe What Obama Says in this Video,” (2018), https://www.youtube.com/watch?v=cQ54GDm1eL0.
[9] Hebe Campbell and Matthew Holroyd, “Deepfake geography could be the latest form of online disinformation,” (2021), https://www.euronews.com/2021/05/07/deepfake-geography-could-be-the-latest-form-of-online-disinformation.
[10] Caleb Townsend, “Deepfake technology: implications for the future,” United States Cybersecurity Magazine, https://www.uscybersecurity.net/deepfake/.
[11] Amy Watson, Statista, (2019), https://www.statista.com/statistics/707507/national-local-news-trust/.
[12] Wael Ghonim, “Transparency: What’s gone wrong with Social Media and What Can we Do about it?,” Shorenstein Center on Media, Politics and Public Policy, (2018), https://shorensteincenter.org/transparency-social-media-wael-ghonim/.