Abstract: This article examines the ethical implications of Artificial Intelligence (AI) in the context of the proliferation of AI-generated media content, with particular emphasis on deepfakes, general disinformation, and targeted fake news. The aim of the study is to identify and analyse key aspects of demagoguery and electoral manipulation, focusing on their broader societal impacts. To this end, relevant legal frameworks—most notably the European Union’s AI Act—are incorporated into the analysis. The findings indicate that algorithmic systems can exacerbate the risk of discrimination and that deepfakes are increasingly capable of significantly influencing electoral processes. As a consequence, there is a growing threat to the integrity of democratic and constitutional principles grounded in the rule of law.
Problem statement: How can AI-driven disinformation, deepfakes, and demagogic manipulation be detected, regulated, and ethically managed to protect democratic processes and public trust?
So what?: Policymakers, regulators, and civil society must implement continuous monitoring, legal frameworks, and educational initiatives to mitigate the harms of AI-mediated disinformation. Strengthening societal competencies in critical media literacy and fostering transparency in AI systems are essential steps to preserve democratic norms and prevent manipulative exploitation.

The Relevance of AI and Fake News
Hate speech, disinformation, and fake news have long shaped the contemporary social media landscape. Recent technological advances—particularly the integration of AI-generated content—have significantly intensified these dynamics. AI-generated content not only reshapes how information is produced and consumed but also poses serious risks to access to information, the rule of law, and individual rights. Moreover, its potential to undermine democratic processes highlights the growing urgency of addressing the ethical challenges faced by law enforcement in an AI-driven information environment. AI has evolved beyond a specialised tool for tasks such as translation or text generation and has become a persistent feature of everyday life and public discourse, with increasing (positive as well as negative) relevance for social and legal domains. Arshin Adib-Moghaddam, professor of global thought and comparative philosophies, whose central research topics include the study of AI, describes the all-encompassing development of AI as follows:
“Big Brother isn’t merely watching us anymore – Big Brother has nestled himself in our living rooms – he could even overhear us in the bedroom, if he wanted to.”[1]
Rapid technological development and frequent updates have made it increasingly difficult to distinguish between AI-generated content and content produced by human actors, particularly where such outputs are designed to influence individual opinions and decision-making processes. This blurring of boundaries may lead individuals to perceive decisions as autonomous and organically formed when they are, in fact, guided or influenced by AI.[2] Under certain conditions, this undermines individual freedom of decision-making and autonomy, particularly where AI-generated content covertly shapes perceptions or choices without users’ awareness. AI may thus be understood—similar to interpretations of the early internet in the 2000s—not as a wholly independent revolution, but as a force accelerating preexisting technological and social developments, amplifying trends that were already underway. While this process may contribute to the democratisation of information, it can also amplify significant disadvantages—including the increasing difficulty of distinguishing independent thought from AI-influenced decisions and beliefs.[3]
Although AI has been applied in scientific fields for decades, including in the medical sector, it was only with the rise in popularity of generative AI that it entered the mainstream. Generative AI refers to software capable of creating new content based on machine learning and the utilisation of diverse datasets, including text, image, and audio data. Prominent examples include the language models ChatGPT (American) and DeepSeek (Chinese). Despite the generally high quality of generated language, the accuracy and reliability of the content must be critically assessed, making careful verification of AI-generated outputs essential.[4]
The use of AI in various everyday contexts does not solely result in conveniences, but also creates new opportunities for misuse. This generates not only (new) societal and ethical challenges but also fundamentally demands novel perspectives on effective law enforcement. On social media platforms, the issue of “fake news” is particularly central. Unlike misinformation, fake news and disinformation are deliberately disseminated, often with malicious intent, such as propaganda, and play a key role in the spread of conspiracy theories on platforms such as Instagram, TikTok and Facebook.[5] The American professor Keith Moser, whose research includes the study of anti-vaccination movements, underscores the importance of combating fake news, noting that society is at the onset of what he terms an “information war,” with far-reaching consequences:[6]
“In a world awash with post-truth claims that are sometimes utterly divorced from rudimentary realities, which have been clearly established by empirical data, it is time to channel our inner Sisyphus, even if the arduous task before us at times seems too heavy to shoulder.”[7]
Another manifestation of AI is the creation of “deepfakes,” AI-generated synthetic media that pose significant social and legal challenges, particularly on social media, where they intersect with phenomena such as fake news and misinformation. Deepfakes can be used to create false or misleading images of individuals that are often difficult to distinguish from reality, thereby deliberately causing confusion among their audiences.[8] In this context, it is possible to project one person’s face onto another’s body, creating the impression that actions were taken or statements were made that never occurred.
The AI Act is the European Union’s legal framework regulating AI to ensure it is developed and used in ways that protect fundamental rights, safety, and democratic values, while still encouraging innovation. It adopts a risk-based approach, categorising AI systems by their potential for harm, banning “unacceptable risk” systems, tightly regulating “high risk” systems, and imposing transparency requirements on others.[9] Article 3(60) of the EU AI Act defines deepfakes as image, audio, or video content generated by AI that resembles real persons, objects, places, institutions, or events and can cause individuals to perceive them as truthful. It should be emphasised that the creation of deepfakes is associated with deliberate intent, namely the dissemination of disinformation, and must consequently be distinguished from mere misinformation.[10] AI systems, including deepfakes, have already been deployed in political campaigns worldwide. These tools extend an existing phenomenon by making it easier to produce convincing fake content quickly, thereby increasing the potential for public confusion and manipulation.
In this context, the concept of “demagoguery” is of critical importance. Within scholarly discourse, demagoguery refers either to political leadership that protects the populace’s rights from corruption (positive context) or, more commonly, to the instrumentalisation of rhetoric by individuals to acquire and maintain power (negative context). In the latter sense, demagoguery denotes political communication that employs manipulative techniques to influence or mobilise the electorate.[11] The advent of the AI era has significantly transformed these methods, posing a threat to the rule of law and the democratic system.
The 2024 U.S. presidential election in particular highlighted how AI has already factored into modern campaigns in multiple ways. AI-generated audio and imagery were deployed to spread false or misleading content, including fake announcements about candidates that circulated widely on social media before platforms removed them, and political consultants faced legal consequences for using AI tools in robocalls. At the same time, experts caution that much of the viral misinformation surrounding the election did not require generative AI to spread and often amplified existing narratives rather than creating entirely new ones, underscoring both the present impact of AI-enhanced tactics and the difficulty of isolating their specific effects.[12]
While the majority of deepfakes raise serious concerns, it is important to recognise that the technology also has potential for legitimate uses. In certain contexts, deepfakes may serve as a platform for creative expression, support healthy social and political discourse, and be utilised in entertainment and artistic contexts without necessarily causing harm.
AI Ethics and Discrimination through Algorithms
When examining contemporary AI and its developments on social media, the significance of ethics emerges as an indispensable and central facet. AI ethics is a field whose scientific relevance is expected to expand and become increasingly prominent in the coming decades. This can be attributed to various factors, with the ethical requirements outlined in the EU AI Act playing a central role. AI and robot ethics, understood as a subfield of technology ethics, is primarily defined as a scientific discipline that analyses the use of AI to achieve purposes aligned with moral action.[13]
In this context, the concept of an algorithm is particularly significant. From a technical perspective, an algorithm can be understood as a formalised computational procedure programmed according to predefined rules, designed to trigger the processing steps required to accomplish a specific task.[14] It is, however, of utmost importance to emphasise that a purely technical analysis of algorithms is insufficient on its own, due to complex social and societal factors such as users’ prior beliefs and biases, social network dynamics that amplify certain content, and institutional or regulatory contexts that shape how algorithmic outputs are received and acted upon. Adib-Moghaddam highlights four key characteristics essential to understanding algorithms:[15]
- Algorithms form the backbone of a wide range of institutionally employed software systems (for instance, in schools, hospitals, and banks). Therefore, they can be understood as already established and essential components of the institutional framework;
- Algorithms are trained by humans or software programmers. Since these programmers are part of society and influenced by their own biases, the same applies to the algorithms they create;
- Programming languages can be understood as a codification and continuation of both conscious and unconscious norms, expectations, and values;
- The challenge of human biases influencing algorithmic design is characterised by the fact that such biases are often unintentionally and subconsciously embedded into algorithms, rendering them largely invisible.
Thus, algorithms should not be understood as purely neutral or objective entities; they are significantly shaped by social and societal factors that influence not only the platform itself but also the overall “user experience,” and thereby how users receive information. The choice of dataset in algorithm development, as well as inherent societal biases, must be regarded as key influencing components. A further facet of AI’s presumed neutrality is that current studies indicate information is perceived as credible when it comes from a trusted source; since AI systems are often regarded as trustworthy, their biases are all the more consequential.[16] The issue of developers’ biases and discrimination in AI systems (e.g., algorithms), whether intentional or unintentional, is a crucial topic that must be carefully examined and continually considered. The functionality of algorithms is heavily dependent on the data they are trained on, which can lead to a systematic disadvantage for certain groups of people. In Austria, for instance, there is an explicit obligation to ensure the equal treatment of all citizens. In this context, implementing transparent evaluation procedures and independent oversight is essential to consistently counter all forms of discrimination.[17]
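The dependence of algorithmic outcomes on training data can be illustrated with a deliberately simplified sketch. All groups, numbers, and the scoring rule below are hypothetical and chosen only for illustration: a “model” fitted to historically skewed decisions simply reproduces the skew, even when the new applicants it evaluates are identically qualified.

```python
# Hypothetical illustration: a "model" that learns approval rates from
# historically skewed decisions reproduces that skew for new applicants.
from collections import defaultdict

# Fictional historical decisions: identical qualifications, skewed outcomes.
history = (
    [{"group": "A", "qualified": True, "approved": True}] * 80
    + [{"group": "A", "qualified": True, "approved": False}] * 20
    + [{"group": "B", "qualified": True, "approved": True}] * 20
    + [{"group": "B", "qualified": True, "approved": False}] * 80
)

def train(records):
    """'Training' here is just estimating the approval frequency per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for r in records:
        counts[r["group"]][0] += r["approved"]
        counts[r["group"]][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Approve a new applicant if the learned group rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                # {'A': 0.8, 'B': 0.2}
print(predict(model, "A"))  # True  -- approved
print(predict(model, "B"))  # False -- rejected, despite identical qualifications
```

The point of the sketch is that no programmer wrote an explicitly discriminatory rule; the disparity enters entirely through the historical data, which is precisely why transparent evaluation procedures and independent oversight matter.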
It should therefore be emphasised that AI is not free from biases or forms of structural discrimination against marginalised groups. On the contrary, racism and misogyny can indeed be relevant factors when deploying AI systems, particularly in the context of social media. Social media platforms such as TikTok, Instagram, and Facebook employ AI‑powered algorithms that collect and analyse extensive user data—including behavioural patterns and interactions—to personalise content and infer information about users’ interests, preferences, and potentially sensitive attributes such as political orientation or personality traits. Such algorithmic profiling enables platforms to predict and deliver tailored content based on these inferences, even when users have not explicitly disclosed such characteristics.[18] Profiles are constructed, for example, by analysing which other users’ content a person likes or which personal images they upload. This naturally raises the question of the consequences of technologies that effectively categorise individuals based on their online behaviour on social media.
From a historical perspective, racial and gender hierarchies have, among other purposes, been used to organise society to the advantage of the ruling class.[19] These systems rely on the alienation, social exclusion, and stigmatisation of certain groups as scapegoats. In this context, such forms of social marginalisation are perpetuated, as structurally embedded societal discrimination continues to be reinforced through algorithms.[20]
Additionally, an algorithm determines which content is shown to users based on the personal information previously collected. This situation raises significant concerns regarding personal data protection and privacy. In the digital age, safeguarding sensitive, personally identifiable information must be of paramount importance.[21] This issue is particularly problematic in the age of social media, where the resale of personal data by platform providers often becomes normalised, a development that must be critically scrutinised.
AI, Fake News, Deepfakes & Demagoguery on Social Media
AI technologies have already become a socially accepted norm on social media. Their use applies both to everyday applications—such as content recommendations, filters, and chatbots—and to more impactful scenarios, including election campaigns. The deployment of these technologies in election campaigns can involve the analysis of extensive datasets to optimise reach and the delivery of personalised messages to voters, as well as the manipulation of voter perception through deepfakes, utilising both audio and visual content. While these practices have been in place for some time, their capabilities have significantly advanced due to the rapid development of deepfakes in digital spaces. These ongoing developments are the result of substantial progress in AI and machine learning, enabling the automated generation of content that is difficult for the general public to identify as false.[22] While deepfakes are often discussed as a direct product of AI, it is important to distinguish between AI as a general enabling technology and deepfakes as a specific application or outcome. A “deepfake” refers to manipulated or fabricated media that misrepresents reality, which can be produced either manually by human actors using conventional editing techniques, or automatically through AI-based systems. The decisive difference lies not in intent, which may be malicious in both cases, but in scale, speed, and accessibility. AI-generated deepfakes allow for the rapid, low-cost and large-scale production of highly realistic (synthetic) media, significantly lowering the technical barrier to deception. 
This technological change amplifies the impact of deepfakes on society by enabling quick and widespread dissemination that would be difficult to achieve solely through human-generated manipulation.[23] The instrumentalisation of deepfakes to influence election outcomes can be cited as an example of the potential risks of this technology, as the suppression or mobilisation of voters through false information can play a significant role.[24] A study by Diakopoulos and Johnson (2021) on the ethical implications of deepfakes in election campaigns identified three categories of potential areas of harm:[25]
- Harms to audiences/viewers (i.e., the electorate);
- Harms to involved subjects (i.e., candidates and their campaigns);
- Harms to social institutions (i.e., the democratic electoral system).
The electorate is potentially at risk in this regard, as it may be exposed to both deception and intimidation. Deception can adversely affect individuals by impairing their ability to make autonomous, informed decisions.
Furthermore, there exists the broader and highly problematic possibility of using deepfakes of a person’s likeness—including sexually explicit deepfakes in extreme cases—to intimidate or discredit individuals, which could indirectly suppress voter participation by eroding trust or causing reputational harm. For example, during the 2022 election campaign in Northern Ireland, a female candidate was targeted with a highly realistic pornographic deepfake that drew widespread attention and hostility, underscoring how such techniques can be weaponised against public figures, even if direct cases of blackmail aimed specifically at discouraging voting are not yet documented.[26] Regarding the involved subjects, various issues—such as the unauthorised use of a person’s likeness and the potential reputational damage that may result—can be problematic. In the context of election campaigns, the values and positions a candidate represents are critical. Misattribution of statements through deepfakes can therefore not only negatively impact the current campaign but, in the worst case, jeopardise the entire political career of the individual targeted by deepfakes.[27]
The greatest potential for harm from deepfakes, however, manifests at the institutional level: they fundamentally undermine trust in democratic electoral processes. The deliberate blurring of the boundaries between truth and fiction, as well as the manipulation of public and democratic discourse through AI, ultimately leads to a long-term erosion of confidence in democratic institutions.[28] This development is currently driven, among others, by U.S. President Donald Trump and other politicians on the political right, and similar tactics have since been adopted by parts of the American Left. Trump’s influence extends not only to global politics and the formation of international opinion, but also to the social media landscape, particularly the nature of political communication on platforms such as X (formerly Twitter). This period is frequently referred to as the “post-truth” era, as ideologies, emotions, and personalities carry significantly greater weight in political debates than factual accuracy, with fake news serving as a central vehicle of disinformation. The result is latent distrust toward experts, partial rejection of scientific facts, and the replacement of objective realities with subjective beliefs or ideologies. A significant aspect of such political activity lies in presenting oneself as a representative of the electorate in opposition to the established political establishment. Often, this also involves the generation of conspiracy theories, which in turn contribute to the anti-intellectualisation of politics, dominated by emotional factors and right-wing ideologies.[29] These aspects are integral to constructing an alternative reality that may not align with the truth. AI-generated content serves as a means of rapid dissemination in this context, allowing any information within the constructed reality to be quickly established as “truth.”
For example, during the 2024 U.S. presidential primary cycle, multiple voters in New Hampshire received robocalls with an AI-generated voice impersonating President Joe Biden that urged them to “save your vote for the November election,” prompting an investigation by the New Hampshire Attorney General’s office into apparent voter suppression.[30] More recently, in January 2026, the White House confirmed that it shared a digitally altered image of a protester’s arrest that exaggerated emotional expression, underscoring how AI-manipulated media can circulate even in official political communications.[31]
This approach can be interpreted as a strategic and targeted disinformation measure by opposing political factions. Furthermore, deepfake videos were used in the U.S. election campaign, encompassing both subtle alterations and entirely artificially generated statements.[32] It should be noted that such phenomena are not limited to the United States but have global relevance. Since 2019, it has been observed worldwide that deepfakes are being deliberately used to influence public opinion ahead of elections, including in countries such as India, Slovenia, Poland, the United Kingdom, Turkey, and Brazil.[33], [34]
Misuse and Exploitation Potential of AI Technologies
Deepfakes provide users with unprecedented opportunities to exercise power, targeting not only public figures but also private individuals. One of the first countries to address the issue of deepfakes and the legislative challenges they pose is the United Kingdom.[35] News outlets, such as the British daily The Guardian and the publicly funded broadcaster BBC, have reported on these new developments and their legal consequences.[36], [37]
An analysis by the British news program Channel 4 News revealed that websites providing deepfakes already feature over 4,000 female celebrities of various nationalities and professions. Since the implementation of the Online Safety Act in the United Kingdom, the dissemination—but not the creation—of such images has been deemed illegal. Furthermore, the number of deepfake pornography videos available online rose to 143,733 in 2023, representing a significant increase compared to 2016, when only a single instance was found.[38] This phenomenon is also observed in private contexts. Photos of clothed individuals can be easily projected onto existing nude images of other persons and widely disseminated via private accounts. In August 2025, Scotland issued its first ruling regarding the creation and distribution of AI-generated deepfake pornography. The convicted individual was fined £335. The female victim regarded the penalty as insufficient and emphasised the need for more precise legislation specifically addressing crimes involving AI.[39] This reveals not only the legal and societal challenges of continuously adapting to the latest developments in AI technologies but also the imperative to anticipate them as effectively as possible. In this context, cybersecurity expert Lynsay Shepherd emphasises the importance of collaboration across scientific, legislative, and technical domains to effectively counter deepfake-related crimes.[40] It can be argued that the harm caused by deepfakes bears an analogy to the concept of so-called “revenge porn”: the distribution of intimate images of a person without their explicit consent, resulting in a loss of control over their own image and, to some extent, their body. Deepfakes can therefore be understood as a form of loss of control that can evoke feelings of helplessness in victims. Once deepfakes are disseminated on social media, their removal proves extremely difficult.
Typically, this responsibility falls to social media users, who must report such images or videos and request their removal.[41]
Another crucial aspect illustrated here is AI’s potential to reinforce structural discrimination against women. According to the Federation of Autonomous Women’s Counseling Centers on Sexual Violence in Austria, three-quarters of all women have experienced sexual harassment, and one-third have experienced sexual violence.[42] These figures suggest that sexualised violence against women remains a tolerated phenomenon within society and is likely to evolve and transform further with the increasing integration of AI. Statistics cited by the government of New South Wales illustrate how gendered such harms already are in the form of non-consensual AI-generated deepfake pornography: in 2023, approximately 98 % of all deepfake content online was pornographic, and about 99 % of the individuals depicted were women, highlighting how deeply these harms are embedded in the digital landscape and how AI integration may amplify these dynamics.[43] Consequently, it is of great importance to address this issue early, both legally and socially, and to continue monitoring it closely.
In this regard, deepfakes should not be viewed solely as an ethical issue but also carry significant legal consequences, particularly in cases of deepfake pornography. The EU AI Act contains specific provisions regulating the creation and misuse of deepfakes. According to Article 50 of the EU AI Act, from August 2026 onwards, operators of AI systems will be required to clearly label deepfakes as artificially generated or manipulated and to disclose their use accordingly. This is of particular significance, as deepfakes, in the long term, not only undermine public trust in news and public figures but also facilitate identity fraud.[44] Other legal measures, such as the aforementioned Online Safety Act, also aim to address this issue by providing enhanced legal protection to both minor and adult users online. These measures do not apply exclusively to social media platforms but to all services that allow users to upload images or interact with others. A key aspect considered is the role of algorithms in exposing users to illegal content.[45]
Austria is also experiencing numerous changes in the regulation, public awareness, and societal impact of AI-generated content and deepfakes. It is evident that a significant number of companies have recognised the potential of AI and are employing it in diverse ways—for example, engineers working within corporate settings. However, only a small proportion of other professional fields engage explicitly in AI development. AI is also being extensively researched at academic institutions, such as universities and colleges, particularly in machine learning, symbolic methods, robotics, and autonomous systems.[46]
The Austrian federal government has developed its own AI strategy, emphasising that its goal is a human-centred application of AI that serves the common good while simultaneously promoting competitiveness and innovation.[47] Furthermore, it states that its aim is to work in cooperation with European and international partners to shape the legal frameworks for the safe use of AI, which are to be continuously reviewed and optimised.[48]
In this context, for AI systems to be classified as trustworthy, they must meet the following requirements: they must first and foremost be lawful, complying with all existing laws and regulations; they must adhere to fundamental ethical principles, such as equality and fairness; and they must be robust, in both technological and societal terms.[49] According to Article 5 of the EU AI Act, certain applications of AI are prohibited outright, the so-called “forbidden practices.” These include, in particular, AI systems designed to manipulate human decisions, exploit vulnerabilities, or categorise and evaluate individuals based on personal characteristics. Also prohibited are systems that predict a person’s likelihood of committing a crime, the untargeted scraping of facial images from the internet or video surveillance footage to build facial-recognition databases, the inference of emotions in workplaces and educational institutions, and biometric categorisation based on sensitive attributes. Narrow exceptions exist, for instance in the search for missing persons or the prevention of an imminent terrorist attack.
AI and related technologies have the potential to positively impact societal well-being. However, continuous monitoring and mitigation of negative effects remain central concerns in AI ethics. AI should not be seen as isolated but embedded within cultural, social, and legal structures. Legally, more comprehensive international restrictions on deepfakes are expected, guided by the EU AI Act’s minimum transparency requirements, though a complete ban is unlikely due to enforcement challenges.[50]
The implementation of novel legal measures (such as Austria’s “Hate on the Net Prevention Act”) aims not only to provide effective protection against hateful content on social media at the international level, but also to extend fundamental legal principles to the Internet. This includes the requirement that online platforms provide their users with an easily accessible, transparent reporting procedure. Another component concerns eliminating the cost risk for victims of such crimes in legal proceedings.[51] However, the lack of accountability among many social media providers in regulating AI programs (such as deepfakes) is questionable. At present, such content is only removed in cases of guideline violations (e.g., pornography).[52] A central issue is that in the future handling of AI, the ethical component is significant alongside the evident relevance of criminal offences. Integrating ethical considerations into social media guidelines could represent a crucial first step.
Societal Approaches
A significant problem arises concerning the reach of AI technologies such as deepfakes. The seemingly unstoppable presence of these technologies poses a considerable risk. In their study, Rafia Islam and Azmine Toushik Wasi (2025) analyse the major impacts and the complexity of content moderation, concluding that even when reported, such content often achieves millions of views. Experimental studies indicate that audiovisual disinformation can produce lasting effects on attitudes and increase the social costs associated with delayed moderation.[53] It can be noted that even when AI-generated or manipulated content is removed, it still achieves significant reach. The dissemination of disinformation through these materials should not be underestimated, as it is difficult to prevent.
Because fake news is often deliberately employed, it should be classified as far more harmful than simple misinformation. Consequently, the proper and nuanced handling of fake news is of great relevance, as it is frequently associated with misuse. This raises the question of how to most effectively counter fake news and disinformation—particularly those generated by AI and deepfakes. The most effective approach is to develop the competence to critically evaluate information and distinguish deepfakes and other forms of manipulated content from the truth, preventing such materials from being easily accepted as factual. Research on this topic has shown that, in the educational context, training in critical media literacy and the implementation of the psychological approach known as the “inoculation theory” have a positive impact. According to inoculation theory, which draws on the metaphor of vaccination, it is possible to neutralise misinformation by exposing individuals to a small, relatively weak dose of false information, thereby preparing them to deal with it. This, in turn, initiates a cognitive process that generates counterarguments to false information in advance.[54] In this context, it can be emphasised that for current and future generations—who are increasingly confronted with the presence of fake news, deepfakes, and AI-generated content—it appears necessary to acclimate to the new media landscape. The goal is thus to promote critical thinking skills at an early stage, particularly among younger generations.
For example, Moser, in his book Fake News in Contemporary Science and Politics: A Requiem for the Real?, describes inoculation theory as a promising tool for combating the false equivalence of claims based on conspiracy theories and evidence-based perspectives. An example of its successful application is the failure of the Putin regime’s attempt to use the myth of denazification to justify the 2022 invasion of Ukraine.[55] This case illustrates that inoculation theory presents a promising strategy for addressing misinformation at the international and political levels and can therefore play a significant role in societal efforts to combat fake news. Beyond this, it is necessary to explore which additional approaches might provide potential solutions. In this regard, Diakopoulos and Johnson (2021) present four comprehensive strategies in their study for the future mitigation of harm caused by deepfakes, which are applicable to a wide range of AI-generated content:[56]
- Education and media literacy (including the “inoculation theory”);
- Individual resilience;
- Verification; and
- Attention modulation.
The development of education and media literacy aims to enable individuals to identify deepfakes and independently verify the reliability of information and sources, laying the foundation for taking responsibility for one’s own consumption and sharing of information. Individual resilience, in turn, refers to the affected individual’s responsibility to take measures for self-protection and to clarify the situation. In the context of an election campaign, for example, this includes monitoring social platforms where deepfakes are circulating so that they can be removed as quickly as possible. It is also possible to assemble a team that develops a response plan for deepfakes in advance, particularly in a legal context. Verification refers to technologies that can determine the origin of audio and video files and reconstruct the production process behind a deepfake, allowing confirmation of whether an image or video has been generated or manipulated. Although these technologies achieve high precision, their reliability can never be fully guaranteed.
Authentication, in this sense, can only be interpreted as the failure of an attempt to find evidence of manipulation, not as positive proof of authenticity. Finally, attention modulation can limit the harm caused by deepfakes by restricting their reach. With the rise and establishment of social media, it has become possible for any individual to produce and disseminate deepfakes and other AI-generated content, and comprehensive measures are therefore required to counter this problem. Banning deepfakes, labelling them as AI-generated, or strategically debunking them could prove to be effective methods for mitigating their impact, although each of these approaches has limitations. Importantly, deepfakes are not inherently a product of AI; rather, AI tools have enhanced their creation, making the process faster, more accurate, and accessible to a broader range of users. These tools also facilitate the rapid and widespread dissemination of manipulated content, increasing its ubiquity and potential societal impact.[57] In general, there will be an increased need for research in this field in the coming years, and several unresolved questions require clarification at a fundamental level. It is also crucial that these measures are designed and analysed comprehensively and in an interdisciplinary manner, drawing on insights from computer science, law, political science, media studies, and ethics, in order to ensure optimal outcomes.
Conclusion
In summary, AI and its diverse applications should be regarded as an integral part of society, as they are already widely employed to support content recommendation systems on social media, automate routine administrative and customer service tasks, enhance scientific research through data analysis, and assist in decision-making processes in healthcare, finance, and public policy. Simultaneously, these applications carry significant ethical and societal risks, such as the propagation of misinformation and the potential for algorithmic bias, which must be carefully managed. AI technologies continue to evolve dynamically and are subject to ongoing, significant optimisation.
In conclusion, while AI-enhanced deepfakes pose significant risks, their potential for constructive applications in context-specific settings, such as creativity, expression, and entertainment, should not be overlooked; however, these potential applications remain secondary to, and must be weighed against, the considerable risks associated with misuse. Accordingly, the primary focus of this analysis remains on the threats posed by AI rather than on its speculative or emerging benefits.
While the regulation or criminalisation of deepfakes is of fundamental importance, it is essential to consider the complexity of the issue at both the technological and legal levels. At the same time, the rapid spread and intensification of deepfakes in election campaigns and on social media are alarming. This raises the question of whether criminalisation as a last resort is sufficient or whether more proactive measures are required.[58]
There are numerous approaches that promise to mitigate harm, yet each has limitations. Since a complete ban on AI is legally unlikely, additional societal and legal solutions must continually be developed, ideally staying ahead of technological developments; given the lengthy legal processes involved, however, this appears highly unrealistic. Another aspect that must be emphasised is the importance of collective collaboration: addressing the challenges posed by AI requires interdisciplinary cooperation at all levels—legislative, political, scientific, and technological.
In this context, measures such as the European Union’s Artificial Intelligence Act (AI Act) are important, as they provide a framework for risk assessment and enforcement. Maintaining democratic structures certainly requires AI regulation. Relevant considerations in this regard include legal accountability, social media platforms’ responsibility, and public awareness efforts.[59] At the national level, Austria faces the need to intensify efforts to address issues such as social inequalities, cultural diversity, and new forms of citizen participation in public administration. To actively counteract discrimination, fair and transparent decision-making processes must be effectively promoted. Other ethical challenges (e.g., sustainability and combating corruption) will also remain significant in the future, requiring new and more efficient solutions.[60] Undoubtedly, these aspects are closely linked to AI, as they not only introduce new challenges but also require a continuously adapted approach to these dynamic developments.
[1] Arshin Adib-Moghaddam, Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity (London: Bloomsbury Academic, 2023), 21.
[2] Harry Borovick, AI and the Law: A Practical Guide to Using Artificial Intelligence Safely (Berkeley, CA: Apress, 2024), 5.
[3] Borovick, AI and the Law, 196.
[4] Austrian Academy of Sciences, Institute of Technology Assessment, Generative AI and Democracy: Final Report (Vienna, 2025), 19, https://epub.oeaw.ac.at/0xc1aa5572_0x00402d80.pdf.
[5] Ed Coper, Facts and Other Lies: Welcome to the Disinformation Age (Sydney: Allen & Unwin, 2022).
[6] Keith Moser, Fake News in Contemporary Science and Politics: A Requiem for the Real? (Cham: Palgrave Macmillan, 2024), 11.
[7] Moser, Fake News in Contemporary Science and Politics, 11.
[8] Borovick, AI and the Law, 5.
[9] Martina Hulok, “The EU Model of AI Governance: Regulating Artificial Intelligence through Law and Policy,” ERA Forum 26 (2025): 527–47, https://doi.org/10.1007/s12027-025-00869-1.
[10] Nicholas Diakopoulos and Deborah Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes in the Context of Elections,” New Media & Society 23, no. 7 (2021): 2079, https://doi.org/10.1177/1461444820925811.
[11] Jennifer Mercieca, Demagogue for President: The Rhetorical Genius of Donald Trump (College Station: Texas A&M University Press, 2020).
[12] The Poynter Institute, “‘Not the AI Election’: Why Artificial Intelligence Did Not Define the 2024 Campaign,” 2024, https://www.politifact.com/article/2024/dec/19/not-the-ai-election-why-artificial-intelligence-di/.
[13] Michael Funk, Roboter- und KI-Ethik: Eine methodische Einführung. Grundlagen der Technikethik (Wiesbaden: Springer Fachmedien, 2022), 11.
[14] Adib-Moghaddam, Is Artificial Intelligence Racist?, 22.
[15] Adib-Moghaddam, Is Artificial Intelligence Racist?, 21ff.
[16] Adib-Moghaddam, Is Artificial Intelligence Racist?, 19f.
[17] Marlon Possard, Verwaltungsethik im Fokus: Ethische Grundlagen und Orientierungshilfen. Ein kompaktes Nachschlagewerk für die österreichische Verwaltung (Vienna: facultas, 2025), 117.
[18] Anastasia Kozyreva et al., “Public Attitudes towards Algorithmic Personalization and Use of Personal Data Online: Evidence from Germany, Great Britain, and the United States,” Humanities and Social Sciences Communications 8 (2021): 117, https://doi.org/10.1057/s41599-021-00787-w.
[19] Adib-Moghaddam, Is Artificial Intelligence Racist?, 21.
[20] Adib-Moghaddam, Is Artificial Intelligence Racist?, 21f.
[21] Possard, Verwaltungsethik im Fokus, 41.
[22] Taberez A. Neyazi, Tan Khai Ee, and Ozan Kuru, “Campaign Deepfakes and Affective Polarization: The Role of Artificial Intelligence in Campaigns in Shaping Voter Attitudes,” Social Science Computer Review (2025): 3, https://doi.org/10.1177/08944393251362247.
[23] Robert Chesney and Danielle K. Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753–1819, https://scholarship.law.bu.edu/faculty_scholarship/640.
[24] Rafia Islam and Azmine Toushik Wasi, “Deepfakes in Political Manipulation: Evaluating Risks under the AI Act,” OpenReview (2025): 2, https://openreview.net/forum?id=Kt9bSLxDih.
[25] Diakopoulos and Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes,” 2077.
[26] Anna Moore, “‘It Was Extremely Pornographic’: Cara Hunter on the Deepfake That Nearly Ended Her Political Career,” The Guardian, December 1, 2025, https://www.theguardian.com/society/ng-interactive/2025/dec/01/it-was-extremely-pornographic-cara-hunter-on-the-deepfake-video-that-nearly-ended-her-political-career.
[27] Diakopoulos and Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes,” 2079ff.
[28] Diakopoulos and Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes,” 2083.
[29] Christian Fuchs, Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter (London: Pluto Press, 2018), 174f.
[30] Art Raymond, “‘A Bunch of Malarkey’: New Hampshire AG Investigating AI-Generated Robocalls in Biden’s Voice,” Deseret News, January 23, 2024, https://www.deseret.com/2024/1/23/24048018/new-hampshire-primary-fake-robocall-artificial-intelligence-joe-biden-voice.
[31] Sam Levine, “White House Posts Digitally Altered Image of Woman Arrested after ICE Protest,” The Guardian, January 22, 2026, https://www.theguardian.com/us-news/2026/jan/22/white-house-ice-protest-arrest-altered-image.
[32] Islam and Wasi, “Deepfakes in Political Manipulation,” 2.
[33] Islam and Wasi, “Deepfakes in Political Manipulation,” 2f.
[34] Islam and Wasi, “Deepfakes in Political Manipulation,” 461.
[35] Borovick, AI and the Law, 153.
[36] BBC News, “How AI and Deepfakes Are Changing Politics,” March 16, 2024, https://www.youtube.com/watch?v=wxEpPin8MWw.
[37] Ramon Antonio Vargas, “Democratic Political Operative Admits He Commissioned Robocall of AI Biden,” The Guardian, February 26, 2024, https://www.theguardian.com/us-news/2024/feb/26/steve-kramer-admits-he-commissioned-robocall-ai-biden-new-hampshire.
[38] Nadeem Badshah, “Nearly 4,000 Celebrities Found to Be Victims of Deepfake Pornography,” The Guardian, March 21, 2024, https://www.theguardian.com/technology/2024/mar/21/celebrities-victims-of-deepfake-pornography.
[39] Julie-Anne Barnes and Laura Goodwin, “Friend Stole My Face for Deepfake Nudes: Now I Want Tougher Laws,” BBC News, September 19, 2025, https://www.bbc.com/news/articles/c4g7kw8ly4ko.
[40] Barnes and Goodwin, “Friend Stole My Face for Deepfake Nudes.”
[41] Tyrone Kirchengast, “Deepfakes and Image Manipulation: Criminalisation and Control,” Information & Communications Technology Law 29, no. 3 (2020): 313, https://doi.org/10.1080/13600834.2020.1794615.
[42] Women’s Counseling Emergency Hotline for Sexual Violence Vienna, Facts and Figures on Sexual Violence against Women (Vienna, 2023), https://www.sexuellegewalt.at/site/assets/files/1469/zahlen_und_fakten-sexuellegewalt_08-2023.pdf.
[43] Tom Gotsis, Parliament of New South Wales, Parliamentary Research Service, Sexually Explicit Deepfakes and the Criminal Law in NSW (Sydney, 2025), https://www.parliament.nsw.gov.au/researchpapers/Documents/Sexually%20explicit%20deepfakes.pdf.
[44] Borovick, AI and the Law, 154.
[45] UK Department for Science, Innovation and Technology, “Online Safety Act: Explainer—What Does the Online Safety Act Do?” 2025, https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer.
[46] Department for Science, Innovation and Technology, “Online Safety Act: Explainer.”
[47] Federal Ministry for Climate Protection et al., Federal Government Strategy for Artificial Intelligence: Artificial Intelligence Mission Austria 2030 (AIM AT 2030) (Vienna, 2021), 19, https://www.digitalaustria.gv.at/verwaltung/strategien/ki-aim-at-2030.html.
[48] Federal Ministry for Climate Protection et al., Federal Government Strategy for Artificial Intelligence, 28.
[49] Federal Ministry for Climate Protection et al., Federal Government Strategy for Artificial Intelligence, 27.
[50] Federal Ministry for Climate Protection et al., Federal Government Strategy for Artificial Intelligence, 23.
[51] Federal Ministry for Climate Protection et al., Federal Government Strategy for Artificial Intelligence, 27.
[52] Borovick, AI and the Law, 155.
[53] Federal Ministry of Justice (Austria), “New Measures to Combat ‘Online Hate Speech’,” accessed October 13, 2025, https://www.bmj.gv.at/themen/Fokusthemen/gewalt-im-netz.html.
[54] Borovick, AI and the Law, 155.
[55] Islam and Wasi, “Deepfakes in Political Manipulation,” 2.
[56] Moser, Fake News in Contemporary Science and Politics, 11.
[57] Moser, Fake News in Contemporary Science and Politics, 162.
[58] Diakopoulos and Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes,” 2084.
[59] Diakopoulos and Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes,” 2085ff.
[60] Kirchengast, “Deepfakes and Image Manipulation,” 321.