AI and democratic elections: Curbing possible manipulation at an early stage

In the 2024 super election year, deepfakes and similar content are attracting a lot of attention. The concern is that fake images, videos and audio created with the help of generative Artificial Intelligence (AI) could influence individual voting decisions. Their actual influence on the outcome of elections has not yet been proven. Nevertheless, experts from Plattform Lernende Systeme are calling for AI to be prevented from influencing political processes and opinion-forming, because even the mere attempt at disinformation undermines trust in democratic institutions. A current white paper highlights the significance of generative AI for elections and democracy using concrete examples and possible scenarios. It recommends, among other things, proof of origin for AI-generated content and the strengthening of AI-related media literacy.

Download the executive summary

This year, more than four billion people in around 40 countries will be called to the polls - including Americans, who will elect a new president; several state and local elections will also take place in Germany. Attempts to influence public opinion with fake news are not a new phenomenon. With the help of generative AI, however, the possibilities for exerting influence are reaching new dimensions: AI tools enable a much larger number of people to create fake or manipulative material of rapidly increasing quality. The authors of the white paper “AI in the super election year” refer to a “gradually emerging ecosystem of disinformation” in which online platforms play a significant role in spreading artificially created disinformation or misinformation.

“Generative AI can artificially create and manipulate deceptively real photos, videos and voices of real people. In this way, statements can be put into the mouths of politicians that they never made. Recently in the USA, for example, an AI-faked Joe Biden called potential voters and gave advice on voting behavior,” explains Jessica Heesen, co-author of the white paper and head of the research focus Media Ethics, Philosophy of Technology and AI at the Eberhard Karls University of Tübingen.

In addition to malicious attempts at manipulation, generative AI can also unintentionally influence the formation of political opinion. For example, the incorrect information that language models still produce due to their current technical limitations can affect democratic elections: in the run-up to German state elections, research showed that ChatGPT and similar models sometimes provide false information about candidates, election programs or the general conditions of the election.

Whether and to what extent the outcome of elections can actually be manipulated with the help of generative AI cannot yet be answered due to insufficient data. However, AI-based disinformation does not need to have a measurable influence on election results to warrant the attention of political actors. The mere attempt to exert influence violates democratic and journalistic standards and does nothing to overcome the social divisions of recent years, according to the white paper.

“Overall, AI can create a general mistrust of images and media reporting. This climate of mistrust is harmful to democracy and opinion-forming. In addition, populists, for example, can benefit from it by claiming that accurate reporting is false and a deepfake. This is known as the ‘liar’s dividend’,” says Heesen, co-head of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme.

The authors therefore recommend that political and civil society actors carefully monitor the possibilities for influence and counteract them preventively. This includes both technical measures, such as watermarks that transparently and traceably identify AI-generated content, and social efforts, for example to strengthen media and AI literacy among the population.

About the white paper

The white paper “AI in the super election year 2024. Generative AI in the environment of democratic processes” was written by members of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme. The paper expands on the considerations of the 2021 white paper “AI systems and the individual voting decision”. The new publication and its executive summary are available free of charge.

An interview with Prof. Jessica Heesen, author of the white paper and member of Plattform Lernende Systeme, is available for editorial use.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | D - 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
