ELIZA, ChatGPT and Democracy: It all depends on the right measure

Prof. Dr. Christoph Neuberger, Freie Universität Berlin, Weizenbaum Institute, and member of Plattform Lernende Systeme

Artificial Intelligence (AI) long remained a promise - an unfulfilled one. That now seems about to change: with ChatGPT, Artificial Intelligence has arrived in everyday life. The chatbot's ability to answer openly formulated questions spontaneously, elaborately and, in many cases, correctly - even in the form of long texts - is astonishing and exceeds anything seen before. This is causing considerable excitement and giving AI development a whole new significance in public perception. People are experimenting with ChatGPT in many areas, and business, science and politics are sounding out its positive and negative possibilities.

It is easy to forget that there is no mind in the machine. Computer pioneer Joseph Weizenbaum, born in Berlin a hundred years ago, pointed out this phenomenon early on. In the early 1960s, he programmed one of the first chatbots. ELIZA, as it was called, could simulate a therapy conversation. From today's perspective, its answers were rather plain. Nevertheless, Weizenbaum observed how test subjects formed an emotional relationship with ELIZA and felt understood. From this and other examples, he concluded that the real danger lies not in the abilities of computers, which he considered quite limited. Rather, the problem is the false belief in the power of the computer and the voluntary submission of humans to it. Bound up with this belief is the image of the calculable human being, an image that is false: respect, understanding, love, the unconscious and autonomy cannot be replaced by machines. The computer is a tool that can perform certain tasks faster and better - but no more. Therefore, not all tasks should be assigned to the computer.
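How little machinery was behind that impression of understanding can be illustrated with a minimal sketch of an ELIZA-style response loop in Python. The rules and word swaps below are invented for illustration and are far simpler than Weizenbaum's original DOCTOR script, but the principle is the same: match a keyword pattern, mirror part of the user's statement back.

```python
import re
import random

# Hypothetical, much-reduced set of ELIZA-style rules: a regex that
# captures part of the user's statement, plus templates that echo it back.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Did you come to me because you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# First-/second-person swaps so the echo reads naturally ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I feel alone these days"))
    # e.g. "Why do you feel alone these days?"
```

The mirroring is purely syntactic: the program understands nothing of what is said to it, which is precisely Weizenbaum's point.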

The Weizenbaum Institute for the Networked Society in Berlin - founded in 2017 and supported by a consortium of seven universities and research institutions - conducts interdisciplinary research into the digitalization of politics, media, business and civil society. The researchers feel committed to the work of the institute's namesake and focus on the question of self-determination. It arises, for example, in the public sphere, the central site of collective self-understanding and self-determination in a democracy. Here, controversial issues are to be clarified and political decisions prepared in a diverse, respectful and rational discourse. To this end, journalism selects topics, reports on them, moderates public discourse and takes a stand within it.

Using AI responsibly in journalism

When dealing with large language models such as ChatGPT, the question therefore arises: to what extent can and should AI applications determine news and opinion? Algorithms are already used in a variety of ways in editorial work: they help track down new topics and uncover fake news, they independently write weather or stock market reports and generate subtitles for video reports, they personalize the news menu and filter readers' comments.
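The most routine of these applications require no language model at all. Automated weather or stock market briefs are often little more than data-to-text templating, as the following sketch suggests; the field names and figures are invented for illustration, and real newsroom data feeds differ.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    # Invented fields for illustration; real market feeds carry many more.
    name: str
    close: float
    change_pct: float

def stock_brief(quote: Quote) -> str:
    """Render one quote as a short, publishable sentence."""
    direction = "rose" if quote.change_pct >= 0 else "fell"
    return (f"The {quote.name} {direction} {abs(quote.change_pct):.1f} percent "
            f"and closed at {quote.close:.2f} points.")

print(stock_brief(Quote("DAX", 15892.73, 0.8)))
# -> "The DAX rose 0.8 percent and closed at 15892.73 points."
```

Such template-based generation is fully controlled by the editorial office; the questions of control and quality become harder once generative models write the text themselves.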

These are all useful applications that can be deployed in such a way that they not only relieve editorial staff of work but also improve the quality of media offerings. But how much control do editorial offices actually have over the results? Are professional standards adhered to? Or is a distorted view of the world created and are conflicts fueled? And how much does the audience learn about the workings of AI? These are all important questions that call for particular sensitivity in the use and active design of AI. Transparent labeling of AI applications, testing against safety and quality standards, support for further training and education, a critical approach to AI, and the reduction of fears through better education are key prerequisites for using AI responsibly in journalism.

Here, too, Joseph Weizenbaum's question arises: which tasks should not be assigned to computers? So far, no chatbots are out in public debating with one another - but that could soon change. Here, too, ChatGPT stimulates the imagination. A democracy simulation that relieves us as citizens of the tasks of informing ourselves, reflecting, discussing, mobilizing and co-determining would mean the end of self-determination and maturity in democracy. Moderation in the use of large language models is therefore the imperative that should be observed here and in other fields of application.

The white paper (in German) of the working group IT Security, Privacy, Legal and Ethical Framework provides an overview of the potential and challenges of using AI in journalism.