Why responsible design of language models is needed
Prof. Dr. Peter Dabrock, Friedrich-Alexander-Universität Erlangen-Nürnberg and member of Plattform Lernende Systeme
Large language models like ChatGPT are celebrated as technical breakthroughs in AI, while their impact on our society is sometimes discussed with concern, sometimes outright demonized. Yet life rarely paints in black and white, but mostly in shades of gray. The corridor for responsible use of the new technology must be explored in a criteria-based and participatory manner.
A variety of ethical questions are associated with the use of language models: Do the systems cause unacceptable harm to people, whether to all or to certain groups? Are the harms permanent, irreversible, and profound, or mild? Immaterial or material? Are language models problematic more or less independently of their specific use? Or do dangerous consequences arise only in certain contexts of application, for example when a medical diagnosis is made automatically? The ethical assessment of the new language models, especially ChatGPT, depends on how one judges their further technical development as well as how deeply different applications intervene in people's lives. Also at play is what one expects technology to do about social problems and how one assesses its influence on the human self-image: Can or should technical possibilities solve social problems, or do they reinforce them, and if so, to what extent?
Non-discriminatory language models?
For the responsible design of language models, these fundamental ethical questions must be considered. In the case of ChatGPT and related solutions, as with AI systems in general, the expectation of technical robustness must be taken into account, and above all, so-called biases must be critically examined: when a language model is programmed, trained, or used, biased attitudes present in the underlying data can be adopted and even reinforced. These biases must be minimized as far as possible.
Make no mistake: biases cannot be completely eliminated, because they are also expressions of attitudes toward life, and they should not be completely erased. But they must be continually and critically reexamined to determine whether and how they are compatible with fundamental ethical and legal norms such as human dignity and human rights, but also - at least as desired in broad sections of many cultures - with diversity, and whether they legitimize or promote stigmatization and discrimination. How this can be achieved technically, but also organizationally, is one of the greatest challenges ahead. Language models will also hold up a mirror to society and - as is already the case with social media - will be able to reveal and reinforce social fractures and divisions in a distorting yet nevertheless unmasking way.
If one wants to speak of disruption, such potential is emerging in the increased use of language models, which can be fed with data far more intensively than current models in order to aggregate solid knowledge. Even though they are self-learning and deploy nothing more than a neural network, the effect is likely to be so substantial that the generated texts simulate real human activity. They are thus likely to pass the usual forms of the Turing test. Libraries will be written in response to what this means for humans, machines, and their interaction.
The final whistle for creative writing?
One effect to be watched carefully is that the basic cultural technique of individual writing could come under massive pressure. Why should this be anthropologically and ethically worrisome? It was recently pointed out that the formation of the individual subject and the emergence of Romantic epistolary literature stood in a constitutive interrelationship. One need not, therefore, conjure up the end of the modern subject over the hardly avoidable abandonment of survey essays or introductory seminar papers, which are meant to document basic knowledge in undergraduate studies and are easy to produce with ChatGPT. But it is clear that independent creative writing must now be practiced and internalized differently - and this is of considerable ethical relevance if the formation of a self-assured personality is crucial to our complex society.
Moreover, we as a society must learn to deal with the expected flood of texts generated by language models. This is not just a matter of personal time management. Rather, a new form of social inequality looms: the better-off may continue to draw inspiration from texts written by humans, while the educationally and financially disadvantaged have to settle for the literary crumbs generated by ChatGPT.
Technically disruptive or socially divisive?
ChatGPT's technical disruption does not automatically threaten social divisions. But such divisions will only be avoided if we quickly put familiar practices - especially in education - to the test and adapt them to the new possibilities. We bear responsibility not only for what we do, but also for what we leave undone. That is why the new language models should not be demonized or banned across the board. Rather, it is important to observe their further development soberly, but to shape it courageously as individuals and as a society, with both support and demands - and to take everyone along as far as possible in order to prevent unjustified inequality. This is how ChatGPT can be handled responsibly.