3 Questions for

Jessica Heesen

Head of the research area Media Ethics and Information Technology at the International Centre for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen.


Responsibility in Technology Development: How Does Ethics Get into AI?

Artificial Intelligence promises many beneficial applications. The extent to which these are actually realised depends largely on whether people have confidence in the technology. Ethical values and principles therefore play a central role - at least in Europe - in the development of Artificial Intelligence (AI). According to a recent survey conducted by Bitkom, a large majority of Germans want AI to be secure and demand that AI systems be tested particularly thoroughly and approved before they may be used in devices. What challenges arise for the responsible development and application of AI systems? By which criteria should they be implemented? And what can companies do to apply AI without discrimination? Answers to these questions can be found in a guideline developed by the IT Security, Privacy, Law and Ethics working group of Plattform Lernende Systeme under the direction of Jessica Heesen. It takes up the approach of "Ethics by, in and for Design" formulated in the German Federal Government's AI strategy and aims to provide orientation for developers, providers, users and those affected by AI systems.

1

Ms Heesen, what principles should be followed for the responsible development and application of AI systems?

Jessica Heesen: In principle, AI development should serve society and must not lead to new technical or economic constraints that violate ethical standards of coexistence or restrict positive developments. This is what is meant when many political documents and speeches speak of "AI for people". And it is what the philosopher Theodor W. Adorno expressed as early as 1953: "There is no technological task that does not fall within society". Technological development is therefore never limited to the technical solution as such, but is an elementary building block of a humane society. Accordingly, the development of AI, as one of the key modern technologies, must be oriented towards guiding values such as non-discrimination, protection of privacy, reliability and transparency.

In general, it is important that the routines and recommendations for action prescribed by AI systems are not accepted as being without alternative ("practical constraints"). As with other technical products, certain changeable purposes and preferences are "inscribed" in AI software, and these can benefit certain groups and individuals while harming others.

2

What does this mean in concrete terms for the development and application of AI?

Jessica Heesen: We must distinguish two levels here: on the one hand, the scope for action of individuals or groups when using AI, and on the other, the political and legal framework conditions set for a value-oriented and secure use of AI. For example, algorithmic decision-making systems with AI components can be used to decide on social benefits. The United Nations speaks here - quite critically - of a "Digital Welfare State". AI systems can, for example, decide on the allocation of food vouchers, as is already practised in India. Or they can track down potential social-benefit fraud. The use of a programme called SyRI, which the Dutch Ministry of Social Affairs used to identify people who might be wrongfully receiving unemployment or housing benefits, has since been prohibited by the courts.

One thing is clear: a high sense of responsibility is required when such sensitive AI decision-making systems are used. The administrative staff on site must understand the system's functioning, at least in broad outline, and interpret and apply it fairly. This includes being able, in the role of the "final human decision-maker", to overrule the decision of an AI system. The authority, in turn, must ensure reliability and non-discrimination when procuring such a system. The developers are responsible, among other things, for ensuring the quality of the training data and for systematically examining the systems for discriminatory factors.
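What such an examination for discriminatory factors might look like in practice can be sketched with a simple fairness check. The following Python sketch is purely illustrative and not part of the guideline; the data, group labels and tolerance threshold are assumptions. It computes the gap in approval rates between demographic groups (demographic parity), one common starting point for a discrimination audit:

```python
# Purely illustrative sketch of a demographic-parity check for a binary
# decision system (e.g. benefit approval). All names, data and the
# tolerance value are assumptions for illustration, not from the guideline.
from collections import defaultdict

def approval_rates(decisions, groups):
    """Approval rate per demographic group (decision 1 = approved)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += decision
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit run: flag the system if approval rates diverge
# between groups by more than a chosen tolerance.
decisions = [1, 1, 1, 0, 0, 0, 1, 0]          # 1 = benefit approved
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # tolerance chosen for illustration only
    print("Warning: approval rates differ notably between groups.")
```

A gap of zero would mean all groups are approved at the same rate; in practice, a single metric like this is only a first indicator, and which fairness criterion is appropriate depends on the application context the guideline asks developers to reflect on.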

3

What are the implications for a possible regulation of AI systems?

Jessica Heesen: The concept of regulation encompasses a whole range of possibilities for giving AI systems a form that is technically advanced and at the same time in line with the values of liberal constitutional states. That is why the EU Commission repeatedly emphasises the importance of a value-oriented development of AI. In general, there are different approaches to this: weak forms of regulation include, for example, the codes of ethics of professional societies or companies; strong forms are legal and state requirements, such as the certification of AI systems. In all these forms of regulation, ethical values are fundamental and must, in a further step, be specified for the respective application contexts.

Plattform Lernende Systeme has developed an ethics guideline that outlines an ethically reflective process for developing and applying AI systems, addressed to developers, providers, users and those affected. It names principles such as self-determination, justice and the protection of privacy, from which the criteria for the regulation of AI systems are largely derived. With it, we would like to contribute to the public discussion and provide an impetus for further debate.
