3 Questions for

Armin Grunwald

Professor of Philosophy of Technology at KIT, Head of the Office of Technology Assessment at the German Bundestag and member of the working group IT Security, Privacy, Legal and Ethical Framework of Plattform Lernende Systeme


Trustworthy AI: "The systems need regular technical examination"

The EU Commission wants to regulate by law what Artificial Intelligence may and may not do. In its draft for an AI law, presented in April of this year, the Commission defined a catalogue of criteria that will in future be used to assess the risk, the so-called criticality, of an AI system. Depending on the level of risk, an AI system may be banned, must meet certain requirements before it is launched on the market, or is not subject to any requirements at all. Armin Grunwald, Professor of Philosophy of Technology at the Karlsruhe Institute of Technology (KIT) and Head of the Office of Technology Assessment at the German Bundestag (TAB), explains why assessing the risks of Artificial Intelligence is particularly difficult and what is needed, beyond the EU proposal, to make an AI system trustworthy.

1

Mr Grunwald, is an AI system with low criticality harmless?

Armin Grunwald: The criticality of an AI system depends on how, where, when and by whom it is used. Within this context of use, possible risks can in theory be assessed in a comprehensible way. This is helpful for taking precautions for the use of AI-supported products and services in practice, "just in case", so to speak. However, the theoretical assessment cannot anticipate the dynamics of later real-world use. These are often unpredictable, if only because market developments and people's acceptance are always surprising. In addition, human creativity often exceeds the assumed application scenarios - in a positive sense, when new desirable applications and innovations are discovered, but also in a negative sense, when AI systems are misused for unethical purposes or misappropriated. Low criticality is therefore only an indication of an initially assumed low potential for harm, not a guarantee that this will remain the case. It is by no means synonymous with harmlessness, because harmlessness only becomes apparent in the course of practical use. Safety is not simply a technical parameter; it combines technical properties with human usage behaviour. And the latter is always good for surprises - in very different directions.

2

Why is it so difficult to reliably assess the risks of an AI system?

Armin Grunwald: For conventional IT, we have at least some reliable knowledge about the dangers and risks (i.e. possible dangers). For example, we know the probability with which errors occur or how possible dangers can develop. AI, on the other hand, is often embedded in visionary narratives and application scenarios for which this knowledge is not available. Instead, assumptions must be made about future applications whose occurrence is more or less probable or even purely speculative. The openness of the future - often pejoratively referred to as uncertainty - prevents a risk assessment based on empirical data, because data from the future do not exist. This fundamental limitation is compounded by the fact that AI systems can change in unpredictable ways through machine learning, whereas conventional IT cannot change its characteristics on its own. Any risk assessment for AI systems must therefore also take possible changes into account and even consider the risks of unintended learning, i.e. the learning of undesired behaviour. Guard rails may also need to be programmed into the AI system to constrain learning and avoid such undesirable effects.

3

Regulation based on criticality levels is therefore not enough. What is necessary to make an AI system trustworthy?

Armin Grunwald: The answer is easier if we compare the question with other technologies. When do we consider a car trustworthy? If we did not trust that the brakes, steering and other functional elements work as they should, we would not drive a car. Trustworthiness of functioning has two components: trust in the technology itself, i.e. that it does what it is supposed to do, and trust in the people and institutions that vouch for it and check it. This ranges from the manufacturing company to the workshop to the TÜV. Trust arises when technical performance parameters, corresponding experience and their human or institutional monitoring come together. The same applies to AI systems: they have to be approved, they have to do their job reliably and demonstrably, and they have to be checked regularly by some kind of TÜV. Thanks in part to the TÜV seal just mentioned, we trust in the safety of conventional cars. In contrast, not everyone would get into a self-driving car today. The difference is that we have no experience with AI as part of the on-board computer and that there is no trustworthy seal for it yet.

 

As a member of the working group IT Security, Privacy, Legal and Ethical Framework, Armin Grunwald co-authored the white paper Criticality of AI systems in their respective application contexts (in German).

The interview is released for editorial use (please cite the source: © Plattform Lernende Systeme).
