Comprehensible AI: making results transparent and tailored to target groups

With the rapid spread of chatbots, artificial intelligence (AI) has become tangible for many people in their everyday lives. Yet how and why ChatGPT and other AI-based systems arrive at their results often remains opaque to users. What exactly happens in the ‘black box’ between model input and output? To make the results and decisions of complex AI systems comprehensible, algorithmic decisions must be explainable. This can improve model quality on the one hand and strengthen trust in AI on the other. A current white paper from Plattform Lernende Systeme shows which methods and tools can be used to make AI results comprehensible for different target groups and offers design options for research, teaching, policymakers and companies.


In many areas where AI is applied, such as medical diagnostics, the selection of job applicants or quality control in production, traceability is crucial for assessing and scrutinising the results. It gives developers the information they need to improve AI systems, and it lets users find out which factors were decisive for the acceptance or rejection of an application.

Research on explainable AI (XAI) mainly addresses two areas. The first focuses on improving data and models: in practice, XAI methods are often used by AI engineers to test AI models before they are deployed and to improve their quality. The second area deals with the ethical criteria for responsible AI; here, explanations serve to inform users and thus enable transparency and trust in the technology.

Strengthening trust and improving quality with XAI

Transparency is particularly relevant when machine learning is used in areas of society where trust in people, institutions or technologies is especially important, for example when the safety of people is at stake. In companies, on the other hand, knowing the ‘why’ behind AI-supported predictions can provide important insights for business development or contribute to improving production processes. In addition, explainability is an important criterion for the adoption of AI technology in companies and for a human-centred design of AI-supported working environments.

The requirements and expectations placed on XAI can therefore vary greatly. Using seven different groups of people, from AI specialists to technically inexperienced users, the new white paper from Plattform Lernende Systeme shows how individual framework conditions shape the forms and methods of explainability.

‘Explainable AI is an important building block for the transparency of AI systems and for the ability to understand what information an AI system has used to arrive at a particular output,’ says Prof. Dr Ute Schmid, Chair of Cognitive Systems at the Otto Friedrich University of Bamberg and co-author of the new white paper from Plattform Lernende Systeme. ‘Traceability is also an essential prerequisite for human control and supervision. By combining explainable AI and interactive machine learning, models can be improved in a targeted way through feedback. In this way, machine learning and human expertise can interact meaningfully.’

The questions of how and why AI applications arrive at their results, why a chatbot prompt leads to a particular sequence of words, or why an image generator creates exactly this image and no other, can be answered using explainable AI. XAI methods make it possible to draw conclusions about the quality of the underlying data by analysing how strongly individual features influence the model output. They can provide insights into the artificial neural networks inside a model by revealing the function of specific components. And they help to identify which features of the data are responsible for a particular prediction or classification.
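To illustrate what such a feature analysis can look like in practice, the following minimal sketch computes permutation feature importance, one common XAI technique, for a tabular classifier. The dataset, the scikit-learn model and the surrounding setup are illustrative assumptions and are not taken from the white paper.

    # Minimal sketch: permutation feature importance as one XAI technique.
    # Assumptions: a tabular dataset and a scikit-learn classifier (illustrative, not from the white paper).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a simple classifier on an example dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure how much the test score drops;
    # a large drop means the model relies strongly on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # List the most influential features, i.e. the ones developers and users
    # would scrutinise to judge whether the model relies on plausible information.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

Analyses of this kind are typically run by AI engineers before a model is deployed; explanations aimed at end users usually build on such results but present them in non-technical form.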

Advancing research into XAI methods

The authors of the white paper point out that XAI, both as a basis for trustworthy AI and as a tool for improving models, should be seen as an opportunity in the public debate. To drive the further development of this AI technology and make better use of its potential, they propose both general design options and options geared specifically towards particular target groups. For example, research should improve established methods and further develop XAI methods for new types of AI; possible avenues include instruments for inspecting and controlling large AI models, or standard toolboxes for correcting models without retraining. In teaching, XAI should be anchored more firmly in AI and data science degree programmes as an engineering tool. Companies could rely more on XAI, for example to reduce internal communication hurdles and to differentiate themselves from competitors.

About the white paper

The white paper "Nachvollziehbare KI. Erklären, für wen, was und wofür" was written by members of the ITechnological Enablers and Data Science working group of Plattform Lernende Systeme. The publication can be downloaded free of charge at this link.

An interview with Wojciech Samek, co-author of the white paper and member of Plattform Lernende Systeme, is available for editorial use.

Further information:

Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | D - 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
