3 Questions for

Abtin Rad

Global Director Functional Safety, Software and Digitization at TÜV SÜD / Member of Plattform Lernende Systeme

Safe AI in medicine: What does EU regulation bring?

Whether as an assistance system in the doctor's office, in a rollator that prevents falls, or as software for evaluating X-ray images - Artificial Intelligence (AI) can improve healthcare. However, if medical AI systems are faulty, people can be harmed. In this interview, Dr. Abtin Rad explains the risks associated with the use of AI in medicine, how these can be controlled, and what contribution the EU's AI Regulation can make to safe AI applications. He is Global Director Functional Safety, Software and Digitization at TÜV SÜD and a member of the "Health Care, Medical Technology, Care" working group of Plattform Lernende Systeme.


Mr. Rad, AI can improve medical diagnoses, personalize therapies, and enable chronically ill people to live a self-determined life. What risks are associated with its use?

Dr. Abtin Rad: Every new technology brings new, technology-specific risks. In the case of medical AI, particular attention must be paid to so-called "bias": skewed statistical distributions in the data on which an AI model is trained can lead not only to discrimination against certain patient groups but also to misdiagnoses that jeopardize patient safety, as past examples have shown.

Blind trust by physicians in the recommendations of AI systems - so-called "over-trust" - and a lack of transparency in AI decision-making processes pose further risks. With increasing automation in medicine, a gradual decline in the skills of medical professionals can also be observed.

By making barely detectable changes to medical data, AI systems can also be deliberately manipulated into producing incorrect diagnoses or therapy recommendations. Data protection issues must not be neglected either: only recently, researchers showed that sensitive patient information, such as medical images, can be extracted from the training data underlying an AI model. These risks highlight the need for careful control of AI systems in medicine.


How can we check whether AI systems in medicine are safe? What requirements should AI applications fulfill?

Dr. Abtin Rad: The European regulations for medical devices and in vitro diagnostics (MDR/IVDR) divide medical devices into seven classes depending on the risk. For products that are considered to pose a higher risk, manufacturers must involve a government-authorized testing body (known as a Notified Body) to check whether the product meets EU requirements. Depending on the procedure chosen, this conformity assessment includes, on the one hand, an examination of the quality management system to ensure consistent, compliant development and manufacture of the AI medical device.

On the other hand, the technical documentation of the product is examined and technically assessed to ensure that the product is safe at every point in its life cycle - from planning, design, development, and manufacture, through clinical evaluation and placing on the market, to post-market surveillance of the medical device.

Independent third-party verification of the medical device ensures an objective evaluation, based on the Notified Body's expertise and experience with medical devices and free from market interests. Testing criteria include, but are not limited to: regulatory compliance, assurance of clinical performance, AI life-cycle testing, AI data management, model selection, evaluation of ethical aspects, and post-market surveillance.


What does the European Union's planned AI Regulation mean for AI-supported medical devices from Germany?

Dr. Abtin Rad: On the one hand, the AI Regulation brings legal certainty by creating clear framework conditions for placing AI-supported medical devices on the market. In addition, the European regulation creates a harmonized European market.

On the other hand, according to the current status of the AI Regulation, which is still being negotiated in the so-called trilogue, the Regulation means that the effort medical device manufacturers must invest to demonstrate conformity will increase. Furthermore, it contains some contradictions and partly redundant requirements compared with the existing regulations for medical devices and in vitro diagnostics.

In addition, the AI Regulation requires a new designation of the testing bodies for medical devices, which have already been certifying AI-supported medical devices according to the state of the art for years. This entails considerable, and partly redundant, bureaucratic effort. In my opinion, however, the greatest challenge now and in the future is the shortage of AI experts - for manufacturers in Germany as well as for authorities and Notified Bodies.

More information on the regulation of AI can be found in the Plattform Lernende Systeme topic special (in German).

The interview is released for editorial use (if source is acknowledged © Plattform Lernende Systeme).
