Regulating Artificial Intelligence: The EU regulation at a glance

Artificial Intelligence (AI) is a key technology for the future viability of our economy and society. However, AI systems also harbour risks. With the AI Act, the European Union (EU) has now adopted the world's first transnational set of rules for the safe and trustworthy use of AI. In the new issue of its KI Kompakt series, Plattform Lernende Systeme explains concisely and clearly what the AI Regulation actually stipulates, which opportunities and challenges are associated with the law, and how its provisions are now being put into practice.


The Artificial Intelligence Act, or AI Act for short, sets out clear rules for the development and use of AI systems, thereby creating a uniform, binding legal framework in Europe. The aim is to minimise the risks that can arise from the technology. At the same time, the AI Regulation is intended to keep AI research and development in the EU competitive and to promote innovation. AI applications that pose unacceptable risks, such as social scoring or the real-time biometric identification of individuals, will be banned in Europe. High-risk AI systems, such as those used in schools, human resources management or law enforcement, must fulfil strict safety requirements before they can be placed on the EU market, including risk management, human oversight and requirements on the quality of training data.

Key points of the AI Regulation

As a matter of principle, the AI Regulation stipulates that the use of AI must be made transparent: it should always be clear to people when they are interacting with AI. This applies to chatbots as well as to AI-generated images and texts, which must be labelled accordingly in future.

Another key point of the regulation is how it handles AI foundation models such as the one behind ChatGPT, which underpin many generative AI applications. Their distinctive feature is that no specific use is defined for them from the outset; they could, however, later be integrated into a high-risk system. Depending on the computing power involved, they are now subject to graduated requirements on transparency, cybersecurity and energy efficiency.

"The AI Regulation is a pioneer: it represents the world's first attempt to guarantee the safety of AI systems ex ante. However, the definition of which information systems fall under the central concept of an AI system is complex: these should be systems with different degrees of autonomy that do not operate solely on the basis of rules created by humans. However, according to the recitals, knowledge- or rule-based expert systems should certainly be covered by the regulation. The concretisation and application to specific borderline cases is left to case law," says Ruth Janal, Professor of Law at the University of Bayreuth and member of the IT Security, Privacy, Legal and Ethical Framework working group of Plattform Lernende Systeme.

Beyond the definition of AI, the application of the law in the member states also raises concerns. Different mechanisms for checking whether an AI system complies with the law, for example, could distort competition. Critics also warn that the AI Act could hold back AI innovation, particularly in medium-sized companies, because of the potentially high cost of complying with the EU rules.

What happens next?

The AI Act enters into force 20 days after its publication in the Official Journal of the EU. While the prohibitions apply after just six months, the regulation as a whole will not become applicable for another two years. Harmonised European standards are currently being developed to specify exactly how the regulation is to be implemented in the various fields of application; they will apply in all EU member states. In addition, each member state will set up at least one authorised inspection body and a market surveillance authority to enforce the regulation at national level.

About the KI Kompakt format

KI Kompakt provides a concise, scientifically grounded overview of current developments in the field of Artificial Intelligence and highlights potentials, risks and open questions. The analyses are produced with the support of experts from Plattform Lernende Systeme and published by its managing office. The third issue of the series is devoted to the "AI Act of the European Union. Rules for trustworthy AI" and is available for download free of charge.

Further information:

Linda Treugut / Birgit Obermeier
Press and Public Relations

Lernende Systeme – Germany's Platform for Artificial Intelligence
Managing Office | c/o acatech
Karolinenplatz 4 | D - 80333 Munich

T.: +49 89/52 03 09-54 /-51
M.: +49 172/144 58-47 /-39
presse@plattform-lernende-systeme.de
