European approach to AI regulation


The European Union has introduced the EU AI Act, the world's first comprehensive legal framework on artificial intelligence. This regulation is built on EU values and is designed to foster trustworthy AI, promote innovation, and protect the rights and safety of citizens across all Member States. The AI Act reflects a clear commitment to a human-centric and risk-based approach to AI development and deployment.

Key policy objectives

The EU continues to support AI development through strategic policy initiatives, particularly the Coordinated Plan on AI (2021). While the AI Act sets legally binding rules, the Coordinated Plan outlines steps to boost investment, encourage research, and coordinate national AI policies.

These goals are pursued through four key objectives:
- Set enabling conditions for AI development and uptake in the EU.
- Make the EU the place where excellence thrives from the lab to the market.
- Ensure that AI works for people and is a force for good in society.
- Build strategic leadership in high-impact sectors.

The EU AI Act: Europe's legal framework for artificial intelligence

Adopted in May 2024, the EU AI Act provides a structured legal basis for the use of AI within the EU, and beyond it when AI systems affect people in the EU. The regulation aims to ensure safety, trust, and legal certainty while enabling innovation.

Scope

The Act follows a risk-based approach, classifying AI systems into four categories according to the risk they pose to people's safety and rights: unacceptable, high, limited, and minimal risk.

Unacceptable risk

These are AI systems considered a clear threat to the safety, livelihoods, and rights of people. The regulation prohibits eight types of practices, including:
- Social scoring by public or private actors.
- AI that manipulates people's behaviour in ways that cause significant harm.
- AI that exploits the vulnerabilities of specific groups, such as children or persons with disabilities.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.

High risk

These are AI systems that could have a serious impact on people's health, safety, or fundamental rights. They include AI used as a safety component of regulated products and AI deployed in sensitive areas such as critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.

These high-risk systems must meet strict requirements before being placed on the market, including:
- A risk management system and high-quality training data.
- Technical documentation and record-keeping.
- Transparency and clear information for users.
- Human oversight.
- Appropriate levels of accuracy, robustness, and cybersecurity.

Limited risk

AI systems in this category raise transparency concerns but do not pose major threats. Examples include chatbots and recommendation systems. These systems must inform users that they are interacting with AI. Providers of generative AI are subject to additional transparency obligations, set out in the section below.

Minimal risk

These are systems with low or no impact on people’s rights and safety. Most AI applications fall into this category, such as video games and spam filters. The AI Act does not impose specific obligations on this category.
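As a purely illustrative sketch, the four-tier structure described above can be summarised as a simple mapping from risk category to the headline obligation this article associates with it. The category names reflect the sections above; the code itself, including the enum and helper function, is hypothetical and not part of the AI Act.

```python
from enum import Enum

class RiskCategory(Enum):
    """Illustrative summary of the AI Act's four risk tiers, as described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements before market placement"
    LIMITED = "transparency obligations (e.g. disclose interaction with AI)"
    MINIMAL = "no specific obligations under the AI Act"

def headline_obligation(category: RiskCategory) -> str:
    # Hypothetical helper: returns the headline obligation for a tier,
    # as summarised in this article (not a legal determination).
    return category.value

for tier in RiskCategory:
    print(f"{tier.name.capitalize()} risk: {headline_obligation(tier)}")
```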

Transparency for generative AI 

Providers of generative AI systems must:
- Label AI-generated content.
- Publish summaries of training data.
- Implement safeguards against generating illegal content.
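By way of illustration only, a provider might meet the first obligation by attaching a machine-readable disclosure to each generated output. The wrapper class, function, and metadata field names below are hypothetical assumptions for this sketch; the AI Act does not prescribe a specific format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """Hypothetical wrapper pairing model output with an AI-disclosure label."""
    text: str
    # Machine-readable disclosure metadata; field names are illustrative,
    # not a format prescribed by the AI Act.
    metadata: dict = field(default_factory=dict)

def label_ai_generated(text: str, model_name: str) -> GeneratedContent:
    # Attach a clear "AI-generated" marker and basic provenance to the output.
    return GeneratedContent(
        text=text,
        metadata={
            "ai_generated": True,
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    )

output = label_ai_generated("Draft summary of the EU AI Act...", model_name="example-model")
print(output.metadata["ai_generated"])  # True: downstream systems can surface the disclosure to users
```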
