European approach to AI regulation


With the Regulation on a European Approach for Artificial Intelligence, the European Commission has proposed a strategic framework based on EU values that aims to boost excellence in AI and to guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across all Member States.

Trustworthy AI promises innovative and effective solutions to many of today’s global challenges. To realise this potential, it is crucial that the technology used across the European Union is of the highest quality, and that it is developed and used in ways that earn the trust of all citizens.

Key policy objectives

To bring AI excellence from the lab to the market, the updated 2021 Coordinated Plan outlines concrete steps to drive investment in AI and to align AI policies across the EU and its Member States.

The Commission plans to achieve this through four key policy objectives: setting enabling conditions for AI development and uptake in the EU; making the EU the place where excellence thrives from the lab to the market; ensuring that AI works for people and is a force for good in society; and building strategic leadership in high-impact sectors.

The EU’s risk-based approach: an outline

Despite the overwhelming benefits of AI and its anticipated impact on society, the technology carries risks to fundamental rights and user safety that could have negative consequences if left unchecked. The resulting legal uncertainty for companies, and a lack of trust among businesses and citizens, could significantly slow the widespread adoption of AI. In addition, separate regulatory regimes in individual Member States would fragment the internal market.

To address these issues, the Commission aims to build the first-ever legal framework for AI systems, which outlines the various levels of risk. The Commission states that "...new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI."

European Union Artificial Intelligence Regulation

Unacceptable risk

This category covers AI systems considered a clear threat to the safety and rights of EU citizens, which will be banned: from social scoring by governments to toys using voice assistance that encourages dangerous behaviour in children.

High risk

AI systems identified as high-risk include, for example, AI used in critical infrastructure, education, employment, essential services, law enforcement and the administration of justice; such systems will be subject to strict obligations before they can be put on the market. Note: the EC describes this risk category in considerably more detail than the others.

Limited risk

AI systems such as chatbots are subject to minimal transparency obligations, intended to allow those interacting with the content to make informed decisions. Users can then decide whether to continue using the application or to step back.

Minimal risk

Applications such as AI-enabled video games or spam filters can be used freely. The vast majority of AI systems fall into this category, where the new rules do not intervene because these systems represent only minimal or no risk to citizens’ rights or safety.
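The four tiers and their regulatory treatment can be summarised as a simple mapping. The sketch below is purely illustrative and not part of the proposed regulation: the category names follow the Commission’s outline above, while the RiskTier enum, the TREATMENT mapping and the wording of the obligation strings are assumptions introduced here for clarity.

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers in the Commission's proposed framework (illustrative model only)."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring by governments)
    HIGH = "high"                  # strict obligations before being put on the market
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # free use, no new obligations (e.g. spam filters)


# Hypothetical mapping of each tier to its regulatory treatment,
# paraphrasing the categories described in the text above.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "subject to strict obligations before market entry",
    RiskTier.LIMITED: "subject to transparency obligations",
    RiskTier.MINIMAL: "no additional obligations",
}

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {TREATMENT[tier]}")
```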
