Key policy objectives
The EU continues to support AI development through strategic policy initiatives, particularly the Coordinated Plan on AI (2021). While the AI Act sets legally binding rules, the Coordinated Plan outlines steps to boost investment, encourage research, and coordinate national AI policies.
These goals are pursued through four key objectives:
- Set enabling conditions for AI's development and uptake
Creating scale and synergy across the EU requires robust governance, high-quality data infrastructure, and cross-border collaboration.
- Build strategic leadership in high-impact sectors
Seven sectors are prioritised: environment, health, robotics, public sector, home affairs, transport, and agriculture. Joint actions aim to position the EU as a leader in these domains.
- Make the EU the right place for AI to thrive
This includes support for world-class research, scalable deployment of AI solutions, and funding for startups and high-impact initiatives.
- Ensure AI technologies work for people
The EU promotes AI that is ethical, safe, sustainable, inclusive, and aligned with fundamental rights.
The EU AI Act: Europe's legal framework for artificial intelligence
Adopted in May 2024, the EU AI Act provides a harmonised legal framework for AI systems used in the EU, and for systems operated from outside the EU when they affect people in the EU. The regulation aims to ensure safety, trust, and legal certainty while enabling innovation.
Scope
- Applies to providers, deployers (users), importers, and distributors of AI systems.
- Covers entities both inside and outside the EU if their systems impact people within the EU.
Unacceptable risk
These are AI systems considered a clear threat to the safety, livelihoods, and rights of people. The regulation prohibits eight practices:
- AI-based manipulation or deception
- Exploitation of vulnerabilities
- Social scoring by public authorities or private actors
- Risk assessments or predictions of individual criminal behaviour
- Untargeted scraping of online or CCTV content for facial recognition databases
- Emotion recognition in work or education settings
- Biometric categorisation that deduces protected characteristics
- Real-time remote biometric identification by law enforcement in public spaces (with very limited exceptions)
High risk
These are AI systems that can significantly affect people's safety or fundamental rights. They are not banned, but may only be placed on the market if they comply with strict obligations. High-risk uses include:
- AI safety systems in critical infrastructure (e.g. transport)
- AI used in education to evaluate students or affect access to education
- AI embedded in medical devices or surgical tools
- AI used in hiring, performance evaluation, or access to employment
- AI that assesses eligibility for essential services like credit or housing
- Biometric identification, categorisation, and emotion recognition systems
- AI tools in law enforcement that assess evidence or predict criminal activity
- AI used in border control and migration decisions
- AI assisting in judicial decisions or legal interpretations
These high-risk systems must meet strict requirements before being placed on the market, including:
- Comprehensive risk assessment and mitigation processes
- Use of high-quality, non-biased datasets
- Logging capabilities to ensure traceability (see the sketch after this list)
- Clear documentation of system purpose and functioning
- Transparent communication to users or deployers
- Human oversight procedures
- Proven robustness, accuracy, and cybersecurity
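To make the logging and human-oversight requirements more concrete, here is a minimal sketch in Python. The scoring scenario, model identifier, and review threshold are all assumptions for illustration; the Act demands traceability and oversight but does not prescribe any particular format or tooling.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: every decision is written to an append-only
# file with inputs, output, model version, and timestamp for traceability.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions.log"))

MODEL_VERSION = "credit-scorer-1.4.2"  # illustrative identifier

def record_decision(features: dict, score: float) -> dict:
    """Log a model decision and flag borderline cases for human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "inputs": features,
        "score": score,
        # Assumed oversight rule: scores near the decision boundary are
        # routed to a human reviewer instead of being fully automated.
        "needs_human_review": 0.45 <= score <= 0.55,
    }
    audit_log.info(json.dumps(record))
    return record

# record_decision({"income": 42000, "tenure_years": 3}, score=0.52)
```

The point is the shape of the record: a deployer can later reconstruct what the system saw and decided, which is what the traceability obligation is meant to enable.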
Limited risk
AI systems in this category raise transparency concerns but do not pose major threats. Examples include chatbots and recommendation systems. These systems must inform users that they are interacting with AI (a minimal sketch follows the list below). Providers of generative AI must ensure that:
- AI-generated content is labelled as such
- Deepfakes and synthetic content intended for public communication are clearly marked
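As an illustration of the disclosure duty, here is a minimal sketch of a chat loop that announces its AI nature before the first exchange. The `generate_reply` callable is a hypothetical stand-in for any model backend; the Act requires the disclosure, not this particular design.

```python
from typing import Callable

AI_DISCLOSURE = "Note: you are chatting with an automated AI assistant."

def run_chat(generate_reply: Callable[[str], str]) -> None:
    """Run a console chat session that discloses its AI nature up front."""
    print(AI_DISCLOSURE)  # transparency notice shown before any exchange
    while True:
        user_msg = input("> ").strip()
        if not user_msg:  # an empty line ends the session
            break
        print(generate_reply(user_msg))

# Example with a stand-in reply function:
# run_chat(lambda msg: f"(echo) {msg}")
```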
Minimal risk
These are systems with low or no impact on people’s rights and safety. Most AI applications fall into this category, such as video games and spam filters. The AI Act does not impose specific obligations on this category.
Transparency for generative AI
Providers of generative AI systems must:
- Label AI-generated content as such (a minimal sketch follows this list)
- Publish summaries of training data
- Implement safeguards against generating illegal content
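To make the labelling obligation concrete, here is a minimal sketch that wraps generated text in a machine-readable provenance record. The schema and the model name are assumptions for illustration; the Act requires that AI-generated content be identifiable, but does not mandate this format.

```python
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Attach a machine-readable provenance label to generated text."""
    return {
        "content": text,
        "provenance": {
            "generated_by_ai": True,  # the disclosure itself
            "generator": model_name,  # e.g. an internal model identifier
            "created_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labelled = label_generated_content("Sample output.", "example-llm")
print(json.dumps(labelled, indent=2))
```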