EU AI Act for Builders
The EU AI Act is the world's first comprehensive AI regulation with binding legal force. If your AI product serves users in the EU, this law applies — regardless of where you are based. This page explains the risk tier framework, GPAI obligations (effective August 2025), and practical steps for compliance.
Enforcement Timeline
The EU AI Act entered into force on 1 August 2024, with obligations rolling in over a three-year implementation period:
| Date | What takes effect |
|---|---|
| February 2025 | Prohibited AI practices (unacceptable risk) — banned from this date |
| August 2025 | GPAI model obligations — technical documentation, copyright policy, systemic risk assessment |
| August 2026 | High-risk AI system obligations + EU AI Office enforcement powers for GPAI |
| August 2027 | GPAI models released before August 2025 must comply (grandfathering period ends) |
The Four Risk Tiers
Every AI system deployed in the EU must be classified into one of four risk tiers. Your obligations are determined by which tier your system falls into.
Unacceptable Risk — Prohibited
These practices are banned outright:
- Social scoring systems by public authorities
- Real-time biometric surveillance in public spaces (very limited exceptions)
- Subliminal or manipulative techniques that exploit vulnerabilities
- Emotion recognition in workplace and education contexts (with exceptions)
- AI systems that profile based on sensitive characteristics to predict criminal behaviour
High Risk — Stringent Requirements (August 2026)
High-risk systems require conformity assessment, technical documentation, human oversight implementation, and registration in the EU database before deployment. Categories include:
- Biometrics — Identification, categorisation, emotion recognition
- Critical infrastructure — Water, gas, electricity, traffic management
- Education — Admissions, student performance assessment, proctoring
- Employment — Recruitment, CV screening, performance evaluation, termination
- Essential services — Credit scoring, insurance assessment
- Law enforcement — Risk assessment, evidence analysis, profiling
- Migration & border control — Visa/asylum decisions, risk screening
- Justice & democracy — Court decisions, electoral influence
Limited Risk — Transparency Obligations Only
- Chatbots — Users must be informed they are interacting with an AI
- Deepfakes — AI-generated content must be labelled (with exceptions for art/satire)
- Emotion recognition / biometric categorisation — Must inform individuals
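The chatbot disclosure obligation above can be met with a notice shown at the start of a conversation. A minimal sketch, assuming a simple message-list chat representation; the wording and function names are illustrative, not prescribed by the Act:

```python
# Illustrative AI-disclosure helper. The Act requires that users be informed
# they are interacting with an AI; it does not prescribe exact wording.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_session(conversation: list[dict]) -> list[dict]:
    """Prepend a disclosure notice so it is the first thing a user sees."""
    return [{"role": "system_notice", "content": AI_DISCLOSURE}] + conversation

session = start_session([{"role": "user", "content": "Hello"}])
```

In practice the disclosure should be visible in the UI itself, not only in the message payload; this sketch shows only where it fits in the conversation flow.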
Minimal Risk — No Mandatory Requirements
AI-powered spam filters, recommendation systems, video games, general-purpose writing/coding assistants not in a regulated vertical — no mandatory obligations. Voluntary codes of conduct are encouraged.
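The four-tier triage above can be sketched as a simple decision helper. This is an illustrative simplification only: the category labels are shorthand invented here, and real classification requires reviewing the Act (especially Annex III) with legal counsel.

```python
# Simplified, illustrative risk-tier triage mirroring the decision order
# described above. Not a substitute for legal review of Annex III.
PROHIBITED = {"social_scoring", "realtime_public_biometrics",
              "subliminal_manipulation"}
HIGH_RISK = {"biometrics", "critical_infrastructure", "education",
             "employment", "essential_services", "law_enforcement",
             "migration", "justice"}
TRANSPARENCY = {"chatbot", "deepfake", "emotion_recognition"}

def risk_tier(use_case: str) -> str:
    """Return the strictest applicable tier for a shorthand use-case label."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in TRANSPARENCY:
        return "limited"
    return "minimal"
```

Note the ordering matters: a system is assessed against the strictest tier first, since a chatbot used for CV screening is high-risk, not merely limited-risk.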
GPAI Obligations (Effective August 2025)
General Purpose AI (GPAI) models — models like GPT-5, Claude, Gemini, and Llama that can be used across many tasks — have a separate obligation framework. Since August 2025, all GPAI providers serving the EU must comply.
Standard GPAI Obligations
- Technical documentation — Architecture, training approach, evaluation methods, capabilities, and limitations documented in sufficient detail for downstream integrators to understand
- Copyright compliance policy — Must document how training data copyright is handled; must comply with EU copyright law including opt-out mechanisms
- Transparency summary — Public summary of training data (types, sources, preprocessing) published and kept current
- Usage policy for integrators — Terms that downstream deployers can use to understand how to deploy the model compliantly
Systemic Risk GPAI — Enhanced Obligations
Models trained with more than 10²⁵ FLOPs (floating point operations) are designated as posing "systemic risk" and face additional requirements:
- Adversarial testing and red-teaming before and after deployment
- Incident reporting to EU AI Office within specified timeframes
- Cybersecurity measures commensurate with identified risks
- Energy efficiency and environmental impact reporting
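A common back-of-envelope estimate for training compute is roughly 6 FLOPs per parameter per training token, which lets you sanity-check a run against the 10²⁵ threshold. The approximation and the model sizes below are illustrative assumptions, not figures from the Act:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, the Act's systemic-risk presumption

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

# Hypothetical runs, both on 15 trillion tokens:
small = training_flops(70e9, 15e12)    # 70B params  -> 6.3e24, below threshold
large = training_flops(400e9, 15e12)   # 400B params -> 3.6e25, above threshold
```

The threshold is a presumption, not the only trigger: the EU AI Office can also designate models as systemic-risk based on capabilities or reach.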
Who is affected by GPAI rules
Primarily the major AI labs (OpenAI, Anthropic, Google, Meta). If you are using a GPAI model via API to build a product, you are a "deployer" — you benefit from the provider's compliance but still have separate obligations if your own system falls into a high-risk category.
High-Risk System Requirements (August 2026)
If your AI system falls into a high-risk category, you must comply before deploying to EU users. Requirements include:
- Conformity assessment — Self-assessment or third-party audit demonstrating compliance with all applicable requirements
- Technical documentation — Full documentation of the system's design, development, validation, and intended use
- Data governance — Training and testing data must meet quality criteria; bias risks must be identified and mitigated
- Logging and traceability — System must log activity to enable post-market monitoring and incident investigation
- Human oversight — Effective human control mechanisms; ability to override, stop, or reverse the system
- Accuracy and robustness — Validated performance across the intended use cases and population groups
- EU registration — Register in the EU AI database maintained by the EU AI Office
Enforcement and Fines
| Violation type | Maximum fine |
|---|---|
| Prohibited practices (unacceptable risk) | EUR 35M or 7% of global annual turnover |
| GPAI systemic risk obligations / other high-risk requirements | EUR 15M or 3% of global annual turnover |
| Incorrect or misleading information to authorities | EUR 7.5M or 1% of global annual turnover |
Enforcement is led by the EU AI Office (for GPAI models) and by National Competent Authorities designated in each EU Member State (for other AI systems). The EU AI Office's enforcement powers over GPAI take effect in August 2026.
Practical Steps for Builders
- Classify your system — Does your product's use case fall into any high-risk category? Review Annex III of the AI Act carefully, especially the employment, education, and essential services categories.
- Assess GPAI exposure — Are you providing a GPAI model (you need to comply from August 2025) or deploying one (check your provider's GPAI compliance documentation)?
- Document everything now — Even if your obligation date is August 2026, start documentation now. Retroactive documentation under enforcement pressure is significantly harder.
- Implement the transparency requirements early — Chatbot disclosure is a limited-risk obligation applying from August 2026, and it is cheap to implement now. Ensure all conversational AI interfaces notify users they are interacting with an AI.
- Assign a responsible role — Designate someone to track AI Act obligations and coordinate compliance across engineering, legal, and product teams.
Warning: The EU AI Act applies to any AI product used in the EU — not just products built by EU-based companies. A US startup selling an HR screening tool to a German employer faces high-risk obligations in full. Geographic origin of the provider does not affect applicability.
Checklist: Do You Understand This?
- Can you classify a given AI product into the correct EU AI Act risk tier?
- What GPAI obligations took effect in August 2025?
- What is the 10²⁵ FLOP threshold and what additional obligations does it trigger?
- What are the seven core requirements for high-risk AI systems?
- What is the maximum fine for violating prohibited AI practices?
- Does the EU AI Act apply to a US company selling to EU customers?