What is the EU AI Act and why it matters
The EU AI Act is the world’s first horizontal framework for AI – a risk-based regulation that governs the development, placing on the market, and use of AI systems across the EU. It applies extraterritorially where AI outputs are used in the Union, affecting providers, deployers, importers, distributors, and manufacturers. The law entered into force on 1 August 2024 following publication in the Official Journal on 12 July 2024 – marking the start of a staged application through 2027.
Who must comply – and where
The Act covers:
- Providers placing AI systems or general-purpose AI (GPAI) models on the EU market.
- Deployers established in the EU or whose AI outputs are used in the EU.
- Importers, distributors, product manufacturers, and authorised representatives.
Scope extends to non-EU businesses when AI outputs are used in the EU – and non‑EU GPAI and high‑risk providers must appoint an EU representative.
Key EU AI Act compliance dates – 2025 to 2027
- 2 February 2025 – Prohibited AI practices ban starts (6 months after entry into force).
- 2 August 2025 – GPAI model rules and most governance and penalty provisions apply; codes of practice become available to support compliance.
- 2 August 2026 – General application begins; Member States’ regulatory sandboxes must be operational.
- 2 August 2027 – Safety‑component high‑risk systems (Annex I) and end of the transition for GPAI models placed on the market before 2 August 2025; medical devices and other sectoral safety products fall here (a date‑check sketch follows this list).
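To keep these milestones visible inside engineering and compliance workflows, a minimal sketch like the one below (a hypothetical helper, not an official tool) can flag which of the obligations above already apply on a given date.

```python
from datetime import date

# Hypothetical helper, not an official tool: maps the staged application
# dates above to short labels so an inventory script can flag which
# obligations already apply on a given day.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk practices apply",
    date(2025, 8, 2): "GPAI model rules and most governance/penalty provisions apply",
    date(2026, 8, 2): "General application; national regulatory sandboxes operational",
    date(2027, 8, 2): "Annex I safety-component high-risk systems; end of transition for earlier GPAI models",
}

def milestones_in_force(as_of: date | None = None) -> list[str]:
    """Return the milestones already applicable on `as_of` (default: today)."""
    as_of = as_of or date.today()
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= as_of]

if __name__ == "__main__":
    for label in milestones_in_force(date(2026, 1, 1)):
        print(label)
```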
What is prohibited – the “unacceptable risk” bar
The Act bans AI practices that materially distort behavior, exploit vulnerabilities, enable social scoring, or use real‑time remote biometric identification in publicly accessible spaces – subject to narrow law‑enforcement carve‑outs. Under the Act's separate transparency rules, deepfake content must be labeled as AI‑generated or manipulated, with limited exceptions for evidently artistic, creative, or satirical work. Violations of the prohibitions carry the heaviest penalties.
High‑risk AI obligations – what you must build and prove
High‑risk AI includes systems used as safety components of regulated products (Annex I) and systems in the eight Annex III domains (e.g., recruitment, education, essential services, law enforcement). Providers and deployers face obligations across data governance, technical documentation, logs, robustness, human oversight, transparency, CE marking, registration, post‑market monitoring, and incident reporting. Certain deployers – public bodies, private operators providing public services, and some credit and insurance use cases – must perform a Fundamental Rights Impact Assessment before first use of high‑risk AI.
GPAI obligations from August 2025 – transparency, copyright, and systemic risk
From 2 August 2025, GPAI providers must implement transparency and copyright compliance, publish sufficiently detailed training data summaries, and furnish documentation to downstream providers. A voluntary General‑Purpose AI Code of Practice – published 10 July 2025 and endorsed as an adequate tool – helps providers demonstrate compliance now, with additional practices for models with systemic risk. Major model providers have begun to sign on.
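As an illustration only, the sketch below shows the kind of fields a provider might draft internally when preparing a training data summary. It is not the official AI Office template, and every field name is an assumption for drafting purposes.

```python
# Illustrative only: a hypothetical internal structure for drafting the public
# training data summary a GPAI provider must publish. Field names are
# assumptions, not the official AI Office template.
training_data_summary = {
    "model_name": "example-gpai-v1",          # hypothetical model identifier
    "provider": "Example AI Ltd",             # hypothetical provider
    "data_sources": [
        {"category": "publicly available web data", "period": "2019-2024"},
        {"category": "licensed datasets", "licensor_types": ["publishers", "data vendors"]},
        {"category": "user-generated content", "opt_out_honoured": True},
    ],
    "copyright_policy_url": "https://example.com/copyright-policy",  # placeholder URL
    "tdm_opt_out_respected": True,            # text-and-data-mining reservations honoured
}
```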
Fines and enforcement – the cost of non‑compliance
- Prohibited practices – up to €35M or 7% of global turnover, whichever is higher.
- Other operator obligations (providers, deployers, distributors, etc.) – up to €15M or 3%.
- Supplying incorrect or misleading information – up to €7.5M or 1%.
- GPAI providers – up to €15M or 3% for specific GPAI breaches. SMEs face “lower‑of” caps – see the exposure sketch after this list.
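For budgeting purposes, the exposure arithmetic is simple: the cap is the higher of the fixed amount and the turnover percentage, and the lower of the two for SMEs. A minimal sketch, assuming euro figures and annual worldwide turnover:

```python
# Back-of-the-envelope sketch of maximum fine exposure under the tiers above.
# Standard entities face the higher of the fixed cap and the turnover
# percentage; SMEs face the lower of the two. All figures in euros.
def max_fine(annual_turnover_eur: float, fixed_cap_eur: float,
             turnover_pct: float, is_sme: bool = False) -> float:
    pct_cap = annual_turnover_eur * turnover_pct
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Example: EUR 2bn global turnover breaching a prohibition -> 7% outweighs EUR 35M.
exposure = max_fine(2_000_000_000, 35_000_000, 0.07)              # 140,000,000
# Example: an SME with EUR 5M turnover -> the lower of the two caps applies.
sme_exposure = max_fine(5_000_000, 35_000_000, 0.07, is_sme=True)  # 350,000
print(f"{exposure:,.0f} / {sme_exposure:,.0f}")
```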
Governance – who will supervise you
An EU AI Office coordinates EU‑level oversight, issues guidance, and steers GPAI codes of practice. Member States must designate national competent and market surveillance authorities and set up at least one operational regulatory sandbox by August 2026. Expect coordination with existing product‑safety, privacy, and cybersecurity regulators.
Compliance checklist – foundational moves for 2026
- Classify systems by risk tier – prohibited, high‑risk (Annex III or safety components), limited risk, minimal risk; a triage sketch follows this checklist.
- Build an AI risk management system – integrate data governance, model lifecycle controls, human oversight, logging, and post‑market monitoring.
- Prepare technical documentation – align to Annex IV for high‑risk; maintain logs; ready CE marking and registration where applicable.
- Run assessments – FRIA for high‑risk deployers; security and robustness testing; bias and performance evaluation per intended use.
- Stand up governance – assign accountable owners, create incident reporting pathways, train staff on transparency and deepfake labeling.
- GPAI providers – adopt the GPAI Code of Practice, publish a training data summary, and prepare systemic‑risk controls if relevant.
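As a starting point for the classification step above, a rough triage routine can sort systems into tiers before legal review. The sketch below is deliberately simplified and is not a substitute for assessing the Act's definitions, Annex I, and Annex III with counsel.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"   # transparency duties (e.g., chatbots, synthetic media)
    MINIMAL_RISK = "minimal-risk"

# Simplified triage sketch, not legal advice: the real classification turns on
# the Act's definitions, Annex I, Annex III, and case-by-case legal review.
def triage(is_prohibited_practice: bool,
           is_annex_iii_use_case: bool,
           is_safety_component: bool,
           interacts_with_humans_or_generates_content: bool) -> RiskTier:
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_annex_iii_use_case or is_safety_component:
        return RiskTier.HIGH_RISK
    if interacts_with_humans_or_generates_content:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

# Example: a recruitment screening tool (Annex III employment use case).
print(triage(False, True, False, True))  # RiskTier.HIGH_RISK
```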
A practical 90‑day plan – action before audit
- Map AI systems and vendors – inventory all models, uses, and data flows; tag EU usage and outputs (an inventory sketch follows this plan).
- Risk‑classify – determine prohibited/high‑risk status; document rationale and mitigations.
- Policy and process – codify AI lifecycle controls, incident processes, and labeling rules for synthetic media.
- Documentation sprint – assemble technical files, model cards, and deployment playbooks; set up log retention.
- FRIA/DPIA alignment – run FRIAs for high‑risk deployments and align with GDPR DPIAs to avoid gaps.
- GPAI pathway – sign the GPAI Code of Practice, publish training data summary, and define downstream information sharing.
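A lightweight inventory record is usually enough to start the mapping step. The sketch below uses assumed field names; adapt it to your own registers and vendor questionnaires.

```python
from dataclasses import dataclass, field

# Minimal inventory record sketch for the mapping step above; field names are
# assumptions, not a prescribed schema.
@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable business owner
    vendor: str | None               # third-party provider, if any
    intended_purpose: str
    data_flows: list[str] = field(default_factory=list)
    used_in_eu: bool = False         # are outputs used in the Union?
    risk_tier: str = "unclassified"  # filled in during the risk-classification step
    fria_required: bool = False      # public-sector / public-service deployments

inventory = [
    AISystemRecord(
        name="cv-screening-model",
        owner="HR Operations",
        vendor="ExampleVendor GmbH",   # hypothetical vendor
        intended_purpose="Shortlisting job applicants",
        data_flows=["ATS -> model", "model -> recruiter dashboard"],
        used_in_eu=True,
    ),
]
```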
Cross‑regulatory alignment – data, security, and ops
For regulated markets, align AI Act controls with GDPR (lawful basis, DPIA, data minimisation), DORA (ICT risk and resilience for financial entities), NIS2 (essential/important entities’ cybersecurity), and sectoral product‑safety regimes. Harmonising evidence across frameworks reduces audit friction and duplicate work.
Role‑based obligations at a glance
| Role | Top obligations | When it applies |
|---|---|---|
| Provider (high‑risk) | QMS, risk mgmt, quality data, tech docs, logs, robustness, human oversight, CE marking, registration, post‑market monitoring | General from 2 Aug 2026; safety‑component systems by 2 Aug 2027 |
| Deployer (high‑risk) | Follow provider instructions, competent human oversight, input data controls, logs (if controlled), incident reporting, FRIA | General from 2 Aug 2026 |
| Importer/Distributor | Verify compliance, documentation, traceability, and cooperate with authorities | General from 2 Aug 2026 |
| GPAI Provider | Transparency, training data summary, copyright policy, documentation to downstream; systemic‑risk evaluations where applicable; leverage CoP | From 2 Aug 2025; CoP published 10 July 2025 |
For public sector buyers and GovTech vendors
Public buyers should embed EU AI Act requirements in tenders, including FRIA deliverables, human oversight assurances, synthetic media labeling, and audit‑ready documentation. Use national regulatory sandboxes for innovative use cases – they must be operational by August 2026 – to de‑risk pilots and align with authorities early.
FAQs
Is a standard chatbot “high‑risk”?
Most customer‑support chatbots are not high‑risk unless used in Annex III contexts such as access to essential services – but they still face transparency duties (informing users that they are interacting with AI) and labeling rules for synthetic content.
Do we need a CE mark for all AI?
CE marking applies to high‑risk AI under sectoral product‑safety regimes and certain Annex III use cases after conformity assessment – not to every AI system.
What if we’re outside the EU?
If your AI system’s outputs are used in the EU, the Act can still apply. GPAI and high‑risk providers outside the EU must appoint an EU representative.
Key takeaways
- 2025 brings enforceable bans on prohibited practices and live duties for GPAI providers – with heavy fines for breaches.
- 2026–2027 is when most high‑risk obligations bite – plan documentation, assessments, and governance now to avoid rushed remediation.
- The GPAI Code of Practice is the fastest path to demonstrate compliance on transparency, copyright, and systemic risk – adopt it early.
- Penalties can exceed GDPR – up to €35M or 7% of global turnover for the worst violations – so executive ownership and budget are non‑negotiable.