Free tool
EU AI Act Risk Classifier
Twelve questions, one tier. The classifier walks through Article 5 prohibitions first, then Article 6 + Annex III high-risk uses (including the Article 6(3) derogation), then Article 50 transparency obligations, and finally the GPAI and systemic-risk thresholds. The output is a downloadable markdown report citing the exact article and annex that determined the result.
How the classifier works
The EU AI Act establishes a four-tier risk pyramid plus a separate track for general-purpose AI models (GPAI). The classifier evaluates your inputs against the cascade in the order the regulation itself uses, so the first match wins.
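As a sketch, the whole cascade reads as a single first-match-wins function. The boolean keys below are hypothetical stand-ins for the answers the twelve questions produce, not the classifier's actual schema (the GPAI track, covered in step 4, runs in parallel):

```python
# Minimal sketch of the first-match-wins cascade, assuming the user's
# answers arrive as a dict of booleans. All key names are hypothetical.
def classify(a: dict[str, bool]) -> str:
    if a.get("article_5_match"):        # step 1: prohibited practice
        return "prohibited"
    if a.get("annex_iii_match") and not a.get("art_6_3_derogation"):
        return "high-risk"              # step 2: Article 6 + Annex III
    if a.get("article_50_trigger"):     # step 3: transparency duty
        return "limited-risk"
    return "minimal-risk"               # step 5: default tier

print(classify({"annex_iii_match": True}))  # -> high-risk
```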
1. Article 5 — prohibited practices
Article 5 of Regulation (EU) 2024/1689 lists eight categories of prohibited AI: subliminal or manipulative techniques, exploitation of vulnerabilities, social scoring (by public and private actors alike), predictive policing of individuals based solely on profiling, untargeted scraping of facial images to build facial-recognition databases, emotion inference in workplaces and schools, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces by law enforcement. These prohibitions took effect on 2 February 2025.
If your system falls under any of these, the classifier returns "prohibited" and stops — no further analysis is needed because the system cannot be placed on the EU market.
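One way to picture that short-circuit: the eight categories keyed by their Article 5(1) points, where any single confirmed match ends the analysis. The flag format is illustrative, not the tool's internals:

```python
# The eight Article 5(1) prohibitions, keyed by point letter.
ARTICLE_5_PRACTICES = {
    "5(1)(a)": "subliminal or manipulative techniques",
    "5(1)(b)": "exploitation of vulnerabilities",
    "5(1)(c)": "social scoring",
    "5(1)(d)": "predictive policing based solely on profiling",
    "5(1)(e)": "untargeted scraping of facial images",
    "5(1)(f)": "emotion inference at work or in education",
    "5(1)(g)": "biometric categorisation of sensitive attributes",
    "5(1)(h)": "real-time remote biometric ID by law enforcement",
}

def prohibited(confirmed: set[str]) -> bool:
    # 'confirmed' holds the points the user answered yes to
    return any(point in ARTICLE_5_PRACTICES for point in confirmed)

print(prohibited({"5(1)(c)"}))  # -> True: stop, no further analysis
```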
2. Article 6 + Annex III — high-risk
Article 6(1) covers systems used as safety components of products already regulated under EU harmonisation law (Annex I). Article 6(2) and Annex III cover eight standalone high-risk areas: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential services and benefits, law enforcement, migration and border control, and administration of justice and democratic processes.
Article 6(3), added during the trilogue, provides a derogation: a system listed in Annex III is not high-risk if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task. A system that profiles natural persons remains high-risk regardless. The classifier asks about these conditions and applies the derogation only when the user confirms at least one of them and rules out profiling, as sketched below.
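A minimal sketch of that filter, assuming boolean answers (the key names are invented for illustration): any one condition lifts the system out of high-risk, unless it profiles natural persons.

```python
# Sketch of the Article 6(3) derogation check. Key names are hypothetical.
def derogation_applies(a: dict[str, bool]) -> bool:
    if a.get("profiles_natural_persons"):       # always high-risk
        return False
    return any([
        a.get("narrow_procedural_task"),        # 6(3)(a)
        a.get("improves_prior_human_activity"), # 6(3)(b)
        a.get("detects_patterns_only"),         # 6(3)(c)
        a.get("preparatory_task"),              # 6(3)(d)
    ])
```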
High-risk classification triggers a long obligations list: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy and robustness (Art. 15), conformity assessment (Art. 43), CE marking (Art. 48), registration in the EU database (Art. 49), and — for deployers of certain categories — a Fundamental Rights Impact Assessment under Article 27.
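The report cites each obligation by article. One plausible way to encode that, shown here as an illustrative lookup rather than the tool's actual data model:

```python
# Illustrative article-to-obligation mapping used to render the
# report's obligations section. Not the tool's real schema.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9":  "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation",
    "Art. 12": "record-keeping",
    "Art. 13": "transparency and information for deployers",
    "Art. 14": "human oversight",
    "Art. 15": "accuracy, robustness and cybersecurity",
    "Art. 43": "conformity assessment",
    "Art. 48": "CE marking",
    "Art. 49": "registration in the EU database",
    "Art. 27": "fundamental rights impact assessment (certain deployers)",
}

def obligations_section() -> str:
    return "\n".join(f"- {art}: {duty}"
                     for art, duty in HIGH_RISK_OBLIGATIONS.items())
```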
3. Article 50 — transparency obligations
Even if a system is not high-risk, Article 50 imposes transparency duties. Providers of AI systems that interact directly with natural persons must ensure users are informed they are interacting with AI (Art. 50(1)). Providers of synthetic-content generators must mark output as artificially generated in a machine-readable format (Art. 50(2)). Deployers of emotion-recognition or biometric-categorisation systems must inform affected persons (Art. 50(3)), and deployers who generate or manipulate deep fakes must disclose that the content is artificial (Art. 50(4)). The classifier flags these as "limited-risk" and cites the specific paragraph that applies.
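A sketch of how those triggers could map to the cited paragraph; the flag names are hypothetical question IDs, not the classifier's internals:

```python
# Illustrative mapping from transparency trigger to Article 50 paragraph.
ARTICLE_50_PARAGRAPHS = {
    "interacts_with_humans": "Art. 50(1): provider must disclose the AI",
    "generates_synthetic":   "Art. 50(2): provider must mark the output",
    "emotion_or_biometric":  "Art. 50(3): deployer must inform persons",
    "deep_fake":             "Art. 50(4): deployer must disclose deep fake",
}

def transparency_duties(flags: set[str]) -> list[str]:
    return [ARTICLE_50_PARAGRAPHS[f]
            for f in flags & ARTICLE_50_PARAGRAPHS.keys()]

print(transparency_duties({"deep_fake"}))  # -> ['Art. 50(4): ...']
```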
4. GPAI and systemic-risk GPAI
General-purpose AI models — those trained on broad data at scale and capable of competently performing a wide range of tasks — sit on a separate track. All GPAI providers must publish a sufficiently detailed summary of training content, put in place a policy to comply with EU copyright law including the text-and-data-mining opt-out, and supply technical documentation to downstream providers who integrate the model (Art. 53). Models with systemic risk — currently presumed where training compute exceeds 10²⁵ floating-point operations — face additional obligations under Article 55, including model evaluation, systemic-risk assessment, adversarial testing, and serious-incident reporting to the AI Office.
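The systemic-risk presumption reduces to a compute comparison. A minimal sketch, with illustrative names:

```python
# Sketch of the GPAI track. Art. 51(2) presumes systemic risk above
# 10^25 FLOPs of training compute; function and variable names are
# illustrative, not the tool's implementation.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_obligations(training_flops: float) -> list[str]:
    duties = ["Art. 53: documentation, copyright policy, training summary"]
    if training_flops > SYSTEMIC_RISK_FLOPS:
        duties.append("Art. 55: evaluation, adversarial testing, reporting")
    return duties

print(gpai_obligations(5e25))  # both Art. 53 and Art. 55 apply
```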
5. Minimal risk
Everything that survives the cascade above lands in minimal risk. The Act imposes no specific obligations on this tier beyond voluntary codes of conduct (Art. 95). Spam filters, AI-enabled video games, and inventory-optimisation models typically end up here.
When the classifier is not enough
The classifier produces a defensible first-pass classification suitable for internal triage and vendor scoping. It is not legal advice. Borderline cases — particularly around the Annex III biometrics carve-out, the Article 6(3) derogation, and whether a foundation model meets the systemic-risk threshold under Article 51 — should be reviewed with counsel. The European Commission's February 2025 guidelines on the definition of an AI system and on prohibited practices (see Sources) should be consulted for marginal cases; dedicated guidelines on Article 6 high-risk classification are due from the Commission by February 2026.
What to do next
If the classifier returns high-risk, your next two artefacts are a fundamental rights impact assessment and a risk register. We have free tools for both:
- FRIA generator — produces a draft Article 27 assessment.
- AI risk register — pre-populated against NIST AI RMF, ISO 42001, OWASP LLM Top 10, and EU AI Act Article 9.
And once you know your tier, the AI Compliance Vendors matchmaker will rank vendors that document coverage of the obligations your tier triggers.
Sources
- Regulation (EU) 2024/1689 (consolidated text) — eur-lex.europa.eu
- Commission guidelines on the definition of an AI system (Feb 2025) — digital-strategy.ec.europa.eu
- Commission guidelines on prohibited practices (Feb 2025) — digital-strategy.ec.europa.eu
- EU AI Act compliance timeline — AI Act Service Desk — artificialintelligenceact.eu