AI Compliance Vendors

EU AI Act high-risk AI systems: scope and obligations

High-risk AI systems under the EU AI Act face the heaviest obligations — risk management, data governance, technical documentation, human oversight, conformity assessment — with most rules applying from 2 August 2026.

Last updated April 27, 2026 · Every fact traceable to a public source

The EU AI Act creates a tiered, risk-based regime. Below the outright prohibitions, "high-risk" is the most heavily regulated tier. Article 6 defines two routes into high-risk: (1) AI systems that are safety components of products covered by the EU sectoral legislation listed in Annex I (e.g. medical devices, machinery, toys, vehicles) and subject to third-party conformity assessment, or (2) AI systems used in the eight areas listed in Annex III.

What are the eight Annex III areas?

1. Biometrics
2. Critical infrastructure (water, gas, electricity, traffic)
3. Education and vocational training (admission, evaluation, monitoring)
4. Employment and worker management (recruitment, performance evaluation, task allocation)
5. Access to essential private services and essential public services (creditworthiness, public-benefit eligibility, emergency triage, life and health insurance)
6. Law enforcement (risk assessments, polygraph-like tools, evidence reliability)
7. Migration, asylum, and border control
8. Administration of justice and democratic processes

What does Article 6(3) "filter" do?

Article 6(3) adds a filter: an AI system in an Annex III area is NOT high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Meeting any one of four conditions can lift a system out of high-risk: (a) it performs a narrow procedural task; (b) it improves the result of a previously completed human activity; (c) it detects decision-making patterns or deviations from prior patterns, without replacing or influencing the human assessment; or (d) it performs a preparatory task to an assessment relevant for Annex III purposes. A system that profiles natural persons always remains high-risk.
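The decision flow above can be sketched as a small function. This is an illustrative simplification, not a legal decision procedure: the class, field names, and boolean inputs are assumptions for the sketch, and each condition in practice requires legal analysis rather than a flag.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Illustrative inputs to the Article 6(3) filter (names are my own)."""
    performs_profiling: bool          # profiling of natural persons
    narrow_procedural_task: bool      # condition (a)
    improves_prior_human_work: bool   # condition (b)
    detects_patterns_only: bool       # condition (c)
    preparatory_task_only: bool       # condition (d)

def stays_high_risk(system: AnnexIIISystem) -> bool:
    """Return True if an Annex III system remains high-risk under Art. 6(3)."""
    if system.performs_profiling:
        return True  # profiling of natural persons is always high-risk
    # Any one of the four conditions lifts the system out of high-risk
    derogation_applies = (
        system.narrow_procedural_task
        or system.improves_prior_human_work
        or system.detects_patterns_only
        or system.preparatory_task_only
    )
    return not derogation_applies
```

For example, a recruitment tool that profiles candidates stays high-risk even if it also performs only a preparatory task.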

What are the obligations for providers of high-risk systems?

Providers must put in place:

- a risk management system across the lifecycle (Article 9);
- data and data governance for training, validation, and testing sets (Article 10);
- technical documentation per Annex IV (Article 11);
- record-keeping and automatic logging (Article 12);
- transparency and instructions for use (Article 13);
- human oversight by design (Article 14);
- accuracy, robustness, and cybersecurity (Article 15);
- a quality management system (Article 17);
- conformity assessment and CE marking (Articles 43–48);
- registration in the EU database (Article 49);
- post-market monitoring (Article 72).
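For compliance tracking, the provider obligations above can be restated as a machine-readable checklist. The mapping below only repeats the article numbers cited in this section; the key names are my own shorthand, not terms from the Act.

```python
# Provider obligations for high-risk AI systems, keyed by an informal
# shorthand name (assumption) and mapped to the article cited in the text.
PROVIDER_OBLIGATIONS: dict[str, str] = {
    "risk_management_system": "Article 9",
    "data_and_data_governance": "Article 10",
    "technical_documentation": "Article 11 / Annex IV",
    "record_keeping_logging": "Article 12",
    "transparency_instructions_for_use": "Article 13",
    "human_oversight": "Article 14",
    "accuracy_robustness_cybersecurity": "Article 15",
    "quality_management_system": "Article 17",
    "conformity_assessment_ce_marking": "Articles 43–48",
    "eu_database_registration": "Article 49",
    "post_market_monitoring": "Article 72",
}
```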

What about deployers?

Deployers of high-risk AI systems have lighter but real obligations under Article 26: use the system per the provider’s instructions, ensure relevant input data is appropriate for the intended purpose, monitor operation and report incidents, retain logs, and — for public bodies, private entities providing public services, and deployers of certain credit-scoring and insurance systems — conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27.

When do high-risk obligations apply?

Most high-risk obligations apply from 2 August 2026. For high-risk systems that are safety components of Annex I products (medical devices, machinery, etc.), the rules apply from 2 August 2027 to align with sectoral conformity-assessment cycles. Systems already on the market before 2 August 2026 only need to comply if they undergo "significant changes" after that date.
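The timeline above reduces to a small piece of date logic. This is a simplified sketch of the rules stated in this section (function and parameter names are my own); it ignores edge cases the Act handles elsewhere.

```python
from datetime import date

ANNEX_III_DATE = date(2026, 8, 2)  # most high-risk obligations apply
ANNEX_I_DATE = date(2027, 8, 2)    # safety components of Annex I products

def applicability_date(is_annex_i_safety_component: bool) -> date:
    """Date from which high-risk obligations apply (simplified)."""
    return ANNEX_I_DATE if is_annex_i_safety_component else ANNEX_III_DATE

def legacy_system_must_comply(placed_on_market: date,
                              significantly_changed_after: bool) -> bool:
    """Systems on the market before 2 Aug 2026 are caught only if they
    undergo significant changes after that date (simplified)."""
    if placed_on_market >= ANNEX_III_DATE:
        return True
    return significantly_changed_after
```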

Editorial independence

This FAQ is editorial. No vendor can pay to be highlighted or ranked in answers, and the written commentary on this page is payment-free. Featured slots in directory listings are always labeled where they appear. Read our methodology for details.