The EU AI Act Compliance Checklist for 2026

A practical 20-item checklist covering risk classification, GPAI obligations, high-risk system requirements, conformity assessment, and fines under the EU AI Act.

By ACV Editorial · April 22, 2026 · 11 min read · Last reviewed April 22, 2026

The EU AI Act entered into force on 1 August 2024, and the compliance calendar is now running in earnest. Prohibited practices became enforceable on 2 February 2025. General-purpose AI (GPAI) obligations went live on 2 August 2025. And the date that matters most to the majority of enterprise AI deployments — full application for high-risk systems under Annex III — arrives on 2 August 2026.

If you are responsible for AI governance at a company that develops, deploys, or imports AI systems touching the EU market, that deadline is not a distant abstraction. Conformity assessments require documentation that takes months to assemble. Technical files must predate market placement. Post-market monitoring systems need to be operational before go-live, not after.

This checklist is designed for practitioners, not policy observers. It works through the four risk tiers, the GPAI obligations already in force, the 2026 high-risk requirements, and the documentation and conformity assessment routes that determine whether you are ready.

Understanding the Four Risk Tiers

The EU AI Act uses a risk-based architecture. Compliance obligations scale with assessed risk, and the classification you assign your system determines virtually everything that follows — which requirements apply, which conformity route is available, and what penalties attach to non-compliance.

Unacceptable Risk (Prohibited)

Article 5 bans eight AI practices outright. As of 2 February 2025, deploying any of these is illegal across the EU, regardless of safeguards:

  • Subliminal manipulation that harms individuals
  • Exploitation of vulnerabilities (age, disability, social circumstances)
  • Social scoring by public or private entities
  • AI-based criminal risk assessment from profiling alone
  • Untargeted biometric data scraping to build facial recognition databases
  • Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
  • Biometric categorisation to infer protected characteristics (race, political opinion, sexual orientation)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)

Violations carry the Act's maximum penalty: €35 million or 7% of global annual turnover, whichever is higher.

High-Risk (Annex III)

High-risk systems are permitted but subject to the Act's most substantial obligations. The classification is determined by use case, not technical sophistication. Annex III covers AI systems deployed in:

  • Critical infrastructure (energy, water, transport)
  • Educational and vocational training
  • Employment, worker management, and access to self-employment
  • Access to essential private and public services, including credit scoring
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice and democratic processes
  • Biometric identification and categorisation (where not prohibited)

Note that Article 6(3) creates a narrow exception: Annex III systems that demonstrably do not pose a significant risk of harm to health, safety, or fundamental rights may escape high-risk classification — but this must be documented and notified to the relevant authority. Non-compliance with high-risk requirements carries penalties of €15 million or 3% of global turnover.

Limited Risk

Limited-risk systems carry transparency obligations only. Chatbots must disclose that users are interacting with an AI. Systems generating synthetic content must label it. Emotion recognition and biometric categorisation systems must notify users. No conformity assessment is required.

Minimal Risk

The large majority of AI systems — spam filters, AI in video games, recommendation engines for content — fall here. No mandatory obligations apply beyond the general AI literacy requirement under Article 4. Voluntary codes of conduct are encouraged. Fines apply only for providing false information to authorities (€7.5 million or 1% of turnover).
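
As a first-pass triage aid, the four-tier logic above can be sketched in a few lines of Python. The category names, flags, and return labels below are our own illustrative shorthand; actual classification requires legal analysis of Annex I and Annex III:

```python
# Illustrative first-pass risk-tier triage under the EU AI Act.
# Category names and flags are simplified assumptions for sketching;
# real classification requires legal review of Annex I and Annex III.

ANNEX_III_USE_CASES = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
    "justice", "biometric_identification",
}

def classify(use_case: str, prohibited: bool = False,
             annex_i_product: bool = False,
             significant_risk: bool = True) -> str:
    """Return a preliminary risk tier for triage purposes only."""
    if prohibited:                       # Article 5 practice: banned outright
        return "unacceptable"
    if annex_i_product:                  # embedded in an Annex I regulated product
        return "high"
    if use_case in ANNEX_III_USE_CASES:
        # Article 6(3): a documented, notified exception may apply where there
        # is no significant risk to health, safety, or fundamental rights
        return "high" if significant_risk else "not-high (document & notify)"
    return "limited-or-minimal"

print(classify("employment"))                          # high
print(classify("employment", significant_risk=False))  # not-high (document & notify)
print(classify("spam_filter"))                         # limited-or-minimal
```

Items 1 to 3 of the checklist later in this article amount to running every system in the inventory through a test of this shape and recording the evidence behind each flag.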

GPAI Obligations: Already In Force Since August 2025

If your organisation develops or deploys a general-purpose AI model — meaning a model trained on broad data, capable of a wide range of tasks, and made available to third parties — the GPAI framework under Chapter V has applied since 2 August 2025.

All GPAI model providers must:

  • Maintain technical documentation of training data, methodology, and evaluation results
  • Publish a summary of training data content for copyright compliance purposes
  • Comply with EU copyright law, including the text and data mining opt-out mechanism
  • Cooperate with the AI Office in providing information on request

Providers of GPAI models with systemic risk — defined as models trained with more than 10^25 FLOPs, or designated by the European Commission — face additional obligations:

  • Conduct adversarial testing (red-teaming) before market placement
  • Report serious incidents and corrective measures to the AI Office without undue delay
  • Implement cybersecurity measures proportionate to the risk
  • Report on energy efficiency
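
The compute trigger for the systemic-risk presumption is a bright-line numeric test and trivial to check; the function name below is our own:

```python
# Presumption of systemic risk for GPAI models: cumulative training compute
# above 10^25 floating-point operations, or designation by the Commission.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float,
                           commission_designated: bool = False) -> bool:
    """True if the model falls under the additional systemic-risk obligations."""
    return commission_designated or training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(5e25))   # True: above the compute threshold
print(presumed_systemic_risk(3e24))   # False: below threshold, not designated
```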

The AI Office, operating within the European Commission's structure, is the primary supervisory authority for GPAI models. The General-Purpose AI Code of Practice, published on 10 July 2025, provides voluntary compliance guidance for demonstrating conformity with these obligations.

High-Risk Obligations: The August 2026 Deadline

For providers of Annex III high-risk AI systems, the Chapter III compliance framework must be in place before 2 August 2026. The obligations fall across eight domains:

Risk management system (Article 9): A continuous, iterative process throughout the lifecycle — not a one-time assessment. Must identify known and reasonably foreseeable risks, evaluate risks under reasonably foreseeable conditions including misuse, and adopt risk mitigation measures.

Data and data governance (Article 10): Training, validation, and testing datasets must meet quality criteria relevant to the intended purpose. Data governance practices must address data collection methods, relevant characteristics, potential biases, and appropriate mitigation measures.

Technical documentation (Article 11 and Annex IV): Comprehensive documentation must exist before market placement. Annex IV specifies what this includes: general system description, design specifications, system architecture, data requirements, training methodologies, validation and testing procedures, and the monitoring and post-market plan.

Record-keeping and logging (Article 12): High-risk systems must automatically log events throughout their lifetime. For remote biometric identification systems, the logs must capture, at minimum, the period of each use, the reference database against which input data was checked, the input data that led to a match, and the identity of the natural persons involved in verifying results.
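
As a rough sketch of what Article 12-style automatic event logging might look like in code (the field names are our own, not mandated by the Act), assuming an append-only JSON Lines file:

```python
import datetime
import json

def log_decision_event(system_id: str, input_ref: str, output: str,
                       verified_by: str, path: str = "audit.log") -> None:
    """Append one JSON record per decision event to an append-only log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to the input data behind the decision
        "output": output,
        "verified_by": verified_by,  # natural person who verified the result
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

In production the records would go to tamper-resistant storage with an appropriate retention period rather than a local file.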

Transparency and information to deployers (Article 13): Providers must supply deployers with instructions for use that are concise, complete, and in a format accessible to deployers. These must cover the intended purpose, performance levels, and circumstances where the system may not perform as expected.

Human oversight (Article 14): Systems must be designed to enable effective oversight by natural persons during operation. Where technically feasible, individuals must be able to interpret the system's output and refuse or override automated decisions.

Accuracy, robustness, and cybersecurity (Article 15): High-risk systems must achieve appropriate levels of accuracy for their intended purpose, be resilient against attempts by unauthorised third parties to alter outputs, and maintain performance levels across expected operational conditions.

Quality management system (Article 17): Providers must implement a documented QMS covering all aspects of compliance — from design to post-market monitoring — with clear role assignments and update procedures.

Conformity Assessment Routes

Before placing a high-risk system on the EU market, providers must complete a conformity assessment under Article 43. Two routes exist:

Self-assessment (Annex VI / Internal Control): Available to most Annex III systems. The provider performs and documents the assessment internally, using harmonised standards where available. This route does not require notified body involvement.

Third-party assessment (Annex VII / Notified Body): Mandatory for high-risk systems in two situations: (1) where no harmonised standards exist or the provider has not applied them; and (2) for biometric identification systems and AI used by law enforcement, immigration, or asylum authorities. The notified body independently audits both the quality management system and the technical documentation.

For systems covered by existing EU product safety legislation listed in Annex I (medical devices, machinery, toys), the relevant sectoral conformity assessment procedure applies, extended to cover the AI Act requirements.

Upon successful completion, providers must:

  1. Draw up an EU Declaration of Conformity
  2. Affix the CE marking
  3. Register the system in the EU database for high-risk AI systems
  4. Establish post-market monitoring procedures

Substantial modifications after market placement trigger a new conformity assessment.

The EU AI Act Compliance Checklist

The following 20 items represent the operational tasks organisations must work through before August 2026. Vendors like Credo AI, Holistic AI, and LatticeFlow AI offer structured workflows that map to many of these requirements; Trustible similarly provides centralised AI inventories and audit-ready evidence generation.

Classification and Scoping

  1. Inventory all AI systems touching the EU market — including third-party models you deploy — and assign a preliminary risk classification to each.
  2. Apply the Article 6 classification test to every candidate high-risk system: does it fall under Annex I (regulated products) or Annex III (listed use cases)?
  3. Evaluate Article 6(3) exceptions for any Annex III systems: document the assessment and, if claiming the exception, prepare the required notification for the relevant national authority.
  4. Identify your role for each system: are you the provider (developer/importer), the deployer, or both? Obligations differ significantly.
  5. Audit GPAI exposure: if you operate any general-purpose model available to third parties, confirm compliance with the Chapter V obligations that have applied since August 2025.

Documentation

  6. Draft technical documentation per Annex IV for each high-risk system — system description, architecture, data specifications, training methodology, testing procedures.
  7. Establish a risk management file covering identified risks, evaluation methodology, and adopted mitigations, with a process for continuous updates.
  8. Document data governance practices for training, validation, and testing datasets, including bias identification and mitigation measures.
  9. Prepare instructions for use meeting the Article 13 transparency requirements for deployers — intended purpose, accuracy levels, operational conditions, and known limitations.
  10. Create and maintain a post-market monitoring plan specifying how performance, incidents, and near-misses will be tracked after deployment.

Technical Requirements

  11. Implement logging and record-keeping per Article 12, ensuring automated capture of operational events throughout the system's active lifetime.
  12. Design and test human oversight mechanisms per Article 14, including the ability for operators to interpret outputs, halt operation, and override decisions.
  13. Conduct robustness and cybersecurity testing under Article 15 conditions, including adversarial inputs and expected edge cases.
  14. Validate that accuracy metrics are appropriate for the intended purpose and that performance thresholds are documented and tested across representative populations.
  15. Implement a Quality Management System covering the full compliance lifecycle — design, implementation, monitoring, and corrective action.

Conformity Assessment and Registration

  16. Determine the conformity assessment route: self-assessment (Annex VI) for most Annex III systems, or third-party notified body (Annex VII) if required.
  17. Engage a notified body if required, and schedule the assessment with sufficient lead time — notified bodies are already booking into late 2025 and early 2026.
  18. Complete the conformity assessment, resolve any non-conformities, and draw up the EU Declaration of Conformity.
  19. Affix the CE marking and register the system in the EU high-risk AI database before market placement.
  20. Train relevant staff on AI literacy obligations under Article 4 and on the operational requirements of the QMS, including incident reporting procedures.

Penalties: What Is Actually at Stake

The penalty framework under Article 99 creates three tiers:

  • Prohibited AI practices (Article 5): €35 million or 7% of global annual turnover
  • High-risk system non-compliance: €15 million or 3% of global annual turnover
  • Providing false or misleading information to authorities: €7.5 million or 1% of global annual turnover

For SMEs and start-ups, the lower of the two figures applies. For large multinationals, 7% of global revenue can represent sums in the billions. The AI Office has enforcement authority over GPAI model providers; national competent authorities oversee other AI systems within their jurisdictions.
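
The "whichever is higher" rule, and its inversion for SMEs, is simple arithmetic. The following sketch uses integer euros and a function name of our own:

```python
# Article 99 fine arithmetic: the higher of the fixed cap and the turnover
# percentage applies; for SMEs and start-ups, the lower of the two applies.
def applicable_max_fine(turnover_eur: int, fixed_cap_eur: int,
                        pct: int, is_sme: bool = False) -> int:
    """Maximum fine in euros for a given tier (pct as a whole percentage)."""
    pct_figure = turnover_eur * pct // 100
    return min(fixed_cap_eur, pct_figure) if is_sme else max(fixed_cap_eur, pct_figure)

# Multinational, €100bn turnover, prohibited-practice tier (€35m / 7%):
print(applicable_max_fine(100_000_000_000, 35_000_000, 7))           # 7000000000
# SME, €20m turnover, same tier: the lower figure applies
print(applicable_max_fine(20_000_000, 35_000_000, 7, is_sme=True))   # 1400000
```

At €100bn turnover the percentage figure dominates (€7bn); at €20m turnover the SME rule caps exposure at €1.4m.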

Member states were required to have national penalty frameworks in place by 2 August 2025. The European Commission's guidelines on the practical implementation of Article 6 — including classification criteria — were due by 2 February 2026 and provide additional clarity for organisations assessing borderline cases.

What Happens After 2026

The compliance calendar extends beyond August 2026. High-risk AI systems embedded in Annex I regulated products (medical devices, machinery, toys) have an extended transition period until 2 August 2027. Existing GPAI models placed on the market before August 2025 must achieve full compliance by 2 August 2027. Certain high-risk AI systems used by public authorities and deployed before August 2026 have transition provisions extending to 2 August 2030.

For organisations still in early-stage compliance planning, the transition windows are narrowing. Conformity assessments for complex systems can take four to eight weeks once documentation is complete — and the documentation itself typically takes months to assemble correctly. The August 2026 deadline is not when to start; it is when everything must already be finished.

For a full overview of the regulatory framework, see our EU AI Act framework page.


Key Takeaways

  • The EU AI Act applies in four risk tiers: Prohibited (banned), High-Risk (Chapter 2 obligations), Limited Risk (transparency only), and Minimal Risk (no mandatory requirements).
  • GPAI obligations — including technical documentation, copyright compliance, and adversarial testing for systemic-risk models — have been in force since 2 August 2025.
  • Full compliance for Annex III high-risk systems is required by 2 August 2026; this includes risk management systems, technical documentation, human oversight, robustness testing, QMS, and conformity assessment.
  • Two conformity routes exist: self-assessment (most Annex III systems) and third-party notified body (biometric identification, law enforcement use cases, and systems without applicable harmonised standards).
  • Maximum penalties reach €35 million or 7% of global turnover for prohibited practices; €15 million or 3% for high-risk non-compliance.
  • ISO 27001-style infrastructure, properly extended, maps cleanly onto many EU AI Act documentation requirements — organisations with mature information security programmes have a meaningful head start.

Sources

  1. European Commission — AI Act full text and application timeline: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  2. EU AI Act Article 43 (Conformity Assessment): https://artificialintelligenceact.eu/article/43/
  3. EU AI Act High-Level Summary: https://artificialintelligenceact.eu/high-level-summary/
  4. Kennedys Law — EU AI Act Implementation Timeline (March 2026): https://www.kennedyslaw.com/en/thought-leadership/article/2026/the-eu-ai-act-implementation-timeline-understanding-the-next-deadline-for-compliance/
  5. Pitch.law — AI Act Compliance Timeline: https://www.pitch.law/knowledge-base/ai-act-compliance-timeline
  6. Glocert International — EU AI Act Risk Classification Playbook: https://www.glocertinternational.com/resources/guides/eu-ai-act-risk-classification-playbook/
  7. GDPR Local — AI Risk Classification Guide: https://gdprlocal.com/ai-risk-classification/
  8. AI Governance Library — Conformity Assessments Step-by-Step Guide: https://www.aigl.blog/conformity-assessments-under-the-eu-ai-act-a-step-by-step-guide/
  9. European Commission — General-Purpose AI Code of Practice (July 2025): https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  10. LinkedIn / Kayne McGladrey — GPAI Obligations August 2025: https://www.linkedin.com/pulse/what-do-organizations-need-know-latest-eu-ai-act-kayne-mcgladrey-jbzcf
