Tags: eu-ai-act · compliance · ai-governance · risk-management · vendor-evaluation

EU AI Act Compliance: The Complete 2026 Buyer's Guide

Definitive 2026 guide to EU AI Act compliance: every deadline from the August 2026 full application date, all obligation tiers, risk categories, vendor evaluation framework, and a 12-month implementation timeline. Updated April 2026.

By AI Compliance Vendors Editorial · Published April 21, 2026 · Last verified April 21, 2026

EU law has arrived for artificial intelligence. Regulation (EU) 2024/1689, the EU Artificial Intelligence Act, entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026. For most enterprises that deadline is now within a single annual planning cycle — and the obligations are not minor.

This guide is written for the person who owns AI governance: whether that is a Chief AI Officer, a GRC lead, a DPO expanding their mandate, or a procurement team evaluating tooling. It covers every deadline, every obligation tier, and a rigorous evaluation framework for vendors claiming to help. It does not rank vendors for you; it gives you the criteria to do so yourself.


Key dates and what they mean for your program

The AI Act's application timeline is staggered, which creates confusion. Here is the precise sequence sourced directly from Article 113 of the Act and confirmed by the European Commission's AI Act page:

| Date | What applies |
| --- | --- |
| 1 August 2024 | Regulation enters into force |
| 2 February 2025 | Chapter II (prohibited AI practices, Art. 5) and AI literacy obligation (Art. 4) |
| 2 May 2025 | Deadline for the AI Office to publish codes of practice for GPAI models |
| 2 August 2025 | Chapter III §4 (notified bodies), Chapter V (GPAI models, Arts. 53–55), Chapter VII (governance), Chapter XII (penalties), Art. 78 |
| 2 February 2026 | Commission must issue guidelines on Art. 6 (high-risk classification) and the post-market monitoring template |
| 2 August 2026 | Full application: high-risk AI obligations (Arts. 6–51 for Annex III systems), conformity assessments, EU database registration, transparency rules (Art. 50) |
| 2 August 2027 | Art. 6(1) high-risk AI embedded in regulated products (Annex I); GPAI models placed on market before 2 August 2025 |
| 2 August 2030 | High-risk AI systems used by public authorities placed on market before 2 August 2026 |
| 31 December 2030 | AI components of large-scale EU IT systems listed in Annex X that are in operation before 2 August 2027 |

Practical implication: if you deploy Annex III high-risk AI systems — recruitment tools, credit scoring, biometric identification, educational assessment — your conformity assessment, technical documentation, risk management system, and EU database registration must be in order by 2 August 2026. Through the Digital Omnibus on AI proposal, adopted 19 November 2025, the Commission signalled a possible extension of some obligations, but as of April 2026 that proposal has not passed.

Prohibited practices under Article 5 — social scoring, real-time remote biometric identification in public spaces without narrow exceptions, subliminal manipulation — have been banned since 2 February 2025. Violations already attract fines of up to €35 million or 7% of global annual turnover, whichever is higher (Art. 99).

See the /frameworks/eu-ai-act framework page and the /best/eu-ai-act-compliance-tools collection.


Who the Act applies to (providers vs deployers vs distributors)

The Act uses four actor categories that carry different obligation stacks. Getting this wrong is one of the most common compliance mistakes.

Providers (Art. 3(3)): Natural or legal persons that develop or have an AI system developed and place it on the EU market or put it into service under their own name. Providers bear the heaviest obligations: conformity assessment, CE marking, EU declaration of conformity, registration, post-market monitoring, serious incident reporting. If you build an AI system and sell or deploy it — even internally at scale — you are likely a provider for purposes of high-risk rules.

Deployers (Art. 3(4)): Entities using a high-risk AI system under their own authority for professional purposes. Deployers must follow provider instructions, implement human oversight (Art. 14), keep logs, and conduct fundamental rights impact assessments (Art. 27) when deploying Annex III systems.

Importers (Art. 3(6)) and Distributors (Art. 3(7)) bear lighter obligations — primarily due-diligence checks — but can be reclassified as providers if they modify the system or place it under their own name (Art. 25).

GPAI model providers (Arts. 53–55): Providers of general-purpose AI models must maintain technical documentation, provide information to downstream providers, comply with copyright law, and publish training data summaries. GPAI models with systemic risk (presumed where cumulative training compute exceeds 10^25 FLOPs, per Art. 51(2); Annex XIII lists further designation criteria) face additional obligations including adversarial testing and incident reporting. This track became applicable 2 August 2025.
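
For scoping purposes, a back-of-envelope compute check can flag models near the presumption threshold. A minimal sketch, assuming the common 6 × parameters × tokens estimate for dense transformer training (a heuristic, not a method the Act prescribes):

```python
# Back-of-envelope check against the Art. 51(2) presumption threshold.
# The 6 * params * tokens estimate is a common heuristic for dense
# transformer training compute, not something prescribed by the Act.

SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51(2) presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumption triggered: {flops > SYSTEMIC_RISK_FLOPS}")
# 6.30e+24 FLOPs -> presumption triggered: False
```

Models near the line warrant legal review either way, since the AI Office can designate systemic risk on other grounds.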

Territoriality: The Act applies to providers placing systems on the EU market regardless of where established, and to deployers in the EU. Non-EU providers must designate an authorised representative (Art. 22).


Risk categories: prohibited, high-risk, limited, minimal

Prohibited practices (Art. 5)

Eight categories of AI practice are absolutely banned under Article 5:

  1. Subliminal techniques that circumvent conscious awareness to materially distort behaviour, causing harm
  2. Exploitation of vulnerabilities (age, disability, social or economic situation) to distort behaviour
  3. Social scoring leading to detrimental or unfavourable treatment in unrelated contexts
  4. Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions apply)
  5. Biometric categorisation inferring sensitive attributes (race, political opinions, religion)
  6. AI systems that assess the criminal risk of persons based solely on profiling
  7. Untargeted facial image harvesting to build recognition databases
  8. AI systems inferring emotions of persons in workplaces and educational institutions (except for medical or safety reasons)

High-risk AI systems (Arts. 6–15, Annexes I and III)

High-risk status is triggered by two routes:

  • Annex I route ([Art. 6(1)](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689)): AI systems that are safety components of products already regulated under EU harmonization legislation (medical devices, machinery, aviation, automotive) and subject to third-party conformity assessment. Deadline extended to 2 August 2027.
  • Annex III route ([Art. 6(2)](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689)): AI systems in eight domains: biometrics; critical infrastructure management; education; employment and worker management; access to essential services (credit, insurance, social benefits); law enforcement; migration and border management; administration of justice and democratic processes. Systems performing only narrow procedural or purely preparatory tasks may qualify for the Art. 6(3) derogation.

Limited-risk AI systems (Art. 50)

Transparency obligations apply to interactive AI systems (chatbots must disclose AI nature), AI generating synthetic content (deepfakes must be labelled), and emotion recognition systems. As of April 2026, the Commission is preparing guidelines on transparent AI systems due in Q2 2026.

Minimal-risk AI systems

Spam filters, video game AI, and AI systems not covered by Annex III face no mandatory obligations, though voluntary codes of conduct are encouraged.


The 9 obligations that require tooling support

For high-risk AI systems under Annex III (post-August 2026), providers face nine core obligations per the EU AI Act, Chapter III Section 2:

| # | Obligation | Article | Tooling category needed |
| --- | --- | --- | --- |
| 1 | Risk management system (continuous, not one-time) | Art. 9 | Risk register, lifecycle tracking, continuous monitoring |
| 2 | Data and data governance (quality, provenance, bias mitigation) | Art. 10 | Data catalog, bias detection, data lineage |
| 3 | Technical documentation (Annex IV, 10-year retention) | Art. 11 | Model cards, documentation automation, version control |
| 4 | Record-keeping and automatic event logging | Art. 12 | Audit log infrastructure, observability platform |
| 5 | Transparency to deployers (instructions for use, limitations) | Art. 13 | Documentation templates, model cards |
| 6 | Human oversight (design for override; halt capability) | Art. 14 | Workflow governance, approval gates, alerting |
| 7 | Accuracy, robustness, cybersecurity | Art. 15 | Red-teaming, adversarial testing, model hardening |
| 8 | Quality management system | Art. 17 | GRC platform, process documentation, internal audit |
| 9 | Post-market monitoring and serious incident reporting | Arts. 72–73 | Production monitoring, incident tracking, EU database integration |

The QMS (Art. 17) requires written strategies, processes, and techniques for development, quality control, risk management, change management, and documentation. This is where off-the-shelf GRC tooling that lacks AI-specific risk libraries falls short.

Deployers additionally face the Fundamental Rights Impact Assessment (Art. 27) obligation when deploying Annex III systems. No standard template yet exists; vendors offering FRIA templates should cite which legal authority their template references.


What "compliance tooling" actually means

Before evaluating vendors, establish clarity on what the tooling must actually do:

Documentation and inventory: Automated generation of Annex IV technical documentation, model cards, and system registers. The Act requires documentation retention for 10 years (Art. 11(3)). A spreadsheet will not survive a market surveillance audit.
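
To make "documentation automation" concrete, here is a minimal sketch that renders a draft technical-documentation section from a registry record. The field names loosely follow Annex IV headings but are illustrative, not an official schema; `SystemRecord` and `render_annex_iv_draft` are hypothetical names:

```python
# Minimal sketch: render a draft technical-documentation section from
# a model-registry record. Field names loosely echo Annex IV headings;
# this is an illustration, not an official template.
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    version: str
    intended_purpose: str
    hardware: str
    training_data_sources: list[str] = field(default_factory=list)

def render_annex_iv_draft(rec: SystemRecord) -> str:
    lines = [
        f"# Technical documentation (draft) - {rec.name} v{rec.version}",
        "## 1. General description",
        f"Intended purpose: {rec.intended_purpose}",
        f"Hardware on which the system runs: {rec.hardware}",
        "## 2. Data and data governance",
        "Training data sources:",
        *[f"- {src}" for src in rec.training_data_sources],
    ]
    return "\n".join(lines)

draft = render_annex_iv_draft(SystemRecord(
    name="cv-screener", version="2.3.1",
    intended_purpose="Rank job applications for recruiter review",
    hardware="AWS EC2 g5.xlarge",
    training_data_sources=["internal ATS 2019-2024", "public resume corpus"],
))
print(draft)
```

The point is that drafts regenerate from the registry on every model version, so the 10-year audit trail stays in sync with what actually shipped.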

Risk register and classification: A structured workflow to assess whether systems fall under Annex III, assign risk classification, link evidence, and flag re-assessment triggers (e.g., substantial modification under Art. 9(5)).
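
A minimal sketch of what a structured register entry with re-assessment triggers might look like; the categories, field names, and trigger logic are illustrative assumptions, not prescribed by the Act:

```python
# Sketch of a risk-register entry with re-assessment triggers.
# Field names and trigger logic are illustrative.
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH = "high (Annex III)"
    LIMITED = "limited (Art. 50)"
    MINIMAL = "minimal"

@dataclass
class RegisterEntry:
    system_id: str
    annex_iii_area: str | None   # e.g. "employment", or None if out of scope
    classification: RiskClass
    derogation_claimed: bool     # Art. 6(3) narrow/preparatory carve-out
    evidence_links: list[str]

    def needs_reassessment(self, retrained: bool, purpose_changed: bool) -> bool:
        # Substantial modification (Art. 3(23)) should trigger re-classification.
        return retrained or purpose_changed

entry = RegisterEntry("cv-screener", "employment", RiskClass.HIGH, False,
                      ["https://wiki.example/cv-screener/assessment"])
print(entry.needs_reassessment(retrained=True, purpose_changed=False))  # True
```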

Bias and dataset auditing: Art. 10 requires training, validation, and testing data to be relevant, representative, and free of errors and biases to the extent possible. This demands tooling that runs statistical parity tests, demographic disparity analysis, and data provenance tracking — capabilities largely absent from pure GRC platforms.
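
As a concrete example of the statistical testing involved, here is a minimal demographic-parity check in plain Python. The metric choice and any pass/fail threshold are policy decisions; Art. 10 requires bias examination and mitigation but prescribes no numeric cut-off:

```python
# Minimal demographic-parity check on model outcomes using numpy.
# The metric and threshold are policy choices, not set by the Act.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Max difference in positive-outcome rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)        # binary model decisions
group = rng.choice(["A", "B"], size=1000)     # protected attribute

gap = demographic_parity_gap(y_pred, group)
print(f"selection-rate gap: {gap:.3f}")       # flag if above your policy threshold
```

A real audit would add equalized-odds and calibration checks and run them per data slice, but the shape of the tooling is the same: outcomes plus protected attributes in, disparity metrics and evidence out.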

Human oversight workflows: Art. 14 requires that high-risk AI systems allow human operators to fully understand, override, and halt outputs. Compliance tooling should support workflow-level approval gates.
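
A minimal sketch of an approval gate, assuming a policy-defined risk threshold for routing outputs to a human reviewer (the threshold and field names are illustrative):

```python
# Sketch of an Art. 14-style human gate: high-stakes outputs are held
# in a queue until a named reviewer approves or overrides them.
from dataclasses import dataclass

@dataclass
class PendingDecision:
    system_id: str
    model_output: dict
    risk_score: float

REVIEW_THRESHOLD = 0.7  # policy-defined, not specified by the Act

def route(decision: PendingDecision, review_queue: list[PendingDecision]) -> str:
    if decision.risk_score >= REVIEW_THRESHOLD:
        review_queue.append(decision)   # held for human review
        return "held_for_review"
    return "released"                   # still logged under Art. 12

queue: list[PendingDecision] = []
print(route(PendingDecision("cv-screener", {"rank": 1}, 0.82), queue))  # held_for_review
```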

Post-market monitoring dashboard: Art. 72 mandates a post-market monitoring plan and system. This maps to LLM observability and ML monitoring platforms — not traditional compliance software.
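
One common monitoring heuristic is the population stability index (PSI) over model score distributions; the Act prescribes no specific metric, so the cut-off below is purely illustrative:

```python
# Sketch of a post-market drift check using the population stability
# index (PSI), a common monitoring heuristic. Thresholds are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

baseline = np.random.default_rng(1).normal(0.0, 1.0, 10_000)  # scores at deployment
live = np.random.default_rng(2).normal(0.4, 1.2, 10_000)      # scores this week

drift = psi(baseline, live)
if drift > 0.2:  # conventional "significant shift" cut-off
    print(f"PSI={drift:.2f}: investigate; escalate to incident process if confirmed")
```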

EU database integration: High-risk systems listed in Annex III must be registered in the EU AI Act database (Art. 71), which became operational under the AI Office.


Build vs buy: when internal GRC is not enough

Arguments for internal tooling:

  • Mature GRC teams with existing ISO 27001 or SOC 2 programs may adapt existing risk frameworks
  • Annex IV documentation is structured but not algorithmically complex; experienced technical writers can produce it
  • Art. 9 risk management can be run through existing enterprise risk management systems with AI-specific customizations

Arguments for external platforms:

  • Art. 10 bias testing requires statistical testing infrastructure that most GRC platforms do not natively include
  • Art. 12 automatic event logging requires integration with production ML infrastructure (SageMaker, Vertex AI, Azure ML) — not a GRC problem
  • Art. 15 adversarial testing requires specialized red-teaming tooling
  • Most internal GRC tools have no concept of a "model" or "AI system" as a first-class object
  • Multi-framework requirements (NIST AI RMF, ISO 42001, Colorado AI Act) require simultaneous obligation mapping that commercial platforms handle at scale

The practical threshold: if your organization deploys more than 10 AI systems that may fall under Annex III, purpose-built tooling pays for itself in avoided rework. Below that threshold, a well-structured GRC extension may suffice — if it includes a data bias module.


How to evaluate EU AI Act compliance vendors (the criteria)

Use these eight criteria when issuing an RFP or conducting vendor demonstrations. Score each 1–5; a minimal weighted-scoring sketch follows the list.

  1. Regulatory mapping depth: Does the platform map obligations at article and recital level? Ask for a live demo of how Art. 9 risk management maps to platform workflows.
  2. High-risk system classification: Does the platform have a structured Annex III classification tool, with derogation analysis?
  3. Annex IV documentation automation: Can the platform auto-generate draft technical documentation from a model registry? Does it support 10-year audit-trail retention?
  4. Bias and data quality testing: Does the platform run statistical fairness tests natively? What metrics are supported (demographic parity, equalized odds, individual fairness)?
  5. MLOps integration breadth: Which ML platforms does the vendor integrate with out of the box? (SageMaker, Azure ML, Vertex AI, Databricks, MLflow, Hugging Face)
  6. FRIA and human oversight workflow: Does the platform support the Art. 27 FRIA process? Does it provide approval gates and override documentation for Art. 14?
  7. Multi-framework coverage without duplication: If you need NIST AI RMF and ISO 42001 alongside EU AI Act, does the platform reuse evidence across frameworks?
  8. Evidence export format: Can the platform export audit-ready evidence packages in formats acceptable to notified bodies?
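
A minimal weighted-scoring sketch for these eight criteria; the weights are illustrative assumptions and should reflect your own risk profile:

```python
# Weighted scoring of vendor demos against the eight criteria above.
# Weights are illustrative - tune them to your own risk profile.
CRITERIA_WEIGHTS = {
    "regulatory_mapping": 0.15, "classification": 0.15,
    "annex_iv_automation": 0.15, "bias_testing": 0.15,
    "mlops_integration": 0.10, "fria_oversight": 0.10,
    "multi_framework": 0.10, "evidence_export": 0.10,
}

def weighted_score(scores: dict[str, int]) -> float:
    """scores: criterion -> 1..5 rating from the demo or RFP response."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

vendor_a = dict.fromkeys(CRITERIA_WEIGHTS, 3) | {"bias_testing": 5}
print(f"Vendor A: {weighted_score(vendor_a):.2f} / 5.00")  # 3.30 / 5.00
```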

Also ask: Is the platform itself high-risk under the Act? Platforms that assess AI systems and influence compliance decisions may themselves face obligations.


Comparison of major vendors

The following platforms are from the site's vendor roster. Capability notes are based on publicly available product pages as of April 2026. Independent evaluation against the criteria above is required before purchase.

| Vendor | EU AI Act coverage | Art. 10 bias testing | MLOps integrations | FRIA support | Pricing model |
| --- | --- | --- | --- | --- | --- |
| Credo AI | Pre-built EU AI Act policy packs; automated evidence generation; "10x faster compliance" (credo.ai) | Continuous bias assessment; automated red-teaming | Snowflake, Databricks, AWS, Azure, MLflow | Compliance mapping; policy enforcement workflows | Enterprise, contact sales |
| Holistic AI | Risk mapped to EU AI Act; policy-as-code; continuous audit trails (holisticai.com) | Automated bias detection, hallucination testing; runtime monitoring | Cloud-native integrations; lifecycle monitoring | Audit-ready controls; evidence logs | Enterprise, contact sales |
| Collibra AI Governance | Built-in assessments aligned to EU AI Act; templates for EU AI Act documentation (collibra.com) | Data lineage from training through inference; data quality rules | AWS, Azure, Google, Databricks, SAP, MLflow | Compliance workflows with stakeholder review | Enterprise |
| OneTrust AI Governance | NIST AI RMF alignment confirmed; EU AI Act coverage claimed (onetrust.com) | Global framework risk identification | Integration with OneTrust ecosystem | Compliance reporting | Enterprise |
| Modulos AI | EU AI Pact signatory; evidence management for transparency certification; covers EU AI Act, ISO 42001, NIST AI RMF, DORA without duplicate work (modulos.ai) | Lifecycle stage tracking; quantitative monetary risk assessment | Cross-framework governance graph | Full audit trails | Enterprise (CHF 15k+ starter; free starter plan available) |
| FairNow | 25+ AI regulations covered; flags applicable AI regulations per system with step-by-step guidance (fairnow.ai) | Automated evidencing for ISO 42001 and NIST; bias audit services | Self-serve tier; mid-market accessible | AI certification support | Starting rate (self-serve available) |
| IBM watsonx.governance | Compliance accelerators for EU AI Act; governs models on any cloud including AWS, Azure, Google (ibm.com/products/watsonx-governance) | AI guardrails for toxicity and bias; model drift detection | AWS, Microsoft, Google; hybrid cloud and on-prem | Risk capture with contextual regulatory mapping | SaaS Standard: USD 0.60/resource unit |

Evaluation caveat: All vendor capability claims are drawn from public product pages. Require vendors to demonstrate specific support for Arts. 9, 10, 12, and 17 obligations — not just top-level "EU AI Act" branding — during any proof of concept.

See the /best/eu-ai-act-compliance-tools collection for the full ranked list, and /best/ai-governance-platforms for the broader governance platform landscape.


Common procurement mistakes

1. Buying a GRC platform and calling it EU AI Act compliance. Traditional GRC tools can store documentation and run assessment workflows, but they lack AI system–aware data models, bias testing engines, and MLOps integrations required by Arts. 10, 12, and 15. GRC is necessary but not sufficient.

2. Treating GPAI obligations as someone else's problem. If your organization fine-tunes a foundation model and makes it available — even to internal business units via API — you may meet the definition of a GPAI model provider. Art. 55 obligations for systemic-risk GPAI models include adversarial testing and serious incident reporting.

3. Assuming a one-time gap analysis is sufficient. The Act imposes continuous obligations: ongoing risk management (Art. 9), automatic logging (Art. 12), and post-market monitoring (Art. 72). A point-in-time compliance project is a starting point, not a finished state.

4. Neglecting the deployer track. If you deploy a third-party Annex III high-risk system (e.g., an HR screening tool), you carry Art. 26 obligations including fundamental rights impact assessment (Art. 27), worker notification, and monitoring duties.

5. Conflating the EU AI Act with GDPR. The two regulations are distinct. The Act's Recital 10 makes clear it does not affect GDPR. A DPIA under GDPR is not a substitute for a FRIA under Art. 27. See the /frameworks/gdpr-article-22 framework page for interaction guidance.


Implementation timeline (12-month, 6-month, rush)

12-month plan (standard for most organizations, deadline: 2 August 2026)

  • Months 1–2: AI system inventory; classify each system against Annex III criteria; apply the Art. 6(2) derogation analysis to narrow scope
  • Months 3–4: Risk management system design (Art. 9); assign system owners; procure or configure tooling
  • Months 4–6: Annex IV technical documentation for highest-risk systems; data governance review (Art. 10); bias testing baseline
  • Months 6–8: Conformity assessment preparation; internal audit against all Art. 9–17 obligations; QMS (Art. 17) framework finalization
  • Months 8–10: Notified body engagement (if Annex I route required); EU database registration (Art. 71)
  • Months 10–12: Post-market monitoring system go-live; FRIA completion for deployer obligations; staff training (Art. 4 AI literacy)

6-month accelerated plan

Compress months 1–6 above into months 1–3 by running inventory, classification, and risk management design in parallel. Requires a dedicated program lead, external counsel for legal classification questions, and a platform that can auto-generate documentation drafts. Prioritize the top 20% of highest-risk systems; treat the rest as a second wave.

Rush plan (3 months, for organizations starting late)

Focus exclusively on: (1) Annex III classification across your full model portfolio, (2) prohibitions audit under Art. 5 to ensure nothing banned is already deployed, (3) written risk management procedures for your highest-risk system, and (4) basic conformity documentation. This does not achieve full compliance but addresses the highest-enforcement-risk gaps first.


FAQ

Q: Does the EU AI Act apply to AI systems used only internally? A: Yes, if the deployer is established in the EU and uses the system for professional purposes in a high-risk category. The Act does not limit itself to external-facing products.

Q: What counts as a "substantial modification" that triggers re-assessment? A: Article 9(5) requires the risk management system to cover changes throughout the lifecycle. The Act defines substantial modification at Art. 3(23) as a change that affects compliance with requirements or changes the intended purpose. Re-training on significantly different data and changes to human oversight design both likely qualify.

Q: Our AI vendor says their product is "EU AI Act compliant" — does that cover us as deployer? A: Partially. The vendor (as provider) must meet provider obligations (Arts. 16–20). You, as deployer, still carry Art. 26 obligations: following use instructions, implementing oversight, maintaining logs, and conducting FRIAs for Annex III systems. Provider compliance does not eliminate deployer obligations.

Q: What are the maximum fines? A: Under Art. 99: violations of Art. 5 prohibited practices: up to €35 million or 7% of global annual turnover; violations of other obligations: up to €15 million or 3%; providing incorrect information to authorities: up to €7.5 million or 1%.
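
A worked example of the "whichever is higher" cap for an Art. 5 violation, assuming a hypothetical firm with €2 billion global annual turnover:

```python
# Art. 99 cap for a prohibited-practice violation: the higher of the
# fixed amount and the turnover percentage. Turnover is hypothetical.
turnover = 2_000_000_000
cap = max(35_000_000, 0.07 * turnover)
print(f"maximum fine: EUR {cap:,.0f}")  # EUR 140,000,000
```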

Q: Is there a de minimis exemption for small companies? A: No absolute exemption, but Art. 62 provides measures to support SMEs and startups, including reduced fees for regulatory sandbox participation. Fines are capped at the lower of percentage/absolute amounts for SMEs.

Q: When does the EU AI Act database go live? A: The EU database for high-risk AI systems (Art. 71) is operational under the AI Office. Providers of Annex III systems must register them before placing them on the market; the obligation applies in full from 2 August 2026.

Q: What is a GPAI model's "systemic risk" threshold? A: A GPAI model is presumed to carry systemic risk if trained using more than 10^25 FLOPs of computing power (Art. 51(2)). Annex XIII lists the criteria the AI Office uses to designate models as systemic-risk based on capabilities or reach, regardless of training compute.

Q: How does the EU AI Act interact with GDPR? A: The two instruments are complementary, not mutually exclusive. Recital 10 makes clear the Act does not affect GDPR. A DPIA under GDPR is not a substitute for a FRIA under Art. 27. Where an HR-AI system makes decisions about individuals, both GDPR Art. 22 restrictions and EU AI Act Annex III obligations apply.


For the full list of platforms that map to EU AI Act obligations, see [/best/eu-ai-act-compliance-tools](/best/eu-ai-act-compliance-tools). For related reading, see the [NIST AI RMF implementation guide](/guides/nist-ai-rmf-implementation-guide) and [AI governance platform buyer's guide](/guides/ai-governance-platform-buyers-guide-2026).
