EU AI Act GPAI Obligations Explained: What Foundation Model Providers Must Do
Articles 53 and 55 of the EU AI Act impose layered obligations on GPAI model providers. Here's what applies, to whom, and when enforcement kicks in.
By ACV Editorial · April 22, 2026 · 12 min read · Last reviewed April 22, 2026
The EU AI Act entered into force on 1 August 2024. Its most structurally novel provisions—those governing General-Purpose AI (GPAI) models—became legally operative on 2 August 2025, following the publication of the GPAI Code of Practice on 10 July 2025 and the EU Commission's guidelines on key concepts on 18 July 2025. For every organization that develops, distributes, or builds on top of foundation models with EU market exposure, the compliance clock is now running.
This post provides a systematic breakdown of what GPAI means under the Act, what Articles 53 and 55 require, who qualifies at which tier, how the Code of Practice functions as a compliance pathway, and what the enforcement calendar actually looks like.
What Is a GPAI Model Under the EU AI Act?
Article 3(63) of the AI Act defines a general-purpose AI model as:
*"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."
The EU Commission's guidelines published on 18 July 2025 operationalize this definition with a quantitative indicator: models trained with cumulative compute greater than 10²³ FLOPs (floating-point operations) and capable of generating language (text, audio), text-to-image, or text-to-video outputs are presumptively considered GPAI models. This captures virtually every contemporary large language model and multimodal foundation model.
Notably, the definition is functional rather than architectural. A fine-tuned customer service chatbot constrained to product-specific questions is unlikely to qualify—it lacks "significant generality." But GPT-4, Claude 3, Gemini Ultra, Llama 3, Mistral Large, and comparable foundation models unambiguously meet the definition.
The GPAI category sits alongside—not within—the Act's familiar four-tier risk framework (unacceptable, high-risk, limited, minimal). A GPAI model may also be subject to high-risk obligations if it is deployed in an Annex III context (hiring, credit scoring, law enforcement), but its GPAI obligations apply at the model layer, independent of downstream deployment context.
Article 53: The Four Baseline Obligations
Every provider of a GPAI model placed on the EU market must comply with four core obligations under Article 53(1), effective from 2 August 2025. The obligations apply regardless of company size, compute used in training, or whether the model has systemic risk designation.
Obligation 1: Technical Documentation (Annex XI)
Providers must draw up and maintain technical documentation per Annex XI of the Act. The documentation must be provided on request to the AI Office and national competent authorities, and retained for ten years after the model is placed on the market.
Annex XI Section 1 specifies required content:
- General description: tasks the model can perform, architecture type, whether it is multimodal, license terms, parameter count where available
- Development process: training methodology, optimization objectives, internal evaluation results
- Training data: type of data, provenance, curation and filtering methods, approximate data points per modality, feedback from human reviewers, bias detection activities
- Compute resources: estimate of FLOPs consumed during training, training duration, total energy consumed
For models with systemic risk, Annex XI Section 2 adds adversarial testing details, detailed evaluation results, and system architecture diagrams.
The Code of Practice published on 10 July 2025 provides a Model Documentation Form template that satisfies Annex XI requirements. Providers can use it to demonstrate conformity through a standardized, AI Office-accepted format.
Obligation 2: Downstream Provider Information (Annex XII)
Providers must make available to downstream AI system providers—those who build products and services on top of the GPAI model—the technical information and documentation specified in Annex XII.
Annex XII requires disclosure of: model capabilities (performance benchmarks); known limitations and performance degradation conditions; intended uses and excluded uses; safety performance including hazards and failure modes; and interaction modalities (text-in/text-out, multimodal, tool use).
The rationale is practical: downstream providers building high-risk AI systems under Annex III cannot meet their own Annex IV technical documentation requirements without understanding the capabilities and limitations of the underlying model. Governance tools from vendors like Holistic AI and Credo AI help downstream operators collect and track this information from their GPAI suppliers.
Obligation 3: Copyright Compliance Policy
Providers must implement a policy to comply with Union copyright law, specifically respecting Article 4(3) of the Copyright Directive (EU) 2019/790—the text and data mining opt-out right. Under this provision, rights holders may expressly reserve the right to opt out of TDM uses of their works.
The Code of Practice's Copyright Chapter operationalizes this obligation with specific requirements:
- Training data must be "lawfully accessible"
- Providers must honor machine-readable opt-outs (robots.txt per RFC 9309, ai.txt files, HTTP headers); a minimal robots.txt check is sketched after this list
- Data scraped from sites "persistently and repeatedly infringing copyright" is prohibited
- Providers must implement safeguards against generating copyright-infringing outputs
- A complaint mechanism for rights holders must be maintained
- A board-level written copyright policy must assign organizational responsibility
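To make the opt-out point concrete, here is a minimal sketch of how a crawler operator might check an RFC 9309 robots.txt file before collecting a page for training data. It uses Python's standard-library `urllib.robotparser`; the `ExampleAIBot` user-agent string is a hypothetical placeholder, and a real pipeline would also need to handle ai.txt files and HTTP-header reservations, which this sketch does not cover.

```python
# Minimal sketch: checking an RFC 9309 robots.txt opt-out before fetching a page
# for training data. "ExampleAIBot" is a hypothetical user-agent token; a real
# crawler would use its own published token and also honor ai.txt and HTTP-header
# reservations, which are out of scope here.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser


def may_crawl_for_training(url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True only if the site's robots.txt permits this user agent to fetch the URL."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    rp = RobotFileParser()
    rp.set_url(robots_url)
    try:
        rp.read()  # fetches and parses robots.txt over the network
    except OSError:
        # If robots.txt cannot be retrieved at all, err on the side of not crawling.
        return False
    return rp.can_fetch(user_agent, url)


if __name__ == "__main__":
    print(may_crawl_for_training("https://example.com/some-article"))
```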
Obligation 4: Training Data Summary Publication
Providers must publish a sufficiently detailed summary of the content used to train the GPAI model. The mandatory template published by the AI Office on 24 July 2025 removed prior ambiguity about what "sufficiently detailed" means in practice; it requires disclosure across three sections:
- Section 1 (General): data modalities (text, image, audio, code); approximate dataset size in categorical ranges; language coverage
- Section 2 (Data Sources): large named datasets; top web-scraped domains by volume; crawler and scraping tools used
- Section 3 (Processing): filtering and deduplication methods; quality thresholds; opt-out identification and respect; content moderation procedures
The summary must be updated every six months or when material changes occur. Trade secrets may be redacted in the public version, but the non-redacted version must be provided to authorities.
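For teams assembling this summary from internal metadata, the following is an illustrative Python sketch of a record mirroring the three sections described above. The field names are assumptions made for illustration only; the authoritative layout and wording are those of the Commission's own template.

```python
# Illustrative only: an internal record mirroring the three sections of the public
# training data summary described above. Field names are assumptions for illustration;
# the authoritative structure is the Commission's mandatory template itself.
from dataclasses import dataclass


@dataclass
class TrainingDataSummary:
    # Section 1 - General
    modalities: list[str]           # e.g. ["text", "image", "code"]
    dataset_size_range: str         # categorical range, not an exact count
    languages: list[str]
    # Section 2 - Data sources
    named_datasets: list[str]
    top_scraped_domains: list[str]  # largest web-scraped domains by volume
    crawlers_used: list[str]
    # Section 3 - Processing
    filtering_and_dedup: list[str]
    optout_handling: str            # how TDM opt-outs were identified and respected
    moderation_procedures: list[str]
    last_reviewed: str              # revisited at least every six months


summary = TrainingDataSummary(
    modalities=["text", "code"],
    dataset_size_range="1-10 trillion tokens",
    languages=["en", "de", "fr"],
    named_datasets=["<named public dataset>"],
    top_scraped_domains=["example.com"],
    crawlers_used=["<in-house crawler>"],
    filtering_and_dedup=["URL blocklists", "MinHash deduplication"],
    optout_handling="robots.txt and metadata reservations honored at crawl time",
    moderation_procedures=["CSAM filtering", "toxicity classifiers"],
    last_reviewed="2025-08-02",
)
```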
Who Is a "Downstream Provider" Under the Act?
The downstream provider distinction matters enormously for organizations that fine-tune or build on foundation models. The Commission's 18 July 2025 guidelines establish a clear threshold: a downstream modifier becomes a new GPAI provider in their own right only if the modification uses more than one-third of the original model's training compute.
In practice, standard fine-tuning—which typically uses orders of magnitude less compute than original training—will rarely trigger re-classification as a new GPAI provider. The modifier's obligations are then limited to documenting their specific modifications, not redocumenting the full base model.
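As a hedged sketch of how that indicative threshold works arithmetically, assuming the one-third figure described above, the check reduces to a single comparison; the compute values below are placeholders, not real models.

```python
# Toy illustration of the indicative threshold in the Commission's 18 July 2025
# guidelines: a downstream modifier becomes a new GPAI provider only if the
# modification's training compute exceeds one third of the original model's.
def becomes_new_gpai_provider(original_training_flop: float,
                              modification_flop: float) -> bool:
    """True if a modification crosses the guidelines' one-third compute threshold."""
    return modification_flop > original_training_flop / 3


# A typical fine-tune uses orders of magnitude less compute than pretraining,
# so it stays far below the threshold; the figures here are placeholders.
print(becomes_new_gpai_provider(original_training_flop=1e25,
                                modification_flop=5e21))   # False
print(becomes_new_gpai_provider(original_training_flop=1e25,
                                modification_flop=4e24))   # True
```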
However, if the upstream GPAI provider is non-EU and explicitly excludes EU distribution in their licensing terms, the downstream operator that deploys the model to EU users may themselves become the GPAI provider under Article 2(1)'s extraterritorial reach provision.
Organizations building commercial applications on top of APIs like the OpenAI API, Anthropic API, or cloud model services are generally downstream system providers or deployers—subject to any high-risk AI obligations arising from their specific use case under Annex III, but not to the model-layer GPAI obligations of Articles 53–55, absent significant modification.
Article 55: Systemic Risk—The Second Tier
Article 55 imposes a second, cumulative tier of obligations on providers of GPAI models classified as presenting systemic risk. These obligations add to Article 53 requirements—they do not replace them.
The 10²⁵ FLOPs Threshold: Rebuttable Presumption
Article 51(2) establishes that a GPAI model trained with cumulative compute greater than 10²⁵ floating-point operations (FLOPs) is presumed to present systemic risk. For reference, GPT-3's training required approximately 3×10²³ FLOPs; GPT-4's training has been estimated at roughly 2×10²⁵ FLOPs, placing it squarely above the threshold.
The presumption is rebuttable: a provider whose model exceeds the threshold may contest the designation by demonstrating to the AI Office's satisfaction that the model lacks the high-impact capabilities that constitute systemic risk. Crucially, demonstrating that risks have been mitigated does not rebut the presumption—the provider must show an absence of systemic risk capabilities.
Conversely, the Commission may designate models below 10²⁵ FLOPs as systemic risk under the criteria in Annex XIII: capabilities versus state-of-the-art benchmarks, number of EU users, degree of autonomy, and scalability to high-impact deployments. This means a highly capable model trained at 10²⁴ FLOPs could still receive designation.
Providers must notify the AI Office within two weeks of meeting or foreseeing the 10²⁵ FLOPs threshold.
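To put these orders of magnitude in context, here is a back-of-envelope sketch using the widely cited approximation that dense transformer pretraining costs roughly 6 FLOPs per parameter per training token. This heuristic is not the Commission's prescribed estimation method, the example model size and token count are illustrative placeholders, and the GPAI presumption itself also depends on generative capability, which compute alone does not establish.

```python
# Back-of-envelope sketch, not the Commission's estimation method: the common
# approximation that dense transformer training costs ~6 FLOPs per parameter per
# token, compared against the 10^23 (GPAI indicator) and 10^25 (systemic-risk
# presumption) markers. All figures below are illustrative placeholders.
GPAI_PRESUMPTION_FLOP = 1e23      # Commission guidelines' indicative GPAI threshold
SYSTEMIC_RISK_FLOP = 1e25         # Article 51(2) systemic-risk presumption


def estimate_training_flop(params: float, tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens


def classify(flop: float) -> str:
    if flop > SYSTEMIC_RISK_FLOP:
        return "presumed systemic risk (Article 51(2), rebuttable)"
    if flop > GPAI_PRESUMPTION_FLOP:
        return "presumed GPAI model (compute indicator only)"
    return "below the GPAI compute indicator"


# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flop = estimate_training_flop(params=70e9, tokens=15e12)
print(f"{flop:.2e} FLOP -> {classify(flop)}")   # ~6.3e24: presumed GPAI, below 1e25
```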
The Four Additional Article 55 Obligations
Adversarial testing (Article 55(1)(a)). Providers must conduct state-of-the-art model evaluations, including adversarial testing (red-teaming), before placing the model on the market and regularly thereafter. Testing must cover alignment, fine-tuning resistance, capability evaluations for catastrophic harm, fraud, infrastructure attacks, and CBRN (chemical, biological, radiological, nuclear) risks. Results must be documented in Annex XI Section 2 and risks mitigated.
Systemic risk assessment and mitigation (Article 55(1)(b)). Providers must assess and mitigate possible systemic risks at Union level, including their sources. The Code of Practice's Safety & Security Chapter requires adoption of a Safety & Security Framework—a living document describing systemic risk management processes—and a Safety and Security Model Report prepared before each market placement.
Serious incident reporting (Article 55(1)(c)). Providers must report serious incidents to the AI Office via the EU SEND platform without undue delay upon discovery. The Code of Practice specifies notification windows of two to fifteen days depending on severity. This is a direct analog to the NIS2 Directive's incident reporting obligation, applied to AI.
Cybersecurity protection (Article 55(1)(d)). Providers must maintain adequate cybersecurity for both the model and its physical infrastructure throughout the model lifecycle—specifically protecting against weight exfiltration, unauthorized access, model theft, and API exploitation.
The Open-Source Exemption: Narrower Than It Appears
Article 53(2) provides a partial exemption for GPAI models released under free and open-source licenses: they are exempt from the Annex XI technical documentation obligation and the Annex XII downstream information obligation—but not from the copyright compliance policy or training data summary obligations.
To qualify, three conditions must all be satisfied: the license must permit access, use, modification, and distribution without restrictions (research-only or commercial-use-restricted licenses do not qualify); model weights, architecture, and usage information must be publicly available; and the model must not be monetized (directly or indirectly).
The Llama family of models likely does not qualify in most commercial deployment contexts because Meta's license carries commercial-use terms that restrict certain categories of use. Apache 2.0–licensed models like some Mistral releases may qualify if the developer does not monetize the model. The EU Commission's guidelines make clear that making models available through open repositories does not, by itself, constitute monetization—but charging for API access, fine-tuning services, or paid support contracts around the model likely does.
Critically, the open-source exemption provides no relief for systemic-risk models. A provider of an open-source model above the 10²⁵ FLOPs threshold must comply with all Article 53 and Article 55 obligations in full.
The GPAI Code of Practice: A Compliance Pathway
The GPAI Code of Practice, published 10 July 2025 and formally approved on 1 August 2025 by the EU Commission and AI Board, was developed through a multistakeholder process involving nearly 1,000 participants. Adherence is voluntary—but its strategic importance is difficult to overstate.
Providers who sign the Code can demonstrate compliance with Articles 53 and 55 through the Code's measures, rather than through their own bespoke compliance architecture. Non-signatories must demonstrate compliance independently, will face closer scrutiny from the AI Office, and cannot benefit from the Code's interpretive safe harbors. Given enforcement penalties of up to €15 million or 3% of global annual turnover (whichever is higher) for GPAI non-compliance, the cost of building and defending a bespoke compliance case typically exceeds the burden of signing.
The Code is structured in three chapters:
- Transparency Chapter: Covers Article 53(1)(a) and (b) obligations. Provides the Model Documentation Form and information-sharing protocols.
- Copyright Chapter: Covers Article 53(1)(c). Provides copyright policy templates and lawful web-crawling standards.
- Safety & Security Chapter: Covers Article 55. Applies only to systemic-risk model providers. Provides the Safety & Security Framework template and Model Report requirements.
AI governance platforms like Saidot and LatticeFlow AI have incorporated the Code's requirements into their model governance workflows, enabling providers to generate compliant documentation artifacts systematically rather than manually.
Enforcement Timeline
Understanding the staggered timeline is essential for compliance planning:
| Date | Event |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices enforcement begins; AI literacy obligations apply |
| 10 July 2025 | GPAI Code of Practice published |
| 18 July 2025 | Commission guidelines on GPAI scope published |
| 24 July 2025 | Mandatory training data summary template published |
| 2 August 2025 | GPAI obligations (Articles 53–55) legally operative for new models |
| 2 August 2026 | AI Office enforcement powers and fines activate for models placed on the market on or after 2 August 2025 |
| 2 August 2026 | High-risk AI system obligations (Annex III) fully enforceable |
| 2 August 2027 | GPAI obligations apply to models placed on market before 2 August 2025 (transition period ends) |
For organizations evaluating compliance tools, the EU AI Act framework page on this site tracks regulatory developments and links to official Commission documentation.
Practical Implications for Different Actor Types
Foundation model providers (OpenAI, Anthropic, Google DeepMind, Meta, Mistral, Cohere, and equivalents) are the primary targets of Articles 53 and 55. They need to produce Annex XI documentation, publish training data summaries using the mandatory template, implement copyright compliance policies, and—if above 10²⁵ FLOPs—conduct adversarial testing and establish incident reporting channels.
Cloud platform providers (AWS, Azure, Google Cloud) that distribute third-party foundation models via their marketplaces may themselves be acting as GPAI providers if they make those models available on the EU market under their own terms—a grey area the Commission is still clarifying.
Enterprise application builders who access foundation models via API and do not materially modify them are downstream system providers or deployers, not GPAI model providers. They remain subject to high-risk AI obligations if their applications fall under Annex III, and they should request Annex XII documentation from their GPAI suppliers to satisfy their own technical documentation requirements.
Open-source model maintainers should audit whether their licenses satisfy the free and open-source definition, confirm they are not monetizing the model in ways that would void the exemption, and recognize that copyright compliance and training data summary obligations apply regardless.
Key Takeaways
- GPAI obligations under Articles 53 and 55 have applied since 2 August 2025; full enforcement with fines begins 2 August 2026 for new models.
- All GPAI providers must produce Annex XI technical documentation, Annex XII downstream information, a copyright compliance policy, and a training data summary using the AI Office's mandatory template.
- The 10²⁵ FLOPs threshold creates a presumption of systemic risk, triggering four additional obligations: adversarial testing, systemic risk assessment, serious incident reporting, and cybersecurity protection.
- The Code of Practice, published July 2025, is the practical compliance pathway—non-signatories face heavier scrutiny and must independently demonstrate conformity.
- Open-source models get a partial exemption from documentation obligations, but must still comply with copyright and training data transparency requirements; the exemption does not apply to systemic-risk models.
- Downstream builders who access GPAI models via API without material modification are not GPAI providers, but should obtain Annex XII documentation from their suppliers.
Sources
- EU AI Act, Articles 51–55 (high-level summary): https://artificialintelligenceact.eu/high-level-summary/
- GPAI Code of Practice, EU Commission (published 10 July 2025): https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
- Jones Day, EU AI Act: European Commission Publishes GPAI Code of Practice (August 2025): https://www.jonesday.com/en/insights/2025/08/eu-ai-act-european-commission-publishes-generalpurpose-ai-code-of-practice
- EU AI Act Navigator, Articles 53 and 55 Guide: https://euai.app/blog/gpai-obligations-article-53-55-guide
- AI Act Gap, GPAI Obligations (Arts. 53–55): https://www.aiactgap.com/guides/gpai-obligations
- WilmerHale, EU Commission Issues Guidelines for GPAI Providers (July 2025): https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20250724-european-commission-issues-guidelines-for-providers-of-general-purpose-ai-models
- Skadden, EU GPAI Obligations Now in Force (August 2025): https://www.skadden.com/insights/publications/2025/08/eus-general-purpose-ai-obligations
- Hugging Face, What Open-Source Developers Need to Know about EU AI Act GPAI: https://huggingface.co/blog/yjernite/eu-act-os-guideai
- Nemko Digital, EU AI Act GPAI 2025 Update: https://digital.nemko.com/insights/eu-ai-act-rules-on-gpai-2025-update
- EU Commission, GPAI Models FAQ (July 2025): https://digital-strategy.ec.europa.eu/en/faqs/general-purpose-ai-models-ai-act-questions-answers