GPAI Code of Practice: Who Signed, Who Didn't, and What It Means for Enterprise AI Buyers
The EU AI Office published the final General-Purpose AI Code of Practice on July 10, 2025. Google, OpenAI, Anthropic, Microsoft, Mistral, Cohere, Amazon, and IBM signed. Meta publicly refused. Here is what the three chapters require, what Article 56 means for non-signatories, and how procurement teams should respond.
By AI Compliance Vendors Editorial · April 26, 2026 · 8 min read · Last reviewed April 26, 2026
TL;DR
- The EU AI Office published the final General-Purpose AI Code of Practice on July 10, 2025, one month before GPAI obligations under the AI Act became enforceable on August 2, 2025.
- The Code has three chapters — Transparency, Copyright, and Safety and Security — covering obligations under Articles 53 and 55 of the AI Act.
- Confirmed signatories include Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI.
- Meta publicly refused to sign, with its chief global affairs officer calling the Code an "overreach."
- xAI signed only the Safety and Security chapter, meaning it must demonstrate compliance with Transparency and Copyright obligations through other means.
- Non-signatories face heavier regulatory scrutiny and must demonstrate compliance through alternative means under Article 56.
- Enterprise buyers should require GPAI provider transparency documentation in RFPs and contract language today.
Background: Why the Code Exists
The EU AI Act's obligations for providers of general-purpose AI (GPAI) models became applicable on August 2, 2025 — twelve months after the Act entered into force. The European standardisation process typically takes three or more years. Article 56 of the AI Act was designed to bridge this gap: it authorizes the AI Office to facilitate voluntary codes of practice that providers can use to demonstrate compliance with Articles 53 and 55 until harmonised standards are published.
The Code of Practice was drafted through a multi-stakeholder process involving GPAI model providers, downstream deployers, civil society, and independent experts. The final version was published on July 10, 2025, and the European Commission and the AI Board have confirmed it is an adequate voluntary tool for demonstrating compliance with the Act.
For enterprise teams that rely on foundation models — for RAG pipelines, co-pilots, content generation, or customer-facing automation — the Code defines a new accountability floor. Understanding who signed and who did not is no longer an academic exercise.
What the Three Chapters Require
Chapter 1: Transparency (Applies to All GPAI Model Providers)
The Transparency chapter addresses the obligations in Article 53(1)(a) and (b) of the AI Act. It requires signatories to:
- Maintain and update model documentation using a standardised Model Documentation Form. This form covers licensing, technical specifications, use cases, datasets used, compute and energy usage, and capability assessments. Documentation must be retained for at least ten years, made available to the AI Office on request, and provided to downstream providers within fourteen days of a request.
- Publish contact details so the AI Office and downstream providers can request access to relevant documentation.
- Ensure quality and integrity of all documented information, protecting it from unintended alteration.
Providers of free and open-source GPAI models are exempt from the Transparency obligations unless the model poses systemic risk, per Article 53(2) of the AI Act.
For enterprise deployers building on top of foundation models, this chapter is significant: it creates a formal mechanism for requesting documentation from your GPAI provider. If your vendor signed the Code, they are committed to responding to your documentation request within fourteen days.
Chapter 2: Copyright (Applies to All GPAI Model Providers)
The Copyright chapter addresses compliance with Article 53(1)(c), which requires GPAI providers to implement a policy to comply with EU copyright law — particularly rights holders' ability to reserve the use of their works under Article 4(3) of Directive 2019/790.
Signatories commit to five measures, as detailed in the Global Policy Watch analysis of the final Code:
- Draw up and maintain a copyright policy aligned with EU law.
- Only collect web-crawled data from lawfully accessible sources — not circumventing paywalls or scraping sites flagged for persistent copyright infringement.
- Respect machine-readable rights reservations, including robots.txt signals.
- Implement technical safeguards to prevent models from generating copyright-infringing outputs and prohibit infringing uses in acceptable-use policies.
- Designate a contact point for rights holders to submit complaints, with fair and timely resolution processes.
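The robots.txt mechanism referenced above is the most common machine-readable rights-reservation signal in practice. As a minimal sketch, Python's standard-library `urllib.robotparser` can check whether a named crawler is permitted to fetch a URL; the robots.txt content and the crawler name `ExampleAIBot` below are purely illustrative:

```python
from urllib import robotparser

# Hypothetical robots.txt that reserves content against a specific
# AI crawler ("ExampleAIBot" is an illustrative name) while allowing
# all other agents.
robots_txt = """
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(robots_txt)

# The AI crawler is blocked; a generic crawler is not.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

This only covers the robots.txt signal; the Code's commitment extends to other machine-readable reservation mechanisms as they are standardised.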
As Skadden noted in its August 2025 analysis, adherence to the Copyright chapter does not itself constitute compliance with EU copyright law — it demonstrates that a policy framework is in place.
Chapter 3: Safety and Security (Applies Only to GPAI Models with Systemic Risk)
This chapter is the most extensive and applies only to providers of GPAI models with systemic risk — presumed under the AI Act for models trained with cumulative compute exceeding 10^25 floating-point operations (FLOP). Globally, this currently covers a small number of providers, estimated at five to fifteen companies.
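To make the threshold concrete, training compute is often approximated as 6 × parameters × training tokens. The sketch below applies that common approximation to an entirely hypothetical model size and token count — these are not any vendor's actual figures:

```python
# FLOP threshold at which systemic risk is presumed under the AI Act.
SYSTEMIC_RISK_THRESHOLD = 1e25

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D rule."""
    return 6 * params * tokens

# A hypothetical 400B-parameter model trained on 15T tokens:
flop = training_flop(400e9, 15e12)
print(f"{flop:.1e}")                   # 3.6e+25
print(flop > SYSTEMIC_RISK_THRESHOLD)  # True -> systemic-risk presumption
```

The 6ND rule is a back-of-envelope estimate only; a provider's actual classification depends on its reported cumulative training compute.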
Under Article 55 of the AI Act, these providers must:
- Conduct and document adversarial testing to identify and mitigate systemic risks.
- Assess and mitigate systemic risks, including evaluating the sources of those risks.
- Track, document, and report serious incidents to the AI Office without undue delay.
- Ensure cybersecurity protection adequate to the model's risk profile.
Signatories to this chapter must notify the AI Office of their Safety and Security Framework and Safety and Security Model Reports. As the Wikipedia summary of the Code notes, public disclosure is limited: providers are only required to publish summaries of their safety framework when a model may pose greater risk than comparable models already available in the EU.
Who Signed
The official EU Digital Strategy signatory list confirms the following organizations as signatories, as of its most recent public update:
Major foundation model providers: Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI
European and specialized providers: Accexible, AI Studio Delta, Aleph Alpha, Almawave, Black Forest Labs, Bria AI, Domyn, Dweve, Fastweb, Lawise, LINAGORA, Open Hippo, Pleias, ServiceNow, WRITER
Partial signatory: xAI signed only the Safety and Security Chapter, which means it must demonstrate compliance with Transparency and Copyright obligations through alternative adequate means.
For enterprise procurement teams, the presence of all major US hyperscalers and dominant foundation model providers on the list — except Meta — is a meaningful signal. A vendor with systemic-risk GPAI models that has signed the Code has committed to maintaining model documentation, responding to downstream requests, and operating within a structured safety framework overseen by the EU AI Office.
Who Refused — and Why
Meta is the most prominent non-signatory. On July 18, 2025, TechCrunch reported that Meta's chief global affairs officer Joel Kaplan posted on LinkedIn:
"We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act."
Kaplan added that the EU is "heading down the wrong path on AI" and that the Code would "throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them."
Meta characterized the Code as regulatory "overreach" that exceeds the AI Act's actual requirements. This is a significant position: Meta's Llama family of models is widely deployed in enterprise settings, often via self-hosting or through cloud provider integrations. Meta's refusal means enterprises relying on Llama-based models cannot point to Code adherence as evidence of GPAI compliance.
Several Chinese-headquartered providers also did not sign, as noted in the artificialintelligenceact.eu analysis of the signatory landscape.
What Article 56 Means for Non-Signatories
Article 56 of the AI Act creates a structured but asymmetric compliance environment:
For signatories: The Commission and AI Board treat Code adherence as a presumption of conformity with Articles 53 and 55. Enforcement attention focuses on monitoring adherence to the Code. Commitments to the Code may be taken into account as mitigating factors when determining fines.
For non-signatories: Providers must independently demonstrate compliance through "alternative adequate means" for assessment by the Commission. Per the artificialintelligenceact.eu introduction to the Code, non-signatories can expect a larger number of requests for information from the AI Office and will typically need to provide more detailed responses.
The enforcement timeline matters: the GPAI rules took effect on August 2, 2025, but the Commission's active enforcement actions — requests for information, access to models, or model recalls — begin on August 2, 2026. For models placed on the market before August 2, 2025, providers have until August 2, 2027 to achieve compliance.
Fine exposure is not trivial. As MIAI's analysis of the Code notes, the Commission can impose fines of up to 3% of annual worldwide turnover or €15 million, whichever is higher, on providers who intentionally or negligently infringe the regulation — and Code adherence is explicitly referenced as a factor in fine calculation.
Practical Implications for Enterprise AI Buyers
For organizations procuring foundation models or building on top of GPAI systems, the Code's signatory landscape reshapes vendor due diligence in three concrete ways.
1. Downstream documentation rights are now enforceable for signatories. If you integrate a GPAI model from a signatory into your AI system and you need documentation to meet your own AI Act obligations (as a high-risk AI system provider, for example), you can formally request it. The Code commits signatories to respond within fourteen days. Build this into your contracts.
2. Non-signatory providers require more diligence, not less. If a vendor's GPAI model is not covered by the Code — either because the vendor refused to sign or because the model falls below the systemic-risk threshold — you cannot rely on Code adherence as a compliance proxy. You need to assess their alternative compliance approach directly. Ask for their technical documentation, their copyright policy, and, for systemic-risk models, their safety framework.
3. RFP language should reflect Code status. Enterprise procurement teams should add a standard question to foundation model RFPs: "Has your organization signed the EU GPAI Code of Practice? If yes, which chapters? If no, what alternative means of compliance with Articles 53 and 55 of the EU AI Act have you implemented, and can you provide documentation?" This is not a disqualifying question — it is a transparency requirement.
A vendor who has signed the Code and adheres to its documentation standards is demonstrably easier to audit. A vendor who has not signed, but can articulate a clear compliance posture, is acceptable. A vendor who cannot answer the question should be treated as a material compliance risk.
The Signatory Taskforce and Ongoing Monitoring
Signatories of the Code have established a Signatory Taskforce, chaired by the AI Office, to facilitate coherent application of the Code. The Taskforce meets regularly to exchange views on implementation and supports the AI Office's monitoring of Code adherence.
The Code is also not static. Article 56(8) of the AI Act requires the AI Office to encourage and facilitate review and adaptation of the Code as technical standards emerge. When CEN-CENELEC finalises harmonised AI standards — expected no earlier than 2027 — the Code may be superseded by or integrated into those standards.
What to Do Now
- Audit your GPAI vendor stack. List every foundation model provider in your production or near-production environment and check their signatory status against the official EU Digital Strategy signatory page.
- Update contracts. For Code signatories, add contractual language requiring the vendor to maintain Code adherence and to respond to documentation requests within the fourteen-day window the Code specifies.
- For non-signatory vendors, request written documentation of their alternative compliance approach under Articles 53 and 55. Store this documentation.
- For Meta/Llama deployments specifically, engage your legal and compliance team on the implications of deploying a model from a non-signatory provider in EU-facing use cases. Consider whether the deployment falls under high-risk AI Act categories that would impose additional conformity obligations on you as a deployer.
- Watch the August 2, 2026 enforcement date. This is when the AI Office can begin active enforcement actions. Have your documentation in order before that date.
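The vendor-stack audit above can be sketched in a few lines. The signatory set and the vendor stack below are illustrative placeholders — refresh the signatory set from the official EU Digital Strategy page before relying on it:

```python
# Illustrative signatory set (major providers from the public list);
# refresh from the official EU Digital Strategy signatory page.
SIGNATORIES = {
    "Amazon", "Anthropic", "Cohere", "Google",
    "IBM", "Microsoft", "Mistral AI", "OpenAI",
}

def classify_stack(vendors: list[str]) -> dict[str, str]:
    """Map each vendor in a model stack to its Code status."""
    return {
        v: "signatory" if v in SIGNATORIES else "non-signatory"
        for v in vendors
    }

stack = ["OpenAI", "Meta", "Mistral AI"]  # hypothetical production stack
for vendor, status in classify_stack(stack).items():
    action = ("request documentation under the Code"
              if status == "signatory"
              else "request alternative-compliance evidence (Art. 56)")
    print(f"{vendor}: {status} -> {action}")
```

A real audit would also record chapter-level status (e.g. xAI's Safety-and-Security-only signature) rather than a binary flag.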
Sources: [EU Digital Strategy — GPAI Code of Practice](https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai) · [Signatory Taskforce page](https://digital-strategy.ec.europa.eu/en/policies/signatory-taskforce-gpai-code-practice) · [Article 55, EU AI Act](https://artificialintelligenceact.eu/article/55/) · [Article 56, EU AI Act](https://artificialintelligenceact.eu/article/56/) · [Meta refusal — TechCrunch](https://techcrunch.com/2025/07/18/meta-refuses-to-sign-eus-ai-code-of-practice/) · [Introduction to the Code of Practice — artificialintelligenceact.eu](https://artificialintelligenceact.eu/introduction-to-code-of-practice/) · [Final Code analysis — Global Policy Watch](https://www.globalpolicywatch.com/2025/07/ai-office-publishes-final-version-of-the-code-of-practice-for-general-purpose-ai-models/) · [Skadden GPAI obligations analysis](https://www.skadden.com/insights/publications/2025/08/eus-general-purpose-ai-obligations) · [Wikipedia — General-Purpose AI Code of Practice](https://en.wikipedia.org/wiki/General-Purpose_AI_Code_of_Practice) · [MIAI — Hidden Policy Choices](https://ai-regulation.com/gpai-cop-hidden-policy-choices/) · [Latham & Watkins analysis](https://www.lw.com/en/insights/eu-ai-act-gpai-model-obligations-in-force-and-final-gpai-code-of-practice-in-place)