Introduction: Why Transparency Matters in AI Governance
Artificial Intelligence (AI) systems are transforming the way industries operate, from healthcare and education to finance and the creative sector. With this rapid adoption comes an equally pressing demand: ensuring that AI complies with its legal frameworks. Transparency, in particular, is a vital property of general-purpose AI models.
The European Union’s General-Purpose AI (GPAI) Code of Practice provides a voluntary but powerful framework to help companies align with the EU AI Act. It is designed not only for signatories but also for organizations that want to demonstrate good-faith compliance and adopt responsible AI practices.
At the heart of this framework lies Transparency, one of three foundational chapters, alongside Copyright and Safety & Security. Transparency is more than a bureaucratic exercise: it is the backbone of trust. Clear, accessible documentation enables downstream providers, regulators, and end-users to understand how AI models are built, what they can and cannot do, and under what conditions they may be safely deployed.
This article explores the objectives of the three chapters, then takes a deep dive into the Transparency Chapter — focusing on Commitment 1: Documentation and the practical Model Documentation Form.
Objectives of the GPAI Code of Practice
The Code’s overarching goal is to promote the uptake of human-centric and “trustworthy” AI while ensuring compliance with the EU AI Act. Specifically, it aims to:
- Serve as a guiding document for demonstrating compliance with Articles 53 and 55 of the AI Act, which mandate transparency and systemic risk obligations.
- Provide a framework for the AI Office to assess compliance of GPAI providers who rely on the Code.
- Support the internal market by harmonizing best practices across EU Member States, ensuring AI innovation does not outpace accountability.
Important clarification: As the Transparency Chapter itself notes, adherence to the Code does not constitute conclusive evidence of compliance with the EU AI Act. According to Article 53(4) AI Act, only compliance with European harmonised standards provides providers with a presumption of conformity to the extent that those standards cover the obligations. Nevertheless, the Code of Practice remains a valuable orientation tool for GPAI providers, helping them translate regulatory obligations into concrete practices and reducing the risk of missteps on the path to compliance.
Each chapter serves a different but complementary role:
- Transparency → requires providers to document and disclose information that downstream users and regulators need.
- Copyright → ensures training practices and outputs respect Union copyright law.
- Safety & Security → sets standards for managing systemic risks in high-impact AI models.
Together, these objectives reflect the AI lifecycle from lawful training to transparent documentation, and from safe deployment to continuous monitoring.
Commitment 1: Documentation – The Foundation of Transparency
One way to ensure transparency is the proper documentation of AI models. The Transparency Chapter therefore introduces Commitment 1: Documentation, which applies to all GPAI model providers (unless exempted under Article 53(2) AI Act for certain open-source cases).
Key Provider Duties
Under this commitment, providers must:
- Draw up and maintain documentation (Measure 1.1)
  - At model release, complete documentation covering all required fields in the Model Documentation Form.
  - Keep the documentation updated as the model evolves.
  - Retain past versions for 10 years after release.
- Provide relevant information (Measure 1.2)
  - Publicly disclose, via their website (or another appropriate channel if no website exists), contact information through which the AI Office and downstream providers can request access to model documentation.
  - Make relevant documentation available to the AI Office and national competent authorities upon request.
  - Share appropriate parts of the documentation with downstream providers, helping them integrate the model and comply with their own AI Act obligations.
- Ensure information integrity and security (Measure 1.3)
  - Protect records against unauthorized alteration.
  - Follow established technical standards for version control and record-keeping.
These requirements ensure that transparency is not a one-off exercise but a living commitment throughout the model lifecycle.
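The integrity duty in Measure 1.3 can be met with ordinary engineering tooling. As a minimal sketch (the Code does not prescribe any specific mechanism, and the record fields below are purely hypothetical), a provider might fingerprint each documentation version with a cryptographic hash so that later tampering is detectable:

```python
import hashlib
import json

def fingerprint(doc: dict) -> str:
    """Return a SHA-256 hash of a documentation record.

    Serializing with sorted keys makes the hash stable regardless of
    dict ordering, so any change to the content changes the hash.
    """
    canonical = json.dumps(doc, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical documentation record for one model version.
record = {
    "model_name": "example-model",
    "version": "1.2.0",
    "release_date": "2025-01-15",
    "fields": {"parameters": "7B", "modalities": ["text"]},
}

# Store the hash alongside the archived record; re-hashing later and
# comparing detects unauthorized alteration (Measure 1.3).
stored_hash = fingerprint(record)
assert fingerprint(record) == stored_hash  # unchanged record verifies
```

In practice a provider would more likely keep documentation in a version-control system, which offers the same tamper-evidence plus the 10-year retention history Measure 1.1 calls for.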
The Model Documentation Form: A Practical Tool
To simplify compliance, the Transparency Chapter provides a Model Documentation Form. This standardized template serves as both a compliance tool and a trust-building mechanism.
What Does the Form Include?
The form is divided into sections, each tailored to different audiences — the AI Office (AIO), national competent authorities (NCAs), and downstream providers (DPs).
Key fields include:
- General Information
  - Provider’s legal name, model name, release date, dependencies.
  - Evidence of model authenticity (e.g., hashes, secure endpoints).
- Model Properties
  - Architecture description (e.g., transformer-based).
  - Total parameters, design specifications, input/output modalities.
- Distribution and Licensing
  - Distribution channels (APIs, software, open-source repositories).
  - Associated licenses and terms of use.
- Compute and Efficiency Metrics
  - Training compute (FLOPs), energy consumption, efficiency benchmarks.
- Use Cases and Limitations
  - Intended uses, prohibited applications, and known model limitations.
- Risk-Related Information
  - Mitigation measures and safety controls applied.
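Because different sections of the form are addressed to different audiences (AIO, NCAs, DPs), some providers may find it useful to keep the form machine-readable with per-field access tags. The sketch below is purely illustrative: the Code publishes the form as a document, not a schema, and none of the field names here are official.

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class Audience(Flag):
    """Who may access a given field under the form's access controls."""
    AIO = auto()            # AI Office
    NCA = auto()            # national competent authorities
    DP = auto()             # downstream providers
    ALL = AIO | NCA | DP    # shared with every audience

@dataclass
class FormField:
    name: str
    value: str
    audience: Audience

@dataclass
class ModelDocumentationForm:
    provider_legal_name: str
    model_name: str
    release_date: str
    fields: list[FormField] = field(default_factory=list)

    def visible_to(self, who: Audience) -> list[FormField]:
        """Return only the fields the given audience may see."""
        return [f for f in self.fields if who & f.audience]

# Hypothetical example: one field restricted to regulators, one shared widely.
form = ModelDocumentationForm(
    provider_legal_name="Example AI GmbH",
    model_name="example-model",
    release_date="2025-01-15",
    fields=[
        FormField("training_compute_flops", "1e24", Audience.AIO | Audience.NCA),
        FormField("intended_uses", "general text generation", Audience.ALL),
    ],
)

# A downstream provider sees only the fields shared with downstream providers.
dp_view = form.visible_to(Audience.DP)
```

Tagging fields this way makes it straightforward to generate one redacted extract per audience from a single source of truth, which keeps the regulator-facing and downstream-facing versions consistent.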
Access Controls and Confidentiality
Not all information is public. In line with Article 78 of the AI Act, data shared with the AI Office or NCAs is subject to confidentiality obligations, including protection of trade secrets and intellectual property.
This balance between transparency and confidentiality is vital: it allows regulators to scrutinize models while ensuring providers can protect commercially sensitive data.
Recital Guidance: Why Documentation Matters
Documentation is key to achieving transparency, and several recitals of the Transparency Chapter emphasize why:
- Recital (a) highlights that GPAI providers play a unique role in the AI value chain: their models form the backbone of countless downstream systems, whose providers must understand the models they integrate into their products.
- Recital (b) notes that obligations should be proportionate — if a model is merely fine-tuned or lightly modified, the provider’s duties extend only to those modifications.
- Recital (c) stresses that documentation should adapt to technological and market developments, ensuring ongoing relevance.
Together, these recitals remind providers that documentation is not simply regulatory “box-ticking,” but a practical enabler of accountability and interoperability.
Practical Benefits for Providers
For Signatories
- Provides a clear roadmap for meeting AI Act transparency obligations, helping providers reduce compliance risks even though the Code itself does not establish a legal presumption of conformity (as only harmonised standards do under Article 53(4) AI Act).
- Demonstrates commitment to compliant AI, strengthening public and investor trust.
- Provides a structured basis for engagement with regulators.
For Non-Signatories
- Offers a voluntary guideline to prepare for audits or due diligence processes.
- Provides a competitive advantage by showcasing proactive compliance.
- Helps embed best practices in AI governance frameworks, even outside the EU market.
Case Study Example: Downstream Integration
Imagine a GPAI provider releasing a large language model. A fintech company wants to integrate it into a creditworthiness-assessment tool, a use case classified as high-risk under Annex III of the AI Act.
Without documentation, the fintech has no way to assess the model’s limitations, provenance, or reliability. But with the standardized documentation form, the fintech gains:
- Clarity on the model’s training and capabilities.
- Transparency about acceptable use cases.
- Confidence that it can meet its own obligations under the AI Act.
This is how transparency creates synergies across the AI ecosystem.
Challenges Providers May Face
- Resource burden: Smaller providers may find it challenging to maintain documentation to this standard.
- Dynamic updates: Rapid model iterations make record-keeping complex.
- Balancing openness and secrecy: Striking the right balance between public transparency and IP protection requires careful judgment.
Nonetheless, the long-term benefits outweigh short-term costs, as transparent documentation reduces both compliance risk and reputational damage.
Conclusion: Transparency as a Strategic Advantage
The Transparency Chapter of the GPAI Code of Practice is not just about compliance. It is about building trust, enabling safe downstream innovation, and fostering accountability in Europe’s AI ecosystem.
By adopting Commitment 1 and using the Model Documentation Form, providers can:
- Enhance their efforts to align with the EU AI Act.
- Build confidence with regulators and customers.
- Set themselves apart as leaders in responsible AI governance.
A Call-to-Action
Whether you are considering signing the Code or simply aligning your practices voluntarily:
If you are an innovator → Treat transparency as a feature, not a burden — it will be a competitive advantage in the era of AI regulation.
If you are a GPAI provider → Start preparing your documentation framework today.
If you are a downstream company → Ask your model providers for documentation to ensure your own compliance.
The EU AI Act is a complex regulatory framework. If you are developing AI models, make sure you don't neglect legal compliance.