Artificial intelligence powers translation tools, image generators, and analytical systems. These General-Purpose AI Models (GPAI) underpin countless downstream uses. The EU Code of Practice for General-Purpose AI Models is a voluntary but authoritative framework that anticipates the EU AI Act’s binding Articles 53 and 55 on transparency, copyright, and systemic-risk management. In my earlier blog posts, I explored the Transparency and Copyright Chapters. This final post turns to the Code’s most crucial pillar, Safety and Security, which aims to ensure reliable and ethical AI across the model lifecycle. Safe and secure GPAI models form the foundation of sustainable AI.
The Code is a voluntary framework. It is not legally binding in itself but is designed to:
- Help providers demonstrate compliance with the EU AI Act, especially Articles 53 and 55, which impose obligations on transparency, copyright, and systemic-risk management.
- Offer a clear orientation tool for providers navigating complex legal requirements.
- Provide the AI Office with a reference point to assess compliance for those providers who rely on the Code.
Important clarification: This post provides an initial overview of the essential sample measures set out in the Safety and Security Chapter of the GPAI Code of Practice, the most complex and comprehensive of the Code’s chapters. It is not intended to be exhaustive, and signatories of the Code will need to take further measures to ensure their adherence.
Understanding the Purpose
GPAI must be safe and secure. The Safety and Security Chapter translates Europe’s risk-based policy into ten concrete commitments for providers of GPAI models with systemic risk, i.e. very large or high-impact models.
A systemic risk is “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain” (Article 3(65) of the EU AI Act). Examples of systemic risks are provided in the overview of Commitment 2 below.
Commitment 1 – Safety & Security Framework
A Safety & Security Framework is a governance document integrating technical, organisational, and procedural safeguards for identifying, assessing, and mitigating AI risks.
Under the Code of Practice, providers of GPAI models commit to creating and maintaining a living Safety & Security Framework which documents risk management across the model lifecycle.
- Create: a state-of-the-art Framework that notably contains a high-level description of implemented and planned processes and measures for systemic risk assessment and mitigation (Measure 1.1).
- Implement: Carry out ongoing risk assessments and apply risk mitigations (Measure 1.2).
- Update: Revise the Framework following each framework assessment by the signatory and at least every 12 months (Measure 1.3).
- Notify the EU AI Office: Submit Framework updates within five business days (Measure 1.4).
Recitals (a) and (g) of the Safety and Security Chapter of the Code require lifecycle risk management and application of the precautionary principle to systemic risks for which the lack or quality of scientific data does not yet permit a complete assessment.
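To make the cadence behind Measures 1.3 and 1.4 concrete, here is a minimal Python sketch of the two deadlines a provider has to track. The function names and the weekends-only business-day rule are illustrative assumptions; the Code prescribes the deadlines, not any particular tooling.

```python
from datetime import date, timedelta

def next_framework_review(last_assessment: date) -> date:
    """Latest date for the next Framework update under Measure 1.3
    (following each assessment and at least every 12 months)."""
    return last_assessment + timedelta(days=365)

def notification_deadline(update_date: date, business_days: int = 5) -> date:
    """Deadline for submitting the updated Framework to the EU AI Office
    under Measure 1.4, counting business days and skipping weekends only
    (public holidays ignored for simplicity)."""
    current = update_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

print(next_framework_review(date(2025, 1, 15)))  # 2026-01-15
print(notification_deadline(date(2025, 1, 15)))  # 2025-01-22
```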
Commitment 2 – Systemic Risk Identification
GPAI must be safe and secure, but systemic risks must first be identified before they can be managed.
Providers of GPAI models who have pledged to adhere to the Code of Practice must use structured processes to identify systemic risks such as risks from enabling chemical, biological, radiological, and nuclear (“CBRN”) attacks or accidents, loss of control over the GPAI model, and misuse of the model for cyber offence or harmful manipulation (Commitment 2, Measures 2.1–2.3 and Appendix 1.4).
Providers of GPAI models should maintain a risk register, conduct red-teaming sessions to uncover model vulnerabilities, and involve multidisciplinary experts to validate findings.
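As an illustration of what such a risk register could look like in practice, here is a small Python sketch. The Code does not prescribe any schema, so every field name below is an assumption; only the risk categories mirror the examples from Commitment 2 and Appendix 1.4.

```python
from dataclasses import dataclass, field
from enum import Enum

class SystemicRiskType(Enum):
    """Risk categories mirroring the examples in Commitment 2 / Appendix 1.4."""
    CBRN = "chemical, biological, radiological or nuclear attacks or accidents"
    LOSS_OF_CONTROL = "loss of control over the GPAI model"
    CYBER_OFFENCE = "misuse for cyber offence"
    HARMFUL_MANIPULATION = "harmful manipulation"

@dataclass
class RiskRegisterEntry:
    risk_type: SystemicRiskType
    description: str
    identified_by: str                       # e.g. "red-teaming session 2025-03"
    expert_reviewers: list[str] = field(default_factory=list)
    status: str = "open"                     # open / mitigated / accepted

# One register entry uncovered during red-teaming and validated by experts.
register = [
    RiskRegisterEntry(
        risk_type=SystemicRiskType.CYBER_OFFENCE,
        description="Model generates working exploit code on request",
        identified_by="internal red team",
        expert_reviewers=["security lead", "external cyber expert"],
    )
]
```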
Commitment 3 – Risk Analysis
After identification, risks must be analyzed. For this purpose, providers must gather model-independent information (Measure 3.1) and apply state-of-the-art model evaluation methods (Measure 3.2) such as:
- Benchmarking
- Adversarial testing
- Simulations
Moreover, providers of GPAI models who adopt the Code of Practice commit to conducting systemic risk modeling, estimating the probability and severity of harm for each systemic risk, and conducting post-market monitoring (Measures 3.3 to 3.5).
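To illustrate the probability-and-severity estimate in Measure 3.4, here is a toy scoring sketch. The Code requires estimating both dimensions but fixes no scale; the 1–5 ordinal scales and the multiplicative score are purely illustrative assumptions.

```python
def risk_score(probability: int, severity: int) -> int:
    """Combine ordinal probability and severity estimates (1 = low, 5 = high)
    into a single score; both scales are assumptions, not taken from the Code."""
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity

# Example: a moderately likely (3) but very severe (5) harm scenario.
score = risk_score(probability=3, severity=5)
print(score)  # 15 -- feeds into the risk-acceptance criteria of Commitment 4
```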
Commitment 4 – Risk Acceptance
GPAI must be safe and secure, so providers must not accept untenable risks stemming from their GPAI models. To that end, providers must establish risk-acceptance criteria (Measures 4.1–4.2) defining when systemic risks are acceptable so that development of a GPAI model may proceed. If the systemic risks stemming from a GPAI model are not determined to be acceptable, the model must not be placed on the market.
Practical example: A model capable of generating hazardous chemical instructions must not be released until safeguards lower the risk below threshold levels.
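Building on the toy scores above, a risk-acceptance gate along the lines of Measures 4.1–4.2 could be sketched as follows. The threshold value and the blocking logic are illustrative assumptions; the Code requires defined acceptance criteria, not this particular mechanism.

```python
ACCEPTANCE_THRESHOLD = 8  # assumed provider-defined criterion

def may_proceed(residual_risk_scores: list[int]) -> bool:
    """Development or release may proceed only if every residual systemic
    risk score falls below the provider's acceptance threshold."""
    return all(score < ACCEPTANCE_THRESHOLD for score in residual_risk_scores)

residual_risks = [4, 6, 15]  # includes the high score from the analysis sketch
if not may_proceed(residual_risks):
    print("Do not place the model on the market; apply further mitigations.")
```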
Commitment 5 – Safety Mitigations
The Code of Practice Safety and Security Chapter lists mitigation techniques designed to minimize the risk of unsafe outcomes, including, but not limited to, the following:
- Filtering and cleaning of training data (sketched below)
- Model fine-tuning to refuse certain requests
- Tiered access controls
- Application of techniques that enable safe ecosystems of AI agents.
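To give a feel for the first of these mitigations, filtering of training data, here is a deliberately simplified sketch. Real pipelines use trained safety classifiers rather than keyword lists; both the blocklist and the filter function are toy assumptions.

```python
# Toy blocklist standing in for a trained safety classifier.
BLOCKLIST = {"nerve agent", "synthesis route", "enrichment cascade"}

def is_safe_for_training(document: str) -> bool:
    """Drop documents containing hazardous-content markers before training."""
    text = document.lower()
    return not any(marker in text for marker in BLOCKLIST)

corpus = [
    "A history of European chemistry.",
    "Step-by-step nerve agent synthesis route ...",
]
filtered = [doc for doc in corpus if is_safe_for_training(doc)]
print(len(filtered))  # 1 -- the hazardous document is removed
```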
Commitment 6 – Security Mitigations
Providers commit to
- Define a security goal that specifies the threat actors to protect against and the security mitigations deployed against them (Measure 6.1).
- Safeguard the GPAI model and infrastructure by implementing the security mitigation objectives and measures defined in Appendix 4 to the Safety and Security Chapter in order to meet their defined security goal (Measure 6.2).
Safety mitigations prevent harm from model behavior; security mitigations protect the model against cyber attacks.
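A security goal under Measure 6.1 could be recorded in machine-readable form roughly as follows. The threat-actor names and mitigation labels are illustrative assumptions loosely inspired by Appendix 4, not quotations from it.

```python
# Illustrative, machine-readable security goal per Measure 6.1: which threat
# actors are in scope and which mitigations address them.
SECURITY_GOAL = {
    "protected_asset": "unreleased model weights",
    "threat_actors": {
        "external attacker": ["network segmentation", "intrusion detection"],
        "malicious insider": ["tiered access controls", "audit logging"],
    },
}

for actor, mitigations in SECURITY_GOAL["threat_actors"].items():
    print(f"{actor}: {', '.join(mitigations)}")
```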
Commitment 7 – Model Reports
Before deployment, providers of systemic-risk models must file a Safety & Security Model Report with the AI Office (Measures 7.1–7.5).
The Model Report must include model information such as:
- A high-level description of the model’s architecture, capabilities, propensities, and affordances
- A description of the expected use of the Model
- A description of the current model version
- A model specification
as required in more detail by Measure 7.1.
Moreover, the Model Report must provide reasons for proceeding with the development, the making available on the market, and/or the use of the model (Measure 7.2), as well as documentation of systemic risk identification, analysis, and mitigation (Measure 7.3).
If external auditors contributed to the model evaluation or model security reviews, the Model Report must provide a link to their findings (Measure 7.4). If no external auditors were involved, the Model Report must detail how the requirements set forth in Appendix 3.5 to the Safety and Security Chapter of the Code of Practice have been met otherwise (Measure 7.5). Providers must also ensure that their Model Report contains information enabling the EU AI Office to understand how the development, making available on the market, and/or use of the model result in material changes to the systemic risk landscape that are relevant for taking risk assessment and mitigation measures (Measure 7.5).
Model Reports must be updated in accordance with Measure 7.6 whenever significant model changes or new vulnerabilities arise.
Model Reports ensure transparency between providers, regulators, and the public, bridging the gap between technical testing and regulatory oversight.
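As a rough illustration of how the required contents of a Model Report under Measures 7.1 to 7.4 might be tracked internally, consider this sketch. The dataclass structure and field names are assumptions; the Code mandates the content, not any format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelReport:
    """Fields tracking the contents required by Measures 7.1 to 7.4;
    the structure itself is an assumption, not prescribed by the Code."""
    architecture_summary: str            # high-level description (Measure 7.1)
    capabilities_and_affordances: str
    expected_use: str
    model_version: str
    model_specification: str
    reasons_for_proceeding: str          # Measure 7.2
    risk_documentation: str              # identification, analysis, mitigation (Measure 7.3)
    external_findings_link: Optional[str] = None  # Measure 7.4, if auditors were involved
```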
Commitment 8 – Governance & Accountability
Providers must
- assign clear responsibilities for managing the systemic risks stemming from their GPAI models (Measure 8.1)
- allocate appropriate resources to the management bodies to which such responsibilities have been delegated (Measure 8.2)
- promote a healthy risk culture (Measure 8.3).
As a first step in assigning responsibilities under Commitment 8, appoint a Chief AI Safety Officer, establish an AI Safety Board, and integrate reporting lines to senior management.
Commitment 9 – Incident Reporting
GPAI must be safe and secure, so providers must:
- Review internal and external sources to keep track of serious incidents, and facilitate serious-incident reporting by downstream modifiers, downstream providers, users, and other third parties to the provider of the GPAI model and the EU AI Office (Measure 9.1)
- Document essential information on serious incidents detailed in Measure 9.2
- Notify the EU AI Office and affected parties about serious incidents without undue delay (Measure 9.3)
- Preserve incident logs for a minimum of 5 years (Measure 9.4).
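A minimal sketch of a serious-incident record and the five-year retention check might look as follows; the field names are assumptions, while the retention period comes from Measure 9.4 as described above.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=5 * 365)  # five-year minimum from Measure 9.4

incident = {
    "occurred_at": datetime(2025, 6, 1, 14, 30),
    "description": "Model produced an operational cyber-offence payload",
    "affected_parties_notified": True,   # Measure 9.3
    "reported_to_ai_office": True,       # Measure 9.3
}

def may_delete(record: dict, now: datetime) -> bool:
    """Incident logs must be preserved for at least five years (Measure 9.4)."""
    return now - record["occurred_at"] >= RETENTION

print(may_delete(incident, datetime(2027, 1, 1)))  # False -- keep the log
```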
Commitment 10 – Additional documentation and transparency
GPAI must be safe and secure, so documenting the measures taken to achieve this is essential. To comply with their documentation duties (Measure 10.1), providers must
- Document how they have implemented the Safety and Security Chapter of the EU GPAI Code of Practice.
- Draw up, and keep up to date, additional information on the model architecture, its integration into an AI system, the model evaluations conducted, and the safety mitigations implemented, to provide to the EU AI Office upon request.
- Retain their documentation for at least 10 years after placing their GPAI model on the market.
- Keep track of the information needed to evidence their compliance with the Safety and Security Chapter of the Code of Practice to the AI Office upon request.
To achieve public transparency, and to the extent necessary for assessing and/or mitigating systemic risks, providers must publish a summary of their Framework and Model Report(s). This is not a one-time effort: providers must also publish any updates on their website or in other media (Measure 10.2).
Small and medium enterprises (SMEs) and small mid-cap enterprises (SMCs)
For SMEs and SMCs, including startups, recital (h) of the Safety and Security Chapter states that “simplified ways of compliance should be possible as proportionate.” By way of example, recital (h) mentions exempting such signatories of the Code of Practice from some reporting commitments (Article 56(5) AI Act).
Conclusion
GPAI must be safe and secure. The Safety and Security Chapter of the GPAI Code of Practice operationalizes Europe’s principles of trustworthy AI through measurable commitments. By taking the measures detailed in this Chapter, providers demonstrate proactive stewardship.
Your next step if you are a GPAI model provider: Audit your safety and security framework, assign clear roles, and adopt the ten commitments to strengthen trust and resilience across the AI value chain.
The EU AI Act is a complex regulatory framework. If you are developing AI models, make sure that you don’t neglect legal compliance.