EU AI Act: EC Publishes Code of Practice for General-Purpose AI Models

by John Jenkins

July 17, 2025

Earlier this month, the European Commission issued its General-Purpose AI Code of Practice (the “AI Code”). The EU AI Act’s provisions relating to general-purpose AI models are due to come into effect at the end of this month. Compliance with the AI Code is voluntary but can help a company demonstrate compliance with certain provisions of the statute. This excerpt from a Hunton Andrews Kurth blog provides an overview of the key provisions of the AI Code:

The AI Code is divided into three separately authored chapters: transparency obligations, copyright, and safety and security. Each chapter addresses specific aspects of compliance under the EU AI Act:

– Transparency: This chapter provides a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the documentation and practices required to meet transparency standards. In particular, signatories to the AI Code can comply with the EU AI Act’s transparency requirements by maintaining information in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.

– Copyright: This chapter details how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which requires a provider to put in place a policy to comply with EU law on copyright and to identify and comply with expressed reservations of rights. The AI Code provides several measures to demonstrate compliance with Article 53(1)(c), such as implementing a copyright policy that incorporates the other measures of the chapter and designating a point of contact for complaints concerning copyright.

– Safety and security: This chapter applies only to providers responsible for general-purpose AI models with systemic risk and relates to the obligations under Article 55 of the EU AI Act. The chapter details the measures needed to assess and mitigate risks associated with these advanced models, such as creating and adopting a framework detailing the processes and measures for systemic risk assessment and mitigation, implementing appropriate safety and security mitigations, and developing a model report containing details about the AI model and its systemic risk assessment and mitigation processes, which may be shared with the AI Office.

The blog says that the EC will issue guidelines to complement the AI Code later this month. Those guidelines are expected to clarify key concepts related to general-purpose AI models and to ensure their consistent interpretation and application.