A Guide to the AI Code of Practice for General-Purpose AI (GPAI)
The European Union has introduced the General-Purpose AI (GPAI) Code of Practice, a voluntary framework designed to help the AI industry comply with the upcoming AI Act. Published on July 10, 2025, the code is a key tool for providers of general-purpose AI models to demonstrate their commitment to safety, transparency, and copyright compliance. By adopting the code, companies can gain greater legal certainty and reduce their administrative burden.
Key Components of the Code
The GPAI Code of Practice is structured into three main chapters, each addressing a critical aspect of AI development and deployment.
1. Transparency 🕵️‍♀️
This chapter provides a “Model Documentation Form” to assist providers in meeting their transparency obligations under the AI Act. It guides them in documenting essential information about their AI models, ensuring that users and regulators have clear insights into how the technology works.
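For illustration only, here is a minimal sketch of how a provider might keep this kind of documentation in a structured, machine-readable record. The field names below are hypothetical and are not taken from the official Model Documentation Form.

```python
# Hypothetical sketch of a machine-readable model documentation record.
# Field names are illustrative only; the official Model Documentation Form
# published with the Code of Practice defines the actual required items.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str                      # ISO 8601 date
    intended_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""        # high-level description, not raw data
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialize the record so it can be shared with regulators or downstream providers."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    doc = ModelDocumentation(
        model_name="example-gpai-7b",
        provider="Example AI Ltd.",
        release_date="2025-07-10",
        intended_uses=["text generation", "summarization"],
        training_data_summary="Publicly available web text plus licensed corpora.",
        known_limitations=["May produce factual errors", "English-centric"],
        evaluation_results={"mmlu": 0.62},
    )
    print(doc.to_json())
```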
2. Copyright ⚖️
To address intellectual property concerns, this chapter offers practical solutions for providers to comply with EU copyright law. It outlines the policies and procedures needed to ensure that the data used to train AI models respects the rights of content creators.
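As a hedged example of the kind of procedure this chapter points toward, the sketch below checks a site's robots.txt before a page is considered for a training corpus. The crawler name and the decision to drop pages whose robots.txt cannot be fetched are assumptions for illustration, not requirements taken from the Copyright chapter.

```python
# Minimal sketch: skip URLs whose robots.txt disallows crawling for our bot.
# The user agent string and the conservative handling of unreachable sites are
# illustrative choices, not requirements from the Copyright chapter.
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "example-training-crawler"    # hypothetical crawler name


def is_collection_allowed(url: str) -> bool:
    """Return True only if the site's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    parser = robotparser.RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        parser.read()                      # fetch and parse the site's robots.txt
    except OSError:
        return False                       # be conservative if the file is unreachable
    return parser.can_fetch(USER_AGENT, url)


if __name__ == "__main__":
    candidate_urls = [
        "https://example.com/articles/public-post",
        "https://example.com/private/archive",
    ]
    training_candidates = [u for u in candidate_urls if is_collection_allowed(u)]
    print(training_candidates)
```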
3. Safety and Security 🔒
This chapter is specifically for providers of the most advanced AI models that pose a systemic risk. It outlines state-of-the-art practices for managing and mitigating these risks, ensuring that powerful AI systems are developed and deployed responsibly.
How the Code Will Impact AI Adoption
The EU’s General-Purpose AI (GPAI) Code of Practice is likely to have a significant and multifaceted impact on AI adoption, both within the EU and globally.
Potential Positive Impacts on AI Adoption:
Increased Trust and Legal Certainty: By providing a clear framework for compliance with the EU AI Act, the voluntary code offers companies a way to demonstrate their commitment to safety, transparency, and copyright compliance. This can build consumer and business trust in AI systems, encouraging wider adoption. The reduced administrative burden and greater legal certainty offered to signatories are a strong incentive to sign.
Encouraging Responsible Innovation: The code sets a standard for responsible AI development, particularly for powerful models with systemic risks. This includes requirements for risk assessment, mitigation, and cybersecurity. While some may see this as a constraint, it can also lead to more robust, reliable, and secure AI systems, which are more likely to be adopted in critical sectors.
Standardization: The code helps to standardize practices across the industry. This is beneficial for downstream providers who integrate GPAI models into their own systems, as it ensures they have a consistent understanding of the models’ capabilities, limitations, and risks. This standardization can streamline the development process and accelerate the deployment of AI applications.
Potential Negative Impacts or Challenges:
Risk of Stifling Innovation: Some critics, including companies like Meta and a number of European tech firms, have expressed concerns that the code’s requirements are too burdensome and may stifle innovation, particularly for smaller European startups. They argue that the stringent rules could give a competitive advantage to larger, non-EU companies that might be able to find ways around compliance.
Uneven Playing Field: The voluntary nature of the code and the lack of a full “presumption of conformity” mean that providers who choose not to sign will have to demonstrate compliance through other means, which could create an uneven playing field. The global AI regulatory landscape is also becoming increasingly complex, with different jurisdictions taking different approaches, posing challenges for international businesses.
Potential for a “Complain, then Comply” Strategy: Some experts worry that large firms might initially resist the code in order to weaken its provisions, and then comply only once doing so becomes a practical necessity for operating in the EU market. This could damage the code’s legitimacy and the collaborative process that created it.
Companies That Have Signed the GPAI Code of Practice
As of July 10, 2025, the following companies have signed the AI Code of Practice:
- Accexible
- AI Alignment Solutions
- Aleph Alpha
- Almawave
- Amazon
- Anthropic
- Bria AI
- Cohere
- Cyber Institute
- Domyn
- Dweve
- Euc Inovação Portugal
- Fastweb
- Humane Technology
- IBM
- Lawise
- Microsoft
- Mistral AI
- Open Hippo
- OpenAI
- Pleias
- Re-AuditIA
- ServiceNow
- Virtuo Turing
- WRITER
The signatory list also notes that xAI has signed the code for the Safety and Security chapter only; it does not specify which chapters the other listed companies have committed to.
Conclusion
In summary, while the code aims to promote trustworthy and human-centric AI, its impact on adoption will depend on how effectively it balances the need for regulation with the desire for innovation. It has the potential to accelerate adoption by building trust and providing clarity, but it also faces challenges from those who believe it may create an overly burdensome regulatory environment.
For more information, please visit https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai#ecl-inpage-the-3-chapters-of-the-code
