What to Know About the EU AI Code of Practice
Published: Aug 18, 2025
As demand for AI innovation grows, regulatory bodies are working quickly to create frameworks that balance acceleration with safety, accountability, and trust. Notably, the European Union’s AI Act is poised to reshape how organizations approach AI governance, especially when it comes to general-purpose AI (GPAI) models.
To help companies prepare, the EU recently introduced a voluntary AI Code of Practice, which serves as a significant early step toward AI compliance and responsible development. As the first ANAB-accredited Certification Body for ISO 42001, we at Schellman can’t help but acknowledge the Code’s overlap with ISO 42001 and the benefits both offer with regard to regulatory readiness for AI governance.
In this blog, we’ll break down what the EU AI Code of Practice entails, why it matters, how it complements ISO 42001, and what your organization can do to prepare.
What Is the EU AI Code of Practice?
Finalized by the European Commission in July 2025, the EU AI Code of Practice is a voluntary framework designed to help companies comply with the EU AI Act’s requirements as enforcement begins. It was developed by 13 independent experts and shaped by input from more than 1,000 small and medium-sized enterprises, academics, civil society organizations, and large AI model providers.
This Code specifically targets general-purpose AI (GPAI) models and includes practical guidance on:
- Transparency: Making model capabilities, limitations, and intended use clear
- Safety and Security: Implementing risk mitigation strategies for responsible development
- Copyright Protections: Addressing the use of copyrighted content in model training
- Responsible Deployment: Encouraging responsible commercialization and lifecycle oversight
Though voluntary, the Code is positioned as a way for companies to demonstrate regulatory readiness and early compliance with the AI Act and reduce the administrative burden in the face of enforcement.
EU Commission Endorses General-Purpose AI Code of Practice as a Key Compliance Tool for the AI Act
The release of the EU AI Code is particularly timely: the AI Act’s provisions for GPAI took effect on August 2, 2025, and carry significant implications.
Under this AI Act:
- New GPAI models will be subject to oversight starting in 2026
- Existing models will fall under enforcement starting in 2027
- Non-compliance could result in fines of up to 7% of global annual revenue
Ahead of the AI Act’s enforcement deadlines, the European Commission formally recognized the GPAI Code of Practice as an adequate tool to help providers of GPAI models demonstrate compliance with Article 53 (covering transparency, copyright, and training data summaries) and Article 55 (focused on systemic risk assessment, mitigation, reporting, and cybersecurity) of the EU AI Act. This endorsement underscores the Code’s role in facilitating adherence to the AI Act’s provisions, particularly those concerning transparency, copyright, and safety for high-risk AI models.
While adherence to the Code is voluntary, it is expected to play a key role in the broader enforcement ecosystem of the AI Act. The Commission emphasized that the Code offers a practical framework to demonstrate regulatory compliance, particularly for documenting model capabilities, managing copyright obligations under EU law, and implementing cybersecurity and governance safeguards for high-risk AI models.
By voluntarily aligning with the EU AI Code of Practice, companies may be better positioned to reduce their legal and administrative burden while gaining clarity on how to apply abstract AI Act principles in day-to-day operations. It also offers a competitive edge: the opportunity to show regulators and stakeholders that your organization takes AI risk mitigation and compliance seriously.
This forward-looking alignment not only builds trust but could also provide a strategic advantage in an increasingly compliance-focused market by better positioning you to navigate evolving regulatory requirements.
How the EU AI Code of Practice Enhances Regulatory Readiness
The Code of Practice serves as a readiness roadmap, helping organizations translate regulatory expectations into concrete actions that can be implemented ahead of the EU AI Act enforcement.
The EU AI Code of Practice encourages companies to:
- Conduct risk assessments and impact analyses for AI systems
- Build transparency into the model lifecycle, including documentation and disclosure practices
- Create clear governance structures to define accountability and oversight
- Monitor and adapt based on model behavior in real-world contexts
These steps are aligned with global expectations around responsible AI, including the compliance requirements outlined for ISO 42001 certification. By adopting the Code early, organizations can proactively build the controls and security culture needed for long-term compliance.
The Overlap Between the EU AI Code of Practice and ISO 42001
The EU’s voluntary Code of Practice shares notable overlap with ISO 42001, the world’s first international AI management system (AIMS) standard. Both frameworks are principles-based, voluntary in nature, and designed to guide responsible AI development while enabling regulatory readiness.
Specifically, the EU AI Code of Practice and ISO 42001 frameworks overlap in the following focus areas:
- Risk Management: Both emphasize the need to identify, assess, and mitigate AI-specific risks.
- Governance and Accountability: Both require governance roles with defined responsibilities and accountability structures.
- Transparency and Documentation: Both mandate transparent documentation around topics such as scope, policies, and risk assessments.
- Lifecycle Monitoring and Incident Management: Both include evaluation requirements around performance and incident monitoring for continual improvement.
Where ISO 42001 differs is in its auditable and certifiable structure. It provides a documented management system standard for governing AI risks across the full lifecycle, helping organizations align policy, roles, procedures, and controls.
The EU Code of Practice, on the other hand, focuses on demonstrating compliance with Articles 53 and 55 of the EU AI Act and applies only to general-purpose AI under EU law. ISO 42001 applies to any organization using, providing, or developing AI products and services across industries globally, offering broader flexibility.
For companies considering ISO 42001 certification, the EU Code of Practice can serve as complementary guidance and validation of their existing governance efforts. Conversely, organizations already aligned with ISO 42001 will likely find themselves well-positioned to meet the voluntary expectations laid out in the EU Code with minimal additional lift. As global standards begin to harmonize, ISO 42001 is a strategic enabler for organizations operating across both U.S. and EU markets to future-proof their AI governance.
How to Prepare Now for AI Governance and Compliance
Whether you’re a model provider, enterprise user, or part of an AI supply chain, the message is clear: you should start preparing for compliance now in the following ways:
- Conduct a gap assessment: Compare your current AI governance practices against the EU Code and ISO 42001.
- Establish risk and impact protocols: Identify high-risk use cases and document risk mitigation strategies.
- Implement transparency measures: Improve disclosures around model capabilities, training data sources, and intended use.
- Review copyright and data governance policies: Ensure model development aligns with emerging IP and data use standards.
- Consider ISO 42001 certification: This globally recognized standard offers an end-to-end framework for responsible AI that will only become more valuable as regulations evolve.
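The gap-assessment step above can be tracked programmatically. The sketch below is a minimal, hypothetical example: the focus areas are drawn from the overlap between the EU Code and ISO 42001 discussed earlier, but the area names, status labels, and sample inputs are illustrative placeholders, not official requirements from either framework.

```python
# Hypothetical gap-assessment tracker. Focus areas reflect the shared
# themes of the EU AI Code of Practice and ISO 42001; names are illustrative.
FOCUS_AREAS = [
    "risk_management",
    "governance_accountability",
    "transparency_documentation",
    "lifecycle_monitoring",
    "copyright_data_governance",
]

def gap_assessment(current_controls: dict[str, str]) -> dict[str, list[str]]:
    """Classify each focus area as covered, partial, or missing.

    current_controls maps a focus area to its assessed status; any area
    absent from the mapping (or with an unknown status) counts as missing.
    """
    report: dict[str, list[str]] = {"covered": [], "partial": [], "missing": []}
    for area in FOCUS_AREAS:
        status = current_controls.get(area, "missing")
        if status not in report:
            status = "missing"  # treat unrecognized labels conservatively
        report[status].append(area)
    return report

# Example: an organization with mature risk management but only
# partial transparency documentation.
report = gap_assessment({
    "risk_management": "covered",
    "transparency_documentation": "partial",
})
```

In practice, a real assessment would map each area to specific ISO 42001 clauses and EU Code commitments rather than a single status label, but even a coarse tracker like this helps prioritize remediation work ahead of enforcement.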
Moving Forward with Responsible AI
The release of the EU AI Code of Practice marks a pivotal moment, not just in European AI regulation, but in how organizations worldwide can approach AI compliance, trust, and innovation together. While regulatory efforts can be complex and there are concerns about overreach, the broader, underlying demand for transparency, accountability, and responsible AI is only accelerating. Frameworks like the EU Code and ISO 42001 don’t stifle innovation; they create the foundation for it to scale safely and sustainably.
Companies looking to stay ahead of AI legislation while building stakeholder trust and reducing future compliance risk should consider adopting these frameworks now. If you’re ready to proceed with ISO 42001 certification, or have questions about the requirements or process involved, Schellman can help. Contact us today and we’ll get back to you shortly.
About Danny Manimbo
Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.