How ISO 42001 “AIMS” to Promote Trustworthy and Ethical AI


Published: Nov 3, 2023

Last Updated: Aug 25, 2025

The need for responsible, trustworthy, and ethical use of artificial intelligence (AI) has been a hot topic over the past couple of years, prompting the release of guidance such as NIST’s AI Risk Management Framework to help organizations secure the evolving technology. Additional standards have emerged to address the need for safeguards covering the security, safety, privacy, fairness, transparency, and data quality of AI systems throughout their life cycle, including ISO/IEC 42001. 

ISO is already well known among those interested and invested in cybersecurity, as it offers frameworks for implementing different management systems that can help you improve different aspects of your organization. With the publication of ISO 42001 in late 2023, ISO entered the AI game, introducing best practices for an AI management system (AIMS).  

In this blog post, we’ll explore ISO 42001’s structure, objectives, and intent so that you have a better idea of what the standard looks like, how it promotes ethical AI, and whether it suits your organization. 

What is ISO 42001?

As an AI management system standard (MSS), ISO 42001 tasks organizations with taking a risk-based approach when applying its requirements to their AI use. That’s because applying the AIMS indiscriminately to every use case within an organization, including those that raise no additional concerns, can harm other business objectives without realizing any tangible benefit. 

Another, and perhaps the most exciting, takeaway is that while ISO 42001 is a certifiable management system framework focused on your AIMS, the standard has been drafted to facilitate integration with other existing MSS, such as: 

  • ISO/IEC 27001 (information security) 
  • ISO/IEC 27701 (privacy) 
  • ISO 9001 (quality) 

Since the issues and risks surrounding AI in those areas of security, privacy, and quality, among others, should not be managed separately for AI but rather holistically, adopting an AIMS can enhance both the effectiveness of your existing management systems in those areas and your overall compliance posture. 

That being said, it’s important to note that ISO 42001 does not require other MSS to be implemented or certified as a prerequisite, nor is it the intent of ISO 42001 to replace or supersede existing quality, safety, security, privacy, or other MSS. 

Still, the potential for such integration will help organizations that need to meet the requirements of two or more such standards, though the focus of each implemented MSS must remain distinct (e.g., information security with ISO 27001). Should you opt to adhere to ISO 42001, you’ll be expected to focus your application of its requirements on the features that are unique to AI and the issues and risks that arise with its use. 


The ISO 42001 Framework Structure 

The structure of the ISO 42001 framework will look very familiar to those who’ve already been ISO 27001 certified, as ISO 42001 also features: 

  • Clauses 4-10 
  • An Annex A (normative¹) listing of controls that can help organizations* both:
    • Meet objectives as they relate to the use of AI 
    • Address the concerns identified during the risk assessment process related to the design and operation of AI systems 

*These particular controls are not intended to be exhaustive—rather, they’re meant to be a reference to ensure that no necessary controls have been overlooked or omitted, and you are free to design (or leverage from existing sources) and implement different or additional controls as needed, beyond those in Annex A. 

Within ISO 42001, the 38 Annex A controls touch on the following areas: 

  • Policies related to AI 
  • Internal organization (e.g., roles and responsibilities, reporting of concerns) 
  • Resources for AI systems (e.g., data, tooling, system and computing, human) 
  • Impact analysis of AI systems on individuals, groups, & society 
  • AI system life cycle (e.g., system requirements, development, operation, monitoring) 
  • Data for AI systems (e.g., quality, provenance, preparation) 
  • Information for interested parties of AI systems (e.g., external reporting, communication of incidents) 
  • Use of AI systems (e.g., responsible / intended use, objectives) 
  • Third-party relationships (e.g., suppliers, customers) 

ISO 42001 also contains an Annex B and an Annex C: 

  • Annex B (Normative): Provides the implementation guidance for the controls listed in Annex A. (Think of this as similar to the separate ISO 27002 standard for ISO 27001’s Annex A.) 
  • Annex C (Informative²): Outlines the potential organizational objectives, risk sources, and descriptions that can be considered when managing risks related to the use of AI. 

ISO 42001 Objectives and Risk Sources  

Those potential objectives and risk sources referenced in Annex C address the following areas: 

Objectives: 

  • Accountability 
  • AI expertise 
  • Availability and quality of training data 
  • Environmental impact 
  • Fairness 
  • Maintainability 
  • Privacy 
  • Robustness 
  • Safety 
  • Security 
  • Transparency and explainability 

Risk Sources: 

  • Complexity of environment 
  • Lack of transparency and explainability 
  • Level of automation 
  • Risk sources related to ML 
  • System hardware issues 
  • System life cycle issues 
  • Technology readiness 

¹ Normative elements are those that are prescriptive; that is, they are to be followed in order to conform with scheme requirements. 
² Informative elements are those that are descriptive; that is, they are designed to help the reader understand the concepts presented in the normative elements. 

And finally, ISO 42001 contains an Annex D (Informative) that speaks to the use of an AIMS across domains or sectors. 
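To make the relationship between these annexes concrete, here is a minimal sketch, in Python, of how an organization might record a single AI risk assessment finding, pairing an Annex C risk source with the objectives it affects and the Annex A control areas chosen to address it. The record structure, field names, and example entry are illustrative assumptions on our part; ISO 42001 does not prescribe any particular format or tooling. 

    from dataclasses import dataclass

    @dataclass
    class AIRiskEntry:
        """Illustrative (non-normative) record of one AI risk assessment finding."""
        ai_use_case: str                  # the AI system or use case being assessed
        risk_source: str                  # drawn from the Annex C risk sources
        affected_objectives: list[str]    # drawn from the Annex C objectives
        annex_a_control_areas: list[str]  # Annex A control areas selected to treat the risk
        treatment_notes: str = ""

    # Hypothetical example entry for an AI-assisted support chatbot
    entry = AIRiskEntry(
        ai_use_case="Customer support chatbot",
        risk_source="Lack of transparency and explainability",
        affected_objectives=["Transparency and explainability", "Accountability"],
        annex_a_control_areas=[
            "Impact analysis of AI systems on individuals, groups, & society",
            "Information for interested parties of AI systems",
        ],
        treatment_notes="Document model behavior, limitations, and escalation paths.",
    )
    print(entry)

However your organization chooses to capture this information, the point is the same: each identified risk should trace back to the objectives it threatens and forward to the controls that treat it. 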

The Intent of ISO 42001 

Meeting those objectives and mitigating those risk sources as outlined in the ISO 42001 framework will become increasingly helpful as AI use continues to expand; this technology is being applied across all sectors that utilize IT, and trends indicate it’s expected to be one of the main economic drivers over the coming years. 

As such, the intent of ISO 42001 is to help organizations responsibly perform their roles in the use, development, monitoring, or provision of products or services that utilize AI so as to secure the technology.  

The ISO 42001 framework’s particular focus can help organizations implement the different safeguards that may be required by certain features of AI, namely features that raise additional risks within a particular process or system compared to how the same task would traditionally be performed without AI. 

Examples of such features that would warrant specific safeguards include: 

  • Automatic Decision-Making: When done in a non-transparent and non-explainable way, may require specific administration and oversight beyond that of traditional IT systems. 
  • Data Analysis, Insight, and Machine Learning (ML): When employed in place of human-coded logic to design systems, these change the way that such systems are developed, justified, and deployed in ways that may require different protections. 
  • Continuous Learning: AI systems that perform continuous learning change their behavior during use and require special considerations to ensure that their use remains responsible as their behavior evolves.  

Available AI Cybersecurity Guidance and Regulation That Can Help You Prepare for ISO 42001 Compliance 

Organizations need to get started on securing their AI use as soon as possible, and while ISO 42001 can now help, there are other important developments you may want to also consider:

  • NIST AI Risk Management Framework (AI RMF): NIST released this new framework to better manage risks to individuals, organizations, and society associated with AI. For voluntary use, the NIST AI RMF can improve the incorporation of trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. 
  • EU AI Act: The EU has also adopted its own AI regulation, which is centered around excellence and trust and aims to boost research and industrial capacity while ensuring safety and fundamental rights. 
  • HITRUST CSF v11.2.0 AI Requirements: To accommodate the ever-evolving cybersecurity threat landscape, HITRUST has released HITRUST CSF v11.2.0, updating its framework to include more pertinent concepts—including additions around AI risk management content. 

What’s Next for ISO 42001 and How Schellman Can Help with Your Compliance Journey 

Even with all these major milestones regarding AI regulation, America appears to be firmly committed to moving further toward innovation, as indicated by the push in America’s AI Action Plan to accelerate AI innovation, build American AI infrastructure, and lead globally in diplomacy and national security.  

While not legislation per se, ISO 42001 still represents a major development in AI security. In response to ISO 42001’s publication, Schellman extended its MSS accreditation and suite of ISO services to become the first ANAB-accredited certification body for ISO 42001.  

As part of your ISO 42001 preparation, we highly recommend having an ISO 42001 gap assessment performed—and that's something we're also ready to help you with. So, if you'd like to learn more about that assessment—or if you have any other questions regarding AI security or the ISO 42001 framework—contact us today. 

In the meantime, discover additional ISO 42001 and AI compliance insights in these helpful resources:  

About Danny Manimbo

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.