AI Governance and ISO 42001 FAQs: What Organizations Need to Know in 2026
Published: Jan 6, 2026
As interest in ISO 42001 certification has surged over the past year, we've heard a steady stream of questions from organizations seeking to build their AI governance strategy and operationalize their Artificial Intelligence Management Systems (AIMS) responsibly. From understanding practical preparation steps to what to expect during the audit, many teams are looking for clearer guidance as they navigate this newer management system standard.
To help, Danny Manimbo, Managing Principal, and Joe Sigman, Manager of Schellman’s AI Assessments and ISO Certification Services, have compiled and answered the top questions we've received as the first ANAB-accredited ISO 42001 certification body. This way, you can be better informed and move forward with confidence on your path to ISO 42001 certification.
The Need for AI Governance, Risk, and Compliance
- As customers, regulators, and investors begin asking for proof of responsible AI, what types of frameworks, metrics, or attestations are most credible in demonstrating governance maturity?
Stakeholders increasingly expect evidence grounded in recognized governance frameworks, such as NIST AI RMF and ISO/IEC 42001, as well as more technical-first standards like AIUC-1. Metrics like model performance drift, bias scores, and documentation completeness offer tangible indicators of maturity. Independent assessments add an additional layer of credibility.
- What is AIUC-1 and what do organizations need to know about it as AI agents become more embedded in enterprise workflows?
As AI agents move from experimentation to autonomous operation within core business systems, organizations need assurance that these tools are governed with the same rigor as other high-risk technologies. AIUC-1 was created to meet this need by providing a single, auditable framework for AI agent adoption and usage.
Built on globally recognized standards like ISO 42001, the EU AI Act, and NIST AI RMF, AIUC-1 combines governance expectations with technical evaluation and testing. This enables organizations to balance innovation with compliance while ensuring AI-driven decisions remain explainable, reversible, and accountable even when agents are executing real transactions involving sensitive data.
- What is the purpose of evolving and emerging state AI laws?
Some local and state AI laws are broader in nature, intended to align with internationally recognized legislation. For example, Texas has the Texas Responsible AI Governance Act (TRAIGA), which provides a comprehensive framework that aligns with the EU AI Act's requirements.
We are also seeing states propose legislation with a more limited scope. New York City, for example, has passed a local law specific to AI systems that support hiring decisions, requiring bias and algorithmic audits so organizations can demonstrate that the tool does not exhibit bias in its production environment.
Each of these state-level efforts addresses different risk scenarios or contexts, but some are more comprehensive in scope and, fortunately, align well with common ISO 42001 themes and principles.
A notable example is the Colorado AI Act, which explicitly recognizes adherence to ISO 42001 as a potential safe harbor for demonstrating responsible AI governance and compliance. This acknowledgment signals that organizations following ISO 42001’s risk management and governance framework can mitigate legal and regulatory exposure under Colorado’s law while maintaining consistency with international best practices.
- AI has shifted from isolated projects to becoming the operational core of business. How has this shift changed the nature of risk, and what new governance expectations are emerging as a result?
As AI becomes embedded in workflows, risks shift from experimental missteps to systemic impacts across entire business processes. That elevates expectations around model monitoring, data quality, and lifecycle governance. Regulators and customers now expect AI controls to resemble other enterprise-risk disciplines in rigor and continuity.
- Many still view AI policy as a compliance burden. How can CTOs and CIOs reframe it as a strategic architecture that enables innovation, not restricts it?
AI policy becomes an innovation asset when it defines where experimentation is encouraged, what data is trusted, and what risks are acceptable. It sets the conditions for safe acceleration. CTOs and CIOs who treat policy as design scaffolding rather than legal fine print unlock more consistent and scalable AI development.
- What tangible business benefits have you seen when organizations treat AI governance as a brand and customer trust issue?
Companies that elevate governance as a trust signal see faster customer adoption and reduced friction in sales cycles. Clear, verifiable AI controls demonstrate accountability when stakeholders are skeptical. It becomes a differentiator as proof that innovation isn’t happening at the expense of ethics or reliability.
- Is ISO 42001 good for Service Providers providing AI services? What types of organizations and industries do you typically see adopting certification?
The target audience for ISO 42001 is broadly any organization that provides, develops, or uses AI products or services. The certification is not limited to providers of AI models; it applies to anyone involved in the AI supply chain through the development, provisioning, or use of AI products.
We've seen organizations adopt it across industries, from foundation model and frontier model providers to SaaS companies. We've also seen adoption in industries you may not traditionally expect, such as law firms, medical device firms, advertising and marketing agencies, and professional services firms that use AI. It's very industry agnostic and widely adopted, which really strengthens the value proposition of certification.
Operationalizing AI Governance & Prepping for ISO 42001 Certification
- There is an obvious disconnect between high-level policy and the technical layers that enforce it. What are the most effective ways to bridge this gap?
The most effective organizations pair high-level governance with technical blueprints, such as risk tiers, model cards, MLOps controls, and automated checks. Embedding governance into pipelines ensures policy is executable, while regular alignment between risk teams and engineering keeps guardrails both realistic and enforceable.
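To make this concrete, here is a minimal sketch of what an automated governance check might look like as a deployment gate. The model card fields, risk tiers, and high-risk oversight rule are illustrative assumptions, not ISO 42001 requirements:

```python
# A hypothetical pipeline gate that blocks deployment when governance
# metadata is incomplete. Field names and tiers are illustrative.
REQUIRED_FIELDS = {"owner", "intended_use", "training_data", "risk_tier", "eval_results"}
ALLOWED_TIERS = {"low", "medium", "high"}

def governance_gate(model_card: dict) -> list[str]:
    """Return a list of findings; an empty list means the gate passes."""
    findings = [f"missing field: {f}" for f in REQUIRED_FIELDS - model_card.keys()]
    tier = model_card.get("risk_tier")
    if tier is not None and tier not in ALLOWED_TIERS:
        findings.append(f"unknown risk tier: {tier!r}")
    elif tier == "high" and not model_card.get("human_oversight"):
        findings.append("high-risk models need a documented human-oversight plan")
    return findings

if __name__ == "__main__":
    card = {"owner": "ml-platform", "intended_use": "demo", "risk_tier": "high"}
    for finding in governance_gate(card):
        print("BLOCKED:", finding)
```

Gates like this can run alongside unit tests in CI, so a model without complete governance metadata simply cannot ship.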
- What are the most practical methods you’ve seen enterprises use to operationalize fairness and avoid bias?
Bias and fairness are data issues at their core, involving data quality, data representation, and model training. Enterprises are implementing bias testing at dataset creation, establishing diverse sampling standards, and conducting model audits during validation. Continuous monitoring for drift and embedding checks into MLOps pipelines ensure fairness is routine rather than a one-and-done exercise.
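As a simple illustration, a pre-release fairness check might compute a demographic parity gap across groups and compare it to a policy threshold. The metric and the 0.10 threshold below are illustrative assumptions, not requirements from any standard:

```python
# A minimal sketch of a fairness gate, assuming binary predictions and a
# single protected attribute; real programs track multiple metrics.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-outcome rates between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                 # model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

gap = demographic_parity_gap(y_pred, group)
print(f"parity gap = {gap:.2f}", "(fails 0.10 threshold)" if gap > 0.10 else "(passes)")
```

Running the same check against production predictions over time is one way to turn drift monitoring into a routine control.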
- Explainability and transparency are easy to advocate for but difficult to implement, especially with deep learning systems. How can organizations bake these concepts into their AI lifecycle without slowing innovation?
Organizations are adopting model-agnostic explainability tools and requiring interpretable documentation as part of deployment gates. When transparency and explainability are baked into development workflows, it doesn’t become a bottleneck. Even with deep learning, layered model cards and decision-rationale logs provide meaningful visibility.
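As one example, permutation importance is a model-agnostic technique that works with any fitted estimator, making it a practical input to model cards and deployment documentation. This sketch uses scikit-learn's built-in implementation on a bundled dataset; the tool and dataset choices are ours, not a prescription:

```python
# Model-agnostic explainability: shuffle each feature and measure the drop
# in score. Larger drops indicate heavier reliance on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top_features = result.importances_mean.argsort()[::-1][:5]
for i in top_features:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```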
- Governance ultimately depends on people as much as processes. How can leaders foster a culture where responsible AI is not just compliance-driven but innovation-aligned?
People adopt what they help build. The culture shifts when teams see governance as an enabler, reinforced through shared objectives, co-designed controls, and clear value to innovation. Training, transparent communication, and recognition of responsible experimentation help align incentives.
- Who should “own” AI governance inside an organization? How are leading enterprises structuring collaboration between risk, legal, compliance, and technical teams?
Leading organizations are moving toward a federated model with centralized accountability. Risk and compliance teams define guardrails, legal interprets regulatory expectations, and technical teams operationalize controls.
Increasingly, organizations are also establishing formal AI leadership roles, such as a Chief AI Officer or Chief Ethics Officer, mirroring how the Data Protection Officer (DPO) role emerged after the GDPR was adopted. These leaders often chair cross-functional AI governance committees that bring together risk, legal, compliance, data, and engineering stakeholders. While execution is distributed, successful programs clearly designate a single accountable owner to drive decisions, resolve trade-offs, and ensure consistent oversight.
- What is being assessed for the ISO 42001 Risk Assessment Section?
Before performing an official risk assessment, organizations are expected to undergo an AI system impact assessment. This involves understanding the scope, purpose, and potential impacts of your AI systems, including effects on individuals, groups, and the business. The impact assessment informs what risks should be evaluated and prioritized.
From there, the risk assessment follows a familiar management-system approach to identifying AI risks. If you already comply with other management system standards, such as ISO 27001, ISO 27701, or ISO 22301, the steps are essentially the same; only the risk focus shifts to AI-specific topics rather than security, privacy, or continuity.
You'll start by identifying the risks you are assessing against and evaluating their potential impacts on the business and stakeholders. You'll need to formally log and track these in a risk register, which you'll need to demonstrate to your external auditor. You will also need to treat and monitor these risks on an ongoing basis to truly operate the program.
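For illustration, a risk register entry can be a simple structured record; the fields and the likelihood-times-impact scoring below are common conventions rather than anything ISO 42001 prescribes:

```python
# A hypothetical AI risk register entry; adapt the schema to your program.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    affected_stakeholders: list[str]
    likelihood: int   # e.g., 1 (rare) to 5 (almost certain)
    impact: int       # e.g., 1 (negligible) to 5 (severe)
    treatment: str    # accept / mitigate / transfer / avoid
    owner: str
    next_review: date
    status: str = "open"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact  # simple prioritization score

register = [
    AIRiskEntry("AIR-001", "Training data drift degrades model accuracy",
                ["customers"], likelihood=3, impact=4, treatment="mitigate",
                owner="MLOps Lead", next_review=date(2026, 6, 1)),
]
highest = max(register, key=lambda r: r.score)
print(highest.risk_id, highest.score)
```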
- Do all controls apply to every role, or are some controls role-specific?
ISO 42001 presents the idea of role-based assignment with respect to in-scope AI systems, and determining your role is one of the first steps. You need to understand who you are in terms of the standard and what your responsibilities are for responsible AI governance. This requires organizations to identify whether they are a user, provider, or developer/producer of AI. You also need to understand and define the scope of the products, services, or processes that you're looking to certify.
There are some controls that are fundamental to the overall AIMS, and most, if not all, ISO 42001 controls should apply to any organization, but the extent to which they apply to each role differs. The question is more whether there is a shared or carved-out responsibility, which organizations would assess in their third-party risk management processes outlined in A.10.
For example, AI providers may share responsibilities with the AI developer over certain AI system life cycle controls, and the same goes for AI users, who have similar dependencies on both AI providers and AI developers.
Once you define your role and scope, you can select and tailor the applicable controls from Annex A to your specific AIMS. You'll need to create a Statement of Applicability (SoA) that justifies your control selection and identifies responsible parties. Ultimately, you're certifying a very explicit scope that includes your AIMS role, the unique AI context and function supported, and the explicit naming of the AI systems and components within scope.
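For illustration, each SoA entry essentially pairs a control with an applicability decision, a justification, and an owner. The control IDs, justifications, and owners in this sketch are placeholders, not guidance on which Annex A controls apply to you:

```python
# A hypothetical Statement of Applicability structure; IDs are placeholders.
soa = [
    {"control": "A.10.2", "applicable": True,
     "justification": "Life cycle responsibilities shared with the AI developer",
     "owner": "AI Governance Committee"},
    {"control": "A.6.x", "applicable": False,
     "justification": "No in-house model development; models are procured",
     "owner": "n/a"},
]
for entry in soa:
    status = "applicable" if entry["applicable"] else "excluded"
    print(f'{entry["control"]}: {status} - {entry["justification"]}')
```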
- Are there any online ISO 42001 certification courses or trainings available from reputable providers?
Schellman partners with PECB for our ISO 42001 foundation, lead auditor, and lead implementer courses. These are great options for anyone who may be supporting either an audit or implementing function when getting an AI governance or AI management system program off the ground.
The Interplay Between ISO 42001 and Other Compliance Frameworks
- How are emerging AI regulations in the EU, US, and Asia influencing corporate governance strategies — and do you see compliance as a competitive advantage?
Regulators in the EU, U.S., and Asia are pushing companies toward proactive governance involving risk classification, documentation, and transparency requirements. Early adopters of these standards gain competitive advantage through smoother market access and enhanced trust.
- Do you recommend combining AIMS + ISMS (ISO 42001 + ISO 27001)?
We're certainly seeing organizations implement these net-new considerations with more agility and speed because the groundwork and framework for these requirements may already exist. You may already have an enterprise IT risk management process, internal audit program, or management review program that supports preexisting Information Security Management System (ISMS) or Privacy Information Management System (PIMS) governance efforts.
In some cases, these processes can be easily modified to address the additional requirements posed by ISO 42001, requiring not a ground-up implementation but rather an additional bolstering of the preexisting processes.
- Do ISO 27701 and ISO 27001 map to ISO 42001, or are the privacy or security controls similar?
Conceptually, the clause 4-10 requirements in the management system standards ISO 27701 and ISO 27001 align closely with ISO 42001. While the specific controls differ, many organizations can integrate existing security or privacy policies, such as secure development processes, into their AI governance framework, saving time and effort.
A best practice is to establish a cross-functional governing risk council, leveraging existing security or privacy committees, and adding AI-relevant stakeholders as needed. This council oversees AI risks, discusses control gaps, and makes decisions on risk treatment and resource allocation. Regular meetings help ensure consistent oversight.
Defining clear roles and responsibilities is critical. AI risk management generally follows the same process as security or privacy: determine scope, identify and assess risks, log them in a risk register, and decide on treatment plans based on business impact and cost.
- How do you actually bring various compliance initiatives together to create a single audit?
A lot of people express audit fatigue, and rightfully so. To address this, it's best practice to make your compliance approach people-centric, which involves looking across your separate management systems, including all of the control owners, leadership teams, and other stakeholders, and identifying commonalities.
Often, you'll ask similar questions covering the same topic, so if you streamline these conversations and make them people-centric, where you're focused on the stakeholder rather than the control, you're going to use their time better and be more efficient. For example, if you have a risk committee, leverage conversations with them as much as you can to cover controls across privacy, information security, and AI in the same meeting versus scheduling three separate meetings. Apply that throughout the ISO 42001 clause 4-10 requirements, and this can help make it feel more like a single audit.
- How do you see CSA STAR for AI complementing ISO 42001 certification?
CSA STAR for AI complements ISO 42001 by combining AI governance with verifiable AI security assurance. Organizations pursuing CSA STAR certification are required to have an ISO 27001 or ISO 42001 program in place, making ISO 42001 a natural starting point. CSA STAR for AI then builds on that foundation by layering in additional AI-specific security and control validation, offering a more comprehensive framework for AI security and governance. The AICM control framework is a useful reference for understanding the detailed requirements that CSA STAR assesses.
In practice, organizations that combine ISO 42001 certification with the AI-CAIQ controls and Valid-AI-ted scoring can achieve CSA STAR for AI Level 2, which represents an independently validated level of AI security and governance maturity. Together, ISO 42001 establishes the governance backbone, while CSA STAR for AI adds deeper technical and assurance rigor.
The Future of AI Governance and ISO 42001 Compliance
- Do you see ISO 42001 being implemented to support US public sector contracts?
We can expect additional requirements to be posed for the federal supply chain and for organizations doing work with US departments and agencies. With that said, ISO 42001 is intended to be a foundational governance structure that can be expanded to address many of these requirements. We certainly see organizations implicated by federal requirements standing up ISO 42001 in alignment with the more nuanced or specific requirements posed at the federal level.
- If we look three years ahead, what new dimensions of trust — such as model provenance, energy impact, or data ethics — do you expect to become part of mainstream AI governance programs?
We expect provenance, sustainability metrics, and data ethics disclosures to become core elements of AI assurance. Stakeholders will want proof of where models come from, how they were trained, and their environmental impacts. Trust will extend beyond performance to holistic accountability.
- As AI systems become more autonomous and embedded in decision-making, how should boards and executives evolve oversight responsibilities? What does effective AI governance look like at the leadership level?
Boards must treat AI as a strategic risk and value driver, not a technical detail—requiring reporting on model inventories, risk posture, and incident trends. Effective oversight blends literacy, escalation pathways, and alignment with corporate strategy. Leadership-level governance sets the tone for safe and responsible innovation and scaling.
- Looking forward, do you foresee AI assurance becoming as standardized and auditable as cybersecurity or privacy? What role will Schellman play in shaping that landscape?
Yes—AI is following the same trajectory as cybersecurity and privacy, moving toward defined controls, evidence requirements, and attestations. Schellman expects to play a central role in shaping and delivering these models, leveraging decades of assurance experience. Standardization will make governance scalable and comparable.
If you have additional questions or would like to gain more insights about AI governance or ISO 42001 certification, contact us today.
About the Authors

Danny Manimbo is a Managing Principal at Schellman based in Denver, Colorado, where he leads the firm’s Artificial Intelligence (AI) and ISO services. In this role, he oversees the strategy, delivery, and quality of Schellman’s AI, ISO, and broader attestation services. Since joining the firm in 2013, Danny has built more than 15 years of expertise in information security, data privacy, AI governance, and compliance, helping organizations navigate evolving regulatory landscapes and emerging technologies. He is also a recognized thought leader and frequent speaker at industry conferences, where he shares insights on AI governance, security best practices, and the future of compliance. Danny has achieved the following certifications relevant to the fields of accounting, auditing, and information systems security and privacy: Certified Public Accountant (CPA), Certified Information Systems Security Professional (CISSP), Certified Information Systems Auditor (CISA), Certified Internal Auditor (CIA), Certificate of Cloud Security Knowledge (CCSK), Certified Information Privacy Professional – United States (CIPP/US).
Joe Sigman is a Manager of Schellman's AI Assessments and ISO Certification Services based in Denver, Colorado. Prior to joining Schellman in 2021, Joe worked as a Senior Associate at a management consulting firm specializing in IT strategy and compliance, solution architecture, and enterprise digital transformation. Joe has led and supported AI Assessments, Cybersecurity Assessments, Information Security Architecture Solutioning, Information Technology Gap Analysis, and Cloud Migration Roadmaps. Joe has over six years of experience serving clients in various industries, including Information Technology, Professional Services, Healthcare, and Energy. He is now focused primarily on ISO Certifications for organizations across various industries.
About Schellman
Schellman is a leading provider of attestation and compliance services. We are the only company in the world that is a CPA firm, a globally licensed PCI Qualified Security Assessor, an ISO Certification Body, a HITRUST CSF Assessor, a FedRAMP 3PAO, and most recently, an APEC Accountability Agent. Renowned for expertise tempered by practical experience, Schellman's professionals provide superior client service balanced by steadfast independence. Our approach builds successful, long-term relationships and allows our clients to achieve multiple compliance objectives through a single third-party assessor.