

Strengthening AI Governance: How ISO 42001 Supports Compliance with the Vietnam AI Law

Artificial Intelligence | ISO 42001

Published: Mar 2, 2026

As artificial intelligence continues to rapidly evolve, from generative tools to increasingly autonomous systems, governments around the world are accelerating efforts to formalize AI governance. Regulatory frameworks are hardening into enforceable legal requirements that shape how AI systems are designed, deployed, and monitored.

Against this backdrop of expanding global oversight, Vietnam has passed the first AI law in Southeast Asia, known as the Law on Artificial Intelligence No. 134/2025/QH15 (AI Law). This legislation signals Vietnam’s commitment to aligning with broader international AI governance trends while addressing the unique needs of its domestic innovation ecosystem.

Meanwhile, internationally recognized standards like ISO 42001 provide organizations with a structured, risk-based approach to AI governance and regulatory readiness. By aligning AI practices with ISO 42001, companies can build a scalable compliance framework that supports responsible AI adoption across markets.

In this article, we’ll explain the new Vietnam AI Law, including its requirements and what compliance entails, as well as how it overlaps with ISO 42001 to support robust AI governance.

What Is the Vietnam AI Law?

The Vietnam AI Law establishes a comprehensive legal framework outlining governance for developing, deploying, and operating AI systems. It represents Vietnam’s first effort to formalize AI governance and aligns with broader international trends toward standardized, accountable AI oversight. Approved by the Vietnam National Assembly on December 10, 2025, the law officially takes effect in phases starting March 1, 2026.

Organizations with AI systems are expected to comply by March 1, 2027 (12 months after the effective date of the Vietnam AI Law), with the exception of companies with AI systems in the healthcare, education, and financial sectors, which have until September 1, 2027 (18 months after the effective date) to comply.
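The deadline math above (12 or 18 months from the March 1, 2026 effective date) can be sketched as a small helper. This is purely illustrative: the function name, the sector labels, and the three-sector grouping are our own simplification of the article's summary, not terms defined in the law.

```python
from datetime import date

# Phased effective date of the Vietnam AI Law, per the article.
EFFECTIVE = date(2026, 3, 1)

def compliance_deadline(sector: str) -> date:
    """Illustrative deadline lookup: healthcare, education, and
    finance get 18 months from the effective date; others get 12."""
    extended = {"healthcare", "education", "finance"}
    months = 18 if sector in extended else 12
    # Add months manually, since the stdlib date type has no
    # month arithmetic of its own.
    year = EFFECTIVE.year + (EFFECTIVE.month - 1 + months) // 12
    month = (EFFECTIVE.month - 1 + months) % 12 + 1
    return date(year, month, EFFECTIVE.day)
```

For example, `compliance_deadline("retail")` yields March 1, 2027, while `compliance_deadline("healthcare")` yields September 1, 2027, matching the dates above.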

Who Does the Vietnam AI Law Apply To?

The Vietnam AI Law applies to organizations, including foreign entities, that sell AI products or services to Vietnamese customers. Similar to ISO 42001, the legislation emphasizes a structured, risk-based approach to AI governance, where compliance requirements depend on the organization’s role in the AI ecosystem.

Organizations are assigned various obligations based on the following AI roles:

  • AI Developers: Companies designing, building, training, testing, or refining AI models with direct control over the technical methods, training data, and model parameters.
  • AI Providers: Companies marketing AI systems under their name, brand, or trademark, regardless of whether the AI system was developed by them or a third party.
  • AI Deployers: Companies utilizing or integrating AI systems as part of their service offerings.
  • AI Users: Entities directly interacting with AI systems or relying on their outputs.
  • AI Affected Parties: Entities whose lawful rights, life, health, property, reputation, or access to services are directly or indirectly impacted by the deployment or outputs of AI systems.

These roles allow organizations to adopt a structured governance approach, ensuring accountability and clear responsibilities across the AI lifecycle.

Prohibited AI Uses Under the Vietnam AI Law

In line with promoting responsible and accountable AI use, the Vietnam AI Law bans AI systems that engage in harmful, deceptive, or unlawful activities, including those that:

  • Violate laws or infringe the rights and legitimate interests of companies and individuals;
  • Use fabricated or simulated elements of real people or events to deceive or manipulate human perception and behavior, causing serious harm to the rights and interests of individuals;
  • Exploit vulnerable groups (e.g., children, the elderly, people with disabilities, etc.);
  • Disseminate dangerous misinformation;
  • Collect, process, or use data in violation of privacy, intellectual property, or cybersecurity laws; and
  • Conceal information that is required to be disclosed, or erase or falsify mandatory information, labels, and warnings in AI activities.

The Vietnam AI Law Risk Tiers

The Vietnam AI Law classifies AI systems into three risk tiers: high, medium, and low. The risk classification is determined by the level of potential impact on human rights, safety and security, the field where the AI system is used, the scope of users, and the scale of impact the AI system could have. AI systems face varying obligations based on their risk category:

High-Risk AI Systems

  • How high-risk AI systems are determined:
    • AI systems that may cause significant harm to life, health, or the lawful rights and interests of organizations or individuals, as well as to national interests, public interests, or national security.
    • The Prime Minister of Vietnam is tasked with issuing a list specifying which AI systems are classified as high risk.
  • Company obligations for high-risk AI systems:
    • All high-risk AI systems must undergo conformity assessments (e.g., audits) before deployment or use and after significant changes.
    • Organizations conducting conformity assessment and testing of AI systems must ensure independence, possess sufficient technical competency, and be subject to periodic supervision by the Vietnamese government.
    • Foreign entities providing high-risk AI systems must appoint a legal point of contact in Vietnam.

Role-Specific Requirements for High-Risk AI Systems

AI Developers & Providers:
  • Implement AI risk management programs to measure and monitor AI risks when there are system changes
  • Manage training, testing, and operating data to ensure the quality and intended use of the AI system
  • Maintain technical documentation and activity logs for conformity assessments and post-deployment reviews
  • Design systems with human oversight and intervention capabilities
  • Fulfill transparency and incident reporting obligations

AI Deployers:
  • Operate and monitor AI for its intended purpose and scope
  • Ensure data safety and security and human oversight
  • Maintain compliance with technical standards and regulations
  • Fulfill transparency and incident reporting obligations

AI Users:
  • Comply with operating procedures, technical guidelines, and safety measures
  • Do not alter system functions illegally
  • Report incidents to providers

Medium- and Low-Risk AI Systems

  • How medium- and low-risk AI systems are determined:
    • Medium-risk AI systems are those with the potential to confuse, influence, or manipulate users.
    • Low-risk AI systems are all remaining AI systems not classified as high or medium risk.
  • The Vietnamese government encourages companies with medium- and low-risk AI systems to comply with technical AI standards.

Role-Specific Requirements for Medium- and Low-Risk AI Systems

AI Developers, Providers, and Deployers:
  • Fulfill transparency and incident reporting obligations

AI Users:
  • Comply with rules for labeling AI-generated content and notifying incidents
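A compliance team could encode the tier-and-role obligation structure described above for internal tracking. The sketch below is a simplified, non-authoritative summary of the article's tables: the enum names, obligation strings, and lookup function are our own illustrative choices, not terminology from the law itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the Vietnam AI Law."""
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

class AIRole(Enum):
    """Roles in the AI ecosystem that carry obligations under the law."""
    DEVELOPER = "developer"
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    USER = "user"

# Condensed obligation summaries drawn from the tables above;
# not an exhaustive restatement of the law.
HIGH_RISK_OBLIGATIONS = {
    AIRole.DEVELOPER: [
        "AI risk management program",
        "training/testing/operating data management",
        "technical documentation and activity logs",
        "human oversight and intervention design",
        "transparency and incident reporting",
    ],
    AIRole.DEPLOYER: [
        "operate and monitor within intended purpose",
        "data safety, security, and human oversight",
        "compliance with technical standards",
        "transparency and incident reporting",
    ],
    AIRole.USER: [
        "follow operating procedures and safety measures",
        "no unlawful alteration of system functions",
        "report incidents to providers",
    ],
}
# Developers and providers share the same high-risk duties in the table.
HIGH_RISK_OBLIGATIONS[AIRole.PROVIDER] = HIGH_RISK_OBLIGATIONS[AIRole.DEVELOPER]

def obligations_for(tier: RiskTier, role: AIRole) -> list[str]:
    """Return the summarized obligations for a given tier/role pairing."""
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS[role]
    # Medium- and low-risk systems carry lighter, transparency-focused duties.
    if role is AIRole.USER:
        return ["label AI-generated content", "notify incidents"]
    return ["transparency and incident reporting"]
```

Structuring obligations this way lets a governance team diff their current controls against each tier/role pairing rather than re-reading the statute for every new system.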

Supporting Governance Measures in the Vietnam AI Law

The Vietnam AI Law establishes several national frameworks to strengthen AI governance, including:

  • National AI Strategy: Drives AI development suitable for Vietnam. Subject to periodic review every three years or upon significant technological or market developments.
  • National AI Infrastructure: A unified, open, and secure ecosystem supporting AI development and operations. High-risk AI systems must be deployed here to ensure safety, security, and control.
  • National AI Database: Serves as a key part of the National AI Infrastructure, managing datasets used for AI training, testing, evaluation, and development to promote innovation while ensuring transparency and effective state oversight.
  • AI Sandbox: Supports testing for conformity assessments, with results informing adjustments to compliance obligations.
  • National AI Ethics Framework: Provides guidance for standards, technical regulations, sector-specific guidance, and incentive policies for safe, trustworthy, and responsible AI, with voluntary application encouraged.

Violators of the Vietnam AI Law will be subject to administrative sanctions or criminal liability by the Vietnamese government.

How Does the Vietnam AI Law Overlap with ISO 42001?

The Vietnam AI Law emphasizes protecting human rights, privacy, and national security; ensuring transparency, accountability, and human control of AI systems; and promoting innovation and competency in AI. The following ISO 42001 components complement these goals and objectives:

  • Governance & Accountability: AI Roles & Responsibilities
    • Clause 4.1 - Understanding the organization and its context (e.g., definitions of the AI system roles)
    • Clause 4.2 - Understanding the needs and expectations of interested parties (e.g., Vietnamese government, AI affected parties, etc.)
    • Clause 5.1 - Leadership and commitment
    • Clause 5.3 and Annex A.3.2 - AI roles, responsibilities, and authorities
  • Risk-Based AI Management & AI Performance Evaluation
    • Clause 6.1 - Actions to address risks and opportunities
    • Clause 8.2 - AI risk assessment
    • Clause 8.3 - AI risk treatment
    • Clause 9.2 - Internal audit (e.g., conformity assessment)
  • Transparency: AI Documentation & Logging
    • Clause 7.5 - Documented information
    • Annex A.6.2.6 - AI system operation and monitoring
    • Annex A.6.2.8 - AI system recording of event logs
  • Operational Controls & Monitoring: AI Development, Data Management, & Incident Communication
    • Annex A.6.2.4 - AI system verification and validation (e.g., AI sandbox testing)
    • Annex A.7 - Data for AI systems (e.g., acquisition of data, quality of data, data provenance, and data preparation)
    • Annex A.8.3 - External reporting
    • Annex A.8.4 - Communication of incidents
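The clause mapping above can double as a gap-analysis checklist. The sketch below is a hypothetical crosswalk a readiness team might maintain: the theme labels and the `open_gaps` helper are our own illustration, and the clause identifiers simply restate the list above with titles abbreviated.

```python
# Illustrative crosswalk between Vietnam AI Law governance themes and
# the ISO 42001 clauses/controls listed above (identifiers only).
CROSSWALK = {
    "governance & accountability": ["4.1", "4.2", "5.1", "5.3", "A.3.2"],
    "risk management & evaluation": ["6.1", "8.2", "8.3", "9.2"],
    "transparency & logging": ["7.5", "A.6.2.6", "A.6.2.8"],
    "operational controls & monitoring": ["A.6.2.4", "A.7", "A.8.3", "A.8.4"],
}

def open_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """For each theme, return the mapped clauses not yet implemented;
    themes with full coverage are omitted from the result."""
    return {
        theme: [c for c in clauses if c not in implemented]
        for theme, clauses in CROSSWALK.items()
        if any(c not in implemented for c in clauses)
    }
```

For example, a team that has implemented all governance clauses plus 6.1, 8.2, and 8.3 would see only 9.2 (internal audit) remaining under risk management, with the transparency and operational themes still fully open.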

How ISO 42001 Certification Can Support Vietnam AI Law Compliance

Being certified against the ISO 42001 standard provides organizations with a structured and internationally recognized foundation for responsible AI governance. It requires companies to formalize policies, controls, and oversight mechanisms that align closely with the Vietnam AI Law’s expectations, particularly for high-risk AI systems subject to periodic conformity assessments.

ISO 42001 supports regulatory readiness by requiring organizations to establish documented processes, define clear roles and responsibilities across AI lifecycles, maintain audit trails, and operationalize transparency, monitoring, and incident response procedures.

ISO 42001 certification signals to external parties and stakeholders that an organization is committed to accountable, continuously improving AI governance practices. This external validation provides a competitive advantage in the AI marketplace, particularly for multinational organizations operating across jurisdictions with evolving AI regulations.

Implementing ISO 42001 Clauses 4-10 and the Annex A controls helps companies develop a governance framework for how AI systems are designed, deployed, monitored, and maintained. ISO 42001 helps embed responsible AI practices into ongoing operations, positioning organizations to adapt as regulatory requirements continue to emerge.

Preparing for the Future of AI Governance in Vietnam

Vietnam’s AI Law signals a broader shift toward formalized, enforceable AI governance frameworks across global markets. It introduces clear obligations, defined risk tiers, and structured enforcement mechanisms that will reshape how AI systems are governed within the country. For high-risk AI systems in particular, conformity assessments and transparency requirements elevate the importance of proactive compliance planning.

Implementing a comprehensive AI management system aligned with ISO 42001 can help organizations operationalize these requirements, strengthen AI governance, reduce regulatory uncertainty, and demonstrate ongoing accountability.

As AI regulation continues to evolve globally, organizations that embed principled governance practices today will have increased regulatory readiness to meet Vietnam’s compliance deadlines while enabling sustainable AI growth across jurisdictions.

To learn more about how to develop your AI governance roadmap now to prepare for evolving regulatory expectations, contact us today. In the meantime, you can also explore our additional AI resources.

About Jack Nguyen

Jack Nguyen is a Senior Associate with Schellman based in Atlanta, Georgia. Before joining the firm in 2021, Jack worked as a Senior Analyst for risk3sixty specializing in IT Audit & Cyber Risk Advisory, and as a Project Management Associate for Ernst & Young specializing in SAP projects. He now has over 5 years of experience serving clients in various industries—including high-growth tech companies' information security and compliance programs, IBM development/testing/incident resolution, SAP landscape management, and EY project management office—and holds the following relevant certifications: Certified Associate in Project Management, Certified Information Systems Auditor, CompTIA Security+, PECB Certified ISO/IEC 27001 Lead Auditor, Certificate of Cloud Security Knowledge, PECB Certified ISO 9001 Lead Auditor, PECB Certified ISO/IEC 42001 Provisional Auditor, and Certified Information Systems Security Professional. Jack is now focused primarily on the ISO practice for organizations across various industries.