
A Global Snapshot of AI Laws and How Compliance with ISO 42001 Can Help

Artificial Intelligence | ISO 42001

Published: May 13, 2025

As artificial intelligence becomes increasingly integrated into everyday business operations, the need for its responsible development and use continues to grow. From bias and fairness to data privacy and security concerns, the risks associated with AI are driving governments around the world to introduce new and evolving legislation aimed at ensuring its ethical and safe deployment.

However, navigating this fast-evolving regulatory landscape presents significant challenges for organizations seeking to adopt AI responsibly and at scale. To help address these challenges, the world’s first international AI Management System (AIMS) standard was introduced. Known as ISO 42001, this framework provides organizations with structured guidance for implementing, maintaining, and continuously improving responsible AI practices. 

In this article, we’ll provide a global snapshot of current AI laws, best practices for preparing for emerging AI regulations, and insight into how compliance with ISO 42001 can help your organization align with AI laws while building trustworthy, transparent, and accountable AI systems. 

Current AI Laws Around the World 

Numerous AI laws and regulations designed to guide the responsible development, deployment, and use of AI have emerged around the world. These legal frameworks reflect regional priorities but share the common goal of promoting transparent, accountable, and ethical AI practices.

Colorado AI Act 

The Colorado AI Act (SB24-205) will go into effect on February 1, 2026, with the intention of protecting consumers who interact with AI systems. More specifically, the aim is to protect consumers from potential algorithmic discrimination within high-risk systems. The term "high-risk system" appears throughout different AI regulations; in this case, it describes an AI system that, when deployed, makes or is a significant factor in making a consequential decision, such as decisions about employment status, educational enrollment, or financial lending.

It is also important to note that within this bill, consumers are defined as Colorado residents. The bill divides its requirements between developers and deployers of high-risk AI systems, both of which must be doing business within the State of Colorado.

Illinois Human Rights Act 

On January 1, 2026, an amendment to the Illinois Human Rights Act (HB3773) will take effect, adding language pertaining to the use of artificial intelligence. The amendment aims to protect employees and candidates when AI is used in employment decision-making, ensuring that discrimination does not take place. Relevant employment decisions include, but are not limited to, recruitment and promotion. The amendment specifically calls out the use of employee race and zip codes within AI systems used to make such decisions, and it requires employers to provide notice that AI is being used to make them.

Utah AI Policy Act 

Similar to the Colorado AI Act, Utah implemented a bill that took effect on May 1, 2024, protecting consumers interacting with AI.  

There are two main takeaways from the Utah AI Policy Act: 

  1. Organizations or individuals using AI as part of activities regulated by the Utah Division of Consumer Protection must disclose to consumers, if asked, that they are interacting with generative AI.
  2. Organizations or individuals using AI as part of a regulated occupation (one regulated by the Department of Commerce that requires a license to practice) must disclose when a person is interacting with generative AI.

For regulated occupations, disclosures must be made verbally at the start of the interaction if it is oral or through electronic messaging if the interaction is written.   

South Korea AI Basic Act 

With an enforcement date of January 22, 2026, South Korea will become the second jurisdiction to enact comprehensive AI legislation, following the European Union. The South Korea AI Basic Act requires organizations to maintain transparency and safety in their development of AI and provision of AI services. It is important to note that the regulations apply to all organizations conducting business within the South Korean market, including those located outside the country, which must comply through a domestic representative.

The act defines requirements for organizations utilizing high-impact AI systems, which are any systems involved in an industry that has a significant impact on human life, human rights, and/or physical safety. A few of these requirements include conducting impact assessments of the high-impact AI systems, establishing a risk management plan, and retaining documentation to prove system safety and reliability. 

EU AI Act 

Lastly, we have the regulation leading the way: the European Union's AI Act (EU AI Act), which took effect August 1, 2024. The EU AI Act takes a risk-based approach to regulating AI systems: low-risk systems are subject to voluntary requirements, high-risk applications must adhere to compliance requirements, and systems posing unacceptable risk are banned altogether. The EU AI Act defines high-risk systems as those capable of negatively impacting human safety or fundamental rights, a definition that served as the inspiration for the South Korea AI Basic Act.

Any organization that develops a high-risk AI system, or provides a service using such a system, within the EU is subject to this legislation. This includes organizations that are not physically located within the EU but still conduct business there.

High-risk AI system providers must meet requirements such as developing a risk management system, conducting data governance activities on the data sets used within the system, and maintaining technical documentation to demonstrate compliance. The act also defines requirements for providers of general-purpose AI (GPAI) models.

Strategies to Remain Prepared for Emerging AI Regulations 

As you can see, several AI regulations have already been introduced around the world, with even more pending enforcement. Keeping up with the nuances of each law across different countries and regions can become overwhelming, and what satisfies a requirement of one act may not satisfy the mandates of another. However, there are numerous measures you can take to remain prepared for compliance with emerging AI laws and frameworks.

1. Assign a designated member of your compliance team to track, lead, and deliver AI governance efforts 

It is highly beneficial to designate a specific individual on your compliance team to track AI governance efforts and ensure your organization aligns with the requirements wherever you conduct business. There is immense value in having a subject matter expert (SME) who can be called on when questions arise about a specific regulation, when your organization plans to enter a new market and needs a regulatory evaluation, or when stakeholders have questions about how the organization is meeting new or existing requirements.

2. Proactively conduct risk management assessments for AI impact and use on a regular basis

Another best practice is to implement controls for conducting AI-specific impact and risk assessments as early in the AI life cycle as possible. Rather than waiting until you are attempting to meet regulatory requirements and standards, proactively implementing these efforts early on benefits your organization's security posture and lessens the burden of scrambling to meet requirements later. It is also important to conduct these assessments on a regular basis, such as annually, so that the results do not grow stale and new considerations are consistently evaluated.

3. Adopt processes that define and disclose AI use to consumers

A common theme found throughout all five of the regulations outlined above is the need for end users to be made aware of the fact they are using AI. Some regulations may include specific guidelines for how organizations must inform users, but typical methods of navigating disclosure of AI use to consumers include: 

  • Having an AI chatbot clearly and directly inform the end user at the beginning of the conversation that they are interacting with an AI system and not a human.
  • Applying watermarks to AI-generated content such as images or videos.
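As a concrete illustration of the first approach, here is a minimal, hypothetical sketch of a chatbot session that surfaces an AI-use disclosure before its first message. The disclosure wording and function names are illustrative assumptions, not language taken from any of the laws above:

```python
# Hypothetical sketch: presenting an AI-use disclosure at conversation start.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Type 'agent' at any time to request a human representative."
)

def start_conversation(greeting: str) -> list[str]:
    """Open a chat session with the disclosure shown before the greeting."""
    return [AI_DISCLOSURE, greeting]

for message in start_conversation("Hi! How can I help you today?"):
    print(message)
```

The key design point is that the disclosure is emitted unconditionally and first, so no conversation path can reach the user without it.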

It is also important to define how AI is used within the organization for complete transparency. For example, it's essential to clearly disclose to customers what data of theirs is being collected and utilized by an AI system and what guardrails, such as data masking or tokenization, are in place to ensure the security of their information.
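To make the data-masking guardrail concrete, here is a minimal sketch of scrubbing personally identifiable information from text before it reaches an AI system. The patterns and function name are illustrative assumptions, not a specific product's API; a production system would cover many more PII types:

```python
import re

# Hypothetical PII patterns: email addresses and US SSN-like strings.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace emails and SSN-like strings with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return SSN_RE.sub("[SSN]", text)

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) asked about her order."
print(mask_pii(prompt))
# → Customer [EMAIL] (SSN [SSN]) asked about her order.
```

Tokenization follows the same shape, except the placeholder is a reversible token stored in a secure vault rather than a fixed string.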

4. Address ethical considerations and bias mitigation

Implementing ethical considerations and reducing bias within AI models is the cornerstone of creating a trustworthy AI system. For example, as described in the Illinois Human Rights Act above, if zip codes are included in the dataset used to train a hiring model, the model could inadvertently learn to disregard applicants based on where they live rather than the skill set they offer the organization.

Organizations should ensure that the datasets used for AI models contain data relevant to the goal the system aims to achieve. They should also document how they verify that training datasets are not biased, along with any testing that takes place during the life cycle. If biased results are discovered, the organization should determine whether the source is the model itself or the data being used, and then respond with appropriate testing, retraining, and/or retuning.
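One simple, concrete control in this direction is removing protected attributes and likely proxies (such as zip codes) from training records before they reach the model. The field names below are hypothetical examples, and real bias mitigation requires far more than field removal, but the sketch shows the idea:

```python
# Hypothetical sketch: strip protected attributes and proxy fields
# (e.g. zip code) from training records before model training.
SENSITIVE_FIELDS = {"race", "zip_code", "gender", "date_of_birth"}

def scrub_record(record: dict) -> dict:
    """Return a copy of the record with sensitive/proxy fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {
    "years_experience": 7,
    "certifications": ["CISSP"],
    "zip_code": "60601",  # potential proxy for protected attributes
}
print(scrub_record(applicant))
# → {'years_experience': 7, 'certifications': ['CISSP']}
```

Because proxies can also hide in remaining fields, this step should be paired with the documented bias testing described above rather than treated as sufficient on its own.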

How Compliance with ISO 42001 Can Help You Prepare for Emerging AI Laws 

As an organization looking to comply with AI laws, it can be intimidating to identify the best ways to keep up with the latest AI trends and best practices. Pursuing compliance with the ISO 42001 framework can be a huge asset here. The standard demonstrates to stakeholders that an organization establishes, implements, maintains, and continually improves its AIMS. As the first international standard for responsible AI development and use, it was created to work alongside the emerging laws of the time, and many new laws reference it as a benchmark. For example, the Colorado AI Act specifically calls out the ISO 42001 standard as a baseline for risk management of AI systems.

ISO 42001 also requires AI impact assessments to be conducted. These assessments should evaluate the potential negative impacts on individuals and societies associated with the development and use of AI systems, and their results should be retained and made available to stakeholders as applicable. The ISO 42001 framework is not only an important tool for an organization to showcase that its AI systems are responsible and secure; laws around the world are also acknowledging its maturity and stature. It only makes sense that as new laws develop, they will look to the ISO 42001 standard as well.

If you’re ready to pursue ISO 42001 compliance, or have questions about the requirements or certification process, Schellman can help. Contact us today and we’ll get back to you shortly. In the meantime, discover other helpful AI compliance insights in these additional resources:  

About Jerrad Bartczak

Jerrad Bartczak is a Senior Associate – AI within the AI Practice at Schellman, based in New York. He specializes in AI assessments including ISO 42001 and HITRUST + AI, while staying current on worldwide AI compliance and governance developments. He also possesses in-depth compliance knowledge cultivated through years of experience conducting HITRUST, SOC 1, SOC 2, DEA EPCS and HIPAA audits. Jerrad maintains CISSP, CISA, CCSFP, CCSK and Security+ certifications.