

Understanding U.S. AI Policy: Executive Orders, the Big Beautiful Bill, & America’s AI Action Plan


Published: Aug 11, 2025

The global push to both regulate and strategically accelerate the development of artificial intelligence (AI) has gained momentum over the past year, resulting in a diverse landscape of evolving frameworks, policies, and executive directives. In the United States, this dual focus on oversight and innovation has translated into a series of executive orders and formal federal AI governance initiatives.  

Notably, 2025 marks a turning point with the passage of the Big Beautiful Bill (BBB) and the White House's America’s AI Action Plan. These efforts, combined with ISO 42001, are rapidly shaping the direction of AI compliance. As a leading compliance assessment firm and the first ANAB-accredited Certification Body for ISO 42001, we keep close track of the latest developments in AI regulation to help our clients navigate evolving expectations and align their AI strategies with responsible and secure innovation.  

In this article, we’ll break down the U.S. AI executive orders, the BBB, and America’s AI Action Plan, exploring how these directives are defining federal guidelines and oversight of AI across the public sector. We’ll also detail the key role of ISO 42001 in AI governance and regulatory preparedness. 

The U.S. Executive Orders on AI 

Over the past several years, the U.S. government has issued a series of executive orders (EOs) aimed at establishing a foundation for the ethical and strategic deployment of AI. These directives reflect the nation’s evolving priorities, starting with trust and transparency and now emphasizing global leadership, innovation, and infrastructure resilience, as outlined in the EOs below:  

EO 13960 (2020): Promoting the Use of Trustworthy AI in the Federal Government  

Signed in December 2020, EO 13960 laid the groundwork for federal AI adoption by emphasizing the importance of trustworthy AI through mandated principles such as transparency, accountability, and reliability. It directed federal agencies to ensure AI use is lawful, purposeful, and performance-driven. The order encouraged adherence to ethical AI standards such as accuracy, reliability, and non-discrimination, and pushed for open collaboration, data sharing, and workforce readiness.  

EO 14110 (2023): Safe, Secure, and Trustworthy Development and Use of AI (Rescinded) 

Signed in October 2023, EO 14110 represented a significant expansion in federal AI policy focused on the risks and safeguards associated with AI, particularly in the context of national security, civil rights, and innovation. It tasked agencies with implementing guardrails and safety testing standards around the development, procurement, and deployment of AI, particularly in high-risk contexts and with a focus on preventing algorithmic discrimination. 

However, it’s important to note that on January 20, 2025, the incoming administration signed EO 14148: Initial Rescissions of Harmful Executive Orders and Actions, which rescinded EO 14110, characterizing it as imposing overregulation and barriers to AI innovation. EO 14110 was formally revoked with the passage of the BBB in July 2025.  

EO 14179 (2025): Removing Barriers to American Leadership in AI  

Signed on January 23, 2025, EO 14179 replaced EO 14110 and focuses on eliminating regulatory, structural, and institutional roadblocks that hinder innovation in the AI sector. It streamlines research funding processes and procurement and reassesses export controls to ensure U.S. competitiveness in AI talent and technologies.  

By removing these barriers, the order aims to reduce administrative burdens on AI startups engaging with federal programs and to empower both public and private sectors to accelerate the development and deployment of AI systems, reflecting a shift toward enabling agile innovation, research, and implementation. 

EO 14141 (2025): Advancing U.S. Leadership in AI Infrastructure 

Issued on January 14, 2025, this EO emphasizes the need for building robust infrastructure to support AI at scale. It prioritizes expanding access and investment in shared computing resources, such as national AI research cloud initiatives, strengthening the AI supply chain, and advancing public-private collaboration. 

EO 14141 also directs federal agencies to coordinate on cloud capacity and data-sharing frameworks, enabling more rapid and secure AI research and deployment. The goal is to ensure that U.S. governmental and commercial institutions have the foundational infrastructure required to lead in AI innovation on a global scale. To ensure long-term competitiveness, the order directs the Department of Defense and Department of Energy to facilitate these efforts by the end of 2027. 

While these executive orders provide critical strategic direction for federal AI priorities, the Big Beautiful Bill transforms these principles into law. Many aspects of EO 14110 and EO 14141 are reflected, repackaged, and enforced in BBB mandates—marking a landmark legislative shift in U.S. AI governance from policy suggestions to legal compliance obligations.  

What the Big Beautiful Bill Means for AI Governance 

The BBB cements a pro-innovation and deregulatory agenda, emphasizing American leadership, free enterprise, and reduced bureaucratic friction in the development and deployment of AI systems. Key provisions include: 

  • Legal Mandates for High-Risk AI Systems: Establishes compliance obligations for systems impacting national security, civil liberties, and public safety. 
  • Open-Source Protections: Shields developers of open-weight foundation models from liability, while encouraging voluntary self-certification practices. 
  • Federal Procurement Reform: Harmonizes and streamlines AI procurement criteria across agencies, reducing barriers for private sector AI vendors. 
  • Establishment of the AI Oversight Board: Creates a centralized entity to oversee federal agency compliance, handle public complaints, and issue implementation guidance. 

The BBB repositions accountability and transparency as shared responsibilities between agencies and innovators. It also compels agencies to revise their internal guidance to reflect this legal shift, making many of the formerly voluntary practices under EO 14110 and the OMB’s guidance now mandatory. This law established the backbone for the supplemental implementation blueprint that followed: America’s AI Action Plan. 

America’s AI Action Plan Explained 

Following the passage of the BBB and under the authority of EO 14179, the White House unveiled “Winning the AI Race: America’s AI Action Plan” in July 2025. This operational roadmap outlines how federal agencies will implement the BBB’s mandates while advancing the broader national agenda for AI leadership. The plan centers on three strategic pillars: 

  • Accelerating innovation 
  • Building American AI infrastructure 
  • Leading globally in diplomacy and national security 

To support those goals, the plan defines over 90 concrete federal policy actions, including: 

  • Expediting permits for data centers, semiconductors, and AI infrastructure 
  • Scaling access to high-performance computing through federal cloud investments 
  • Streamlining export controls to ensure U.S. AI competitiveness 
  • Establishing an interagency AI Test and Evaluation (T&E) ecosystem 
  • Promoting open-source and open-weight model development 
  • Advancing AI workforce development and digital literacy 
  • Strengthening cyber and physical AI resilience 
  • Countering AI influence from adversarial nations like China 
  • Promoting biosecurity, election integrity, and content authenticity 

In contrast to earlier plans focused on precautionary approaches, America’s AI Action Plan operationalizes a shift toward deregulation, pro-innovation, public-private collaboration, and infrastructure-first governance. It also emphasizes U.S. diplomatic leadership in shaping competitive AI policy. With this legal foundation in place, federal agencies have now turned their focus to implementation. 

AI Guidelines for U.S. Federal Agencies  

In parallel with executive orders, the BBB, and America’s AI Action Plan, federal agencies continue to develop and update AI-specific guidelines, policies, and processes that reflect both the deregulatory emphasis and continued accountability mandates to align internal practices with national AI priorities.  

Many of these agency-level guidelines originally drew from earlier federal actions, including the now-rescinded EO 14110, and are actively being revised in light of current directives so that they strike a new balance between risk management and innovation enablement. 

Key agency guidance includes: 

  • OMB AI Implementation Guidance (2024): The Office of Management and Budget (OMB) issued landmark guidance requiring agencies to appoint Chief AI Officers (CAIOs), maintain AI use case inventories, and implement safeguards for safety-impacting or rights-impacting AI systems. Although updates are underway and may modify some of the original requirements, core responsibilities like CAIO appointments and impact assessments remain in place under the BBB. 
  • NIST AI Risk Management Framework (AI RMF): While voluntary, NIST’s AI RMF continues to be a foundational tool and key reference for agencies seeking to identify, assess, and manage risks associated with AI systems. It encourages agencies and vendors to assess systems through the lens of fairness, explainability, and robustness, supporting alignment with federal and international best practices. 
  • Department-Specific Policies: Agencies like the Department of Defense, Department of Homeland Security, and the General Services Administration have issued supplemental guidance tailored to their mission areas, often addressing procurement standards, ethical concerns, algorithmic bias, cyber resilience, and human oversight. 

These guidelines are now being reinforced through the BBB, which transforms key elements of OMB and NIST frameworks into mandatory compliance requirements. Additionally, America’s AI Action Plan further provides a centralized roadmap for implementation, prioritizing interagency coordination, infrastructure readiness, and strategic alignment with U.S. innovation goals. 

As a result, federal AI guidance is no longer just a matter of best practice—it’s becoming an enforceable baseline, shaping expectations for both public and private organizations engaging with government AI systems. 

The Role of ISO 42001 in AI Governance 

As U.S. federal agencies adapt to evolving AI mandates, ISO 42001 has become a valuable framework for organizations seeking to operationalize ethical and responsible AI practices. Published in late 2023, ISO 42001 is the world’s first international standard focused specifically on AI management systems (AIMS). 

While policies like the BBB and America’s AI Action Plan outline principles and requirements for AI governance, ISO 42001 provides a practical framework for implementing those ideals in day-to-day operations by requiring organizations to: 

  • Establish defined AI governance structures 
  • Identify, assess, and mitigate AI-related risks 
  • Ensure data quality and integrity 
  • Implement lifecycle monitoring of AI systems 
  • Document AI system purposes, limitations, and controls 

These components closely mirror the risk management and documentation requirements emphasized in U.S. federal guidance, including OMB’s AI implementation memos and agency-specific compliance expectations. Notably, ISO 42001 also aligns well with the risk controls, impact mitigation, and human oversight obligations codified in the BBB.  

Readiness for Future AI Oversight with ISO 42001 

Adopting ISO 42001 signals a mature and proactive AI governance posture. Though not mandatory, it offers an internationally recognized, audit-ready framework that positions organizations to future-proof their AI operations, enabling them to adapt quickly to new and evolving AI regulation.  

For example, having an AI management system in place makes it easier to:  

  • Respond to AI impact assessment mandates 
  • Demonstrate responsible AI practices to federal customers 
  • Support documentation and transparency requirements 
  • Implement continuous improvement and incident response protocols 
  • Ensure AI systems align with human-centric and non-discriminatory design principles 

In short, ISO 42001 is more than a compliance tool—it’s a strategic enabler. Whether you're a federal agency, a government contractor, or a private-sector innovator, adopting this framework strengthens your readiness for both emerging oversight and sustained innovation. Developments in federal legislation like the BBB and strategic initiatives under America’s AI Action Plan raise the bar for AI governance, making frameworks like ISO 42001 essential for staying ahead of regulatory demands. 

The Path Forward in AI Governance 

The 2025 executive order updates, paired with the release of the BBB and America’s AI Action Plan, all signal a clear pivot in national AI strategy—emphasizing scalability and global competitiveness. While the focus has shifted toward fostering a more agile, innovation-friendly regulatory environment, federal oversight remains. 

At the same time, state-level AI regulations continue to enforce robust AI risk assessment, bias mitigation, and transparency measures, particularly around consumer-facing AI tools. Ultimately, this evolving and expanding landscape suggests that organizations across industries and sectors must remain proactive and vigilant in developing and deploying AI systems that are both effective and accountable, balancing innovation with governance.  

Implementing a governance framework like ISO 42001 is a strong first step. If you’re ready to pursue ISO 42001 certification, or have questions about the process or requirements, Schellman can help. Contact us today and we’ll get back to you shortly.  

About Danny Manimbo

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.