
Cross-Border AI Governance and Jurisdictional Conflicts

Artificial Intelligence

Published: Oct 6, 2025

If you thought developing and implementing your AI system was a challenge, just wait until you attempt to ensure your AI system complies with conflicting international laws simultaneously. 

Your AI doesn't just use data; it consumes it like a hungry teenager at a buffet. That appetite creates a problem when the same AI system operates across multiple regulatory jurisdictions and becomes subject to conflicting legal requirements. Imagine your organization trains its AI in California, deploys it in Dublin, and serves users globally. You now operate in multiple jurisdictions, each imposing its own regulatory requirements on your organization.

Welcome to the fragmentation of cross-border AI governance, where over 1,000 state AI bills introduced in 2025 meet the EU's comprehensive regulatory framework, creating headaches for businesses operating internationally.  

As compliance and attestation leaders, we’re well positioned to offer advice on how to face this challenge as you establish your AI governance roadmap. In this article, we’ll provide an overview of current global AI regulations, examine where those regulations collide, discuss the business impact of global AI regulation, and explain how to act now in the interest of cross-border AI accountability.

Concepts to Know About Cross-Border AI Governance 

Before we dive into detailing the cross-border AI regulations in place, it's important to understand the following concepts:  

  • Cross-Border AI Systems: Artificial intelligence that operates across more than one legal jurisdiction 
  • Data Localization: Legal or regulatory restrictions mandating that specific data (e.g., financial, personal, or government data) be stored and processed within a country’s borders 
  • Jurisdictional Conflicts: Situations where complying with one set of laws or regulations prevents you from simultaneously complying with another 

Regulatory Arbitrage: A Reality Check 

United States: A Complex Mixture of AI Regulation 

There is no single, unified legal framework for AI governance within the US; much like state taxes, the rules governing the code and practices behind AI vary by jurisdiction. That said, 260 AI-related measures were introduced in the US Congress in 2025, of which 22 passed. And the rise in AI legislation shows no sign of slowing: states considered almost 700 AI-related legislative proposals in 2024, with activity accelerating in 2025. 

This mixture of emerging regulation creates a patchwork nightmare. Colorado pioneered comprehensive AI legislation with a risk-based approach, while California pursued a different strategy with multiple targeted laws addressing election deepfakes, AI-generated content warnings, digital replicas of performers, and training-data disclosure after their comprehensive bill was vetoed. 

EU: One Ring to Rule Them All 

The EU took the opposite approach with its AI Act, which offers a comprehensive, prescriptive framework that became the world's first set of rules on AI. More importantly for US companies, the EU AI Act applies not only to organizations based in the EU but also to US and other non-EU organizations whose AI systems or outputs are used within the EU. 

The Act's timeline is aggressive, with prohibitions on "unacceptable risk" AI practices becoming legally binding across all 27 EU member states on February 2, 2025, and additional requirements rolling out through 2027. 

China: State-Controlled with Global Ambitions 

China combines national, provincial, and local regulations with an emphasis on state power and cultural values. Chinese companies need government approval to sell AI technologies, including speech and text recognition and personalized content recommendation systems. This creates another layer of complexity for global operations. 

When Laws Collide: Examples of Real-World Battle Royales 

The OpenAI Data Dilemma: David vs. Goliath vs. GDPR 

In May 2025, a US federal court issued a preservation order requiring OpenAI to retain all ChatGPT conversation logs, affecting over 400 million users globally. This introduced a structurally irreconcilable conflict: compliance with a US preservation order may directly breach Articles 5 and 17 of the GDPR. 

Now, consider the following: the right to erasure, rooted in the 1995 Data Protection Directive and now codified in Article 17 of the GDPR, sits uneasily alongside the broad discovery obligations of US litigation. Because AI systems must process massive volumes of personal data to operate at all, this conflict becomes exponentially worse. 

The business impact is severe. Companies using vendors like OpenAI face data disclosure requirements that directly conflict with the longstanding privacy commitments made between companies and their AI vendors. Your enterprise contract promising data deletion may now be worthless in litigation. 

TikTok: The Multi-Jurisdictional Nightmare 

ByteDance walked into a perfect regulatory storm: US national security concerns requiring data localization, Chinese export approval requirements for AI technologies, and EU GDPR compliance for European users—all for the same platform. 

Clearview AI: When Enforcement Goes Global 

The Dutch Data Protection Authority fined Clearview AI €30.5 million for violating the GDPR, despite Clearview's argument that it doesn't provide services in the EU. This demonstrates how extraterritorial enforcement of privacy laws can reach companies that believe they're safely outside a jurisdiction's reach. 

The Business Impact: More Than Compliance Theater 

Healthcare compliance was already complicated, but now organizations face the perfect storm of regulatory complexity. Most frameworks classify AI as Software as a Medical Device (SaMD), but definitions of personally identifiable information vary by jurisdiction, and the information collected by devices and AI changes over time. 

Add HIPAA compliance to GDPR requirements and state-specific AI regulations, and you have a compliance matrix that would put any compliance, governance, or security professional on edge. 

The Outsourcing Fallacy 

Here's a reality check: you cannot outsource legal culpability. You're like a dry cleaner who sends shirts to a contractor for repairs. If that contractor ruins a shirt or creates new problems, the customer still holds you accountable, not the contractor. Data management works the same way: the obligation to your customers stays with you. 

What C-Suite Leaders Can Do Today: Action Over Analysis Paralysis 

1. Immediate Risk Assessment (Over the Next 30 Days)

  • Inventory your AI exposure: 
    • Map AI systems affecting international users 
    • Identify AI systems that might touch the EU market and assess their risk level 
    • Document data flows and retention policies 
    • Understand which jurisdictions' laws apply to your systems 
  • Conduct a red-flag analysis to look for systems that: 
    • Collect psychological or behavioral data 
    • Target specific demographic groups 
    • Make automated decisions affecting access to services 
    • Process biometric information 
    • Process EU resident data (triggering EU AI Act requirements) 
    • Handle protected health information 
    • Make "consequential decisions" under various state laws 
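To make the inventory actionable, the red-flag pass above can be captured in a short script. This is a minimal sketch only: the system names, data categories, and flag criteria are hypothetical illustrations, not a definitive rule set, and a real program would pull from your asset inventory and legal team's criteria.

```python
from dataclasses import dataclass

# Hypothetical red-flag data categories drawn from the checklist above.
RED_FLAG_DATA = {"psychological", "behavioral", "biometric", "health"}

@dataclass
class AISystem:
    name: str
    jurisdictions: set        # where the system operates or serves users
    data_categories: set      # kinds of data it processes
    automated_decisions: bool = False  # affects access to services?

def red_flags(system: AISystem) -> list:
    """Return human-readable red flags for one AI system."""
    flags = []
    for category in sorted(system.data_categories & RED_FLAG_DATA):
        flags.append(f"processes {category} data")
    if "EU" in system.jurisdictions:
        flags.append("serves EU residents (EU AI Act applies)")
    if system.automated_decisions:
        flags.append("makes automated decisions affecting service access")
    return flags

# Example inventory (fictional systems).
inventory = [
    AISystem("chat-support", {"US", "EU"}, {"behavioral"}, automated_decisions=True),
    AISystem("internal-search", {"US"}, set()),
]

for system in inventory:
    print(system.name, "->", red_flags(system) or "no red flags")
```

Even a toy pass like this forces the questions the checklist asks: which systems touch which jurisdictions, and which data categories trigger which regimes.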

2. Technical Architecture Solutions (Over the Next 90 Days)

  • Privacy-preserving technologies:  
    • Implement federated learning to avoid cross-border data transfers 
    • Deploy differential privacy techniques 
    • Use privacy-focused modules to ensure compliance with diverse data protection laws 
  • Modular compliance architecture:  
    • Create region-specific AI model versions 
    • Build configurable governance frameworks 
    • Design systems that can adapt to different jurisdictional requirements without complete rebuilds 
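As one illustration of the privacy-preserving techniques listed above, the sketch below adds Laplace noise to an aggregate count before release, which is the core mechanism of differential privacy. The function name and parameters are hypothetical, and calibrating epsilon for a real deployment requires careful analysis; in practice you would use a vetted library rather than hand-rolled noise.

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to epsilon.

    Assumes the query has sensitivity 1 (any one person changes the
    count by at most 1), so the noise scale is 1 / epsilon.
    """
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) via inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon -> stronger privacy -> noisier released answer.
print(dp_count(1_000, epsilon=0.1))
```

The point for cross-border architecture: noisy aggregates like this can leave a jurisdiction while the underlying raw records stay local.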

3. Legal and Organizational Strategies (Ongoing)

  • Representative Appointments:  
    • Providers of high-risk AI systems established outside the EU must appoint, in writing, an authorized representative established in the EU. Don't wait: this requirement is already in effect for many systems.  
    • Legal caveat: This is for non-EU AI providers that place their high-risk AI systems or general-purpose AI models on the EU market. 
  • Unified Governance Frameworks:  
    • Establish AI governance that addresses multiple jurisdictions simultaneously rather than creating separate compliance programs. Organizations can best benefit from a single, focused approach to safeguarding data and addressing AI-related cybersecurity threats. 
  • Contract Redesign: 
    • Add vendor notification requirements for litigation holds. 
    • Include terms requiring notification when data becomes subject to a hold, including a court-mandated hold, and providing opportunities to object to disclosure. 
    • Build in jurisdiction-specific termination clauses. 
    • Plan for conflicting legal requirements scenarios. 

4. Strategic Policy Engagement (6-12 Months)

  • Industry Collaboration:  
    • Support harmonization efforts and mutual recognition agreements. Like-minded governments should work together to implement basic data-transfer agreements that establish guidelines for how companies handle cross-border user-data transfers. 
  • Standards Development: 
    • Engage with international standards bodies working on AI governance harmonization. Technical standards alignment can reduce compliance complexity significantly. 

Looking Ahead: Emerging Challenges You Can't Ignore 

  • Generative AI Explosion: 
    • Foundation models create new categories of cross-border conflicts. Generative AI platforms increasingly rely on large volumes of user-generated content that may qualify as personal data, blurring the boundary between user content and training data. 
  • Enforcement Escalation: 
    • The European AI Office began supervising GPAI models on August 2, 2025, and administrative fines under the Act for non-compliance can reach up to 7 percent of a company's global annual turnover or EUR 35 million. 
  • Quantum Computing Implications: 
    • As quantum computing advances, you should expect new categories of cross-border AI conflicts involving cryptographic standards and data sovereignty. 

Bottom Line: Act Now, Adapt Continuously 

Cross-border AI accountability isn't going away; it's only accelerating. The companies that thrive will be those that treat regulatory complexity as a competitive advantage, not a compliance burden. 

Conversations in your next board meeting should include the following three questions: 

  1. Which of our AI systems create cross-jurisdictional legal conflicts? 
  2. What's our plan for the next wave of AI regulations hitting in 2025-2026? 
  3. How are we turning regulatory complexity into a competitive moat? 

The wild west era of AI deployment is ending. In its place comes a complex but navigable regulatory landscape that rewards preparation, punishes improvisation, and demands that C-suite leaders think globally while acting locally. 

The choice is simple: lead the harmonization effort or get harmonized out of business. Your AI systems—and your shareholders—are counting on you to choose wisely. 

To learn more about AI and its impact on your business, join our AI Summit this November or contact us today. 

About Sully Perella

Sully Perella is a Senior Manager at Schellman who leads the PIN and P2PE service lines. His focus also includes the Software Security Framework and 3-Domain Secure services. Having previously served as a networking, switching, computer systems, and cryptological operations technician in the Air Force, Sully now maintains multiple certifications within the payments space. Active within the payments community, he helps draft new payments standards and speaks globally on payment security.