Services
Services
SOC & Attestations
SOC & Attestations
Payment Card Assessments
Payment Card Assessments
ISO Certifications
ISO Certifications
Privacy Assessments
Privacy Assessments
Federal Assessments
Federal Assessments
Healthcare Assessments
Healthcare Assessments
Penetration Testing
Penetration Testing
Cybersecurity Assessments
Cybersecurity Assessments
Crypto and Digital Trust
Crypto and Digital Trust
Schellman Training
Schellman Training
ESG & Sustainability
ESG & Sustainability
AI Services
AI Services
Industry Solutions
Industry Solutions
Cloud Computing & Data Centers
Cloud Computing & Data Centers
Financial Services & Fintech
Financial Services & Fintech
Healthcare
Healthcare
Payment Card Processing
Payment Card Processing
US Government
US Government
Higher Education & Research Laboratories
Higher Education & Research Laboratories
About Us
About Us
Leadership Team
Leadership Team
Careers
Careers
Corporate Social Responsibility
Corporate Social Responsibility
Strategic Partnerships
Strategic Partnerships

The EU AI Act Passed: What’s Next and What Now

Cybersecurity Assessments | Artificial Intelligence

In a move that now positions the 27-nation bloc as a global leader in regulating artificial intelligence (AI), European Union lawmakers have granted final approval to the Artificial Intelligence (AI) Act, a pioneering legislation that sets a precedent for other jurisdictions grappling with the challenges of AI regulation. 

First proposed five years ago, the EU AI Act garnered overwhelming support from the European Parliament. With 523 votes in favor, 46 against, and 49 abstentions, the legislation—and the significant governance milestone it represents—reflects a broad consensus among policymakers on the need to establish robust rules to regulate the use of AI technologies. 

As a top cybersecurity assessment firm, we have been at the forefront of the latest discussions and developments regarding AI—we know how closely organizations are following the progress of what has become the foremost debate in tech, and we want to help disseminate this latest development.  

In this article, we’ll provide a brief overview of what’s in the EU AI Act and its implications for the rest of the world, as well as details on what you can do right now to get started validating the trustworthiness of your AI systems. 

 

What is the EU AI Act? 

In regulating AI applications, the newly passed EU AI Act takes a risk-based approach—the legislation categorizes AI systems based on their level of risk, ranging from low to high to unacceptable: 

  • Low-risk applications, such as content recommendation systems, are only subject to voluntary requirements and codes of conduct. 
  • High-risk applications, or those that have the capability to negatively affect human safety or fundamental rights, including those used in medical devices and critical infrastructure, face stricter scrutiny and compliance requirements that include adequate risk assessments, logging and monitoring mandates, and human oversight. 
  • Unacceptable risk applications, or those considered a threat to people—like social scoring systems—will be banned outright. 

Importantly, the AI Act also addresses the emergence of generative AI models, which have rapidly evolved to produce lifelike responses, images, and more—though these systems will not be classified as high-risk, developers will have to comply with the Act’s transparency requirements that include the provision of detailed summaries of training data as well as EU copyright law. 

The EU AI Act’s Enforcement Timeline

Though now passed, the expected effective dates for different requirements within the Act are expected to roll out in stages: 

  • April 2024: Expected formal adoption of the law 
  • May 2024: Expected EU AI Act effective date 
  • November 2025: Expected enforcement date for the ban of AI systems posing unacceptable risks 
  • March 2026: Expected effective date for codes of practice 
  • May 2026: Expected date that general-purpose AI systems must meet transparency requirements
  • May 2027: Expected date for the full applicability of the EU AI Act  

As these provisions roll out in stages, it’ll be important to remain vigilant as to what will be required and by when. Enforcement mechanisms, including the establishment of AI watchdog agencies in EU member states, will play a critical role in overseeing compliance and addressing potential violations, which could see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and your firm’s size. 

The Global Implications of the EU AI Act 

Though this regulation will only cover AI systems “placed on the market, put into service or used in the EU,” the passage of the EU AI Act is still expected to have far-reaching influence beyond the borders of the European Union—as the first major set of regulatory guidelines for AI, the legislation is likely to shape further global discussions on AI governance.  

Moreover, governments around the world, including the United States and China, are closely monitoring the EU's regulatory approach and may follow suit with their own initiatives, so—even if your organization is not directly subject to the EU AI Act’s provisions, and especially if it is—it may be a good idea to get started in proving the trustworthiness of your AI systems.  

 

What You Can Do Right Now to Better Secure Your AI Systems 

While international regulatory uncertainty does remain to an extent, there are still proactive measures organizations can take to both better secure their systems and prepare themselves for AI governance: 

  • Guidelines for Secure AI System Development: Released by the UK National Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), these guidelines—when implemented—can help developers reduce system risks before security issues arise.  
  • NIST AI Risk Management Framework (RMF): By following this set of guidelines and best practices, you can better manage the risks associated with AI systems—for further validation of the robustness and security of your AI systems, you can also undergo a comprehensive assessment against these guidelines. 
  • HITRUST + AI Certification: Now that HITRUST has added optional AI risk management requirements, including them within your certification would confirm your AI safeguards regarding sensitive data as well as your mitigation of related cybersecurity threats. 
  • Privacy Impact Assessment (PIA): A PIA specific to your AI system(s) would shed light on the privacy implications of the data that is collected, used, and/or shared within that solution, as well as what ramifications exist at the state, national, and international levels should you fall victim to a breach.  
  • Penetration Test of AI System(s): Using simulated attack vectors that can range from training data poisoning to prompt injection, a penetration test will identify any security weaknesses and unaccounted-for risks within your AI applications. 

Though all these potential avenues feature different parameters and require different levels of effort, each can help better secure your AI and position your organization well for future regulations and current ones, like the EU AI Act. 

 

Looking Forward to a World of Secure AI 

By setting clear rules and standards with the passage of its AI Act, the European Union aims to harness the potential of AI as a tool for societal progress while minimizing risks, safeguarding fundamental rights and values, and ensuring accountability.  

As the global community continues to grapple with the ethical and societal implications of AI, the EU's leadership in this area is poised to shape the future of international AI governance—a future that is quickly evolving.

Organizations need to be ready as this landscape continues to shift, and our dedicated AI services team is prepared to help—contact us today with any questions you may have about your AI security options and how to get started in proving the trustworthiness of your applications. 

About DANNY MANIMBO

Danny Manimbo is a Principal with Schellman based in Denver, Colorado. As a member of Schellman’s West Coast / Mountain region management team, Danny is primarily responsible for leading Schellman's AI and ISO practices as well as the development and oversight of Schellman's attestation services. Danny has been with Schellman for 10 years and has over 13 years of experience in providing data security audit and compliance services.