
What to Know About the New EU AI Act


After 22 grueling hours of negotiations, policymakers within the European Union (EU) have reached a provisional agreement on new rules to govern the most powerful artificial intelligence (AI) models. They’re calling it the EU AI Act, and though the core provisions have been hashed out, disagreements over the Act's law enforcement provisions have led to a recess in the negotiations. 

Though there’s still more to learn about this important milestone in AI regulation, some early takeaways can already be drawn from the provisions. As cybersecurity experts at the forefront of our field, we’ve kept abreast of developments in the EU, and to help you understand the latest, we’ll briefly explain nine things you should know about the new EU AI Act. 

 

What is the EU AI Act? 

As artificial intelligence continues to advance, so does its integration into society. But as our reliance on the technology grows, so do security concerns, as with every new digital tool, prompting action from different governing bodies to ensure that AI is adequately safeguarded. 

Brand-new standards have emerged, like the NIST AI Risk Management Framework; others, like ISO 42001, are still on the way; and existing ones, like HITRUST, have adjusted their requirements to address AI. 

AI has even been addressed by the U.S. federal government through President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, but the EU has now taken things a step further with its AI Act. 

As the world’s first comprehensive legal framework on AI, the EU AI Act “aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field.” 

 

5 Takeaways from the EU AI Act (December 2023)

To achieve these aims, the EU AI Act will establish rules for AI based on its potential risks and level of impact. Here are the key things we know right now about the EU AI Act.  

1. Scope of Regulation: 

As a starting point for applicability, the definition of AI in the EU AI Act closely follows that of the OECD, though not verbatim. 

That being said, free and open-source software will generally be excluded from this regulation's scope unless it: 

  • Poses a high risk;  
  • Is involved in prohibited applications; or  
  • Presents a risk of manipulation.

2. Governance: 

To ensure that applicable systems adhere to the provisions, national competent authorities will oversee AI systems, and the European Artificial Intelligence Board will facilitate consistent application of the law.  

Furthermore, a specific AI Office will be established within the European Commission to enforce foundation model provisions.  

3. Foundation Models: 

Speaking of which, the Act takes a tiered approach to foundation models, categorizing a model as “systemic” if it was trained with computing power above a certain threshold. Criteria the AI Office will consider in designation decisions include: 

  • The number of business users; and  
  • Model parameters. 
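As a rough illustration only, the tiered designation logic described above can be sketched as a simple decision rule. Note that the function name and every threshold value below are hypothetical placeholders for the sake of the example; the Act's actual compute threshold and designation criteria were still being finalized at the time of writing.

```python
def classify_foundation_model(training_flops: float,
                              business_users: int,
                              parameters: int) -> str:
    """Illustrative sketch of a tiered 'systemic' designation rule.

    All thresholds below are hypothetical placeholders, NOT figures
    from the EU AI Act itself.
    """
    # Hypothetical training-compute threshold for automatic designation.
    FLOPS_THRESHOLD = 1e25
    # Hypothetical supplementary criteria the AI Office might weigh:
    # number of business users and model parameter count.
    USERS_THRESHOLD = 10_000
    PARAMS_THRESHOLD = 1_000_000_000

    # Compute above the threshold triggers the designation outright.
    if training_flops >= FLOPS_THRESHOLD:
        return "systemic"
    # Otherwise, supplementary criteria can still inform a designation.
    if business_users >= USERS_THRESHOLD and parameters >= PARAMS_THRESHOLD:
        return "systemic"
    return "non-systemic"


# Example: a very large training run is designated systemic,
# while a smaller model with few users is not.
print(classify_foundation_model(5e25, 2_000, 7_000_000_000))
print(classify_foundation_model(1e22, 500, 3_000_000_000))
```

The point of the sketch is simply that designation is a two-path test: compute alone can qualify a model, and supplementary criteria can qualify one that falls below the compute bar.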

4. Transparency Obligations: 

The transparency requirements of this new regulation will apply to all AI models, and there will be an obligation to publish a sufficiently detailed summary of training data. While trade secrets must be respected, AI-generated content must be immediately recognizable. 

5. Stakeholder Engagement: 

Along with the AI Office, an advisory forum will be formed to gather feedback from other stakeholders, including civil society.  

Meanwhile, a scientific panel of independent experts will:  

  • Advise on regulation enforcement; 
  • Flag potential systemic risks; and  
  • Inform AI model classification. 

 

4 Items Still Under Consideration Within the EU AI Act (December 2023)

While all of the above has been agreed to at this point, lawmakers still have the following considerations to finalize regarding the EU AI Act. 

1. Prohibited Practices: 

The AI Act already includes a list of banned AI applications, including:  

  • Manipulative techniques;  
  • Exploitation of vulnerabilities;  
  • Social scoring; and  
  • Indiscriminate scraping of facial images.  

However, disagreements persist on the extent of the ban, as the European Parliament has proposed a broader list. 

2. Application to Pre-Existing AI Systems: 

Ongoing discussions will also address whether this regulation should apply to AI systems that were on the market before the Act's implementation—particularly if they undergo significant changes. 

3. National Security Exemption:

Another point of contention is the national security exemption.  

While some EU countries, led by France, have called for a broad exemption for AI systems used in military or defense contexts, including those developed by external contractors, others are against such blanket loopholes. They argue instead that any national security exception from the AI Act should be assessed on a case-by-case basis, in line with both existing EU law and the EU Charter of Fundamental Rights, and so discussions on this issue will continue. 

4. Law Enforcement Exemption: 

Negotiations concerning law enforcement provisions are similarly ongoing, with debates regarding: 

  • Predictive policing;  
  • Emotion recognition software; and  
  • The use of Remote Biometric Identification (RBI). 

What’s Next for the EU AI Act? 

Though the EU AI Act already represents a significant step in regulating AI with its focus on mitigating potential risks while promoting transparency and accountability, the outcome of ongoing negotiations will determine the final provisions of this landmark legislation. 

As it’s now in the final stage of the legislative process, with the EU Commission, Council, and Parliament engaged in said negotiations, we’ll have to wait and see where the lawmakers come down on the remaining items that need addressing.  

In the meantime, should you have any interim questions about the EU AI Act—or any of the other AI-related frameworks that are either already available or upcoming—please contact us so that we can help point your organization in the right direction regarding the security of your AI use. 

About Avani Desai

Avani Desai is the CEO at Schellman. Avani has more than 15 years of experience in IT attestation, risk management, compliance, and privacy. Her primary focus is on emerging healthcare issues and privacy concerns for organizations. Named one of the 2017 Global Leaders in Consulting by Consulting Magazine, she has also been featured and published in the ISSA Journal, ITSP Magazine, ISACA Journal, Information Security Buzz, Healthcare Tech Outlook, and many more. Avani also sits on the board of Catalist, a not-for-profit that empowers women by supporting the creation, development, and expansion of collective giving through informed grantmaking. In addition, she is co-chair of 100 Women Strong, a women-only venture philanthropic fund addressing problems related to women and children in the community.