The Schellman Blog

Stay up to date with the latest compliance news from the Schellman blog.

ISO Certifications

By: Danny Manimbo
October 24th, 2024

Though it was published in December 2023, many people are still wrapping their heads around the ISO 42001 standard. It is designed to help all organizations that provide, develop, or use artificial intelligence (AI) products and services do so in a trustworthy and responsible manner through the requirements and safeguards the standard defines, including defining your AI role.

Penetration Testing

By: Dan Groner
October 22nd, 2024

With so much business now done online and digitally, much, if not most, of organizations' security attention focuses on beefing up technical controls. But in fact, the human element of cybersecurity is often where the most impactful failures occur.

ISO Certifications

By: Megan Sajewski
October 21st, 2024

When seeking ISO 42001:2023 certification, you must ensure that your artificial intelligence management system (AIMS) aligns with the standard’s key clauses (4-10), each of which focuses on a specific facet—context, leadership, planning, support, operation, performance evaluation, and improvement.

Penetration Testing | Red Team Assessments

By: Jonathan Garella
October 18th, 2024

Thinking Inside the Box

Traditional red teaming approaches often focus on external threats, simulating how an outside attacker might breach a company's defenses. This method is undeniably valuable, offering insight into how well an organization can withstand external cyberattacks. However, this "outside-in" perspective can sometimes overlook another aspect of security: the risks that arise from within the organization itself. While traditional red teaming is crucial for understanding external threats, thinking inside the box, examining internal processes, workflows, and implicit trusts, can reveal vulnerabilities that are just as dangerous to an organization, if not more so.

Penetration Testing

By: Cory Rey
October 17th, 2024

With proven real-life use cases, it's a no-brainer that companies are looking for ways to integrate large language models (LLMs) into their existing offerings to generate content, a combination often referred to as generative AI. LLMs enable chat interfaces to hold human-like, complex conversations with customers and respond dynamically, saving you time and money. However, with all this new, exciting technology come related security risks, some of which can arise at the moment of initial implementation.

Healthcare Assessments

By: Schellman
October 16th, 2024

When the COVID-19 pandemic spread across the globe in 2020, the need for social distancing and isolation impacted the availability of in-person, non-emergency healthcare appointments. As a result, telehealth became a common way for healthcare providers to serve their patients without seeing them in-person, and with its rise came related HIPAA compliance concerns.

Cybersecurity Assessments

By: Avani Desai
October 15th, 2024

As EU member states transpose the NIS 2 Directive into their national laws by October 17, 2024, organizations under its purview must also ensure they’re ready to fully comply with the new cybersecurity regulations. Penalties for non-compliance will include significant fines, so if you haven’t started on any necessary implementations, now is the time.

Penetration Testing | Artificial Intelligence

By: Josh Tomkiel
October 11th, 2024

The Need for Secure LLM Deployments

As businesses increasingly integrate AI-powered large language models (LLMs) into their operations via GenAI (generative AI) solutions, the security of these systems is top of mind. "AI red teaming" (which is closer to penetration testing than to a red team assessment) is a methodology for proactively identifying vulnerabilities within GenAI deployments. By leveraging industry-recognized frameworks, we can help your organization verify that your LLM infrastructure is implemented and operated securely.
