Contact Us
Services
Services
Crypto and Digital Trust
Crypto and Digital Trust
Schellman Training
Schellman Training
Sustainability Services
Sustainability Services
AI Services
AI Services
About Us
About Us
Leadership Team
Leadership Team
Corporate Social Responsibility
Corporate Social Responsibility
Careers
Careers
Strategic Partnerships
Strategic Partnerships

Penetration Testing

AI Red Teaming

AI Red Team assessments evaluate the security of your Generative AI systems to identify vulnerabilities such as prompt injection, data leakage, and unintended model behaviors that could expose sensitive data, generate harmful outputs, or undermine business workflows.

Contact a Specialist Start Scoping Your Next Pen Test

What Happens During a AI Red Team Assessment?

Using a combination of manual testing and automated tools, we identify vulnerabilities and demonstrate their real-world impact by exploiting your models before attackers do. Navigate the new AI space confidently and securely while establishing trust with your customers.

AI Red Team Assessments Can Help You

https://216294.fs1.hubspotusercontent-na1.net/hubfs/216294/assets/images/icons/indentify.svg

Identify AI-Specific Vulnerabilities

Red teamers simulate attacks against your GenAI systems—such as LLMs and RAG implementations—to uncover prompt injection flaws, model jailbreaks, data leakage vectors, and weaknesses in content moderation or system integration.

https://216294.fs1.hubspotusercontent-na1.net/hubfs/216294/assets/images/icons/strengthen-security-model.svg

Strengthen Model Security

By identifying and addressing issues like jailbreaks, unsafe outputs, or excessive agency, you reduce the risk of real-world exploitation and ensure your AI behaves reliably, ethically, and securely.

https://216294.fs1.hubspotusercontent-na1.net/hubfs/216294/assets/images/icons/compliance.svg

Support Compliance and Responsible AI Goals

Emerging regulations and frameworks (like OWASP Top 10 for Large Language Model Applications, NIST AI RMF, ISO 42001, or industry-specific standards) increasingly expect organizations to assess AI systems for security and safety. AI red teaming helps demonstrate compliance with these evolving requirements.

https://216294.fs1.hubspotusercontent-na1.net/hubfs/216294/assets/images/icons/risk-management.svg

Demonstrate Due Diligence and AI Risk Management

Regular AI-specific testing shows customers, partners, and regulators that you take AI risks seriously and are proactively addressing threats in line with Responsible AI principles and modern cybersecurity expectations.

schellman-red-teaming

Our AI Red Teaming Methodology

Our AI Red Team approach is built on leading industry frameworks, including the OWASP Top 10 for LLMs and the NIST AI Risk Management Framework. Recognizing that AI systems introduce unique risks—such as prompt injection, model manipulation, and unsafe output generation—we go beyond traditional automated testing. Our team conducts hands-on, adversarial exercises designed to simulate real-world abuse scenarios, assessing how your AI models, prompts, guardrails, and integrations withstand malicious inputs, edge cases, and intentional misuse.

Before testing begins, we'll collaborate closely with your team through a series of detailed planning sessions. These discussions explore the backend architecture and the systems that support your AI implementations, ensuring we fully understand the environment and its potential vulnerabilities. From this collaborative process, we craft tailored threat scenarios that mirror realistic attack vectors, aligning directly with OWASP’s Top 10 threats for Large Language Model applications. This ensures our testing is both comprehensive and relevant to the unique challenges of your AI systems. 

schellman-red-teaming
cory-rey

Lead Penetration Tester

Cory Rey

Cory Rey is a Lead Penetration Tester at Schellman, where he plays a key role in advancing the firm’s offensive security capabilities, including spearheading the development of its AI Red Team service line. Focused on performing penetration tests for leading cloud service providers, he now extends his expertise to identifying and exploiting vulnerabilities in Generative AI systems—areas often overlooked by traditional security assessments. With a strong foundation in Application Security, Cory has a proven track record of uncovering complex security flaws across diverse environments.

Meet Cory Contact Us

cory-rey

Frequently Asked Questions

How long do AI Red Team engagements take?

What does an AI red team engagement at Schellman cost?

What is the difference between an AI Red Team and a traditional application penetration test?

Does this test include other penetration testing vectors, such as input validation?

How many tokens should we expect to be used as a result of testing?

Take the first step to harden your GenAI solution

Our team of practice leaders, not sales, are ready to talk and help determine your best next steps.

Start Scoping Your Penetration Test Contact a Specialist