How Attackers Are Weaponizing AI to Create a New Generation of Ransomware
Penetration Testing | Artificial Intelligence
Published: Feb 9, 2026
Last Updated: Feb 11, 2026
Artificial intelligence is reshaping the cyber threat landscape as attackers have already begun weaponizing AI to dramatically accelerate phishing, reconnaissance, payload development, and attack execution.
To better understand this new reality, Josh Tomkiel, Managing Director on Schellman’s Penetration Testing Team, answers the most common questions security leaders are asking about AI-enabled threats. In this FAQ-style blog post, Josh breaks down how attackers are using AI today, why reliability is still the limiting factor for AI-powered ransomware, and what organizations should be doing now to prepare.
From real-world offensive security testing examples to the limitations of legacy detection tools and the role of AI red teaming and ISO 42001 certification, this guide provides a practical, expert-driven look at how AI is changing both sides of the cyber battlefield and what comes next.
How Attackers Are Weaponizing AI to Accelerate and Scale Attacks
1. How is AI being weaponized in phishing attacks today?
AI is most notably compressing attack timelines. A traditional phishing campaign takes about three days to stand up, building infrastructure, designing the campaign, creating a convincing landing page, setting up credential harvesting, configuring reverse proxies, and crafting content that doesn't look like a template.
Using AI, we can now generate 30 different campaign ideas with customized login pages within minutes. Additionally, the entire required infrastructure can be automatically deployed and ready to go in one day. The weaponization of AI is in its speed and scale.
For targeted spear-phishing, it’s now possible to conduct open-source intelligence gathering, build detailed target profiles, and craft perfectly written emails with convincing pretexts exponentially faster. When you combine AI agents with automation, you go from a 5-day manual setup to a 48-hour automated deployment with minimal manual review still required.
The Reality of AI-Powered Ransomware Today
2. Is AI-powered ransomware a real threat today, and what would make it dangerous?
Frankly, true AI-powered ransomware isn't widespread yet. The technical barriers are still too significant. Larger models need the hardware to support them, and smaller models are popping up designed to run on edge-devices, like phones or tablets.
This size tradeoff also impacts the model’s intelligence, and then there is the topic of uncensored models. These fine-tuned models exist that could theoretically power autonomous ransomware; however, the consistency and dependability needed aren’t there, and that is everything in offensive operations.
You need complete control over what your tools and scripts do. You can't have AI deciding to automatically delete all domain users to prove a privilege escalation attack works or dropping all tables in a database if SQL injection is discovered. That would be catastrophic for both the target's availability and your engagement parameters.
Real threat attackers, just like penetration testers, need 99%+ reliability because if an executed payload during a pen test engagement alerts defenders and burns the access they worked to gain or if ransomware failed to deploy, that would be an extremely frustrating showstopper.
Pre-configured payloads and traditional scripts still outperform AI in the reliability and predictability that attackers require for guaranteed results. The risk of unintended consequences from autonomous AI decision-making currently outweighs any potential benefits.
How AI Is Being Used in Real-World Offensive Security Testing
3. Can you share an example of how AI has been used in testing to simulate or enhance an attack?
We're using AI to accelerate the entire workflow, but it's important to understand that we're using private, self-hosted large language models on our own hardware. These can be fine-tuned to provide uncensored responses where standard foundation models have guardrails against hacking and penetration testing topics. With those limitations removed, it’s possible to use AI as a true copilot hacking assistant.
When developing custom payloads to bypass EDR solutions, AI helps us iterate through dozens of code variations faster than ever possible in the past. AI functions as an augmentation tool, not a replacement. It makes our jobs easier and our work more efficient, but it doesn't do the job for us.
We're still making tactical decisions, verifying outputs, and maintaining control over execution. The AI accelerates the pre-compromise phase, including reconnaissance, initial access preparation, and tool development. Humans handle the strategic thinking and post-compromise operations.
Why Legacy Detection and Defense Models Are Falling Behind
4. What risks do enterprises face if they continue to rely on legacy detection tools?
The fundamental asymmetry hasn't changed. Defenders must be right 100% of the time, while attackers only need to be right once. And AI is accelerating the attacker's advantage significantly. If blue teams don’t leverage automation, machine learning models, and AI-enhanced threat hunting to match this pace, it becomes a speed problem they simply won’t be able to solve manually.
Legacy SIEM and EDR solutions relying on static signatures and rules-based detection will miss polymorphic threats, AI-generated payloads with unique signatures, and automated attack chains that move faster than human analysts can respond.
When we can compress a three-day attack preparation into one day and iterate through payload variations in hours instead of days, traditional detection approaches face an expanding gap they can't bridge without advancing their capabilities to match the massive number of new threats.
How Organizations Should Prepare for AI-Enabled Cyber Threats
5. What proactive steps should organizations take now, whether readiness assessments, AI red teaming, or other AI-focused compliance initiatives like ISO 42001 certification?
AI red teaming should evaluate your entire threat surface beyond just prompt injection vulnerabilities. It’s best practice to work with development teams to map realistic attack scenarios.
What happens if an engineer gets phished and their credentials access your AI’s knowledge base (RAG)? What if an attacker poisons your RAG with malicious content? How would your system respond to adversarial inputs designed to extract sensitive data or manipulate model outputs? Do you have the necessary monitoring in place to alert you of security breaches in real-time? An AI red teaming assessment allows you to evaluate and respond to these considerations and their implications.
ISO 42001 certification demonstrates commitment to AI security governance, but the real value lies in the risk assessment and controls implementation process. Organizations should establish AI security working groups that bridge security, development, and legal teams. Start with a tabletop exercise mapping your AI attack surface, then conduct targeted AI red team assessments of your highest-risk AI systems before expanding the program.
How Ransomware Attacks Will Evolve as AI Technology Matures
6. How do you expect ransomware tactics to evolve as AI continues to advance?
Near-term, AI will help attackers develop more sophisticated payloads with polymorphic signatures that evade detection. Attacks will also be executed faster, with AI handling reconnaissance, lateral movement decisions, and data exfiltration prioritization in compressed timeframes.
The real gamechanger will be when attackers start deploying fine-tuned small models directly on compromised devices that are designed to do specific types of attacks. This could limit the risk of unintended fallout by only expanding capabilities when the initial attack path fails.
This also eliminates the call-home risk, with no outbound network traffic to block, no external dependencies, and no inference costs eating into profits. Using larger models means paying for every inference call when the payload phones home for next steps, and that cost consideration matters from an operational perspective. You could drop the payload alongside a local model and provide inference that way, but that adds significant logistical complexity.
We're probably 12-24 months away from seeing this deployed at scale in the wild. The barrier isn't technical capability, because research has already proven it works. The barrier lies in operational necessity. Current methods that attackers are using still work extremely well, so there's no urgent need to add AI complexity into the fold.
The Future of AI-Powered Ransomware Attacks
Looking ahead, AI-powered ransomware and remote access tools are coming. However, I personally wouldn't deploy solo AI-powered tools in client environments during a pen test today because the guardrails aren't there yet, and you can't predict with 100% certainty what the model will do. That unpredictability is unacceptable in professional assessments where we have strict rules of engagement and can't risk serious impact to network or service availability.
That said, with human-in-the-loop architectures where an operator approves each action before execution, this threat is viable today. In controlled environments like capture-the-flag (CTF) competitions, AI has successfully found and exploited vulnerabilities autonomously. But it's not yet as reliable or controllable as an experienced penetration tester in real-world scenarios.
The gap is closing rapidly though, and Schellman’s pen testing team is at the cutting edge of this fast-moving space. Contact us to learn more about the future of AI-powered cyber-attacks and how penetration testing can help strengthen your security posture today.
In the meantime, explore these articles to better understand AI-driven threats and governance strategies:
About Josh Tomkiel
Josh Tomkiel is a Managing Director on Schellman’s Penetration Testing Team based in the Greater Philadelphia area with over a decade of experience within the Information Security field. He has a deep background in all facets of penetration testing and works closely with all of Schellman's service lines to ensure that any penetration testing requirements are met. Having been a penetration tester himself, he knows what it takes to have a successful assessment. Additionally, Josh understands the importance of a positive client experience and takes great care to ensure that expectations are not only met but exceeded.