AI‑Driven Hacks: Reality vs Hype
Key Points
- Hackers are leveraging open‑source tools and agentic AI at high speed, prompting security teams to adopt the same technologies for proactive testing and defense.
- The episode previews a deep dive into OWASP’s 2025 Top 10 vulnerabilities, emerging ransomware trends, and the ongoing debate about the real value of cyber‑insurance policies.
- Anthropic revealed that attackers used Claude Code to automate 80 to 90% of a multi‑stage campaign against 30 targets, sparking both alarm and skepticism about the impact of AI‑driven attacks.
- Panelists note that the threat is largely predictable—attackers are using AI‑generated code rather than the Claude model itself, highlighting the importance of understanding tool misuse.
- Overall, the discussion underscores the need for AI‑enhanced security intelligence to stay ahead of increasingly automated adversaries.
Sections
- AI‑Driven Threats and Defensive Tools - In this opening of IBM’s Security Intelligence podcast, the host warns that hackers are exploiting open‑source and agentic AI tools, urges defenders to adopt similar technologies, and outlines upcoming discussions on OWASP 2025, ransomware trends, cyber‑insurance, and an Anthropic‑related AI espionage incident.
- AI Tool-Calling Sparks Security Debate - The speaker warns that exposing open‑source tool‑calling to Claude Code turns AI into a script‑kiddie, creating fresh cyber‑threats while also motivating proactive adaptive AI governance initiatives.
- Role‑Play Jailbreaks Bypass AI Guardrails - The participants explain how attackers use role‑playing and social‑engineering tricks to circumvent AI safety guardrails, exposing ethical blind spots and the ongoing difficulty of reliably detecting such jailbreak attempts.
- AI-Driven Regulatory Compliance Automation - The speaker describes using autonomous AI agents to parse global regulations, auto‑reason control assessments, build baseline knowledge engines for client risk and governance, and envision a future of fully automated cyber‑defense where AI agents contest hackers.
- AI Empowers Script Kiddies - The discussion warns that autonomous AI agents could amplify low‑skill attackers, turning script kiddies into powerful threat actors reminiscent of historic major breaches.
- Rising Concerns Over Vendor Supply Chains - The speakers note increasing client anxiety about third‑party and AI‑driven supply chain vulnerabilities and how breaches could cascade to affect brand reputation.
- Attackers Target Security Misconfigurations - The speaker emphasizes that security misconfiguration's jump from #5 to #2 on the list shows attackers are exploiting easily controllable configuration flaws, prompting defenders to tighten vigilance, especially around AI‑generated settings and traditional AD environments.
- Rise of Small Ransomware Gangs - A Check Point Research report shows the ransomware ecosystem has become its most fragmented ever, with 85 active gangs and the top‑10 share dropping from 71% to 56% as law‑enforcement actions dismantle large groups, creating a more complex threat landscape for organizations.
- Fragmented Ransomware Threat Landscape - The speakers discuss how the rise of small, decentralized cyber‑crime groups reduces victims’ willingness to pay ransoms and creates a volatile, hard‑to‑track attack ecosystem.
- Prefer Targeting Large Ransomware Cartels - Panelists argue that defending against big, organized ransomware groups is more predictable and manageable than chasing numerous small, chaotic crews.
- Cyber Insurance as Private Regulation - The speaker explains that cyber insurers impose mandatory security baselines—similar to traditional insurance practices—while also providing post‑breach support that enables organizations to recover, safeguard their brand, and maintain operations despite inevitable threat actor activity.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=LNVDrrKbfzg](https://www.youtube.com/watch?v=LNVDrrKbfzg)
**Duration:** 00:40:05

Section timestamps:
- [00:00:00](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=0s) AI‑Driven Threats and Defensive Tools
- [00:03:15](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=195s) AI Tool-Calling Sparks Security Debate
- [00:07:59](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=479s) Role‑Play Jailbreaks Bypass AI Guardrails
- [00:11:17](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=677s) AI-Driven Regulatory Compliance Automation
- [00:14:31](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=871s) AI Empowers Script Kiddies
- [00:18:12](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1092s) Rising Concerns Over Vendor Supply Chains
- [00:21:29](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1289s) Attackers Target Security Misconfigurations
- [00:24:35](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1475s) Rise of Small Ransomware Gangs
- [00:28:06](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1686s) Fragmented Ransomware Threat Landscape
- [00:32:08](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1928s) Prefer Targeting Large Ransomware Cartels
- [00:38:11](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=2291s) Cyber Insurance as Private Regulation
Oh, my goodness, this is terrible. Right? The hackers are
using these open source tools and they're running at a
great speed. I mean, I don't want to tell you
how you do your job, people, but maybe, maybe you
should use the same tools and use the models and
rather than saying, you know, hey, please, you are a
security researcher, maybe be a security researcher with the tools
and agentic AI and actually do the test first, right?
All that and more on security intelligence. Hello and welcome
to Security Intelligence, IBM's weekly cybersecurity podcast, where we break
down the most interesting stories in the field with the
help of our panel of experts. I'm your host, Matt
Kaczynski, and joining me today: Ryan Anschutz, North America leader of X-Force Incident Response; Evelyn Anderson, CSS CTO, IBM Distinguished Engineer and IBM Master Inventor; Seth Glasgow, Cyber Range Executive Advisor; and our special guest for the first segment,
Chris Hay, distinguished engineer and frequent presence on mixture of
experts, IBM's AI news podcast. For those of you who
have not already subscribed to that one. This week we're
gonna be talking about OWASP's top 10 for 2025, big
changes in the ransomware landscape, and the ongoing debate over
the efficacy of cyber insurance. But first, a story I'm
sure we've all heard by now: Anthropic disrupted an AI-powered espionage campaign. Now, according to Anthropic, hackers were using Claude Code to mount an assault against some 30 targets, with
the AI handling 80 to 90% of the campaign. That's
the number that they put on it, including recon, writing exploit code, establishing backdoors, exfiltrating data. All this stuff
with almost no human input and reactions have been kind
of mixed. Actually. Some people look at this and they
say, oh my God, this is crazy. I can't believe
that the attackers are utilizing this stuff this way. I
can't believe we've gotten to this point in the cyber
attack era. Other people are looking at it and saying,
this seems a bit sensational. I'm not really sure that
this is as impressive as Anthropic is making it sound.
For example, Anthropic themselves noted that while the attackers were
targeting some 30 organizations, they only succeeded in, quote, a
small number of cases. So I want to start by
throwing it to you, Chris, as the resident AI guy
here. On that scale from holy crap to snoozefest, where
are you landing with this thing and why? Highly predictable.
But the first thing I would say when people say,
you know, the AI is going to cause a lot
of job loss. Who would have thought it would be
the hackers first, you know, so there we go. It
is highly predictable. I mean, if we actually look at
the anthropic paper there, I think there's a couple of
key things we need to point out. First thing is
it was Claude Code that they used. It wasn't Claude in that sense. I mean, Claude as the model was backing it, but it was Claude Code. And that is actually super important, because the real key to this is about tool orchestration. So they used a whole bunch
of open source tools, right? And there's a whole community
of MCP servers. You know, they said they used one
for browser automation. I could take a fair guess that
that was probably Playwright, but again, same sort of thing.
They've just taken a load of open source tools, made
that available, probably. And I'm no cybersecurity person, but probably
the same tools that you folks use every day. And
all they've done is they've exposed that to Claude code
in a way. And I think that is the key
thing. It is agentic. So AI is now officially a
script kiddie people. So that's what's going on. So I'm
not surprised because tool calling is the key piece. I
think what I would say is this has now opened
the can of worms, right? And they're saying Claude code.
Well, we're going to protect the Claude model. We're going
to do a lot better in that. Yeah, I can
absolutely see it going that way. And let's throw then
to a cybersecurity take on this one. Right? It was more so just interesting to me. I mean, when
I first looked at it, I looked at it as,
yes, it's a new challenge for us from a cybersecurity
perspective. But I also view it as an opportunity. You
know, the fact that the hackers have kind of beat
us to thinking about how you can start writing code
to, you know, address a lot of their phishing campaigns,
their deep fakes, you know, malware, et cetera, we can
actually do the same thing. My particular team has been
working on building out what they call adaptive AI governance.
And the purpose of it is to be more proactive
than reactive. But the key word is we're building it.
It doesn't exist yet. And what this is actually telling
me and the blaring sign that I see is we
have to move faster, we have to look at how
we can implement AI driven security architectures as well as
governance to adopt across our entire ecosystem versus us waiting
until we detect a problem and find it. We need to be more autonomous, so that once we see that someone has
gone in, whether it was intentional or unintentional, and changed
the system configuration, that we don't wait and take it
through, you know, the five whys of why it happened, but
we have programs in place to actually flip it back
to what it should be. So for me, when I
looked at this, it just made me think about all
the areas where we need to accelerate.
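The "flip it back" idea described here can be sketched in a few lines: detect drift against an approved baseline, revert it automatically, and keep an audit trail for human review. Everything below, the baseline keys and the settings, is an illustrative assumption, not any speaker's actual system.

```python
# Minimal sketch of automated configuration-drift remediation.
# The baseline and settings are hypothetical examples.

APPROVED_BASELINE = {
    "password_min_length": 14,
    "mfa_required": True,
    "public_bucket_access": False,
}

def remediate(live_config: dict) -> list:
    """Revert any drifted setting to the baseline; return an audit trail."""
    audit = []
    for key, approved in APPROVED_BASELINE.items():
        current = live_config.get(key)
        if current != approved:
            audit.append((key, current, approved))  # (setting, was, restored to)
            live_config[key] = approved
    return audit

# Simulated drift: weakened password policy and a bucket made public.
live = {"password_min_length": 8, "mfa_required": True, "public_bucket_access": True}
changes = remediate(live)
for setting, was, now in changes:
    print(f"reverted {setting}: {was!r} -> {now!r}")
```

The audit trail is the piece that supports the human validation step: the system acts first, and a person reviews what was reverted rather than gating every fix.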
Absolutely. And that brings up a good point. Right. Sometimes
I look at some of this stuff and I wonder
if the hackers are moving a lot faster with this
stuff than the kind of legitimate use cases are. Ryan,
I see you nodding. What are your thoughts there? Do
you think the hackers have a kind of leg up
on us right now? Yeah, I would say this is
pretty challenging because, you know, for this case in particular,
what really stood out to me was the automation of
actually the full kill chain. It was everything from recon
exploit generation, backdoor deployment and data theft. You know, in
their numbers that 80 to 90% of that was done
sensationalizing is a good word. This is kind of pretty
scary. But in the same breath, this is the first
time the public has actually seen what we in IR
have actually been quietly preparing for. And that is an
adversary that really doesn't sleep, that doesn't take weekends, and
iterates at true machine speed. So for IR teams really
this shifts the game. We now have to really detect
and contain not just our human counterparts, our human adversaries,
but machine speed adversaries. And the interesting thing about that
kind of machine speed adversary is that it might move
a lot faster than us, but it also makes some
mistakes that people don't make. Right. One of the things
that Anthropic brought up in their report was that, you
know, Claude Code did hallucinate a couple times. Right. They
saw it spitting out some fake credentials that didn't actually
work. Right. Or claiming it had surfaced data that was
actually just public data. I'm wondering how this factors into
our assessment of the threat. Seth, any thoughts on your
end there? Yeah, I think it does kind of matter
because it does still have a lot of the same
limitations that an LLM normally would. It can hallucinate. It's
only as good as what it's trained on. But additionally,
you know, Claude code is sort of a legitimate tool,
Right. It was essentially socially engineered. And I believe the
full paper actually uses that term to describe how the legitimate tool could be used to do something that was
destructive. Well, they set up a pretext that they, the
attackers were themselves a security research company and then leveraged
it to get it to do these things sort of
against it. And so a lot of what we work
with clients on is sort of, hey, how does AI
change what these attack avenues are? And this is similarly
in a sort of the old attack avenue of I've
convinced you to do something, it's social engineering, but now,
as Ryan said, it's at machine speed. Right? The scale,
the speed, the scope of how quickly this can be
done. If I'm still just convincing something to do it,
that does materially change how bad that can be. Right.
So even though it's going to be wrong, you know,
that ethical limitation of if I'm the attacker, do I
care that it's wrong? Probably not. Right. I don't have
the same governance structures when I'm on the other side
of the law that I would on a legitimate side.
Seth, I'm really glad you brought up that social engineering
perspective because you're right. Right. This all had to. The
way that this worked was that Claude Code was jailbroken,
right? You had to kind of, or not jailbroken necessarily,
but you had to trick it basically, right? Make it
think it was doing something legitimate. Chris, I was wondering
if you could give us some insight into how easy
it is to kind of trick these AI models into
doing things they shouldn't. I mean, how are our guardrails?
What's the kind of level of sophistication there? I think
these days it's getting pretty sophisticated. I mean, there tends
to be guard models surrounding the main model anyway. But
the reality is it's very difficult to tell when something
is role playing. And that's really what happened in this
case. It was a role play type attack versus doing
it for reals. Maybe a bit of a clue is
you gave access to the tools and it was sitting
hacking on your behalf. So that might be the point
where you might say, you know what, we're beyond role
playing at that point. But where there's a will, there's
a way. I just don't think this is necessarily a
solved problem. So if you look at folks like Pliny,
for example, who is always publishing how to prompt engineer around
these models, then most of this stuff is out there
in the open. I guess what I would say just
to counter things is everybody's going, oh my goodness, this
is terrible. Right? The hackers are using these open source
tools and they're running at a great speed. I mean,
I don't want to tell you how you do your
job, people, but maybe you should use the same tools
and use the models and rather than saying, hey, please,
you are a security researcher, maybe be a security researcher
with the tools and agentic AI and actually do the
test first. Right? So use the very same tools to
do that. Because the reality is if you don't have
open source tools, then you're not going to be hacked
by these folks. So I think there is an argument
for security teams to actually start doing that. Preventative measures:
use the exact same techniques as the hackers. It's all
laid out in the paper from Anthropic. It's actually a
really pretty well detailed paper on that. So go follow
that and just do it in advance. I think it's
a really good point. You know, honestly, like adopting that
research, adopting that, that hacking technique kind of as your
own research method. And I'm wondering if any of the
folks here have, have toyed around with that stuff or
toyed around with agentic AI or anything really in your
kind of security work. Kind of open question. Has anybody
toyed with that stuff? Anything they want to talk about
on that front? Yeah, I think I can touch on
that a little bit. You know, just how like attackers
are chaining those models together, like you mentioned, Chris, you
know, defenders, we are actually, you know, chaining together Defensive
Age, one model monitoring logs, another doing memory forensics, another
isolating hosts, and you know, maybe another actually even drafting
executive updates. You know, that's where this all goes. It's
kind of turning those defensive models and fighting fire with
fire, essentially, as you mentioned before. So that is something
that as defenders, we are constantly evolving every day with.
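Ryan's chain of narrow defensive agents can be pictured as a simple pipeline, with each stage's output feeding the next. In the sketch below the "agents" are plain functions with a toy heuristic, just to make the hand-off pattern concrete; the names and the log format are assumptions, and a real system would back each stage with a model and proper tooling.

```python
# Three narrow "agents" chained in sequence: flag suspicious log
# lines, decide which hosts to isolate, then draft an update.

def log_monitor(logs: list) -> list:
    """Flag log lines that look like credential abuse (toy heuristic)."""
    return [line for line in logs if "failed login" in line]

def triage(alerts: list) -> list:
    """Pick the hosts named in the alerts for isolation (host is first token)."""
    return sorted({line.split()[0] for line in alerts})

def summarize(hosts: list) -> str:
    """Draft a one-line executive update."""
    return f"Isolated {len(hosts)} host(s): {', '.join(hosts)}"

logs = [
    "web01 failed login for admin",
    "db02 routine backup completed",
    "web01 failed login for root",
]
report = summarize(triage(log_monitor(logs)))
print(report)
```

The point of the pattern is the hand-off: each stage is simple and auditable on its own, and the chain as a whole covers detection through communication.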
Yeah, and we're doing some. We're not actually doing the
same thing as Ryan, but we're doing things where we're
utilizing agents to perform, like, some auto analysis and reasoning.
We are responsible for all of the regulations, frameworks, et
cetera, that come in. So when you think about the
huge number of regulations and laws across the globe, when
we're working with our clients, we have to be able
to pull out the controls and make sure that the
client is meeting their regulatory commitments. And so we're doing
analysis there where Our engine today is able to go
in, analyze, perform some auto reasoning, and then respond to
like any type of an assessment. So that's how we're
utilizing it within our arena. But we're also helping to
build it where we're building a knowledge engine that will
be able to establish a baseline for a client and
then assess the client against that baseline, not
only for their controls, but for the risk model, for
their governance structure, their strategy, et cetera. But so it
seems like the overall kind of perspective here is that,
right? We all kind of saw this coming, but it's
still a big deal. But we're working on some stuff
on our end as defenders to kind of put these
tools to the same kinds of uses. And eventually we're
gonna start entering into, I don't know, maybe a fully
automated, you know, sort of security world where the hackers
are attacking the agents and the agents are attacking the
hacker agents. I don't know, you don't even need people
there anymore. It's fully automated cyber attacks. It's great. Last
question, though. And this is kind of. I don't know,
this is just something I saw that I wanted to
bring up. I saw one tweet about this where somebody
was very skeptical of what Anthropic was reporting. And they
said, they asked how Anthropic could get what they called
AGI level performance out of this stuff. I wanted to
throw this to the actual AI guy here. Does anything
like this look like AGI to you at all? Because
it didn't to me. It's definitely not AGI. This is.
I mean, as I said before, this. The real innovation
on this is the fact that the models are getting
really good at tool calling, right? And they're also getting
really good at planning. They are the two things. So
if you look at some of the weaker models, they'll
just. They're like your script kiddies. They'll just call any
old tool and hope something lands, right? But. But a
really good big model that can tool call will be
able to follow up. It'll follow sequence events, it'll have
a plan in the same way as a human being
is. So this isn't. This isn't about how great the
AI is, and it is in some regards, it is
about how great the AI is, because Claude is a
great model and Claude code is a wonderful tool. But
it's really these advances in function calling and these advances in planning that have made this possible. And it's really about the security tools that you use today, that the hackers and all the script kiddies use, et cetera. If you
get all of those same tools, you make them available
with something like MCP servers or just wrap them in
function calls, then you are going to be able to
have AI do the same thing. Because it's not really
applying so much intelligence there. It's using the tools to
break through and then gather the insights. So anybody who's
done any sort of AI coding or done any sort
of agentic work whatsoever will recognize the techniques here. Yeah.
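The tool-orchestration point Chris keeps returning to is easy to see in miniature: wrap existing functions in a named registry, and the model's job reduces to emitting a call that names a tool and its arguments. This is a minimal sketch with hypothetical names; nothing below is from the Anthropic report.

```python
# Bare-bones tool-calling pattern: a registry of named tools plus a
# dispatcher for model-emitted calls of the form {"tool": ..., "args": ...}.

TOOLS = {}

def tool(name, description):
    """Register a callable so a model could select it by name."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@tool("count_open_ports", "Toy recon helper: count 'open' entries in scan text.")
def count_open_ports(scan_output: str) -> int:
    return scan_output.count("open")

def dispatch(call: dict):
    """Execute one model-emitted tool call and return the result."""
    entry = TOOLS[call["tool"]]
    return entry["fn"](**call["args"])

# A model that plans well emits a sequence of such calls; the agent
# loop just dispatches each one and feeds the result back as context.
call = {"tool": "count_open_ports",
        "args": {"scan_output": "22/tcp open\n80/tcp open\n443/tcp closed"}}
print(dispatch(call))
```

Swap the toy tool for real security tooling exposed the same way (this is essentially what MCP servers standardize) and the "AI as script kiddie" picture falls out: the intelligence is in choosing and sequencing tools, not in the tools themselves.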
So I know it's been mentioned a couple of times
around script kiddies, and I think that is concerning, where, if you look historically, some of the world's worst breaches weren't actually done by elite threat actors. They
were done by beginners using downloaded tools and historically old
vulnerabilities. Now imagine those same beginners with access to autonomous
AI agents. That is the trajectory that actually should concern
us. That's a really good point. Right, Because I think
especially when we have these conversations about AI, there's a
tendency to focus on how impressive the tool itself is.
But it's really important to what it's enabling people to
do. Right. And it's like you said, a script kiddie
with one of these things in their hands, they become
very good at that stuff. And you didn't even need
to be that good in the first place. Right. You
know what I mean? Like you said, some of the
biggest attacks in history were by script kiddies. So I
think that's a really important point. On that note, I
think it's time to move on to our next story,
but thanks you. Thank you, Chris, for being here, for
talking through this stuff with the rest of us. Yeah,
go watch mixture of experts if you haven't already, folks.
Thanks for having me. All right, let's move on to
our second story for today. This is OWASP's top 10
for 2025. Now, last week, OWASP released the latest installment
of its top 10 list of the most critical web
application security risks. This is the eighth installment of the list
overall, I believe, and the first update since 2021. Some
highlights are broken access control remains the number one threat.
Security misconfigurations rose from number five to number two. And
two new ones joined the list. We've got software supply
chain failures and mishandling of exceptional conditions. So first off,
I just want to get initial reactions to this update.
You know, I know a Lot of us kind of
use OWASP as a guide for when we're designing or
working on web apps. So I just want to see
how people are feeling about this newest iteration. Let's start
with you, Seth. Any initial reactions, things that caught your
eye here, thoughts that come to mind? Yeah, I actually
really like the addition of the exceptional conditions sort of
catch. Right. Because a lot of it is focusing on
these unique failure points that exist within a certain context
or, you know, in usual customer applications, we're thinking of
the business logic. Right. And so we've actually seen kind
of sometimes on the testing side you have sort of a what-is-old-is-new-again situation, in terms
of these vibe coded apps that are bringing in old
business logic flaws that we thought we had kind of
taken care of. Right. So it's not perfect. Right. The
overall OWASP idea of categorizing CWEs versus specifics the way
Mitre does it is a point of contention. But the
idea that this category exists to acknowledge that there are
just conditions that have been mishandled but don't necessarily have
a standard way to handle is extremely important in putting
the apps in the business and user context. Right. How
is it being used that can be abused as opposed
to just a rote coding issue? Yeah, I'm glad you
kind of explicated on that. Right. Because you just see
that phrase mishandling of exceptional conditions. And if you're a
guy like me who's not super technical, you're like, what
the heck does that mean? So I'm glad you put
that in a context where it's like, oh, that's super
duper important. Evelyn, what about you? Any thoughts on this
new OWASP list? Anything jump to mind when you look
at it? It was a little bit kind of. Well,
I was surprised, I guess, in one sense because I
haven't actually looked at The OWASP top 10 in a
couple of years and I was like, wow, it hasn't changed a lot. Which is a disappointment, because we should have improved over the last five years. When I saw that broken access control is still holding that top spot very
strongly there. So I was a little bit disappointed that
there hasn't been any improvement, that much improvement within the
market there. But I was happy to see supply chain.
When it comes to supply chain, we're starting to realize
that our clients are realizing the growing risk around their
vendors and the third- through nth-party vendors and how it actually impacts them. And we're starting to see more and more
coming out around this space, when we've been looking at
it, most of the clients, their biggest complaint and concern now is what impact those third party, fourth party, nth party, whatever vendors actually have on their organization and what portions
of their supply chain that they're actually supporting and if
they go down or if there's any type of a
problem, a breach with them, how it impacts their overarching
brand. So I was very pleasantly surprised to see the
supply chain there. Yeah. And you know, I think that
basically every episode we've recorded of this show so far
has included some reference to a supply chain attack or
somebody talking about supply chain. It really is kind of
top of mind for a lot of people before. Like
you said, you know, organizations are worried. They're, they're integrated
with so many other companies and if a third party
gets breached, it can affect them. And what's that going
to do? And not to harp on it, but also
we were just talking in the last section about, you
know, these AI agents chaining together a bunch of
tools. I mean, that's even more of a supply chain
right there that you're opening yourself up to. So I
agree. I'm glad to see that here too. Ryan, how
about you? Anything come to mind when you look at
this list? Yeah, you know, I completely agree with Evelyn.
You know, this new OWASP list really feels like a
reality check for the AI coding era where this is
not just a developer list anymore. It's really a map
of how attackers are breaking into modern environments where that,
that big theme, that insecure design, right. That API exposure,
the dependency chain risks that we've been talking about and
really AI generated code are now first class citizens on
this list. Right. What's interesting though is this speed, this
list already feels like it will need updates mid year.
Right. Like our attack surface is already expanding faster than
our ability to train dev teams, you know, and from
my lens on that incident response aspect, I like how
OWASP is really kind of becoming predictive. Like the list
is almost a forecast of the types of vulnerabilities we're
actually cleaning up in breaches every day. Yeah, and that's
a really good point. You know, when you talk about
how it feels like it's a list that almost is
going to need to be updated
pretty soon. Right. Because you know, you also look at
what Evelyn said before about how a lot of this
stuff looks pretty similar to how it used to. Right.
And it raises this kind of question for me where
it's like we're in this AI coding era, and yet
how. How much things have changed. They're also not that
different in some ways. Right. Like there, it's weird how
the more things change, the more they stay the same.
So I don't know. That's just something that came to
mind there. And I wonder, I guess, I don't know,
maybe the rise of vibe coding makes some of this
stuff more likely to be there. Right. Because people who
aren't used to coding apps are coding apps now. I
don't know. Just a thought that I'm throwing out there.
But so anyway, you as the security experts here, as
you look at this and you highlight the things that
jump to mind for you, what do you think the
takeaways are for defenders? Right. Like if you're a defender,
for example, Ryan, you're talking about the need to train
people, for example, if you're a defender sitting there, what
should you be based on this list thinking about as
your kind of next moves? How should this influence what
you do next? Let's go back around the circle. Seth,
any thoughts there? What the key takeaways for defenders are?
Yeah, so I think the fact that security misconfiguration jumped from 5 to 2 tells you a lot about
where attackers are focusing and where we need to focus
as defenders. Right? New vulnerabilities will always be novel. We will find zero days, and we have to respond to
that. But we have a lot of control over those
configuration items and that's what we need to clamp down.
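The kind of clamp-down on configuration items Seth describes here is often implemented as an automated baseline check, which also connects to the automation idea Evelyn raises later in the discussion. As a minimal sketch, assuming a hypothetical set of config keys and hardened values (none of which come from the episode):

```python
# Illustrative sketch: flag drift from a hardened configuration baseline
# so misconfigurations surface before attackers find them.
# The baseline keys and values below are hypothetical examples.

HARDENED_BASELINE = {
    "tls_min_version": "1.2",
    "mfa_required": True,
    "public_bucket_access": False,
    "debug_mode": False,
}

def find_misconfigurations(deployed: dict) -> list[str]:
    """Return human-readable findings where the deployed config drifts
    from the baseline, including keys that are missing entirely."""
    findings = []
    for key, expected in HARDENED_BASELINE.items():
        actual = deployed.get(key, "<missing>")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    # One drifted value and one missing key, for illustration.
    deployed = {"tls_min_version": "1.2", "mfa_required": True,
                "public_bucket_access": True}
    for finding in find_misconfigurations(deployed):
        print(finding)
```

A human reviewer would then validate the findings, matching the "digital worker plus human validation" model discussed later in the segment.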
Right. And so in the application space, because more and more of these apps are managed by these configurations, we have to put more vigilance around how they're created. Right. We're probably using AI to generate some of those configurations as well. So really focusing on that,
you know, in a traditional network environment, a lot of times what's getting exploited is nothing new. It's the same old Active Directory misconfigurations as well. And I think a lot of times those security configurations are thorny.
They can be very difficult to work through. And so
you really have to focus on that. And I think it jumping from five to two just tells you that that is a growing and increasing attack space. And from my perspective, if I were the attacker, yeah, I would love to try to get in there, because that's probably going to have a flaw that I can make use of. That doesn't require me being more novel on the actual vulnerability or injections. Fully agree with what Seth mentioned
there. I guess when I looked at this, I started
thinking about the prior session where we were discussing how
attackers were getting into the environment. And when I'm looking
at broken access control and system misconfiguration, I look at
both of those as being two key areas of opportunity
where we can try to automate. I mean, if we
are making mistakes every time we're configuring a system, then
let's try to automate that process. I mean, there's no reason why we can't actually automate it. And it's
one of the areas that I think we have to
closely look at. How do we migrate to more of
a digital worker versus a human actually doing it and
just have a validation process from a human and if
necessary, establish a secondary control where we go back and
check the initial configuration from the initial setup. So I
think that this is an area that we can definitely
improve in. We just have to start thinking a little
bit out of the box. I love that. I love
that. Ryan, how about you? What are your takeaways? I go back to, like, you know, the defenders. Right. This list really, I think, reflects the reality that defenders live
in every day. You know, as we mentioned before, that
insecure design, API exposure, and misconfigs, you know, that is
what really drives the majority of the breaches, not exotic
exploits. You know, with AI now really producing a significant
chunk of enterprise code, I think we need stronger guardrails,
better code lineage, and better real-time validation. I think that the key takeaway really for defenders is simple: focus on the fundamentals, secure the dev pipeline, and gain that deep visibility into your APIs and identity flows. Yeah, you
know, I think if there was like one cardinal rule of cybersecurity that's come up again and again in almost every episode, it's: focus on the fundamentals. You know,
as much as things change, so much stays the same.
Let's move on then to our next story here. Now
a new report from Check Point Research found that the ransomware
landscape has grown much more fragmented, with the large gangs
losing their traditional grip on power and many smaller gangs
accounting for a bigger share of attacks. Specifically, one of the numbers they call out here was that in Q3 2025, they saw 85 active ransomware and extortion gangs, which was the most decentralized, that's the word they use, the most decentralized ransomware ecosystem that they've seen since Check Point has been, you know, keeping tabs on this thing. Earlier this
year, the top 10 ransomware gangs accounted for 71% of
victims; that has since fallen to 56%. So you know, again, we're seeing that share decrease, and Check Point says that this fragmentation is driven by the kind of dissolution of large gangs following successful law enforcement efforts.
Right. We've seen quite a bit of, you know, gangs get taken down this year by law enforcement. A lot of good activity there. But often what happens is that these gangs get taken down, some people scatter, they
form their own. And so now we've got a bunch
of little guys running around out there. I'm wondering how
this situation, all these little guys popping up, complicates that
threat landscape as opposed to when you're dealing with a
bunch of big ransomware gangs. Let's start with you, Seth.
Any thoughts on how this complicates the threat landscape for
organizations? Yeah, I think first off, the minute that the
larger ones break up now we have sort of a
dearth of intelligence, right. We don't know exactly. We can't
attribute behaviors to a bunch of different small groups. And
furthermore, the small groups, if they start to target more
specific industries, well, their tactics will likely become more effective
while they get sort of hyper niche. The report itself
actually mentioned something that I found interesting around how it,
this ecosystem that's evolving mirrors sort of a decentralized financial
system or open source communities. And we see that a
lot with the way these threat actors work now. Right?
Hey, my core competency is credential theft; I'll sell these off to another group who is good at this. And so as it sort of fragments, it creates
a sort of structure organizationally that is not unlike the
way that we naturally structure legitimate organizations. Right now they're
not governed by the same things, but ultimately these criminal
organizations for the most part have the same goal as
our organizations, which is they, they desire to make money
and turn a profit. Right. So many of them, as they split up, are going to be harder to track and really focus in on. And sort of, hey, if I'm a smaller fish, I'm harder to gobble up. It probably makes them more able to survive, more survivable themselves. Yeah, it's interesting, right? That kind of dark web
marketplace, those dynamics, that economics, is very similar to the legitimate marketplace. Right. Like you said, the same kinds of
pressures are there. At the end of the day, we're
talking about people who are there after money, they're after
profit. Right. It just so happens that they don't have
as many scruples down in the dark Web, as the
rest of us do up here. But I do think
that's a good point, right? How much these things mirror
each other, you know, and another interesting thing about it
is that this. This breakup and this change in market
dynamics, like you said, it gives us a dearth of
intelligence, and it also gives the victims kind of some
hesitation in terms of, do I want to pay these
guys? Because it turns out that they're seen as a
lot less trustworthy than the big guys, which is a
very interesting question about branding. Right. The biggest big guys
have an incentive to decrypt when they say they're going
to decrypt. Right. Because that gets people to pay them.
The little guys have less of an incentive, and that
means people are a little more hesitant about paying them.
I'm wondering, Evelyn, if you have any thoughts about this
particular dynamic, dealing with these ransom demands from these little
groups that you're not even sure are going to deliver on their promises. I actually was listening to Seth and thinking about when you first go through this and you think about decentralizing hackers, I mean, attacks on organizations, it's beyond terrifying when you think about,
okay, my niche, my skill is not here, so I'm
just going to sell. I can sell off the different
bits and pieces to other groups that may have their
specialty. I mean, I agree with the smaller accounts and
organizations where they're a little bit hesitant to pay because
there's no incentive. And when you're talking about a little
guy, okay, we're with this group today. Oh, you caught
one of us. Okay, we just branch out the next
day and we start a new group, you know, and
on and on and on. And so, yeah, when
I looked at this and I thought about the conversations
we had, you know, many years ago, when we thought
about just decentralizing different bits and pieces of cybersecurity. And
I'm like, wow, we're decentralizing, you know, attacks now. This
is, like, beyond terrifying for me when you start thinking
about it. And the biggest thing is, how do you
catch up? You know, there's so much uncertainty and volatility in the process. And when you think about law enforcement being able to take them down, they may get
one, but, you know, that's the problem. They get one,
but then there's nine others that just moved, changed locations,
and started up again. So for me, when I looked
at it and went through the article, it was like,
wow, how do you catch up with this? Because it's
going to be difficult. Now, I'm glad you brought up
that question of catching up, because that's what was running
through my head too. Right? Is that like, how do
you, like, when you take down one organization and a
bunch of little ones pop up? It almost feels like,
what was the point of taking down the big one?
And maybe it's unfair, but I'm going to pose the
question and I'm going to throw it to you, Ryan.
Are we just destined to deal with this? Is this
just what's going to happen where you take down a
big organization and the little ones are going to pop
up? Like, is this just the dynamic and we have
to accept it and adjust to it or. I don't
know, is there a more definite way to shut down
some of this stuff? I don't know. Any thoughts there?
Yeah, you know, I kind of go back to, prior to my time in cyber, I spent many years
in law enforcement. And I think this is something where
we have to, what we always said in law enforcement was, adapt and overcome. Right. And if we take a look at what fragmentation of criminal organizations does, it kind of destroys that traditional law enforcement playbook. Right? The big
gangs have hierarchies. They have infrastructure, they have predictable patterns,
they have financial trails. You track them like an organized
crime enterprise. But when we see these micro crews spinning
up overnight using disposable infrastructure and disappearing just as fast.
And to Evelyn, your point, there's no brand to track,
so, you know, victims are kind of left with, hey, who are we dealing with? There's no negotiation history, no crypto wallet reuse. It's the proverbial whack-a-mole at global scale. Right. So that is a complexity that
we have to just adapt and overcome. I think, really from an IR perspective, you know, since we can't rely on known group patterns, and law enforcement can't either, it's unfortunate because it does prolong investigations, I think unnecessarily, and also complicates victim recovery. So we have to really
find a way to adapt and overcome those challenges. So
to close out this segment, I want to play a
quick game of would you rather. Because I cannot help
myself whenever the opportunity arises to play a quick game
of would you rather. So I'm going to go around
real quick. Would you rather deal with lots of little
gangs running around or just a couple big ones? Seth,
what do you think? I think it's kind of twofold.
I think it's probably easier from the defensive side to
deal with a couple of known players. Right. As Ryan
said, you can get tendencies on them. The intel's easier
to track. We have a way to understand how they will behave because they behave in a way that we understand. A bunch of smaller ones is probably
worse, but it's the same as anything in terms of
this is still just crime. Right. Sweep one street corner,
they're going to move to other ones. It's just kind
of a repeat fashion that we have to overcome. But
I think it would be easier to deal with larger
players as opposed to a bunch of unique small ones. I
agree. When it comes to the larger players, I think
that, you know, they're a little bit more predictable and
they have, you know, a structured approach, and so you
can kind of be able to understand a little bit
of their behaviors. So for me, it would definitely be the larger ones. With the smaller ones, as we mentioned, that's the best description: you're really playing whack-a-mole. So definitely larger. Absolutely. And
Ryan, take us home. Give me the big ransomware cartel
any day over the small crews. The big guys are
predictable, professionalized, and they usually actually follow their own rules.
The smaller crews, they are chaotic, emotional, and heck, they
might even burn down the entire environment by accident. So
I take it there will be no small business Saturday
for the ransomware gangs this year. Let's move on then
to our final story for the week. Insurance payments
for cyber attacks are skyrocketing in the UK. Cyber insurers
reported that in 2024 they spent $259 million on payouts,
compared to $77 million in 2023, which is just a
massive year over year leap. And that doesn't even account
for some of those huge attacks we saw in
2025, like the JLR attack that took them offline for
like, I don't know, a month and a half or
something. Right. So this figure has added more fuel to
an ongoing debate in cybersecurity corners. Does cyber insurance help
strengthen defenses, or does it just encourage hackers to keep
asking for ransoms? I'm gonna start with you, Evelyn. Do
you have any thoughts on this debate about cyber insurance?
And I recognize we might not be able to land on one side cleanly, but I'd just like to hear your take. For me, when I went through the article and read and thought about it a little bit, when you think about cyber insurance,
it's just like us having insurance. It's a safety net.
You don't predict that there's going to be a tornado
that comes through and wipes out your house, but just
in case you're just trying to recover your expenses. And
so when I look at this particular scenario, we see,
I mean the numbers don't lie. We see in our
own cost of a data breach, the report where we're
seeing the numbers just going up, up and away, insurance
payments are going to go up with it. You know,
when you start looking at a lot of the premiums
and you're weighing the difference, it's one of those, okay,
how much is this worth? Or am I willing to come out of my pocket and pay an attacker directly from, you know, my funds, versus let
me have some insurance. And if you look at the
return on investment, the insurance paying out X number of
millions of dollars versus the organization, I think you have
to look at both and go with both. But the
one thing I was looking at when I thought about this is that there's still an impact to their reputation, to the reputational brand. When you look and weigh the differences, if I'm dealing with, you know, I have business with a company that's barely holding on by a thread, versus one that had insurance and was able to recover, I think I'd always go
with the one that had insurance that planned for the
rainy day, you know, when the storm came through and
took everything. The storm being the hackers, makes sense. Ryan,
how about you? Any thoughts here? Yeah, I have a
slightly different look into that. I really think that when
insurance payouts triple in one year, I think it tells
you two clear things. I think the attacks are more
severe and organizations are actually still not building resilience at
the pace that the threats are actually evolving. When reading
this article, I think the interesting part really is the
insurers are becoming unofficial regulators. They're demanding proof of controls,
evidence of IR readiness, tabletop prep, multi-factor authentication everywhere, immutable backups. And if you don't have it, your coverage shrinks and your premiums skyrocket. Right. So insurance is almost no longer just a safety net anymore. It's now shaping cybersecurity maturity programs. And what does that mean
for organizations? I mean, that means during a breach, your
timeline, your containment speed, and your documentation aren't just operational. They
literally directly impact your financial recovery. That makes a lot
of sense. That makes a lot of sense. And I
think that's one of the things that people, you know,
sparks this debate right in the first place is like,
should cyber insurers be these like unofficial regulators? You know,
like, is that the right thing to do? And I
don't know that we can necessarily answer it, but it's
food for thought. Seth, your take on this little debate
here? Yeah, I mean, in terms of whether it should exist or what it does, it's just, does having insurance on any of our goods promote burglary? Right. The insurance
exists because the crime occurs and the loss happens. It's
not like the insurance exists first. So it exists because
there exists a world where we have losses from this.
But in general they do require these baselines. And as
Ryan said, it's becoming this de facto private regulation of
if you want to have coverage, we need you to
do X, Y and Z so they're not unduly exposed
to this. Right. And that's a practice that's common in
any other type of insurance. Right. My insurance company sent
me a letter saying, hey, we used a drone, we
flew over your house and we saw that your roof
needs repairs. Right. Now I was able to proactively repair the roof, keep insured, and I've averted a problem in the future. So, well, that's not always one-to-one with how cyber insurance is going to work.
I know when we work with clients in the range,
it's always, hey, what does your cyber insurer say? Do
you have that in place? All of the resources that
they're going to be able to provide to you in
terms of negotiating with a threat actor, the confidence to
be able to recover. Right. Sort of publicly, as consumers now, we all accept that data breaches and security incidents happen, but how you recover is much more material to how your organization is perceived, to protecting your brand value and how customers respond to you in the future. So the
insurers stepping in to provide that sort of after-the-fact ability to continue to operate your business seems to be a net positive, in that, hey, this doesn't have to be a world-ending event for your business. At the end of the day, the threat actors are probably going
to commit the crimes regardless. Right. They're going to find
a buyer for the data. This at least gives an
organization some sort of structure to be able to respond
and recover and sort of move past the negative event. That's an extremely good point, and this is probably the
only time in my life I will ever say this.
I wish we could keep talking about insurance, but that's
all the time we have for today. I want to
thank our panelists, Evelyn and Seth and Ryan, and our
special guest, Chris Hay. And a thank you to the viewers
and the listeners. As always, subscribe to Security Intelligence wherever
podcasts are found. Stay safe out there. And I guess
watch out for agents spying on you.