Learning Library

← Back to Library

AI‑Driven Hacks: Reality vs Hype

Key Points

  • Hackers are leveraging open‑source tools and agentic AI at high speed, prompting security teams to adopt the same technologies for proactive testing and defense.
  • The episode previews a deep dive into OWASP’s 2025 Top 10 vulnerabilities, emerging ransomware trends, and the ongoing debate about the real value of cyber‑insurance policies.
  • Anthropic revealed that attackers used Claude‑code to automate 80‑90 % of a multi‑stage campaign against 30 targets, sparking both alarm and skepticism about the impact of AI‑driven attacks.
  • Panelists note that the threat is largely predictable—attackers are using AI‑generated code rather than the Claude model itself, highlighting the importance of understanding tool misuse.
  • Overall, the discussion underscores the need for AI‑enhanced security intelligence to stay ahead of increasingly automated adversaries.

Sections

Full Transcript

# AI‑Driven Hacks: Reality vs Hype **Source:** [https://www.youtube.com/watch?v=LNVDrrKbfzg](https://www.youtube.com/watch?v=LNVDrrKbfzg) **Duration:** 00:40:05 ## Summary - Hackers are leveraging open‑source tools and agentic AI at high speed, prompting security teams to adopt the same technologies for proactive testing and defense. - The episode previews a deep dive into OWASP’s 2025 Top 10 vulnerabilities, emerging ransomware trends, and the ongoing debate about the real value of cyber‑insurance policies. - Anthropic revealed that attackers used Claude‑code to automate 80‑90 % of a multi‑stage campaign against 30 targets, sparking both alarm and skepticism about the impact of AI‑driven attacks. - Panelists note that the threat is largely predictable—attackers are using AI‑generated code rather than the Claude model itself, highlighting the importance of understanding tool misuse. - Overall, the discussion underscores the need for AI‑enhanced security intelligence to stay ahead of increasingly automated adversaries. ## Sections - [00:00:00](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=0s) **AI‑Driven Threats and Defensive Tools** - In this opening of IBM’s Security Intelligence podcast, the host warns that hackers are exploiting open‑source and agentic AI tools, urges defenders to adopt similar technologies, and outlines upcoming discussions on OWASP 2025, ransomware trends, cyber‑insurance, and an Anthropic‑related AI espionage incident. - [00:03:15](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=195s) **AI Tool-Calling Sparks Security Debate** - The speaker warns that exposing open‑source tool‑calling to Claude Code turns AI into a script‑kiddie, creating fresh cyber‑threats while also motivating proactive adaptive AI governance initiatives. - [00:07:59](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=479s) **Role‑Play Jailbreaks Bypass AI Guardrails** - The participants explain how attackers use role‑playing and social‑engineering tricks to circumvent AI safety guardrails, exposing ethical blind spots and the ongoing difficulty of reliably detecting such jailbreak attempts. - [00:11:17](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=677s) **AI-Driven Regulatory Compliance Automation** - The speaker describes using autonomous AI agents to parse global regulations, auto‑reason control assessments, build baseline knowledge engines for client risk and governance, and envision a future of fully automated cyber‑defense where AI agents contest hackers. - [00:14:31](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=871s) **AI Empowers Script Kiddies** - The discussion warns that autonomous AI agents could amplify low‑skill attackers, turning script‑kittens into powerful threat actors reminiscent of historic major breaches. - [00:18:12](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1092s) **Rising Concerns Over Vendor Supply Chains** - The speakers note increasing client anxiety about third‑party and AI‑driven supply chain vulnerabilities and how breaches could cascade to affect brand reputation. - [00:21:29](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1289s) **Attackers Target Security Misconfigurations** - The speaker emphasizes that the increase in misconfiguration attacks—from rating 5 to 2—shows attackers are exploiting easily controllable configuration flaws, prompting defenders to tighten vigilance, especially around AI‑generated settings and traditional AD environments. - [00:24:35](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1475s) **Rise of Small Ransomware Gangs** - A Checkpoint Research report shows the ransomware ecosystem has become its most fragmented ever, with 85 active gangs and the top‑10 share dropping from 71% to 56% as law‑enforcement actions dismantle large groups, creating a more complex threat landscape for organizations. - [00:28:06](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1686s) **Fragmented Ransomware Threat Landscape** - The speakers discuss how the rise of small, decentralized cyber‑crime groups reduces victims’ willingness to pay ransoms and creates a volatile, hard‑to‑track attack ecosystem. - [00:32:08](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=1928s) **Prefer Targeting Large Ransomware Cartels** - Panelists argue that defending against big, organized ransomware groups is more predictable and manageable than chasing numerous small, chaotic crews. - [00:38:11](https://www.youtube.com/watch?v=LNVDrrKbfzg&t=2291s) **Cyber Insurance as Private Regulation** - The speaker explains that cyber insurers impose mandatory security baselines—similar to traditional insurance practices—while also providing post‑breach support that enables organizations to recover, safeguard their brand, and maintain operations despite inevitable threat actor activity. ## Full Transcript
0:01Oh, my goodness, this is terrible. Right? The hackers are 0:03using these open source tools and they're running at a 0:05great speed. I mean, I don't want to tell you 0:07how you do your job, people, but maybe, maybe you 0:11should use the same tools and use the models and 0:14rather than saying, you know, hey, please, you are a 0:17security researcher, maybe be a security researcher with the tools 0:22and agentic AI and actually do the test first, right? 0:25All that and more on security intelligence. Hello and welcome 0:32to Security Intelligence, IBM's weekly cybersecurity podcast, where we break 0:37down the most interesting stories in the field with the 0:39help of our panel of experts. I'm your host, Matt 0:42Kaczynski, and joining me today, Ryan Anschutz, North America, leader 0:46of X Force Incident Response. Evelyn Anderson, css, cto, IBM 0:52distinguished engineer, IBM master inventor, Seth Glasgow, Cyber Range Executive 0:57advisor and and our special guest for the first segment, 1:00Chris Hay, distinguished engineer and frequent presence on mixture of 1:05experts, IBM's AI news podcast. For those of you who 1:08have not already subscribed to that one. This week we're 1:10gonna be talking about OWASP's top 10 for 2025, big 1:14changes in the ransomware landscape, and the ongoing debate over 1:18the efficacy of cyber insurance. But first, a story I'm 1:22sure we've all heard by now. Anthropic disrupted an AI 1:25powered Espion. Now, according to Anthropic, hackers were using Claude 1:35code to mount an assault against some 30 targets, with 1:38the AI handling 80 to 90% of the campaign. That's 1:42the number that they put on it, including recon, writing, 1:44exploit code, establishing back doors, exfiltrating data. All this stuff 1:49with almost no human input and reactions have been kind 1:53of mixed. Actually. Some people look at this and they 1:55say, oh my God, this is crazy. I can't believe 1:57that the attackers are utilizing this stuff this way. I 2:01can't believe we've gotten to this point in the cyber 2:03attack era. Other people are looking at it and saying, 2:06this seems a bit sensational. I'm not really sure that 2:08this is as impressive as Anthropic is making it sound. 2:11For example, Anthropic themselves noted that while the attackers were 2:14targeting some 30 organizations, they only succeeded in, quote, a 2:18small number of cases. So I want to start by 2:21throwing it to you, Chris, as the resident AI guy 2:24here. On that scale from holy crap to snoozefest, where 2:28are you landing with this thing and why? Highly predictable. 2:31But the first thing I would say when people say, 2:34you know, the AI is going to cause A lot 2:35of job loss. Who would have thought it would be 2:37the hackers first, you know, so there we go. It 2:42is highly predictable. I mean, if we actually look at 2:44the anthropic paper there, I think there's a couple of 2:46key things we need to point out. First thing is 2:48it was CLAUDE code that they use. It wasn't Claude 2:51in that sense. I mean, Claude as the model was 2:53backing it, but it was Claude code. And that is 2:55actually super important because the real key to this is 2:59about tool orchestration. So they use the. A whole bunch 3:03of open source tools, right? And there's a whole community 3:06of MCP servers. You know, they said they used one 3:09for browser automation. I could take a fair guess that 3:12that was probably playwright, but again, same sort of thing. 3:15They've just taken a load of open source tools, made 3:17that available, probably. And I'm no cybersecurity person, but probably 3:21the same tools that you folks use every day. And 3:24all they've done is they've exposed that to Claude code 3:27in a way. And I think that is the key 3:29thing. It is agentic. So AI is now officially a 3:33script kiddie people. So that's what's going on. So I'm 3:37not surprised because tool calling is the key piece. I 3:40think what I would say is this has now opened 3:43the can of worms, right? And they're saying Claude code. 3:47Well, we're going to protect the Claude model. We're going 3:50to do a lot better in that. Yeah, I can 3:51absolutely see it going that way. And let's throw then 3:54to a cybersecurity take on this one. Right? And let's 4:04was more so just interesting to me. I mean, when 4:06I first looked at it, I looked at it as, 4:08yes, it's a new challenge for us from a cybersecurity 4:10perspective. But I also view it as an opportunity. You 4:14know, the fact that the hackers have kind of beat 4:16us to thinking about how you can start writing code 4:19to, you know, address a lot of their phishing campaigns, 4:23their deep fakes, you know, malware, et cetera, we can 4:26actually do the same thing. My particular team has been 4:29working on building out what they call adaptive AI governance. 4:33And the purpose of it is to be more proactive 4:37than reactive. But the key word is we're building it. 4:40It doesn't exist yet. And what this is actually telling 4:43me and the blaring sign that I see is we 4:46have to move faster, we have to look at how 4:49we can implement AI driven security architectures as well as 4:53governance to adopt across our entire ecosystem versus us waiting 4:59until we detect a problem and finding it. We need 5:02to more autonomous that once we see that someone has 5:05gone in, whether it was intentional or unintentional, and changed 5:08the system configuration, that we don't wait and take it 5:11through, you know, the five whys did it happen, but 5:14we have programs in place to actually flip it back 5:16to what it should be. So for me, when I 5:19looked at this, it just made me think about all 5:21the things in areas where we need to accelerate on. 5:23Absolutely. And that brings up a good point. Right. Sometimes 5:26I look at some of this stuff and I wonder 5:28if the hackers are moving a lot faster with this 5:31stuff than the kind of legitimate use cases are. Ryan, 5:34I see you nodd. What are your thoughts there? Do 5:35you think the hackers have a kind of leg up 5:37on us right now? Yeah, I would say this is 5:39pretty challenging because, you know, for this case in particular, 5:42what really stood out to me was the automation of 5:45actually the full kill chain. It was everything from recon 5:49exploit generation, backdoor deployment and data theft. You know, in 5:54their numbers that 80 to 90% of that was done 6:03sensationalizing is a good word. This is kind of pretty 6:07scary. But in the same breath, this is the first 6:11time the public has actually seen what we in IR 6:15have actually been quietly preparing for. And that is an 6:18adversary that really doesn't sleep, that doesn't take weekends, and 6:22iterates at true machine speed. So for IR teams really 6:26this shifts the game. We now have to really detect 6:30and contain not just our human counterparts, our human adversaries, 6:35but machine speed adversaries. And the interesting thing about that 6:39kind of machine speed adversary is that it might move 6:42a lot faster than us, but it also makes some 6:44mistakes that people don't make. Right. One of the things 6:47that Anthropic brought up in their report was that, you 6:50know, Claude Code did hallucinate a couple times. Right. They 6:53saw it spitting out some fake credentials that didn't actually 6:55work. Right. Or claiming it had surfaced data that was 6:58actually just public data. I'm wondering how this factors into 7:01our assessment of the threat. Seth, any thoughts on your 7:04end there? Yeah, I think it does kind of matter 7:06because it does still have a lot of the same 7:08limitations that an LLM normally would. It can hallucinate. It's 7:12only as good as what it's trained on. But additionally, 7:14you know, Claude code is sort of a legitimate tool, 7:16Right. It was essentially socially engineered. And I believe the 7:20full paper actually uses that term to describe how could 7:23the legitimate tool be used to do something that was 7:26destructive. Well, they set up a pretext that they, the 7:29attackers were themselves a security research company and then leveraged 7:33it to get it to do these things sort of 7:35against it. And so a lot of what we work 7:38with clients on is sort of, hey, how does AI 7:40change what these attack avenues are? And this is similarly 7:43in a sort of the old attack avenue of I've 7:45convinced you to do something, it's social engineering, but now, 7:48as Ryan said, it's at machine speed. Right? The scale, 7:51the speed, the scope of how quickly this can be 7:53done. If I'm still just convincing something to do it, 7:56that does materially change how bad that can be. Right. 7:59So even though it's going to be wrong, you know, 8:01that ethical limitation of if I'm the attacker, do I 8:03care that it's wrong? Probably not. Right. I don't have 8:06the same governance structures when I'm on the other side 8:09of the law that I would on a legitimate side. 8:12Seth, I'm really glad you brought up that social engineering 8:15perspective because you're right. Right. This all had to. The 8:18way that this worked was the Claude code was jailbroken, 8:21right? You had to kind of, or not jailbroken necessarily, 8:23but you had to trick it basically, right? Make it 8:26think it was doing something legitimate. Chris, I was wondering 8:28if you could give us some insight into how easy 8:31it is to kind of trick these AI models into 8:33doing things they shouldn't. I mean, how are our guardrails? 8:36What's the kind of level of sophistication there? I think 8:39these days it's getting pretty sophisticated. I mean, there tends 8:42to be guard models surrounding the main model anyway. But 8:46the reality is it's very difficult to tell when something 8:51is role playing. And that's really what happened in this 8:54case. It was a role play type attack versus doing 8:56it for reals. Maybe a bit of a clue is 8:58you gave access to the tools and it was sitting 9:01hacking on your behalf. So that might be the point 9:04where you might say, you know what, we're beyond role 9:07playing at that point. But where there's a will, there's 9:11a way. I just don't think this is necessarily a 9:14solved problem. So if you look at folks like Pliny, 9:17for example, is always publishing how to prompt engineer around 9:23these models, then most of this stuff is out there 9:25in the open. I guess what I would say just 9:28to counter things is everybody's going, oh my goodness, this 9:31is terrible. Right? The hackers are using these open source 9:33tools and they're running at a great speed. I mean, 9:36I don't want to tell you how you do your 9:37job, people, but maybe you should use the same tools 9:42and use the models and rather than saying, hey, please, 9:45you are a security researcher, maybe be a security researcher 9:50with the tools and agentic AI and actually do the 9:53test first. Right? So use the very same tools to 9:56do that. Because the reality is if you don't have 10:02open source tools, then you're not going to be hacked 10:06by these folks. So I think there is an argument 10:09for security teams to actually start doing that. Preventative measures 10:13use the exact same techniques as the hackers. It's all 10:16laid out in the paper from Anthropic. It's actually a 10:18really pretty well detailed paper on that. So go follow 10:21that and just do it in advance. I think it's 10:24a really good point. You know, honestly, like adopting that 10:27research, adopting that, that hacking technique kind of as your 10:30own research method. And I'm wondering if any of the 10:32folks here have, have toyed around with that stuff or 10:35toyed around with agentic AI or anything really in your 10:37kind of security work. Kind of open question. Has anybody 10:41toyed with that stuff? Anything they want to talk about 10:42on that front? Yeah, I think I can touch on 10:45that a little bit. You know, just how like attackers 10:48are chaining those models together, like you mentioned, Chris, you 10:51know, defenders, we are actually, you know, chaining together Defensive 10:55Age, one model monitoring logs, another doing memory forensics, another 11:01isolating hosts, and you know, maybe another actually even drafting 11:04executive updates. You know, that's where this all goes. It's 11:07kind of turning those defensive models and fighting fire with 11:11fire, essentially, as you mentioned before. So that is something 11:14that as defenders, we are constantly evolving every day with. 11:17Yeah, and we're doing some. We're not actually doing the 11:19same thing as Ryan, but we're doing things where we're 11:21utilizing agency to perform like some auto analysis and reasoning. 11:26We are responsible for all of the regulations, frameworks, et 11:29cetera, that come in. So when you think about the 11:32huge number of regulations and laws across the globe, when 11:36we're working with our clients, we have to be able 11:38to pull out the controls and make sure that the 11:40client is meeting their regulatory commitments. And so we're doing 11:44analysis there where Our engine today is able to go 11:47in, analyze, perform some auto reasoning, and then respond to 11:51like any type of an assessment. So that's how we're 11:54utilizing it within our arena. But we're also helping to 11:58build it where we're building a knowledge engine that will 12:00be able to establish a baseline for a client and 12:04then assess, will assess the client against that baseline, not 12:07only for their controls, but for the risk model, for 12:10their governance structure, their strategy, et cetera. But so it 12:13seems like the overall kind of perspective here is that, 12:16right? We all kind of saw this coming, but it's 12:18still a big deal. But we're working on some stuff 12:21on our end as defenders to kind of put these 12:24tools to the same kinds of uses. And eventually we're 12:27gonna start entering into, I don't know, maybe a fully 12:30automated, you know, sort of security world where the hackers 12:34are attacking the agents and the agents are attacking the 12:37hacker agents. I don't know, you don't even need people 12:38there anymore. It's fully automated cyber attacks. It's great. Last 12:42question, though. And this is kind of. I don't know, 12:45this is just something I saw that I wanted to 12:47bring up. I saw one tweet about this where somebody 12:50was very skeptical of what Anthropic was reporting. And they 12:54said, they asked how Anthropic could get what they called 12:57AGI level performance out of this stuff. I wanted to 13:00throw this to the actual AI guy here. Does anything 13:03like this look like AGI to you at all? Because 13:06it didn't to me. It's definitely not AGI. This is. 13:11I mean, as I said before, this. The real innovation 13:14on this is the fact that the models are getting 13:17really good at tool calling, right? And they're also getting 13:20really good at planning. They are the two things. So 13:23if you look at some of the weaker models, they'll 13:25just. They're like your script kiddies. They'll just call any 13:27old tool and hope something lands, right? But. But a 13:30really good big model that can tool call will be 13:34able to follow up. It'll follow sequence events, it'll have 13:37a plan in the same way as a human being 13:39is. So this isn't. This isn't about how great the 13:42AI is, and it is in some regards, it is 13:44about how great the AI is because CLAUDE is a 13:47great model and Claude code is a wonderful tool. But 13:50it's really this advances in function call and this advances 13:53in planning that's made this possible. And it's really about 14:02of your security tools that you use today, the hack 14:05and all the script kitties use, et cetera, if you 14:07get all of those same tools, you make them available 14:10with something like MCP servers or just wrap them in 14:13function calls, then you are going to be able to 14:16have AI do the same thing. Because it's not really 14:18applying so much intelligence there. It's using the tools to 14:22break through and then gather the insights. So anybody who's 14:25done any sort of I coding or done any sort 14:27of agentic work whatsoever will recognize the techniques here. Yeah. 14:31So I know it's been mentioned a couple of times 14:33around script kitties, and I think that is concerning where 14:37some, if you look historically, some of the world's worst 14:40breaches were, weren't actually done by elite threat actors. They 14:43were done by beginners using downloaded tools and historically old 14:48vulnerabilities. Now imagine those same beginners with access to autonomous 14:53AI agents. That is the trajectory that actually should concern 14:57us. That's a really good point. Right, Because I think 15:00especially when we have these conversations about AI, there's a 15:03tendency to focus on how impressive the tool itself is. 15:06But it's really important to what it's enabling people to 15:09do. Right. And it's like you said, a script kitty 15:12with one of these things in their hands, they become 15:15very good at that stuff. And you didn't even need 15:18to be that good in the first place. Right. You 15:19know what I mean? Like you said, some of the 15:21biggest attacks in history were the script kitties. So I 15:23think that's a really important point. On that note, I 15:25think it's time to move on to our next story, 15:26but thanks you. Thank you, Chris, for being here, for 15:29talking through this stuff with the rest of us. Yeah, 15:32go watch mixture of experts if you haven't already, folks. 15:35Thanks for having me. All right, let's move on to 15:37our second story for today. This is OWASP's top 10 15:40for 2025. Now, last week, OWASP released the latest installment 15:49of its top 10 list of the most critical web 15:51application security risks. This is the eighth installment of List 15:54overall, I believe, and the first update since 2021. Some 15:59highlights are broken access control remains the number one threat. 16:03Security misconfigurations rose from number five to number two. And 16:08two new ones joined the list. We've got software supply 16:11chain failures and mishandling of exceptional conditions. So first off, 16:15I just want to get initial reactions to this update. 16:17You know, I know a Lot of us kind of 16:19use OWASP as a guide for when we're designing or 16:21working on web apps. So I just want to see 16:23how people are feeling about this newest iteration. Let's start 16:25with you, Seth. Any initial reactions, things that caught your 16:28eye here, thoughts that come to mind? Yeah, I actually 16:30really like the addition of the exceptional conditions sort of 16:34catch. Right. Because a lot of it is focusing on 16:36these unique failure points that exist within a certain context 16:39or, you know, in usual customer applications, we're thinking of 16:42the business logic. Right. And so we've actually seen kind 16:46of sometimes on the testing side you have sort of 16:48what old. What is old is new again in terms 16:50of these vibe coded apps that are bringing in old 16:53business logic flaws that we thought we had kind of 16:56taken care of. Right. So it's not perfect. Right. The 16:59overall OWASP idea of categorizing CWEs versus specifics the way 17:03Mitre does it is a point of contention. But the 17:06idea that this category exists to acknowledge that there are 17:09just conditions that have been mishandled but don't necessarily have 17:13a standard way to handle is extremely important in putting 17:16the apps in the business and user context. Right. How 17:19is it being used that can be abused as opposed 17:22to just a rote coding issue? Yeah, I'm glad you 17:24kind of explicated on that. Right. Because you just see 17:26that phrase mishandling of exceptional conditions. And if you're a 17:30guy like me who's not super technical, you're like, what 17:32the heck does that mean? So I'm glad you put 17:34that in a context where it's like, oh, that's super 17:36duper important. Evelyn, what about you? Any thoughts on this 17:39new OWASP list? Anything jump to mind when you look 17:41at it? It was a little bit kind of. Well, 17:44I was surprised, I guess, in one sense because I 17:47haven't actually looked at The OWASP top 10 in a 17:50couple of years and I was like, wow, it hasn't 17:51changed a lot. Which is a disappoint that we should 17:54have improved over the last five years when I saw 17:57that broken access control, still holding that top spot very 18:02strongly there. So I was a little bit disappointed that 18:06there hasn't been any improvement, that much improvement within the 18:09market there. But I was happy to see supply chain. 18:13When it comes to supply chain, we're starting to realize 18:16that our clients are realizing the growing risk around their 18:19vendors and the 3rd nth party vendors and how it 18:23actually impacts. And we're starting to see more and more 18:26coming out around this space, when we've been looking at 18:29it, most of the clients, their biggest complaint and concern 18:32now is how those third party, fourth party, nth party, 18:35whatever vendors actually have on their organization and what portions 18:39of their supply chain that they're actually supporting and if 18:42they go down or if there's any type of a 18:45problem, a breach with them, how it impacts their overarching 18:48brand. So I was very pleasantly surprised to see the 18:51supply chain there. Yeah. And you know, I think that 18:54basically every episode we've recorded of this show so far 18:56has included some reference to a supply chain attack or 18:59somebody talking about supply chain. It really is kind of 19:02top of mind for a lot of people before. Like 19:03you said, you know, organizations are worried. They're, they're integrated 19:06with so many other companies and if a third party 19:08gets breached, it can affect them. And what's that going 19:10to do? And not to harp on it, but also 19:12we were just talking in the last section about, you 19:15know, these AI agents chain chaining together a bunch of 19:18tools. I mean, that's even more of a supply chain 19:20right there that you're getting open up to. So I 19:22agree. I'm glad to see that here too. Ryan, how 19:25about you? Anything come to mind when you look at 19:26this list? Yeah, you know, I completely agree with Evelyn. 19:29You know, this new OWASP list really feels like a 19:33reality check for the AI coding era where this is 19:37not just a developer list anymore. It's really a map 19:40of how attackers are breaking into modern environments where that, 19:44that big theme, that insecure design, right. That API exposure, 19:50the dependency chain risks that we've been talking about and 19:53really AI generated code are now first class citizens on 19:57this list. Right. What's interesting though is this speed, this 20:01list already feels like it will need updates mid year. 20:05Right. Like our attack surface is already expanding faster than 20:08our ability to train dev teams, you know, and from 20:11my lens on that incident response aspect, I like how 20:15OWASP is really kind of becoming predictive. Like the list 20:18is almost a forecast of the types of vulnerabilities we're 20:23actually cleaning up in breaches every day. Yeah, and that's 20:26a really good point. You know, when you talk about 20:28how it feels like it's a list that almost is 20:30going to be needed to updated. Needed to be updated 20:33pretty soon. Right. Because you know, you also look at 20:36what Evelyn said before about how a lot of this 20:38stuff looks pretty similar to how it used to. Right. 20:40And it raises this kind of question for me where 20:42it's like we're in this AI coding era, and yet 20:45how. How much things have changed. They're also not that 20:48different in some ways. Right. Like there, it's weird how 20:51the more things change, the more they stay the same. 20:53So I don't know. That's just something that came to 20:55mind there. And I wonder, I guess, I don't know, 20:57maybe the rise of vibe coding makes some of this 20:59stuff more likely to be there. Right. Because people who 21:03aren't used to coding apps are coding apps now. I 21:05don't know. Just a thought that I'm throwing out there. 21:06But so anyway, you as the security experts here, as 21:09you look at this and you highlight the things that 21:11jump to mind for you, what do you think the 21:13takeaways are for defenders? Right. Like if you're a defender, 21:16for example, Ryan, you're talking about the need to train 21:18people, for example, if you're a defender sitting there, what 21:21should you be based on this list thinking about as 21:23your kind of next moves? How should this influence what 21:25you do next? Let's go back around the circle. Seth, 21:27any thoughts there? What the key takeaways for defenders are? 21:29Yeah, so I think the fact that mis security misconfiguration 21:33jump from 5 to 2 tells you a lot about 21:36where attackers are focusing and where we need to focus 21:39as defenders. Right? Right. New vulnerabilities will always be novel. 21:42We will find zero days we have to respond to 21:44that. But we have a lot of control over those 21:47configuration items and that's what we need to clamp down. 21:50Right. And so in the application space, it's because more 21:53and more of these apps are managed by these configurations, 21:56we have to put more sort of vigilance around how 21:59they're created. Right. We probably using AI to generate some 22:02of those configurations as well. So really focusing on that, 22:05you know, in a traditional network environment, a lot of 22:08times what's getting exploited is nothing new do it's the 22:10same old active directory misconfigurations as well. And I think 22:14a lot of times it's those security configurations are thorny. 22:18They can be very difficult to work through. And so 22:20you really have to focus on that. And I think 22:22it jumping from five to two just tells you that 22:25that is growing and increasing attack space. And from my 22:28perspective, if I were the attacker, yeah, I would love 22:30to try to get in there because that's probably going 22:32to have a flaw that I can make a use 22:34of. That doesn't require me being more novel on the 22:37actual vulnerability or injects. Fully agree with what Seth mentioned 22:41there. I Guess when I looked at this, I started 22:43thinking about the prior session where we were discussing how 22:47attackers were getting into the environment. And when I'm looking 22:49at broken access control and system misconfiguration, I look at 22:53both of those as being two key areas of opportunity 22:57where we can try to automate. I mean, if we 23:00are making mistakes every time we're configuring a system, then 23:05let's try to automate that process. I mean, it's. There's 23:08no reason why we can't actually automate it. And it's 23:11one of the areas that I think we have to 23:12closely look at. How do we migrate to more of 23:16a digital worker versus a human actually doing it and 23:20just have a validation process from a human and if 23:23necessary, establish a secondary control where we go back and 23:26check the initial configuration from the initial setup. So I 23:33think that this is an area that we can definitely 23:36improve in. We just have to start thinking a little 23:38bit out of the box. I love that. I love 23:40that. Ryan, how about you? What are your takeaways? I 23:42go back to like, you know, the Defenders. Right. This 23:45list really, I think, reflects the reality that Defenders live 23:50in every day. You know, as we mentioned before, that 23:52insecure design, API exposure, and misconfigs, you know, that is 23:57what really drives the majority of the breaches, not exotic 24:01exploits. You know, with AI now really producing a significant 24:05chunk of enterprise code, I think we need stronger guardrails, 24:09better code lineage, and better real time validation. I think 24:13that the key takeaway really for Defenders is simple focus 24:17on the fundamentals, secure the dev pipeline, and gain that 24:22deep visibility into your APIs and identity flows. Yeah, you 24:25know, I think if there was like one cardinal rule 24:27of cybersecurity that's come up again in almost every episode 24:30we talk about it's focus on the fundamentals. You know, 24:33as much as things change, so much stays the same. 24:36Let's move on then to our next story here. Now 24:43a new report from Checkpoint Research found that the ransomware 24:47landscape has grown much more fragmented, with the large gangs 24:51losing their traditional grip on power and many smaller gangs 24:55accounting for a bigger share of attacks. Specifically, some of 24:59the numbers they call out here was that in Q3, 25:012025, they saw 85 active ransomware and extortion gangs, which 25:05was the most decentralized. That's the word they use, the 25:09most decentralized ransomware ecosystem that they've seen since Checkpoint has 25:13been, you know, keeping tabs on this thing. Earlier this 25:16year, the top 10 ransomware gangs accounted for 71% of 25:19victims, that has since fallen to 56%. So you know, 25:23again, we're seeing that, that that shared decrease and that 25:26checkpoint says that this fragmentation is driven by the kind 25:29of dissolution of large gangs following successful law enforcement efforts. 25:33Right. We've seen quite a bit of, of, you know, 25:35gangs get taken down this year by law enforcement. A 25:38lot of good activity. There's. But often what happens is 25:41that these gangs get taken down, some people scatter, they 25:44form their own. And so now we've got a bunch 25:45of little guys running around out there. I'm wondering how 25:49this situation, all these little guys popping up, complicates that 25:52threat landscape as opposed to when you're dealing with a 25:54bunch of big ransomware gangs. Let's start with you, Seth. 25:57Any thoughts on how this complicates the threat landscape for 26:00organizations? Yeah, I think first off, the minute that the 26:03larger ones break up now we have sort of a 26:05dearth of intelligence, right. We don't know exactly. We can't 26:08attribute behaviors to a bunch of different small groups. And 26:11furthermore, the small groups, if they start to target more 26:14specific industries, well, their tactics will likely become more effective 26:17while they get sort of hyper niche. The report itself 26:21actually mentioned something that I found interesting around how it, 26:24this ecosystem that's evolving mirrors sort of a decentralized financial 26:28system or open source communities. And we see that a 26:32lot with the way these threat actors work now. Right? 26:34Hey, I, my core competency is credential theft. I'll sell 26:37these off to another group, group who is good at 26:39this. And so as it sort of fragments, it creates 26:42a sort of structure organizationally that is not unlike the 26:47way that we naturally structure legitimate organizations. Right now they're 26:51not governed by the same things, but ultimately these criminal 26:53organizations for the most part have the same goal as 26:57our organizations, which is they, they desire to make money 27:00and turn a profit. Right. So many of them as 27:03they split up are going to be harder to track 27:05and really focus in and sort of being, hey, if 27:08I'm a smaller fish, I'm harder to gobble up. It 27:11probably makes them able to survive and become survivable better 27:14themselves. Yeah, it's interesting, right? That kind of dark web 27:17marketplace, those dynamics that economics is very similar to the 27:21legitimate marketplace. Right. Like you said, the same kinds of 27:24pressures are there. At the end of the day, we're 27:25talking about people who are there after money, they're after 27:27profit. Right. It just so happens that they don't have 27:30as many scruples down in the dark Web, as the 27:32rest of us do up here. But I do think 27:33that's a good point, right? How much these things mirror 27:35each other, you know, and another interesting thing about it 27:38is that this. This breakup and this change in market 27:42dynamics, like you said, it gives us a dearth of 27:44intelligence, and it also gives the victims kind of some 27:50hesitation in terms of, do I want to pay these 27:53guys? Because it turns out that they're seen as a 27:55lot less trustworthy than the big guys, which is a 27:57very interesting question about branding. Right. The biggest big guys 28:00have an incentive to decrypt when they say they're going 28:03to decrypt. Right. Because that gets people to pay them. 28:06The little guys have less of an incentive, and that 28:09means people are a little more hesitant about paying them. 28:11I'm wondering, Evelyn, if you have any thoughts about this 28:14particular dynamic, dealing with these ransom demands from these little 28:18groups that you're not even sure of if they're going 28:20to deliver on their promises. I actually, I was listening 28:25to Seth and thinking about when. When you first go 28:28through this and you think about decentralizing hackers, I mean, 28:33attacks into organizations, it's beyond terrifying when you think about, 28:39okay, my niche, my skill is not here, so I'm 28:42just going to sell. I can sell off the different 28:44bits and pieces to other groups that may have their 28:47specialty. I mean, I agree with the smaller accounts and 28:52organizations where they're a little bit hesitant to pay because 28:56there's no incentive. And when you're talking about a little 28:59guy, okay, we're with this group today. Oh, you caught 29:02one of us. Okay, we just branch out the next 29:04day and we start a new group, you know, and 29:07on and on and on. And so it's. Yeah, when 29:11I looked at this and I thought about the conversations 29:13we had, you know, many years ago, when we thought 29:15about just decentralizing different bits and pieces of cybersecurity. And 29:19I'm like, wow, we're decentralizing, you know, attacks now. This 29:24is, like, beyond terrifying for me when you start thinking 29:27about it. And the biggest thing is, how do you 29:30catch up? You know, there's so much uncertainty and volatility 29:35in the process until when you think about the law 29:38enforcement being able to take them down, they may get 29:40one, but, you know, that's the problem. They get one, 29:43but then there's nine others that just moved, changed locations, 29:47and started up again. So for me, when I looked 29:50at it and went through the article, it was like, 29:52wow, how do you catch up with this? Because it's 29:55going to be difficult. Now, I'm glad you brought up 29:57that question of catching up, because that's what was running 29:59through my head too. Right? Is that like, how do 30:01you, like, when you take down one organization and a 30:04bunch of little ones pop up? It almost feels like, 30:06what was the point of taking down the big one? 30:07And maybe it's unfair, but I'm going to pose the 30:09question and I'm going to throw it to you, Ryan. 30:11Are we just destined to deal with this? Is this 30:13just what's going to happen where you take down a 30:15big organization and the little ones are going to pop 30:17up? Like, is this just the dynamic and we have 30:19to accept it and adjust to it or. I don't 30:21know, is there a more definite way to shut down 30:23some of this stuff? I don't know. Any thoughts there? 30:26Yeah, you know, I kind of go back to prior. 30:28Prior to my time in cyber, I spent many years 30:30in law enforcement. And I think this is something where 30:33we have to. What we always say in law enforcement 30:35was adapt and overcome, Right. And if we take a 30:38look at what fragmentation does of criminal organizations, it kind 30:42of destroys that traditional law enforcement playbook. Right? The big 30:47gangs have hierarchies. They have infrastructure, they have predictable patterns, 30:53they have financial trails. You track them like an organized 30:57crime enterprise. But when we see these micro crews spinning 31:01up overnight using disposable infrastructure and disappearing just as fast. 31:07And to Evelyn, your point, there's no brand to track, 31:10so, you know, victims are kind of left with, hey, 31:12who. Who are we dealing with? There's no negotiation history, 31:16no crypto, wallet reuse. It's the proverbial whack a mole 31:21at global scale. Right. So that is a complexity that 31:26we have to just adapt and overcome with. I think, 31:29really from ir, you know, since we can't rely on 31:34known group patterns and law enforcement can either, that really, 31:38it's unfortunate because it does prolong investigations, I think, unnecessarily, 31:42and also complicates victim recovery. So we have to really 31:46find a way to adapt and overcome those challenges. So 31:51to close out this segment, I want to play a 31:53quick game of would you rather. Because I cannot help 31:55myself whenever the opportunity arises to play a quick game 31:58of would you rather. So I'm going to go around 32:00real quick. Would you rather deal with lots of little 32:03gangs running around or just a couple big ones? Seth, 32:06what do you think? I think it's kind of twofold. 32:08I think it's probably easier from the defensive side to 32:10deal with a couple of known players. Right. As Ryan 32:13said, you can get tendencies on them. The intel's easier 32:15to track. We have a way to understand how they 32:18will behave because they under. They behave in a way 32:20that we understand. A bunch of smaller ones is probably 32:24worse, but it's the same as anything in terms of 32:27this is still just crime. Right. Sweep one street corner, 32:30they're going to move to other ones. It's just kind 32:31of a repeat fashion that we have to overcome. But 32:34I think it would be easier to deal with larger 32:37players as opposed to a bunch of unique small. I 32:39agree. When it comes to the larger players, I think 32:42that, you know, they're a little bit more predictable and 32:44they have, you know, a structured approach, and so you 32:48can kind of be able to understand a little bit 32:50of their behaviors. So for me, it would definitely be 32:53a larger. I think the smaller, as we mentioned, are 33:02that's the best description, is you're really trying to whack 33:05a mole with the smaller. So definitely larger. Absolutely. And 33:08Ryan, take us home. Give me the big ransomware cartel 33:12any day over the small cruise. The big guys are 33:15predictable, professionalized, and they usually actually follow their own rules. 33:20The smaller crews, they are chaotic, emotional, and heck, they 33:25might even burn down the entire environment by accident. So 33:30I take it there will be no small business Saturday 33:32for the ransomware gangs this year. Let's move on then 33:35to our. Our final story for the week. Insurance payments 33:39for cyber attacks are skyrocketing in the UK. Cyber insurers 33:48reported that in 2024 they spent $259 million on payouts, 33:54compared to 77 million in 2023, which is just a 33:58massive year over year leap. And that doesn't even account 34:00for some of those. Those huge attacks we saw in 34:022025, like the JLR attack that took them offline for 34:06like, I don't know, a month and a half or 34:07something. Right. So this figure has added more fuel to 34:11an ongoing debate in cybersecurity corners. Does cyber insurance help 34:15strengthen defenses, or does it just encourage hackers to keep 34:19asking for ransoms? I'm gonna start with you, Evelyn. Do 34:24you have any thoughts on this debate about cyber insurance? 34:27And I recognize we can't land. We might not be 34:29able to land on one side cleanly, but I'd just. 34:31Like to hear your take for me, when I went 34:34through the article and read and thought about it a 34:37little bit. For me, when you think about cyber Insurance, 34:40it's just like us having insurance. It's a safety net. 34:43You don't predict that there's going to be a tornado 34:47that comes through and wipes out your house, but just 34:50in case you're just trying to recover your expenses. And 34:53so when I look at this particular scenario, we see, 34:58I mean the numbers don't lie. We see in our 35:00own cost of a data breach, the report where we're 35:02seeing the numbers just going up, up and away, insurance 35:07payments are going to go up with it. You know, 35:09when you start looking at a lot of the premiums 35:11and you're weighing the difference, it's one of those, okay, 35:15do I, how much is this worth? Or am I 35:19willing to come out of my pocket and pay to 35:22an attacker directly from, you know, my funds versus let 35:26me have some insurance. And if you look at the 35:28return on investment, the insurance paying out X number of 35:32millions of dollars versus the organization, I think you have 35:36to look at both and go with both. But the 35:39one thing, what I was looking at when I thought 35:41about this is that there's still an impact to their 35:44reputation. The reputational brand is still. When you look and 35:48weigh the differences, if I know that a company was 35:51able to recover versus if I'm dealing, you know, and 35:55I have business with the company and they're not able 36:04holding on by a thread versus one that had insurance 36:08and was able to recover. I think I'd always go 36:10with the one that had insurance that planned for the 36:12rainy day, you know, when the storm came through and 36:15took everything. The storm being the hackers, makes sense. Ryan, 36:21how about you? Any thoughts here? Yeah, I have a 36:23slightly different look into that. I really think that when 36:28insurance payouts triple in one year, I think it tells 36:31you two clear things. I think the attacks are more 36:35severe and organizations are actually still not building resilience at 36:41the pace that the threats are actually evolving. When reading 36:45this article, I think the interesting part really is the 36:48insurers are becoming unofficial regulators. They're demanding proof of controls, 36:56evidence of IR readiness, tabletop prep, multi factor authentication, everywhere, 37:03immutable backups. And if you don't have it, your coverage 37:08shrinks and your premiums skyrocket. Right. So that insurance is 37:12almost that it's no longer a safety net anymore. It's 37:16now shaping cybersecurity maturity programs. And what does that mean 37:21for organizations? I mean, that means during a breach, your 37:25timeline Your containment, speed and documentation aren't just operational. They 37:32literally directly impact your financial recovery. That makes a lot 37:36of sense. That makes a lot of sense. And I 37:37think that's one of the things that people, you know, 37:39sparks this debate right in the first place is like, 37:41should cyber insurers be these like unofficial regulators? You know, 37:45like, is that the right thing to do? And I 37:47don't know that we can necessarily answer it, but it's 37:49food for thought. Seth, your take on this little debate 37:53here? Yeah, I mean, it's in terms of if it 37:55should exist or what it does. It's just having insurance 37:59on any of our goods promote burglary. Right. The insurance 38:02exists because the crime occurs and the loss happens. It's 38:06not like the insurance exists first. So it exists because 38:09there exists a world where we have losses from this. 38:11But in general they do require these baselines. And as 38:14Ryan said, it's becoming this de facto private regulation of 38:18if you want to have coverage, we need you to 38:20do X, Y and Z so they're not unduly exposed 38:24to this. Right. And that's a practice that's common in 38:26any other type of insurance. Right. My insurance company sent 38:29me a letter saying, hey, we used a drone, we 38:31flew over your house and we saw that your roof 38:33needs repairs. Right. Right now I was able to proactively 38:35repair the roof, keep insured, and now I've averted a 38:38problem in the future. So. Well, that's not always one 38:41to one with how cyber insurance is going to work. 38:44I know when we work with clients in the range, 38:46it's always, hey, what does your cyber insurer say? Do 38:49you have that in place? All of the resources that 38:51they're going to be able to provide to you in 38:53terms of negotiating with a threat actor, the confidence to 38:56be able to recover. Right. Sort of publicly as consumers 38:59anymore, we all accept that data breaches and security incidents 39:03happen, but how you recover is much more material to 39:06how your organizations perceive to protect your brand value and 39:09how customers respond to you in the future. So the 39:12insurers stepping in to provide that sort of after the 39:15fact, you could continue to operate your business. Seems to 39:19be a net positive at that. Hey, this doesn't have 39:22to be a world ending event for your business. The 39:26end of the day, the threat actors are probably going 39:29to commit the crimes regardless. Right. They're going to find 39:31a buyer for the data. This at least gives an 39:33organization some sort of structure to be able to respond 39:36and recover and sort of move past the negative event 39:39that's an extremely. Good point, and this is probably the 39:41only time in my life I will ever say this. 39:43I wish we could keep talking about insurance, but that's 39:46all the time we have for today. I want to 39:49thank our panelists, Evelyn and Seth and Ryan, and our 39:52special guest, Chris Hay. A thank you to the viewers 39:55and the listeners. As always, subscribe to Security Intelligence wherever 39:59podcasts are found. Stay safe out there. And I guess 40:02watch out for agents spying on you.