Learning Library

← Back to Library

AI Deepfakes, Ransomware, OT Threats

Key Points

  • The episode opens with a warning that AI‑generated deepfakes have become dramatically more realistic, signaling a new era of threats beyond earlier “Forrest Gump meets JFK” analogies.
  • The show’s roundup covers a post‑mortem on the Scattered Lapsus$ Hunters hacker group, a proof‑of‑concept AI‑driven “PromptLock” ransomware, a single phishing email that compromised 20 npm packages, and a fresh IBM X‑Force report on the biggest threats to OT and critical‑infrastructure systems.
  • Highlights include a discussion of business‑identity compromise scams that give hackers apparently legitimate jobs, an odd case where a threat actor installed Huntress EDR on their own machine, and a controversial hot take on the reliability of CVSS scoring.
  • Host Matt Kaczynski introduces the expert panel of Michelle Alvarez, Sridhar Muppidi, and Dave Bales, and teases upcoming long‑form interviews about the true cost of cyber‑attacks in the U.S. and the challenges of securing AI identities in enterprises.

Sections

**Source:** [https://www.youtube.com/watch?v=dAS4zgMiSuQ](https://www.youtube.com/watch?v=dAS4zgMiSuQ)
**Duration:** 00:46:03

  • [00:00:00](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=0s) **Introduction** - Cold open and panel introductions.
  • [00:03:12](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=192s) **Threat Groups Feign Retirement** - Participants discuss how cyber‑threat actors often claim they'll disappear, only to reappear, questioning the authenticity of such claims.
  • [00:06:31](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=391s) **Ransomware Groups Reevaluating Tactics** - Speakers discuss shifting from high‑profile ransomware to low‑effort hacks, the influence of AI tools, and internal infighting that causes fleeting criminal alliances.
  • [00:10:18](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=618s) **Responsible AI Red‑Team Ethics** - Panelists debate leveraging AI for red‑team testing to strengthen defenses, emphasizing transparency, responsibility, and concerns about emerging AI‑generated malware.
  • [00:14:09](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=849s) **Responsible Disclosure of Dual‑Use Tools** - The speakers discuss how powerful AI and hacking utilities inevitably get misused, emphasizing the need for responsible release strategies, selective distribution, and proactive defenses, illustrated by the early network‑scanning tool SATAN, which later inspired Nmap.
  • [00:18:42](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=1122s) **Balancing Risks of Third‑Party Breaches** - The speakers argue that despite the dangers of phishing and third‑party compromises, such incidents can expose vulnerabilities and ultimately strengthen security, highlighting the vast attack surface and trust challenges in supply‑chain relationships.
  • [00:22:05](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=1325s) **Social Engineering and Ongoing Training** - The speakers note that even technically proficient developers can be duped, underscoring that no one is immune to social engineering and stressing the necessity of continually updated, practical user education.
  • [00:26:08](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=1568s) **Analyzing the OT Exploit Landscape** - The speakers review OT vendor vulnerabilities, monitor darknet mentions and exploit availability, and debate why threat actors, whether nation‑state or criminal, are increasingly targeting critical infrastructure for disruption rather than solely for data theft.
  • [00:29:58](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=1798s) **Unpatched Industrial Systems Explained** - The speaker outlines how costly downtime, legacy PLC/CNC hardware, and the prioritization of uptime over security create siloed operations that leave critical infrastructure vulnerable to attackers.
  • [00:33:47](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=2027s) **Prioritizing Exploitable Vulnerabilities Over Scores** - The speakers argue that CVSS ratings matter less than real‑world exploit availability and asset relevance, urging teams to patch truly exploitable flaws first.
  • [00:37:56](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=2276s) **HR Hiring Rush Fuels Fraud** - Rapid, remote hiring pressures combined with advanced AI deepfake tools create a perfect storm that attackers exploit, underscoring the need for heightened HR awareness and cybersecurity education.
  • [00:41:46](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=2506s) **Criminal Uses Huntress, Gets Watched** - A cybercriminal installed Huntress EDR on his own machine, unintentionally giving the security team full visibility into his research, tools, and attack preparations, which the team found both amusing and insightful.
  • [00:44:58](https://www.youtube.com/watch?v=dAS4zgMiSuQ&t=2698s) **Account Compromise, MFA, and Humor** - The host recaps a video on account‑compromise tactics (ignoring MFA), draws an analogy to attackers' busy days, thanks guests and viewers, and closes with a light‑hearted "lion vs. Pokémon" hacking question.

Full Transcript
AI isn't Max Headroom anymore. I mean, it's a lot. Deepfakes are really, really good now. It's not Forrest Gump meeting John Kennedy. It's a lot better than it used to be. All that and more on Security Intelligence.

Hello, and welcome to Security Intelligence, IBM's weekly cybersecurity podcast, where we break down the most important stories in the field with the help of our panel of expert practitioners. I'm your host, Matt Kaczynski. Today's stories: a postmortem for Scattered Lapsus$ Hunters. AI-powered PromptLock ransomware is just a proof of concept. One phishing email leads to 20 compromised npm packages. A new IBM X-Force analysis digs into the major threats facing OT and critical infrastructure. Business identity compromise scams give hackers legitimate jobs for illegitimate reasons. Plus the story of an enterprising threat actor who for some reason installed Huntress EDR on their own device. And I hear we have a hot take on CVSS scores that I'm dying to get into. It's a packed episode, so let's introduce our cast of characters.

First up, Michelle Alvarez, manager, X-Force Strategic Threat Analysis. Michelle, thank you for being here today. Thank you for having me. Also, keep an eye on our podcast feed for an in-depth discussion with Michelle on why it costs so dang much to get hacked in America, which should be coming in the not-too-distant future. We also have Sridhar Muppidi, IBM Fellow and CTO, IBM Security. Sridhar, thank you for being here. Thanks, Matt. Looking forward to the discussion. Sridhar also has a long-form interview coming down the pipeline on the complexities of securing AI identities in the enterprise. And Dave Bales of X-Force Incident Command, host of the Not the Situation Room podcast and the only person I've heard make a Dick Dastardly reference in the year 2025. Dave, how are you doing? I'm well.
I'm a little jealous that I don't have a long-form interview coming up. You know, I was about to say, we've got to figure out what yours is going to be, because, I mean, you can't be the only one, you know? Yeah, think about that in the background.

All right, let's dive into our stories. First up: Scattered Lapsus$ Hunters call it quits. Or do they? "The collaboration no one wanted," to quote X-Force's Claire Nunez. The group announced last week that it would be going dark in a message posted to BreachForums. They left open the possibility that more breaches will be attributed to them in the future, especially among airlines, but they said that these are breaches they already did. It's nothing new. They just haven't been discovered yet. They also framed their work as a quote-unquote "war on power" meant to, and these are quotes, I didn't make this up, "humiliate those who humiliated and predate those who predate." Which, that's kind of a new interpretation for me. I don't really see how that's making any sense with what they're doing, but we'll discuss that.

So to start, I want to ask: is this really the end of Scattered Lapsus$ Hunters? And I'm going to start with Dave, because I know your podcast has discussed these guys a few times now, so I'd love to get your take. What do you think? Are they really gone?

Not a chance. There's no chance that they are gone. This is a ruse. This is a look at the right hand while the left hand's doing something else. As I've said before, we've seen threat groups come and go. They've said they were going to leave and then they come back. I can't remember the last one that I saw, but I think there was one. I think 2011, Dave, it looks like. Yeah. And there's been a few. Yeah, yeah. They come out and they say, we're going to go away, and then they don't.
And the fact that they said that there are going to be some more breaches attributed to them tells me that they haven't happened yet, and they will, and that's when they're going to get attributed. I found that very suspicious, too. Michelle, I saw you leaning in to speak.

Well, I was just going to say, sort of anecdotally, I liken it to when my parents said they were retiring, and then six months to a year later, they're working again. It's like, well, we just spent all this money on your retirement party. What do you mean? But yeah, to Dave's point, I'm not going to disagree with him. I think this is it. I don't know if it's authentic, right? Because I've seen other researchers say that it's not and bring up some really valid points as to why it isn't. And we've seen, historically speaking, these groups reform. I don't think they're going away.

Yeah, I think I agree with Michelle and Dave. I think the one thing, Matt, that you mentioned about the message of fighting the power, that seems more like a psychological justification, like a Robin Hood type, right? Being able to go and justify what you're doing. I guess whatever makes you sleep better at night. But I think I agree with Michelle that this is probably a ruse to go and deploy something bigger, better, a different flavor.

Yeah, I'm glad you brought up that war-on-power stuff, because that also just boggled my mind. I mean, what's the point of that? I don't know. It's hard for me to see anything righteous in what they're doing. So I'm just wondering what the point is in framing it like that. Any thoughts on that from the panel? Like, why would they frame it that way?
I spoke this morning with a couple of coworkers about the fact that this is a really easy way for them to try to throw law enforcement off the trail. It doesn't really work. It's like someone getting pulled over and saying, "I didn't do anything," and the officer saying, "Oh, well, I'm sorry, I made a mistake." It doesn't work. So, yeah, the war on power: they want to declare war on those in power. I mean, is that the smartest move for an APT, to announce that to everyone? I don't think so. I think that they're actually probably going to go after smaller targets at first and then kind of ramp up from there. And that's why I don't think they're going to go away. I think they're actually going to be around for a while.

Gotcha. So do we think the main thing here is just, you know, you go quiet to get the heat off of you a little bit, maybe get law enforcement to back down, and then, like you said, Dave, in the meantime, maybe you're poking around at some smaller stuff before you ramp back up. Do we think that's the gist of what's happening here?

No, I think so. I think this is an opportunity to regroup, reset, rethink the strategy. And a lot of things have changed these days, right? In terms of the tools that we're using, the types of attacks that we want to make. Sometimes the return on investment is not there to go and launch a high-powered ransomware attack when you can just simply hack into a system with compromised passwords, right? So with all these things going on, and the rise of gen AI and agents looming around, what do we do? How do we go create a new identity? If I were them, I'm going to step back and rethink, probably keep something small going just for the time being, but then launch something which is more 2025-, 2026-focused. Yeah.
Well, it sounds like I'm going to have to cancel my beautiful memorial, then, for Scattered Lapsus$ Hunters. They've only been around for what, a month or so? I mean, and now they're just disappearing.

You know, that's the other thing too, right? They just popped up like a month ago. They're like, hey, we've got this new ransomware. And then they're just gone, you know? So I guess it was too good to be true, wasn't it? It does sound like this is just a ploy. Or there's just a lot of infighting and they just can't get along. If you have three of these groups coming together, I could see that. I could see some infighting going on, some egos getting in the way, and maybe they're going to break up the band and go their separate ways again. Or they need a new kind of music. There you go. That's the point. A new kind of music. There we go. There we go.

Let's move on to our next story, then. AI-powered ransomware PromptLock turns out to be a proof of concept from NYU. Now, I don't know if you folks remember, but in late August, PromptLock kind of made some waves as, quote-unquote, the first AI-powered ransomware when it was discovered on VirusTotal. I believe the way that it works is it uses an open-weight model from OpenAI to generate malicious scripts on the fly. Last week, researchers from the NYU Tandon School of Engineering clarified that they created it as a proof of concept for what they call "ransomware 3.0." This is a hypothetical malware that uses LLMs to orchestrate the entire attack chain. The researchers write, and this is a quote, "the system performs reconnaissance, payload generation and personalized extortion in a closed-loop attack campaign without human involvement."

Now, about a year ago, I wrote a story for IBM about AI malware.
I wanted to see if it was a big deal, if it was something we should be afraid of. I talked to a bunch of malware engineers at X-Force, and every single one of them said: it's not a thing, don't worry about it, it's all overblown. But now I'm wondering, a year later, is it a thing? Is it something we need to worry about? And I'm going to throw it to Michelle first here. What do you think? Are we finally at a point where AI malware like PromptLock could be an issue? What do you think's going on?

Well, I think the real concern is that now it's more accessible to maybe other would-be hackers, right? So I liken this to when we started seeing exploit kits available for sale. Now you have individuals, bad guys, that wouldn't have otherwise known how to develop an exploit, who can just purchase it. So I think that is the main concern, because in the end, what is the impact? The impact is going to be the same to the organization.

Sridhar, anything to add there?

Yeah, I think, from a research perspective, can it be done? Absolutely. We've done it, right? We've done it in IBM. I look at it as very similar to automotive manufacturers testing their cars, crashing their cars, with a view to improving them. So I absolutely endorse doing something like that. But do it responsibly. Do it with a view to showing how you're improving the defense, as opposed to highlighting the fact that you can go and create an attack completely autonomously. We do it all the time, the question of red teaming and blue teaming, and we learn from the red agent to be able to go and do a better job of defense. So I agree with the research. I think it's absolutely possible.
But do it in a manner of responsible AI, and do it with the defense in focus, with transparency.

I'm glad you brought up that ethical angle, and I want to dig into that in a minute. But first, Dave, I want to get your thoughts. Do you think AI ransomware, or AI malware in general, is something we need to start worrying about right now?

I just wrote one this morning. No, I think so. With AI becoming more prevalent and becoming smarter, with smarter people programming it, I mean, let's face it, the AI is only as good as the people programming it, and we've got some pretty smart people. Like Sridhar said, we've already done it here at IBM. The difference is that we didn't put it out to the public.

Exactly. And that's where I was going to go. As soon as you said that, I knew: oh, good, we get to talk about the fact that someone released the proof of concept in order to help us. Yeah, I want to ask about that. Right? I mean, look, I'm not a researcher, I'm not in the lab making this kind of stuff, but it does feel a little irresponsible to me to be releasing this to the public. I'm just wondering, is this common practice? First off, do people usually release this proof-of-concept stuff, or do you usually keep it kind of hidden? Sridhar?

It's okay to release, but with the appropriate disclosures and how to use it. For example, if you release it with "make sure you test your applications with this red agent so that you can do a better job of defense," I think it's okay. It's like any other technology: the internet, good and bad; nuclear power, good and bad. Similarly, there's good and bad over here. But doing it responsibly will hopefully keep us ahead of the malicious actors. Gotcha.
And so what does doing it responsibly look like? Michelle, I saw you starting to speak there, so I'll let you go.

Yeah, absolutely. I mean, I think what is going on here is that there's a race to beat the attackers at their own game, right? So all the good guys, all the researchers, are trying to see, okay, how can this be done, and therefore, this is how we defend against it. And so, to Sridhar's point, yes, there are ways to go about it more responsibly: contacting perhaps the organizations or the entities that would need to know about this first. So maybe the AV vendors. Let's start there. We have this proof-of-concept ransomware; we want to make sure that you can detect it.

Gotcha. Dave, any thoughts on responsibility here?

The fact that they released it publicly is where I have the issue, because the researchers aren't the only ones looking at these proofs of concept. The bad guys are getting these as well. And if we're going to talk about responsible disclosure for these proofs of concept, we need to keep it within the cyber community and not let it get out to the public. That, to me, is more important than releasing a proof of concept to the public to show, hey, we can do this. Give it to the cyber researchers who know what to do with it. Keep it that way. So while I disagree that it was responsible for them to release it, I do agree that releasing it to the cyber community is a smart thing to do.

Yeah, this reminds me of the last episode, when we were talking about Hex-Strike AI, right? Which is that framework that's supposed to be used for automating penetration testing and orchestrating all your agentic AI, and a bunch of hackers just kind of picked it up immediately. Right.
And this came up again and again and again. And I think it was Nick Bradley who said that, you know, if you're worried about hackers misusing your tools, you would never move forward. Any tool you develop, someone's going to be able to misuse it. And so it's a question of being responsible about it, whether that means disclosing it to people before you do it, or, like Dave said, maybe you only give it to a very select number of people.

Yeah, it may be difficult, though. If you recall, Dave, you probably remember this: SATAN, right? Way back in, what, the early 90s? I don't remember that. Jeff Crume mentioned this on the last episode. I think he said it was the early 2000s, I want to say. Yeah, probably even before that. But yeah, I mean, it was a tool to go and do network scanning with a view to finding vulnerabilities. All of a sudden, that became a mechanism to find holes and publish TTPs. So I think tools like this will get out of hand, good or bad. For one reason or the other, they'll get into the wrong hands. We have to go figure out how to brace ourselves against such things. That's where I would probably focus a little bit as well.

And wasn't SATAN like the impetus for Nmap? Yes, it spawned Nmap out of that. So there's a good tool that came from it, but there was also a lot of bad that came with it. Exactly.

Moving on, then, to our next story: a single phishing attack against a single developer leads to 20 compromised packages on npm. I think people have probably heard of this, because it's one of the biggest hacks of npm ever.
I believe a prominent developer was hit by an AI-assisted phishing attack that stole his npm credentials, allowing threat actors to compromise 20 packages, which, according to The Hacker News, collectively attract over 2 billion weekly downloads. Which, I don't know, that seems like a lot of downloads, but it's a lot. The malware buried in these packages intercepts cryptocurrency transactions and reroutes them to attacker-controlled wallets. And the attack used a combination of clean email infrastructure and AI-generated content to get past both technical defenses and the developer's own psychological warning signs. This looked like a legitimate email from npm about resetting 2FA credentials.

So I want to start by asking what this situation says about the current state of software supply chain security. And I want to refer to you first, Sridhar, because we've talked a little bit in the past about supply chain, specifically regarding AI stuff. But I just want to hear your thoughts. What does this say about the state of software supply chain security today?

I think it's not where it should be. As simple as that, right? This actually demonstrates a single point of failure. You've got such critical software, used by millions of individuals and millions of organizations, maintained by somebody who's probably just doing it out of the goodness of their heart as a part-time thing. And as a result, you know, bad things have happened. Not focusing on how it happened, but the fact that it happened, right? And we should think about, you know, how we go and mitigate against that. This reminds me, actually, as bad as it may sound, and people will probably disagree, sometimes these things are necessary, right? Sometimes these things are necessary.
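The lure in this campaign worked because it looked like legitimate npm email. One basic technical control that complements the training discussed later is an exact-match check on the sender's domain. The sketch below is illustrative only: the allowlist and the lookalike address are assumptions for demonstration, not indicators from the actual campaign.

```python
# Sketch: flag sender domains that merely resemble a trusted sender.
# TRUSTED and the example addresses are illustrative assumptions.
from email.utils import parseaddr

TRUSTED = {"npmjs.com", "github.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain portion of an email From: header."""
    _, addr = parseaddr(from_header)
    return addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

def is_suspicious(from_header: str) -> bool:
    """True when the sender's domain is not an exact allowlist match.
    Lookalike domains fail the exact-match test even though they
    contain the trusted brand name."""
    return sender_domain(from_header) not in TRUSTED

print(is_suspicious("npm Support <security@npmjs.com>"))   # False
print(is_suspicious("npm Support <security@npmjs.help>"))  # True
```

Exact matching is deliberately strict: fuzzy "looks close enough" logic is exactly what attackers exploit with lookalike domains.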
Very early on, during my undergrad, I learned about, I think it was Tylenol, if I remember right: somebody tampered with Tylenol tablets and harmed a few people, and that resulted in the tamper-proof cap. To me, things like these are unfortunately necessary to get us thinking about a secure software supply chain, and about the level of rigor, verification, and transparency needed. Very similar to how we think about food safety, right? From the time that, hey, this avocado was grown on this farm, all the way through this manufacturer to my table. That level of software bill of materials transparency is probably going to help us in the future.

Dave, go ahead. I want to see if you disagree or not.

No, actually, I'm not going to disagree with that. I think it's right. I'll give you an analogy. We talk about dumb rules. Well, the rules came about because someone did it. And sometimes we need these phishing attempts, these events, to further security. Normally I would just play devil's advocate here and disagree just for fun, but seriously, I think it's actually a good thing when something bad happens but a good result comes from it. So I can't disagree at all.

It's like the little warning on the silica gel packet, right, that says "do not eat." Somebody had to eat it. Someone ate it.

Michelle, any thoughts on your end on this?

Yeah, I mean, I think the software compromise is just another flavor of third-party compromise, right? And this is a huge issue for organizations, because, okay, I've got my perimeter, right? And I'm doing what I'm supposed to be doing. But, oh, that third party that I do business with, they're not secure, right? And how many of those third parties are you working with? You just have this massive attack surface. And then you couple that with the social engineering.
Like, that is the biggest issue right now: the massive amount of trust between companies. When you have third-party companies working together, you have to trust who you're working with.

Yes. And so what do you do about that? Right? Like you said, there's this massive attack surface, and you've got to have all this trust. But, Sridhar, as you said, we have to find ways to secure that as much as we can. And you mentioned a little bit of a software-bill-of-materials kind of thing. I wondered if you wanted to expand a little on that concept.

Right. I think it's very similar to what you said: "do not eat," right? It's being able to go and list exactly the genesis of what software package is coming from where, having the information in there, who touched it, to that level of transparency. So that if some changes are happening and they're anomalous in nature, we understand the blast radius. The transparency is what I want to highlight out of that software bill of materials, rather than the how part. It's not so much that it was one phishing email.

Yeah. And the shocker for me is that they're going after cryptocurrency. I've just never heard of that before. Cryptocurrency scams, who does that? What is that?

But yeah, it's not the how; it's the product that it comes from. So I think just having the software bill of materials is not sufficient. Being able to have continuous verification of the package, being able to do anomaly detection, coupled with the transparency of the software bill of materials, will help the overall equation. There's no single silver bullet, per se.

And Michelle, you had mentioned the social engineering angle of this thing. Right.
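The "continuous verification" idea described above can be made concrete. npm lockfiles pin each package with a Subresource Integrity string of the form `sha512-<base64 digest>`; recomputing that value for a downloaded artifact and comparing it with the pinned one catches silent tampering. A minimal sketch, with invented tarball bytes standing in for a real package:

```python
# Sketch: recompute an npm-style integrity string and compare it with
# the value a lockfile would have pinned. The "tarball" is invented.
import base64
import hashlib

def npm_integrity(data: bytes) -> str:
    """Return an npm-style SRI string ("sha512-" + base64 digest)."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode()

def verify(data: bytes, pinned: str) -> bool:
    """True only when the artifact matches the pinned integrity value."""
    return npm_integrity(data) == pinned

original = b"fake tarball contents"
pinned = npm_integrity(original)        # what the lockfile would record
print(verify(original, pinned))         # True: unmodified package passes
print(verify(original + b"!", pinned))  # False: any tampering fails
```

The same pattern generalizes to SBOM entries: record a hash at build time, re-verify at install time, and treat any mismatch as an anomaly worth investigating.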
And I think that's another interesting lesson from this whole situation: you have somebody who's a pretty savvy developer, someone who knows their way around this stuff, and if they get the right message at the right time, or the wrong message at the wrong time, depending on how you define that, they can get hit. What does this say about the state of social engineering today, that even somebody like this can get duped? Michelle, any thoughts on that?

Yeah, I mean, we're all human, and I think if anyone in security says that they've never been duped, I would say that they're lying. Because I think we've all fallen for something. Dave's looking around, but I'm sure. Think hard, think back. I'm not denying it. I have. Right, right. And IBM has gotten me twice. And that's great, right? Because that's probably a testament to their training. So obviously we always tout user education and training, and it seems very cliche, but it's true and it works. And it has to be adapted for the times. What are the attackers actually doing? What are they leveraging? How are they getting in? That needs to be embedded into the training that an organization is doing. We're under so much pressure, right? And we have so many things going on; it's very easy to just act quickly without taking a minute. And I think what we have to do is just start every day with: okay, this is my baseline, this is what I'm going to do when I get an email, when I see a message, when I get a phone call. And that's hard to do, because we have so many things going on. Right.

We have to assume that these things are going to happen. Right.
We cannot assume that nobody's going to fall for phishing. You have to assume that somebody's lost their password or iPad or whatever it is. So, to Michelle's point, yes, I have lost a laptop.

Is Sridhar going to be the only one brave enough to admit it? Is anybody else going to cop to it? No, I'm kidding. Go ahead.

Getting back on point: yeah, I think you have to assume that some of these things are going to be compromised, right? So the question is, how do I put controls around it so that you can live with that assumption? And that's what I think. I mean, when you asked me the question, I was in two minds about answering it by saying, you know what, I'm glad it happened. But I stopped myself, because sometimes it is time for something like this to happen, so that it becomes a wake-up call.

I think it's actually okay to say that, though, that you're glad it happened, because it's going to teach someone something. Yeah. It honestly reminds me a lot of our previous conversation here about the PromptLock stuff, in the sense that sometimes someone's got to do something a little bad for something good to come out of it. You know what I mean? Yeah.

We've got a new IBM X-Force analysis that finds that OT and critical infrastructure face serious threats. Quote: "many ransomware, advanced persistent threat and cybercrime groups are going beyond data theft, aiming for physical disruption and even sabotage." This is from David McMillan, who is the author of the blog post breaking down this analysis. Now, by fusing frontline threat intel with some 2025 Cost of a Data Breach data, IBM X-Force identified some concerning trends in OT and critical infrastructure, including a significant number of serious vulnerabilities in the field.
Of the 670 vulnerabilities disclosed in the first half of 2025 that could impact operational technology, nearly half have a CVSS severity rating of critical or high. So I want to start with you, Michelle. Can you give us a little context about this analysis, a little background on what all this is?

It basically fuses some data that we got from the Cost of a Data Breach report — a survey conducted by Ponemon Institute and analyzed by IBM — that showed that 15% of organizations actually experienced an OT incident. Then we looked at our own data: like you mentioned, over 600 vulnerabilities impacting OT technology providers or vendors. And then subsequently we looked at, okay, what are we seeing mentioned in Telegram channels or on the dark web or in other forums, to see what some of the top vulnerabilities being discussed out there are, and therefore which ones may be of interest to attackers as well. And beyond that: how many of those CVEs have exploits that are publicly available? So there's obviously interest in targeting OT technology, but beyond that it's more about the industries that house the OT technology — OT environments — and that's why they're susceptible.

And why are so many attackers moving beyond data theft right now? Why do you think we see people targeting OT and critical infrastructure for disruption specifically? Any thoughts on that, Michelle?

So I think it's going to come down to the threat actor and the motivations — whether they're a nation-state-sponsored threat actor or a cybercriminal group. I don't know that we're necessarily seeing a move away from data theft; I think we have reports to the contrary. It's really going to come down to the threat actor group and what their motivations are.
Gotcha. Sridhar, any thoughts to add?

Right, Matt. I think it's easier, right? The technology is not necessarily state of the art, and the policies and procedures are not state of the art. So as a result, if you have a castle with a moat, why can't I bring an air attack? It's easier to launch. You can even do it with a Cessna and still make a devastating impact. That's the way I look at it: it's an easy thing to do. Why invest a lot in a sophisticated attack when I can do something simple?

Like a water balloon.

Yeah, exactly right. Or the other part — going back to Michelle's motivation point — is that it's a lot more appealing and easier to extract something out of it. From an attacker perspective, you're dealing with Colonial Pipeline, if you remember that, or a power grid. It's easy to go and get what the attackers want, whether it's ransomware or whatever it is, because the victim will pay to not disrupt millions of citizens' lives.

Dave, what are your thoughts on it?

I think it comes down to money. When you start messing with water supplies, the power grid, even the trucking industry, people are more willing to pay to get those systems back online as opposed to information. Now that we've been dealing with information theft for so long, a lot of companies are starting to fix their security policies: they make their backups, they take them off site, and they make sure they have more than one. It's really kind of hard to do that with a water supply or a trucking industry or a power grid. If it's down, it's down. If you want to bring a country to its knees, shut down its trucking supply and turn off its water. You can leave the power up; it'll be fine.
But if you can't get food and you can't drink water —

Right, you can't make a backup of your water supply, like you said.

Right, exactly. You've got one, and that's it.

And a lot of this OT technology isn't patched properly. Why is that? Is it just a cost-benefit analysis going on, where they're like, ah, we don't want to take it offline to patch? What's the deal there?

I think it's a combination. If you look at the attack entry points — like a PLC controller or a logic card on a CNC machine or something, which operates every day — those are not necessarily updated on a regular basis. It takes downtime to update those. So as a result they stay there for a long time, and the vulnerabilities there are something that attackers exploit. So part of it is the technology, where the technology is. You don't get the latest and greatest software on a PLC controller or a relay; most of that investment goes into something like training LLM models with GPUs. So that's one reason the technology is a little bit dated, number one. Number two, I think, is that security has always been kind of an afterthought for critical infrastructure. Keeping the uptime high has always been the highest priority, and operations and security don't necessarily talk to each other. But when you start talking about safety, when you talk about downtime, that's when there's an opportunity for these entities to talk to each other. Otherwise they're in silos, and attackers take advantage of those silos.

Now, I want to go back to that figure: 670 vulnerabilities disclosed in the first half of 2025 that could affect operational technology.
49% of them were either critical or high in terms of CVSS rating. This sounds pretty concerning, and I want to throw it to Dave, because I know Dave has thoughts on CVSS — both in terms of this number and CVSS in general. I want to hear what you've got to say.

I'll tackle the CVSS thing really quick: I think it's completely broken. There are so many vulnerabilities now that are unscored and show a score of 9.8, which is the default, so everything that's unscored shows up as critical. And you can't take that number and perform any kind of analysis with it, because you don't actually know that it's a critical vulnerability. I think it needs to be redone. There's been talk about redoing the CVSS model for years now, and it hasn't happened. I'm just not a fan of using a CVSS score to rate a vulnerability. You need to know what the vulnerability is, how it works, how it affects everything — and you can't do that with a number. You have to actually see it. And the fact that 670... come on.

I agree with Dave on this. We need to think about how susceptible a vulnerability is to being exploited, not about the number. You can have a 9.9 or a 9.8 and nobody's going to touch it — why would I? Instead I may go after something lower that's easy to exploit. We should think about something like a weaponization score: how easy it is to leverage or exploit that vulnerability, and patch based on that. So 600 doesn't mean anything. Out of the 600, maybe 20 are easily exploitable. Patch those first.

Michelle, do you have any thoughts on whether a CVSS score is worth it, if it's broken? What do you got?
I don't know if I have any thoughts specifically on the CVSS scoring, but I would say that yes, to Sridhar's point, if it's exploitable, or if there's a public exploit available — a proof-of-concept exploit, or it's already being exploited in the wild and we see reports of threat actors leveraging it — that should probably raise it. And obviously, if you don't have a particular technology in your environment, you're not going to worry about it. So maybe of those there's a very small percentage that would be impacting you. So understanding what your assets are, first and foremost — because that's already a problem, even though it's part of basic security: understanding your assets and asset management, and then knowing which of those, if it goes down, is going to have this ripple effect.

I would issue a challenge to the viewers and the listeners: go do some research and find out how many critical vulnerabilities are publicly exploited, and how many of the mediums are actually exploited. I think you'll find the numbers surprising, in that the mediums are more exploited than the criticals, because there's just so much high-profile visibility on the criticals. Like Sridhar said, those mediums can be way more dangerous than the criticals.

I like it — this is our first listener-viewer challenge. I expect some answers in the YouTube comments. Okay. Business identity compromise is the hot new social engineering scam. Or at least it's one of the hot new social engineering scams.
In business identity compromise, also known as BIC, or simply hiring fraud, attackers pose as legitimate workers applying for remote roles, often using AI tools to generate resumes, headshots, even voice and video — and, of course, they do the work they need to do to keep the job. These insider threats then use their access to sensitive company systems to wreak havoc, or they just draw a paycheck and use it to fund illegitimate activities. So I want to start by asking: what's up with the rise of BIC? Why do you think this kind of thing is taking off right now? Dave, I want to start with you. You got any thoughts on why BIC is getting popular right now?

I think it started getting popular when work from home became a really big thing. A lot of people didn't have an office to go into, so it was really hard to see those threat actors that actually got into these companies and focus on what they were doing. In an office, you see someone who looks suspicious over there — what's he doing over there? When they're in their homes, there's nothing to look at except their work. And if they're doing good work at their job, it's a little less suspicious if they're doing something off to the side. So I think a lot of it has to do with the ease of work from home and not having to actually be physically present at a job. Hiring someone based off an AI interview — probably pretty easy to do.

That's how I got this job.

No comment. I was going to say, how do we know one of you isn't AI right now?

I think there's a perfect storm here. I think Dave is right. One is, of course, work from home — the remote workforce.
Second is the fact that we have an increased amount of AI that we're relying on quite a bit — being able to scan resumes, being able to validate a bunch of things. That's good and bad. And the third piece is that we don't have the number of HR individuals doing the manual process. HR is getting a lot of pressure: okay, we need a hundred jobs, we need a thousand jobs, we need X number of jobs in the next three years — do it yesterday. So it's a combination of remote work, the tools, and the fact that you've got accelerated hiring practices. To me, that is coming together into a perfect storm for fraudsters to say, hey, why rob a bank when I can get hired by one?

I was just going to say that AI isn't Max Headroom anymore. Deepfakes are really, really good now. It's not Forrest Gump meeting John Kennedy. It's a lot better than it used to be.

We got a Max Headroom reference this time. You know I'm going to count on you every time now — I'm counting on you for the obscure stuff. All right, but Michelle, go ahead.

Yeah, I mean, I think it's a fraud scam that we didn't have before. We're all working remotely even more than we were before. Within IBM we're used to being a global organization, but a lot of other organizations have maybe started to expand their boundaries beyond their regional location. And so now we're hiring on, but we're not anticipating this type of fraud — where did this come from? You're not going to know unless you're in the cybersecurity industry. So I think it's like anything else: it's awareness, it's end-user education. Hey, HR staff, this is what's coming, this is what's happening.
We need to pay attention to this kind of stuff. So it's just a learning curve.

Yeah. I wanted to ask what organizations can maybe do to start spotting more of this, and it sounds like part of it is education. I guess you could also make everybody work in an office — it's pretty hard to be a scammer if you do that. Any other thoughts, though? Sridhar, what do you got to say?

I think it's people, process, and technology, Michelle, just to add on to that. One is, of course, the technology. We do CAPTCHAs for trivial things, but we don't do liveness tests for ghost employment. There's a lot of technology these days to check for liveness — scan your driver's license, go here, go there. Sorry to deviate, but we had plumbing fixtures with a lifetime warranty, and when we called the support desk, they said: oh, go up, take a picture; go down, take a picture; send this. They want to make sure it's in my home. We're doing that for plumbing fixtures — why can't we do it for employment? Second, of course, to your point, Michelle: education is important, absolutely. And the last piece is continuing to monitor. I'm not saying to breach the privacy aspects and things like that, but, like Dave said, "hey, that person looks suspicious" — what is the equivalent of that in a remote work environment? I think that's something we have to think about as a mechanism to detect anomalies and stop it.

I think Sridhar hit the nail on the head.
It's like having a kidnapping victim holding up a newspaper from the day: yeah, this is me, I promise.

Let's move along to our last story, which, honestly, is something a little lighter — I found it amusing. A cybercriminal installed Huntress EDR on their device for some reason, giving security pros a front-row seat to their activity. So it's exactly what it sounds like: a cybercriminal testing out new security tools installed Huntress EDR on their own device. Now, we know that cybercriminals, as we just said, play with a lot of these legitimate security tools, but the Huntress team noticed the install, and it gave them the opportunity to look into what the actor was doing on their device. Once they confirmed it was a malicious actor, they started poking around, checking out their activity, including digging into previous stuff: seeing what the attacker was researching, the attack frameworks they were looking into, phishing messages they were crafting, dark web markets they were visiting, all kinds of stuff. Like I said, this was just extremely funny to me — I thought it was hilarious — and I wanted to see what the rest of you thought. Dave, I see you cracking a smile. You got any thoughts on this one?

I want to know how long they let him do this before they went in there and started looking at things. It used to be that one of the tactics was: let him poke around for a little bit to see what he's going to do, and the second he gets close to something he's not supposed to, shut him down. Well, it seems to me that the Huntress EDR people just said, huh, he's got our tool — you know what that means? We can go poke around. It's like the opposite.
We can go poke around in their environment and, like you said, see everything that they did. This was one of the dumbest things I've ever heard of a threat actor doing. Just one of the dumbest. I can't state that firmly enough. Was it on his production machine, or was it on a test machine honeypot? I don't know. But not the smartest cookie in the bag.

Well, I think it actually grounds the whole thing, right — the fact that even attackers can make mistakes. Which means we actually have a chance, Dave. The defenders are always chasing because they're not talking to each other, whereas the attackers are doing a very good job of working with each other. So mistakes like this not only show the human side of it — that at the end of the day, they are human, they make mistakes — but at the same time, what can we take away from it? It gives us a front-row seat, or a backstage pass, to how they are orchestrating this whole thing, and then how we should think about our defenses. That's what I was taking away.

I hope Huntress sent them a thank-you card. A nice fruit basket.

Or better — a teddy bear in a fruit basket. Michelle, any thoughts on your end?

Yeah, I immediately thought about some research that two analysts — I'll have to give them credit here, Allison Wikoff and Richard Emerson — did. They stumbled across open servers of the threat actor ITG18, which I think has TTP overlaps with Charming Kitten and a few other groups. Basically, videos showing how to train other attackers in their group. The videos showed things like: this is how you compromise an account — and oh, by the way, if it has MFA, disregard it, don't try to compromise this one.
So it was a very interesting inside look. I immediately thought of that when I saw this article. But to go back to my analogy before: we all have busy days. Attackers have busy days too. They're going to do things like this. These operational errors are going to happen. It's hard out there.

Big error. Large error. All right, that's all the time we have for today. Thank you, Michelle and Sridhar and Dave, for joining us. Thank you to our viewers and listeners for tuning in. Special thanks to viewer 2LeftArms, who posted the hard-hitting question on our last episode: would you rather be hacked by a billion lions or one of every Pokemon? The answer there is pretty obvious if you ask me — it's the lions. They've got real big paws; they can't type on the computer. You're set there. Make sure to subscribe to Security Intelligence wherever podcasts are found, and everyone, please stay safe out there.