AI Deepfakes, Ransomware, OT Threats
Key Points
- The episode opens with a warning that AI‑generated deepfakes have become dramatically more realistic, signaling a new era of threat‑making beyond earlier “Forrest Gump meets JFK” analogies.
- The show’s roundup covers a post‑mortem on the Scattered Lapsus$ Hunters group, a proof‑of‑concept AI‑driven “PromptLock” ransomware, a single phishing email that compromised 20 npm packages, and a fresh IBM X‑Force report on the biggest threats to OT and critical‑infrastructure systems.
- Highlights include a discussion of business‑identity compromise scams that give hackers apparently legitimate jobs, an odd case where an advanced threat actor installed Huntress EDR on their own machine, and a controversial hot‑take on the reliability of CVSS scoring.
- Host Matt Kaczynski introduces the expert panel—Michelle Alvarez, Sridhar Muppidi, and Dave Bales—and teases upcoming long‑form interviews about the true cost of cyber‑attacks in the U.S. and the challenges of securing AI identities in enterprises.
Sections
- [00:00:00] Untitled Section
- [00:03:12] Threat Groups Feign Retirement - Participants discuss how cyber‑threat actors often claim they'll disappear, only to reappear, questioning the authenticity of such claims.
- [00:06:31] Ransomware Groups Reevaluating Tactics - Speaker discusses shifting from high‑profile ransomware to low‑effort hacks, the influence of AI tools, and internal infighting that causes fleeting criminal alliances.
- [00:10:18] Responsible AI Red‑Team Ethics - Panelists debate leveraging AI for red‑team testing to strengthen defenses, emphasizing transparency, responsibility, and concerns about emerging AI‑generated malware.
- [00:14:09] Responsible Disclosure of Dual‑Use Tools - The speakers discuss how powerful AI and hacking utilities inevitably get misused, emphasizing the need for responsible release strategies, selective distribution, and proactive defenses, illustrated by the early network‑scanning tool SATAN that later inspired Nmap.
- [00:18:42] Balancing Risks of Third-Party Breaches - The speakers argue that despite the dangers of phishing and third‑party compromises, such incidents can expose vulnerabilities and ultimately strengthen security, highlighting the vast attack surface and trust challenges in supply‑chain relationships.
- [00:22:05] Social Engineering and Ongoing Training - The speakers note that even technically proficient developers can be duped, underscoring that no one is immune to social engineering and stressing the necessity of continually updated, practical user education to reinforce safe habits.
- [00:26:08] Analyzing OT Exploit Landscape - The speakers review OT vendor vulnerabilities, monitor darknet mentions and exploit availability, and debate why threat actors—whether nation‑state or criminal—are increasingly targeting critical infrastructure for disruption rather than solely for data theft.
- [00:29:58] Unpatched Industrial Systems Explained - The speaker outlines how costly downtime, legacy PLC/CNC hardware, and the prioritization of uptime over security create siloed operations that leave critical infrastructure vulnerable to attackers.
- [00:33:47] Prioritizing Exploitable Vulnerabilities Over Scores - The speakers argue that CVSS ratings matter less than real‑world exploit availability and asset relevance, urging teams to patch truly exploitable flaws first.
- [00:37:56] HR Hiring Rush Fuels Fraud - Rapid, remote hiring pressures combined with advanced AI deep‑fake tools create a perfect storm that attackers exploit, underscoring the need for heightened HR awareness and cybersecurity education.
- [00:41:46] Criminal Uses Huntress, Gets Watched - A cybercriminal installed Huntress EDR on his own machine, unintentionally giving the security team full visibility into his research, tools, and attack preparations, which the team found both amusing and insightful.
- [00:44:58] Account Compromise, MFA, and Humor - The host recaps a video on account‑compromise tactics (ignoring MFA), draws an analogy to attackers’ busy days, thanks guests and viewers, and closes with a light‑hearted “lion vs. Pokémon” hacking question.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=dAS4zgMiSuQ](https://www.youtube.com/watch?v=dAS4zgMiSuQ)
**Duration:** 00:46:03
AI isn't Max Headroom anymore. I mean, it's. It's a
lot. Deep fakes are really, really good now. It's not
Forrest Gump meeting John Kennedy. It's a lot better than
it used to be. All that and more on security
intelligence. Hello, and welcome to Security Intelligence, IBM's weekly cyber
security podcast, where we break down the most important stories
in the field with the help of our panel of
expert practitioners. I'm your host, Matt Kaczynski. Today's stories, a
postmortem for Scattered Lapsus$ Hunters. AI-powered PromptLock ransomware
is just a proof of concept. One phishing email leads
to 20 compromised npm packages. A new IBM X Force
Analysis digs into the major threats facing OT and critical
infrastructure. Business identity compromise scams give hackers legitimate jobs for
illegitimate reasons. Plus the story of an enterprising threat actor
who for some reason installed Huntress EDR on their own
device. And I hear we have a hot take on
CVSS scores that I'm dying to get into. It's a
packed episode, so let's introduce our cast of characters. First
up, Michelle Alvarez, manager, X Force Strategic Threat Analysis. Michelle,
thank you for being here today. Thank you for having
me. Also, keep an eye on our podcast feed for
an in depth discussion with Michelle on why it costs
so dang much to get hacked in America, which should
be coming in the not too distant future. We also
have Sridhar Muppidi, IBM Fellow and CTO, IBM Security. Sridhar, thank you for being here. Thanks, Matt. Looking forward to the
discussion. Sridhar also has a long form interview coming down
the pipeline. His is on the complexities of securing AI identities
in the enterprise. And Dave Bales of X-Force Incident Command, host of the Not the Situation Room podcast and the only person I've heard make a Dick Dastardly reference in the year 2025. Dave, how you doing? I'm
well. I'm a little jealous that I don't have a
long form interview coming up. You know, I was about
to say we got to figure out what yours is
going to be, because, I mean, you can't be the
only one, you know? Yeah, think about that in the
background. All right, let's dive into our stories. First up,
Scattered Lapsus$ Hunters call it quits. Or do they? "The collaboration no one wanted," to quote X-Force's Claire Nunez, announced last week that it would be going dark in a message posted to BreachForums. They left open
the possibility that more breaches will be attributed to them
in the future, especially among airlines. But they said that
these are breaches. They already did. It's nothing new. They
just haven't been discovered yet. They also framed their work
as a, quote, unquote, "war on power" meant to, and these are quotes, I didn't make this up, "humiliate those who humiliated and predate those who predate." Which. That's kind
of a new interpretation for me. I don't really see
how that's making any sense with what they're doing, but
we'll discuss that. So to start, I want to ask,
is this really the end of Scattered Lapsis Hunters? And
I'm going to start with Dave, because I know your
podcast has discussed these guys a few times now, so
I'd love to get your take. What do you think?
Are they really gone? Not a chance. There's no chance
that they are gone. This is a ruse. This is
a look at the right hand while the left hand's
doing something else. As I've said before, we've seen threat
groups come and go. They've said they were going to
leave and then they come back. I can't remember the
last one that I saw, but I think there was
one. I think 2011, Dave, looks like. Yeah. And there's
been a few. Yeah, yeah. They come out and they
say we're going to go away, and then they don't.
And the fact that they said that there's going to
be some more breaches attributed to them tells me that
they haven't happened yet and they will and that's when
they're going to get attributed. I found that very suspicious,
too. Michelle, I saw you leaning in to speak. Well,
I was just going to sort of anecdotally, I liken
it to when my parents said they were retiring and
then six months to a year later, they're. They're working
again. It's like, well, we. We just spent all this
money on your retirement party. What do you mean? But
yeah, to, to Dave's point, I'm not going to disagree
with him. I don't think this is it. I don't know
if it's authentic. Right. Because I've seen other researchers say
that it's not and bring up some really valid points
as to why it isn't. And we've seen, historically speaking,
these groups either, you know, reform. I don't think they're
going away. Yeah, I think I agree with what Michelle
and Dave. Right. I think the one thing, Matt, that
you mentioned about the. The message of fighting the power.
Right. That seems more like a psychological justification, like Robin
Hood type Right. Being able to go and justify what
you're doing. I guess whatever makes you sleep better at
night. But I think I agree with Michelle that this
is probably a ruse to go and deploy something bigger,
better, different flavor. Yeah, I'm glad you brought up that
war on power stuff because that also just, it boggled
my mind. I mean, what's the point of that? I
don't know. It's hard for me to see anything like
righteous in what they're doing. So I'm just wondering what
the point is in framing it like that. Any thoughts
on that from the panel? Like why, why they would
frame it that way? I spoke this morning with a
couple of co workers about the fact that this is
a really easy way for them to kind of let the law enforcement trail go cold. It doesn't really work. It's
like someone getting pulled over and saying, I didn't do
anything. And the, you know, the officer saying, oh, well,
I'm sorry, I made a mistake. It doesn't work. So,
yeah, the war on power, they want to declare war
on those in power. I mean, is that the smartest
move for an APT to announce to everyone? I don't
think so. I think that they're actually probably going to
go after smaller targets at first and then kind of
ramp up from there. And that's why I don't think
they're going to go away. I think they're actually going
to be around for a while. Gotcha. So do we
think the kind of main thing here is just, you
know, you go quiet to get the heat off of
you a little bit, maybe get law enforcement to back
down and then like you said, in the meantime, Dave,
maybe you're poking around at some smaller stuff before you
ramp back up. Do we think that's the kind of
the gist of what's happening here? No, I think so.
I think this is an opportunity to regroup, reset, rethink
the strategy. And a lot of things have changed these
days, right. In terms of the tools that we're using,
the type of attacks that we want to make, sometimes
the return on investment is not there to go and
launch a high powered ransomware attack when you can just
simply hack into a system with compromised passwords. Right. So
with all these things going on and the side of
Genai and agents. Right. Looming around, what do we do?
How do we go create a new identity? If I
were them, I'm going to step back and rethink, probably
keep something small going just for the time being, but
then launch something which is more 2025, 2026 focused. Yeah. Well, it sounds like I'm going to have
to cancel my beautiful memorial then for Scattered Lapsus$ Hunters. They've only been around for what, a month or so?
I mean, and now they're just disappearing. You know, that's
the other thing too, right? They just popped up like
a month ago. They're like, hey, we got this new
ransomware. And then they're just gone, you know? So I
guess it was too good to be true, wasn't it?
It does sound like there, this is just a ploy.
This is, this is not. Or there's just a lot
of infighting and they just can't get along. If you
have three of these groups coming together, I could see
that. I could see some infighting going on, some, you
know, egos getting in the way and maybe they're going
to break up the band and go their separate ways
again. Or they need a new kind of music. There
you go. That's the point. New kind of music. There
we go. There we go. Let's move on to our
next story then. AI-powered ransomware PromptLock turns out to be a proof of concept from NYU. Now, I don't know if you folks remember, but in late August, PromptLock kind of made some waves as, quote, unquote, the first AI-powered ransomware when it was discovered on VirusTotal. I
believe the way that it works is it uses an
open weight model from OpenAI to generate malicious scripts on
the fly. Last week, researchers from the NYU Tanden School
of Engineering clarified that they created it as a proof
of concept for what they call ransomware 3.0. This is
a hypothetical malware that uses LLMs to orchestrate the entire
attack chain, the researchers write. And this is a quote,
the system performs reconnaissance, payload generation and personalized extortion in
a closed loop attack campaign without human involvement. Now, about
a year ago, I wrote a story for IBM about
AI malware. I wanted to see if it was a
big deal, if it was something we should be
afraid of. I, I talked to a bunch of malware
engineers at X Force and every single one of them
said, it's not a thing, don't worry about it. It's
all overblown. But now I'm wondering a year later, is
it a thing? Is it, is it something we need
to worry about? And I want to start. I'm going
to throw it to Michelle first here. I want, I
think. What do you think? Are we finally at a
point where AI malware, like PromptLock, could be an issue?
What do you think's going on? Well, I think the
real concern is that now it's more accessible to maybe
other would-be hackers. Right. So I think this
is likened to when we started seeing exploit kits available
for sale. Now you have individuals, bad guys that wouldn't
have otherwise known how to develop an exploit, can just
purchase it. So I think that is the main concern
because in the end, what is the impact? The impact
is going to be the same to the organization. Sridhar,
anything to add there? Yeah, I think, I mean from
a research perspective, can it be done? Absolutely. We've done
it. Right. We've done it in IBM. I think I
look at it as very similar to automotive manufacturers testing
their cars, crashing their cars, with a view to improve
it. Right. So I absolutely endorse doing something like that.
But do it responsibly, do it with the view to
show that how you're improving the defense as opposed to
highlighting the fact that you can go completely go and
create an attack autonomously. Right. We do it all the
time. The question of red teaming and blue teaming and
we learn from the red agent to be able to
go and do a better job of defense. Right. So
I agree with the research, I think it's absolutely possible.
But do it in a manner of responsible AI and
do it with the defense in focus, with transparency. Right.
I'm glad you brought up that ethical angle and I
want to dig into that in a minute. But first,
Dave, I want to get your thoughts. Do you think
the AI ransomware or AI malware in general is something
we need to start worrying about right now? I just
wrote one this morning. No, I think so. With AI
becoming more prevalent and becoming smarter, with smarter people programming,
I mean, let's face it, the AI is only as
good as the people programming it. We've got some pretty
smart people. Like Sridhar said, we've already done it here
at IBM. The difference is that we didn't put it
out to the public. Exactly. And that's where I was
gonna go. As soon as you said that, I knew.
Oh, good, we get to talk about the fact that
someone released the proof of concept in order to help
us. Yeah, I wanna ask about that. Right. I mean,
it just. I don't know. Look, I'm not a researcher,
I'm not in the lab making this kind of stuff.
But it does feel a little irresponsible to me to
be releasing this to the public. I'm just wondering, is
that, is this common practice? First off like, do people
usually release this proof of concept stuff or do you
usually keep it kind of hidden? Sridhar? It's okay to release,
but with the appropriate disclosures and how to use it.
Right. For example, if you release it with. Make sure
you test your applications with this red agent so that
you can do a better job of defense. I think
it's okay. It's like any other technology. Internet good and
the bad, nuclear power good and the bad. Similarly, there's
good and bad over here. But doing it responsibly will
hopefully stay ahead of the malicious actors. Gotcha. And so
what does doing it responsibly look like, Michelle, I saw
you starting to speak there, so I'll let you go.
Yeah, absolutely. I mean, I think what is going on
here is that there's a race to beat the attackers
at their own game. Right. So we're all, all the
good guys, all the researchers are trying to see, okay,
how can this be done and therefore how this is
how we defend against this. And so to Sridhar's point,
yes, there's ways about it to do it more responsibly,
contacting perhaps the organizations or the entities that would need
to know about this. So maybe the AV vendor providers
first. Let's start there. We have this proof of concept
ransomware. We want to make sure that you can detect
it. Gotcha. Dave, any thoughts on responsibility here? The fact
that they released it publicly is where I have the
issue because the researchers aren't the only ones looking at
these proofs of concept. The bad guys are getting these
as well. And that, you know, if we're going to
talk about responsible disclosure for these proofs of concept, we
need to keep it within the cyber community and not
let it get out to the public. That to me
is more important than releasing a proof of concept to
the public to show, hey, we can do this. Give
it to the cyber researchers who know what to do
with it. Keep it that way. So while I, while
I disagree that it was responsible for them to release
it, I do agree that releasing it to the cyber
community is a smart thing to do. Yeah, this reminds
me of, you know, on the last episode we were
talking about Hex Strike AI, Right? Which is that framework
that a bunch of, you know, it's supposed to be
used for automating penetration testing and orchestrating all your agentic AI, and a bunch of hackers just kind of
picked it up immediately. Right. And this came up again
and again and again. And I think it was Nick
Bradley who said that, you know, if you're worried about
hackers misusing your tools, you would never move forward. Right.
Any tool you develop, someone's going to be able to
misuse it. And so it's a question of being responsible
about it. Whether that means disclosing it to people before
you do it or like Dave said, maybe you only
give it to a very select number of people. Yeah.
It may be difficult though. Right. If you recall, Dave,
you probably remember this. SATAN, right? Way back in what, early, late '90s? Early '90s. I don't remember that. Jeff Crume mentioned this on the last episode. I think he
said it was early 2000s, I want to say. Yeah,
probably even before that. But yeah, I mean it was
a tool to go and do network scanning with a
view to find vulnerabilities. All of a sudden that became
a mechanism to find holes and published ttps. Right. So
I think tools like this will get out of hand,
good or bad. You know, they will produce for their
one reason or the other. They'll get into wrong hands.
We have to go figure out how to brace ourselves
against such things. Right. That's where I would probably focus
a little bit as well. And wasn't SATAN like the impetus for Nmap? Yes, it spawned Nmap out of that.
So there's a good tool that came from it, but
there was also a lot of bad that came with
it, so. Exactly. Moving on then to our next story.
A single phishing attack against a single developer leads to
20 compromised packages on npm. I think people have probably
heard of this because it's one of the kind of
biggest hacks of NPM ever. I believe a prominent developer
was hit by an AI-assisted phishing attack that
stole his NPM credentials, allowing threat actors to compromise 20
packages, which according to The Hacker News collectively attract
over 2 billion weekly downloads, which I don't. That seems
like a lot of downloads. I don't know, but it's
a lot. The malware buried in these packages intercepts cryptocurrency
transactions. It reroutes them to attacker controlled wallets. And the
attack used a combination of clean email infrastructure and. And
AI generated content to get past both technical defenses and
the developer's own kind of, you know, psychological warning signs.
This looked like a legitimate email from NPM about resetting
2FA credentials. So I want to start by asking
what this situation says about the kind of current state
of software supply chain security. And I want to refer
to you first, Sridhar, because we've talked a little bit
in the past about supply chain Specifically regarding AI stuff.
But I just want to hear your thoughts. You know,
what does this say about the state of software supply
chain security today? I think it's not where it should
be. Right. As simple as that, right? This actually demonstrates
a single point of failure. You've got such critical software
that is used by millions of individuals, millions of organizations,
maintained by somebody who's just probably doing it out of
the goodness of her heart as a part-time thing. And
as a result, you know, bad things have happened. Not
focusing on how it happened, but the fact that it's
happened. Right. And we should think about, you know, how
do we go mitigate against that. Right. This reminds me,
actually as bad as it may sound, and they will
probably disagree, sometimes these things are necessary, right? Sometimes these
things are necessary. Like very early on, during my undergrad, I learned about, I think, Tylenol, if I remember right. Somebody messed with the Tylenol tablets and impacted a few people, and that resulted in a tamper-proof
cap. I think to me things like these are unfortunately
necessary to be able to think about secure supply chain
software and think about the level of rigor, verification, transparency
needed. Very similar to how we think about food safety
right from the time that, hey, this avocado was grown
in this farm all the way to this manufacturer to
my table, right? That level of software bill of materials transparency is probably going to help us in the future.
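Sridhar's food-safety analogy maps onto what an SBOM record actually carries. As a toy sketch (the field names here are illustrative, loosely inspired by CycloneDX component fields, not the exact spec), a provenance check can be as simple as flagging any dependency whose record is missing a supplier or a content hash:

```python
import hashlib

# Toy SBOM entry: who supplied the component and a hash of its content.
def make_component(name: str, version: str, supplier: str, content: bytes) -> dict:
    return {
        "name": name,
        "version": version,
        "supplier": supplier,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def untracked_components(sbom: list) -> list:
    """Flag components whose provenance record is incomplete."""
    required = {"name", "version", "supplier", "sha256"}
    return [c.get("name", "?") for c in sbom if not required <= c.keys()]

sbom = [
    make_component("left-pad", "1.3.0", "npm:author", b"module code"),
    {"name": "mystery-lib", "version": "0.1.0"},  # no supplier or hash recorded
]
print(untracked_components(sbom))  # → ['mystery-lib']
```

The point is the transparency Sridhar highlights: once every component carries origin and hash metadata, a missing or changed entry becomes an anomaly you can act on.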
Dave, go ahead. I want to see if you disagree
or not. No, actually I'm not going to disagree with
that. I think it's right. I'll give you an analogy.
We talk about dumb rules. Well, the rules came about
because someone did it. And sometimes we need these phishing
attempts, these events to further security. And normally I would
just play devil's advocate here and disagree just for fun.
But seriously, I think it's actually a good thing when
something bad happens, but a good result comes from it.
So I can't, I can't disagree at all. It's like
the little warning on the silica gel packet, right, that
says do not eat. Somebody had to eat it. Someone
ate it. Michelle, any thoughts on your end on this?
Yeah, I mean I think the software compromises. It's just
another flavor of third party compromise, right? And this is
a huge issue for organizations because, okay, I've got my
perimeter, right? And I'm doing what I'm supposed to be
doing. But oh, that third party that I do business
with, they're not secure, right? And how many of those
working with that. You just have this massive attack surface.
And so. And then you couple that with the social
engineering. Like that is the biggest issue right now. Massive
amount of trust between companies. When you have third party
companies working together, there's just. You have to trust who
you're working with. Yes. And so what do you do
about that? Right. You have this, like you said, there's
this massive attack surface. You got to have all this
trust. But as you know, Sridhar, as you said, we
have to find ways to secure that as much as
we can. And you started, you mentioned a little bit
like a software bill of materials kind of thing. I
wondered if you wanted to expand a little bit on
that concept. Right. I think it's very similar to what
you said. Right. Do not eat. Right. It's being able
to go and list exactly the genesis of what software
package is coming from where, having the information in there,
who touched it to the level of transparency. So that
we know if some changes are happening and they're anomalous
in nature, then we understand the blast radius. The transparency
is what I want to highlight out of that software
bill of materials rather than the how part. Right. It's
not so much that it was one phishing email. Yeah.
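One concrete form that transparency already takes: npm lockfiles pin every package with an SRI-style integrity string ("sha512-" plus a base64 digest), so tampered content fails the check at install time. A minimal sketch of that verification (real lockfiles can list several space-separated hashes, which this toy version ignores):

```python
import base64
import hashlib

def verify_sri(data: bytes, integrity: str) -> bool:
    """Check content bytes against one SRI-style pin like 'sha512-<base64>'."""
    algo, _, expected_b64 = integrity.partition("-")
    digest = hashlib.new(algo, data).digest()
    return base64.b64encode(digest).decode() == expected_b64

tarball = b"pretend this is a package tarball"
pin = "sha512-" + base64.b64encode(hashlib.sha512(tarball).digest()).decode()

print(verify_sri(tarball, pin))              # → True: bytes match the pin
print(verify_sri(tarball + b"extra", pin))   # → False: any modification fails
```

Note the limit the panel keeps circling: this catches a swapped tarball, but not a malicious version published with valid credentials stolen via phishing, which is why verification has to be paired with anomaly detection.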
And the shocker for me is that they're going after
cryptocurrency. I've just never heard of that before. But. Yeah,
cryptocurrency scams, who does that? What is that? But yeah, it's not the how. It's the product of, you know, the multiplication table that it comes from. So I think just having a software bill
of materials is not sufficient. Right. Being able to have
that continuous verification of the package, being able to look
for anomaly detection coupled with the transparency of the software
bill of materials will help the overall equation. There's no
single silver bullet per se. And Michelle, you had mentioned
the kind of social engineering angle of this thing. Right.
And I think that's another interesting kind of lesson from
this whole situation is that, you know, you have somebody
who's a pretty savvy developer, someone who knows their way
around this stuff if they get the right message at
the right time or the wrong message at the wrong
time, depending on how you define that. What does this
say kind of about the state of social engineering today,
Right. That like even somebody like this can. Can get
hit. Michelle, any thoughts on that? Yeah, I mean we're
all human and I think if anyone in security says
that they've never been duped, I would say that
they're lying. Because I think we've all, you know, we've
all fallen for something. Dave's looking around, but I'm sure,
think hard, think back even. I'm not denying it. I
have. Right, right. And IBM has gotten me twice. We're
on. And that's great. Right, because that's probably a testament
to their training. So obviously we always tout user education
and training and it seems very cliche, but it's true
and it works. And it has to be adapted for
the times. What are they actually doing? What are they
leveraging? How are they getting in? And that needs to
be sort of embedded into the training that an organization
is doing. We're under so much pressure, right? And we
have so many things going on, it's very easy to
just act quickly without taking a minute. And I think
what we have to do is just start every day
with, okay, this is my baseline, this is what I'm
going to do when I get an email, when I
see a message, when I get a phone call. And
that's hard to do because we have so many things
going on. Right. We have to assume that these things are going to happen. Right. We cannot assume that nobody will fall for phishing. You have to assume that somebody's lost their password or iPad or whatever it is. So to Michelle's point, yes, I have lost a laptop. Is Sridhar going to be the only one brave enough to admit it? Is anybody else going to cop to it? No, I'm kidding. Go ahead. Getting back
on point. Yeah, I think you have to assume that
some of these things are going to be compromised. Right.
So the question is, how do I put controls around
it so that you can assume some of this? Right.
So that's what I think. I mean, when you asked me the question, I was in two minds about answering it by saying, you know what, I'm glad it happened. But I stopped myself, because sometimes it takes something like this happening to become a wake-up call. I think it's actually okay to say
that though, that you're glad it happened because it's going
to teach someone something. Yeah. It honestly reminds me a lot of our previous conversation here about the PromptLock stuff, in the sense that sometimes someone's gotta do something a little bad for something good to come out
of it. You know what I mean? Yeah. We've got
a new IBM X-Force analysis that finds OT and critical infrastructure face serious threats. Quote: many ransomware, advanced persistent threat and cybercrime groups are going beyond data theft, aiming for physical disruption and even sabotage. That's from David McMillan, the author of the blog post. Breaking down this analysis: by fusing frontline threat intel with 2025 Cost of a Data Breach data, IBM X-Force identified some concerning trends in OT and critical infrastructure, including a significant number of serious vulnerabilities in the field. Of the 670 vulnerabilities disclosed in the first half of 2025 that could impact operational technology, nearly half have a CVSS severity rating of critical or high. So I
want to start with you, Michelle. Can you give us
a little context about this analysis, a little background about
what all this is? Basically, it fuses some data that we got from the Cost of a Data Breach report, a survey conducted by Ponemon Institute and analyzed by IBM, which showed that 15% of organizations actually experienced an OT incident. Right.
So then we looked at our own data, like you
mentioned, over 600 vulnerabilities impacting OT technology providers or vendors.
And then subsequently we looked at, okay, what are we seeing mentioned in either Telegram channels or dark web or other forums, to see, you know, what are
some of the top vulnerabilities that are being mentioned out
there and therefore maybe are of interest to attackers as
well. And then beyond that, Right. How many of those
CVEs have exploits that are publicly available? So there's obviously
interest in targeting OT technology, but beyond that it's more
about the industries that house the OT technology. Right. These are, you know, OT environments, and that's why they're susceptible. And why
are so many attackers kind of moving beyond data theft
right now? Right. Like why, why do you think we
see people targeting OT and critical infrastructure for disruption specifically?
Any thoughts on that stuff, Michelle? So I think it's going to come down to the threat actor and the motivations, whether they're a nation-state-sponsored threat actor or a cybercriminal group. So I don't know that we're necessarily seeing a move away from data theft. I think we have reports to the contrary. It's really going to come down to the threat actor group and what their motivations are. Gotcha. Sridhar, any thoughts to
add? Right, Matt? I think it's easier, right? You know, the technology is not necessarily state of the art. Right. And policies and procedures are not state of the art. Right. So as a result, if you have a castle with a moat, why can't I mount an air attack? It's easier to launch. You could even do it with a Cessna and still make a devastating impact. Right. It doesn't have to be sophisticated. So that's the way I look at it. It's an easy thing to do. Right. Why invest a lot in a sophisticated attack when I can do something simple, number one? Even with a water balloon. Yeah, exactly right. Or the other part is
that, going back to Michelle's motivation point, right, it's a lot more appealing and easier to extract something out of it. Right. From an attacker perspective, you're dealing with Colonial Pipeline, if you remember that, or a power grid. It's easy to go and get what the attackers want, whether it is a ransom or whatever it is, because the victim wants to not disrupt millions of citizens' lives. Dave, what are your
thoughts on it? I think it comes down to money.
When you start messing with water supplies, power grid, the
trucking industry, even people are more willing to pay to
get those systems back online as opposed to information. Where
now that we've been dealing with this information theft for
so long, a lot of companies are starting to fix
their security policies and make their backups and they take
them off site and they make sure that they have
more than one. It's really kind of hard to do
that with a water supply or a trucking industry or
a power grid. If it's down, it's down. If you
want to bring a country to its knees, shut down
its trucking supply and turn off its water, you can
leave the power up, it'll be fine. But if you
can't get food and you can't drink water... You're right, you can't make a backup of your water supply, like you said. Right, exactly. You got one and that's it. And a lot of this OT technology is not kind of patched properly. I mean, why is that? Is it just a kind of cost-benefit analysis going on, that they're like, ah, we don't want to take it offline to patch? What's the deal there? I
think it's a combination. Like, if you look at the attack entry points, like a PLC controller or a logic board, right, on a CNC machine or something, which operates every day. Those are not necessarily, you know, updated on a regular basis. It does take downtime to update those.
Right. So as a result they stay there for a
long time and the vulnerabilities over there are something that,
you know, attackers exploit. So part of it is, you know, where the technology is. And you don't get the latest and greatest software on a PLC controller or a relay, right? Most of the investment these days goes into something like training LLM models with GPUs, right? So that's one of the reasons why the technology is a little bit dated, number one. Number two, I think
is also, security has always been kind of an afterthought, right, for critical infrastructure. Being able to keep the uptime high has always been the highest priority. And operations and security don't necessarily talk to each other. But when you start talking about safety, when you talk about downtime, that's when there's an opportunity for these entities to talk to each other. Otherwise, they're in silos, and attackers take advantage of those silos. Now, I want to go back to that
figure of the vulnerabilities, right? 670 vulnerabilities disclosed in the first half of 2025 that could affect operational technology, 49% of them either critical or high in terms of CVSS rating. And I'm wondering, you know, this sounds pretty concerning, and I want to kind of throw it to Dave, because I know Dave has thoughts on CVSS, both in terms of this number and also CVSS in general. I want to hear what you got to say.
I'll tackle the CVSS thing really quick. I think it's
completely broken. There's so many vulnerabilities now that are unscored,
that show a score of 9.8, which is the default.
So everything that's unscored shows up as critical. And you
can't take that number and perform any kind of analysis
with it because you don't actually know that it's a
critical vulnerability. I think it needs to be redone. And there's been talk about them redoing the CVSS model for years now, and it hasn't happened. And I'm just not a fan of using a CVSS score to rate a vulnerability. You need to know what the vulnerability is. You need to know how it works, how it affects everything. And you can't do that with a number. You have to actually see it. And the fact that there are 670 of these? Come on. I think we, and I agree with Dave on
this, right? We need to think about how susceptible it
is to go and launch or take advantage of the
vulnerability, not about the number, right? You can have a 9.9 and nobody's going to touch it, or a 9.8 and nobody's going to touch it. Why would I? Right. Instead I may go after something lower which is easy to exploit. Instead, we should think about something like a weaponization score, like how easy or susceptible that vulnerability is to leverage or exploit, and try to patch those. Right. So 600 doesn't mean anything. Out of the 600, maybe 20 may be easily exploitable. Patch those first. Michelle,
you have any thoughts on whether CVSS score is worth
it if it's broken? What do you got? I don't know if I have any thoughts specifically on the CVSS scoring, but I would say that yes, to Sridhar's point, if it's exploitable, or if there's a public exploit available, right, a proof-of-concept exploit, or it's already being exploited in the wild and we see reports of threat actors leveraging it.
That should probably raise it. Right. And obviously if you
don't have a particular technology in your environment, you're not
going to worry about that. So maybe of those, you
know, there's a very small percentage that would be impacting
you. So yeah, understanding what your assets are first and
foremost. Right. Because that's already a problem, even though understanding your assets and asset management is sort of part of basic security. And then knowing which of those, if something goes down, is going to have this ripple effect. I
would say to issue a challenge to the viewers and
the listeners. Go and do some research and find out
how many critical vulnerabilities are publicly exploited and how many
of those mediums are actually exploited. And I think you'll
find that the numbers are surprising in that the mediums
are more exploited than the criticals because there's just so
much high profile visibility on the criticals. Like Sridhar said,
those mediums can be way more dangerous than the criticals.
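The triage approach the panel lands on, ranking by whether an exploit actually exists and whether the affected asset is even in your environment rather than by raw CVSS alone, could be sketched like this. A hypothetical Python sketch: the fields and CVE names are illustrative, not real data.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float           # note: unscored entries often surface with a 9.8
    scored: bool          # False when the 9.8 is just a default, not an analysis
    exploit_public: bool  # e.g. a public PoC or known in-the-wild exploitation
    in_inventory: bool    # the affected product actually exists in your estate

def priority(v: Vuln) -> tuple:
    """Sort key: exploited-and-present first; raw CVSS only breaks ties."""
    return (
        not v.in_inventory,    # vulns in tech you don't run sink to the bottom
        not v.exploit_public,  # known-exploited issues float to the top
        not v.scored,          # distrust default scores
        -v.cvss,               # CVSS last, as a tie-breaker
    )

def triage(vulns: list[Vuln]) -> list[Vuln]:
    return sorted(vulns, key=priority)
```

Under this ordering, a medium-severity CVE with a public exploit in technology you actually run outranks an unscored default 9.8, which is exactly the pattern the listener challenge asks people to go verify.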
I like it, this is our first listener-viewer challenge. I expect some answers in the YouTube comments. Okay. Business identity compromise is the hot new social engineering scam, or at least it's one of the hot new social engineering scams. In business identity compromise, also known as BIC, or simply hiring fraud, attackers pose as legitimate workers applying for remote roles, often using AI tools to generate resumes, headshots, even voice and video, and of course do the work they need to do to keep the job.
These insider threats then use their access to sensitive company
systems to wreak havoc. Or they just draw a paycheck
and use it to fund illegitimate activities. So I want
to start by asking, what's up with this, the rise of BIC? Why do you think this kind of thing is taking off right now? Dave, I want to start with you. You got any thoughts on why BIC is getting popular right now? I think it started getting popular
when the work from home became a really big thing.
A lot of people didn't have an office to go
into, so it was really hard to see those threat
actors that actually got into these companies and maybe focus
on what they were doing. You know, in an office you see someone who looks suspicious over there: what's he doing over there? When they're in their homes, there's nothing to look at. You can only see their work. And if they're doing good work at their job, it's a little less suspicious if they're doing something off to the
side. So I think a lot of it has to
do with the ease of work from home and not
having to actually be physically present in a job. So
hiring someone based off of an AI interview, probably pretty
easy to do. That's how I got this job. No
comment. I was going to say, how do we know one of you isn't AI right now? I think there's
a perfect storm over here. Right. I think Dave is right. One is, of course, the work from home, the remote workforce. Second is the fact that we have an increased amount of AI that we're relying on quite a bit: being able to go and scan resumes, being able to validate a bunch of things. That's good and bad. Right? And then the third piece is, you know, we don't have the number of HR individuals doing the manual process. HR is getting a lot of pressure in terms of saying, okay, we need a hundred jobs, we need a thousand jobs, we need X number of jobs in three years. Do it yesterday. So it's a combination of remote work, a combination of the tools, and a combination of the
is coming together into a perfect storm for fraudsters to
go and say, hey, why rob a bank when I
can go get hired by one? I was just going
to say that AI isn't Max Headroom anymore. It's a
lot. Deep fakes are really, really good now. It's not
Forrest Gump meeting John Kennedy. It's a lot better than
it used to be. We got a Max Headroom reference this time. You know, I'm going to count on you every time now, I'm counting on you for obscure stuff to come up here. All right, but Michelle,
go ahead. Yeah, I mean, I think it's a fraud scam that we didn't have before. So it's like, okay, now we're all working remotely even more than we were before. Within IBM we're used to being a global organization, but a lot of other organizations, maybe they've started to expand their boundaries beyond their regional location. Right. And so now we're hiring, but we're not anticipating this type of fraud. Where did this come from? Right. You're not
going to know unless you're in the cybersecurity industry. So
I think it's like anything else. It's awareness, it's end
user education. Hey, HR staff, you know, this is what's
coming, this is what's happening. We need to pay attention
to this kind of stuff. So it's just a learning
curve. Yeah. I wanted to ask about, you know, what organizations can maybe do to start spotting more of this. And so it sounds like part of it is
education. Right. I guess you could also make everybody work
in an office. It's pretty hard to be a scammer
if you do that, but. Any other thoughts though on
that? Sridhar, what do you got to say? I think it's people, process, and technology, Michelle. Right. To just add on to that, one is of course the technology. We do CAPTCHAs for trivial things that we shouldn't bother with, but we don't do liveness tests for ghost employees. I think we should be applying technology here; there's a lot of technology these days to check for liveness, to scan your driver's license, and so on.
I mean, you know, sorry to deviate, but we had plumbing fixtures that have a lifetime warranty. And when we called the support desk, they said, oh, go up, take a picture, go down, take a picture, send this. They want to make sure that it's in my home. Right. We're doing that for plumbing fixtures. Why can't we do it for employment? Right. So
second, of course, to your point, Michelle, education, right. Education is important. It absolutely is. And then the last piece is, you know, continuing to monitor. I'm not saying breach the privacy aspects and things like that, but, like Dave said, hey, that person looks suspicious. What is the equivalent of that in a remote work environment? Right. I think that's something that we have to think about as a mechanism to detect anomalies and stop it. Right. I think Sridhar hit the nail on the head. It's like having a kidnapping victim holding up a newspaper from the day: yeah, this is me, I promise. Let's move along to our last
story, which, honestly, is just something a little lighter. I found it amusing: cybercriminal installs Huntress EDR on their device for some reason, giving security pros a front-row seat to their activity. So it's exactly what it sounds
like. A cyber criminal testing out new security tools installed
Huntress EDR on their device. Now, we know that cybercriminals, as we just said, play with a lot of these legitimate security tools. But the Huntress team noticed that, you know, they installed this, and it gave them the opportunity to look into what they were doing on their device. So once they confirmed it was a malicious
actor, they started poking around, checking out their activity, including
digging into previous stuff, like seeing what the attacker was
researching the attack frameworks they were looking into, phishing messages
they were crafting, dark web markets they were visiting, all
kinds of stuff. So I just, like I said, this
was just extremely funny to me. I thought this was
hilarious and I just wanted to see what the rest
of you thought. Dave, I see you cracking a smile.
You got any thoughts on this one? I want to know how long they let him do this before they went in there and started looking at things. Like, you know, one of the tactics used to be: let him poke around for a little bit to see what they're going to do, and then the second they get close to something they're not supposed to, shut them down. Well, seems to me that the Huntress people just said, huh, he's got our tool. You know what that means? We can go poke around. So it's like the opposite. We can go poke around in their environment and, like you said,
see everything that they did. This was one of the
dumbest things I've ever heard of a threat actor doing.
Just one of the dumbest. I can't state that firmly
enough. Was it on his production machine or was it
on a test machine honeypot? I don't know. But not
the smartest cookie in the bag. Well, I think it actually grounds the whole thing, right, the fact that even attackers can make mistakes. Which means, actually, we have a chance, Dave. We have a chance right now. The defenders are always chasing because they're not talking to each other, whereas the attackers are doing a very good job of working with each other. Right. So mistakes like this not only show the human side of it, that at the end of the day, they are human, they make mistakes. But at the same time, what can we take away from that? Right? It gives us a front row seat or a backstage pass to how they are orchestrating this whole thing. And then how should we think about our defenses? Right, so that's what I was taking away. I
hope Huntress sent them a thank-you card. Nice fruit basket. You know what's better? A teddy bear in a fruit basket.
Michelle, any thoughts on your end? Yeah, I mean, I immediately thought about some research from two analysts, I'll have to give them credit here, Allison Wikoff and Richard Emerson. They stumbled across open servers of the threat actor ITG18, which I think has TTP overlaps with Charming Kitten and a few other names. Basically, they found videos used to train other attackers in their group. So the videos showed things like, this is how you compromise an account, and oh, by the way, if it has MFA, you know, disregard it, don't try to compromise this one. So it was a very interesting inside look. I immediately thought of that when I saw this article.
But yeah, I mean, to go back to my analogy from before: we all have busy days. Attackers have busy days, too. They're going to do things like this. It's going to happen, right? These operational errors, it's hard out there. Big error, large error. All right,
that's all the time we have for today. Thank you,
Michelle and Sridhar and Dave for joining us. Thank you
to our viewers and listeners for tuning in. Special thanks to viewer 2LeftArms, who posted the hard-hitting question on our last episode: would you rather be hacked by a billion lions or one of every Pokemon? The answer there is pretty obvious if you ask me. It's the lions. They got real big paws, they can't type on the computer. You're set there. Make sure to subscribe to Security Intelligence wherever podcasts are found. And everyone, please stay safe
out there.