React2Shell Vulnerability: Severity Debate
Key Points
- The podcast frames hacking as forcing systems to do unintended actions, setting the tone for a deep dive into current cyber‑security threats.
- Hosts introduce the agenda: evaluating malicious large‑language models, a bizarre Gmail‑lockout exploit that changes a user’s age, simultaneous attacks by multiple threat groups, and the impact of solar radiation grounding aircraft.
- A major discussion centers on the “React2Shell” remote code execution flaw (CVSS 10.0) affecting React Server DOM packages and downstream frameworks like Next.js and Vite, which lets crafted HTTP requests turn into server‑side command prompts.
- Experts debate the vulnerability’s severity, with some comparing it to historic Log4Shell exploits while others caution that alarmist responses could cause more harm than good.
- Patches have already been released for the affected versions, emphasizing the need for rapid supply‑chain updates and vigilant monitoring of software dependencies.
Sections
- [00:00:00] Security Intelligence Podcast Intro - The opening segment introduces the IBM Security Intelligence podcast, its hosts and guests, defines hacking, and previews topics including malicious LLMs, Gmail lockouts, dual hacks, solar radiation impacts, and the React2Shell vulnerability.
- [00:03:08] Balanced Response to Emerging Vulnerabilities - The speakers stress that while new exploits like Log4j‑style bugs demand swift action, organizations should avoid rushed patches—citing Cloudflare's misstep—and instead apply controlled, intentional updates.
- [00:06:33] Cascading Breach Impacts & Debate - Discussion on how a single breach can propagate through token reuse, the challenges of patching dependencies, and the heightened polarization surrounding the vulnerability.
- [00:09:54] Prudent Overreaction Strategy in Business Continuity - The speaker advises leaders to intentionally "overreact" to emerging risks by conducting rigorous risk and business‑impact assessments, maintaining strong vendor communication, and adjusting responses proportionally to ensure swift and effective continuity.
- [00:13:55] Prompt Injection Enables Malicious AI - Ian explains that standard language models can be coaxed into creating phishing or other harmful content via subtle prompts, rendering specialized "low‑level" malicious bots like WormGPT 4 largely unnecessary.
- [00:17:09] Democratizing Hacking Through AI Tools - The discussion highlights how AI lowers the entry barrier for cyber‑attackers—enabling easier creation of phishing emails and synthetic media—while urging defenders to adapt and questioning the cost‑benefit of such tools for malicious actors.
- [00:21:36] AI-Driven Security Becomes Norm - The speakers discuss the rise of automatic AI-powered fuzzing and penetration testing, urging organizations to adopt more agile, dynamic defenses since attackers are also leveraging AI.
- [00:28:38] Managing Single Point of Failure - The speaker recommends using passkeys, token‑based authentication, and separate accounts for sensitive versus non‑sensitive data, applying data classification to avoid a single point of failure.
- [00:33:05] Free Services Turned Double‑Edged Sword - The speakers examine how reliance on free platforms limits user support, forces payment to regain access, and how security measures can be weaponized—highlighted by ransomware that exposed a hidden APT.
- [00:37:02] Stealth APTs and Ongoing Discovery - The panel stresses continued investigation beyond an initial noisy breach, using iceberg and "Quiet Crabs" analogies to illustrate hidden APTs that linger in networks, waiting for the optimal moment to strike rather than merely seeking quick ransom.
- [00:41:13] Proactive Threat Hunting and Human Bottleneck - The speakers advocate leveraging digital workers and automated agents to make threat detection more proactive, reduce alert overload, and ensure analysts investigate even minor anomalies to prevent larger breaches.
- [00:45:13] Cyber Threats Exploiting Disasters - The panel examines how attackers may time exploits to coincide with natural catastrophes such as hurricanes or snowstorms, explores the feasibility of artificially mimicking solar‑radiation damage, and stresses that these events are merely symptoms of larger system‑availability and resilience challenges that intersect with cybersecurity.
- [00:48:33] Weak Links and Disaster Recovery - The speaker uses power‑grid analogies to illustrate how hidden single points of failure can cause massive outages, urging organizations to identify, test, and reinforce backup and redundancy measures before ransomware or other incidents expose weaknesses.
Source: https://www.youtube.com/watch?v=LWGdsBMPcnY
Duration: 00:50:03
Full Transcript
I think this is actually a really great use of
hacking. That's something we always tell our clients: hacking, in its simplest form, is making something or someone do something they're not intended to do. And that's exactly what
this is. All that and more on Security Intelligence. Hello,
and welcome to Security Intelligence, IBM's weekly cyber security podcast,
where our expert panelists break down the biggest industry news
stories into practical takeaways you can use. I'm your host,
Matt Kaczynski, and I will not tell you what was
on my Spotify wrapped this year because I believe in
data privacy and I might be a little bit embarrassed
by what was there. Joining me today, Claire Nunez, Creative
Director for the X Force Cyber Range and a rug
maker, as I've just learned. Sridhar Mupiti, IBM Fellow and CTO, IBM Security. And, making his podcast debut, Ian Malloy, Department Head, Security Research. Thank you all for joining me.
Here's what we're going to be talking about. Do malicious LLMs live up to the hype? Hackers can lock you out of Gmail by changing your age? What happens when you get hacked by two different groups at once? And the time solar radiation grounded a bunch of planes, and why that matters to security pros. But first, React2Shell has
sparked a cybersecurity debate. Last week, the React team announced
that security researcher Lachlan Davids had discovered a remote code
execution vulnerability with a CVSS score of 10.0. So, big deal, in React Server Components. Now, quick PSA: the affected packages are React Server DOM Webpack, React Server DOM Parcel, and React Server DOM Turbopack, versions 19.0, 19.1.0, and 19.2.0, plus any frameworks or bundlers connected to the vulnerable packages through the software supply chain, which means things like Next.js, Waku, Vite, and RedwoodSDK, among others.
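If you want to act on that PSA, a quick lockfile scan along these lines can help. This is a sketch, not an official checker: the affected-version set below is taken from the discussion above and may be incomplete, so verify it against the official React advisory.

```python
# Sketch: scan an npm package-lock.json for the affected react-server-dom
# packages. The version set is an assumption based on the episode; verify
# against the official advisory before relying on it.
import json

AFFECTED_PACKAGES = {
    "react-server-dom-webpack",
    "react-server-dom-parcel",
    "react-server-dom-turbopack",
}
AFFECTED_VERSIONS = {"19.0.0", "19.1.0", "19.2.0"}  # assumed list - verify!

def find_vulnerable(lockfile_text: str) -> list:
    """Return (name, version) pairs for affected packages in the lockfile."""
    lock = json.loads(lockfile_text)
    hits = []
    # npm v2/v3 lockfiles list every installed package under "packages",
    # keyed by its node_modules path.
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if name in AFFECTED_PACKAGES and meta.get("version") in AFFECTED_VERSIONS:
            hits.append((name, meta["version"]))
    return hits

sample = json.dumps({"packages": {
    "node_modules/react-server-dom-webpack": {"version": "19.1.0"},
    "node_modules/react": {"version": "19.1.0"},
}})
print(find_vulnerable(sample))  # [('react-server-dom-webpack', '19.1.0')]
```

Remember this only covers direct lockfile entries; the supply-chain point above means frameworks that vendor these packages need their own updates.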
Now, the way this remote code execution vulnerability works is
so you have to know a little bit about React
server components first. They are little pieces of code that
execute activity on a server. The server and the client
communicate by way of HTTP requests. Basically, hackers can sneak data into the HTTP request that gets sent to the server; the server decodes it, thinks it's a command, and executes it, right? So in plain terms, the bug lets attackers turn a normal web request into a command prompt on your server. Now, we've seen attackers exploiting this, and
we've seen patches and updates already rolled out, but I'm
really interested in the debate surrounding this vulnerability because people
have been kind of split. On the one hand, you've got people who likened it to Log4Shell, which was a big deal when it came out, as I'm sure you all remember, and you see that in the name we've given it. But others felt like we're kind of overreacting to this and doing a little more harm than good. So, Sridhar, I'd like to start with you and get your take on this vulnerability. Where do you think it falls in terms of severity, and why?
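Before the panel weighs in, the bug class Matt just described is worth making concrete: a server that deserializes attacker-controlled request data into something it executes. The sketch below is a deliberately tiny toy, not React's actual code; the payload format and handler names are invented for illustration.

```python
# Toy illustration of the bug class: a handler that treats a field of the
# incoming request body as something to evaluate. NOT React's real code.
import json

def unsafe_handler(request_body: str) -> str:
    payload = json.loads(request_body)
    # Vulnerable pattern: client-supplied data flows straight into eval(),
    # so a crafted HTTP request becomes server-side code execution.
    return str(eval(payload["expr"]))

def patched_handler(request_body: str) -> str:
    payload = json.loads(request_body)
    # Patched pattern: only a fixed set of named operations is allowed.
    operations = {"ping": lambda: "pong"}
    op = payload.get("op")
    if op not in operations:
        raise ValueError("unknown operation")
    return operations[op]()

print(unsafe_handler('{"expr": "1 + 1"}'))  # a "normal" request: 2
attack = json.dumps({"expr": "__import__('math').pi"})
print(unsafe_handler(attack))               # attacker-chosen code runs
print(patched_handler('{"op": "ping"}'))    # pong
```

The fix is the same shape as the real patches: stop interpreting client data as code and dispatch only to operations the server explicitly defines.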
I think any vulnerability with a 10 requires some level of legitimacy and action, for sure. Right. But the blast surface, in my opinion, is not as big as Log4j's. That doesn't mean we can ignore it, right? We still need to go and take care of it as soon as possible. Like you said, any of the Next.js packages are easy to update, but trying to go and do it with some level of control would be good, rather than what Cloudflare tried to do, which was to go and release something in panic, and that creates even more chaos. Right. Yeah, I'm glad you mentioned the
Cloudflare thing. Right. Because that's a good example of an
organization that had good intentions, but they were rushing to
get something out and they accidentally knocked themselves off for
a little bit. And I think that's, again, part of
the reason why people are saying, look, you gotta slow
down. It's not that it's not important, not that it's
not serious, but approach it with some intention. Right. Anybody
have any agreements or disagreements, want to build on what
Sridhar said there? Ian, you got anything you want to
add there? Yeah. So I think the interesting thing there
is when you have these new vulnerabilities, if they are
potentially easy to exploit, what you're going to realize is
that the attackers are going to try to exploit them
very, very quickly, and you want to maintain the ability
to be ahead of them. And that's where, you know, speed is obviously very, very important. But as Sridhar
just mentioned, sometimes moving too quickly can become a problem,
and, you know, it can have unintended consequences there. I
think it's good to prudently overreact sometimes, and especially when
something is very out of the norm, that's important to
do. But I do think that it can also hurt
you sometimes if you overreact too much. There's a
difference between reacting and saying, let me look into this,
and reacting where the sky is falling down and you
kind of don't know what to do. So I think
this is a good time for organizations to evaluate kind of where they're at and where React exists within their
architectures and their vendor architectures as well. But I don't
necessarily think everybody needs to kind of run around like
Chicken Little right now. But it's worth looking around and
considering where does our risk lie? I think that's a
really good point. Not always knowing exactly what you have
deployed. Sometimes your CMDB databases are out of date. You
don't necessarily know what you have deployed, where you have
it deployed. This is one of the things we learned
with Log4j. That's an important thing to keep in
mind then having good hygiene on your supply chain. The
other thing, Ian, is that this is one of those
things where it's not just a patch. It's a patch
for sure. You can go do it, but I believe
you have to go rebuild and redeploy. Right. This is
where I think I've seen a couple of customers talk
about, yes, I updated it, but I still see it.
Right. So, and rebuilding broke something else. Yeah. I'm glad that, you know... so, two things. First of all, prudent
overreaction. I think that's a lovely little term and I'm
going to keep it in mind for my life in
general, but also for cybersecurity purposes. But I also like,
Ian, how you brought up that you don't necessarily know
right off the bat where this thing might be in
your system, right? Like, it requires a little bit of digging. And so that got me thinking about the
kind of long tail that this attack could potentially have.
That, like, you know... it made me think of last week, when we discussed the Gainsight breach, which is slightly different, but, you know, it was people using tokens they had stolen in an earlier breach of Salesloft Drift to gain access into more systems. And it just
got me thinking about how one attack on one piece
of infrastructure can really branch out in ways you don't
expect. And I'm wondering if that influences how we look
at the severity of this particular breach. Sridhar, you got
any takes on that? I think the dependency aspect is
a key thing, right? That's what I was trying to allude to as well. Which is, while you may fix something: number one, understand where it is; two, understand, okay, what does it take to actually fix it and patch it; three, try to look at the dependency of it. Okay, if I fix it, what else might break? So some level of caution, in terms of going with a controlled approach, is what I would suggest, Matt.
Now, Ian, I'm wondering if you have any thoughts on
why such a debate was sparked in the first place. I
mean, I just, I kind of feel like it seemed
like there was a particularly strong polarization here. I don't
know, maybe that's just where I hang out online, but
I felt like it was. It was more, I don't
know, vicious than I've seen before in a lot of
times. I'm just wondering if you have any thoughts about
why this vulnerability sparked such a reaction. I mean, it's
hard to say sometimes. I mean, I think if we
went through Log4Shell, it was huge. As Sridhar said, that was pervasive. It was used everywhere as a major dependency. React has a lot of the market, and so I think people were expecting it to be much, much bigger than it ended up being. It's also a little bit different in that, with Log4Shell, you could attack systems that were in the back end. Here, I think you have to actually have direct access
to them. But I think one of the reasons people
think we're overreacting is we're not necessarily seeing the exploits
in the wild, maybe as quickly. And a lot of
the exploits seem to indicate that people aren't able to
exploit it as easily and as readily as maybe they
had expected. It's the simplicity of how you can exploit
it. You don't need to authenticate. You just basically go
to the specific endpoint and execute a code and it
just does it. So sometimes you have CVSS 10.0s which are not so easy to exploit, but this one seems to be like a piece of cake. Right. And I
think these are really strong kind of technical reasons for
why there's been the kind of reaction that we've seen.
But I'm also wondering if there's a sort of, I'll
call it a cultural component to this. And by that
I mean we've recently seen a lot of kind of
pushback in cybersecurity against specifically how we cover like AI
related threats. Right. Like, I've seen a lot of people
being kind of skeptical about that language. And I think
specifically of the MIT report that got kind of pulled
down because it said that 80% of ransomware attacks used AI. And then it turned out that was kind of bunk. And I do
wonder if this is kind of like a perfect storm
thing where this is coming along at a time where
there's an increasing skepticism around some alarmism maybe in how
we cover cybersecurity. I don't know, do you think that's
part of anything? We as a community are getting more
and more paranoid, right? Like, you know, because, you know,
we've got the hype of Gen AI being able to
launch attacks very quickly. So we are reacting more quickly to anything like this than we would have done before the era of Log4j. So Claire, I want to
ask you because you do a lot of work at
the cyber range, kind of preparing organizations to address risks
and react to things like situations kind of like this.
Right. And again, that phrase you used, prudent overreaction, keeps
popping up in my mind. So I was wondering if
you had any advice on kind of how you prepare
your organization, your executive leaders, whoever, to approach these kinds
of things with that level, that appropriate kind of, you
know, sense of proportion. Any thoughts there? Yeah. So usually
again, we do advise organizations to prudently overreact to things
that are new in this space and to kind of
take a step back and maybe move a little bit
more slowly to evaluate kind of the risk at hand.
So mostly, we do advise our leaders that come
through the range to really focus on risk assessments, prudent
overreaction, and to also continuously just do business impact analysis
as well. And to have a good line of communication
with their third party vendors as well. Just because if
they know that they're going to be changing something because
of a critical outage or patch that they're doing, you
want to be aware of that so you can ensure
business continuity on your side. Because you may not have
this in place, but maybe one of your vendors does.
So those are some of the things that we kind of highlight, and that applies really to any industry, where you do want to know your risk, be able to react swiftly, and to, you know, have good communication with your vendors. Makes perfect sense. Before we move
on to our next topic, I just wanted to throw
it to Ian or Sridhar. Do you have any last
words aside from patching or updating, any advice that you
would give organizations dealing with this thing at this time?
Sridhar, let's start with you. Anything you want to add
there? I think Claire covered it well, right. First is visibility, right? Like, what is the blast surface? What's the visibility in terms of the vulnerability, and how susceptible is my environment to that? Right. Second is
able to do some level of risk assignment to say,
okay, do I need to go and update it? If
so, am I going to break something or can I
just put, like, I don't know, a WAF rule or an IP allow or deny list, to be able to block that port? Right. Meaning, can I do some Band-Aid approach so that I can buy some time? Right. And then once you do the risk assessment, then it's a mitigation or remediation for me. Right. Mitigation is something like a WAF gateway. Remediation is, okay, I gotta go and update the patch, but do the dependency checking and make sure that you actually fix the root cause. Absolutely.
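Sridhar's mitigation-versus-remediation split can be sketched as code. Here is a minimal stand-in for the WAF-rule idea; the risky path prefix and the allow list are hypothetical placeholders invented for illustration, not real React routes.

```python
# Sketch of a stopgap, WAF-style request filter: deny traffic to risky
# endpoints while the real patch/rebuild/redeploy is scheduled.
RISKY_PREFIXES = ("/rsc/",)   # assumption: vulnerable endpoints live here
ALLOW_LIST = {"10.0.0.5"}     # e.g., internal health-check hosts

def allow_request(path: str, client_ip: str) -> bool:
    """Mitigation rule: deny-by-default on risky paths, allow elsewhere."""
    if any(path.startswith(prefix) for prefix in RISKY_PREFIXES):
        return client_ip in ALLOW_LIST
    return True

print(allow_request("/home", "203.0.113.7"))        # True, normal traffic
print(allow_request("/rsc/action", "203.0.113.7"))  # False, blocked
print(allow_request("/rsc/action", "10.0.0.5"))     # True, allow-listed
```

A filter like this only buys time; the remediation step, updating the package, rebuilding, and redeploying, still has to follow.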
Ian, any final thoughts? I like what Sridhar had said, but I think a couple things I would add. Maybe just
how can you actually automate some of your testing so
that, you know, when this actually does happen, how can
you respond? How can you respond very, very quickly and
how can you not actually create some new kind of
errors and, you know, make the problem actually bigger than
it was, especially when some of the mitigations end up
being quite easy to deploy. So I think we've covered
this pretty well. Let's move on to our next story
here. Do malicious LLMs live up to the hype? There's
a lot of conversation about whether generative AI is delivering
on all of its promises in the enterprise world. And
it turns out that some cybercriminals are also wondering whether
it's worth it. In a recent report, Palo Alto's Unit
42 analyzed the capabilities of two of the most popular
malicious LLMs on the market. We've got Wormgpt 4 and
Kawaii GPT. Now, unlike mainstream LLMs, two tools have no
ethical constraints or safety filters. And they're mainly sold as
tools for, you know, phishing, malware code generation, automated reconnaissance,
that kind of thing. Now, some people kind of look
at these tools as unleashing a whole new wave of
cyber attacks by lowering the barrier to entry and streamlining
malicious operations. Others are more skeptical. Specifically, I'm thinking of
an article by Nate Nelson in Dark Reading who wrote
that these things fail to live up to the hype.
They're kind of only useful for very low-level criminals, and after that, they're just not that powerful.
Ian, I know you do a lot of AI security
research, so I wanted to throw to you first. You look at things like WormGPT 4 and KawaiiGPT. What do you
think? Impressive? Disappointing. Somewhere in between. What's your take on
these things? They make good headlines. It gets people excited.
You can see, hey, there is now a malicious use
of AI. Are they actually pushing the envelope or anything?
I don't really think they are necessarily. I think the
barrier to entry to grabbing any other LLM and actually
using it for potentially malicious purposes is actually pretty low.
Most of the models tend to be fairly easy to
prompt inject if you want to get them to do
something malicious. On top of that, I can actually just
take any normal model and as long as I'm not
explicitly telling it, hey, I'm writing a phishing email. Help
me write a phishing email. I just go and say,
I'm writing a marketing email. Help me write a marketing
email. Add some emphasis here, make it critical. The user
has to respond very, very quickly. This is urgent and
it'll happily do that. You can describe a perfect phishing
email and it'll write it as long as you don't
tell it it's phishing. So the intent is very, very
difficult to understand. So that's maybe one of the first
things I've kind of observed. No, it makes sense. Right.
And again, that goes back to something that we talked
about in the last episode, which is that research into
using poems to kind of jailbreak your models.
Right? Like if you phrase your malicious prompt not directly,
but in verse, you can make it do things it's
not supposed to do. Right. So like you said, maybe,
maybe these models aren't even really necessary. Right. On that
front, it got me thinking about something, which is: WormGPT 4 has like a subscription model, right? Like, it sells its services to other hackers. And part of me kind of wonders if, like, that's the whole play for them, like it's not even really about, you know, cybercrime, but they're making money by selling a thing to people. It's a branding play. Maybe hackers are the real, you know, target, in a sense. I don't know. Sridhar, any thoughts on that? Do you
feel like that could be a possibility there? Due respect to Ian, right, but he's one of the top researchers in the world in this space. Of course the bar is really high for him. But look at it from individuals who don't do this on a daily basis. For them, things like WormGPT or KawaiiGPT lower the bar very, very much, making accessible things that they don't know. Somebody like Ian, I would expect him to say, okay, write me a marketing campaign and embed links in there and do it the right way. But somebody who doesn't know this is going to say, give me a phishing email that does not have spelling mistakes, from Nigeria. Right? That itself is probably good enough for them. Researchers like Ian would probably go for 40, 50%; these guys would probably look for 8%, 10%. Right. And 10% of a billion people is plenty. Right. So I think there's a market. I mean, KawaiiGPT is free, and WormGPT is what, 200 bucks? Something like that. Easily accessible, right? So I mean, there is a market, and they're trying to democratize hacking a little bit so that it's easy to approach. So we as defenders have to do a better job of at least addressing these lower-barrier attacks. That makes a
lot of sense. It's like you said, there is a real market there. You know, not everybody is as skilled as the Ian Malloys of the world are,
but that's why, you know, Ian's here at IBM instead
of doing that. Claire, any thoughts on your end on
these things? When you look at them, you know, what
do they spark for you? Well, I'm also wondering, besides
just like creating phishing emails and such, these probably also
have other nefarious use cases, right? Like helping you do
malicious research of some kind, maybe generating. Photos that are,
cases too that individuals would, you know, try to access
these for and you know, the, the hacking ones of
creating a phishing email or such. I'm sure that's just
kind of like a nice to have for some of
these users. But I think that on the topic of
like are these worth it kind of thing, I mean,
I guess it depends how you use them. So if
a hacker is going to use them just to spell
check or something, maybe not. They could maybe use Microsoft
Word for that or like a free plugin. But you
know, if you really want to do a lot of
different things in terms of generating photos and then maybe
it's worth your criminal money. That's kind of my thought on it. AI improves your phishing and research overall. You just have to know how to use those
tools. That makes sense. And, and it reminds me that
I think it was WormGPT where they kind of said, look, part of our training set is, like, you know, malicious stuff specifically. Right? Like phishing emails, malicious code, whatever. The idea being that it would
be even better generating some of that stuff for you,
but that does still raise the question, right, about
like hallucinations and those kinds of things that we still
see happening in Vibe coding with legitimate LLMs. And I
just kind of wonder, I don't know, is that also
throttling our AI malware the same way it's kind of
throttling some of our clearnet development? I don't know.
Ian, any thoughts there? Do you think that the hallucinations
stop it from being super powerful with malware? What's your
take? It definitely could. I think some of the articles
kind of question whether or not people are getting the
performance boost that they were, but I think we've been
making the same kind of comments about just gen AI and vibe coding. And actually, if you are not a skilled developer, it can cause a net negative, per some studies. And so we're probably seeing a little bit of
that as well. Sridhar, I wanted to ask you because
you brought up that KawaiiGPT is different. They don't
charge for it. It's a kind of open source community
approach. It's freely available on GitHub according to Palo Alto.
It's really easy to set up. I was wondering what
your take was on this approach of an open source
malware community, basically. Personally, it's not something I've quite seen before, but I'm also not as seasoned in
the space. What are your thoughts on this thing? When
you make it open source like this, it is not
only assisting the bad actors, but can potentially be used
by the good actors to say, hey, let me go
test to see how vulnerable my system is to this, or be able to learn from that, to be able to use that as an input for my red teaming or my blue teaming, or a combination. That can result in doing a better job of
security coming in from a hacker mindset. Right. So to
me, I think this becomes a collaborative effort where the
bad guys are doing a great job of collaborating with
each other anyway. And when you keep it in the
open, at least we know some of the tactics and
procedures that can help us in terms of building better
software. I like that a lot. You know, kind of
a. There's a constructive use for this kind of thing,
right? If you're smart with it, you can make it
work for you. I really like that. So to round
out this segment, I just wanted to ask if we
have any kind of final takeaways for defenders, should the
existence of dark LLMs kind of change anything about what
we do or stay the course. And I'll just go
back around. Ian, any thoughts on your end? Should we
adjust in any way? I mean, everything that an LLM can do, a human can do effectively. So it's really about the pace and the scale. And that
really then comes down to, I think, what Sridhar was
saying, automating all of your pen testing, finding your vulnerabilities.
We're seeing lots of companies do this automatic fuzzing. Google,
Microsoft and others are starting to do this and starting
to build these automatic pen testing capabilities where if you
can capture everything through an AI agent, for example, using
some of these tools before you put it into production,
that's going to make it that much harder for them
to attack it once you actually do release it. Absolutely.
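The kind of pre-release fuzzing Ian describes can start very small: generate randomized inputs, run them against your code, and record anything that fails in an undocumented way. A minimal sketch in Python; `parse_header` is a hypothetical target invented for the example, not any real library's API:

```python
import random

def parse_header(data: bytes) -> dict:
    """Hypothetical stand-in for code you'd fuzz before release."""
    text = data.decode("utf-8")            # may raise on malformed bytes
    key, _, value = text.partition(":")
    if not key:
        raise ValueError("empty header name")
    return {key.strip(): value.strip()}

def fuzz(target, runs=10_000, max_len=64, seed=1234):
    """Throw random byte strings at `target`; collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(blob)
        except (UnicodeDecodeError, ValueError):
            pass                            # documented, expected failures
        except Exception as exc:            # anything else is a finding
            crashes.append((blob, exc))
    return crashes

print(f"unexpected crashes: {len(fuzz(parse_header))}")
```

Production fuzzers (libFuzzer, AFL++, the OSS-Fuzz ecosystem) mutate inputs based on code coverage rather than relying on blind randomness, but the contract is the same: every undocumented exception found before release is one fewer an attacker finds after it.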
Sridhar, anything on your end to add there? These will
become norms. Right. And throughout this discussion, I'm sure we'll have more topics like this. Even if you go back to the previous topic and this topic, we just have to become more agile, more dynamic in our security. Right. And so what that means is, okay, you know, FIDO or passkeys are no longer optional. Right. You have to assume that every email is not trusted. Right. You have to assume certain things and build security around that. Right. As defenders.
So that's the way I would look at it to
say you'll see more and more and more of these.
The challenge is for us as defenders to say how
do we raise the bar to make the security more
adaptive, more dynamic, more resilient? Gotcha. And Claire, any final
thoughts on your end? Yeah, I agree with Sridhar on
all of those topics. I think something that has come
up a lot with clients and in kind of CISO
roundtables lately is that we really need to be using
AI to our advantage as well. And threat actors are
going to level up with it. So we have to
as well. There's a lot of great options out there,
as Ian mentioned, to kind of do that. So it's
important that organizations, instead of just kind of forgetting about it a little bit, are thinking about how they can
better integrate it into their security as well. So I
think that the kind of takeaway here is that even
if these things maybe aren't as impressive as they sometimes
sound in the headlines, we can't rest on our laurels.
Right? Like, there's still plenty of work to be done
there to make sure their organizations are prepared for when
these threats do become as powerful as some of
the headlines make them sound. Let's move on to our
next story. Hackers lock users out of their Gmail accounts
by changing their ages. It's a pretty simple technique. First
they break into your account often by just stealing your
credentials. Then they change your age to 10 or something
else that makes you look like a child. And that
means they can add you to their family group where
they are a parent. And now they have total control over you. And that was kind of on purpose, right? Google was making it so the children couldn't circumvent the controls their parents set, and hackers have used this
against them. Now, Claire, I know you have some thoughts
on kind of how this might align with some new
regulatory efforts to spin up an even bigger problem. Walk
us through that. Yeah. So first off, I think this
is actually a really great use of hacking. It's hacking in its simplest form, right? So it's making something do something it's not intended to do. It's, you know, like a life hack kind of thing, but for technology. So that's something we always tell our clients: hacking in its simplest form
is making something or someone do something they're not intended
to do. And that's exactly what this is. But it's
very interesting because Google rolled out, I think, these parental accounts a couple years ago. And with Australia's social media restrictions, Instagram has recently rolled out teen accounts that have different parameters around them. So my thought is
that this could become a bigger threat than it has been. I mean, we've seen people's accounts be compromised at different levels because they clicked on something they shouldn't
have. But perhaps this is something that will scale more
broadly as people or as these platforms start to roll
out these teen accounts or child accounts, because instead of
just completely preventing users from being on their platform, they
want to have these users, you know, for life and
have them graduate into adult accounts. So I think it brings up a lot of potential for what a threat actor may do. And a lot of people would probably be willing to shell out a decent amount of money to get their Gmail accounts back, to get their Instagrams back. If you get
locked out of those, there's a sense that your life
is lost. Whereas other platforms, you could very simply just
make another account. But if you can't get into your
Gmail and you can't get to, perhaps, your bills or whatever is in there, that's very scary. And I could see people shelling out a lot. I think they referenced gift cards, so shelling out a lot in gift cards to get their accounts back. And who knows if they are ever reverted to whatever age they actually are, or if they just get to stay 10 forever. There are two things there that I
want to kind of pick up on. The first is
like you said, I can see, you know, a scenario
where this becomes almost like a method of wiping accounts. Right. Like if you think of the Australia social media bans, I think people under 16 just can't have an account flat out anymore. And again, I don't know if this would work, and I'm not trying to give anybody ideas, but I could see a situation where you get on there, you change someone's age, and you basically destroy their account, delete it, it's gone. Right. And then the other thing that you had mentioned too was that this is a kind
of situation where people might be willing to pay a
lot of money to get these things back because they're
very, very dear to us. And it got me thinking
about, I don't know, is that like a fundamental flaw in how we approach our security or our security hygiene, that we rely on such a free service
like this for so many things. Ian, let me throw
it over to you because you know much more about
this stuff than I do. Any thoughts on that or
this attack in general? What are you thinking here? A couple of interesting things. All roads seem to lead to Gmail, and once you get locked out of that, you lose access to everything else. But consider the unintended consequences of that: how difficult it is to get access back. The fact that Gmail is effectively an unsupported
service because it is free, so there is no tech
support you can call up and complain to. The different
forums I saw indicate that if you have it tied
to a YouTube account, you actually have to go through
YouTube to get access to it. So unless you have a channel there, that seemed to be the only kind of road. And trying to figure out, well, what
is the way to get access back when you actually
do put all of your eggs in this one basket
where you have no recourse. Yeah, no, I think it's
extremely scary. Sridhar, I want to bring you in here
and get your take on it and also specifically on
this being a situation where they're using well-intentioned controls to do some harm to us. Like Claire
said, it's like hacking in its purest form. What's your
take on this whole situation? I think there are a couple of dimensions over here, right. I mean, first, just picking up where Ian left off, right, this is not a new revelation. This exposure has been there for almost a year, actually. Right. And Google has still not fixed it. So that kind of goes to the point that Ian is making: it'll happen when we make it happen. Right. So for me, I think there are a couple
of things. If you're really relying on the single point
of failure across all the things, then make sure at
least it's done right. Like don't use passwords, use passkeys.
Right. Make sure that there's some level of a token
or that credentials cannot be stolen. So if you're putting
everything into one place, make sure it's a good, solid,
safe. Right. I'm not advocating for that, but if you are doing that, do it well. But the other dimension is, yeah, I mean
this is a single point of failure which has not just your email, but your financial information, your pointer to your bank statements, pretty much everything. Right. I'm not giving my IBM address to my financial institutions. Right. That's what I'm doing. So it's better to do a little bit of defense in depth and have a different account that provides access to your sensitive information, and use this one for the nonsensitive things. Right. It's like what we do in enterprises: we say go and discover data, classify the data, and protect your sensitive data differently than non-sensitive data. If you're using it, use it correctly. But if you opt not to use it that way, then think about how you safeguard your most sensitive information with the right set of tools versus using one for everything. Right. I
think it's an extremely good point bringing up this concept
of defense in depth and kind of being able to
apply that to so much of what you do. Ian,
I think I might have cut you off. You want
to jump in there? No, I was going to ask
a question. I mean, how many email accounts do I
actually need to make myself more resilient to these types
of threats? Then you have another problem. I have a
passkey, it's a yubikey, I can show you right now.
And unfortunately it died on me this morning, so I
can't log in remotely because my UV key died. And
so I was thinking about this. What would I actually
do? I now have to have one of the recovery
accounts, which might be my wife. And now that's another
potential weak link. I'm using a passkey as an example of the defense. But one should use more than one passkey. For example, something like a hardware device that you have, or your Touch ID, right.
As well as have backup mechanisms, where you're not just stuck saying this is a child account and if something happens you cannot unlock it. But having, like you said, your
partner or your extended family be able to open that,
those are the techniques that one can use. Right. So
a backup YubiKey at home, so I'll be able to get back in, no problem. But I was just thinking about the resiliency problem that we actually have here, and then
kind of leaning back on this notion of the unintended
consequences of these different security procedures. So when you actually
give someone else access to unlock your account, can that
be weaponized? And I think this is what Claire is
saying. Everything is at some point going to be weaponized
and there's going to be some unintended consequence that someone
didn't think about when they designed the system. And that's kind of the mentality we have to get into. Sridhar, you mentioned passkeys. If
you're somebody who's not technically savvy, you may not have
any of these things implemented. And I'm not sure about
the customer support that Gmail has for free users. But
I've recently had an issue with my Meta account where
I'm trying to figure something out. There's some kind of
like, weird block on my account and I'm trying to
get it fixed because I'm trying to sell things on
Facebook Marketplace, and I can't contact Meta unless I
pay to contact Meta. So there's another issue there where
it's like, if you don't want to pay for a subscription to one of these services (and I don't know if Google has the same policy), but if it were on a platform like Meta, this would be a huge issue and all these people would also have an issue. And think about all the payment information that is also stored in Meta, and where else it's connected, because I see Facebook a lot more frequently as a login option. So it's kind of like, we put a
lot of trust in these platforms, but if you
need help, you might not be able to get it,
especially if you don't have any of those, you know,
extra access options enabled. Yeah, I think that's an extremely
good point. Right. And again it comes back to this
concept of so much like, of our load bearing Internet
infrastructure is like free services, right. Which like turns out
to be a double edged sword in this way because
like you said you have to pay to get your
account back. That's. Nobody wants to be faced with that.
Right. And Ian, I also wanted to just spotlight before
we move on, this notion that you mentioned
about, you know, everything can kind of be weaponized eventually.
Right. And even the protections you put in place can, under the right circumstances, become weapons in their own right. That's what we're kind of looking at
here. Right? Like that is exactly that situation. And so
you're kind of foreshadowing something, this resilience conversation that I
hope to get into in our final segment. But on
that note, we do have to move on to our
next segment. When ransomware reveals a bigger threat. Researchers at
Positive Technologies shared a curious case where a ransomware infection ended up alerting its victims to the existence of a much quieter and separate APT, or Advanced Persistent Threat, lurking
in their network. While investigating a victim's compromised system, the
researchers found evidence of Quiet Crabs, which is an espionage
oriented APT known to quietly dwell in systems for hundreds
of days. How did they find them? It's because a
much noisier attack believed to be the work of ransomware
gang Thor was also underway. Whereas Quiet Crabs uses unique
tools and tactics, including speedy exploitation of new vulnerabilities, Thor
used some commonly known tools which meant they triggered more
kinds of alarms and called attention to some of this
activity. And Quiet Crabs got caught in the kind of
crossfire. Sridhar, I'm wondering what this story tells us about APTs and particularly how hard they are to detect. I
mean this is a situation where we didn't even know
there was one until somebody else showed up. How confident
can we really be that something's not lurking in our systems right now? And the "we," I mean, is just general users, right? Go ahead. It is hard,
right? I mean, APTs by design are silent, not noisy like some of these things that you see. They're supposed to be advanced and persistent, and that means they're just lurking with patience. Right. That's part of the reason why we do the threat intelligence report every year. Right. And every year I'm surprised to see that the number of days to identify a threat is like 300, almost a year. Right. And that tells me that anybody who is making a lot of noise is going for a quick buck. Right. But the real money is in this advanced persistent threat, where they will play the long game: not just circumvent the controls on the endpoint, but reach out to command and control, silently move laterally, until you're confident about doing something. Right. So it's lucky that
we were able to find this Quiet Crabs through some noisy attack, but not many people are as lucky, until you get to a certain situation where it is probably a little bit too late. So yeah, I think APTs are dangerous, and just because you find something, that doesn't mean you need to stop looking. Even after you find out that you've probably remediated it, it's good to continue to look beyond D-day to see if there's any activity with different behavior than what you expected.
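A hedged sketch of what "looking beyond D-day" might mean in practice: keep a per-host baseline of normal activity and flag sharp deviations, since a dwelling APT often shows up as behavior that is merely unusual rather than overtly malicious. The data shapes and the z-score threshold here are illustrative assumptions, not any product's method:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Flag hosts whose event count today deviates sharply from baseline.

    baseline: dict of host -> list of historical daily event counts
    today:    dict of host -> today's event count
    Returns a list of (host, z_score) pairs worth a closer look.
    """
    flagged = []
    for host, history in baseline.items():
        if len(history) < 2:
            continue                      # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9                  # avoid divide-by-zero on flat history
        z = (today.get(host, 0) - mu) / sigma
        if abs(z) > z_threshold:
            flagged.append((host, round(z, 1)))
    return flagged

# Illustrative data: one quiet host suddenly beaconing far more than usual.
baseline = {"web01": [100, 110, 95, 105], "db02": [10, 12, 11, 9]}
today = {"web01": 102, "db02": 90}
print(flag_anomalies(baseline, today))
```

Real behavioral analytics look at far richer signals (logon patterns, process trees, DNS), but the principle is the same: the baseline, not a signature, is what exposes a patient intruder.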
Yeah, I like that point. Just because you find something
doesn't mean it's time to stop looking. Right. There's more
digging to do. Ian, any thoughts on your end here?
Yeah, it's kind of like that proverbial iceberg. But I feel like this wasn't actually the first time we've seen this, where there was one very, very noisy attack, someone maybe tripped or made a mistake, and they were able to discover that there was someone else who had been in the network multiple times. Look, not all of us are going to have a silver lining in a ransomware attack, but sometimes you do have that silver lining. Right. It's
quite funny. Claire, any thoughts on your end here? I
really love the name Quiet Crabs. It sounds like a beachy indie band of some kind. Really great name, and it really works for, you know, their stealth in the network. So I really like that. And they probably are moving laterally like a crab. I think APTs
are also just very interesting in general. A lot of
people are, like you mentioned, I think Sridhar, you said, looking for the quick buck, and curious or concerned about ransom. But I think people should be more concerned about APTs, because they are in your network and they are waiting for the right moment to strike. And if they're just hanging out there, they can strike on a weekend or a holiday. They're ready, waiting for the moment, and not just to encrypt systems to make money quickly. They're exfiltrating, they're working on their end while they wait. So they're ready, and you're not ready. So I think people should be a
little more concerned about them. And people don't always want
to admit that, oh, we weren't detecting anything within our
network, that we've just been sitting here for 300 days and haven't recognized it. So I think a lot of people don't want to admit that they are concerned about APTs either, or to say, like, we would
detect that very quickly and perhaps they wouldn't. And it's
evident that they wouldn't because lots of organizations are breached
and attacked each year by these groups. Yes. Like Sridhar
said, right. In IBM's own research, the Cost of a Data Breach report, you know, routinely it's like it takes 300-something days to detect and remediate a breach. Right. So you're not alone. Maybe that's one of our messages here for folks: don't feel ashamed of that. Like, okay, it might have taken some time, but you are not alone.
You're the norm with that kind of thing right now.
We can't always count on threat actors to
be noisy and we can't always count on the noisy
ones to kind of give away the other ones. So
I'm just wondering if we have any thoughts, you know,
looking at this. Does this change anything about how you
think about how we investigate threats? I mean, Sridhar, you
already said, you know, don't stop looking just because you
found something. I think that's a really wonderful way to
put it, but I just wanted to open it up
before we move on. Any last thoughts here? I'll start
with you, Ian. Any last advice you'd give organizations when
it comes to APTs? Well, definitely you can be proactive. You can constantly look at your logs, you can run different analytics. I think with the additional automation that
we actually have now through agents, you could take any
threat and where you might not be able to have
a human actually spend the time to go through and
investigate it, you can just let an AI agent go
and run through your logs, do these different correlations, pull down different threat intelligence, and whatever it might be. Maybe at the time this was not known, but a new piece of information came out, and as you're constantly churning away at it, you might realize that, hey, this was actually the connection we needed to realize that no, this is not just background noise, this was not a one-off, this actually was potentially an incident. The last thing I'll say is I always heard there are two types of companies: those who know they've been compromised and those who don't know they've been compromised yet. It's a very good way to put it. And
I also like, yeah, this idea of this is a
really good use case for AI agents. Like you said,
you might not be able to put a person on
investigating all this stuff, but you got agents, that's a
good way to put them to use. Sridhar, any thoughts
on your end? I'm a little bit worried. Right, like
you know, 2025, the number doesn't change. Right. Still almost a year before we find them. I think we need to start thinking about, you know, how we go after these advanced persistent threats more proactively. Right. Whether it's threat hunting, whether it's doing behavioral analytics, you gotta change the game a little bit too. And to Ian's point: fine, maybe it is a human bottleneck, so let's use digital workers as a way to get through that.
I think we need to start embracing a little bit
more proactiveness. And hopefully we'll bring down the number from
300 to less than that. Absolutely. And Claire, any last
advice for organizations when it comes to APTs? Back to kind of what I mentioned earlier: prudently overreact. If you notice something that's a little strange, don't just consider it a small anomaly; it could be something bigger.
So you know, as Ian mentioned, agents will really help
you look into some of those lower level alerts. It's
important to kind of be able to cover your bases
and at the same time rely on agents and not
burn out your security staff either. If I remember, wasn't
that how they found the RSA breach? One person found
something, everyone said they were overreacting, but they just kept
sticking at it until they finally uncovered the entire threat.
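Claire's "prudently overreact" idea can be encoded as a deliberately aggressive triage rule: never let a low-severity alert be dismissed indefinitely if it keeps recurring from the same host. This is a toy illustration with made-up alert fields, not any vendor's logic:

```python
from collections import Counter

def triage(alerts, repeat_threshold=3):
    """'Prudently overreact': escalate low-severity alerts that keep
    recurring from the same host instead of dismissing them one by one.

    alerts: list of (host, severity) tuples, severity in {"low", "high"}
    Returns the set of hosts to escalate for human investigation.
    """
    escalate = set()
    low_counts = Counter()
    for host, severity in alerts:
        if severity == "high":
            escalate.add(host)            # high severity always escalates
        else:
            low_counts[host] += 1         # tally the recurring low-level noise
            if low_counts[host] >= repeat_threshold:
                escalate.add(host)        # persistence itself is the signal
    return escalate

# One host emits nothing but "minor" alerts, repeatedly. Escalate it anyway.
alerts = [("hr-laptop", "low")] * 4 + [("web01", "low"), ("db02", "high")]
print(sorted(triage(alerts)))
```

An AI agent sitting behind a rule like this can then do the expensive part, pulling logs and threat intelligence for each escalated host, so the "overreaction" doesn't burn out the human team.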
So that's a perfect example of why it's worth doing that digging. Let's
move on then to our final segment because we are
almost out of time here. Airbus flights grounded by intense
solar radiation. Now, I like to end the show with
something kind of ridiculous or wacky and I think this
is a really good one. In that vein: 6,000
jets needed to revert their systems to an earlier software
version after it was discovered that bursts of intense solar
radiation were corrupting data in the newest software and disrupting
critical flight controls, and even leading to some injuries when we saw planes suddenly losing altitude and whatnot. Now none of
us here are physicists, so I'm not asking anybody to
talk about how solar radiation works and what you can
do about it. Instead, this
whole thing made me think about an aspect of cyber
security and system resilience that I feel like gets less
attention. At least where I hang out it does, which
is these threats to our systems from things other than
human hackers. Right. The natural world. Natural disasters can shut
us down and not even disasters, solar radiation, you know.
So I'm just wondering, you know, we'll do a quick
roundtable here. Do you think our security approaches today address
this issue enough and if not, how can we make
them better? Claire, I'll start with you. Any thoughts there?
So I think the impacts of this are still kind
of going on a little bit. I've heard of so
many flights being delayed still, and no one likes an airline disruption. That is like the one industry where
continuity being disrupted is the absolute worst. I think a
lot of organizations are actually thinking about the meeting point of acts of God, or weather and nature, and cybersecurity or cyber resilience and business resilience. We have a lot
of clients come through the range that will say, we want to frame our scenario around a snowstorm, and we're an organization in Texas, because that happens, right? Your continuity will be disrupted. And if there's a cyber attack and you need to get to people and tell them that their power is still going to work, or their water utilities or whatever are still going to work, you have to practice that. So I do see a lot of organizations
thinking about, you know, this is during wildfire season or
hurricane season and thinking about what would happen if our
data centers went out on one side of the country when we operate on the other side, because of natural disasters. So I think a lot of organizations think about, okay, how are we still going to continue, whether it's a natural continuity issue or a cybersecurity one. Because as mentioned earlier, APTs will just kind of sit
in your network and wait until you are at a
bad place. So if they are, you know, following the
weather of wherever you are and there is a hurricane
and they might choose that moment. It's not impossible. So I see a lot of organizations kind of working on this; more should, though. Yeah, and I'm glad to hear that you're seeing that. And I think
like the Texas snowstorm is such a good example of
that. We've all seen the sort of news stories
about Texas grinding to a halt when certain amounts of
ice hit them. So that's a good thing. But you
also bring up, before I continue the roundtable, just a
real quick question: the idea of hackers kind of taking advantage of, you know, sort
of natural disasters and the like to wreak some more
havoc. And I have a question here actually from one
of our producers about would it be possible for a
hacker to duplicate this kind of solar radiation damage using,
I don't know, UV or something? Ian, Sridhar, any thoughts
on that? Is this a duplicable thing you could do?
You know, I would take a step back. Right. The
solar radiation is just a symptom of a bigger problem,
right? The problem is there are going to be situations that will impact system availability, resilience, and operational resiliency, for lack of a better term, right? That juxtaposes with cybersecurity. Now, that could be temperature. Yes, of course, somebody can crank up the EMF. We've seen that in a movie, right? Ocean's Eleven, I think. So I kind
of look at the solar radiation as an example of these external calamities that can impact the operational aspect of a system; that's number one. Number two, somebody can take advantage of it and misuse it for a different thing. Right. But the core thing is, how do I design the system to be operationally resilient so that I can minimize the cybersecurity impact of that? Makes perfect sense. And it kind of sounds like
maybe this is something that would more happen in a
movie in terms of hackers exploiting UV to duplicate the
damage, you know. But it's not even that, right? Let's say, for example, in the same plane example, you have triple redundancy. It's the same hardware, it's the same software, running three times. If it's going to fail, it's going to fail all three. Right. Instead, I think we need to think about a mechanism by which we introduce diversity, where, fine, you may have hardware version A with software version 1 and hardware version B with software version 2, so that you're thinking about diversity. And you don't even need to create solar radiation; it could be as simple as exploiting the same software three different times. Right.
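Sridhar's diversity argument is essentially N-version programming: run independently built implementations of the same function and take a majority vote, so a fault or exploit that corrupts one version cannot silently corrupt the whole redundant set. A toy sketch, with three stand-in "versions" invented for illustration:

```python
from collections import Counter

def vote(implementations, reading):
    """Majority-vote across diverse implementations of the same function.

    A fault (a radiation-induced bit flip, or an exploit of one code base)
    that corrupts a single version is outvoted by the other two. If all
    replicas shared one code base, one bug could corrupt all of them alike.
    """
    results = [impl(reading) for impl in implementations]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority; fail safe and revert to manual control")
    return value

# Three independently written altitude filters; version C has a fault.
version_a = lambda x: round(x)
version_b = lambda x: int(x + 0.5)
version_c = lambda x: round(x) - 512   # corrupted: say, a flipped high bit

print(vote([version_a, version_b, version_c], 10020.7))
```

Real avionics diversity goes further, with different compilers, processors, and even development teams per channel, but the voting principle is the same.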
Yeah, I like that a lot. You know, putting the
focus really on the concrete activity you can do instead
of some of this sort of wilder stuff. Ian, we'll
end here with you. Any thoughts on your end in
terms of how we approach this stuff? If we're doing
a good enough job, if we could do better. What's
your take? Yeah, I think just kind of adding to what
was already said. I mean, there's always going to be
a weak link. There's going to be dependencies to systems
that we're not expecting. You're going to have cascading failures.
I remember the 2003 Northeast blackout happened because one power plant went out. I remember reading, as a result of some
other studies, there's a single chain link fence in Kansas
and if you cut the link there, it causes massive
outages. If you think about software, and this maybe was
kind of going back to the React2Shell discussion, when you have a single dependency, it's the proverbial one person in Kansas maintaining a project that everything depends on. We just
don't really know. I think that's really the problem: we don't know where the weak links are, typically, until they do fall over. And
then if you think about having backups and disaster recovery,
they're not always tested. And a lot of times when
you actually go and you try to test your backups,
typically with a ransomware incident, you find out that they've
not been working or their tapes are corrupted or someone
overwrote them. And so really it's thinking through all
these different things that can potentially happen. Analyze it, determine
where the weak links are, where you don't have the
protection and the redundancy that you think you do, and
then kind of making a plan for that. Awesome. That's
a great way to end it. Thank you for being
here, folks. It's all the time we have for today.
Thank you, Sridhar, Claire and Ian. Thank you to the
viewers and the listeners. Shout out to YouTube commenter at
the bar who shared some thoughts on our last video
about how devs could protect themselves and their brands during
software supply chain attacks. I love hearing from people
about this kind of stuff. So please, please weigh in
if you have thoughts. As always, subscribe to Security Intelligence
wherever podcasts are found. Stay safe out there and test
your backups. I guess.