AI Browsers, Ghost Networks, Malware
Key Points
- The podcast opens with a warning that shutting down one cyber threat often leads to the emergence of new ones, exemplified by the rise of YouTube‑related malware targeting children.
- Hosts discuss several recent security incidents, including the YouTube “ghost network,” the Glassworm malware campaign, widespread neglect of mobile security in enterprises, and the massive AWS outage of 2025.
- A new AI‑integrated web browser from OpenAI (Atlas) is highlighted as especially risky because prompt‑injection attacks can embed malicious code in web content, images, or URLs to hijack the browser.
- Panelists agree that, despite the allure of AI assistants, the technology is still premature for secure deployment, emphasizing the need for thorough credential verification and proactive security design before widespread adoption.
Sections
- Whack-a-Mole Threat Landscape - The podcast intro likens cybersecurity to a whack‑a‑mole game, previewing scares such as YouTube’s ghost network, Glassworm malware, neglected mobile security, a major AWS outage, and the risks of AI‑integrated browsers.
- Caution Over AI Browser Adoption - Participants warn that AI‑driven browsers are still immature and risky for sensitive or enterprise use, urging security checks before adoption.
- Fundamentals of Enterprise Security - The speaker stresses that basic protective controls, monitoring, transparency, and auditability are essential for organizations, highlighting current gaps and contrasting human reasoning with AI's limited capabilities.
- AI‑Driven Social Engineering via Video Prompts - The speakers examine how relentless AI bots and our reliance on short YouTube tutorials blur the line between trusted information and disinformation, turning social engineering into “prompt engineering” that exploits platform trust.
- Defenders Grapple With Escalating Threats - The speakers highlight how today’s security defenders must depend on advanced technology to contend with ever‑more sophisticated, seemingly legitimate attacks that even ordinary users can unintentionally trigger.
- Platform Controls for Media Trust - The speakers outline technical measures—provenance tagging, malicious instruction flagging, AI moderation, and watermark verification—that platforms can implement to ensure trustworthy content and support zero‑trust browsing alongside user education.
- Glassworm: Blockchain C2 & Unicode Obfuscation - Researchers highlight the glassworm malware's novel use of the Solana blockchain and Google Calendar for command‑and‑control and its concealment of malicious code via invisible Unicode variation selectors, marking a shift toward resilient, post‑infrastructure attacks.
- Broken Detection and Response Playbook - The speaker critiques how their current playbook fails to detect or mitigate sophisticated blockchain‑based attacks, leaving core servers unreachable and defenses ineffective, while admiring the attackers’ ingenuity.
- BYOD Security and Click Risks - The speakers discuss how personal device proliferation and users’ propensity to click links create major security gaps, emphasizing the need for tools like IBM’s Mass316 to intercept threats while acknowledging the difficulty of restricting user behavior.
- Mobile Device Policy Challenges - The speaker emphasizes that ubiquitous mobile devices are viewed differently from traditional assets, so organizations need clear, enforceable acceptable‑use rules to manage usage and mitigate shadow‑IT risks.
- Automation Failures Highlight Resilience Needs - Security professionals reflect on a podcast‑app outage that cascaded through highly automated, cloud‑dependent services, emphasizing the urgent need to embed robust fallback mechanisms and resiliency into system design.
- Interdependency Risks in Cloud Resilience - The discussion highlights how the AWS outage exposed hidden cross‑region dependencies—like IAM and CI/CD services centralized in US‑East—demonstrating that true resiliency requires recognizing and mitigating such interconnected points of failure.
Full Transcript
# AI Browsers, Ghost Networks, Malware **Source:** [https://www.youtube.com/watch?v=zc8qBG99GWk](https://www.youtube.com/watch?v=zc8qBG99GWk) **Duration:** 00:43:58 ## Summary - The podcast opens with a warning that shutting down one cyber threat often leads to the emergence of new ones, exemplified by the rise of YouTube‑related malware targeting children. - Hosts discuss several recent security incidents, including the YouTube “ghost network,” the Glassworm malware campaign, widespread neglect of mobile security in enterprises, and the massive AWS outage of 2025. - A new AI‑integrated web browser from OpenAI (Atlas) is highlighted as especially risky because prompt‑injection attacks can embed malicious code in web content, images, or URLs to hijack the browser. - Panelists agree that, despite the allure of AI assistants, the technology is still premature for secure deployment, emphasizing the need for thorough credential verification and proactive security design before widespread adoption. ## Sections - [00:00:00](https://www.youtube.com/watch?v=zc8qBG99GWk&t=0s) **Whack-a-Mole Threat Landscape** - The podcast intro likens cybersecurity to a whack‑a‑mole game, previewing scares such as YouTube’s ghost network, Glassworm malware, neglected mobile security, a major AWS outage, and the risks of AI‑integrated browsers. - [00:03:16](https://www.youtube.com/watch?v=zc8qBG99GWk&t=196s) **Caution Over AI Browser Adoption** - Participants warn that AI‑driven browsers are still immature and risky for sensitive or enterprise use, urging security checks before adoption. - [00:09:02](https://www.youtube.com/watch?v=zc8qBG99GWk&t=542s) **Fundamentals of Enterprise Security** - The speaker stresses that basic protective controls, monitoring, transparency, and auditability are essential for organizations, highlighting current gaps and contrasting human reasoning with AI's limited capabilities. - [00:12:29](https://www.youtube.com/watch?v=zc8qBG99GWk&t=749s) **AI‑Driven Social Engineering via Video Prompts** - The speakers examine how relentless AI bots and our reliance on short YouTube tutorials blur the line between trusted information and disinformation, turning social engineering into “prompt engineering” that exploits platform trust. - [00:15:51](https://www.youtube.com/watch?v=zc8qBG99GWk&t=951s) **Defenders Grapple With Escalating Threats** - The speakers highlight how today’s security defenders must depend on advanced technology to contend with ever‑more sophisticated, seemingly legitimate attacks that even ordinary users can unintentionally trigger. - [00:18:59](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1139s) **Platform Controls for Media Trust** - The speakers outline technical measures—provenance tagging, malicious instruction flagging, AI moderation, and watermark verification—that platforms can implement to ensure trustworthy content and support zero‑trust browsing alongside user education. - [00:22:10](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1330s) **Glassworm: Blockchain C2 & Unicode Obfuscation** - Researchers highlight the glassworm malware's novel use of the Solana blockchain and Google Calendar for command‑and‑control and its concealment of malicious code via invisible Unicode variation selectors, marking a shift toward resilient, post‑infrastructure attacks. - [00:25:16](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1516s) **Broken Detection and Response Playbook** - The speaker critiques how their current playbook fails to detect or mitigate sophisticated blockchain‑based attacks, leaving core servers unreachable and defenses ineffective, while admiring the attackers’ ingenuity. - [00:29:36](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1776s) **BYOD Security and Click Risks** - The speakers discuss how personal device proliferation and users’ propensity to click links create major security gaps, emphasizing the need for tools like IBM’s Mass316 to intercept threats while acknowledging the difficulty of restricting user behavior. - [00:32:39](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1959s) **Mobile Device Policy Challenges** - The speaker emphasizes that ubiquitous mobile devices are viewed differently from traditional assets, so organizations need clear, enforceable acceptable‑use rules to manage usage and mitigate shadow‑IT risks. - [00:37:51](https://www.youtube.com/watch?v=zc8qBG99GWk&t=2271s) **Automation Failures Highlight Resilience Needs** - Security professionals reflect on a podcast‑app outage that cascaded through highly automated, cloud‑dependent services, emphasizing the urgent need to embed robust fallback mechanisms and resiliency into system design. - [00:41:11](https://www.youtube.com/watch?v=zc8qBG99GWk&t=2471s) **Interdependency Risks in Cloud Resilience** - The discussion highlights how the AWS outage exposed hidden cross‑region dependencies—like IAM and CI/CD services centralized in US‑East—demonstrating that true resiliency requires recognizing and mitigating such interconnected points of failure. ## Full Transcript
Literally, like whack a mole because you shut down one,
a new thing is going to come up. For example,
YouTube. Children start looking at YouTube from like the time
they're born. This is like having a malware in for
previous generation in tv. This is not something which is
an email. You work on it, your text. This is,
this is something that you're used to. It's an extremely
trusted one. All that and more on Security Intelligence. Hello
and happy Halloween and welcome to Security Intelligence, IBM's weekly
cybersecurity podcast, where we break down the most terrifying stories
in the field with the help of our panel of
experts. I'm your host, Matt Kaczynski, and this year I'm
going as the scariest thing I can think of, a
guy who still uses single factor authentication. Joining me today,
we have suja Viswasen, IBM VP of Security Products, J.R.
of the spooky season, we've got five frightful stories to
cover. And stay tuned after the conversation for a sneak
peek of a special episode coming out later this week.
But today we are covering YouTube's ghost network, the Glassworm
malware, how enterprises neglect mobile security, the great aws outage
of 2025. But first, is there such thing as a
safe AI browser? Now, I'm sure everybody knows that last
week OpenAI released Atlas, a web browser with ChatGPT and
Agentic capabilities built right in, which sounds really cool, but
experts are already warning that it like Perplexity's Comet before
it is vulnerable to prompt injections. Right? Attackers can hide
malicious code in web content, images, even URLs, to hijack
these browsers and influence their behavior and make them do
all kinds of nasty stuff. So to get us started
today, I want to ask everybody real quick, kind of
around the table, as a security professional, would you use
one of these things? And I'll start with you, Dave.
How are you feeling about these AI browsers? My goodness,
no, I would not use them. Not yet. I mean,
look, the promise is there. Just we're a little early.
The rush to get these things to market has not
allowed them to be secured. I think that's, you know,
my biggest worry, right, is, is, you know, I want
an assistant so bad that I don't bother to check
their credentials, you know, and that's kind of the parallel
for me. So, you know, that's what we've seen with
this particular browser. That's probably what we'll see with the
next one. You know, I mean the heartening thing for
me, I've been in security for 20 plus years, don't
ask me how long that suffice it to know. But
at least we're thinking about it now. Like if we
go back through the other big hills of IT advancements
and JR's been with me through a couple of these,
so at least security in AI is being thought of
shortly thereafter. Like we've had a big advancement in AI
and people go, wait a minute, I don't know if
I can trust this just yet. This is good, right?
It comes out and we immediately are thinking like, wait
a second, hold on, can we prompt and drink this?
Can I trust this? Wait, wait, wait, wait. What if
it starts acting unethically on my behalf and it's right
like these, this. So, so no, I think, you know,
for me, Matt, this is a little bit early. Absolutely.
And I like the idea of, you know, it's like
an agent, it's an assistant that you want so badly
that you're not checking the credentials. Right. Because I can
see that especially for a lot of everyday users who
might not know much about how this stuff works. They
just see shiny new browser, they download it, they're excited,
they're not taking precautions. Jr, how about you? Would you
use one of these things? I tend to agree here
with Dave. I think I might use it for some
casual browsing, you know, maybe summarizing a few articles or
question answering where the risk is extremely low. But these
browsers are not ready for enterprise use. They're not ready
for high stakes, especially when you have sensitive data. Right.
And so, and absolutely, if you have any workflows that
include things like confidential research, financial information, or even, you
know, anything that requires your credentials. I think the level
of maturity for these offerings is very low at this
point in time. And they're very mature, I would say.
And I think we have to wait until we start
fitting security into these browsers and we get some guarantees
before we can go forward to start using them for
any mission critical use. Absolutely. And Suja, how about you?
What are you feeling about these browsers? I'm still gonna
be curious and I would, because I wanna know what
is it? Right. Aren't you? I'd be curious. So I
would actually use it in a machine where I don't
have any of my bank accounts ever stored in it.
Okay. That's the most important thing than anything else. No
financial data in that laptop or phone, whichever I'M using.
That's what I would use. I will have a dumb
phone to use it. That makes a lot of sense.
And that brings me to my next kind of question
for you folks, right? Is we all kind of agree,
it seems like that these things are just, they're not
ready for prime time yet. They're interesting. They could do
some cool stuff, but they're not ready for prime time.
What's it going to take to get them there? And
I'll go back around the circle in reverse order this
time. Sujo, what do you think? We need to get
these things to a place where they might be good
for enterprise use. Do you think we can get there?
I think they need to be starting about the security
aspect of it, shifting left and thinking about it first.
Right? That is what is going to get them. Because
governance, the regulations are always going to catch up. We
saw it with data, right, GDPR and everything else after
we gave our data for free is when those regulations
came in. Governments and regulations are going to be later
no matter how early they think they are. So the
biggest thing, if somebody wants to make money, a business
that wants to start enterprise using it, they need to
be shifting left, thinking about security and have those testing
done. Because look, the rules are there for the good
people. The bad guys don't have any rules. That's a
really good point. We set up those rules, right? And
the bad guys, they're not going to listen anyway, right?
Junior, how about you? What are you feeling like? I
feel like Suja kind of agreed with you there. You
were saying you want to see security earlier in the
process. Do you have anything to add there? What do
you think it's going to take? I think even the,
even OpenAI with Atlas and even some of the others
like Publicity with Comet and so on, they've been very
clear that while they've done security engineering, that there are
many areas where things like prompt injection, which remains a
frontier, which remains an unsolved security problem for them, and
they need to be able to address some of these
before they move forward. So fundamentally, if you look at
where the problems are, I think you need to do
things like detection of prompt injection, prompt sanitization, if you
will, you need to be able to isolate content and
context specifically because many times what the attackers are doing
is they are mixing instructions along with the data that
is provided, the query that is provided, and the models
are unable to differentiate between the two. And so very
often attackers are able to sneak in specific instructions and
able to get the models to misbehave. So I think
there is a lot of promising research that we've done
in this space on how to sandbox some of these
executions, on how to track provenance and how to build
policies around LLMs and firewalls. Some of these capabilities will
have to make their way in. Then we'll have to
see what sticks and what doesn't. And as that level
of maturity goes up, our confidence in these products is
going to go up and the risk is going to
go down. Absolutely. And that's kind of the way that
all this technology moves, right? The new thing comes out
and we kind of have to do a little bit
of catch up. We experiment with what works, we put
it in and then we get to a place where
we can trust these things. Dave, how about you? Anything
to add there? What you want to see these browsers
do to become safer? I mean, as the first person
to say no, I wouldn't use them. I mean, and
the reason is, you know, I mean they're saying like,
well you should be really careful, Dave, like human in
the loop. Yeah, that's, that's a safeguard but you know,
not the best one, right. So you know, I like,
I like to click on things as much as the
next guy, right. But you know, so, so, but in
seriousness, you know, you kind of got to go back
to fundamentals, right? So visibility, right. In the world of
AI, we like to say things like transpar and observability,
but fancy words for being able to watch stuff, monitoring
like JR just talked about, right. I want some ability
to have protected controls, right. Maybe that's around identity, maybe
that's clear permissioning, like things like that inside of browsers.
Like I'm not talking rocket science or high, high levels
of data science. I'm talking about basic security, you know,
protective controls, right. And then I want to be able
to like once identify it, I have some ability to
respond, right. You know, whatever that may be, right. So
you know, it's just basics, right? And, and today those
basics aren't there, right. And it's, it's just left on
my shoulders to be very secure. And that's, it's going
to be out, it's out of my hands today. So
like that's a silly thing. So like, you know, I
think, I think, you know, for an enterprise to allow
these things to happen, right, it's going to require the
basics to be there. And like in a, like, like
the fundamentals have to be there. Like how are you
watching it? How transparent is it? How auditable is it
when something does happen, what can I do about it?
Right. What am I doing to protect those things from
potentially happening? Right. I mean, we've built entire industries around
doing just those things. So I know that's what I've
done. That's what that Suja was wondering. That's what I've
done for 20 years. So it's trying to, trying to,
trying to fight those things. So see, absolutely. The reasoning
capacity, the human being is going to reason more, right?
As opposed to. In today's AI, the reasoning capability is
not there like a human because when you tell it
to forget and it forgets, then we have a, what
do you call it, a single cell organism. We are
nowhere close to AGA at that point. So we need
to be thinking about it from. Because it's a combination
of rules as well as reasoning. And the reasoning capability
is not there yet. So yeah, I love that. That's
a really optimistic take on this challenge that we're facing
that we're eventually going to get there because we have
folks like you working on those security issues. So let's
end it on that optimistic note and move on to
something slightly. Well, actually just as scary, I would say
Research has uncovered what it calls the YouTube Ghost Network.
This is a series of fake accounts hosting a combined
3,000 videos, all designed to trick people into downloading malware.
What's interesting here is that the videos pretend to give
people instructions on, you know, downloading cracked software or hacking
a game. You know, you, you, you turn to it,
to a tutorial. And of course, if you follow the
instructions, what it's actually telling you to do is download
and execute some kind of malware on your device and
then you end up compromised. Now, some of the things
that are noteworthy here, at least from where I'm looking,
is that in addition to these accounts that post videos,
the network includes a bunch of fake users who engage
with the videos to make them seem more legit. And
because there are so many accounts involved, if some get
taken down, more can just take its place. So it's
a very resilient network. And really interesting to me is
that it was before this. So I want to start
with kind of talking about that tripling of the pace
there. What do you think is motivating that what's allowing
them to go so much faster right now? And I
will throw to you, Sujo, what do you think is
speeding things up here? I think just like everything else,
like automation and AI is helping us to create these
things much faster. Just like what we saw with phishing,
creating an email took 14 days. Now it takes minutes.
So you are able to go do this with bots
much faster than a human would. Because I think we
have said the humans need to at least go eat,
go to the restroom, whatever it is for AI, they
can run 247 without sleep doing these things. I think
that's what is causing it very, very difficult. And as
us, we stopped reading, looking for instructions, we just now
look at a video, two minute video to tell me
what to do for everything. We need a hack. So
that also makes it very easy and tempting for us
to go click on those links. Yeah, I know. I
myself am extremely guilty of I need instructions on something.
I just. Whatever the first video on YouTube is, I
watch it. I don't know who these people are. I
shouldn't just be following this blindly. And maybe that makes
me ill qualified to host a cybersecurity show. But anyway,
jr, I want to throw to you real quick because
we were just talking before about the kind of evolution
of social engineering, right. And you were talking about how
social engineering is kind of becoming prompt engineering. Well, here's
another new take on social engineering, right. Which is that
like it's harnessing the trust we have for platforms like
YouTube and getting people to do things without even really
questioning it. Wondering if you have any thoughts on that
aspect of this attack. Absolutely, Matt. I think what's happening
is that the boundaries between what is trusted information and
disinformation are blurring and disappearing. And when that happens, what
you get is weaponized content that is wrapped in the
aesthetics of authenticity. So when the threat and the legitimate
experience are indistinguishable, all your traditional defenses are going to
fail. And that's exactly what's happening now. Because if you
look at some of the specifics of the YouTube ghost
networks, users are actually served up content along with instructions
as what to do. And exactly. As Suja was saying
as well, they have provided a recipe including some really
egregious things like temporarily disable your Windows Defender and they
just go off and do that. They sort of meekly
follow. And then of course they become victims of these
kinds of attacks. Absolutely. Yeah, you're totally right. Like the
disinformation looks exactly the same as the stuff that We've
come to trust. Right. And there's nothing to kind of
warn you, obviously, except your own wits. Dave, want to
get you in here. What are your thoughts about this
whole ghost network situation? You've got it expanding, right? Because
it's easier to expand. And if it didn't work, there's
no reason to automate it. So it's worked, Right, and
then why does it work? Well, it works because it's
trusted, right? And no one's blocking it, right? Because. Right,
so. So you've got this trusted source that's being pumped
full, right? Like you said, it's accelerating in 2025. So
you get this acceleration of bad content cloaked, you know,
the wolf in sheep's clothing, if you will. Right. J.
Thank you for that. It was fantastic. Right, like, so
you. You've got that. So now, now, now, like, as
a defender, right, like, you know, how am I going
to stop this? Right? It gets to the weakest link
again. Is. Is the person clicking, you know, the mouse,
right? Or the. The pad or, you know, the phone
or whatever it may be, right? So, like, that's where
it's got to stop, right? Some. Someone has to have
the wherewithal to go. Maybe I shouldn't. You know, in
this case, you know, we're talking about crack software or
something like that, but it doesn't have to be that.
It could be something much more legitimate. You have to
do it. So to me, this is like, from a
defender's perspective, right? Like, this is. This is just another,
like, wow, now I have to watch, you know, like,
I've always had to watch legit sources for, you know,
false activity, but now it's just even harder, right? And
it's just. It's never been a human solvable problem to
be a defender. It's always required technology to help us,
you know, get a leg up or at least keep
our head above water. And this is just. Just another
example of how much more difficult the job of a
defender, you know, in today's world has become. Absolutely. And
I think it's a really good point that you bring
up that, like, this is something where that. That person
is kind of the. The first and only line of
defense in a lot of ways, right, because they're being
told to just do what they. What the instructions say,
and then they end up infecting themselves with malware. So,
like, how do you. I don't know, how do you
defend against something like that? Like, can you. How do
you get in there to. To stop people from Doing
this sort of thing. Can you stop them from doing
this sort of thing? Sudra, I want to throw to
you to start. What do you think? How do we
defend against something like this? Because literally, like whack a
mole because you shut down one, a new thing is
going to come up, right? Like, for example, YouTube is
something that children start looking at YouTube from like the
time they're born. This is like having a malware in
for previous generation in tv. Okay, so this is not
something which is an email, you work on it, your
text. This is something that you're used to. It's an
extremely trusted one. So it's more awareness and education is
what I can think of. Because you cannot run at
the pace at which these are going to come in
to having that understanding that what is, what is legitimate
versus net and not giving it to temptation, that I'm
playing a game now, because most of the time that's
what happens. I want to hack and then I go
look at it and that's where you get caught. So
stay in your lane, man. I, you know, that's a
really good point that I had not considered. The fact
that like, YouTube also reaches a whole different audience than
some of this other stuff, Right? Like, you know, I
think about, I don't know about you folks, but I've
gotten so many of those texts, the scam texts that
are like, you have overdue tolls or whatever and, and
you know, a kid gets that and they're like, I
don't drive a car. This is not for me. You
know what I mean? But you're right. Who's one of
the primary, you know, targets for like, how do you
hack a game? A lot of kids, they get on
and then they can get into some trouble. So that's,
that is a. That makes me uncomfortable. Jr, what are
your thoughts on, on how we go about maybe shoring
up our defenses against an attack like this? I do
think that's going to be a mixture of both technical
controls that I think platforms like YouTube can put in
place. But there is no escaping the fact that one
has to have human in the loop education and awareness.
Exactly as Suja was saying. So there are things that,
for example, YouTube could do, they could bring about a
focus, a renewed focus around provenance. This is something that
we've talked about for a long time, but we have
ignored. I think we haven't really architected controls around provenance.
So when users come in and upload videos, right, you
could assign a provenance to where those videos are. Whether
they are trustworthy or not, we do this in other
places. We can also do this on YouTube. Similarly, when
there are instructions, especially those that say disable your Microsoft
defender, those are things that the platform provider can easily
flag so that, so as to ensure something like a
zero trust browsing experience. And then there could be, even
if you think a little bit further out, we could
bring in, along with the sort of the accountability on
the platform level, we could bring in some amount of
AI moderation as well. Things like having AI watermarks or
signature verification for uploaded media. Right. These are things that
we could bring in. So there are, you know, to
complement the human awareness aspect, there are technical controls and
there are technical steps that platform providers can take. Absolutely.
I like that very much. There are things that can
be done. It's not as, you know, insurmountable as it
might seem at first glance. Dave, take us home. Anything
you would recommend in terms of shoring up our defenses
here? I would like to take Jer's comments, I would
like to highlight them, underline bold face and increase the
size of the font. No, and seriously, because the traditional
answer here is well, you got to, you got to
educate your users and you got to deploy defense in
depth. That's what you got to do. Right. And it
completely ignores the fact that, that there's some, there's some
culpability on the provider side. There are plenty of things
to do and j, you just gave tremendous examples of
things that, that the provider can do to check out
the link on the other side. And you know what,
there's the, there, there's this, these new technologies that they
can use to test these things out. That's like the
greatest answer. So that's exactly what we should be doing.
Like it cannot fall to user beware. There has to
be culpability on the provider side to check the things
out. Yes, you can have a platform that puts links
out there. Right. But you can also double check that
maybe those links are, you know. Right. Like thanks Chair.
That was a great answer. I totally agree. But given
the track record of all the corporations I know who
need to be responsible. I know, I, I, I, I
hear you, but that's also a challenge. Let's move on
to our next story. Here we've got another concerning new
kind of attack. This is Glass Worm introduces some new
tricks to the world of malware. Now a few weeks
ago we had the shy hulude worm moving through software
supply chains. And this week we have Glass Worm. It's
been found hidden in 13 compromised extensions on the OpenVSX
registry and the Microsoft Extension Marketplace. Once it gets onto
a device, it can drop info stealers and something called
Zombie. Again, we got some spooky names here. There's a
theme, you know, but end something called Zombie, which basically
turns the infected device into a proxy for criminal activity.
Now, there are two things in particular that I found
really, really fascinating about glassworm. The first was that it
uses the Solana blockchain and Google Calendar for command and
control activity, right? Which is like, these are two things
that, I don't know, I don't think they attract a
lot of attention as possible sources of maliciousness. And then
the other thing it does is it uses invisible Unicode
characters called variation selectors to hide the malicious code in
the extension's source code. So basically, if you open up
the source code, you can't even see the malicious code
buried in there because it's hidden with these invisible Unicode
characters. So I just want to start with some initial
reactions to this. Is does this seem as concerning as
I think it is? Let me start with you, J.R.
what are your thoughts about glassworm? Oh, I think it
is extremely concerning. I think officially we have entered the
era of post infrastructure malware. Let me explain what I
mean by post infrastructure. So previously, you know, you could
have a piece of malware that was spreading and it
had command and control servers assigned to it. You get
the IP addresses, the government helps you, you go in,
you seize the equipment, you take the command and control
out. What is now happening is the malware is using
publicly deployed infrastructure, which is ubiquitously available, which is resilient
in fault tolerance, fault tolerant. So you take down, it
is still available elsewhere. And that's the case with Solana,
right? Because, you know, it's public, it's immutable, and it's
ubiquitous, and you just can't take that down, right? I
mean, so it's not as easy as taking down a
command and control server. And the same thing. What you
saw in this particular case is that this was very
similar to they had actually built layers of resilience because
the first layer of resilience was using Solana. The second
layer was using Google calendar. Now, cloud APIs like calendar
and Drive are always allowed through enterprise firewalls. And Google
Calendar is something that has been architected so that it's
available all the time, right? So malware is now beginning
to take advantage of this. And now when you add
that icing that you can now have invisible characters. We
always knew Unicode cloaking was A problem. Experts are not
going to be able to see the difference. If you
do any differences in GitHub, it won't show you these
characters. And what's happening is the code looks benign visually,
but functionally it is altered because of Unicode characters. So
this combination is really advance on the part of the
attackers, which is something we cannot ignore. Post infrastructure malware.
I really, really like that. And again, I feel awful
learning about that right here for the first time. Dave,
what are your thoughts on this situation, this world of
post infrastructure malware and glassworm. Yeah, I love that phrase
as well. That's. That's it. That's why. That's why JR
is who he is. He should name everything for us.
You know, I. I mean, I think, like, isn't the
creativity, like, an ingenuity here? Like, like astounding? Like, it's
just crazy. I mean, hiding characters. Okay, maybe not that
part of it, but. But the blockchain part of it,
right? And. And very well explained, Jer. I think, like,
that's. That's the bit that is. That's it. Like, so,
like, this is what we've. The detect and respond mechanism
playbook is completely gone. I can't detect it and I
can't respond to it. I got nothing. Right? So I
have to wait until the attack has happened. There's no
real responding to root cause, right? So, you know, it's
shoring up defenses and it's, you know, like defense becomes.
Look, the. The playbook in other areas responds. But when
I go straight into threat management around, you know, like
core, you know, identify the. You know, investigate like that,
that core cycle, this absolute. It's like it. It's completely
done. Like there's nothing I can do, right. I cannot
get to those core servers because if I. If I
manage to find one, it's gone. Right? So it's. I
mean, and I'm not. Nobody's. Nobody's happy. Smiley facing or
thumbs upping it. But, you know, look, some. Every now.
Every now and again, something comes along and you're like,
oh, that's clever. You know, this was supposed to be
our spooky episode, and I think we're making it even
spookier than I thought it was going to be. Suja,
how are you feeling about this? I mean, you know,
the detect and response playbook being kind of completely disrupted
here. What are your thoughts on this situation? I'm good.
I think Junior said it better, so I don't have
any comment. Fair enough. Fair enough. So then let's move
on to kind of, you know, Dave, you were saying,
look, it's what we do is we adapt to this
sort of thing, right? We see every once in a
while you see something, you say, hey that's really clever,
but we don't have time to rest on our laurels.
We got to do something about it. So how do
we adapt to something like this? And I'll swing back
to you again, J.R. do you have thoughts on what
is required of us in this moment? Fundamentally, the idea
under post infrastructure Malware is that malware is relying upon
publicly available infrastructure to execute its purposes, which means that
if we have publicly deployed infrastructure, they have to up
the game in terms of protecting against use, abuse and
misuse by such malware. And what that means is if
you take something like the Solana blockchain, I think we
are going to see the emergence of an era where
you'll have telemetry and threat detection and intelligence around blockchains.
That may be a subfield that's going to come. Similarly,
if you look at something like Google Calendar, we might
actually have something like Context aware cloud API inspection. We've
had things around cloud security before, but these kinds of
developments may bring that back into fashion. And then finally,
if you look at things like Unicode cloaking, I think
we'll have to do something around Unicode normalization and auditing
of differences. Maybe things like software, bills of materials and
linters which are more Unicode aware may become more important
as we go forward. But definitely it's not going to
be a one trick solution. It will require a village
to get this thing done. According to Verizon, organizations are
neglect mobile security. Now Verizon Business's 2025 Mobile Security Index
came out recently and it reports that people are far
more likely to fall for smishing attacks or SMS phishing
than for email phishing. And attackers know this. Hence the
increase in smishing attacks recently. But we're not seeing corresponding
increases in investment in mobile security tools, which leaves organizations
wide open. So my first question here is why do
you think organizations are kind of neglecting to invest in
mobile security? And I'm going to throw it to you
first. Sujay, any thoughts on why this is a gap
for so many companies right now? Well look, if most
most companies when they give out mobile devices it does
come with security, right? Whether it is edge or whatever.
Like for example in our own organization we use like
IBM's mass316 everyone so that you know what is going
on even if you accidentally click the link. We are
able to intercept and then say, okay, this is not
something you're supp. As Dave pointed out, our job is
to click on things. So it's very difficult to tell
people and say, don't click on things. And you are
preoccupied and you might accidentally click something. But in today's
world, if you think about it, in many, many companies,
people are bringing their own device to work. When they
are bringing their own device, does it have the same
level of security in it? That's something we fail to
look at. And in today's world, that big opens it
up and you're taking your work device everywhere else, and
then the home device comes to your work network. It's
a huge challenge with device proliferation. Absolutely. And, Dave, anything
to add there? What are your thoughts on why this
is such a gap for so many organizations? Yeah, sorry,
I was busy trying to pay a toll that I
just got a text about. Yeah, it was worth a
shot. It was worth a shot. No, make myself laugh.
No. So, no, I. I think, I think, Suja, you
hit it, right? I mean, it's. It's. If it was
just the, you know, like a laptop, right? Like, the
company issues a laptop and they can lock it down
and, you know, your. Primarily, your primary use is work.
Sure. You might do some things here. There's. This is
a mobile phone or a mobile smartphone or whatever we're
calling these computers these days that we carry around in
our pockets. It's bring your own device. It's shared use.
You throw it to the kid in the backseat so
they be quiet. There's a zillion things that happen to
that device. And so you can't lock it down. Some
companies can. Right. I think we do a pretty darn
good job about it. But I think. I think it's.
There's. There's outrage. Like, what do you mean? I can't
do this. I can't do that? Right. And even when
you do that, there's still quite a bit of, you
know, of things, Bad things that you could potentially do.
But I also think there's this mindset that, you know,
it's not attached to anything. Right. I don't touch the
data. Like, I don't really look at. I don't log
into. Right. You know, now increasingly there are. But even
then, like, oh, well, it's just the app. That app,
you know, I might log into my, you know, my
SAP app or my, you know, whatever app, but that's
not the real thing. Yeah, it is. Right. You know
what I mean? Right. Like, you know, But I think
there's a mindset in a lot in the larger, you
know, population that, like, you know, it's just my phone,
my phone isn't my laptop. It's not what it really
is, which is an extension of the enterprise. That's the
edge of the network. It's right there in your hand,
right? And as you access it and things like that.
So I think that's the second part of it, is
that they are ubiquitous. They're all over the place. They're
used in very different ways than other resources within the
enterprise. And then they don't think about them in the
same way as they would a laptop up. Right. I
don't know, it just, I think that's the challenge. I
think that's the challenge. I, I do think that, that
it is something that's easily overcome, right? It. Because you
just, you just need to say it, right? Like, no,
no, no, no, no. This, that's our phone. Right? That's
right. Like, right. Like this is, this is, these are
the, this is the acceptable use. This is what we're
going to go do with it, right? And, and thou
shall not do these things there, right? And sure, you
can do, you know, this is what, you know, personal
use on your work device is allowed and not allowed,
you know, and, you know, whether that includes social or
not include social or includes, you know, kid games or
whatever, like, like that's, you know, within the safety. But
you must run these apps, whatever it may be. So
I think that's a, that's a very enforceable thing. I
just think that that's, that's the next step. Now, something
I'm thinking about, especially after, you know, Dave, you're talking
about how part of the answer here is you just
say, no, no, no, that's our phone. But what about
the opportunity for shadow it? I mean, there's a lot
of people who, even if you tell them, don't touch
this data with your personal phone, they might go ahead
and touch that data with their personal phone. I'm wondering
if we have anything to kind of combat that angle.
And I want to ask you about that, J.R. do
you think shadow it is a problem here in this
world of mobile security? Oh, absolutely. I think very much
like what Dave was saying, what we are confronted with
is not just a technical problem, but a cultural problem.
Right? It's both technical and cultural. Technical on the side
that actually when mobile phones first came out, when we
looked at some of their architectures, at first they held
the promise of being more secure. Than what our desktops,
what our laptops, what our MacBooks were like. But over
a period of time, from a technical perspective, the enterprise
security governance mechanisms like endpoint detection and response, things like
this, that we've applied to mobile, to corporate endpoints, they've
become so strong and on the other side. And the
culture has also reinforced that certain kinds of behavior around
corporate endpoints. But when you come to mobile endpoints, very
often these are bring your own devices. The corporate governance
has lagged behind because many people are hesitant to put
they will use their mobile endpoints, their phones to take
calls or to call in, but they're kind of hesitant,
they still draw a line on their mobile device between
personal use and official use. And furthermore, the corporate governance
has lagged behind. We don't have EDR solutions deployed on
these endpoints. And culturally, because we start using it for
a host of personal applications, whether it's social media, it's
professional media, we don't hesitate as much to go click
on a link and open it on a mobile endpoint
that we would have done on a corporate endpoint. So
it's both technical and cultural in that sense. And it's
almost a perfect storm again. And absolutely there are things
that again, we have to raise the bar, especially if
these mobile endpoints are going to be used for corporate
functions as to what both from a technical perspective, like
EDR solutions, and also user awareness perspective, what is acceptable
or unacceptable use of these devices. I think what people
don't understand is a lot of people talk about privacy,
especially technologists. They would say, I will not put these
spyware, corporate spyware in my phone. What they don't realize
is with that they probably would have been much more
secure than what they had if they just gave the
data to one corporation. They prevented it from going for
so many other bad things. I think we don't think
like that when we are really working because I would
rather have that corporate security in my phone rather than
my personal phone, which doesn't have any security. And that
kind of gets to, you know, what I think is
a really important through line of this entire episode, maybe
every single episode we've ever done, which is that like
human psychology plays such a role in a lot of
these security, security challenges. Because like you said, Suja, for
a lot of people, they hear, wait a minute, you
want to put software on my phone that's an invasion
of my privacy? And I get that, but it's also
like, okay, but would you rather have your privacy invaded
that way or would you like to invaded by a
bunch of hackers on the dark web who sell your
data, you know what I mean? Like there are trade
offs that have to be made moving along because we're
almost out of time. I've got one last topic for
you folks here and I think we're going to treat
it like a quick roundtable. Security repercussions of the AWS
outage. Now, last week's outage, as I'm sure you're all
aware, wasn't caused by a cyber attack, but it did
expose just how fragile some of our core Internet infrastructure
can be. Right. One sort of area of AWS went
down and that affected Amazon's website. Ring cameras, Signal Riverside.
The app we used to record this podcast was down
that day. So so much stuff just broke because one
little thing shut down. And I'm wondering, you know, from
your perspective as security professionals, what kind of lessons do
you think this holds for us in terms of how
we build more resilient infrastructure overall? And I'll go around
real quick. Suja, let's start with you. What are your
thoughts here? Whenever there is automation, the resiliency goes out
the door, right? Like, because that is something we learn.
It's extremely important to keep resiliency. Just like how security
is top of mind for us with agents, with AI,
proliferation and automation, resiliency becomes really, really, really key. Because
you're talking about it. I was reading about how mattresses
got hot and they couldn't bring it down because these
were all done using an app. And then because of
this outage and I was like, really? Mattresses? Because today
that's the level of automation we have. Everything operates using
cloud, right? So resiliency becomes extremely key. Just like the
garage door has a hatch which you can open when
the electricity goes off. What is that? That's going to
get you out. So that's very, very important when we
are designing these systems. Absolutely. And you know, I laugh
but like, yeah, I'd be pretty mad if my bed
was too hot and I couldn't shut it off because
of AWS was down. Dave, how about you? What are
your thoughts on the kind of resiliency angle here? I'm
not sure I can follow a hot mattress, but I
mean it's tough. It's tough. Look, look, I think it's
about contingency plan. Oh my gosh, it's aws. They've taken
care of that for me maybe. Right. The good news
here is that the tech exists, right? You can pop
between providers easier than Ever before. That barrier to move,
you know, has never been lower in the history of
tech, right? I mean, you don't have to pick up
and have a whole different data center. You don't have
to roll. Like, you can just move the workload, right?
So, like this, this didn't have to be as someone
should have been able to cool off their mattress, right?
Like you didn't have. That's a great story. I missed
that part of it, to be honest with you. I
heard about some of the other ones, but, you know,
like the, you know, like, that's contingency planning, right? That's
saying like, hey, this is, this is something. What is
my contingency plan if this happened? Even if it's a,
you know, a, you know, one in a whatever shot,
right? Like, what is it? Okay, it's this. All right,
cool. If that happens, I push this button and I
have. I'm back up in whatever time, right? Like that's,
that's not magical, right? Like, that's, again, that's just kind
of common sense kind of a thing. Like, my clients
require me to be here 100%. And so whoever I'm
building this on, whether I do it myself or I
rely on a big provider who almost never goes down.
But what if they did? I mean, so that's just,
you know, you know, kind of, you know, like, again,
it's resiliency, just like Suju said. So, you know, it's
it's not high. The tech exists, just needed to be
done. Jr, how about you to close us out today?
What are your thoughts on this AWS situation and what
it teaches us about resilient systems? So I think one
of the biggest lessons that security has taught us in
the last, I would say, ever since the Google aurora
attacks of 2008, is how incredibly interconnected and dependent we
are on each other. And when we lose sight of
that, how we are going to pay the price. So
if you look at the AWS outage, despite having multiple
zones, availability zones, and fault tolerance built in, things like
key aspects of the AWS infrastructure, like identity and access
management, and some of their CICD pipelines were still being
served out of the US east geography. And what happened
as a consequence of that was everybody else, including all
the businesses that you mentioned, mattresses business included, were all
dependent upon US East. So if you lose sight of
dependencies in this incredibly interconnected world, then there is a
price we're going to pay. And that's the place we
would start in trying to rebuild resiliency for our infrastructure
like clouds as we go forward. Absolutely. You can't lose
sight of those dependencies. JR I feel like I should
have you on every single episode because you have a
gift of summarizing all of these topics and making them
so clear and crisp and to the point. But that
is all the time we have for today, folks. Thank
you to our panelists Suja and JR And Dave, thank
you to the viewers. Thank you to Brian Clark for
stepping in for me on short notice last week. As
always, subscribe to Security Intelligence wherever podcasts are found. Stay
safe out there and watch out for things that go
bump on the web. Up next, a sneak peek of
a very special episode. Check out the Security Intelligence podcast
podcast on Spotify, Apple, or wherever you get your podcasts
this Friday for the full audio experience to hear the
terrifying tale of just how easy it is to break
into your corporate office. You're the receptionist at a mid
sized firm in Los Angeles and a woman you've never
met before walks right up to you. She's interviewed with
the company next door and she just spilled coffee all
over her resume. She's got extra copies right here on
a flash drive. If you could print her off a
new copy, that would be amazing. But then you'd be
falling right into the trap laid for you by Stephanie
Carruthers. Stephanie is an expert in social engineering, particularly the
kind that involves breaking into people's buildings. Physical assessments are
are by far my. Favorite and I think a lot
of. Social engineers favorite because it's the most tangible thing
you can do.