Learning Library

← Back to Library

AI Browsers, Ghost Networks, Malware

Key Points

  • The podcast opens with a warning that shutting down one cyber threat often leads to the emergence of new ones, exemplified by the rise of YouTube‑related malware targeting children.
  • Hosts discuss several recent security incidents, including the YouTube “ghost network,” the Glassworm malware campaign, widespread neglect of mobile security in enterprises, and the massive AWS outage of 2025.
  • A new AI‑integrated web browser from OpenAI (Atlas) is highlighted as especially risky because prompt‑injection attacks can embed malicious code in web content, images, or URLs to hijack the browser.
  • Panelists agree that, despite the allure of AI assistants, the technology is still premature for secure deployment, emphasizing the need for thorough credential verification and proactive security design before widespread adoption.

Sections

Full Transcript

# AI Browsers, Ghost Networks, Malware **Source:** [https://www.youtube.com/watch?v=zc8qBG99GWk](https://www.youtube.com/watch?v=zc8qBG99GWk) **Duration:** 00:43:58 ## Summary - The podcast opens with a warning that shutting down one cyber threat often leads to the emergence of new ones, exemplified by the rise of YouTube‑related malware targeting children. - Hosts discuss several recent security incidents, including the YouTube “ghost network,” the Glassworm malware campaign, widespread neglect of mobile security in enterprises, and the massive AWS outage of 2025. - A new AI‑integrated web browser from OpenAI (Atlas) is highlighted as especially risky because prompt‑injection attacks can embed malicious code in web content, images, or URLs to hijack the browser. - Panelists agree that, despite the allure of AI assistants, the technology is still premature for secure deployment, emphasizing the need for thorough credential verification and proactive security design before widespread adoption. ## Sections - [00:00:00](https://www.youtube.com/watch?v=zc8qBG99GWk&t=0s) **Whack-a-Mole Threat Landscape** - The podcast intro likens cybersecurity to a whack‑a‑mole game, previewing scares such as YouTube’s ghost network, Glassworm malware, neglected mobile security, a major AWS outage, and the risks of AI‑integrated browsers. - [00:03:16](https://www.youtube.com/watch?v=zc8qBG99GWk&t=196s) **Caution Over AI Browser Adoption** - Participants warn that AI‑driven browsers are still immature and risky for sensitive or enterprise use, urging security checks before adoption. - [00:09:02](https://www.youtube.com/watch?v=zc8qBG99GWk&t=542s) **Fundamentals of Enterprise Security** - The speaker stresses that basic protective controls, monitoring, transparency, and auditability are essential for organizations, highlighting current gaps and contrasting human reasoning with AI's limited capabilities. - [00:12:29](https://www.youtube.com/watch?v=zc8qBG99GWk&t=749s) **AI‑Driven Social Engineering via Video Prompts** - The speakers examine how relentless AI bots and our reliance on short YouTube tutorials blur the line between trusted information and disinformation, turning social engineering into “prompt engineering” that exploits platform trust. - [00:15:51](https://www.youtube.com/watch?v=zc8qBG99GWk&t=951s) **Defenders Grapple With Escalating Threats** - The speakers highlight how today’s security defenders must depend on advanced technology to contend with ever‑more sophisticated, seemingly legitimate attacks that even ordinary users can unintentionally trigger. - [00:18:59](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1139s) **Platform Controls for Media Trust** - The speakers outline technical measures—provenance tagging, malicious instruction flagging, AI moderation, and watermark verification—that platforms can implement to ensure trustworthy content and support zero‑trust browsing alongside user education. - [00:22:10](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1330s) **Glassworm: Blockchain C2 & Unicode Obfuscation** - Researchers highlight the glassworm malware's novel use of the Solana blockchain and Google Calendar for command‑and‑control and its concealment of malicious code via invisible Unicode variation selectors, marking a shift toward resilient, post‑infrastructure attacks. - [00:25:16](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1516s) **Broken Detection and Response Playbook** - The speaker critiques how their current playbook fails to detect or mitigate sophisticated blockchain‑based attacks, leaving core servers unreachable and defenses ineffective, while admiring the attackers’ ingenuity. - [00:29:36](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1776s) **BYOD Security and Click Risks** - The speakers discuss how personal device proliferation and users’ propensity to click links create major security gaps, emphasizing the need for tools like IBM’s Mass316 to intercept threats while acknowledging the difficulty of restricting user behavior. - [00:32:39](https://www.youtube.com/watch?v=zc8qBG99GWk&t=1959s) **Mobile Device Policy Challenges** - The speaker emphasizes that ubiquitous mobile devices are viewed differently from traditional assets, so organizations need clear, enforceable acceptable‑use rules to manage usage and mitigate shadow‑IT risks. - [00:37:51](https://www.youtube.com/watch?v=zc8qBG99GWk&t=2271s) **Automation Failures Highlight Resilience Needs** - Security professionals reflect on a podcast‑app outage that cascaded through highly automated, cloud‑dependent services, emphasizing the urgent need to embed robust fallback mechanisms and resiliency into system design. - [00:41:11](https://www.youtube.com/watch?v=zc8qBG99GWk&t=2471s) **Interdependency Risks in Cloud Resilience** - The discussion highlights how the AWS outage exposed hidden cross‑region dependencies—like IAM and CI/CD services centralized in US‑East—demonstrating that true resiliency requires recognizing and mitigating such interconnected points of failure. ## Full Transcript
0:01Literally, like whack a mole because you shut down one, 0:03a new thing is going to come up. For example, 0:05YouTube. Children start looking at YouTube from like the time 0:09they're born. This is like having a malware in for 0:12previous generation in tv. This is not something which is 0:15an email. You work on it, your text. This is, 0:17this is something that you're used to. It's an extremely 0:19trusted one. All that and more on Security Intelligence. Hello 0:28and happy Halloween and welcome to Security Intelligence, IBM's weekly 0:33cybersecurity podcast, where we break down the most terrifying stories 0:37in the field with the help of our panel of 0:40experts. I'm your host, Matt Kaczynski, and this year I'm 0:43going as the scariest thing I can think of, a 0:46guy who still uses single factor authentication. Joining me today, 0:51we have suja Viswasen, IBM VP of Security Products, J.R. 1:04of the spooky season, we've got five frightful stories to 1:07cover. And stay tuned after the conversation for a sneak 1:11peek of a special episode coming out later this week. 1:15But today we are covering YouTube's ghost network, the Glassworm 1:20malware, how enterprises neglect mobile security, the great aws outage 1:25of 2025. But first, is there such thing as a 1:29safe AI browser? Now, I'm sure everybody knows that last 1:37week OpenAI released Atlas, a web browser with ChatGPT and 1:42Agentic capabilities built right in, which sounds really cool, but 1:46experts are already warning that it like Perplexity's Comet before 1:50it is vulnerable to prompt injections. Right? Attackers can hide 1:54malicious code in web content, images, even URLs, to hijack 1:58these browsers and influence their behavior and make them do 2:01all kinds of nasty stuff. So to get us started 2:03today, I want to ask everybody real quick, kind of 2:06around the table, as a security professional, would you use 2:09one of these things? And I'll start with you, Dave. 2:11How are you feeling about these AI browsers? My goodness, 2:14no, I would not use them. Not yet. I mean, 2:17look, the promise is there. Just we're a little early. 2:20The rush to get these things to market has not 2:23allowed them to be secured. I think that's, you know, 2:26my biggest worry, right, is, is, you know, I want 2:30an assistant so bad that I don't bother to check 2:33their credentials, you know, and that's kind of the parallel 2:37for me. So, you know, that's what we've seen with 2:41this particular browser. That's probably what we'll see with the 2:43next one. You know, I mean the heartening thing for 2:47me, I've been in security for 20 plus years, don't 2:50ask me how long that suffice it to know. But 2:53at least we're thinking about it now. Like if we 2:56go back through the other big hills of IT advancements 3:00and JR's been with me through a couple of these, 3:02so at least security in AI is being thought of 3:08shortly thereafter. Like we've had a big advancement in AI 3:11and people go, wait a minute, I don't know if 3:12I can trust this just yet. This is good, right? 3:16It comes out and we immediately are thinking like, wait 3:19a second, hold on, can we prompt and drink this? 3:21Can I trust this? Wait, wait, wait, wait. What if 3:24it starts acting unethically on my behalf and it's right 3:27like these, this. So, so no, I think, you know, 3:29for me, Matt, this is a little bit early. Absolutely. 3:31And I like the idea of, you know, it's like 3:33an agent, it's an assistant that you want so badly 3:36that you're not checking the credentials. Right. Because I can 3:38see that especially for a lot of everyday users who 3:40might not know much about how this stuff works. They 3:42just see shiny new browser, they download it, they're excited, 3:45they're not taking precautions. Jr, how about you? Would you 3:48use one of these things? I tend to agree here 3:50with Dave. I think I might use it for some 3:54casual browsing, you know, maybe summarizing a few articles or 3:57question answering where the risk is extremely low. But these 4:02browsers are not ready for enterprise use. They're not ready 4:04for high stakes, especially when you have sensitive data. Right. 4:08And so, and absolutely, if you have any workflows that 4:12include things like confidential research, financial information, or even, you 4:17know, anything that requires your credentials. I think the level 4:22of maturity for these offerings is very low at this 4:25point in time. And they're very mature, I would say. 4:28And I think we have to wait until we start 4:34fitting security into these browsers and we get some guarantees 4:38before we can go forward to start using them for 4:41any mission critical use. Absolutely. And Suja, how about you? 4:44What are you feeling about these browsers? I'm still gonna 4:46be curious and I would, because I wanna know what 4:50is it? Right. Aren't you? I'd be curious. So I 4:53would actually use it in a machine where I don't 4:55have any of my bank accounts ever stored in it. 4:58Okay. That's the most important thing than anything else. No 5:01financial data in that laptop or phone, whichever I'M using. 5:05That's what I would use. I will have a dumb 5:07phone to use it. That makes a lot of sense. 5:09And that brings me to my next kind of question 5:11for you folks, right? Is we all kind of agree, 5:13it seems like that these things are just, they're not 5:15ready for prime time yet. They're interesting. They could do 5:17some cool stuff, but they're not ready for prime time. 5:19What's it going to take to get them there? And 5:22I'll go back around the circle in reverse order this 5:24time. Sujo, what do you think? We need to get 5:26these things to a place where they might be good 5:28for enterprise use. Do you think we can get there? 5:32I think they need to be starting about the security 5:35aspect of it, shifting left and thinking about it first. 5:38Right? That is what is going to get them. Because 5:41governance, the regulations are always going to catch up. We 5:44saw it with data, right, GDPR and everything else after 5:48we gave our data for free is when those regulations 5:50came in. Governments and regulations are going to be later 5:53no matter how early they think they are. So the 5:56biggest thing, if somebody wants to make money, a business 5:59that wants to start enterprise using it, they need to 6:02be shifting left, thinking about security and have those testing 6:05done. Because look, the rules are there for the good 6:08people. The bad guys don't have any rules. That's a 6:12really good point. We set up those rules, right? And 6:14the bad guys, they're not going to listen anyway, right? 6:16Junior, how about you? What are you feeling like? I 6:19feel like Suja kind of agreed with you there. You 6:21were saying you want to see security earlier in the 6:23process. Do you have anything to add there? What do 6:25you think it's going to take? I think even the, 6:29even OpenAI with Atlas and even some of the others 6:32like Publicity with Comet and so on, they've been very 6:36clear that while they've done security engineering, that there are 6:42many areas where things like prompt injection, which remains a 6:47frontier, which remains an unsolved security problem for them, and 6:50they need to be able to address some of these 6:53before they move forward. So fundamentally, if you look at 6:57where the problems are, I think you need to do 7:01things like detection of prompt injection, prompt sanitization, if you 7:05will, you need to be able to isolate content and 7:09context specifically because many times what the attackers are doing 7:13is they are mixing instructions along with the data that 7:18is provided, the query that is provided, and the models 7:22are unable to differentiate between the two. And so very 7:26often attackers are able to sneak in specific instructions and 7:31able to get the models to misbehave. So I think 7:37there is a lot of promising research that we've done 7:40in this space on how to sandbox some of these 7:43executions, on how to track provenance and how to build 7:48policies around LLMs and firewalls. Some of these capabilities will 7:54have to make their way in. Then we'll have to 7:56see what sticks and what doesn't. And as that level 7:59of maturity goes up, our confidence in these products is 8:04going to go up and the risk is going to 8:06go down. Absolutely. And that's kind of the way that 8:08all this technology moves, right? The new thing comes out 8:10and we kind of have to do a little bit 8:12of catch up. We experiment with what works, we put 8:14it in and then we get to a place where 8:15we can trust these things. Dave, how about you? Anything 8:18to add there? What you want to see these browsers 8:20do to become safer? I mean, as the first person 8:23to say no, I wouldn't use them. I mean, and 8:26the reason is, you know, I mean they're saying like, 8:29well you should be really careful, Dave, like human in 8:32the loop. Yeah, that's, that's a safeguard but you know, 8:35not the best one, right. So you know, I like, 8:37I like to click on things as much as the 8:39next guy, right. But you know, so, so, but in 8:41seriousness, you know, you kind of got to go back 8:42to fundamentals, right? So visibility, right. In the world of 8:46AI, we like to say things like transpar and observability, 8:49but fancy words for being able to watch stuff, monitoring 8:52like JR just talked about, right. I want some ability 8:55to have protected controls, right. Maybe that's around identity, maybe 8:58that's clear permissioning, like things like that inside of browsers. 9:02Like I'm not talking rocket science or high, high levels 9:06of data science. I'm talking about basic security, you know, 9:09protective controls, right. And then I want to be able 9:11to like once identify it, I have some ability to 9:14respond, right. You know, whatever that may be, right. So 9:19you know, it's just basics, right? And, and today those 9:22basics aren't there, right. And it's, it's just left on 9:25my shoulders to be very secure. And that's, it's going 9:28to be out, it's out of my hands today. So 9:30like that's a silly thing. So like, you know, I 9:32think, I think, you know, for an enterprise to allow 9:36these things to happen, right, it's going to require the 9:39basics to be there. And like in a, like, like 9:42the fundamentals have to be there. Like how are you 9:44watching it? How transparent is it? How auditable is it 9:48when something does happen, what can I do about it? 9:51Right. What am I doing to protect those things from 9:54potentially happening? Right. I mean, we've built entire industries around 9:57doing just those things. So I know that's what I've 10:02done. That's what that Suja was wondering. That's what I've 10:04done for 20 years. So it's trying to, trying to, 10:06trying to fight those things. So see, absolutely. The reasoning 10:11capacity, the human being is going to reason more, right? 10:14As opposed to. In today's AI, the reasoning capability is 10:19not there like a human because when you tell it 10:22to forget and it forgets, then we have a, what 10:26do you call it, a single cell organism. We are 10:31nowhere close to AGA at that point. So we need 10:33to be thinking about it from. Because it's a combination 10:36of rules as well as reasoning. And the reasoning capability 10:39is not there yet. So yeah, I love that. That's 10:42a really optimistic take on this challenge that we're facing 10:44that we're eventually going to get there because we have 10:46folks like you working on those security issues. So let's 10:50end it on that optimistic note and move on to 10:52something slightly. Well, actually just as scary, I would say 11:07Research has uncovered what it calls the YouTube Ghost Network. 11:11This is a series of fake accounts hosting a combined 11:153,000 videos, all designed to trick people into downloading malware. 11:20What's interesting here is that the videos pretend to give 11:23people instructions on, you know, downloading cracked software or hacking 11:27a game. You know, you, you, you turn to it, 11:29to a tutorial. And of course, if you follow the 11:31instructions, what it's actually telling you to do is download 11:33and execute some kind of malware on your device and 11:35then you end up compromised. Now, some of the things 11:38that are noteworthy here, at least from where I'm looking, 11:41is that in addition to these accounts that post videos, 11:43the network includes a bunch of fake users who engage 11:46with the videos to make them seem more legit. And 11:49because there are so many accounts involved, if some get 11:51taken down, more can just take its place. So it's 11:53a very resilient network. And really interesting to me is 12:03that it was before this. So I want to start 12:06with kind of talking about that tripling of the pace 12:08there. What do you think is motivating that what's allowing 12:12them to go so much faster right now? And I 12:15will throw to you, Sujo, what do you think is 12:18speeding things up here? I think just like everything else, 12:21like automation and AI is helping us to create these 12:24things much faster. Just like what we saw with phishing, 12:27creating an email took 14 days. Now it takes minutes. 12:30So you are able to go do this with bots 12:32much faster than a human would. Because I think we 12:36have said the humans need to at least go eat, 12:38go to the restroom, whatever it is for AI, they 12:40can run 247 without sleep doing these things. I think 12:43that's what is causing it very, very difficult. And as 12:47us, we stopped reading, looking for instructions, we just now 12:50look at a video, two minute video to tell me 12:52what to do for everything. We need a hack. So 12:55that also makes it very easy and tempting for us 12:57to go click on those links. Yeah, I know. I 12:59myself am extremely guilty of I need instructions on something. 13:02I just. Whatever the first video on YouTube is, I 13:05watch it. I don't know who these people are. I 13:07shouldn't just be following this blindly. And maybe that makes 13:10me ill qualified to host a cybersecurity show. But anyway, 13:13jr, I want to throw to you real quick because 13:15we were just talking before about the kind of evolution 13:18of social engineering, right. And you were talking about how 13:20social engineering is kind of becoming prompt engineering. Well, here's 13:23another new take on social engineering, right. Which is that 13:25like it's harnessing the trust we have for platforms like 13:29YouTube and getting people to do things without even really 13:31questioning it. Wondering if you have any thoughts on that 13:35aspect of this attack. Absolutely, Matt. I think what's happening 13:39is that the boundaries between what is trusted information and 13:44disinformation are blurring and disappearing. And when that happens, what 13:50you get is weaponized content that is wrapped in the 13:54aesthetics of authenticity. So when the threat and the legitimate 13:59experience are indistinguishable, all your traditional defenses are going to 14:03fail. And that's exactly what's happening now. Because if you 14:06look at some of the specifics of the YouTube ghost 14:10networks, users are actually served up content along with instructions 14:16as what to do. And exactly. As Suja was saying 14:20as well, they have provided a recipe including some really 14:23egregious things like temporarily disable your Windows Defender and they 14:27just go off and do that. They sort of meekly 14:30follow. And then of course they become victims of these 14:33kinds of attacks. Absolutely. Yeah, you're totally right. Like the 14:36disinformation looks exactly the same as the stuff that We've 14:39come to trust. Right. And there's nothing to kind of 14:41warn you, obviously, except your own wits. Dave, want to 14:45get you in here. What are your thoughts about this 14:46whole ghost network situation? You've got it expanding, right? Because 14:50it's easier to expand. And if it didn't work, there's 14:55no reason to automate it. So it's worked, Right, and 14:57then why does it work? Well, it works because it's 15:01trusted, right? And no one's blocking it, right? Because. Right, 15:05so. So you've got this trusted source that's being pumped 15:10full, right? Like you said, it's accelerating in 2025. So 15:13you get this acceleration of bad content cloaked, you know, 15:16the wolf in sheep's clothing, if you will. Right. J. 15:18Thank you for that. It was fantastic. Right, like, so 15:21you. You've got that. So now, now, now, like, as 15:24a defender, right, like, you know, how am I going 15:27to stop this? Right? It gets to the weakest link 15:32again. Is. Is the person clicking, you know, the mouse, 15:35right? Or the. The pad or, you know, the phone 15:39or whatever it may be, right? So, like, that's where 15:41it's got to stop, right? Some. Someone has to have 15:43the wherewithal to go. Maybe I shouldn't. You know, in 15:47this case, you know, we're talking about crack software or 15:49something like that, but it doesn't have to be that. 15:51It could be something much more legitimate. You have to 15:55do it. So to me, this is like, from a 15:57defender's perspective, right? Like, this is. This is just another, 16:02like, wow, now I have to watch, you know, like, 16:05I've always had to watch legit sources for, you know, 16:07false activity, but now it's just even harder, right? And 16:11it's just. It's never been a human solvable problem to 16:14be a defender. It's always required technology to help us, 16:18you know, get a leg up or at least keep 16:20our head above water. And this is just. Just another 16:24example of how much more difficult the job of a 16:28defender, you know, in today's world has become. Absolutely. And 16:31I think it's a really good point that you bring 16:32up that, like, this is something where that. That person 16:36is kind of the. The first and only line of 16:38defense in a lot of ways, right, because they're being 16:40told to just do what they. What the instructions say, 16:44and then they end up infecting themselves with malware. So, 16:46like, how do you. I don't know, how do you 16:49defend against something like that? Like, can you. How do 16:51you get in there to. To stop people from Doing 16:53this sort of thing. Can you stop them from doing 16:55this sort of thing? Sudra, I want to throw to 16:57you to start. What do you think? How do we 16:58defend against something like this? Because literally, like whack a 17:02mole because you shut down one, a new thing is 17:05going to come up, right? Like, for example, YouTube is 17:07something that children start looking at YouTube from like the 17:12time they're born. This is like having a malware in 17:15for previous generation in tv. Okay, so this is not 17:19something which is an email, you work on it, your 17:21text. This is something that you're used to. It's an 17:24extremely trusted one. So it's more awareness and education is 17:28what I can think of. Because you cannot run at 17:31the pace at which these are going to come in 17:34to having that understanding that what is, what is legitimate 17:37versus net and not giving it to temptation, that I'm 17:40playing a game now, because most of the time that's 17:42what happens. I want to hack and then I go 17:44look at it and that's where you get caught. So 17:47stay in your lane, man. I, you know, that's a 17:50really good point that I had not considered. The fact 17:52that like, YouTube also reaches a whole different audience than 17:55some of this other stuff, Right? Like, you know, I 17:57think about, I don't know about you folks, but I've 17:58gotten so many of those texts, the scam texts that 18:01are like, you have overdue tolls or whatever and, and 18:03you know, a kid gets that and they're like, I 18:06don't drive a car. This is not for me. You 18:08know what I mean? But you're right. Who's one of 18:09the primary, you know, targets for like, how do you 18:11hack a game? A lot of kids, they get on 18:13and then they can get into some trouble. So that's, 18:15that is a. That makes me uncomfortable. Jr, what are 18:20your thoughts on, on how we go about maybe shoring 18:23up our defenses against an attack like this? I do 18:25think that's going to be a mixture of both technical 18:29controls that I think platforms like YouTube can put in 18:33place. But there is no escaping the fact that one 18:37has to have human in the loop education and awareness. 18:42Exactly as Suja was saying. So there are things that, 18:45for example, YouTube could do, they could bring about a 18:49focus, a renewed focus around provenance. This is something that 18:53we've talked about for a long time, but we have 18:55ignored. I think we haven't really architected controls around provenance. 18:59So when users come in and upload videos, right, you 19:03could assign a provenance to where those videos are. Whether 19:06they are trustworthy or not, we do this in other 19:08places. We can also do this on YouTube. Similarly, when 19:11there are instructions, especially those that say disable your Microsoft 19:15defender, those are things that the platform provider can easily 19:19flag so that, so as to ensure something like a 19:22zero trust browsing experience. And then there could be, even 19:26if you think a little bit further out, we could 19:28bring in, along with the sort of the accountability on 19:31the platform level, we could bring in some amount of 19:34AI moderation as well. Things like having AI watermarks or 19:37signature verification for uploaded media. Right. These are things that 19:42we could bring in. So there are, you know, to 19:45complement the human awareness aspect, there are technical controls and 19:49there are technical steps that platform providers can take. Absolutely. 19:53I like that very much. There are things that can 19:55be done. It's not as, you know, insurmountable as it 19:59might seem at first glance. Dave, take us home. Anything 20:02you would recommend in terms of shoring up our defenses 20:05here? I would like to take Jer's comments, I would 20:08like to highlight them, underline bold face and increase the 20:12size of the font. No, and seriously, because the traditional 20:17answer here is well, you got to, you got to 20:19educate your users and you got to deploy defense in 20:22depth. That's what you got to do. Right. And it 20:24completely ignores the fact that, that there's some, there's some 20:28culpability on the provider side. There are plenty of things 20:32to do and j, you just gave tremendous examples of 20:35things that, that the provider can do to check out 20:38the link on the other side. And you know what, 20:41there's the, there, there's this, these new technologies that they 20:44can use to test these things out. That's like the 20:47greatest answer. So that's exactly what we should be doing. 20:51Like it cannot fall to user beware. There has to 20:57be culpability on the provider side to check the things 20:59out. Yes, you can have a platform that puts links 21:03out there. Right. But you can also double check that 21:05maybe those links are, you know. Right. Like thanks Chair. 21:10That was a great answer. I totally agree. But given 21:15the track record of all the corporations I know who 21:18need to be responsible. I know, I, I, I, I 21:21hear you, but that's also a challenge. Let's move on 21:25to our next story. Here we've got another concerning new 21:29kind of attack. This is Glass Worm introduces some new 21:37tricks to the world of malware. Now a few weeks 21:40ago we had the shy hulude worm moving through software 21:43supply chains. And this week we have Glass Worm. It's 21:47been found hidden in 13 compromised extensions on the OpenVSX 21:53registry and the Microsoft Extension Marketplace. Once it gets onto 21:57a device, it can drop info stealers and something called 22:00Zombie. Again, we got some spooky names here. There's a 22:03theme, you know, but end something called Zombie, which basically 22:07turns the infected device into a proxy for criminal activity. 22:10Now, there are two things in particular that I found 22:12really, really fascinating about glassworm. The first was that it 22:15uses the Solana blockchain and Google Calendar for command and 22:19control activity, right? Which is like, these are two things 22:22that, I don't know, I don't think they attract a 22:25lot of attention as possible sources of maliciousness. And then 22:27the other thing it does is it uses invisible Unicode 22:30characters called variation selectors to hide the malicious code in 22:35the extension's source code. So basically, if you open up 22:38the source code, you can't even see the malicious code 22:40buried in there because it's hidden with these invisible Unicode 22:43characters. So I just want to start with some initial 22:45reactions to this. Is does this seem as concerning as 22:48I think it is? Let me start with you, J.R. 22:49what are your thoughts about glassworm? Oh, I think it 22:53is extremely concerning. I think officially we have entered the 22:58era of post infrastructure malware. Let me explain what I 23:02mean by post infrastructure. So previously, you know, you could 23:06have a piece of malware that was spreading and it 23:10had command and control servers assigned to it. You get 23:13the IP addresses, the government helps you, you go in, 23:18you seize the equipment, you take the command and control 23:20out. What is now happening is the malware is using 23:24publicly deployed infrastructure, which is ubiquitously available, which is resilient 23:29in fault tolerance, fault tolerant. So you take down, it 23:32is still available elsewhere. And that's the case with Solana, 23:36right? Because, you know, it's public, it's immutable, and it's 23:41ubiquitous, and you just can't take that down, right? I 23:45mean, so it's not as easy as taking down a 23:47command and control server. And the same thing. What you 23:51saw in this particular case is that this was very 23:54similar to they had actually built layers of resilience because 23:59the first layer of resilience was using Solana. The second 24:03layer was using Google calendar. Now, cloud APIs like calendar 24:08and Drive are always allowed through enterprise firewalls. And Google 24:13Calendar is something that has been architected so that it's 24:16available all the time, right? So malware is now beginning 24:20to take advantage of this. And now when you add 24:22that icing that you can now have invisible characters. We 24:25always knew Unicode cloaking was A problem. Experts are not 24:29going to be able to see the difference. If you 24:31do any differences in GitHub, it won't show you these 24:34characters. And what's happening is the code looks benign visually, 24:42but functionally it is altered because of Unicode characters. So 24:46this combination is really advance on the part of the 24:51attackers, which is something we cannot ignore. Post infrastructure malware. 24:56I really, really like that. And again, I feel awful 24:59learning about that right here for the first time. Dave, 25:02what are your thoughts on this situation, this world of 25:05post infrastructure malware and glassworm. Yeah, I love that phrase 25:09as well. That's. That's it. That's why. That's why JR 25:13is who he is. He should name everything for us. 25:17You know, I. I mean, I think, like, isn't the 25:20creativity, like, an ingenuity here? Like, like astounding? Like, it's 25:24just crazy. I mean, hiding characters. Okay, maybe not that 25:29part of it, but. But the blockchain part of it, 25:30right? And. And very well explained, Jer. I think, like, 25:34that's. That's the bit that is. That's it. Like, so, 25:37like, this is what we've. The detect and respond mechanism 25:44playbook is completely gone. I can't detect it and I 25:48can't respond to it. I got nothing. Right? So I 25:54have to wait until the attack has happened. There's no 25:57real responding to root cause, right? So, you know, it's 26:01shoring up defenses and it's, you know, like defense becomes. 26:05Look, the. The playbook in other areas responds. But when 26:08I go straight into threat management around, you know, like 26:12core, you know, identify the. You know, investigate like that, 26:15that core cycle, this absolute. It's like it. It's completely 26:20done. Like there's nothing I can do, right. I cannot 26:23get to those core servers because if I. If I 26:26manage to find one, it's gone. Right? So it's. I 26:31mean, and I'm not. Nobody's. Nobody's happy. Smiley facing or 26:37thumbs upping it. But, you know, look, some. Every now. 26:39Every now and again, something comes along and you're like, 26:41oh, that's clever. You know, this was supposed to be 26:43our spooky episode, and I think we're making it even 26:45spookier than I thought it was going to be. Suja, 26:48how are you feeling about this? I mean, you know, 26:51the detect and response playbook being kind of completely disrupted 26:55here. What are your thoughts on this situation? I'm good. 26:59I think Junior said it better, so I don't have 27:01any comment. Fair enough. Fair enough. So then let's move 27:05on to kind of, you know, Dave, you were saying, 27:08look, it's what we do is we adapt to this 27:09sort of thing, right? We see every once in a 27:11while you see something, you say, hey that's really clever, 27:13but we don't have time to rest on our laurels. 27:15We got to do something about it. So how do 27:17we adapt to something like this? And I'll swing back 27:20to you again, J.R. do you have thoughts on what 27:23is required of us in this moment? Fundamentally, the idea 27:26under post infrastructure Malware is that malware is relying upon 27:30publicly available infrastructure to execute its purposes, which means that 27:35if we have publicly deployed infrastructure, they have to up 27:38the game in terms of protecting against use, abuse and 27:43misuse by such malware. And what that means is if 27:47you take something like the Solana blockchain, I think we 27:50are going to see the emergence of an era where 27:53you'll have telemetry and threat detection and intelligence around blockchains. 27:58That may be a subfield that's going to come. Similarly, 28:01if you look at something like Google Calendar, we might 28:06actually have something like Context aware cloud API inspection. We've 28:10had things around cloud security before, but these kinds of 28:16developments may bring that back into fashion. And then finally, 28:19if you look at things like Unicode cloaking, I think 28:22we'll have to do something around Unicode normalization and auditing 28:26of differences. Maybe things like software, bills of materials and 28:31linters which are more Unicode aware may become more important 28:34as we go forward. But definitely it's not going to 28:37be a one trick solution. It will require a village 28:42to get this thing done. According to Verizon, organizations are 28:49neglect mobile security. Now Verizon Business's 2025 Mobile Security Index 28:55came out recently and it reports that people are far 28:58more likely to fall for smishing attacks or SMS phishing 29:02than for email phishing. And attackers know this. Hence the 29:06increase in smishing attacks recently. But we're not seeing corresponding 29:10increases in investment in mobile security tools, which leaves organizations 29:15wide open. So my first question here is why do 29:18you think organizations are kind of neglecting to invest in 29:22mobile security? And I'm going to throw it to you 29:23first. Sujay, any thoughts on why this is a gap 29:25for so many companies right now? Well look, if most 29:29most companies when they give out mobile devices it does 29:33come with security, right? Whether it is edge or whatever. 29:36Like for example in our own organization we use like 29:39IBM's mass316 everyone so that you know what is going 29:42on even if you accidentally click the link. We are 29:44able to intercept and then say, okay, this is not 29:47something you're supp. As Dave pointed out, our job is 29:50to click on things. So it's very difficult to tell 29:52people and say, don't click on things. And you are 29:55preoccupied and you might accidentally click something. But in today's 29:59world, if you think about it, in many, many companies, 30:01people are bringing their own device to work. When they 30:04are bringing their own device, does it have the same 30:07level of security in it? That's something we fail to 30:10look at. And in today's world, that big opens it 30:13up and you're taking your work device everywhere else, and 30:16then the home device comes to your work network. It's 30:20a huge challenge with device proliferation. Absolutely. And, Dave, anything 30:27to add there? What are your thoughts on why this 30:29is such a gap for so many organizations? Yeah, sorry, 30:33I was busy trying to pay a toll that I 30:36just got a text about. Yeah, it was worth a 30:41shot. It was worth a shot. No, make myself laugh. 30:47No. So, no, I. I think, I think, Suja, you 30:50hit it, right? I mean, it's. It's. If it was 30:53just the, you know, like a laptop, right? Like, the 30:56company issues a laptop and they can lock it down 30:59and, you know, your. Primarily, your primary use is work. 31:03Sure. You might do some things here. There's. This is 31:06a mobile phone or a mobile smartphone or whatever we're 31:10calling these computers these days that we carry around in 31:13our pockets. It's bring your own device. It's shared use. 31:20You throw it to the kid in the backseat so 31:22they be quiet. There's a zillion things that happen to 31:26that device. And so you can't lock it down. Some 31:34companies can. Right. I think we do a pretty darn 31:36good job about it. But I think. I think it's. 31:39There's. There's outrage. Like, what do you mean? I can't 31:41do this. I can't do that? Right. And even when 31:43you do that, there's still quite a bit of, you 31:48know, of things, Bad things that you could potentially do. 31:51But I also think there's this mindset that, you know, 31:53it's not attached to anything. Right. I don't touch the 31:58data. Like, I don't really look at. I don't log 32:01into. Right. You know, now increasingly there are. But even 32:06then, like, oh, well, it's just the app. That app, 32:09you know, I might log into my, you know, my 32:12SAP app or my, you know, whatever app, but that's 32:15not the real thing. Yeah, it is. Right. You know 32:19what I mean? Right. Like, you know, But I think 32:22there's a mindset in a lot in the larger, you 32:25know, population that, like, you know, it's just my phone, 32:28my phone isn't my laptop. It's not what it really 32:31is, which is an extension of the enterprise. That's the 32:35edge of the network. It's right there in your hand, 32:37right? And as you access it and things like that. 32:39So I think that's the second part of it, is 32:42that they are ubiquitous. They're all over the place. They're 32:46used in very different ways than other resources within the 32:50enterprise. And then they don't think about them in the 32:57same way as they would a laptop up. Right. I 33:01don't know, it just, I think that's the challenge. I 33:03think that's the challenge. I, I do think that, that 33:06it is something that's easily overcome, right? It. Because you 33:11just, you just need to say it, right? Like, no, 33:13no, no, no, no. This, that's our phone. Right? That's 33:16right. Like, right. Like this is, this is, these are 33:20the, this is the acceptable use. This is what we're 33:22going to go do with it, right? And, and thou 33:24shall not do these things there, right? And sure, you 33:28can do, you know, this is what, you know, personal 33:30use on your work device is allowed and not allowed, 33:34you know, and, you know, whether that includes social or 33:37not include social or includes, you know, kid games or 33:41whatever, like, like that's, you know, within the safety. But 33:44you must run these apps, whatever it may be. So 33:47I think that's a, that's a very enforceable thing. I 33:50just think that that's, that's the next step. Now, something 33:53I'm thinking about, especially after, you know, Dave, you're talking 33:56about how part of the answer here is you just 33:58say, no, no, no, that's our phone. But what about 34:00the opportunity for shadow it? I mean, there's a lot 34:03of people who, even if you tell them, don't touch 34:05this data with your personal phone, they might go ahead 34:08and touch that data with their personal phone. I'm wondering 34:11if we have anything to kind of combat that angle. 34:13And I want to ask you about that, J.R. do 34:15you think shadow it is a problem here in this 34:17world of mobile security? Oh, absolutely. I think very much 34:20like what Dave was saying, what we are confronted with 34:23is not just a technical problem, but a cultural problem. 34:28Right? It's both technical and cultural. Technical on the side 34:31that actually when mobile phones first came out, when we 34:35looked at some of their architectures, at first they held 34:38the promise of being more secure. Than what our desktops, 34:41what our laptops, what our MacBooks were like. But over 34:45a period of time, from a technical perspective, the enterprise 34:49security governance mechanisms like endpoint detection and response, things like 34:54this, that we've applied to mobile, to corporate endpoints, they've 34:58become so strong and on the other side. And the 35:02culture has also reinforced that certain kinds of behavior around 35:06corporate endpoints. But when you come to mobile endpoints, very 35:10often these are bring your own devices. The corporate governance 35:14has lagged behind because many people are hesitant to put 35:19they will use their mobile endpoints, their phones to take 35:22calls or to call in, but they're kind of hesitant, 35:26they still draw a line on their mobile device between 35:28personal use and official use. And furthermore, the corporate governance 35:32has lagged behind. We don't have EDR solutions deployed on 35:36these endpoints. And culturally, because we start using it for 35:40a host of personal applications, whether it's social media, it's 35:43professional media, we don't hesitate as much to go click 35:47on a link and open it on a mobile endpoint 35:51that we would have done on a corporate endpoint. So 35:55it's both technical and cultural in that sense. And it's 35:58almost a perfect storm again. And absolutely there are things 36:03that again, we have to raise the bar, especially if 36:06these mobile endpoints are going to be used for corporate 36:08functions as to what both from a technical perspective, like 36:12EDR solutions, and also user awareness perspective, what is acceptable 36:17or unacceptable use of these devices. I think what people 36:20don't understand is a lot of people talk about privacy, 36:22especially technologists. They would say, I will not put these 36:26spyware, corporate spyware in my phone. What they don't realize 36:30is with that they probably would have been much more 36:32secure than what they had if they just gave the 36:34data to one corporation. They prevented it from going for 36:37so many other bad things. I think we don't think 36:40like that when we are really working because I would 36:43rather have that corporate security in my phone rather than 36:46my personal phone, which doesn't have any security. And that 36:49kind of gets to, you know, what I think is 36:52a really important through line of this entire episode, maybe 36:55every single episode we've ever done, which is that like 36:57human psychology plays such a role in a lot of 37:00these security, security challenges. Because like you said, Suja, for 37:03a lot of people, they hear, wait a minute, you 37:05want to put software on my phone that's an invasion 37:08of my privacy? And I get that, but it's also 37:10like, okay, but would you rather have your privacy invaded 37:12that way or would you like to invaded by a 37:14bunch of hackers on the dark web who sell your 37:15data, you know what I mean? Like there are trade 37:17offs that have to be made moving along because we're 37:23almost out of time. I've got one last topic for 37:26you folks here and I think we're going to treat 37:27it like a quick roundtable. Security repercussions of the AWS 37:32outage. Now, last week's outage, as I'm sure you're all 37:35aware, wasn't caused by a cyber attack, but it did 37:39expose just how fragile some of our core Internet infrastructure 37:43can be. Right. One sort of area of AWS went 37:47down and that affected Amazon's website. Ring cameras, Signal Riverside. 37:51The app we used to record this podcast was down 37:54that day. So so much stuff just broke because one 37:59little thing shut down. And I'm wondering, you know, from 38:02your perspective as security professionals, what kind of lessons do 38:05you think this holds for us in terms of how 38:07we build more resilient infrastructure overall? And I'll go around 38:12real quick. Suja, let's start with you. What are your 38:14thoughts here? Whenever there is automation, the resiliency goes out 38:19the door, right? Like, because that is something we learn. 38:22It's extremely important to keep resiliency. Just like how security 38:25is top of mind for us with agents, with AI, 38:29proliferation and automation, resiliency becomes really, really, really key. Because 38:33you're talking about it. I was reading about how mattresses 38:36got hot and they couldn't bring it down because these 38:39were all done using an app. And then because of 38:41this outage and I was like, really? Mattresses? Because today 38:45that's the level of automation we have. Everything operates using 38:49cloud, right? So resiliency becomes extremely key. Just like the 38:54garage door has a hatch which you can open when 38:56the electricity goes off. What is that? That's going to 38:59get you out. So that's very, very important when we 39:02are designing these systems. Absolutely. And you know, I laugh 39:05but like, yeah, I'd be pretty mad if my bed 39:06was too hot and I couldn't shut it off because 39:08of AWS was down. Dave, how about you? What are 39:12your thoughts on the kind of resiliency angle here? I'm 39:15not sure I can follow a hot mattress, but I 39:19mean it's tough. It's tough. Look, look, I think it's 39:22about contingency plan. Oh my gosh, it's aws. They've taken 39:26care of that for me maybe. Right. The good news 39:33here is that the tech exists, right? You can pop 39:36between providers easier than Ever before. That barrier to move, 39:42you know, has never been lower in the history of 39:45tech, right? I mean, you don't have to pick up 39:47and have a whole different data center. You don't have 39:49to roll. Like, you can just move the workload, right? 39:54So, like this, this didn't have to be as someone 39:58should have been able to cool off their mattress, right? 40:00Like you didn't have. That's a great story. I missed 40:05that part of it, to be honest with you. I 40:07heard about some of the other ones, but, you know, 40:09like the, you know, like, that's contingency planning, right? That's 40:13saying like, hey, this is, this is something. What is 40:15my contingency plan if this happened? Even if it's a, 40:20you know, a, you know, one in a whatever shot, 40:23right? Like, what is it? Okay, it's this. All right, 40:27cool. If that happens, I push this button and I 40:30have. I'm back up in whatever time, right? Like that's, 40:34that's not magical, right? Like, that's, again, that's just kind 40:36of common sense kind of a thing. Like, my clients 40:39require me to be here 100%. And so whoever I'm 40:43building this on, whether I do it myself or I 40:45rely on a big provider who almost never goes down. 40:49But what if they did? I mean, so that's just, 40:52you know, you know, kind of, you know, like, again, 40:54it's resiliency, just like Suju said. So, you know, it's 41:04it's not high. The tech exists, just needed to be 41:08done. Jr, how about you to close us out today? 41:11What are your thoughts on this AWS situation and what 41:14it teaches us about resilient systems? So I think one 41:17of the biggest lessons that security has taught us in 41:20the last, I would say, ever since the Google aurora 41:23attacks of 2008, is how incredibly interconnected and dependent we 41:29are on each other. And when we lose sight of 41:31that, how we are going to pay the price. So 41:36if you look at the AWS outage, despite having multiple 41:40zones, availability zones, and fault tolerance built in, things like 41:48key aspects of the AWS infrastructure, like identity and access 41:52management, and some of their CICD pipelines were still being 41:57served out of the US east geography. And what happened 42:02as a consequence of that was everybody else, including all 42:05the businesses that you mentioned, mattresses business included, were all 42:09dependent upon US East. So if you lose sight of 42:13dependencies in this incredibly interconnected world, then there is a 42:17price we're going to pay. And that's the place we 42:20would start in trying to rebuild resiliency for our infrastructure 42:26like clouds as we go forward. Absolutely. You can't lose 42:29sight of those dependencies. JR I feel like I should 42:31have you on every single episode because you have a 42:33gift of summarizing all of these topics and making them 42:36so clear and crisp and to the point. But that 42:39is all the time we have for today, folks. Thank 42:42you to our panelists Suja and JR And Dave, thank 42:45you to the viewers. Thank you to Brian Clark for 42:47stepping in for me on short notice last week. As 42:49always, subscribe to Security Intelligence wherever podcasts are found. Stay 42:54safe out there and watch out for things that go 42:56bump on the web. Up next, a sneak peek of 43:00a very special episode. Check out the Security Intelligence podcast 43:04podcast on Spotify, Apple, or wherever you get your podcasts 43:07this Friday for the full audio experience to hear the 43:11terrifying tale of just how easy it is to break 43:15into your corporate office. You're the receptionist at a mid 43:24sized firm in Los Angeles and a woman you've never 43:27met before walks right up to you. She's interviewed with 43:30the company next door and she just spilled coffee all 43:33over her resume. She's got extra copies right here on 43:35a flash drive. If you could print her off a 43:37new copy, that would be amazing. But then you'd be 43:41falling right into the trap laid for you by Stephanie 43:44Carruthers. Stephanie is an expert in social engineering, particularly the 43:49kind that involves breaking into people's buildings. Physical assessments are 43:53are by far my. Favorite and I think a lot 43:55of. Social engineers favorite because it's the most tangible thing 43:58you can do.