Learning Library

AI-Driven Vibe Hacking Threats

Key Points

  • The new “vibe hacking” technique lets threat actors use generative AI (like Claude) not only to write malicious code but also to make tactical decisions such as data selection and ransom amounts, enabling rapid attacks on multiple organizations.
  • HexStrike AI exemplifies an emerging “agentic” cyber‑attack model where autonomous AI agents can conduct large‑scale intrusions with minimal human oversight, raising concerns that AI is lowering the barrier to sophisticated crime.
  • A resurgence of Lapsus$‑style actors introduced an unconventional ransom demand strategy, highlighting how hacker groups are experimenting with novel extortion tactics beyond traditional ransomware.
  • Remote Access Trojans (RATs) are increasingly favored by attackers as the primary malware choice, driving a surge in RAT‑related incidents and prompting urgent defensive focus.
  • Experts from IBM and X‑Force discuss these trends, emphasizing that while AI tools can be weaponized, they also offer defensive potential, and the cybersecurity community must adapt quickly to the evolving threat landscape.

**Source:** [https://www.youtube.com/watch?v=u-ZRZX2VZh4](https://www.youtube.com/watch?v=u-ZRZX2VZh4)
**Duration:** 00:38:05

## Sections

- [00:00:00](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=0s) **AI-Driven Cybercrime and RAT Threats** - The episode opens the Security Intelligence podcast by exploring whether AI is simplifying cybercrime; it examines new AI attack tools like HexStrike, unconventional ransom tactics from Lapsus$, and the growing dominance of Remote Access Trojans, with insights from IBM security experts.
- [00:03:23](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=203s) **AI Arms Race in Cybersecurity** - The speakers discuss how AI-driven defenses and attacks are now an inevitable, evolving arms race, mirroring traditional cybersecurity dynamics.
- [00:07:36](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=456s) **AI Lowering Cybercrime Skill Barriers** - The participants discuss how AI-driven tools simplify hacking, potentially disrupting ransomware affiliate economics and raising alarms about research code unintentionally becoming weaponized.
- [00:16:33](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=993s) **AI Weaponization & Cybercrime Economics** - The speakers discuss using the attacker's AI against them and how AI-driven tools could reshape the affiliate model of cybercrime, potentially eliminating human hackers.
- [00:21:06](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1266s) **Debating Human Ransom in Cyber Extortion** - Panelists argue against paying non-monetary ransoms, likening them to blackmail and emphasizing that yielding only fuels continued attacks.
- [00:25:42](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1542s) **Rising RAT Use Over Info Stealers** - The speakers examine why cyber attackers are shifting from popular info-stealer malware to Remote Access Trojans, linking targeting dynamics, recurring scams, and historical insights.
- [00:28:50](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=1730s) **Hype vs Reality in Malware Naming** - The speaker criticizes media and analysts for sensationalizing and renaming existing info-stealing tools such as RATs, while noting that ubiquitous mobile devices now provide an even broader attack surface.
- [00:34:56](https://www.youtube.com/watch?v=u-ZRZX2VZh4&t=2096s) **Fundamental Cyber Hygiene Over Sophistication** - The speakers argue that strengthening basic security practices (patching, phishing awareness, device control, and a "human firewall") is far more effective than pursuing advanced zero-day or AI-driven attacks.

## Full Transcript
0:00 Are we making cybercrime too easy? Would you rather get hacked by a human or an AI agent? Would 0:05 you fire someone to stop a data leak? What's up with cybersecurity RAT problems? All that and more 0:10 on Security Intelligence. Hello and welcome to Security 0:17 Intelligence, IBM's new weekly podcast where we break down the most important cybersecurity news 0:23 with help from a panel of expert practitioners in the field. I'm your host, Matt Kozinski. This week's 0:30 stories: vibe hacking. Is AI making cybercrime too easy? HexStrike AI: is this the 0:36 beginning of the agentic revolution in cyber attacks? Scattered Lapsus$ Hunters are back with an 0:42 unconventional ransom demand. And RATs, RATs everywhere: Recorded Future says Remote Access 0:48 Trojans are becoming attackers' malware of choice. Joining me today to break it all down: first up, if 0:54 you've ever been to the IBM Technology YouTube channel, you've seen this man before. Jeff Crume, IBM 1:00 Distinguished Engineer, Master Inventor, AI and data security. Jeff, thanks for joining us. My 1:05 pleasure. Glad to be here. Thank you. Nick Bradley of X-Force Incident Command and one of the three 1:11 hosts of the Not the Situation Room podcast. Which, folks, do me a favor: when you're done with this 1:15 episode, go to their YouTube channel, click like and subscribe. They're fantastic. Nick, thank you 1:19 for being here. Thanks for having me. And the illustrious Suja Viswesan, VP of Security 1:26 Products. Suja, have you picked out a hacker name yet? I'm still working on it, Matt. Still working on 1:32 it? Well, I'm glad to hear you're thinking about it. All right. So let's get into our topics for today. 1:42 First up, Anthropic's August 2025 Threat Intelligence report introduces the world to vibe 1:48 hacking. Now we've all heard of vibe coding, right? You describe what you want your software to do to 1:53 a generative AI assistant, and it spits out the code for you. You don't have to write a single 1:57 line yourself.
Vibe hacking applies this same workflow to cyber attacks. Now, the report that 2:03 we're talking about identifies a few different cases, but I want to hone in on one in particular, 2:08 because in this case, the threat actor didn't just use Claude Code to write malicious scripts. They 2:13 also used it to make tactical and strategic decisions, including asking it which data to 2:18 exfiltrate and how much ransom to charge for that data. At a certain point, it's almost like 2:24 Claude was carrying out the attack and this guy was just there to push some buttons. All in all, 2:28 the attacker hit 17 organizations before being shut down. So I want to start by just getting some 2:32 initial reactions to this development of vibe hacking. And I think I'll start by asking you: how 2:37 do you feel about this? We are learning as we go, but it's like any tool, right? Any weapon can be 2:42 used to protect as well as to basically be offensive to people. And one thing, you know, 2:49 even with organized, um, crime, it is a little bit easier to know than guerrilla warfare, because 2:56 there is nothing organized about it. So this is a really, really tough one to catch. So we need to have, 3:02 just like vibe hacking and vibe coding, vibe security as well. So 3:09 when you think about it, you have red agents, blue agents. How do they learn from each other and 3:14 start fighting each other and then get there? That's the only way, I got that. So you're thinking of 3:19 vibe security? Correct me if I'm wrong, it's kind of like an experiential approach to security, 3:23 almost, right? We're just kind of learning as we go and taking those best lessons and applying them. 3:26 Does that make sense? Absolutely. Because you saw that on the Anthropic one, they are looking at, 3:31 okay, what kind of prompts are being asked, what we can prevent before it becomes a problem. That 3:37 means the model is learning which ones it shouldn't be answering, because this can lead to 3:42 something bad.
Jeff, any thoughts to add there? Yeah. So what you just talked about is basically 3:47 the inevitable space that we've been heading toward for a while. I could foresee this coming. I think a 3:53 lot of people could foresee this coming. It's disappointing that it's already here now, because 3:58 I don't think we're all fully ready for it, but it was absolutely inevitable, and it now has put us 4:04 in the world of AI versus AI. It's a question of: is my AI better at defending than yours is 4:11 at attacking, or the other way around? And that's the arms race. It's moved now to the AI 4:18 battlefield, and now it's a question of who's going to have the best tech to deal with this, 4:23 and who's going to keep it the most updated, because it will constantly be changing, as we've 4:28 always had. So in many senses, it's not new. It's what we've seen. It's always security. 4:35 Cybersecurity has always been an arms race of the good guys versus the bad guys. And we get a tool, 4:39 they get a tool, they get a tool, we get a tool. And, you know, it's whoever, you know, deploys the 4:45 tool the best. And AI, as you said, is exactly that. It's what we refer to as a dual-use technology, 4:52 where it can do good or it can do bad, and it's got equal potential to do either, or 4:58 even both at the same time, depending on whose hands it's in. So what that means is organizations 5:04 who maybe weren't so robust in their deployment of AI, didn't really have a plan on how to use it 5:10 well, they're going to be the ones that lose in this battle. So I recommend: get out in front of 5:15 this and make sure your AI is better than theirs. There's no such thing as a kind of neutral tool. 5:19 It all depends on whose hands it's in, right? Nick, any thoughts on vibe hacking on your end? I would 5:24 love to try and offer a counterpoint or argue against either one of you, but I literally can't, 5:30 because the foreshadowing here is more obvious than that of a B-rate horror flick.
I mean, the 5:37 second we all started seeing AI taking the world by storm, those of us in security just went, oh, 5:43 wait for it. The weaponization of AI was, I think one of you already said it, it was inevitable. And 5:50 not only was it inevitable, but the bigger challenge here is the fact that it's going to, 5:55 it's going to lower the bar of what it takes to be a bad guy. Right? Because now I don't even have to 6:02 be a programmer. I mean, we already had, you know, malware as a service and things of the sort, but 6:07 now I don't even need that. Now I can just, you know, say, you know, I didn't want to say the name 6:12 of the, uh, the assistant, because everybody listening will fire it up, but: assistant name, write 6:18 me some malware. Okay, what would you like to do today? Right? I mean, we're right there. Yeah. I 6:23 remember 20 years ago when people started observing that you didn't even have to be a 6:27 really great coder to generate malware. It was a click-here-to-hack kind of situation. Well, now you 6:34 don't even have to click. You can just almost think it into existence, which is, you know, a very 6:40 different kind of thing. And as you said it, it's one of those things. I remember the first time I 6:45 woke up, couldn't sleep on one particular night, and I did what all good nerds do: turned on 6:51 YouTube and started watching tech channel stuff, and I'm watching and there's this 6:56 thing they're talking about called ChatGPT. And I thought, wait a second, AI is not doing that right 7:02 now. Is it really? And of course, this was November 2022, when it had first launched. So the 7:09 word was just starting to get out, and I thought, this is the coolest thing in the world. I'm not 7:13 going to sleep tonight because I am too excited about what the possibilities are. I took that 7:19 imaginary hat off and put my security hat on and said, oh my gosh, I'm not going to sleep for a few 7:24 nights.
Because now the potential of this is mind-boggling, and we 7:31 haven't scratched the surface yet. So in terms of good or bad, I can't see this going bad at all. 7:36 Yeah, right. Right. What could possibly go wrong? You know, I'm glad you bring up that issue of de-skilling, 7:42 sort of, Nick, right, in making it even easier. And this is a trend that we've seen for a 7:46 long time, right? The trend in hacking has always been it gets easier and easier to be a 7:50 cybercriminal. And I'm sort of wondering, actually, on that note: do you think that this development 7:55 is going to put any pressure on that affiliate model that we're so used to seeing, that Jeff 7:59 mentioned, right, where if you're somebody who doesn't really have a lot of skills, you got to go 8:03 find a ransomware gang, rent ransomware from them or whatever, and give them a cut of the profit? 8:08 Do you really need malware as a service if you could just ask an AI to come up with a scheme for 8:13 you? I guess I'm just wondering if this affects the economics of cybercrime. Any thoughts on that? 8:17 It could eventually. I don't think we're at that point yet, but that's just a turn of the page. It's 8:23 another one of those inevitabilities. I read an article in The Register just today that was 8:29 talking about some researchers that came out with their version of this just before the version you 8:35 just cited, and they were doing it as a research project. They didn't intend for it to be released. And 8:40 then all of a sudden they checked it out on VirusTotal and found out that it was now being 8:46 discovered by other people. And so even though they intended some good from it, you know, this 8:51 is where it goes. And the ability for this thing to get smarter and smarter. The version 8:56 they had, they said, was polymorphic, which, we've had polymorphic viruses for ages. But just imagine 9:03 the speed if literally every instance of this looks different than every other instance of it, 9:08 then.
Oh, good luck in detecting. Uh, not impossible, but certainly the degree of difficulty just 9:15 went way up, and AI would be really good at hiding. Yeah, I feel like we're watching an 9:20 episode of Star Trek with the Borg, right? They've adapted. Absolutely. Yes. Or an episode of, uh, Black 9:26 Mirror that keeps me up at night. Let's move on, then, 9:33 to the next story for this week, talking about HexStrike AI and how it might help attackers launch 9:40 their own armies of AI agents. Right. So HexStrike is positioned as a legitimate security tool, 9:46 just like those LLMs we were just talking about, and it's an offensive security framework that 9:51 serves as an orchestration and abstraction layer to control large numbers of AI agents. The idea is 9:58 that a framework like this would help you automate red teaming or penetration testing by 10:02 getting a bunch of agents to operate these tools automatically for you, which, you know, obviously 10:06 there's some real value to a technology like that. But of course, again, hackers got their hands on it. 10:12 Threat actors saw it and thought, how can we use this for our own gain? And the dark web forums now 10:18 are just full of chatter about HexStrike and how it can be weaponized and used to marshal their 10:24 own kind of evil AI agents, and people are even using it to sort of start developing exploits 10:29 for some difficult vulnerabilities that might have taken a lot longer to develop exploits for. 10:33 So I'm sort of wondering, if this, you know, you always have to be careful with the way these 10:37 things are covered in the media, because we hear a lot of sort of apocalyptic talk sometimes. But 10:41 I'm wondering if this is an important moment in the weaponization of AI agents. Jeff, let's start 10:45 with you. Any thoughts on this one? The short version is: same song, second verse. We just talked 10:51 about click here to hack.
Well, this is yet another tool that accelerates that and enables 10:57 that kind of capability. It reminds me, again, if you've been around in this space long enough and 11:03 gathered the gray hairs that I have, in many senses the things that are new don't 11:09 seem so new. They all seem like variations on a theme. And I immediately, when I saw this, harkened 11:16 back to a technology that probably will go over most people's heads. They don't remember this, but 11:20 it was called SATAN. SATAN was an acronym; it was like Security Administrator Tool for 11:27 Analyzing Networks, or something along those lines. But it was one of the first 11:32 network vulnerability scanners to come out. And when it came out, it was highly controversial. This 11:38 was probably 25 years or so ago, maybe even longer. And the idea was that the tool was 11:45 released as a way to test your network to see if you had vulnerabilities, because the idea is you 11:51 want to find them before the bad guys do. And so it was kind of the predecessor to all the network 11:57 vulnerability scanners and all these other kinds of things that we had. But again, highly 12:01 controversial. A lot of security people said, you just automated the attack process for the bad 12:06 guys. This is the end of the world. Okay. Well, I don't know. We're still here. I think we're still 12:12 here. Uh, this is not an alternate universe. We've continued to exist. So part of me gets worked up 12:19 when I see this, because we just made the job harder for defense. But part of me also says, yeah, 12:25 but we've been here before. Yes, this one is more difficult. It always gets more difficult as we 12:31 move forward in time. Isn't that part of our lives, though, Jeff, surviving apocalypse after 12:35 apocalypse? I mean, absolutely, absolutely, yes. Here's one thing I'll say: everyone who's 12:41 ever predicted the end of the world or the end of technology, they all have exactly one thing in 12:46 common.
You know what that is? They've all been wrong. So I hope that continues to be the case. I'm 12:53 going to be an optimist. We still haven't disproven simulation theory. We could all be in a 12:57 computer. But, you know, I'll leave that for a different episode. Um, I also sort of feel like if 13:02 you name your tool SATAN, you're asking for trouble. But again, I think, so yeah, a lot of people 13:07 kind of fall into that. Oh, well, it's a double-edged sword, right? You can't limit what you 13:14 do with the idea in mind that, oh, the bad guys might use this for bad things, because if you do, 13:20 then you just stop doing everything, and then they'll be the first movers on it, Nick. They'll be 13:25 the ones that do it before we do. So I don't think the answer is don't do this stuff. I 13:30 think the answer is do it, and do it better and faster than the bad guys do. It's just that it's a 13:34 rapid evolution. We had a lot of time before now. We had time for regulations and everything to 13:41 catch up. With data, with social media, we saw that we didn't actually have time. A lot of damage was 13:46 already done, and now we are learning. I'm just hoping we are reacting much faster, because it's 13:53 still going to be reaction. Because as the technology comes in, we have to react much faster; 13:57 the reaction time needs to be shorter and shorter so that we can get better at it. And I am 14:04 an optimist, just like Jeff. I do believe that we'll see hell before we see heaven, but it's going 14:09 to be a journey. Definitely a journey. So we've got two optimists on the panel. Nick, where would you 14:14 place yourself on that? I am not an optimist or a pessimist. I consider myself a realist that plans 14:20 for the worst and hopes for the best. You know what, I think that's a good approach in 14:25 cybersecurity. Um, I want to play a quick game with you folks real quick. A little round of 14:31 Would You Rather, right?
We've been talking a lot about AI attacks and whatnot. And I want to ask 14:35 you folks, as defenders: if you had to go up mano a mano, one on one, against either a human hacker or 14:42 an AI agent, who are you picking, and why? Let's start with you. Who would you rather get attacked 14:47 by? I was leaning towards human, but I changed my mind to AI agent, because it is learning from a 14:52 lot of humans. And humans are completely unpredictable. And when you're learning from so 14:56 many of those humans, I have no idea what it's going to do. So yes, I'm going 15:03 to have a tough time dealing with it. I would rather deal with one human. At least it's very easy to 15:08 figure it out. That was my kind of thought, too. Nick, what about you? I'm sticking with going 15:12 against human, not AI. Because I guess maybe I'm just too much of a sci-fi junkie, and I've seen 15:19 too much terrifying AI and the things that it can do, and, as you say, learning from humans. So yes, it 15:24 learns from us, but then it learns how to do what we do, but better and faster. And so at 15:31 least with a human, they have to eat, sleep, bio breaks, things like that. The AI, 15:38 and this is going to sound like a nightmare, it never sleeps, it never stops. It will always keep 15:43 coming. I'll go ahead and just stir the pot and say the opposite, just because it's more interesting if I 15:48 do. Um, I'm going to say, for the moment, I might choose AI. And the only reason is because AI has a 15:54 problem with hallucinations. So if I could trick it into doing the wrong thing, I might even be 16:01 able to turn it back on itself and point the gun back in the other direction. I don't know if I 16:06 can, but I'm going to hope that I can. And, uh, but now, that's a moment in time. If you ask me this a 16:13 couple of years from now, I might not say the same thing, uh, because the other points that were made 16:18 were absolutely valid as well. Well said.
I can agree with that, because I have dealt with enough 16:23 AI hallucination, and my other favorite word, confabulation, that, oh yeah, I could see where 16:29 you're coming from with that. There we go. I would love to see a kind of meta-weaponization, right? 16:33 You weaponize the attacker's AI against the attacker. That's a fantastic little move there. 16:38 It's kind of one of those Elmer Fudd, turn the gun barrel back on him and let him shoot. We 16:44 get a lot of classic cartoon references on this show. I'm very happy about that. Uh, but, you know, 16:49 something else just occurred to me, and I know I just brought this up in the last conversation, but 16:53 I also want to talk about the economics of cybercrime again a little bit, 16:57 because it almost seems to me like this exerts another pressure on that affiliate model, but from 17:03 the opposite direction. Right? In terms of, if you're a cybercrime gang, do you need human affiliates if 17:08 you can just outsource your work to AI agents? Right. So it's almost like I'm looking at that 17:12 vibe hacking and thinking it lets somebody who is not that good, uh, go out and do 17:18 their own attacks without a gang. And I'm almost looking at this HexStrike AI stuff as if 17:21 it lets a gang do its attacks without affiliates. But then you still need an affiliate group that's 17:26 going to manage the LLM and the AI to do the affiliates' work. I think it's a real tragedy that 17:30 we're talking about putting hackers out of work, that AI is taking their jobs, and I think we should rise 17:37 up in defense of that. Yeah. No, no, I'm not going to lose a moment's sleep over that if they get 17:43 replaced by agents. Let me be clear. I'm not. I don't feel bad for them. I'm just, you know, I'm 17:46 just kind of wondering how things are going to work on the dark web, if it's going to be dead 17:50 internet theory for them too. Everyone's a bot now. You know, we're using the word agents now.
So now 17:55 all I can think of is Mr. Anderson. Yeah. Yeah, exactly. On the next episode, 18:01 we'll talk about how realistic The Matrix is. Prepare: we're in it. Oh. 18:10 Scattered Lapsus$ Hunters are back in the news, this time with a new kind of extortion technique: 18:16 firing people. So this unholy collaboration between three of the most notorious cybercrime 18:22 gangs, today we're talking, of course, about Scattered Spider, Lapsus$ (with a dollar sign), and 18:27 ShinyHunters, popped up a couple weeks ago with a Telegram channel that claimed they were working 18:32 on a new ransomware strain, and they are now back, because they claim to be in possession of 18:37 internal Google data, and they're threatening to leak it unless Google terminates two specific 18:42 security employees. We don't know who these employees are, and I'm frankly skeptical myself 18:46 that Google would ever do that. So I want to start by asking you, Nick: what's the 18:51 thought process here? Do they think this is actually going to work, or is this something else? 18:54 This is such a deliciously diabolical story. And I really 19:01 think this is someone overplaying their hand. I think they feel like they have a stronger 19:06 hand than they do. Uh, and so, to put a little backstory on this: the reason that I've 19:13 enjoyed this story so much is, one, on our other podcast, Not The Situation Room, we've 19:18 done two episodes on this so far: one when it first debuted that they were going to 19:23 create their little triumvirate of power, and then a second one when they decided they were 19:29 going to try to strongarm Google. So a lot of this plays out from the, uh, the Salesforce 19:36 Salesloft breach, right? Because that's allegedly where the data from Google they have came from, 19:41 and Google had, I think it was four researchers that put out a research paper on 19:48 this and on Shiny. Well, we got all kinds of different names for 'em.
Scattered Lapsus$ Hunters, 19:53 Shiny Happy Hunters. I mean, just because the more we talk about it, the more we kind of just talk 19:58 about it however we want. However, oh, and the shiny thing, if you missed that part, that's a Pokémon 20:04 reference; you'll have to look that up on your own. I got that one. Well, ShinyHunters, just 20:10 think about it. But anyway, they supposedly have this data from Google, and Google has done the 20:15 research. So it feels like a detective story, right? The bad guy has got info or dirt on 20:21 the good guy. The good guy is researching the bad guy. And so now they're trying to strongarm and 20:26 say, stop investigating us or we're going to release the, you know, release the 20:30 kraken on you. We'll let all your data loose. Or fire these two people and stop investigating us 20:35 right now. And I don't know why they think they have that level of leverage. I mean, we have seen 20:42 so many times in the past, when a company's data does get compromised, does get 20:49 leaked, makes its way to the dark web, it's out. It's done. You're not going to shame me 20:56 into doing more, because what happens when I say, okay, I'll capitulate, I'll fire these two people? Then 21:02 you go, is there anything else you need me to do? Because that's what's going to happen. Absolutely. 21:06 I'm with Nick on that, because there is no end to blackmailing, right? There is no end to it once you 21:11 give in. So the only way we have to put a stop to it is, like, accept the shame, whatever it is, and 21:15 then go from there. We saw it with the credit report companies, from which very, very valuable 21:21 data was leaked. And then that's it. And we see more and more companies 21:27 stopping paying ransoms, because it doesn't work. Uh, right now there is an 21:34 automotive company going through this as we speak, right? Production is stopped. These things are 21:39 happening.
But I don't believe the companies are going to give in, and they shouldn't. Jeff, I know 21:45 you've done a video before on whether or not you should ever pay a ransom. So I'm wondering 21:48 about your thoughts on paying a kind of human ransom. Yeah, sure. Exactly. My thought on this is, 21:53 this is essentially a different kind of ransomware, because ransomware is basically 21:59 an extortion attack. And in this case, rather than asking for money, they're asking for a particular 22:05 action. So I guess at least in this case, there's going to be no bitcoins exchanged in 22:11 this. But the reality is, this is mafioso-style stuff. You know, it would be a shame if this 22:18 stuff got out. You know, if a window got broken. That's the kind of ham-handed 22:24 stuff they're doing here. I feel like it's not likely to succeed. I do agree with what 22:30 Nick and Suja have said that, you know, I hope they don't give in. I can't imagine that they 22:36 would. Because where does this end? I mean, if you give in on one 22:43 of these kinds of cases, every little, you know, hacker collective is going to start demanding 22:50 things of every single company. It's a really bad precedent, and it never ends. I do think, when 22:57 it comes to paying ransoms, look, I know people have business decisions they have to make. And 23:02 I talked to a CISO of a hospital one time, and he said, when it comes to ransomware, we have three 23:08 priorities: patient safety, patient safety, and patient safety. I get it. Okay. But 23:15 I think, in general, the problem with paying is you make yourself a welcome soft target for the 23:22 future. So, okay, this person paid, and the bad guys are going to see them as a sucker. 23:29 You just painted a bigger bullseye on yourself. You might have dodged this bullet 23:33 only to catch a bigger one later. And so this one doesn't seem like a really smart idea.
23:40 I agree with what Nick said. I think they've overplayed their hand. I think they're going to 23:45 find that out. Um, and, you know, release the info; you know, we'll have to deal with it and 23:51 see if they have anything. Yeah, that makes a lot of sense to me, because, again, I, for myself, could 23:55 not figure out what they were thinking. But I think, Nick, your theory that they've 23:58 just gotten too big for their britches, that they really think they have more leverage than they actually 24:01 do, makes the most sense to me. And so the next thing I was going to ask, and I kind of feel 24:06 like I know the answer already from everybody, but could you ever foresee a ploy like this actually 24:10 working, or is this just complete nonsense? Any thoughts on that? I think in this context, no. 24:15 I think it could happen if it was on a much greater scale. If we were talking about not just 24:22 two unnamed employees at a tech company, but what if we're talking about the CEO of the company, 24:29 where there's some sort of ransom extortion attack? In that case, the CEO is making the 24:35 decision. CEO says, okay, you know, whatever you say, we'll do. If we're talking about a head of state, 24:41 okay, that's a wholly different kind of deal. But I think at this level, you know, we're talking about 24:46 two employees. It just doesn't seem like it benefits them to follow through. I was leaning in 24:52 the other direction, to be honest. I was thinking that, I mean, I could definitely see what you're 24:57 saying, but I think it would be easier to make this work on a smaller company that basically has 25:03 no way to survive this ransom attack. Like, we got to capitulate or we got to close our doors. So 25:09 it's like, Susie, Bobby, I'm sorry, we gotta let you go, or we gotta just close the doors and bankrupt 25:14 the whole company. At the end of the day, like Nick and Jeff mentioned, it depends on what the blast 25:20 radius is.
Are our lives in danger? It's very easy to say, hey, we don't negotiate with terrorists. But if it's your children, if it's you, you're going to make very, very different decisions. So I understand that in some of those cases it will change based on what's at stake. For the most part, though, I don't believe in it: once you start negotiating, there is no end to it. Yeah, and that makes a lot of sense. Any time you give in to those demands, you get a target on your back. You see the same thing even in interpersonal scams: people who fall for one once get targeted again and again and again. Same thing for an organization. Let's move on, then, to our final story for today: Recorded Future finds attackers are shifting away from info stealers and using more RATs. In Recorded Future's H1 2025 Malware and Vulnerability Trends report, they found that the use of Remote Access Trojans increased quite significantly in the first half of the year, while info stealers, which have been quite popular for the past couple of years, were a little bit on the decline. So I'm wondering, first off, what kinds of factors might be fueling a shift like this? I'll throw it to you first, Jeff. Any thoughts on why we might be seeing a shift like this right now? Yeah, again, I feel like I'm the old man in the room talking about back in my day, because literally back in my day, 25 years ago, I wrote a book called What Hackers Don't Want You to Know, and I wrote about RATs back then. They were pretty new at that point, and I thought this stuff could be a big deal. And I'm shocked: good news and bad news. The good news for my publisher is we still haven't solved the problem. The bad news for us is we still haven't solved the problem. So here we are.
What's old is new again. I look at RATs as just a more sophisticated, more capable form of info stealer, because now I'm not just stealing your passwords and your keystrokes. I'm turning on the camera, turning on the audio, recording everything. I'm stealing your image, your likeness, your everything. Maybe I make a deepfake out of that; I'm sure somebody is going to consider that possibility. Maybe I'm gathering material for extortion. So yeah, this is a bigger, badder version of info stealing, where I'm getting more than just info. I'm assuming that might be the case. Or maybe they just got bored with the other stuff, who knows? Nick, what were you thinking on this one? First off, I'm a fan of Recorded Future, so this isn't a knock on them; it's a general observation about reports like this: we are having a battle of naming conventions. What are we calling things? In this case we're saying RATs are taking the place of info stealers, but just like Jeff said, a RAT is just an info stealer on steroids, right? Another one that falls into this category, and I don't know if we're going to talk about it on another episode, is encryptionless ransomware. It's another naming convention problem, because encryptionless ransomware isn't ransomware; encryptionless ransomware is a stealer. Ransomware is called ransomware specifically because it encrypts the data, so if we're not encrypting, it's something else, right? And RATs are also info stealers, so it's easy to put these reports together and say the decline of this, the rise of that.
You're playing a shell game with the naming conventions. At least that's my opinion. That makes a lot of sense. It's kind of like kidnapping someone and calling it real-world ransomware: it's not exactly the same thing if you think about it. And if you're trying to publish a report, or you're a media outlet, you've got to do something to get more clicks, more eyeballs. So call something something else, come up with a new jazzy term, do something to get people's attention: everything you knew is now a thousand times worse, it's the end of the world. Yeah, we told you that last week, but we really mean it this week. It's really real this time. The realest real we've ever realed. Another apocalypse? Yes. Suja, what do you think? I agree with both of them. It's another form of info stealing, but the world has changed, in the sense that everybody is carrying a mobile phone these days and transacting their most sensitive things on it: from dropping off and picking up the kids, to your bank accounts, to your most intimate private details, everything is available in there. So it becomes a much easier target. That's why I mentioned "we won't negotiate with terrorists": that can hold at the level of a large organization, but think of the individual who can be blackmailed in small ways into doing things they're not supposed to do, or into giving up money and everything else. These things happen because most people don't really understand technology, yet they are equipped with technology, and it's what they live with day in and day out. So this has become very, very dangerous. I'm always educating my parents: okay, don't click that link, and even if it comes from me, check with me before you click on it.
All of this, because entertainment, life, everything is now in a mobile phone. It's not like the laptop that a small part of the population used to go to work on; now everybody is using tech, and with AI it has become very easy to infiltrate. That is why the info stealing is happening in a different way. Just like you said, now you are able to get people's biometrics, you are able to look at the most intimate details that you didn't have access to before. So as we've all said here, and this makes a lot of sense to me, what the RAT gives you that the info stealer doesn't is that you can do more than just steal, and you can steal different kinds of things. So I'm wondering how this changes how you as defenders relate to the threat landscape. If you know that RATs are on the rise instead of info stealers, are there changes you'd be making, or changes you'd recommend people make, to defend themselves against these things? We'll start again with you, Jeff. Any thoughts on that? I think a lot of the basic blocking and tackling is still just as relevant as it has been. Going back to that book I told you about: even though it's 25 years old, and I'm not trying to sell copies because it's hard to even find anymore. I can't get it. Shameless plug away, Jeff. Yeah, exactly, if you can find it. It was written on parchment; it's that old. But here's the thing: 90% of what I wrote there is still true today, which again is the good news for the publisher and the bad news for us, because I was writing about the things we need to be doing, the ways people compromise systems, and what we use to prevent them from doing it.
And yet here we still are. So obviously, when it comes to RATs or info stealers: keep your systems patched; keep your antivirus and malware scanning in place; look for behavioral anomalies in the network, because these things will tend to exfiltrate information. We have the tools to do a lot of this kind of detection, EDR tools and so on; we're just not applying them all the time. And of course, sometimes we are applying them and doing everything perfectly, and a zero day comes out that somebody takes advantage of. That means we've got to find ways to bring those windows down, so that we're patching faster and the vendors are responding with patches faster. So it's a lot of the same stuff. There's not going to be some brand-new technology that you sprinkle over everything so that all of a sudden you don't deal with RATs. There's no RAT trap where you just put in some cheese and it kills all the RATs. It's never going to be that simple. I like that you brought this back almost full circle, because we were talking at the very beginning about how security is inherently dynamic, always an arms race. And like you said, this is another situation where even if there are things we're doing perfectly, sometimes there's still a zero day out there and you have no choice but to react to it. So I like that we had a little full-circle moment. Suja, how about you? Any thoughts on this? I think I've mentioned this before: it's very much like a pandemic. You need basic hygiene. You can wait for the vaccination, right?
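An aside on Jeff's advice to "look for behavioral anomalies in the network, because these things will tend to exfiltrate information": the core idea can be sketched as a baseline-and-deviation check. This is a toy illustration, not any real EDR product's logic; the host names, byte counts, and the 3-sigma threshold are all made-up assumptions.

```python
from statistics import mean, stdev

def flag_exfil_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose outbound byte count deviates sharply from their baseline.

    baseline: dict mapping host -> list of historical daily outbound byte counts
    current:  dict mapping host -> today's outbound byte count
    Returns hosts where today's volume exceeds mean + threshold * stdev.
    """
    flagged = []
    for host, today in current.items():
        history = baseline.get(host, [])
        if len(history) < 2:        # not enough data to establish a baseline
            continue
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1.0             # avoid divide-by-zero on perfectly flat baselines
        if (today - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# Hypothetical hosts and daily outbound MB; laptop-17 suddenly ships 50x its norm.
baseline = {
    "laptop-17": [90, 110, 100, 95, 105],
    "laptop-42": [200, 210, 190, 205, 195],
}
current = {"laptop-17": 5000, "laptop-42": 201}
print(flag_exfil_anomalies(baseline, current))  # ['laptop-17']
```

Real detection stacks baseline many more signals (destinations, timing, protocols, process lineage), but the shape is the same: model normal, then flag deviation.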
At the same time, you have to wash your hands; you have to have basic hygiene. That is very, very important. Look, at IBM we have gone passwordless, because if you don't have a password, it cannot be stolen. So you need to get certain basic hygiene in place. Are you making sure your data is secure? Because it's not about if, it's about when, and when it happens, is your data secure enough that you can close up shop without making it accessible to people? When it comes to passwords, are you keeping your secrets in a vault instead of leaving them out? Because they could be sitting in git; people put all of those things there. Previously you had to go mine for it, you had to search for it. Today, with agents, it has become very easy to go from zero to vulnerability in minutes, not days; previously it took days. That is why, in the way we build products, we're asking: are we making sure our products are built with resilience? Are we making sure the basic hygiene is there so that these zero-day vulnerabilities become less and less common? Nick, what about you? Any thoughts there? I'd love to offer some astounding revelation that nobody's gotten to yet, but I can't, because the problem has already been stated. We don't need a new zero day. We don't need some brand-new, complex, AI-developed exploit to take advantage of something. Because, and I don't want to say "we," because it's not us, the basic blocking and tackling is still getting in our way. We're missing the forest for the trees. You've still got employees clicking on phishing emails. You've still got employees bringing devices to work they aren't supposed to have, loading software they're not supposed to have, installing cracked software. That's just letting people right in.
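Suja's point about secrets left in git, which used to require mining but are now trivial for agents to find, is exactly what automated secret scanning exploits on both sides of the fence. Here is a minimal sketch of the idea; the patterns and rule names below are toy assumptions, and real tools such as gitleaks or truffleHog ship far larger, carefully tuned rule sets.

```python
import re

# Toy detection rules; a real scanner has hundreds of tuned patterns.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key":   re.compile(r"(?i)\b(api[_-]?key|secret)\s*[=:]\s*['\"][^'\"]{16,}['\"]"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (line_number, rule_name) pairs for lines that look like leaked secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# Hypothetical config file contents; both keys here are fabricated examples.
sample = (
    "db_host = 'localhost'\n"
    "aws_key = 'AKIAABCDEFGHIJKLMNOP'\n"
    "api_key = 'sk-live-0123456789abcdef0123'\n"
)
print(scan_text(sample))  # [(2, 'aws_access_key_id'), (3, 'generic_api_key')]
```

Running something like this in a pre-commit hook or CI pipeline is the "vault instead of git" hygiene in practice: catch the secret before it ever lands in history, because once committed, it should be considered burned.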
And I could just keep going down the list: not patching all of the things, the basics we think everyone should just know by now, they're still there. I mean, the Salesforce Salesloft incident, as far as I know, was not even a compromise done through any sophisticated attack. It was social engineering. So what are you going to put in place to stop social engineering, a firewall for the human mind? Yeah. It goes back to something Suja has said before, which is that this is often as much about human psychology as it is about your actual technical controls. And that's the slippery thing: you can't put access management tools on your employees. People are going to do what people do; you can't stop them from clicking on things, which I think is the thing you've said before, Nick. They're always going to keep clicking on things. You get a virus, you get a virus, you get a virus, we all get viruses. You can't make anything foolproof, because they keep making better fools. Yes. There was some news today saying that cybersecurity training has no value, especially when it comes to phishing, because people just click automatically and the phishing still happens. That is why it becomes very important to automate some of these things so that the humans don't have to get it right, so that even if they inadvertently click, they're not compromised. That's why I talked about passwordless: it's one way of ensuring you're never typing a password, even when you click, because the system asks for something else entirely. Those are the things we can do to stop it. Because the other part is, we have shorter and shorter attention spans.
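The passwordless point Suja keeps returning to works because nothing reusable is ever typed: the device signs a challenge bound to the origin it actually sees, so a phishing site relaying the challenge gets a signature for the wrong origin. Here is a toy, stdlib-only model of that origin binding; real passkeys (WebAuthn) use asymmetric keys and a registered credential, so the HMAC and the example origins below are simplifying assumptions just to keep the sketch self-contained.

```python
import hashlib
import hmac
import os

def sign_assertion(device_key, origin, challenge):
    """The device signs the server's challenge together with the origin it sees."""
    return hmac.new(device_key, origin.encode() + challenge, hashlib.sha256).digest()

def verify_assertion(device_key, expected_origin, challenge, assertion):
    """The server recomputes the signature over the origin it expects."""
    good = hmac.new(device_key, expected_origin.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(good, assertion)

key = os.urandom(32)        # stands in for the credential established at enrollment
challenge = os.urandom(16)  # fresh per login attempt

# Legitimate login: the browser reports the real origin, so verification succeeds.
ok = verify_assertion(key, "https://bank.example", challenge,
                      sign_assertion(key, "https://bank.example", challenge))

# Phishing relay: the device signs the lookalike origin, so the bank rejects it.
phished = verify_assertion(key, "https://bank.example", challenge,
                           sign_assertion(key, "https://bank-example.evil", challenge))
print(ok, phished)  # True False
```

This is why "even if they inadvertently click, they're not compromised" holds: the user has no password to surrender, and the credential simply does not work anywhere except the genuine origin.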
Jeff, don't you think? Even while we're talking here, I have to check my phone. What's happening here or there? What did you say? I forgot already. Exactly. "And I clicked the link! Oh my God, what do I do?" Okay, that's all the time we have for today. Thank you so much, Jeff, Suja, and Nick for joining us, and thank you, listeners and viewers, for hanging out with us. Make sure to subscribe to Security Intelligence wherever podcasts are found. Stay safe out there, and remember: stop clicking on things.