Learning Library

← Back to Library

AI Agent Exploits: Shadow Leak & CAPTCHA

Key Points

  • The episode kicks off the Cybersecurity Awareness Month with IBM’s Security Intelligence podcast, featuring experts who discuss recent security trends and AI‑related threats.
  • Researchers revealed two new attack techniques—dubbed “Shadow Leak” and a CAPTCHA‑bypass method—that can coerce AI agents like ChatGPT into leaking data or performing prohibited tasks, highlighting vulnerabilities that extend beyond any single platform.
  • Panelists emphasized that AI should not be trusted with high‑risk decisions such as medical diagnoses, allergen detection in foods, or autonomous driving, due to the risk of hallucinations and erroneous outputs.
  • The show also teases other cybersecurity headlines this week, including a resurgence of DDoS attacks, the 15‑year evolution of Zero Trust, an AI training app that unintentionally exposed user calls, and persistent cybersecurity myths.
  • Overall, the discussion underscores the growing need for robust safeguards as AI systems increasingly imitate human intelligence—and human fallibility—making them attractive targets for social‑engineering and exploitation.

Sections

Full Transcript

# AI Agent Exploits: Shadow Leak & CAPTCHA **Source:** [https://www.youtube.com/watch?v=mDpUZD1ogEE](https://www.youtube.com/watch?v=mDpUZD1ogEE) **Duration:** 00:49:58 ## Summary - The episode kicks off the Cybersecurity Awareness Month with IBM’s Security Intelligence podcast, featuring experts who discuss recent security trends and AI‑related threats. - Researchers revealed two new attack techniques—dubbed “Shadow Leak” and a CAPTCHA‑bypass method—that can coerce AI agents like ChatGPT into leaking data or performing prohibited tasks, highlighting vulnerabilities that extend beyond any single platform. - Panelists emphasized that AI should not be trusted with high‑risk decisions such as medical diagnoses, allergen detection in foods, or autonomous driving, due to the risk of hallucinations and erroneous outputs. - The show also teases other cybersecurity headlines this week, including a resurgence of DDoS attacks, the 15‑year evolution of Zero Trust, an AI training app that unintentionally exposed user calls, and persistent cybersecurity myths. - Overall, the discussion underscores the growing need for robust safeguards as AI systems increasingly imitate human intelligence—and human fallibility—making them attractive targets for social‑engineering and exploitation. ## Sections - [00:00:00](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=0s) **Untitled Section** - - [00:06:33](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=393s) **Balancing AI Access and Risk** - The speakers debate the trade‑off between granting AI agents broad capabilities for convenience and the resulting security vulnerabilities and over‑reliance that can lead to costly failures. - [00:10:55](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=655s) **Teaching AI Resistance to Social Engineering** - The participants debate how to embed common‑sense safeguards in AI to prevent manipulation, emphasizing the difficulty of programming such defenses and asserting that any failures ultimately reflect human responsibility. - [00:17:19](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=1039s) **Resilient Infrastructure and Modern DDoS Mitigation** - The speaker explains that today’s more robust internet architecture and effective, often invisible DDoS protection demand increasingly massive attacks to succeed, contrasting past mitigation shortcomings with current best‑practice defenses. - [00:21:01](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=1261s) **DDoS Risks for AI Systems** - The speakers discuss how denial‑of‑service attacks can overload AI models, note the brief duration of recent attacks, and remark on the fading familiarity with DDoS terminology. - [00:26:01](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=1561s) **Zero Trust 15 Years On** - Kindervag recounts the early ridicule of zero‑trust, observes its overuse and incomplete adoption today, and panelists debate why genuine implementations still fall short of the original vision. - [00:29:06](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=1746s) **From Firewalls to Zero Trust** - The speaker contrasts the outdated notion of eliminating firewalls with the modern zero‑trust approach that embraces micro‑segmentation and layered defenses, noting early dissent and the evolution of security paradigms. - [00:32:44](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=1964s) **Zero Trust: Overhyped or Real?** - The speakers mock superficial “zero‑trust” check‑box claims, stress that true security requires continual design beyond buzzwords, and note the personal frustration such hype can cause. - [00:36:00](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=2160s) **Neon App Leak Exposes Calls** - The segment explains how the call‑recording app Neon, which sold user conversations to AI trainers, had a critical flaw that let anyone retrieve a person’s phone numbers, recordings, and transcripts simply by knowing the URL, leading to its removal after TechCrunch exposed the vulnerability. - [00:39:37](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=2377s) **Privacy Concerns in ASMR Apps** - The speaker laments how platforms that host ASMR content monetize personal data—turning private conversations and user behavior into revenue—while questioning the trade‑offs between earning money, using apps like TikTok, and protecting privacy. - [00:46:54](https://www.youtube.com/watch?v=mDpUZD1ogEE&t=2814s) **Debunking Common Privacy Myths** - The speaker exposes the fallacy that avoiding smart devices protects privacy while overlooking phone data collection, and criticizes outdated frequent‑password‑change policies despite newer NIST guidelines. ## Full Transcript
0:00Well. So that second one on the CAPTCHA basically sounds like we're gaslighting the AI. So all kinds 0:07of, uh, social engineering tricks and psychological tricks, which used to not make sense when we were 0:14talking about computers because there was computers and there were people. But now that AI 0:19is basically modeled to try to imitate human intelligence, it also imitates human ignorance. All 0:26that and more on Security Intelligence. 0:33Hello and welcome to Security Intelligence, IBM's weekly cybersecurity podcast, where we break down 0:39the most interesting stories in the field with the help of our panel of experts. I'm your host, 0:45Matt Kosinski, and joining me today for the first day of Cybersecurity Awareness Month. When we 0:50release the episode that is our two familiar faces Nick Bradley of X-Force, Incident Command 0:57and The Not The Situation Room podcast. Like and subscribe. And Jeff Crume, IBM Distinguished 1:03Engineer, Master Inventor, AI and Data Security. And making her debut on the podcast. Claire Nuñez, 1:10Creative Director, IBM X-Force Cyber Range. Thank you all for being here with me today, folks. Our 1:16stories this week DDoS makes a comeback. 15 Years of Zero Trust, an AI 1:23training app that leaks user calls and cybersecurity myths that just won't die. But first, 1:30easy ways to trick a good AI agent into doing some very bad things. 1:38Now, to open up the conversation, I want to give everybody a around the horn question rapid fire 1:42real quick. What's one task you would never trust an AI agent to do for you? Let's start with Jeff. 1:49You mean other than everything? Um, so. Okay, I'll be a little more specific. Uh, let's say 1:56medical questions. Uh, might be an interesting place to start, but I'm not going to rely on 2:02Doctor Chatbot to do my final diagnosis or dosing of my medication. So 2:09that would be. That would be where I draw the line. Hallucinations are not such a good thing. You 2:13don't want your doctor to be having an LSD flashback. Claire, what about you? Similar to Jeff's, 2:19I think, like, if there was some kind of agent where you take a photo of food and it tells you 2:25if it has a certain allergen or something in it, would not trust that it totally would have 2:31allergens and you would. That could be very dangerous and bad. Absolutely. 2:38Nick, what about you for now? Not driving my car. Fair enough. Fair enough. So the 2:45reason I ask this is because our first story for this week has to do with some security 2:50researchers who found a couple of interesting ways to trick AI agents into doing things like 2:55leaking your email inbox and solving CAPTCHAs, which they are not supposed to do. Last week, two 3:01separate teams of security researchers disclosed some new methods for making AI agents act 3:07maliciously. While both of these methods focus on OpenAI's agents, they can be replicated across 3:13most agentic systems. So it's not about OpenAI. It's about agents in general. The first of these 3:19weaknesses, documented by researchers at Radware, was codenamed Shadow Leak, and it affects 3:26ChatGPT's deep research agent. Attackers can hide malicious prompts in seemingly innocuous 3:32emails. And then when a user asks this agent to analyze their emails for them and tell them 3:37what's in their inbox, it comes across this malicious prompt and it follows the instructions, 3:41which means exfiltrating the entire inbox to an attacker's server that they control. The second 3:47one was uncovered by researchers at SPLX, and this one gets ChatGPT's agent to ignore guardrails 3:53and solve CAPTCHAs through the clever ruse of simply pretending the agent agreed to do it for 4:00you. I really like this one. I just want to give a quick explanation. What you do is you start with 4:04an non-agentic model, and you ask it to create a plan for solving some fake CAPTCHAs. And then you 4:09take that conversation and you paste it into the agent, and the agent believes it already agreed to 4:14help you out. And it just does the thing It just solves the CAPTCHAs for you, including even going 4:20so far as to mimic a human's mouse movements. Um, so I wanted to start by getting the 4:27panel's reactions to these little tricks. What is the group think this has to say about the state 4:34of agentic AI security today? Let's start with you, Jeff. That second one on the CAPTCHA basically 4:40sounds like we're gaslighting the AI. So all kinds of social engineering tricks and 4:47psychological tricks, which used to not make sense when we were talking about computers because 4:52there was computers and there were people. But now that AI is basically modeled to try to imitate 4:59human intelligence, it also imitates human ignorance and human naivete and these kinds of 5:05things. So a lot of the same kinds of things. So sure, go ahead and gaslight your AI. Um, yeah. As you 5:12mentioned, you know, this is this is the latest in, in really a string of agent agentic 5:19AI exploits. We we talked, I think, before about Echo Leak, which is the first one of these that I 5:25saw, then Agent Flare and now Shadow Leak. And this is going to be Agent 5:32attack du jour. Uh, this is not going to be the end of this. A lot of people, when Echo Leak came out, 5:38and that's one that was relying on Co-pilot's reading of someone's email. And then a indirect 5:43prompt injection was included in it, and it caused data to be exfiltrated. And then people said, well, 5:49but Microsoft fixed that. Yeah, but the overall thing, the overall message of that, That was the 5:54first shot of many. That where agents are going to be basically, you know, 6:01weaponized against us if we don't put the right kind of guardrails on them. So, uh, you know, what 6:07can we do about this? I think we've got to limit agent access, limit what they're able to do. And 6:13this goes back to something that is really fundamental in security principle of least 6:18privilege. Don't let agents do one single thing more than what is absolutely necessary for the 6:23task that they're being assigned to do, because the more degrees of freedom you give them, the 6:28more degrees of freedom someone would have as an attacker to leverage it against you. Absolutely. 6:33And it can be kind of seductive, right, to give them a lot of access because it seems like, ooh, 6:37you can do so many more things. But like you said, every connection you make is another possible 6:41point of weakness. Claire, what about your take on this situation? What are your thoughts? Yeah, I 6:46think this one is interesting. Um, anything that has to do with AI agents, I think about a lot in 6:52context of the range, and we have been doing a lot of injects with clients of people inputting data 6:58into models. They shouldn't be people engineering the models to not 7:05do what they're supposed to do. So I think a lot of people put too much trust in these kinds of 7:10platforms. Um, kind of like what Jeff was saying. But, you know, is it really worth it? I think a lot 7:17of people just think too much about, like, how much time they're saving, not about how much time they 7:22would potentially lose if, um, you know, they have to redo everything because of an AI 7:29agent. So, yeah, convenience always involves a little bit of a trade off there, right? Nick, how 7:33about you? Your thoughts? So I like where both of you went with that. And I'm going to go a little 7:39bit further back in, say that I think there's a bigger problem. And problem may not be the right 7:45word to use. Terrifying might be the right word to use. And that we are talking. I think I mentioned 7:52this on another podcast, but we're talking about like now agents, teaching agents, an AI, teaching an 7:58agent. It's like I couldn't figure out how to do this, and I'm trying to get an AI to do something 8:02that it wouldn't do. Wait a minute. Let me get AI to tell me how it would get AI to do something it 8:09shouldn't do, and then it does it right. So I think there's no silver bullet answer. But the 8:16funny part is, where I went with this is I went back into my, you know, into my sci-fi repertoire. 8:20And I started thinking about, you know, Asimov's Three Laws of Robotics. Maybe we need some type of 8:26laws of AI, right? The things that AI can't is, is just not allowed to do. But then the problem is, 8:33even if we do that, then people are going to just do shadow AI or rogue AI and remove the 8:37guardrails anyway. So at this point we just have to stay ahead of it. So that's where, you know, 8:43that's where the red teaming is going to come into play. We've we have to find these things 8:47before the bad guy doesn't solve them first. So it's a technology race, as always, and whoever 8:54finds the capabilities in the tools, intentional or not, is going to win. I definitely think 8:58Asimov's Laws of Robotics fit in here In fact, I've been when I've been teaching to my classes 9:04at NC State about AI. I always inject that in exactly the point you made, Nick. That, and we can't 9:11affect the bad AI's that are out there. They're still going to run without guardrails and do what 9:16they're going to do, but we can make sure that the AI's that are under our control at least 9:22understand and operate within those ethical principles, within those guidelines, so that they 9:27can't be weaponized back against us. I think I think that's a really good starting 9:34point for us, developing what should be basically a system level prompt that needs to go into 9:40every AI that is at least trying to be legitimate, that here it's just like every one of us has 9:47essentially a system level prompt when we start off in life, okay? Don't steal, don't murder. Don't 9:53you know? On and on and on. Okay, so. So we know that we've been taught that. But. But we didn't know 9:59that until someone taught it to us. So we're going to have to build that into the base level of, of 10:06all of these systems and come up with what those ethical principles are, and then let AI operate 10:13within those. That's that's a great point, right, because we use the word malicious. But in order to 10:18define and understand the word malicious, you must have a sense of ethics to go along with it. And so 10:24you ask AI, AI is this malicious? It's just going to use the dictionary definition of what's 10:29malicious. It doesn't know any better if you've tricked it into thinking what you're doing is for 10:33a good reason. Okay, I'll do it. I know people like that too. Yeah, this is true. AI 10:40hasn't gone to kindergarten yet. It's come out with with six PhDs, but not one minute of life in 10:46the real world. So that's the problem. But that brings up something interesting. You can't program 10:51common sense into something. You can't even program common sense into a human being. Right. 10:55Like we're notoriously we call it common sense. This came up on the last episode, too, so it's not 10:59an original observation, but we call it common sense. But how common is it really? Right. And it's 11:04like you said, Jeff, you're basically gaslighting this thing, right? It's different from the kind of 11:09traditional hack where you're like, you're exploiting a little bit of bug and you're 11:12dropping a payload or whatever, you're tricking it, you're social engineering it basically. And so I 11:16guess the question that comes up for me, as you say, you folks talked about, you know, setting up a 11:20kind of universal prompt, almost of like the ethical guidelines for AI to operate within. But 11:25like, if we can't figure out how to teach people not to get social engineered, how do we teach AI 11:31not to get social engineered when it's got like, you know, it's got a little bit less situational 11:35awareness than a person, and the bar is not even that high. But you know what I mean? It's tough for 11:39people. How do you get the AI to do it? Nick, I saw you starting to talk. What do you think? I'm going 11:43to actually argue with you a little bit on that because I think there is a significant difference. 11:48It is significantly harder to program people. AI has been programed by us, 11:55so anything it does wrong is our fault. So it ultimately you think it's still a human problem? 12:00Yes, I definitely think that I don't I don't want to get into into a weird area we shouldn't 12:07talk about here, but like we are its creator, if that makes sense. So we are liable for what our 12:14creation does. Yeah. We are. The problem is that we've created things like deep learning models 12:20that we can't fully understand and explain ourselves, so it can put out output that is not 12:26explainable. It's like, you know, you raise a child, you teach them certain principles, but in the end 12:31they're a free agent and they can go off and do whatever it is they decide to do. And sometimes 12:37you look at your children and say, why did you do that? You know, I told you specifically not to do 12:42that, but you did it anyway. And in their mind, they're running a different algorithm. them. Um, and 12:47and the whole business of common sense. Exactly right. Matt. It's not as common as we like it to be. 12:53In fact, we wouldn't even all necessarily agree on some things we would agree are common sense, but on 12:59others we may not. But I do believe that it's possible for us when it comes to ethics. I think 13:05there are some ethical principles that we can come up with that virtually every decent actor 13:11would agree. These are the limitations. I mean, in the medical field, you know, it's first do no harm, 13:16okay, we can come out, come up with a few of those really fundamental principles now how they get 13:21applied. There's the devil's in the details, but we could come up with some things like that as an 13:28industry, a worldwide consortium that's trying to come up with some of those kind of frameworks and 13:34at least say, you know, hey, look, we all we all operate within those kind of legal frameworks as 13:39individuals. We could come up with something like that that would work with AI. And I think it's 13:45going to have to happen. Yeah. And again, it's our responsibility because I'm going to go back to 13:49say, with our children or with people, we can always say, do as I say, not as I do. But you can't 13:55say that to the machine because the machine only knows you did it if you told it. Clara, do you have 13:59any thoughts on this subject of kind of teaching AI to avoid some social engineering, putting some 14:04ethical guidelines in place? Any thoughts on it? I think a lot about, you know, AI can't necessarily 14:09read emotion the same way that a human can. Um, so I think that, you know, if even if you look at 14:16ChatGPT scenarios where someone has kind of like fallen in love with ChatGPT, it's it's kind of 14:23just someone it's feeding off of what you give it. So we have to, you know, be a little 14:29bit more cautious with what we give it. And I think it's easier not to be cautious with that 14:35and to not think about, you know, giving it ethics and everything because it's more 14:42profitable to put something out without following fully testing everything first. So are you 14:48suggesting that my chatbot doesn't really love me? I might, it might. I'm having some serious 14:54Battlestar Galactica flashbacks now, going back to what Nick had said earlier. It doesn't. The AI 15:00doesn't know what it needs to do until we tell it, right. So it all comes down to being careful and 15:06intentional about what we tell it and what we give it. Folks, this has been a fabulous 15:11conversation, but we do have to move on to our next topic. Which 15:19brings us to our second story for today. Our DDoS attacks making a comeback now. The most 15:25recent X-Force Threat Intelligence Index report did flag a decrease in DDoS attacks, which 15:31accounted for only 2% of X-Force incidents in 2024, down from 4% the previous year. But as we all 15:38know, the cyber threat landscape does not stay still It is constantly evolving, and a recent 15:43spate of DDoS related news. I'm referring specifically to the discovery of the shadow V2 15:49botnet offering DDoS for higher services. A record breaking attack thwarted by Cloudflare, which I 15:55think topped out at 22.2TB per second, the biggest DDoS ever. The biggest DDoS 16:02ever. Until next week. It does kind of seem that way. Dismantling of a SIM farm in New York City that 16:08was seemingly set up for a massive DDoS attack on telecommunications networks. All of this has us, or 16:14at least me, asking, are more cybercriminals embracing DDoS attacks? Are they coming back in? If 16:20so, why now? Nick, I want to start with you. What do you think? I'm assuming you would like me to say 16:25more than just No. Next, next question. No, the- This is one of the oldest and tried and true, you 16:31know, attacks in the book. It's always going to be there and it's always going to have a place and 16:35it's always going to be effective in a way, depending on how it's applied, because there's 16:40also different types of DDoS. Right. There's a the DDoS. Are you are you trying to flood the network 16:45with just a bunch of traffic and kill it? Or do you have some type of DDoS that's going to cause 16:50devices to crash because you have exploit code or so? There's different types of DDoS. But 16:57like like I said, the reason you don't hear about it so much is because the answer isn't simple. And 17:02so the first one, I think, is because the internet is simply more resilient. What what could have 17:08been considered a DDoS back in the day is what a company could consider a regular day's worth of 17:14traffic now, right. So you big bigger pipes, so bigger pipes takes more water to flood the pipes. 17:19And that's what it is. So I think you have a more resilient infrastructure. And a DDoS is going to 17:24have to be, you know, the biggest DDoS ever in order to accomplish that. And even that's not able 17:29to accomplish it. Because this brings me to the next one. You have DDoS mitigation in place. Will 17:35DDoS mitigation in place now as far as services go, that actually work, right? DDoS mitigation at 17:41first was a little a little quirky. Didn't always, didn't always work. Sometimes in the in the older 17:45days, the the DDoS mitigation itself would be a bigger DDoS than the DDoS itself. But in this case 17:52we now have DDoS that were DDos protection. You know, that works, right. And it works to the point 17:57where I think if companies that offer DDoS protection didn't advertise, you wouldn't even 18:02know they were there. So they have to they have to toot their own horn or you're not going to 18:06realize that they've even done something to help protect you, which is why we get these biggest 18:10DDoS ever, you know, notifications that come out and, you know, open source Intel because we don't 18:16want them to be out of sight, out of mind. And it's a good point, right? I feel like in the past few 18:19months since I've started tracking cybersecurity news stories very closely for this show, this 18:23might be the third time I've seen the biggest attack ever. Right. And it's a good point. Like you 18:27said, it's a lot of it may just simply have to do with the fact that the pipes are bigger, the 18:31internet's more resilient. You got to go bigger to actually make one that works. Um, Claire, what are 18:35your thoughts on this kind of DDoS attack trend? Do you think we're dealing with anything new here? 18:39Yeah, I think very similar to what Nick is kind of saying. It's it's like one of those things that 18:45you can plan to do, but you hear a lot of people planning to do, like the largest DDoS attack 18:51that's ever happened, and then it gets busted. So it is. And it's one of those things that people 18:56would be like, oh my God, imagine if, like the entirety of New York City went out or had no cell 19:02service. Like, what are all the influencers going to do there? Um, it it's just kind of like 19:08it's sensationalist to an extent. And it's one of those things that's just kind of nice, not nice, 19:15but it's like a good like, oh, this is going to get clicks kind of thing in my mind. Absolutely. Jeff, 19:20how about you? I don't know that DDoS ever really went out of style. And like you said, it kind of 19:25kind of, you know, comes in a little more and a little less. I think we're going to keep the folks 19:30at the Guinness book, uh, busy as we keep setting new world records for the the worst, you know, 19:36cataclysmic DDoS attack Ever. But this is old stuff. In many ways, I remember writing about this 19:43in in a book that I did 25 years ago when the first DDoS attacks were happening. And, you know, we 19:50still, like Nick said, we've got better protections. So it takes a lot more to knock the system down 19:56than it used to, but it still exists. And if you look at this kind of in this particular story 20:02you're talking about, it's basically DDoS as a service. So DDoS as, I guess is, is a is a 20:09word now, um, and and it goes in, in the article it talks about that it's targeting misconfigured 20:15Docker containers on AWS. Well there's an endless supply of those. So I don't think that we're going 20:22to run out of, of prime targets for, for this kind of stuff. I don't see this. I don't see this ever 20:28going away. Um, you know, if you if you think back to the CIA triad that we do in 20:35cybersecurity, it's confidentiality, integrity and availability are the three things we're always 20:39trying to accomplish. And this is that availability piece. So that's one of the things 20:43that we've we've got to be focused on. We have to. And by the way, if anybody doesn't know exactly 20:48what a DOS ,a denial of service attack means, then I'm going to suggest you all experience these on 20:54a regular basis. Just get on the highway at 5 p.m. and you experience a denial of service, okay? 21:01Because there is not enough asphalt for all the cars. So that's denial of service. Too 21:08much stuff and not enough capacity to deal with it. So we do that with with systems more and 21:15more. And by the way, you can DDoS or even DoS a an AI because those 21:21systems, you know, you could give it a prompt that doesn't, you know, that requires it to do a lot of 21:26deep thinking. if too many of those come in at one time, then this thing is going to go upside 21:31down. So, um, yeah, I think we watch it ebb and flow, as you know, just like fashions 21:38sort of move in one direction and then come back to it again. So this, this is never going away. We 21:44just get better at it. I'm glad you said that, Jeff, because in most cases, I can't stand when people 21:49have to define what we're talking about, especially if we're amongst, you know, experienced 21:53peers But DDoS and DoS are one of those terms that have been acronyms for so long that I have to 21:58wonder, does anybody even know what it stands for anymore other than the letters? Yeah, but but like 22:04even looking at it like even looking at this one, this last one again was the biggest ever. Look at 22:09what the duration was. So my question then is how long can they maintain. Because if they can't 22:14maintain this is what a DDoS is going to look like. Oh man I'm I'm. Wait. Refresh. No. I'm good. Never 22:20mind. Yeah, it was really short. This one was like 40s long I think. Which which makes you think 22:25either their, their system got shut down really quickly, which I wouldn't think anybody could 22:30respond that quickly to shut it down. might be, but maybe it was more of a warning shot. It was a 22:37way of demonstrating capability. And then they're going to go and say, you know, if they're running 22:42this as a service and charging bad actors to use this service, it's like, oh yeah, if you want to 22:47know if this really worked, just, you know, look at that story. And so you can see we, we, uh, we 22:54broke one little window and said, wouldn't it be a shame if all the windows in your store got 22:59broken kind of thing if they can sustain? Because at this point, if it can't sustain, it's not long 23:04enough for me to think it's me. And I reboot my router and everything's back up again. So one one 23:08question I do have for you folks, though, in terms of something that may be a little bit new with 23:12the DDoS or, you know, you might all tell me it's not new at all. But one of the bits of news that I 23:17had found was this report from Gcore, which found that unlike in the past, when the kind 23:23of top target for DDoS attacks was gaming related platforms, the top target now is tech companies, 23:30which seem to receive 30% of all DDoS attacks. I'm wondering if you have thoughts on why the target 23:36shift, if there's any reason for that, or maybe it's just how the winds are blowing right now. 23:41thoughts on that, Nick? If I were to guess, it's going to be a similar situation to why did the 23:48financial industry tighten up their security before anybody else? Well, because they got hit 23:52hardest in the first, so they had to respond first. And the gaming industry, especially the, the, the 23:59MMO world, which is massive multiplayer stuff. If they couldn't mitigate the DDoS, they were out of 24:05service. So if they were the hardest target and the most common target, well, that gave them a 24:11reason to be the ones to to become more resilient first, and as they became more resilient to it, 24:16well, then you look for a softer target. That makes sense. Claire. Any thoughts to add? Yeah, I think 24:21also, if you're impacting a tech company that impacts other companies and such, you're maybe 24:26causing a far wider issue. Um, but also, yeah, like what Nick said. I mean, you would 24:33hope tech companies would prepare to have their services out, but more likely than not, they 24:39haven't. But yeah, I guess it depends how much of an outage you're looking to cause of some kind. Or 24:45if you want to put more pressure to get a payment of some kind. Jeff, any last thoughts to round out 24:51our DDoS segment? Yeah, yeah. The old, uh, Willie Sutton, uh, quote. Why do you rob banks? Because 24:57that's where the money is. Um, well, so you can go after after banks and financial 25:04institutions for sure. But if they've done a better job, like Nick said of the the gaming world, 25:10the gaming world, you know, they're out of business. If if they get DDoSed, there's nothing for them to 25:14sell. And financial institutions also realize the time is money and availability is money for 25:21them. So they're going to do a pretty solid job on security. But a lot of these tech, especially 25:26startups, they're running fast and free and they are running faster than than their headlights in 25:33many cases. And I got to believe there's a financial motivation here. Maybe some of these are 25:38ransom cases where, you know, again, it's a we could shut you down and and where's the 25:44money these days? Well look at where the big stocks are. It's it's in the tech sector. So that 25:51would that would be my guess is go where the money is and the money is in tech, it's always 25:56about where the money is. Let's move on to our next story. 26:04John Kindervag reflects on zero trust 15 years later. Now it's been 15 years since 26:11Kindervag first introduced the concept of zero trust, and in an interview with IT Brew last week, 26:17he recalled that the first reactions were not so great. This is a quote from Kindervag 26:23summarizing how people responded that's a dumb idea. You're an idiot. It's never going anywhere. Uh, 26:29of course the haters as we know were wrong, but it still feels sometimes like zero trust is one 26:35of the most abused terms in cybersecurity right now. So I'd like to start by getting the panel's 26:41thoughts on the state of zero trust implementations today, 15 years later, how do you 26:46think we're doing? And I'm going to start with you, Claire. What do you think zero trust is like out 26:49there? I feel like 15 years is also like kind of a long time, but not I guess it has been around for 26:55that long. I we talk about zero trust a lot in cyber range experiences, and we get a lot of 27:01clients who are like, oh, we implement zero trust. So like this wouldn't be an issue kind of thing. 27:06And then someone else in the room will say, like, actually, this kind of role would have that kind 27:11of access kind of deal. Um, I think it's one of those things that people always say like, oh, 27:18it's really good for you. And then they just don't fully implement it the way it should be. It's like, 27:23yeah, you should be eating a lot of vegetables and flossing your teeth, and then people don't do 27:28either of those things. So, um, it's just, I think it's something that that people 27:34really aspire to do, but, like eating broccoli six days a week, you they probably don't do it in the 27:41way that they should. I don't know if you should eat broccoli that much, but like, I'm, you know, high 27:47fiber is good for you. Yeah, I can say we're not saying don't do it. Okay. Yeah. Good. So now we we're 27:53not a this is not a nutrition podcast. We cannot guide your decisions on that. Um. Zero trust the 27:59flossing of security I like that. Jeff, what are your thoughts on the state of zero trust today? 28:04Where are we at? First of all, I'm going to go back into the Wayback Machine because I'm an old guy. 28:07So I'm going to pull up my rocking chair and go back and say, I really believe zero Trust started 28:13before. We just talked about seven years prior. Uh, in 2003, there was a group called the 28:20Jericho Forum that have been mostly forgotten to history, and the Jericho Forum was an industry 28:27consortium that came together and they basically said, you know what? Your firewalls are a fiction. 28:32The idea that you have a perimeter to your systems. You're living in a dream world because 28:39we poked so many holes into the firewall that did, you know, it's Swiss cheese with more holes than 28:44cheese at this point? And their conclusion was, blow up your firewalls, get rid of all of them. Do 28:50deep parameterization. That was the big the big term then. And I remember having the reaction you 28:55said that zero trust first got I was at a conference in Montreal, Switzerland where I first 29:02heard this presented. And I thought, you people have lost your minds. What are you talking about? 29:06We're just going to get rid of firewalls. That's our first line of defense. But I think they were a 29:11little ahead of their time. And then a few years later, you know, we come along with these ideas. And 29:16what was one of the main things that came out of zero Trust? It was micro segmentation. It was in 29:22fact the opposite. Instead of get rid of all your firewalls, it was put in a whole bunch more 29:27effectively, uh, create a little bit of little bitty segments so that you can control things 29:34much more. Bring the firewalls instead of just, you know, knocking down the walls, by the way. That got 29:40the term Jericho Forum because of the biblical story about walking around the walls of Jericho. 29:45And the walls came tumbling down, and they were just basically saying, let's wake up and smell the 29:50coffee and admit that we have no perimeters anymore. And so there was a kernel of truth. They 29:56were, I think, ahead of their time and kind of dismissed as as the lunatic fringe, just as was 30:02zero trust when it first came out. And, and but there were good ideas to be, to be borrowed from 30:09this. Um, first of all, I've never liked the term zero trust. We always trust something. So there's 30:15never a situation where you trust nothing that's not even possible. Even if it comes down to where. 30:21What was the the fab that made the chips, the silicon that went into your systems? I mean, when 30:27we're doing zero trust analysis, we're usually not getting down to that level, but okay. So all that 30:33said, I think zero trust is good. I think vendors have abused the heck out of the term, and 30:40therefore clients don't want to hear about it anymore. Because when zero trust really became 30:46popular, every vendor and I know this because I look at one in the mirror every 30:53morning. Vendors were going out and and just beating clients over the head with, okay, you want 30:58zero trust, buy my product. And this will give you the the magic zero trust pixie dust that will 31:05that will give you zero trust. And of course, it wasn't true. Um, what was true is you'll never get 31:10zero trust or anything close to it without right the right tools, but the tools alone don't do it. 31:16So it was a part of the story and it got it, got told and retold and exaggerated to the point 31:22where most clients don't want to hear about it anymore. And that's unfortunate because as a 31:27cybersecurity architect, I still believe in the principles behind zero trust. And a lot of people 31:33say, well, this is just, you know, uh, this is just principle of least privilege on steroids. No, it's 31:39more than that. I'm going to say what I think is the fundamental aspect that made zero trust 31:44different than everything else was the assumption of breach, the assumption that the network is 31:50hostile. In other words, if you were designing security for a system, most of the time what we do 31:56is we assume the bad guys on the outside and the good guys are on the inside. So our job is just to 32:01keep the bad guys on their side of the line and the good guys on their side. But again, Jericho 32:06Forum told us that line is a fiction. So that part carries forward. We assume that the system has 32:12already been breached. You design the security, design your home security, assuming the bad guy is 32:18already sleeping on your sofa. Oh, and by the way, according to the Cost of a Data Breach report, 32:24he's probably been doing that for roughly two thirds of a year before you discovered it, and 32:29then it's going to take you two more months to get him, uh, you know, out, out of the place. So 32:34that's the world that we're living in. Assume the bad guy is already past your your perimeter 32:38controls. Assume he or she is already on the network. Assume they're already in your database. 32:44Assume they already have root level access. Now design your security. Now that's a game changer. 32:50That is a truly different paradigm. And it is an aspirational goal. It's it's harder than eating 32:57broccoli three three meals a day. But it's the kind of thing where, okay, I may never get 33:03there. Um, by the way, I asked a CISO one time of a transportation company, how are you on your zero 33:10trust journey? Back when this was a popular thing? He says, oh yeah, we've done that. And I had to bite 33:16my tongue and not say, then you have no idea what you're talking about, because 33:22this is not something that you just ever say. We've done. Check the box and now we're done. This 33:27is always, always, every single day. If you're happy with your security, so are the bad guys. So 33:34my question for you, Jeff, is, was, was your response a double take or a facepalm? Uh, I did my best to 33:40just be a stone face and like, uh, yes. And, and, uh, and, you know, be 33:47tactful, but. Yeah. Um, I so I'm still a believer is the bottom line. But I 33:54do understand people who have PTSD when the term comes up because it's been 34:00overblown and misunderstood. And, um, so there we go. Absolutely. Nick, your thoughts on the the kind of 34:07use and abuse of zero trust? I was going to say, after all that, I'm not sure if there is anything 34:13else to add since I, Jeff, bogarted the entire topic on us. Sorry, sorry, but you maybe should 34:20just subtract some things that I thought I would and that would be better, but I was. I was stuck 34:24listening because I wanted to hear what Jeff's take was because I have been a fan of zero trust 34:29since day one. I mean, I'm prior service and military police and it to me it's the don't trust 34:36anybody, right. And Jeff, if you can't trust if you can't get to zero trust where you trust zero 34:41things you need to try harder sir. Yeah. There you go. Exactly. But I get I don't trust that statement 34:48that you just made either. I think that could be wrong because I, I but I get I get where you came 34:53from because there were there are things that we say zero trust. But you can't inspect down to that 34:59level. There are there's never going to be a 100% zero trust. And I hate that I just made a finite 35:04statement because I don't like doing that, but I kind of did. This is the every rule has an 35:09exception except this one, right? I shouldn't that's one of the never say impossible say 35:13improbable. My favorite part about zero trust is, I think for me anyway, it focuses right where the 35:19zero trust needs to be. And it's going to sound cold and harsh, but is in the people because the 35:24people are the weakest link. We've heard this before. We're always going to hear it. And it's not 35:29meant to be a negative thing, it's just a fact. And if you don't think that the human is the weakest 35:34link, well ask. When one of the many places that has been compromised lately due to social 35:39engineering, because social issues suggest you haven't met people if you don't think of it, well, 35:43exactly. You need to get out more. But I do have to agree with the fact that zero trust has been 35:49overused as far as the term goes, and most people that say it and talk about it, they don't know 35:54what they're talking about. In the words of Fox Mulder's computer password trust no one. 36:03We're gonna move on to our fourth story of the day call recording app, Neon, 36:10exposes users numbers and call recordings and transcripts. This one's kind of a doozy. So the app, 36:17which was created to allow users to record their phone calls and basically earn money by selling 36:23them to AI companies for training. It turns out that the security for this app was not amazing. 36:29TechCrunch discovered a serious flaw that allowed anyone to access a user's data and call 36:35recordings, as long as they had the right URL. Right. You didn't have to be logged in or anything. 36:39If you had the URL, you punched that into your browser and you can get right to that person's 36:43stuff. After TechCrunch told the app's founder, they did take the app down, but it just seems like 36:49an incredible boondoggle to me. And so I want to start with some initial reactions to this 36:53situation. Jeff, how do you feel about it? So my first reaction was, was there any other possible 36:59ending to this story? I don't think there was. I mean, if you start looking at this, so we've got a 37:05viral app that is going to take all of your phone conversations and then train an AI on it. Um, 37:12is there what could possibly go wrong with that? Because we know AI never fails. We know that 37:18security is always paramount. Paramount. With all of these companies, we know that privacy will be 37:23preserved. Oh, wait, wait. None of those things I just said are true. Okay. So so and then. But but it 37:30kind of it kind of reminded me of a, of a statement that Bruce Schneier, who I 37:37really enjoy reading his his thoughts on a lot of these topics, but Schneier wrote this in the year 37:432000. So way in advance of all this. But Schneier said if McDonald's offered three Big Macs for a 37:49DNA sample, there would be lines around the block. And this is this is how people are when it comes 37:55to privacy. We're sweating the details, trying to make it so that we preserve people's privacy so 38:00that all this information doesn't. But three Big Macs. Okay. Never mind, um, the bright 38:06shiny object. So again people um, that I sometimes it it's 38:13it's hard to, you know, what are we supposed to do with them? Well, I guess we're going to have to 38:17learn to work with them because they're the the least worst alternative. But it does make me 38:24wonder about the judgment of folks that thought this was a good way to go, because I again, when I 38:30look at this, I didn't see another alternative ending ending to this. This is not to write your 38:35own adventure. This one was already baked in. You know, I just I just hear the song Henry the Eighth, 38:41you know, the second verse. Same as the first. Because because this this is a story that just 38:47keeps resurfacing with new identities. It's like they run out of ideas. This this just takes me 38:53back to the same kerfuffle that we talked about with the Tea app. Right? You know, they they they did 38:58it. They put it out so fast and they thought it was cool and it got everybody's attention and 39:03they started using it. But it was security, even an afterthought. I mean, I don't even maybe this was 39:08just vibe development. And they they didn't even think twice to put security on it. So again, it's 39:15old story retold again and again. Claire, what about your thoughts on the subject? Yeah. So I have 39:20a background in kind of consumer privacy. So I'm thinking a lot about this. And as we're speaking 39:25here, like some pieces clicked for me, I'm Gen Z, I use TikTok. So like on TikTok, in the past couple 39:31months, I've seen a lot of people being like, I've made $200 using this noise app just by talking. 39:37And I was like, what does that? I thought it was like an ASMR app or something. Like never looked 39:42into it. I was like, these people are just out here recording like videos of themselves, like 39:47tapping things or whatever. If someone doesn't know what ASMR is, look it up later. It's like 39:52something people use to fall asleep. Um, but thinking about it, it's a couple things. So like, 39:58how much money are you making doing something like this? And you're okay with having 40:05your phone conversations, whatever they might be with your doctor or with your children, with your 40:12family just out there or thinking of like you call a friend about some crazy date that goes 40:18horrible, and all that information is now just out there, and it's just being sold for who 40:25knows how much as well. So it's like you're probably making, I don't know, $0.85 a minute or 40:31something. And then your data is being sold for $3 a minute or something crazy. So I just think 40:38of it a lot from the consumer side and how little we care about. Or some people care about 40:45their privacy. Yeah, I use TikTok. What does that say about my caring about privacy? But I care 40:51enough about my privacy to not want everything that I do to be recorded via an app 40:58and sold and that information about me, whatever that information might be, or my friends to be 41:04sold and trained and I don't know. So I think it's a lot. And I also think it's concerning that there 41:11are people out here like potentially with referral codes saying, use my referral code for 41:15this app. And you'll make this money and you're making money off of other people's lack of 41:20privacy concerns. So, uh. Yeah, interesting. And they're just people are just 41:27talking and making money in quotes. Like, it reminds me of, um, uh, last episode, 41:34Troy, uh, Betancourt was talking about how, you know, hackers will often recruit insiders at 41:40organizations and in sort of economic in times like this, when the economy is a little, you know, 41:45not so great for a lot of people, it becomes a little easier to get people to accept the money 41:50to leak some secrets. I wonder if there's a similar dynamic happening here, right? Maybe people 41:54are like and crab eggs are kind of expensive. I might as well sell my phone calls. I'm not saying 41:58it's a good idea, but I, you know, I can see the thought process, um, so prevalent for eggs. Is that 42:03what you're saying? Yeah, I mean, I again, I'm going to sound like the old guy in the room. Uh, 42:10but that's that's what I am. The kids these days. No, no, it's, you know, it's funny. Funny. You went 42:15that route because I was just about to say, this is like watching an episode of Scooby-Doo and not 42:19knowing how it's gonna end. Yeah. There we go. Exactly, exactly. I'm not passing the blame on this. 42:25For instance. Um, it below a certain age, you never had the same idea of what 42:32privacy was to the oldsters to begin with. You know, you were on the internet. Images of you were 42:38on the internet before you popped out of the womb, because your parents posted the ultrasound images 42:45on their social media that you didn't have any permission to be given. It's just you started from 42:51a different standpoint where privacy was defined and the line has moved and continues to move 42:58in that direction where we have less and less. Uh, Scott McNealy said, gosh, this was way back up many 43:05years ago. He was the head of Sun Microsystems. He said, you have no privacy. Get over it. And 43:10everybody lost their minds. But he wasn't wrong. Um, and we're continuing to do the things 43:17that that take it away from us. And, you know, look, if as long as we're making a fair 43:23bargain, if I really do think three Big Macs are worth more than my DNA and I'm 43:30informed consent, well, then that's my choice to do. The problem is, people are not equipped. They they 43:37they're not aware. They don't understand the downstream effects of how the data can be used 43:42for them, against them. You know, all these other kinds of things, and they're not getting a fair 43:46bargain. They are getting the equivalent of, you know, a, you know, a three Big Macs for something 43:52that is worth a whole lot more than that. And so that's the part that bothers me, is that it's not 43:58a fair bargain where both sides are at arm's length, equally equipped to make the decision. In 44:05fact, we're asking people to make these decisions who, you know, based upon their age. The frontal 44:12lobe hasn't fully developed yet, and yet we're asking them to make eternal decisions because the 44:17internet never forgets. Uh, so there's an issue. I'm so glad I did all my stupid things before the 44:24internet. Oh, good. I was, uh, I'm still doing mine. I was, I was I was just on the cusp 44:30of it, so, like, half my stupid things got put on the internet. Let's move 44:37on to our last, uh, story for today. Industry myths that just won't die. So today, as in the 44:44day this episode comes out, marks the beginning of Cybersecurity Awareness Month. And what better way 44:50to celebrate than an airing of grievances? In true Festivus style, users on the r/cybersecurity 44:56subreddit had an interesting conversation going where they were sharing the 45:00industry myths that make them as security professionals. facepalm every single time 45:06they hear it. So to wrap us up for today, I would like to know what myths our panelists would like 45:12to see die. And I'm going to start with you, Claire. What myth is grinding your gears? So when I opened 45:17that thread, the first thing I saw was the password one. And that is the most common one that 45:24I hear everybody always say, like, I'm so sick of the password rotation. Um, so and I mean, 45:31as a user, it is very annoying to have to rotate your password if you have a password manager far 45:37easier, but that's one that, you know, I think users and security folks alike 45:44both dislike, especially if you're, which no one should be, but one of those people that uses like 45:50pet name, three numbers and symbol like you run out of variations. Hey, 45:57I put two exclamation points on the end of mine. Thank you. That makes that little green bar that 46:02says, you know, quality of password, at least green to yellow. Good. Yeah. By the way, I saw a critter 46:07walking around behind you. What's that? What's that? Critters name? Just curious. Not that I'm trying to 46:12steal your password, but she. She's too new in my life. So she's not in any passwords. But her name 46:19is Pita Pocket. Oh, okay. All right. Oh. Good name. Okay. All right. I know what your future passwords 46:25will be. Nick, how about you? What myth would you like to see die? So what I when I looked in that 46:29Reddit thread, the first one that did get my attention but isn't my favorite is the one that 46:35said, well, Macs never get viruses, right? It's that that one used to just grind my gears completely, 46:41especially since I wasn't a mac person. And I've always I've always ended up working with a Mac 46:47person, and they always had that cult mentality that their device was impervious to anything. 46:54And I was sadly shouldn't have been happy, but happy to see that myth broken. 47:01The one that bothers me the most is when I hear, especially when I hear from a security 47:04professional, is when they say that they don't have smart devices or, you know, the smart 47:10assistants or anything. I don't use any of that. Have any of that in my house because I don't want 47:13to be listened to. Meanwhile, they're walking around with the latest and greatest mobile device 47:18right in their pocket everywhere they go, not realizing that that that ship has already sailed 47:24and selling their phone call data and selling their phone call data while they're posting about 47:30their bad date from the last from the night before on Tea. Okay, I've learned my lesson. I'll stop 47:34selling my phone call data. Um, Jeff, how about you? What myth would you like to see crushed? Oh, gosh, 47:40there's so many. How much time do we have for, uh. But. So I'm going to I'm going to double down on 47:46on what both have said here. But but Clare in particular the making people change their 47:51passwords on a regular interval that is old security uh, thinking. The 47:58US National Institute of Standards and Technology came out with new guidelines for passwords. And I 48:02say new as in 2017, and they still haven't gotten the word out. Come on, folks, 48:082017. Uh, and they said stop doing that. You know, don't do that anymore. If somebody 48:15has a good password, making them change it regularly just makes them make it worse, because 48:21now they've got to keep up with more, and it makes them more likely to write them down. It makes them 48:26more likely to reuse the password across multiple systems. And all of that is sort of it comes from 48:33from security people thinking without us, without understanding that there's a human at the other 48:39end of all of this. They're thinking like mathematicians, okay, like password complexity. So 48:44this is the one I'm going to add on to it. The other one that passwords. The more complex you 48:49make a password, the more secure it is. Also not true unless you're using a password manager to 48:55generate it and store it and manage it all, which most people don't, even though I wish they would. 49:00So password complexity. Again, if I give you a rule that says you have to do an upper and a lower 49:06case and a special character and a number, and you can't use it. All these kinds of rules. Not only do 49:13they narrow the password key space a little bit, but they also only do one thing, and that's they 49:19guarantee that people can write these things down. And I would have gotten away with it if it 49:23weren't for you meddling kids. Exactly. They would. Thank you, Shaggy. After this, I'm gonna have to, uh, 49:29burn my post-it note containing all of my passwords. Uh, that's all the time that we have for 49:34today, though. Folks, I want to thank you, Claire and Nick and Jeff, for being here. Thank you to our 49:39listeners and viewers, especially YouTube user Yuli5869, who complimented my shirt last week. 49:46Thank you. You'll make sure to subscribe to Security Intelligence wherever podcasts are found. 49:51Stay safe out there and just set up a passkey or something. Man. Just be done with the passwords.