Learning Library

← Back to Library

Gartner Recommends Banning AI Browsers

Key Points

  • Gartner recommends organizations temporarily ban AI‑enabled browsers (e.g., Perplexity’s Comet, ChatGPT’s Atlas) due to risks of data exposure and uncontrolled AI agents accessing corporate systems.
  • Recent research demonstrated a “drive‑wipe” attack where a simple email command could delete an entire Google Drive, highlighting the real‑world danger of AI‑driven automation.
  • Panelists, particularly Ryan Anschutz, expressed a conservative stance, agreeing that the lack of accountability and zero‑click exploits make AI browsers too risky for enterprise use today.
  • The episode also raised broader questions about AI companies’ responsibilities to the threat‑intelligence community, upcoming MITRE‑identified software weaknesses for 2025, and the safety of using Google sign‑on and BYOVM attacks.
  • Overall, the experts advise caution and immediate mitigation (e.g., bans) until robust security controls are implemented for AI browsers.

Sections

Full Transcript

# Gartner Recommends Banning AI Browsers **Source:** [https://www.youtube.com/watch?v=8jWAQiSqDVU](https://www.youtube.com/watch?v=8jWAQiSqDVU) **Duration:** 00:51:33 ## Summary - Gartner recommends organizations temporarily ban AI‑enabled browsers (e.g., Perplexity’s Comet, ChatGPT’s Atlas) due to risks of data exposure and uncontrolled AI agents accessing corporate systems. - Recent research demonstrated a “drive‑wipe” attack where a simple email command could delete an entire Google Drive, highlighting the real‑world danger of AI‑driven automation. - Panelists, particularly Ryan Anschutz, expressed a conservative stance, agreeing that the lack of accountability and zero‑click exploits make AI browsers too risky for enterprise use today. - The episode also raised broader questions about AI companies’ responsibilities to the threat‑intelligence community, upcoming MITRE‑identified software weaknesses for 2025, and the safety of using Google sign‑on and BYOVM attacks. - Overall, the experts advise caution and immediate mitigation (e.g., bans) until robust security controls are implemented for AI browsers. ## Sections - [00:00:00](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=0s) **Gartner Advises Ban on AI Browsers** - The IBM Security Intelligence podcast panel discusses Gartner's recommendation to block AI browsers over data‑leakage concerns while covering AI threat‑intelligence responsibilities, upcoming Mitre software weaknesses, Google sign‑in safety, and BYOVM attacks. - [00:04:11](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=251s) **AI Browsers: Emerging Security Risks** - Panelists discuss how integrated AI browsers expand attack surfaces—through prompt injection, data exfiltration, and zero‑click exploits—and stress the need for strict policies and trust boundaries, aligning with Gartner's pragmatic view. - [00:07:58](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=478s) **User Alignment Critic for AI Safeguards** - The speaker outlines safeguards such as mandatory human review of non‑user‑origin instructions and Google’s “user alignment critic” model that evaluates AI agents’ plans before execution, and solicits security experts’ perspectives on the promise and direction of these approaches. - [00:14:07](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=847s) **AI Providers' Role in Cyber Threats** - The discussion highlights criticism that AI vendors sit at the center of emerging attacks yet withhold vital indicators of compromise, prompting calls for clearer detection standards and accountability. - [00:18:41](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1121s) **Balancing AI Threat Transparency** - Participants discuss the dilemma of sharing AI vulnerability details, likening it to cloud security’s responsibility matrix and urging collaborative limits to prevent providing threat actors a playbook. - [00:21:53](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1313s) **Closing Knowledge Gaps Through Collaboration** - The speakers emphasize that sharing detailed information and leveraging existing frameworks such as the cloud responsibility matrix are essential to understand and remediate security issues before transitioning to the discussion of the 2025 CWE Top 25 software weaknesses. - [00:25:31](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1531s) **Why Injection Attacks Persist** - The speakers explain that cross‑site scripting and SQL injection remain common because they’re easy to exploit, while defense requires secure‑by‑design practices despite emerging AI and supply‑chain threats. - [00:29:32](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1772s) **Injection Risks Persist Amid Development Pressure** - The speaker laments the lack of enforced secure coding and rapid release cycles that leave legacy apps vulnerable, noting that both traditional and AI‑driven prompt injection attacks continue to thrive, and challenges defenders to derive actionable takeaways despite slow industry progress. - [00:32:44](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1964s) **Evaluating Consumer SSO Risks** - The speaker urges organizations to cut breach risk by addressing the top 25 vulnerabilities and then debates the convenience versus single‑point‑of‑failure trade‑off of consumer single sign‑on/social login services like Google and Facebook. - [00:36:54](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2214s) **Credential Reuse and Security Hygiene** - The speakers discuss how leaked passwords are rapidly tested across multiple sites, highlighting users' habitual neglect of strong, unique credentials and the resulting need for MFA, passkeys, and better overall security practices. - [00:41:39](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2499s) **Bring-Your-Own Virtual Machine Attack** - Red Canary describes a spam‑bombing campaign that coerces victims into granting remote assistance, allowing attackers to drop a malicious virtual machine via script for persistent control. - [00:45:04](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2704s) **Misdirection via Noise Flooding** - The speakers discuss how attackers use email bombing and other noise to distract defenders, conceal hypervisor‑level malware, and expose the limits of traditional endpoint visibility, highlighting the four essential malware behaviors of running, hiding, communicating, and persisting. - [00:48:49](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2929s) **Defenders Beware: Bring‑Your‑Own‑VM Attacks** - The speakers argue that VM‑based payloads act like traditional malware—making classification moot—and urge defenders to boost visibility by monitoring for unauthorized virtual machines to counter this novel attack vector. ## Full Transcript
0:01So when it comes to AI browsers, I'm going to 0:03sort of just hold my breath and wait until we 0:05actually see some security implementation involved. All that and more 0:09on security intelligence. Hello, and welcome to Security Intelligence, IBM's 0:18weekly cyber security podcast, where our expert panelists turn the 0:23biggest industry news stories into practical takeaways you can use. 0:27I'm your host, Matt Kaczynski, and this is going to 0:30be our last regular panel of the year. And what 0:33a panel it is. We've got Austin Zeissel, threat intelligence 0:36consultant Evelyn Anderson, CSS cto, distinguished engineer and master inventor, 0:43and Ryan Anschutz, North America, leader of X Force Incident 0:47Response. Thank you all for being here today. Here is 0:49what we're going to talk about. Should AI companies do 0:52more for the threat intelligence community? Mitre's most dangerous software 0:57weaknesses for 2025. Is it safe to use Google to 1:01sign into other websites and a bring your own virtual 1:05machine attack? But first, Gartner says organizations should ban AI 1:11browsers. Now, in a recent advisory titled CyberSecurity must block 1:19AI browsers for now, which kind of tells you exactly 1:23what the advisory is about, the analyst firm recommends that 1:26businesses block their employees from using AI browsers like, you 1:30know, Comet by Perplexity, ChatGPT's Atlas. The concern stems from 1:35the fact that sensitive personal and corporate data can end 1:38up with the AI services that power these things. And 1:41also that you have an AI agent right there in 1:43the browser who might have some access to corporate systems 1:46and maybe mess with some things they shouldn't or be 1:49weaponized by malicious actors. Right, and, and as if on 1:53cue, this week we also saw research come out from 2:03a way to wipe your Google Drive just by sending 2:06an email that says, please organize my drive. So even 2:10with security measures in place, Gartner's report suggests that the 2:14complexity of the effort and the unpredictability of end users 2:18means that you're just safer getting, you know, banning the 2:20browsers from the workplace. For now, I want to start 2:23by just asking everybody what we think of this proposal. 2:27If we agree, disagree. Have anything to add? Ryan, I 2:30want to start with you. You see this proposal from 2:32Gardner. They're saying just ban the AI browsers. What's your 2:34take? Yeah, I think I am more conservative, so I 2:39would probably lean in the direction of agreeing with this. 2:42You know, you look at the Star Labs 0 click 2:46exploit and that really Shows why Gartner hit the brakes, 2:50right? Like one malformed prompt and your entire Google Google 2:54Drive is gone. That is a bit of a problem, 2:58right? So AI browsers that are have the ability to 3:01create automation without accountability, where they can read, write, click, 3:09delete, all without explicit approval in the moment, is problematic 3:13from my perspectives. And I think that STAR Labs really 3:17proved how dangerous that is. You know, triggering a workflow, 3:21in this case, you know, triggering that workflow that wiped 3:24that Google Drive didn't require phishing. It didn't require, you 3:29know, macros enabled or any social engineering. It just poisoned 3:34the input. And that really changes. I think our, our 3:37blast radius conversation when we're talking about defending our networks 3:41and our people, you know, we're not talking about stealing, 3:44you know, browser cookies anymore, tricking the user. You know, 3:48we're talking about really full data loss that's initiated by 3:52an AI agent that you didn't directly instruct. I'm glad 3:56you kind of brought up how, how different this is 3:58from some of those traditional attacks where, like you said, 4:00you're social engineering, you're stealing cookies from a browser. This 4:03is just an email you send. And not just that, 4:05but it relies on one of the kind of benefits 4:09that the AI browsers are supposed to bring you, right? 4:11They say, hey, look, these browsers, they'll take care of 4:14the stuff that's in your inbox for you. You know, 4:16if there's a to do list in your inbox, they'll 4:17do it for you. If somebody sends you a malicious 4:19to do list, all of a sudden the AI agent 4:21is weaponized and it's barely even a hack. Right. Austin, 4:26let me move on to you. When you look at 4:28this proposal from Gartner, are you agreeing, disagreeing. What are 4:31your thoughts here? So, no, I definitely agree with Gartner 4:33here. You know, their, their opinion isn't alarmist. I'd say 4:38it's rather pragmatic. You know, until vendors mature their security 4:43posture, we're going to see a lot of issues with 4:45AI browsers and organizations should really enforce strict policies on 4:51AI browser use, especially in regulated sectors. AI browsers are 5:04lacking a lot of different security issues still with chatbots 5:08and gen AI tools rather than just the browsers themselves. 5:13Now, with integrated AI browsers, you introduce an entirely new 5:17attack surface with things like prompt injection, data exfiltration, and 5:22then, of course, as Ryan mentioned, the zero click exploits 5:26from STAR Labs. So better boundaries of trust need to 5:30be established here moving forward. Absolutely. And yeah, I think 5:35that, you know, it's not just that we're seeing these 5:38zero click exploits in our inboxes, but even just on 5:41public web pages. Right. We've seen the ability to put 5:44in these malicious prompts that your agent can read and 5:47it can act. So it's like you said, the attack 5:49surface just blows up. It's everywhere all of a sudden. 5:53Evelyn, I'd like to get your take on Gartner's recommendations. 5:56Agree, disagree, where you land in here. I actually agree 5:59with both, Ryan. I, you know, in all honesty, when 6:02I first saw this, the one thing that really stood 6:04out for me was the statement around prioritized user experience 6:09over security. And so I went out and did a 6:12bit more of research, looking at some of the other, 6:14you know, people that have responded to this, some of 6:17the experts within this field, looking at some additional articles, 6:21and for the most part, many in the community, you 6:24know, in the cybersecurity community, they actually agreed. But there 6:27were a few that felt like the ban was more, 6:30you know, kind of the classic shadow, which was a 6:34bit comical to me because we all know that governance 6:37and security are critical. But it did cause me to 6:40wonder, when you're looking at this, how many agents are 6:43already embedded within the enterprise? And how effective will the 6:47ban actually be on protecting against some of the key 6:50risk? I mean, when we looked at some of the 6:52things that they pointed out about being able to send 6:55web content, accessing the browser, I think what was one 6:59of a couple of the others, like being able to 7:02click and access backends autonomously and execute transactions. That actually 7:09did make me wonder how effective this would be. I 7:12think it's the first step, but I think it's critical 7:14that we all kind of get together, figure out what 7:16are the correct security controls that need to be in 7:19place around how we're securing AI as a whole, and 7:23what is the governance structure? I'm glad you bring that 7:25up because it kind of segues into exactly what I 7:27wanted to ask you folks about, which was, you know, 7:30how do we start thinking about tightening security for these 7:33kinds of things? Right. Because Gartner even points out, look, 7:36there are security measures you can put in place, and 7:38their take is kind of like, but even if you 7:40put them in place, it might not be enough, so 7:42you might as well just avoid it. But I want, 7:43I'm thinking specifically of some of the recommendations that came 7:47out of the Straker Labs, research we've been talking about 7:50where they said, you know what, One of the things 7:52you might want to do is put require a human 7:55to review any instructions that are, are in content. Right? 7:58And by what that they mean any instructions an agent 8:00comes across that didn't come directly from the, the, the 8:03user, whether they're in an email or they're on a 8:06web page. One thing you can do is just make 8:07sure that the agent has to review that stuff with 8:09a human before it can act. The other thing that 8:12was interesting to me was we also saw that Google 8:14has taken some steps to protect its own agents embedded 8:17in Chrome. And it announced, this is really interesting to 8:20me, a user alignment critic, which is a different model 8:25that's put there to evaluate the agent's plan, right? Agent 8:29gets some instructions, it comes up with a plan to 8:31execute them, and then it has to pass through this 8:33other model, the user alignment critic, which kind of evaluates 8:36the plan, makes sure it looks good. If it doesn't, 8:38it kicks it back to the agent. If it does, 8:40it allows it to. And I think those are some 8:42pretty inventive ways to start safeguarding these things. And I 8:46wanted to get your thoughts as the kind of security 8:47experts here. Is that kind of thing, Is it promising 8:51to you? Do you think we're heading the right direction? 8:53What would you like to see? Austin, I want to 8:55start with you. When you think about safeguarding these AI 9:04be some advancements here, but really from a security standpoint, 9:09and in my opinion, I still think it's too early 9:12to just jump in and to risk it, really give 9:16it more time to develop. And I sort of analogize 9:19this to Mac OS software updates, where the latest OS 9:24comes out and everyone rushes to download it. Then you 9:27find out that it's littered with different bugs and vulnerabilities 9:31and security flaws. So I usually like to kind of 9:34take a step back and wait for that 1.1 version 9:37to come out until anything. Right. So when it comes 9:42to AI browsers, I'm going to sort of just hold 9:44my breath and wait until we actually see some security 9:47implementation involved. I think it's an extremely good point. Right. 9:50Maybe wait for the 1.1 of ChatGPT's atlas. You know 9:54what I mean? Evelyn, how about you? What kind of 9:57things would you like to see them implement to make 9:59you feel more comfortable with them? I think Austin was 10:02spot on. I mean, the very first thing that I 10:04thought about when he said that was, you know, when 10:07releases come out, unfortunately I tend to lag a little 10:11bit to wait to find out what's going to break 10:14before, you know, I jump in and make the, you 10:17know, install the updates. I think Google is first, you 10:21know, because of the one click issue that they're trying 10:23to offset kind of the light shining on them. So 10:26it's out there and we're all scrambling, trying to figure 10:29out what shifts should and should not be in place. 10:32And I don't think we know the biggest thing is 10:34that we can put different controls in place. I think 10:39it's going to take a collaborative effort between multiple bodies 10:44to figure out what is the right approach. Right now 10:46we're just trying to pull rabbits out of our hat 10:49and hope that it works. And I don't think we 10:53have the right answers yet. I think it's going to 10:55take some actual time in the lab testing, trying to 10:58figure out, okay, what are the risks that are out 11:01there and if we put this in place, does it 11:03mitigate this risk, yes or no, and move on. But 11:05right now we're just being a bit more reactive to 11:09every time a new problem arises, we're trying to identify 11:13something as a quick fix versus us taking the time 11:16to be a bit more, you know, saying, okay, these 11:18are the potential exposures and this is how we could 11:21actually close them. So I think it's going to take 11:24some time and the path that we're taking is not 11:27necessarily the right one. It's, you know, one to be 11:30reactive, to try to address the problems. But I don't 11:32know that it's the right approach that we're taking right 11:35now not makes sense. I mean, I think that especially 11:37when it comes to this AI related stuff and these 11:39agents, they're new, they're shiny, they're very exciting and they 11:42can feel sometimes like there's a lot of pressure to 11:44be among those early adopters. But you know, there is 11:47wisdom in waiting a little bit, Ryan, to close out 11:50the segment. I'd like to get your take. And you 11:52look at these AI browsers, what would you like to 11:54see them do? Is there anything they could do to 11:56make you more comfortable with, with them? Yeah, I think, 11:58you know, Evelyn mentioned it really well. She mentioned one 12:01specific word and that's collaboration. And I think that's going 12:04to be on a lot of us, from security companies 12:07to vendors to implementers, because the, the answer is we 12:13can't avoid AI browsers forever. Right. I think from a 12:17security standpoint if we are able to wrap them in 12:20the same or similar guardrails that we currently already build, 12:25like for cloud workflows or any type of privile, I 12:30think that is a really promising way forward because those 12:34tools, now, those act on our behalf. Right. So they 12:38need permission, scopes, decision logs, and actually real isolation. So 12:42when we actually treat the AI autonomy like a privileged 12:47user instead of just maybe a passive browser, I think 12:52that would dramatically reduce our risk and then allow innovation 12:57to actually scale safely. Yeah. You know, I like that 13:01you point out that, because I think a lot of 13:03times with some of these AI related conversations, there's a 13:06tendency to feel like everything is brand new. And like 13:08you're saying there is a, there is a precedent here, 13:11there are things we can look at that apply to 13:13this stuff. So, sure, it may be very exciting technology 13:15that's different from some things we've seen. But like you 13:18said, we've dealt with similar things in the past. We 13:19can apply some of those same approaches to this sort 13:22of thing. And you also almost kind of read my 13:25mind in talking about the collaboration that's necessary because that's 13:27exactly, exactly what I want to talk about in our 13:30next segment. Where do AI companies fit into the threat 13:35intelligence landscape? Now, all this conversation about AI browser safety 13:44also brings to mind a recent LinkedIn post that I 13:47saw from Rob T. Lee, who is Chief AI Officer 13:50at the SANS Institute. Now, to give you a little 13:52context, back in November, as I'm sure we all recall, 13:56Anthropic busted this AI powered espionage ring that was using 13:59Claude to automate significant portions of its campaign. And it 14:02was a pretty big deal. It was one of the 14:04more sophisticated, you know, uses we've seen for these tools. 14:07And so the report garnered a lot of attention and 14:10it also garnered a little bit of backlash. Specifically, some 14:13cybersecurity pros felt that it left out really valuable threat 14:16intelligence information. Like, specifically indicators of compromise. I saw a 14:20lot of people say, how come we can't get any 14:22IOCs, right? And as Lee put it on LinkedIn, and 14:26this is a direct quote from him, AI providers now 14:29sit in the middle of these attacks and we don't 14:31yet have clear expectations for how they detect abuse and 14:34notify victims. Right. So part of what we're dealing with 14:37here is that these vendors are entering the kind of 14:41attack surface, the blast radius, in a way that they 14:43hadn't before. And we haven't sussed out what that looks 14:46like, where they fit into the collaboration. So, Austin, I 14:48wanted to start with you first. You know, you do 14:51a lot of work in threat intelligence. Where do you 14:53think AI organizations fit into this landscape? You know what, 14:57if anything, do they, I don't know, owe cybersecurity professionals 15:01when it comes to these kinds of things. So again, 15:03we're so early in this technological revolution of AI and 15:07the providers are really sitting in the center of all 15:11this abuse happening, but are really lacking the ability to 15:14formally detect and disclose this victim abuse that we're seeing. 15:19And we also see that the AI providers are able 15:22to disrupt some of these AI powered espionage campaigns. But 15:26we really need standardized frameworks in place here because right 15:30now the expectations are pretty unclear across the industry. There's 15:34no standard for how AI providers detect malicious use, when 15:39they notify victims, or even how intelligence is shared with 15:43defenders. So threat intel teams really should prioritize tracking AI 15:48misuse patterns, behavior and then pressure these vendors for transparency. 15:53Because within AI we, we've been talking about how important 15:58governance and compliance is becoming. So that really weaves into 16:02that aspect here. Absolutely. Now Ryan, I want to ask 16:05you about something else that, that came up in Rob's 16:09post, which was this idea and it's not just Rob, 16:12I shouldn't say like that because it's actually come up 16:14with a lot of people. But there's this view that 16:16even if anthropic had shared IOCs, it wouldn't be that 16:20much use to defenders because most defenders aren't defending generative 16:24AI models. Right. So the idea is that like why 16:27are we so upset about Anthropic not sharing some of 16:28this stuff? Could we actually use it? I want to 16:31get your take. Do you think it's true that the 16:33IOCs wouldn't be of much use to defenders or where 16:36do you land on that? I kind of land in 16:38the middle. And it just depends on what IOCs are. 16:42What are we looking for? You know, they, you know, 16:45at that time, Anthropic really they can see or any 16:49other AI vendor really if they're, if we're talking about, 16:52you know, threat chains or you know, inside that attack 17:05I.e. the pre attack intelligence that, that as defenders we 17:10never usually get. And that's not a typical, what you 17:13would consider a classic ioc. Right. You're looking at really 17:18behavioral aspects. And you know, Rob said AI providers, they 17:24sit in the middle of these attacks. 100% agree with 17:27that. And I think just by sitting in the middle 17:30of these attacks, you inherit new responsibilities, even if you're 17:35not a cybersecurity company. You know, it's about setting a 17:39baseline set of expectations to, you know, detect abuse signals, 17:45sharing those, maybe those aggregated threat patterns that would be 17:49important to organizations that are potentially, you know, maybe a 17:52victim, even notifying victims, even when their data or identity 17:57is actually being weaponized. I think that. That ignoring the 18:02AI in the threat chain would repeat the cloud security 18:08mistakes that we made 10 years ago. I think we 18:12learned very slowly that cloud providers were critical intelligence partners, 18:18and AI is just that next evolution, as we mentioned 18:22before. And I would say that we're not asking AI 18:27vendors to police the Internet. That would be completely unreasonable, 18:31but we're asking them not to ignore the crime scene 18:35happening within their own platform. That makes perfect sense. And 18:39I'm glad this is. You're developing a theme here. Right. 18:41Which I think it makes a lot of sense. There's 18:44so much that we can kind of learn from the 18:46cloud security kind of moment and the cloud security evolution. 18:49Right. That can apply to some of this stuff. And 18:52again, for me, this is a new lens on it 18:54because I never really considered how in many ways, these 19:03it's got some slightly fancier capabilities. Evelyn, I wanted to 19:07ask you about another kind of common response I've seen 19:11to this conversation, which is that there is some concern 19:15that maybe if these AI vendors release too much information 19:18about this kind of stuff, too much about the IOCs 19:21or the prompts or the technical specifics, it would be 19:24almost like handing the threat actors a playbook on how 19:26to jailbreak an AI. Right. And so I'm kind of 19:28wondering, how do you feel about that and that balance 19:31you need to strike between giving away too much but 19:34keeping your collaborators informed. What's your take? I think Austin 19:37and Ryan actually expressed it very, very well here. It's 19:42kind of funny how all of our minds have gone 19:44with this. When I started looking at this, the very 19:46first thought that came in my head is, we need 19:49to establish this similar to the cloud, where we establish 19:52a responsibility matrix. I mean, when we're looking at the 20:03and he was looking at it to give them a 20:05break, I'm like, yeah, I wouldn't have gone that far. 20:07But I think this is going back to kind of 20:10what we said earlier, we really have to take a 20:13step back and there has to be real, true collaboration 20:17between the cybersecurity firms, the AI providers, et cetera. There 20:21are no clear controls, regulations frameworks around this space. I 20:27work within the regular, you know, within the regulatory agency 20:30where I'm always reviewing regulations. It was a little bit 20:33comical to me when I started just looking at the 20:36US and we have 50 states, but when I started 20:40looking at the actual guidelines and regulations and controls that 20:44were coming across just the United States, there were over 20:47335. Some of them were regulations, some of them were 20:51laws, some were the exit were executive orders, some of 20:54them were guidance, which told me that everyone was confused 20:58on exactly what we should and shouldn't be doing and 21:00how to actually structure this. And until we bring all 21:03of those people together to provide the appropriate guidance, then 21:09I think we're going to continue down this path of 21:12the finger pointing. I don't think it's just a scenario 21:16that the AI providers can solve, nor is it just 21:19something that the cybersecurity firms can, can solve on their 21:22own. I think we have to work together. There's no 21:25way for us to really determine how we actually put 21:28the proper security and governance controls in place that are 21:31really going to safeguard enterprises and provide faster mitigation to 21:35make sure that we have and understand the clear rules 21:39on what should be there versus what's not there. And 21:42then if it's exploited, how do we mitigate it quickly 21:46and some of the information, regardless of what it is 21:50and how they were exploited, we need to understand that. 21:54I think not providing the detailed information, when it came 22:02enough information that you would have been able to do 22:04anything with. And so I think that's something that we 22:06have to take a closer look at. I understand you 22:09don't want to give them the keys to the kingdom, 22:10but how do we fix it if we don't understand? 22:13You know, I can't fix something if I don't understand 22:16how it occurred. Absolutely. And I'm glad that you point 22:18out that, that, you know, some of the fog that 22:21it feels like we're dealing with around how we handle 22:23this stuff is just about that lack of collaboration and 22:26coming together and defining these things. Right. I think again, 22:29it's very easy to feel like, oh, this technology is 22:31so new and unprecedented, that these, this confusion is inherent 22:35to it, but it's not really. Right. We have this 22:38again, all three of you have pointed out we have 22:40some precedents for this, especially that cloud model, that cloud 22:43responsibility matrix you were talking about, Evelyn. So like, let's 22:46start with what we know and start applying it to 22:48what we have here. I feel so much better about 22:52this, this now having talked to you three. I gotta 22:54say you, you, you have lightened the burden on me. 22:58So let's go ahead and move on then to the 23:01next story we gotta cover today, which is the 2025 23:05CWE Top 25 Most Dangerous Software Weaknesses. Mitre recently published 23:16its 2025 CWE Top 2025 Most Dangerous Software Weaknesses is, 23:21as you might guess, a list of the software design 23:24and implementation flaws that underlie the most frequently exploited vulnerabilities 23:29in the wild. Some notable bits up top, the top 23:33three were the cross site scripting, SQL injection and cross 23:37site request forgery. Missing authorization jumped up five spots from 23:42last year, so it was number nine. Now it's number 23:44four. Don't like to see something like that, but those 23:46are the two things that stuck out to me. However, 23:48I'm not the security expert here, so I want to 23:50know what sticks out to you folks when you look 23:52at this list. And I'll start with you Evelyn. You 23:55look at this list of flaws, what sticks out to 23:57you? What comes to mind for you? When I looked 23:59at it and the way the article read it led 24:03you to believe that there's been improvement. And then I 24:08kind of chuckled saying, okay, let me take, I can 24:11be cynical from time to time, so let me take 24:13a step back and look at it a little differently. 24:16But the first thing that I noted was when you 24:19look at cross site scripting, SQL injections and the cross 24:23site request forgery, they were still in the top three 24:27positions. They didn't change. But so my initial thought was, 24:33I mean, should I really look at this as it's 24:36been an improvement or because these are the key root 24:41causes of the majority of the exploits and breaches that 24:44we see out there. And then when I started looking 24:46at some of the IAM pieces around the authorization moving 24:50up, I'm like, eh, okay, so can we say that 24:53there's been improvements? Okay, but I feel like that when 24:57it comes to this, the messaging that we really should 24:59be taking away from this is that defenders need to 25:02continue to drive secure by design initiatives. They need to 25:06make sure that they're building actual checklists based on using 25:10this to build out their strategic roadmap and how they're 25:14prioritizing their risk mitigation and reviewing their code to make 25:18sure that they have secure development. So I was a 25:21little bit cynical, I will admit when I looked at 25:23this, because I didn't necessarily, you know, when you looked 25:27at the word dropped, it was a bit subjective to 25:29me, I guess is what I'll say about this one. 25:32No, that makes sense. And I did have a similar 25:34thought. You know, I feel like every time we get 25:36these lists, it's always the cross site scripting, the SQL 25:39injection. It's the that those injection attacks are like always 25:42right there at the top. And, and it, you know, 25:44I kind of wonder why they're so persistent and I 25:49don't know, Austin, I don't mean to put you on 25:50the spot. Do you have any thoughts about why they're 25:52so persistent, why these injection attacks are so popular? So 25:55comm. Any thoughts there? The really striking thing is not 26:00a lot has changed with this list. You know, despite 26:03all the talk around supply chain attacks AI0 days, most 26:08of these breaches that we're seeing all go back to 26:12decades old vulnerabilities. And the reason we're seeing a lot 26:14of cross site scripting, SQL injection is because it's really 26:18easy on the defender side to carry these attacks out. 26:22Now from the defensive side of things, you know, this 26:27goes well beyond patching. As Evelyn pointed out, secure by 26:31design principles really need to be implemented into the software 26:35development life cycle AI. When it comes to AI related 26:40weaknesses and supply chains, those are creeping up the list. 26:43So it is signaling a slight shift here. But when 26:47we look at the top 25 year over year, it's 26:50pretty much the same. It really signals to organizations and 26:55security professionals that this has become a blueprint for adversaries. 27:02should assume that attackers are weaponizing it and going to 27:06leverage this when they carry out attacks against organizations. Absolutely, 27:10absolutely. Ryan, let's bring you in here. You look at 27:12this list, you look at the persistence of these decades 27:14old vulnerabilities. Do you have the same kind of cynicism 27:17that Evelyn has about this sort of thing? Do you 27:20feel like how come we haven't moved? What's your take? 27:22I do, I feel like this is, is PTSD from 27:24the OWASP list. You know, this, this MITRE list is 27:29the really less about what's new and more about what 27:34we still refuse to fix. You know, this isn't a 27:37list of emerging threats. It's that report card essentially of 27:42fundamentals that we get wrong or fundamentals that we haven't 27:45Fixed. I think these are indicative of foundational engineering failures. 27:51This isn't exotic AI age vulnerabilities. This is input, validation 27:57errors, authorization mistakes, insecure object handling, really the boring stuff. 28:03The boring stuff continues to drive real breaches. And attackers 28:08love this because it's predictable. And while, you know, defenders 28:12were over here, maybe partly chasing some AI hype, you 28:17know, our adversaries are exploiting the same top 25 weaknesses 28:21with a 99% success rate. So even with the rise 28:25of AI, it doesn't change the list. It accelerates our 28:28exploitation. It helps attackers discover, chain, and even weaponize those 28:35weaknesses faster than ever. I would say the message for 28:39defenders or even engineering teams, really, if we're not using 28:44this list as essentially like an okr with some type 28:47of measurable improvement in our environment, then we're just simply 28:52guaranteeing attackers an entry point. And to talk about the 28:57injection kind of circling back to the injection attacks, ejection 29:00attacks are still everywhere because they exploit the oldest and 29:05probably the most universal truth in software, where anytime user 29:10controlled input touches any type of sensitive logic, there's a 29:15risk there. And we are still building systems where our 29:20inputs are not sanitized, validated or even isolated. I would 29:24say, I would argue that the reason we're not fixing 29:28it is merely cultural at this point and not technical. 29:33Secure coding is rarely enforced and developers are really under 29:39pressure to ship and deliver features and legacy applications never 29:46get rewritten. I think that is a really a foundational 29:50and fundamental problem. And injection attacks are going to continue 29:54to persist because attackers really, they only need one missed 29:58validation. And in most organizations there's always one. You know 30:02what's funny, and I did not have this thought until 30:04you were talking just then, which is that like, even 30:06when it comes to these new AI attacks that we're 30:08worried about, what's the main one? It's prompt injection. It's 30:12another injection attack. Right. Like, even when it comes to 30:15this new stuff, we're still dealing with this question of 30:17how do you deal with it when that user input 30:20kind of touches, you know, the, the back end, the 30:22data. Right. And, and I just, I don't know what 30:25we do about that. I think we've got, we've heard 30:27some pretty good ideas here. But yeah, I think it's 30:30just, it is a little bit disheartening to see that 30:32we haven't had a ton of movement. I just want 30:35to, and we've kind of touched on this already, but 30:36I just want to let everybody speak on this before 30:38we move on to the next one, which is that, 30:39you know, you look at this list and as Ryan 30:41said, you take it as like a, a set of 30:44OKRs for, for development going forward. What's your kind of 30:47key takeaway then for defenders? Right, Looking at this list, 30:49like, what should they walk away from this being able 30:52to do with it? So it's not just a list 30:53that you look at and you say, ah, geez, this 30:56is a bummer. We're still dealing with the same old 30:57stuff. Austin, let's start with you. What would you say, 30:59how do you turn this into a practical action? Put 31:02it simply here, you know, defenders need to map this 31:05list to their technology stack, focus on automation detection and 31:10prioritizing that education for your engineers and your developers and 31:16really treating this list as a board level strategic risk 31:20tool. That makes perfect sense. Evelyn, how about you? Any 31:23thoughts there? Pretty much agree. I mean, this is, I 31:27don't quite understand why it's not used as a. I 31:31mean, I would build my project plans using this list 31:34of, you know, the checks that we're going to go 31:37through, making sure that we're aligning with secure coding practices, 31:40making sure we're automating the scans, that when we do 31:43our own penetration testing before we release code. I mean, 31:47so to me this is. It should be best practice 31:49by now, but as a part of Secure by Design. 31:55But I don't understand why we're still struggling with this. 31:57Because when I first read this and I looked at 31:59it, I was like, wait, I was looking at this 32:0110 years ago, we still haven't made any changes. And 32:05I mean, cross scripting and prompt injections, all of these 32:09things were the same thing that we were looking at 32:11more than 10 years ago. And especially my core was 32:14in identity and access management. So when I was looking 32:17at some of the authentication issues, I was like, wow. 32:22Yeah, we may have changed the mechanism in which we 32:25authenticate, but we're still singing the same song. It's just 32:29a different choir. So I struggle with understanding why we're 32:34still struggling with this, if you want me to be 32:36honest. No, no, I get that, I get that. And 32:39Ryan, I know I kind of used your answer to 32:41turn it into a question here, but I just wanna, 32:43I'd be remiss if I didn't give you a chance. 32:44Was there anything else you wanted to add on this 32:46is. I would definitely keep it short and sweet, you 32:48know, for organizations and defenders. If we truly want to 32:52reduce breach risk meaningfully in 2026, we have to start 32:58by eliminating this top 25, I think everything else is 33:02merely optimization at this point. Once again, are kind of 33:05gesturing towards the next story already. So I'm going to 33:08roll into it. Is consumer SSO safe life? In a 33:17recent article, ZDNet Senior Contributing Editor David Berlin weighs the 33:22pros and cons of consumer oriented SSO schemes. That's his 33:25name for it. You also hear these things called social 33:27logins, right? It's like when you go to a website 33:29and instead of making your own account, you log in 33:31with Google, you log in with Facebook. We've all seen 33:33these options and, and they're extremely handy, right? It's very 33:37convenient. You don't have to set up a new account. 33:40But that also means you're setting up a kind of 33:43single point of failure, right? If all of your accounts 33:46are tied to the same Gmail account, if someone steals 33:48that account, they can get into all the rest of 33:50those accounts, right? And that reminds me of on last 33:52week's show we talked about this attack where hackers can 33:55take over your Gmail account and lock you out by 34:02people say, look, there is that that single point of 34:05failure. But your Googles, your Facebooks, these platforms, they tend 34:09to have more robust security than some random website. So 34:12like if the choice is between putting your information into 34:14a random website or using a bigger platform, maybe use 34:17the bigger platform. And I get that. But you know, 34:21I think about how we just looked at this list, 34:23right, and saw these, these identity and authentication weaknesses still 34:27being there, still being big ones. And I think about 34:30the fact that like every new X Force Threat Intelligence 34:32index that comes out is like, what's the number one 34:34attack? Attack? It's valid account abuse, right? It's just stealing 34:36credentials and using that. So, you know, given that we 34:39have all of these pressures on identity security at this 34:42moment, I want to your takes on where you land 34:45in this debate. Are social logins safe? And you know 34:48what, we'll start with Ryan on this one. Ryan, what's 34:51your take? You feel like these things are safe? Where 34:53do you land? I don't know if I would use 34:55safe in those exact terms. I would, I would, I 34:58like talking in risk, right? But I would say it's 35:02both safer and riskier. I think it just really shifts 35:08where we are placing the danger, right? So like we're, 35:13if we upgrade our authentication strength, but we're also centralizing 35:19failure into that single identity provider. I would say for 35:23most people, Google or Apple logins is way more secure 35:27than a password that's reused across 30 sites. Right. The 35:31protections are stronger, the fraud detection is better, and MFA 35:36adoption, I would say, is higher. But if we look 35:39at the flip side of that, if something were to 35:41be compromised, that blast radius that we always talk about, 35:44that blast radius is real. If an attacker compromises your 35:48primary identity, they then inherit everything connected to it, it, 35:52your banking, your apps, your cloud data, medical portals, you 35:58know, whatever that might be. Right. And attackers are, you 36:01know, as you mentioned, are increasingly targeting those account linking 36:05and recovery flows. So it's really not the SSO providers 36:10themselves. They go after the back doors, not the front 36:14door, I think. So I guess the real answer is 36:17probably conditional. SSO is safe if you pair it with 36:22strong mfa, good recovery hygiene and visibility into what accounts 36:29are federated and what accounts are not. I would say 36:33consumer SSO is safe when you treat it like a 36:37security control, not just a convenience shortcut. That's an extremely 36:42good point. Right. And I also, I wanted to just 36:44emphasize, as you pointed out, what we often see when 36:46people don't use this stuff is they just use the 36:48same password across a bunch of different websites. And again, 36:51that's how so many attackers get into people's accounts. Right. 36:54You get a leaked credential from somewhere, they'll just go 36:57try it on a whole bunch of websites and see 36:58where it works, and often works way more than it 37:01should. Evelyn, let's bring you into this. You're looking at 37:04this question of are these things safe? Or maybe in 37:07Ryan's word, how do they affect the risk? Where's your 37:10take? Where do you land? I love when Ryan said 37:13convenience shortcut because it really made me think about, you 37:18know, this was a definitely a thought provoking question because 37:22when I stop and take a step back, how many 37:25of us, when we created a social media account, was 37:28really thinking about strong credentials? You know, thinking about, okay, 37:32let me make sure that I'm using a strong password 37:35with mfa, you know, blah, blah, and that I'm not 37:38tying it to something that if it's, if it's, it's 37:41compromised, it could potentially have this massive impact on my 37:44life. So I'll be the first one to say that 37:47when I set up a Facebook account years ago, no, 37:51I was not thinking security. And so I had to. 37:54But I only use that account on just that platform. 37:58So if you, if you steal it, you steal it. 37:59No, but, but in all honesty, being serious about this, 38:03I have to admit this really made me think about 38:07now I have been laxed in getting passkeys set up 38:10everywhere, but it really made me think about, with technology 38:14advancements, the importance of setting up, you know, the passkeys, 38:18of setting, making sure that you have MFA that, I 38:21mean for important websites like my banking, financial, things of 38:26that nature. I don't authenticate using any form of Google 38:29anywhere. But you know, my mind is thinking security. The 38:32average person may not actually be thinking about that. So 38:36this was one that really made me take a step 38:38back and say, okay Evelyn, you know, I don't want 38:42you to be hypocritical here. So think back on every 38:46credential that you set up over the years. Did you 38:49really take the security aspect into, you know, into play? 38:52And where have you not set up passkeys that you 38:55should have set up passkeys? So you did. This did 39:02clean up. And I think everyone has to do that. 39:04I mean single sign on is great when it's used 39:07properly. And I commend, I mean the Google, the Apple 39:12for giving you that additional capability to be able to 39:15authenticate. But Ryan said it best. I mean, if this 39:18one area is exploited, exposed, I mean, if they're able 39:22to breach and steal my credentials, how many other avenues 39:26am I opening them up to? And I think that's 39:28something that end users have to really think about when 39:31they're going through and setting up these single sign on 39:33mechanism mechanisms. Absolutely. And I think that you're right in 39:35that these things aren't necessarily presented to the end user 39:38as a security play, right? It is, it's a convenience 39:40thing, right? Like you're, you're absolutely. Same with me. When 39:43I set up my Facebook years ago, I was not 39:46thinking about security at all. Right. And then I saw, 39:48oh, I can sign into another website with Facebook. Yeah, 39:50why not? No one ever said a word to me 39:52about security. So I do think there's some messaging that 39:55needs to happen there. Austin, your take on the debate, 39:58where do you fall? You mentioned convenience. We always sacrifice 40:02security for convenience, especially for consumers that want that. So 40:07for enterprises relying on consumer single sign on for critical 40:11applications, I think that can be fine. But these applications 40:16need to be low risk as that can improve their 40:19usability. But it certainly shouldn't be the standard. So as 40:23Evelyn pointed out, I think we should integrate pass keys, 40:27integrate that with MFA passkeys, eliminate shared secrets and reduce 40:34the risk of Phishing by using that authentication through cryptography, 40:39which is tied to the user's device. But again, there's 40:41no one size fits all. And I think when it 40:43comes to password security, it's good to have layered defenses 40:47involved. I always tell friends and family about a password 40:50manager. Yeah, you have have one specific password that's tied 40:55to everything, but that's the one password that you need 40:58to protect. And then when you integrate things like MFA 41:01passkeys and SSO into sort of a one, one solution 41:07there, then you are much more secure. And then you 41:10don't run into these issues or those single, single points 41:13of failure. Absolutely. So I think the kind of broad 41:15takeaway here is that, as with so many things in 41:18security world, it's not black and white. It's conditional, to 41:21use Ryan's word. Right, right. It's not necessarily that. Is 41:24the SSO itself secure so much as what are you 41:27putting around that thing and what kind of defense and 41:29depth do you have? And that's the real question, because 41:32if you're only relying on one password for anything, no 41:35matter what it is, SSO or not, it's not good. 41:39That's not a good passkeys. We like pass keys, folks. 41:41Let's move on, though, to our last story because we 41:44are running out of time here. Red Canary researchers outline 41:48a bring your own virtual machine attack. Now, the security 41:56operations firm Red Canary reported on a recent incident they 41:59helped address in which a spam bombing campaign ultimately led 42:03to a malicious virtual machine on the victim system. Real 42:07quick, it went like this. The attack started with a 42:09campaign to flood the victim's inbox with a bunch of 42:12thousands of emails. So they, they. A lot of important 42:15notifications were obscured. And also this put people on high 42:19alert. Next, the attacker calls the victim, says, hey, I'm 42:22technical. I can help you with this problem. The victim, 42:24not thinking clearly because they're being inundated with these messages, 42:28takes them up on the offer, gives them remote access 42:30to their computer using Quick Assist, and at that point, 42:33the attacker drops their own virtual machine in there using 42:37a Visual Basic script. And it gives them this, like, 42:40very strong persistence and control. Now, what's interesting to me 42:44was, you know, I've heard of like, you know, bring 42:46your own driver attacks or whatever before, but this is 42:49the first time I've heard of a bring your own 42:50virtual machine attack. And so I just wanted to start 42:52by asking if this is something that anybody's seen before. 42:55Austin, have you ever heard of something like this before? 42:57Bring your own virtual Machine attack? Not specifically, but what's 43:02so interesting about this form of attack that, you know, 43:05it's not your standardized, say, malware, it's infrastructure within your 43:10infrastructure that can really go undetected. And it's a reminder 43:14that attackers don't need these complex zero days. They just 43:18need some creativity and patience here. A virtual machine can 43:22survive reboots, evade host detection, and even run its own 43:28tool set isolated from a main operating systems. And most 43:33organizations don't have the practices in place to monitor any 43:38hypervisor activity from these virtual machines. So really the lesson 43:44here is to monitor any unexpected behavior or activity from 43:50these virtual machines and to expand that across your endpoint 43:56detection and response. Because by ensuring your endpoint security tools 44:01can detect any of these nested environments. That will really 44:06go a long way. And what's also interesting here is 44:10they can sort of go under the radar. So simply 44:15by analyzing resources within your environments, like different spikes of 44:20memory and cpu, that will sort of indicate for defenders 44:25that maybe there's something more going on here. I like 44:28that. Yeah. And that's a real concrete way to kind 44:30of monitor for some of these, like you said, infrastructure 44:33within your infrastructure attacks, which in a way we're kind 44:36of going for full circle because that's sort of what 44:38some of these AI abuses are. Right. It's sort of 44:41of attackers using your infrastructure against you or setting up 44:44their kind of own infrastructure within your system. So I 44:47like that we're starting to talk more about how you 44:49detect that and not just malware. Ryan, I wanted to 44:52ask you, you know, building on that, you know, we 44:56talked about the vm, the virtual machine, but I also 44:58wanted to ask you about this kind of spam bombing 45:00campaign, because it's not necessarily something I've seen before again. 45:04And I'm wondering if you have any thoughts on, on 45:07that. Have you seen anything like that? This kind of 45:09flooding the victim with this noise to put him on 45:11edge? What's your take there? Yeah, I think each one 45:13of these is kind of situational. And in this particular 45:17one, I think, no, we don't. I guess to answer 45:20that, no, we don't see that all too often, but 45:23I would argue that the surrounding trade craft in this 45:28situation, this email bombing, the social engineering, the remote access 45:34abuse, I would say is classic misdirection. It's creating noise. 45:40So the defender actually never thinks to look for a 45:44hypervisor artifact that's quietly spinning in the background. Right. So 45:49I think that really the attack in general, I think 45:52it forces us really to rethink where the endpoint. I 45:57use endpoint in quotations, where the endpoint actually is. And 46:01if attackers can import their own operating system into your 46:05operating system, then your visibility model isn't just incomplete, it 46:12would be essentially obsolete. And I really, I loved this 46:17article. I think this really is a master class on. 46:22I know we talked about infrastructure, but I kind of 46:24wanted to shift back to the malware, what I call 46:26the four truths of malware. Modern threat actors have what 46:32I call call the four truths of malware, where all 46:35malware must run, hide, communicate and persist. And sometimes our 46:42tools are not designed to see all of this, right? 46:46So in this particular case, the malware must run, right? 46:50So the attacker didn't run code on the host, they 46:53ran it inside of the virtual machine. So that execution 46:57or that running happens in a sandbox the defender actually 47:01never inspects, which entirely bypasses edr. When we talk about 47:07hiding, the VM is, I would argue, is really a 47:11perfect hiding place. Instead of hiding a simple process, they're 47:16actually hiding an entire operating system parallel to the the 47:23host, right? So it's invisible to actually the traditional monitoring 47:27standards that we've come accustomed to to. We talk about 47:30communication inside that virtual machine. That attacker can use their 47:34own tooling at their free will, their own C2 channels, 47:38their own networking stack, right? That host operating system is 47:44creating traffic that just looks like normal virtualization activity or 47:50even benign resource usage. And we talk about the fourth 47:54one is persistence. The persistence here isn't just a registry 47:59key or a scheduled task. The persistence is actually the 48:04VM itself, which I think is really fascinating. And if 48:08the virtual machine actually survives a reboot or a user 48:12login, that attacker really has long term, has a long 48:17term beachhead, really, with no conventional indicators. So I really 48:22think that if all malware must run, hide, communicate and 48:28persist, then it's scary. But virtual machines would be the 48:33ultimate way to do all four without ever touching the 48:38host. I think that fundamentally should change how we think 48:43about our endpoint defense and the telemetry that goes into 48:46the. That absolutely. And, and you know those four truths. 48:49I really like that because it's like, you know, is 48:51it. Is the VM malware or not? It almost doesn't 48:54matter, right? It's accomplishing the same thing, right? It's, it's 48:56doing the same thing. And like you said, it's doing 48:57the same thing, maybe even better than, than traditional malware 49:00does. So at that point, the categorization distinction is like 49:03it's moot all right, it's moot. We're dealing with something 49:05big. Evelyn, to close out the episode. I just wanted 49:08to get your take looking at this sort of report. 49:11What do you think the key takeaways are for defenders 49:13when it comes to the, the rise of these. Not 49:15the rise I shouldn't say because it's the first one 49:17I've seen, but the fact that possibility now these bring 49:19your own virtual machine attacks. What's your, what's your takeaways? 49:22I definitely think that Ryan did an outstanding job breaking 49:25this down and explaining it. The only thing that I 49:27would probably add too, and which was really covered here 49:31was that we have to look at this from, you 49:34know, from a visibility, looking at the virtualization layers where 49:37we have to make sure that we're monitoring any unauthorized 49:40VMs. This was a very unique, interesting case that, that, 49:46I mean, you know, they were definitely thinking outside of 49:49the box. I mean, who would have thought that at 49:51some point that an attacker would have thought, you know 49:53what? I'm not going to go in the normal way. 49:55I'm going to bring my own vm. And so just 49:58thinking about the, the. Well, I, I mean it's like 50:03I feel like they need an award for thinking outside 50:05of the box, but because it's something very, very unique 50:11that I must admit when I read it, I had 50:13to go back and read it again and then start 50:15doing some research on it because I started thinking about 50:18all the risks that are associated with this and how 50:20we will be able to detect it. No, I get 50:23that. And it is, it is, is an amazing bit 50:25of misdirection, I think was the word that Ryan used. 50:27Right? Like this classic misdirection. You do almost want to 50:30give him a reward, but you don't gotta hand it 50:33to him, folks. But that is all the time we 50:36have for today. I want to thank our panelists Austin, 50:39Evelyn and Ryan. Thank you to the viewers and the 50:42listeners. As I mentioned up top, this is the last 50:44regular panel episode of the year, but it's not the 50:46last show of the year. Look out for our special 50:502025 Year in Review episode next week and the week 50:53after that, we're going to have an in depth interview 50:55with regular panelist Michelle Alvarez where we dive into why 50:59it costs so much to get hacked in America. And 51:02if, if you can't get enough of IBM's podcasts and 51:05who can be sure to subscribe to TechSplainers wherever you 51:09get your podcasts. This is a daily audio only show 51:12where IBM writers give you a crash course in hot 51:15tech topics, including our very own producer, Brian Clark delivering 51:19a five part series on cybersecurity fundamentals. Again, that's TechSplainers 51:23on Apple, Spotify, wherever else you listen to podcasts. And 51:27as always, please subscribe to Security Intelligence wherever podcasts are 51:32found and stay safe out there.