Gartner Recommends Banning AI Browsers
Key Points
- Gartner recommends organizations temporarily ban AI‑enabled browsers (e.g., Perplexity’s Comet, ChatGPT’s Atlas) due to risks of data exposure and uncontrolled AI agents accessing corporate systems.
- Recent research demonstrated a “drive‑wipe” attack where a simple email command could delete an entire Google Drive, highlighting the real‑world danger of AI‑driven automation.
- Panelists, particularly Ryan Anschutz, expressed a conservative stance, agreeing that the lack of accountability and zero‑click exploits make AI browsers too risky for enterprise use today.
- The episode also raised broader questions about AI companies’ responsibilities to the threat‑intelligence community, upcoming MITRE‑identified software weaknesses for 2025, and the safety of using Google sign‑on and BYOVM attacks.
- Overall, the experts advise caution and immediate mitigation (e.g., bans) until robust security controls are implemented for AI browsers.
Sections
- Gartner Advises Ban on AI Browsers - The IBM Security Intelligence podcast panel discusses Gartner's recommendation to block AI browsers over data‑leakage concerns while covering AI threat‑intelligence responsibilities, upcoming Mitre software weaknesses, Google sign‑in safety, and BYOVM attacks.
- AI Browsers: Emerging Security Risks - Panelists discuss how integrated AI browsers expand attack surfaces—through prompt injection, data exfiltration, and zero‑click exploits—and stress the need for strict policies and trust boundaries, aligning with Gartner's pragmatic view.
- User Alignment Critic for AI Safeguards - The speaker outlines safeguards such as mandatory human review of non‑user‑origin instructions and Google’s “user alignment critic” model that evaluates AI agents’ plans before execution, and solicits security experts’ perspectives on the promise and direction of these approaches.
- AI Providers' Role in Cyber Threats - The discussion highlights criticism that AI vendors sit at the center of emerging attacks yet withhold vital indicators of compromise, prompting calls for clearer detection standards and accountability.
- Balancing AI Threat Transparency - Participants discuss the dilemma of sharing AI vulnerability details, likening it to cloud security’s responsibility matrix and urging collaborative limits to prevent providing threat actors a playbook.
- Closing Knowledge Gaps Through Collaboration - The speakers emphasize that sharing detailed information and leveraging existing frameworks such as the cloud responsibility matrix are essential to understand and remediate security issues before transitioning to the discussion of the 2025 CWE Top 25 software weaknesses.
- Why Injection Attacks Persist - The speakers explain that cross‑site scripting and SQL injection remain common because they’re easy to exploit, while defense requires secure‑by‑design practices despite emerging AI and supply‑chain threats.
- Injection Risks Persist Amid Development Pressure - The speaker laments the lack of enforced secure coding and rapid release cycles that leave legacy apps vulnerable, noting that both traditional and AI‑driven prompt injection attacks continue to thrive, and challenges defenders to derive actionable takeaways despite slow industry progress.
- Evaluating Consumer SSO Risks - The speaker urges organizations to cut breach risk by addressing the top 25 vulnerabilities and then debates the convenience versus single‑point‑of‑failure trade‑off of consumer single sign‑on/social login services like Google and Facebook.
- Credential Reuse and Security Hygiene - The speakers discuss how leaked passwords are rapidly tested across multiple sites, highlighting users' habitual neglect of strong, unique credentials and the resulting need for MFA, passkeys, and better overall security practices.
- Bring-Your-Own Virtual Machine Attack - Red Canary describes a spam‑bombing campaign that coerces victims into granting remote assistance, allowing attackers to drop a malicious virtual machine via script for persistent control.
- Misdirection via Noise Flooding - The speakers discuss how attackers use email bombing and other noise to distract defenders, conceal hypervisor‑level malware, and expose the limits of traditional endpoint visibility, highlighting the four essential malware behaviors of running, hiding, communicating, and persisting.
- Defenders Beware: Bring‑Your‑Own‑VM Attacks - The speakers argue that VM‑based payloads act like traditional malware—making classification moot—and urge defenders to boost visibility by monitoring for unauthorized virtual machines to counter this novel attack vector.
Full Transcript
# Gartner Recommends Banning AI Browsers **Source:** [https://www.youtube.com/watch?v=8jWAQiSqDVU](https://www.youtube.com/watch?v=8jWAQiSqDVU) **Duration:** 00:51:33 ## Summary - Gartner recommends organizations temporarily ban AI‑enabled browsers (e.g., Perplexity’s Comet, ChatGPT’s Atlas) due to risks of data exposure and uncontrolled AI agents accessing corporate systems. - Recent research demonstrated a “drive‑wipe” attack where a simple email command could delete an entire Google Drive, highlighting the real‑world danger of AI‑driven automation. - Panelists, particularly Ryan Anschutz, expressed a conservative stance, agreeing that the lack of accountability and zero‑click exploits make AI browsers too risky for enterprise use today. - The episode also raised broader questions about AI companies’ responsibilities to the threat‑intelligence community, upcoming MITRE‑identified software weaknesses for 2025, and the safety of using Google sign‑on and BYOVM attacks. - Overall, the experts advise caution and immediate mitigation (e.g., bans) until robust security controls are implemented for AI browsers. ## Sections - [00:00:00](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=0s) **Gartner Advises Ban on AI Browsers** - The IBM Security Intelligence podcast panel discusses Gartner's recommendation to block AI browsers over data‑leakage concerns while covering AI threat‑intelligence responsibilities, upcoming Mitre software weaknesses, Google sign‑in safety, and BYOVM attacks. - [00:04:11](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=251s) **AI Browsers: Emerging Security Risks** - Panelists discuss how integrated AI browsers expand attack surfaces—through prompt injection, data exfiltration, and zero‑click exploits—and stress the need for strict policies and trust boundaries, aligning with Gartner's pragmatic view. - [00:07:58](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=478s) **User Alignment Critic for AI Safeguards** - The speaker outlines safeguards such as mandatory human review of non‑user‑origin instructions and Google’s “user alignment critic” model that evaluates AI agents’ plans before execution, and solicits security experts’ perspectives on the promise and direction of these approaches. - [00:14:07](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=847s) **AI Providers' Role in Cyber Threats** - The discussion highlights criticism that AI vendors sit at the center of emerging attacks yet withhold vital indicators of compromise, prompting calls for clearer detection standards and accountability. - [00:18:41](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1121s) **Balancing AI Threat Transparency** - Participants discuss the dilemma of sharing AI vulnerability details, likening it to cloud security’s responsibility matrix and urging collaborative limits to prevent providing threat actors a playbook. - [00:21:53](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1313s) **Closing Knowledge Gaps Through Collaboration** - The speakers emphasize that sharing detailed information and leveraging existing frameworks such as the cloud responsibility matrix are essential to understand and remediate security issues before transitioning to the discussion of the 2025 CWE Top 25 software weaknesses. - [00:25:31](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1531s) **Why Injection Attacks Persist** - The speakers explain that cross‑site scripting and SQL injection remain common because they’re easy to exploit, while defense requires secure‑by‑design practices despite emerging AI and supply‑chain threats. - [00:29:32](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1772s) **Injection Risks Persist Amid Development Pressure** - The speaker laments the lack of enforced secure coding and rapid release cycles that leave legacy apps vulnerable, noting that both traditional and AI‑driven prompt injection attacks continue to thrive, and challenges defenders to derive actionable takeaways despite slow industry progress. - [00:32:44](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=1964s) **Evaluating Consumer SSO Risks** - The speaker urges organizations to cut breach risk by addressing the top 25 vulnerabilities and then debates the convenience versus single‑point‑of‑failure trade‑off of consumer single sign‑on/social login services like Google and Facebook. - [00:36:54](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2214s) **Credential Reuse and Security Hygiene** - The speakers discuss how leaked passwords are rapidly tested across multiple sites, highlighting users' habitual neglect of strong, unique credentials and the resulting need for MFA, passkeys, and better overall security practices. - [00:41:39](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2499s) **Bring-Your-Own Virtual Machine Attack** - Red Canary describes a spam‑bombing campaign that coerces victims into granting remote assistance, allowing attackers to drop a malicious virtual machine via script for persistent control. - [00:45:04](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2704s) **Misdirection via Noise Flooding** - The speakers discuss how attackers use email bombing and other noise to distract defenders, conceal hypervisor‑level malware, and expose the limits of traditional endpoint visibility, highlighting the four essential malware behaviors of running, hiding, communicating, and persisting. - [00:48:49](https://www.youtube.com/watch?v=8jWAQiSqDVU&t=2929s) **Defenders Beware: Bring‑Your‑Own‑VM Attacks** - The speakers argue that VM‑based payloads act like traditional malware—making classification moot—and urge defenders to boost visibility by monitoring for unauthorized virtual machines to counter this novel attack vector. ## Full Transcript
So when it comes to AI browsers, I'm going to
sort of just hold my breath and wait until we
actually see some security implementation involved. All that and more
on security intelligence. Hello, and welcome to Security Intelligence, IBM's
weekly cyber security podcast, where our expert panelists turn the
biggest industry news stories into practical takeaways you can use.
I'm your host, Matt Kaczynski, and this is going to
be our last regular panel of the year. And what
a panel it is. We've got Austin Zeissel, threat intelligence
consultant Evelyn Anderson, CSS cto, distinguished engineer and master inventor,
and Ryan Anschutz, North America, leader of X Force Incident
Response. Thank you all for being here today. Here is
what we're going to talk about. Should AI companies do
more for the threat intelligence community? Mitre's most dangerous software
weaknesses for 2025. Is it safe to use Google to
sign into other websites and a bring your own virtual
machine attack? But first, Gartner says organizations should ban AI
browsers. Now, in a recent advisory titled CyberSecurity must block
AI browsers for now, which kind of tells you exactly
what the advisory is about, the analyst firm recommends that
businesses block their employees from using AI browsers like, you
know, Comet by Perplexity, ChatGPT's Atlas. The concern stems from
the fact that sensitive personal and corporate data can end
up with the AI services that power these things. And
also that you have an AI agent right there in
the browser who might have some access to corporate systems
and maybe mess with some things they shouldn't or be
weaponized by malicious actors. Right, and, and as if on
cue, this week we also saw research come out from
a way to wipe your Google Drive just by sending
an email that says, please organize my drive. So even
with security measures in place, Gartner's report suggests that the
complexity of the effort and the unpredictability of end users
means that you're just safer getting, you know, banning the
browsers from the workplace. For now, I want to start
by just asking everybody what we think of this proposal.
If we agree, disagree. Have anything to add? Ryan, I
want to start with you. You see this proposal from
Gardner. They're saying just ban the AI browsers. What's your
take? Yeah, I think I am more conservative, so I
would probably lean in the direction of agreeing with this.
You know, you look at the Star Labs 0 click
exploit and that really Shows why Gartner hit the brakes,
right? Like one malformed prompt and your entire Google Google
Drive is gone. That is a bit of a problem,
right? So AI browsers that are have the ability to
create automation without accountability, where they can read, write, click,
delete, all without explicit approval in the moment, is problematic
from my perspectives. And I think that STAR Labs really
proved how dangerous that is. You know, triggering a workflow,
in this case, you know, triggering that workflow that wiped
that Google Drive didn't require phishing. It didn't require, you
know, macros enabled or any social engineering. It just poisoned
the input. And that really changes. I think our, our
blast radius conversation when we're talking about defending our networks
and our people, you know, we're not talking about stealing,
you know, browser cookies anymore, tricking the user. You know,
we're talking about really full data loss that's initiated by
an AI agent that you didn't directly instruct. I'm glad
you kind of brought up how, how different this is
from some of those traditional attacks where, like you said,
you're social engineering, you're stealing cookies from a browser. This
is just an email you send. And not just that,
but it relies on one of the kind of benefits
that the AI browsers are supposed to bring you, right?
They say, hey, look, these browsers, they'll take care of
the stuff that's in your inbox for you. You know,
if there's a to do list in your inbox, they'll
do it for you. If somebody sends you a malicious
to do list, all of a sudden the AI agent
is weaponized and it's barely even a hack. Right. Austin,
let me move on to you. When you look at
this proposal from Gartner, are you agreeing, disagreeing. What are
your thoughts here? So, no, I definitely agree with Gartner
here. You know, their, their opinion isn't alarmist. I'd say
it's rather pragmatic. You know, until vendors mature their security
posture, we're going to see a lot of issues with
AI browsers and organizations should really enforce strict policies on
AI browser use, especially in regulated sectors. AI browsers are
lacking a lot of different security issues still with chatbots
and gen AI tools rather than just the browsers themselves.
Now, with integrated AI browsers, you introduce an entirely new
attack surface with things like prompt injection, data exfiltration, and
then, of course, as Ryan mentioned, the zero click exploits
from STAR Labs. So better boundaries of trust need to
be established here moving forward. Absolutely. And yeah, I think
that, you know, it's not just that we're seeing these
zero click exploits in our inboxes, but even just on
public web pages. Right. We've seen the ability to put
in these malicious prompts that your agent can read and
it can act. So it's like you said, the attack
surface just blows up. It's everywhere all of a sudden.
Evelyn, I'd like to get your take on Gartner's recommendations.
Agree, disagree, where you land in here. I actually agree
with both, Ryan. I, you know, in all honesty, when
I first saw this, the one thing that really stood
out for me was the statement around prioritized user experience
over security. And so I went out and did a
bit more of research, looking at some of the other,
you know, people that have responded to this, some of
the experts within this field, looking at some additional articles,
and for the most part, many in the community, you
know, in the cybersecurity community, they actually agreed. But there
were a few that felt like the ban was more,
you know, kind of the classic shadow, which was a
bit comical to me because we all know that governance
and security are critical. But it did cause me to
wonder, when you're looking at this, how many agents are
already embedded within the enterprise? And how effective will the
ban actually be on protecting against some of the key
risk? I mean, when we looked at some of the
things that they pointed out about being able to send
web content, accessing the browser, I think what was one
of a couple of the others, like being able to
click and access backends autonomously and execute transactions. That actually
did make me wonder how effective this would be. I
think it's the first step, but I think it's critical
that we all kind of get together, figure out what
are the correct security controls that need to be in
place around how we're securing AI as a whole, and
what is the governance structure? I'm glad you bring that
up because it kind of segues into exactly what I
wanted to ask you folks about, which was, you know,
how do we start thinking about tightening security for these
kinds of things? Right. Because Gartner even points out, look,
there are security measures you can put in place, and
their take is kind of like, but even if you
put them in place, it might not be enough, so
you might as well just avoid it. But I want,
I'm thinking specifically of some of the recommendations that came
out of the Straker Labs, research we've been talking about
where they said, you know what, One of the things
you might want to do is put require a human
to review any instructions that are, are in content. Right?
And by what that they mean any instructions an agent
comes across that didn't come directly from the, the, the
user, whether they're in an email or they're on a
web page. One thing you can do is just make
sure that the agent has to review that stuff with
a human before it can act. The other thing that
was interesting to me was we also saw that Google
has taken some steps to protect its own agents embedded
in Chrome. And it announced, this is really interesting to
me, a user alignment critic, which is a different model
that's put there to evaluate the agent's plan, right? Agent
gets some instructions, it comes up with a plan to
execute them, and then it has to pass through this
other model, the user alignment critic, which kind of evaluates
the plan, makes sure it looks good. If it doesn't,
it kicks it back to the agent. If it does,
it allows it to. And I think those are some
pretty inventive ways to start safeguarding these things. And I
wanted to get your thoughts as the kind of security
experts here. Is that kind of thing, Is it promising
to you? Do you think we're heading the right direction?
What would you like to see? Austin, I want to
start with you. When you think about safeguarding these AI
be some advancements here, but really from a security standpoint,
and in my opinion, I still think it's too early
to just jump in and to risk it, really give
it more time to develop. And I sort of analogize
this to Mac OS software updates, where the latest OS
comes out and everyone rushes to download it. Then you
find out that it's littered with different bugs and vulnerabilities
and security flaws. So I usually like to kind of
take a step back and wait for that 1.1 version
to come out until anything. Right. So when it comes
to AI browsers, I'm going to sort of just hold
my breath and wait until we actually see some security
implementation involved. I think it's an extremely good point. Right.
Maybe wait for the 1.1 of ChatGPT's atlas. You know
what I mean? Evelyn, how about you? What kind of
things would you like to see them implement to make
you feel more comfortable with them? I think Austin was
spot on. I mean, the very first thing that I
thought about when he said that was, you know, when
releases come out, unfortunately I tend to lag a little
bit to wait to find out what's going to break
before, you know, I jump in and make the, you
know, install the updates. I think Google is first, you
know, because of the one click issue that they're trying
to offset kind of the light shining on them. So
it's out there and we're all scrambling, trying to figure
out what shifts should and should not be in place.
And I don't think we know the biggest thing is
that we can put different controls in place. I think
it's going to take a collaborative effort between multiple bodies
to figure out what is the right approach. Right now
we're just trying to pull rabbits out of our hat
and hope that it works. And I don't think we
have the right answers yet. I think it's going to
take some actual time in the lab testing, trying to
figure out, okay, what are the risks that are out
there and if we put this in place, does it
mitigate this risk, yes or no, and move on. But
right now we're just being a bit more reactive to
every time a new problem arises, we're trying to identify
something as a quick fix versus us taking the time
to be a bit more, you know, saying, okay, these
are the potential exposures and this is how we could
actually close them. So I think it's going to take
some time and the path that we're taking is not
necessarily the right one. It's, you know, one to be
reactive, to try to address the problems. But I don't
know that it's the right approach that we're taking right
now not makes sense. I mean, I think that especially
when it comes to this AI related stuff and these
agents, they're new, they're shiny, they're very exciting and they
can feel sometimes like there's a lot of pressure to
be among those early adopters. But you know, there is
wisdom in waiting a little bit, Ryan, to close out
the segment. I'd like to get your take. And you
look at these AI browsers, what would you like to
see them do? Is there anything they could do to
make you more comfortable with, with them? Yeah, I think,
you know, Evelyn mentioned it really well. She mentioned one
specific word and that's collaboration. And I think that's going
to be on a lot of us, from security companies
to vendors to implementers, because the, the answer is we
can't avoid AI browsers forever. Right. I think from a
security standpoint if we are able to wrap them in
the same or similar guardrails that we currently already build,
like for cloud workflows or any type of privile, I
think that is a really promising way forward because those
tools, now, those act on our behalf. Right. So they
need permission, scopes, decision logs, and actually real isolation. So
when we actually treat the AI autonomy like a privileged
user instead of just maybe a passive browser, I think
that would dramatically reduce our risk and then allow innovation
to actually scale safely. Yeah. You know, I like that
you point out that, because I think a lot of
times with some of these AI related conversations, there's a
tendency to feel like everything is brand new. And like
you're saying there is a, there is a precedent here,
there are things we can look at that apply to
this stuff. So, sure, it may be very exciting technology
that's different from some things we've seen. But like you
said, we've dealt with similar things in the past. We
can apply some of those same approaches to this sort
of thing. And you also almost kind of read my
mind in talking about the collaboration that's necessary because that's
exactly, exactly what I want to talk about in our
next segment. Where do AI companies fit into the threat
intelligence landscape? Now, all this conversation about AI browser safety
also brings to mind a recent LinkedIn post that I
saw from Rob T. Lee, who is Chief AI Officer
at the SANS Institute. Now, to give you a little
context, back in November, as I'm sure we all recall,
Anthropic busted this AI powered espionage ring that was using
Claude to automate significant portions of its campaign. And it
was a pretty big deal. It was one of the
more sophisticated, you know, uses we've seen for these tools.
And so the report garnered a lot of attention and
it also garnered a little bit of backlash. Specifically, some
cybersecurity pros felt that it left out really valuable threat
intelligence information. Like, specifically indicators of compromise. I saw a
lot of people say, how come we can't get any
IOCs, right? And as Lee put it on LinkedIn, and
this is a direct quote from him, AI providers now
sit in the middle of these attacks and we don't
yet have clear expectations for how they detect abuse and
notify victims. Right. So part of what we're dealing with
here is that these vendors are entering the kind of
attack surface, the blast radius, in a way that they
hadn't before. And we haven't sussed out what that looks
like, where they fit into the collaboration. So, Austin, I
wanted to start with you first. You know, you do
a lot of work in threat intelligence. Where do you
think AI organizations fit into this landscape? You know what,
if anything, do they, I don't know, owe cybersecurity professionals
when it comes to these kinds of things. So again,
we're so early in this technological revolution of AI and
the providers are really sitting in the center of all
this abuse happening, but are really lacking the ability to
formally detect and disclose this victim abuse that we're seeing.
And we also see that the AI providers are able
to disrupt some of these AI powered espionage campaigns. But
we really need standardized frameworks in place here because right
now the expectations are pretty unclear across the industry. There's
no standard for how AI providers detect malicious use, when
they notify victims, or even how intelligence is shared with
defenders. So threat intel teams really should prioritize tracking AI
misuse patterns, behavior and then pressure these vendors for transparency.
Because within AI we, we've been talking about how important
governance and compliance is becoming. So that really weaves into
that aspect here. Absolutely. Now Ryan, I want to ask
you about something else that, that came up in Rob's
post, which was this idea and it's not just Rob,
I shouldn't say like that because it's actually come up
with a lot of people. But there's this view that
even if anthropic had shared IOCs, it wouldn't be that
much use to defenders because most defenders aren't defending generative
AI models. Right. So the idea is that like why
are we so upset about Anthropic not sharing some of
this stuff? Could we actually use it? I want to
get your take. Do you think it's true that the
IOCs wouldn't be of much use to defenders or where
do you land on that? I kind of land in
the middle. And it just depends on what IOCs are.
What are we looking for? You know, they, you know,
at that time, Anthropic really they can see or any
other AI vendor really if they're, if we're talking about,
you know, threat chains or you know, inside that attack
I.e. the pre attack intelligence that, that as defenders we
never usually get. And that's not a typical, what you
would consider a classic ioc. Right. You're looking at really
behavioral aspects. And you know, Rob said AI providers, they
sit in the middle of these attacks. 100% agree with
that. And I think just by sitting in the middle
of these attacks, you inherit new responsibilities, even if you're
not a cybersecurity company. You know, it's about setting a
baseline set of expectations to, you know, detect abuse signals,
sharing those, maybe those aggregated threat patterns that would be
important to organizations that are potentially, you know, maybe a
victim, even notifying victims, even when their data or identity
is actually being weaponized. I think that. That ignoring the
AI in the threat chain would repeat the cloud security
mistakes that we made 10 years ago. I think we
learned very slowly that cloud providers were critical intelligence partners,
and AI is just that next evolution, as we mentioned
before. And I would say that we're not asking AI
vendors to police the Internet. That would be completely unreasonable,
but we're asking them not to ignore the crime scene
happening within their own platform. That makes perfect sense. And
I'm glad this is. You're developing a theme here. Right.
Which I think it makes a lot of sense. There's
so much that we can kind of learn from the
cloud security kind of moment and the cloud security evolution.
Right. That can apply to some of this stuff. And
again, for me, this is a new lens on it
because I never really considered how in many ways, these
it's got some slightly fancier capabilities. Evelyn, I wanted to
ask you about another kind of common response I've seen
to this conversation, which is that there is some concern
that maybe if these AI vendors release too much information
about this kind of stuff, too much about the IOCs
or the prompts or the technical specifics, it would be
almost like handing the threat actors a playbook on how
to jailbreak an AI. Right. And so I'm kind of
wondering, how do you feel about that and that balance
you need to strike between giving away too much but
keeping your collaborators informed. What's your take? I think Austin
and Ryan actually expressed it very, very well here. It's
kind of funny how all of our minds have gone
with this. When I started looking at this, the very
first thought that came in my head is, we need
to establish this similar to the cloud, where we establish
a responsibility matrix. I mean, when we're looking at the
and he was looking at it to give them a
break, I'm like, yeah, I wouldn't have gone that far.
But I think this is going back to kind of
what we said earlier, we really have to take a
step back and there has to be real, true collaboration
between the cybersecurity firms, the AI providers, et cetera. There
are no clear controls, regulations frameworks around this space. I
work within the regular, you know, within the regulatory agency
where I'm always reviewing regulations. It was a little bit
comical to me when I started just looking at the
US and we have 50 states, but when I started
looking at the actual guidelines and regulations and controls that
were coming across just the United States, there were over
335. Some of them were regulations, some of them were
laws, some were the exit were executive orders, some of
them were guidance, which told me that everyone was confused
on exactly what we should and shouldn't be doing and
how to actually structure this. And until we bring all
of those people together to provide the appropriate guidance, then
I think we're going to continue down this path of
the finger pointing. I don't think it's just a scenario
that the AI providers can solve, nor is it just
something that the cybersecurity firms can, can solve on their
own. I think we have to work together. There's no
way for us to really determine how we actually put
the proper security and governance controls in place that are
really going to safeguard enterprises and provide faster mitigation to
make sure that we have and understand the clear rules
on what should be there versus what's not there. And
then if it's exploited, how do we mitigate it quickly
and some of the information, regardless of what it is
and how they were exploited, we need to understand that.
I think not providing the detailed information, when it came
enough information that you would have been able to do
anything with. And so I think that's something that we
have to take a closer look at. I understand you
don't want to give them the keys to the kingdom,
but how do we fix it if we don't understand?
You know, I can't fix something if I don't understand
how it occurred. Absolutely. And I'm glad that you point
out that, that, you know, some of the fog that
it feels like we're dealing with around how we handle
this stuff is just about that lack of collaboration and
coming together and defining these things. Right. I think again,
it's very easy to feel like, oh, this technology is
so new and unprecedented, that these, this confusion is inherent
to it, but it's not really. Right. We have this
again, all three of you have pointed out we have
some precedents for this, especially that cloud model, that cloud
responsibility matrix you were talking about, Evelyn. So like, let's
start with what we know and start applying it to
what we have here. I feel so much better about
this, this now having talked to you three. I gotta
say you, you, you have lightened the burden on me.
So let's go ahead and move on then to the
next story we gotta cover today, which is the 2025
CWE Top 25 Most Dangerous Software Weaknesses. Mitre recently published
its 2025 CWE Top 2025 Most Dangerous Software Weaknesses is,
as you might guess, a list of the software design
and implementation flaws that underlie the most frequently exploited vulnerabilities
in the wild. Some notable bits up top, the top
three were the cross site scripting, SQL injection and cross
site request forgery. Missing authorization jumped up five spots from
last year, so it was number nine. Now it's number
four. Don't like to see something like that, but those
are the two things that stuck out to me. However,
I'm not the security expert here, so I want to
know what sticks out to you folks when you look
at this list. And I'll start with you Evelyn. You
look at this list of flaws, what sticks out to
you? What comes to mind for you? When I looked
at it and the way the article read it led
you to believe that there's been improvement. And then I
kind of chuckled saying, okay, let me take, I can
be cynical from time to time, so let me take
a step back and look at it a little differently.
But the first thing that I noted was when you
look at cross site scripting, SQL injections and the cross
site request forgery, they were still in the top three
positions. They didn't change. But so my initial thought was,
I mean, should I really look at this as it's
been an improvement or because these are the key root
causes of the majority of the exploits and breaches that
we see out there. And then when I started looking
at some of the IAM pieces around the authorization moving
up, I'm like, eh, okay, so can we say that
there's been improvements? Okay, but I feel like that when
it comes to this, the messaging that we really should
be taking away from this is that defenders need to
continue to drive secure by design initiatives. They need to
make sure that they're building actual checklists based on using
this to build out their strategic roadmap and how they're
prioritizing their risk mitigation and reviewing their code to make
sure that they have secure development. So I was a
little bit cynical, I will admit when I looked at
this, because I didn't necessarily, you know, when you looked
at the word dropped, it was a bit subjective to
me, I guess is what I'll say about this one.
No, that makes sense. And I did have a similar
thought. You know, I feel like every time we get
these lists, it's always the cross site scripting, the SQL
injection. It's the that those injection attacks are like always
right there at the top. And, and it, you know,
I kind of wonder why they're so persistent and I
don't know, Austin, I don't mean to put you on
the spot. Do you have any thoughts about why they're
so persistent, why these injection attacks are so popular? So
comm. Any thoughts there? The really striking thing is not
a lot has changed with this list. You know, despite
all the talk around supply chain attacks AI0 days, most
of these breaches that we're seeing all go back to
decades old vulnerabilities. And the reason we're seeing a lot
of cross site scripting, SQL injection is because it's really
easy on the defender side to carry these attacks out.
Now from the defensive side of things, you know, this
goes well beyond patching. As Evelyn pointed out, secure by
design principles really need to be implemented into the software
development life cycle AI. When it comes to AI related
weaknesses and supply chains, those are creeping up the list.
So it is signaling a slight shift here. But when
we look at the top 25 year over year, it's
pretty much the same. It really signals to organizations and
security professionals that this has become a blueprint for adversaries.
should assume that attackers are weaponizing it and going to
leverage this when they carry out attacks against organizations. Absolutely,
absolutely. Ryan, let's bring you in here. You look at
this list, you look at the persistence of these decades
old vulnerabilities. Do you have the same kind of cynicism
that Evelyn has about this sort of thing? Do you
feel like how come we haven't moved? What's your take?
I do, I feel like this is, is PTSD from
the OWASP list. You know, this, this MITRE list is
the really less about what's new and more about what
we still refuse to fix. You know, this isn't a
list of emerging threats. It's that report card essentially of
fundamentals that we get wrong or fundamentals that we haven't
Fixed. I think these are indicative of foundational engineering failures.
This isn't exotic AI age vulnerabilities. This is input, validation
errors, authorization mistakes, insecure object handling, really the boring stuff.
The boring stuff continues to drive real breaches. And attackers
love this because it's predictable. And while, you know, defenders
were over here, maybe partly chasing some AI hype, you
know, our adversaries are exploiting the same top 25 weaknesses
with a 99% success rate. So even with the rise
of AI, it doesn't change the list. It accelerates our
exploitation. It helps attackers discover, chain, and even weaponize those
weaknesses faster than ever. I would say the message for
defenders or even engineering teams, really, if we're not using
this list as essentially like an okr with some type
of measurable improvement in our environment, then we're just simply
guaranteeing attackers an entry point. And to talk about the
injection kind of circling back to the injection attacks, ejection
attacks are still everywhere because they exploit the oldest and
probably the most universal truth in software, where anytime user
controlled input touches any type of sensitive logic, there's a
risk there. And we are still building systems where our
inputs are not sanitized, validated or even isolated. I would
say, I would argue that the reason we're not fixing
it is merely cultural at this point and not technical.
Secure coding is rarely enforced and developers are really under
pressure to ship and deliver features and legacy applications never
get rewritten. I think that is a really a foundational
and fundamental problem. And injection attacks are going to continue
to persist because attackers really, they only need one missed
validation. And in most organizations there's always one. You know
what's funny, and I did not have this thought until
you were talking just then, which is that like, even
when it comes to these new AI attacks that we're
worried about, what's the main one? It's prompt injection. It's
another injection attack. Right. Like, even when it comes to
this new stuff, we're still dealing with this question of
how do you deal with it when that user input
kind of touches, you know, the, the back end, the
data. Right. And, and I just, I don't know what
we do about that. I think we've got, we've heard
some pretty good ideas here. But yeah, I think it's
just, it is a little bit disheartening to see that
we haven't had a ton of movement. I just want
to, and we've kind of touched on this already, but
I just want to let everybody speak on this before
we move on to the next one, which is that,
you know, you look at this list and as Ryan
said, you take it as like a, a set of
OKRs for, for development going forward. What's your kind of
key takeaway then for defenders? Right, Looking at this list,
like, what should they walk away from this being able
to do with it? So it's not just a list
that you look at and you say, ah, geez, this
is a bummer. We're still dealing with the same old
stuff. Austin, let's start with you. What would you say,
how do you turn this into a practical action? Put
it simply here, you know, defenders need to map this
list to their technology stack, focus on automation detection and
prioritizing that education for your engineers and your developers and
really treating this list as a board level strategic risk
tool. That makes perfect sense. Evelyn, how about you? Any
thoughts there? Pretty much agree. I mean, this is, I
don't quite understand why it's not used as a. I
mean, I would build my project plans using this list
of, you know, the checks that we're going to go
through, making sure that we're aligning with secure coding practices,
making sure we're automating the scans, that when we do
our own penetration testing before we release code. I mean,
so to me this is. It should be best practice
by now, but as a part of Secure by Design.
But I don't understand why we're still struggling with this.
Because when I first read this and I looked at
it, I was like, wait, I was looking at this
10 years ago, we still haven't made any changes. And
I mean, cross scripting and prompt injections, all of these
things were the same thing that we were looking at
more than 10 years ago. And especially my core was
in identity and access management. So when I was looking
at some of the authentication issues, I was like, wow.
Yeah, we may have changed the mechanism in which we
authenticate, but we're still singing the same song. It's just
a different choir. So I struggle with understanding why we're
still struggling with this, if you want me to be
honest. No, no, I get that, I get that. And
Ryan, I know I kind of used your answer to
turn it into a question here, but I just wanna,
I'd be remiss if I didn't give you a chance.
Was there anything else you wanted to add on this
is. I would definitely keep it short and sweet, you
know, for organizations and defenders. If we truly want to
reduce breach risk meaningfully in 2026, we have to start
by eliminating this top 25, I think everything else is
merely optimization at this point. Once again, are kind of
gesturing towards the next story already. So I'm going to
roll into it. Is consumer SSO safe life? In a
recent article, ZDNet Senior Contributing Editor David Berlin weighs the
pros and cons of consumer oriented SSO schemes. That's his
name for it. You also hear these things called social
logins, right? It's like when you go to a website
and instead of making your own account, you log in
with Google, you log in with Facebook. We've all seen
these options and, and they're extremely handy, right? It's very
convenient. You don't have to set up a new account.
But that also means you're setting up a kind of
single point of failure, right? If all of your accounts
are tied to the same Gmail account, if someone steals
that account, they can get into all the rest of
those accounts, right? And that reminds me of on last
week's show we talked about this attack where hackers can
take over your Gmail account and lock you out by
people say, look, there is that that single point of
failure. But your Googles, your Facebooks, these platforms, they tend
to have more robust security than some random website. So
like if the choice is between putting your information into
a random website or using a bigger platform, maybe use
the bigger platform. And I get that. But you know,
I think about how we just looked at this list,
right, and saw these, these identity and authentication weaknesses still
being there, still being big ones. And I think about
the fact that like every new X Force Threat Intelligence
index that comes out is like, what's the number one
attack? Attack? It's valid account abuse, right? It's just stealing
credentials and using that. So, you know, given that we
have all of these pressures on identity security at this
moment, I want to your takes on where you land
in this debate. Are social logins safe? And you know
what, we'll start with Ryan on this one. Ryan, what's
your take? You feel like these things are safe? Where
do you land? I don't know if I would use
safe in those exact terms. I would, I would, I
like talking in risk, right? But I would say it's
both safer and riskier. I think it just really shifts
where we are placing the danger, right? So like we're,
if we upgrade our authentication strength, but we're also centralizing
failure into that single identity provider. I would say for
most people, Google or Apple logins is way more secure
than a password that's reused across 30 sites. Right. The
protections are stronger, the fraud detection is better, and MFA
adoption, I would say, is higher. But if we look
at the flip side of that, if something were to
be compromised, that blast radius that we always talk about,
that blast radius is real. If an attacker compromises your
primary identity, they then inherit everything connected to it, it,
your banking, your apps, your cloud data, medical portals, you
know, whatever that might be. Right. And attackers are, you
know, as you mentioned, are increasingly targeting those account linking
and recovery flows. So it's really not the SSO providers
themselves. They go after the back doors, not the front
door, I think. So I guess the real answer is
probably conditional. SSO is safe if you pair it with
strong mfa, good recovery hygiene and visibility into what accounts
are federated and what accounts are not. I would say
consumer SSO is safe when you treat it like a
security control, not just a convenience shortcut. That's an extremely
good point. Right. And I also, I wanted to just
emphasize, as you pointed out, what we often see when
people don't use this stuff is they just use the
same password across a bunch of different websites. And again,
that's how so many attackers get into people's accounts. Right.
You get a leaked credential from somewhere, they'll just go
try it on a whole bunch of websites and see
where it works, and often works way more than it
should. Evelyn, let's bring you into this. You're looking at
this question of are these things safe? Or maybe in
Ryan's word, how do they affect the risk? Where's your
take? Where do you land? I love when Ryan said
convenience shortcut because it really made me think about, you
know, this was a definitely a thought provoking question because
when I stop and take a step back, how many
of us, when we created a social media account, was
really thinking about strong credentials? You know, thinking about, okay,
let me make sure that I'm using a strong password
with mfa, you know, blah, blah, and that I'm not
tying it to something that if it's, if it's, it's
compromised, it could potentially have this massive impact on my
life. So I'll be the first one to say that
when I set up a Facebook account years ago, no,
I was not thinking security. And so I had to.
But I only use that account on just that platform.
So if you, if you steal it, you steal it.
No, but, but in all honesty, being serious about this,
I have to admit this really made me think about
now I have been laxed in getting passkeys set up
everywhere, but it really made me think about, with technology
advancements, the importance of setting up, you know, the passkeys,
of setting, making sure that you have MFA that, I
mean for important websites like my banking, financial, things of
that nature. I don't authenticate using any form of Google
anywhere. But you know, my mind is thinking security. The
average person may not actually be thinking about that. So
this was one that really made me take a step
back and say, okay Evelyn, you know, I don't want
you to be hypocritical here. So think back on every
credential that you set up over the years. Did you
really take the security aspect into, you know, into play?
And where have you not set up passkeys that you
should have set up passkeys? So you did. This did
clean up. And I think everyone has to do that.
I mean single sign on is great when it's used
properly. And I commend, I mean the Google, the Apple
for giving you that additional capability to be able to
authenticate. But Ryan said it best. I mean, if this
one area is exploited, exposed, I mean, if they're able
to breach and steal my credentials, how many other avenues
am I opening them up to? And I think that's
something that end users have to really think about when
they're going through and setting up these single sign on
mechanism mechanisms. Absolutely. And I think that you're right in
that these things aren't necessarily presented to the end user
as a security play, right? It is, it's a convenience
thing, right? Like you're, you're absolutely. Same with me. When
I set up my Facebook years ago, I was not
thinking about security at all. Right. And then I saw,
oh, I can sign into another website with Facebook. Yeah,
why not? No one ever said a word to me
about security. So I do think there's some messaging that
needs to happen there. Austin, your take on the debate,
where do you fall? You mentioned convenience. We always sacrifice
security for convenience, especially for consumers that want that. So
for enterprises relying on consumer single sign on for critical
applications, I think that can be fine. But these applications
need to be low risk as that can improve their
usability. But it certainly shouldn't be the standard. So as
Evelyn pointed out, I think we should integrate pass keys,
integrate that with MFA passkeys, eliminate shared secrets and reduce
the risk of Phishing by using that authentication through cryptography,
which is tied to the user's device. But again, there's
no one size fits all. And I think when it
comes to password security, it's good to have layered defenses
involved. I always tell friends and family about a password
manager. Yeah, you have have one specific password that's tied
to everything, but that's the one password that you need
to protect. And then when you integrate things like MFA
passkeys and SSO into sort of a one, one solution
there, then you are much more secure. And then you
don't run into these issues or those single, single points
of failure. Absolutely. So I think the kind of broad
takeaway here is that, as with so many things in
security world, it's not black and white. It's conditional, to
use Ryan's word. Right, right. It's not necessarily that. Is
the SSO itself secure so much as what are you
putting around that thing and what kind of defense and
depth do you have? And that's the real question, because
if you're only relying on one password for anything, no
matter what it is, SSO or not, it's not good.
That's not a good passkeys. We like pass keys, folks.
Let's move on, though, to our last story because we
are running out of time here. Red Canary researchers outline
a bring your own virtual machine attack. Now, the security
operations firm Red Canary reported on a recent incident they
helped address in which a spam bombing campaign ultimately led
to a malicious virtual machine on the victim system. Real
quick, it went like this. The attack started with a
campaign to flood the victim's inbox with a bunch of
thousands of emails. So they, they. A lot of important
notifications were obscured. And also this put people on high
alert. Next, the attacker calls the victim, says, hey, I'm
technical. I can help you with this problem. The victim,
not thinking clearly because they're being inundated with these messages,
takes them up on the offer, gives them remote access
to their computer using Quick Assist, and at that point,
the attacker drops their own virtual machine in there using
a Visual Basic script. And it gives them this, like,
very strong persistence and control. Now, what's interesting to me
was, you know, I've heard of like, you know, bring
your own driver attacks or whatever before, but this is
the first time I've heard of a bring your own
virtual machine attack. And so I just wanted to start
by asking if this is something that anybody's seen before.
Austin, have you ever heard of something like this before?
Bring your own virtual Machine attack? Not specifically, but what's
so interesting about this form of attack that, you know,
it's not your standardized, say, malware, it's infrastructure within your
infrastructure that can really go undetected. And it's a reminder
that attackers don't need these complex zero days. They just
need some creativity and patience here. A virtual machine can
survive reboots, evade host detection, and even run its own
tool set isolated from a main operating systems. And most
organizations don't have the practices in place to monitor any
hypervisor activity from these virtual machines. So really the lesson
here is to monitor any unexpected behavior or activity from
these virtual machines and to expand that across your endpoint
detection and response. Because by ensuring your endpoint security tools
can detect any of these nested environments. That will really
go a long way. And what's also interesting here is
they can sort of go under the radar. So simply
by analyzing resources within your environments, like different spikes of
memory and cpu, that will sort of indicate for defenders
that maybe there's something more going on here. I like
that. Yeah. And that's a real concrete way to kind
of monitor for some of these, like you said, infrastructure
within your infrastructure attacks, which in a way we're kind
of going for full circle because that's sort of what
some of these AI abuses are. Right. It's sort of
of attackers using your infrastructure against you or setting up
their kind of own infrastructure within your system. So I
like that we're starting to talk more about how you
detect that and not just malware. Ryan, I wanted to
ask you, you know, building on that, you know, we
talked about the vm, the virtual machine, but I also
wanted to ask you about this kind of spam bombing
campaign, because it's not necessarily something I've seen before again.
And I'm wondering if you have any thoughts on, on
that. Have you seen anything like that? This kind of
flooding the victim with this noise to put him on
edge? What's your take there? Yeah, I think each one
of these is kind of situational. And in this particular
one, I think, no, we don't. I guess to answer
that, no, we don't see that all too often, but
I would argue that the surrounding trade craft in this
situation, this email bombing, the social engineering, the remote access
abuse, I would say is classic misdirection. It's creating noise.
So the defender actually never thinks to look for a
hypervisor artifact that's quietly spinning in the background. Right. So
I think that really the attack in general, I think
it forces us really to rethink where the endpoint. I
use endpoint in quotations, where the endpoint actually is. And
if attackers can import their own operating system into your
operating system, then your visibility model isn't just incomplete, it
would be essentially obsolete. And I really, I loved this
article. I think this really is a master class on.
I know we talked about infrastructure, but I kind of
wanted to shift back to the malware, what I call
the four truths of malware. Modern threat actors have what
I call call the four truths of malware, where all
malware must run, hide, communicate and persist. And sometimes our
tools are not designed to see all of this, right?
So in this particular case, the malware must run, right?
So the attacker didn't run code on the host, they
ran it inside of the virtual machine. So that execution
or that running happens in a sandbox the defender actually
never inspects, which entirely bypasses edr. When we talk about
hiding, the VM is, I would argue, is really a
perfect hiding place. Instead of hiding a simple process, they're
actually hiding an entire operating system parallel to the the
host, right? So it's invisible to actually the traditional monitoring
standards that we've come accustomed to to. We talk about
communication inside that virtual machine. That attacker can use their
own tooling at their free will, their own C2 channels,
their own networking stack, right? That host operating system is
creating traffic that just looks like normal virtualization activity or
even benign resource usage. And we talk about the fourth
one is persistence. The persistence here isn't just a registry
key or a scheduled task. The persistence is actually the
VM itself, which I think is really fascinating. And if
the virtual machine actually survives a reboot or a user
login, that attacker really has long term, has a long
term beachhead, really, with no conventional indicators. So I really
think that if all malware must run, hide, communicate and
persist, then it's scary. But virtual machines would be the
ultimate way to do all four without ever touching the
host. I think that fundamentally should change how we think
about our endpoint defense and the telemetry that goes into
the. That absolutely. And, and you know those four truths.
I really like that because it's like, you know, is
it. Is the VM malware or not? It almost doesn't
matter, right? It's accomplishing the same thing, right? It's, it's
doing the same thing. And like you said, it's doing
the same thing, maybe even better than, than traditional malware
does. So at that point, the categorization distinction is like
it's moot all right, it's moot. We're dealing with something
big. Evelyn, to close out the episode. I just wanted
to get your take looking at this sort of report.
What do you think the key takeaways are for defenders
when it comes to the, the rise of these. Not
the rise I shouldn't say because it's the first one
I've seen, but the fact that possibility now these bring
your own virtual machine attacks. What's your, what's your takeaways?
I definitely think that Ryan did an outstanding job breaking
this down and explaining it. The only thing that I
would probably add too, and which was really covered here
was that we have to look at this from, you
know, from a visibility, looking at the virtualization layers where
we have to make sure that we're monitoring any unauthorized
VMs. This was a very unique, interesting case that, that,
I mean, you know, they were definitely thinking outside of
the box. I mean, who would have thought that at
some point that an attacker would have thought, you know
what? I'm not going to go in the normal way.
I'm going to bring my own vm. And so just
thinking about the, the. Well, I, I mean it's like
I feel like they need an award for thinking outside
of the box, but because it's something very, very unique
that I must admit when I read it, I had
to go back and read it again and then start
doing some research on it because I started thinking about
all the risks that are associated with this and how
we will be able to detect it. No, I get
that. And it is, it is, is an amazing bit
of misdirection, I think was the word that Ryan used.
Right? Like this classic misdirection. You do almost want to
give him a reward, but you don't gotta hand it
to him, folks. But that is all the time we
have for today. I want to thank our panelists Austin,
Evelyn and Ryan. Thank you to the viewers and the
listeners. As I mentioned up top, this is the last
regular panel episode of the year, but it's not the
last show of the year. Look out for our special
2025 Year in Review episode next week and the week
after that, we're going to have an in depth interview
with regular panelist Michelle Alvarez where we dive into why
it costs so much to get hacked in America. And
if, if you can't get enough of IBM's podcasts and
who can be sure to subscribe to TechSplainers wherever you
get your podcasts. This is a daily audio only show
where IBM writers give you a crash course in hot
tech topics, including our very own producer, Brian Clark delivering
a five part series on cybersecurity fundamentals. Again, that's TechSplainers
on Apple, Spotify, wherever else you listen to podcasts. And
as always, please subscribe to Security Intelligence wherever podcasts are
found and stay safe out there.