AI Trust and Windows 10 End‑of‑Life
Key Points
- AI is becoming increasingly capable, so organizations must adopt it as a tool while ensuring its trustworthiness, much like hiring an employee you trust to write code.
- The upcoming end‑of‑life for Windows 10 forces individuals and businesses to decide whether to upgrade, extend security updates, or switch to a different OS, each carrying distinct security and continuity risks.
- Planning early for OS transitions is essential; treating the end‑of‑life as a business risk and preparing a migration strategy helps maintain security and operational stability.
- The podcast’s broader agenda includes exploring how AI will transform security operations centers, the reliability of AI agents for fixing vulnerable code, and emerging threats such as “payroll pirate” phishing attacks.
Sections
- AI Trust and Windows 10 Risks - The episode introduction stresses the necessity of trusting AI as a security tool, previews discussions on AI’s transformation of SOCs and automated code fixes, and raises concerns about lingering Windows 10 deployments exposing users and organizations to threats.
- Risk‑Based Strategy for Legacy Systems - The speaker argues that applying a risk‑based approach to decide between patching, extended support, isolation, or extra monitoring is essential, since unsupported Windows 10 machines can become vulnerable “zombie” bots exploited by attackers.
- Evolution of AI in SOC - Sridhar explains how security operations have progressed from manual log analysis to machine‑learning detection and now to autonomous agents, emphasizing AI’s role in boosting detection accuracy, investigation speed, and the need for analysts to focus on high‑value, context‑driven decisions.
- Human Judgment Needed in AI Security - The speakers stress that while AI aids cyber defense, human oversight remains crucial to prevent over‑aggressive automation and to effectively counter long‑lasting attacker dwell times.
- AI Model Poisoning & SOC Realities - The speaker warns that malicious inputs can poison AI models and cautions against over‑relying on AI to replace human security analysts, stressing the need for continuous human‑AI collaboration.
- AI Code Generation Trust Dilemma - The speaker warns that as AI systems increasingly write their own code and evolve autonomously, a lack of transparency and oversight could lead to opaque, untrustworthy software, necessitating a zero‑trust “verify‑then‑trust” approach.
- Automating AI Patch Review Process - The speakers debate how to replace human‑reviewed code patches with automated validation and red‑team testing to keep pace with AI‑generated fixes.
- Recursive AI Oversight and Trust - The speakers discuss the need for layered, AI‑driven oversight, trust, and risk management to prevent backdoors and address the recursive challenges of monitoring AI systems.
- Critiquing University Security Practices - The speaker rebukes an article’s focus on low‑value university payroll attacks, decries the ongoing lack of multifactor authentication, and champions passkeys as a more phishing‑resistant alternative.
- Balancing Education and Technology in Security - The speakers emphasize teaching users through engaging methods while simultaneously evolving security tools—such as email monitoring and rapid response rules—to assume breaches will occur, noting that education can reduce but not eliminate human error amid a constantly shifting attack surface and AI-driven threats.
- University Security vs Open Culture - The speaker argues that universities’ cultural emphasis on free information creates soft phishing targets, contrasting the ideal of open access with funding pressures and advocating stronger safeguards such as multi‑factor authentication.
- Tailoring Cybersecurity Training to Threat Landscape - The speaker emphasizes aligning cybersecurity education with specific threat profiles—such as university HR and payroll units—to ensure training relevance and effectiveness.
Source: https://www.youtube.com/watch?v=sVxSCcwlxus
Duration: 00:46:10
Section Timestamps
- 00:00:00 AI Trust and Windows 10 Risks
- 00:03:06 Risk‑Based Strategy for Legacy Systems
- 00:08:45 Evolution of AI in SOC
- 00:11:55 Human Judgment Needed in AI Security
- 00:15:31 AI Model Poisoning & SOC Realities
- 00:21:38 AI Code Generation Trust Dilemma
- 00:25:42 Automating AI Patch Review Process
- 00:29:04 Recursive AI Oversight and Trust
- 00:33:24 Critiquing University Security Practices
- 00:36:45 Balancing Education and Technology in Security
- 00:41:54 University Security vs Open Culture
- 00:45:04 Tailoring Cybersecurity Training to Threat Landscape
Full Transcript
Don't bet against AI. It's getting better, and I think
we will have to adapt and we'll have to figure
out how to use it as a tool. But we're
going to also have to make sure that it's trustworthy.
Just like you wouldn't hire an employee to start writing
code for you if you didn't trust them. All that
and more on Security Intelligence. Hello, and welcome to Security
Intelligence, IBM's weekly security podcast, where we break down the
most important stories in the field with the help of
our panel of experts. I'm your host, Brian Clark. I'll
be standing in for Matt Kaczynski this week. Joining me
today are three returning panelists: Michelle Alvarez, Manager, X-Force
Strategic Threat Analysis; Sridhar Muppidi, IBM Fellow and CTO, IBM Security;
and Jeff Crume, Distinguished Engineer and Master Inventor, AI and Data
Security. All right, here's what we're talking about this week.
How will AI transform the SOC? Can we trust AI
agents to fix vulnerable code? And payroll pirates are sailing
the high seas. But first, RIP Windows 10. Hundreds of
millions of people still use Windows 10, and many PCs
don't meet Microsoft's strict hardware requirements for Windows 11. This
raises the question: should I spend the money on a
new PC or save a couple of bucks in the
short run, but remain vulnerable to attacks? So the first
question we'll start with is what kinds of security issues
might this introduce for people and organizations? Let's go to
Michelle. I'd love to hear your thoughts on this one.
Yeah. Thank you, Brian. So hopefully last week wasn't the
first time that organizations and individuals heard about the end
of life. I think the announcement originally came out maybe
over a year, year and a half ago. So hopefully
organizations have been putting steps in place to prepare for
this end of life. So I like to liken it
to something that everybody can relate to. And
that's our cars, right? We have warranties on our cars
that eventually come to end of life. So you have
a choice. Do you extend the warranty that would be
the same as extended security updates offered by Microsoft? Do
you maybe trade in your vehicle for a newer model
that could be Windows 11? Or are you contemplating purchasing
a whole new car, different make and model, and that
would be alternate operating systems. So I think organizations have
a lot of things to consider when an operating system
comes to end of life. But the important thing is to make
sure that you're setting up for success, preparing early so
that business continuity is there when the platform becomes end
of life. Exactly. Exactly. So there's a lot of different
choices, but the main thing to take away is you
need to do something. Right. Or you could do nothing.
There's always that choice, do nothing. Right. That might not
be the best choice, but that is a choice. Right.
I think it's probably best to start thinking about it as a
business risk. Right. I mean, to your point, Michelle. Right.
It's about business. In some cases you may have to
do something. In other cases, probably it's okay to patch
it or buy extended support or in some cases probably
circumvent it with additional controls like network isolation or extra
monitoring. But I think if you start looking at it
with a risk based approach, I think the choice becomes
really clear on what the options are and how do
we go about it. I think it's appropriate. Yeah, just
in time for Halloween. Now we get all these zombies.
So that's what essentially these Windows 10 systems are going
to be. They're going to be the living dead that
are out there. They're still able to operate and do
things, but in fact they're not up to date. They're
going to be vulnerable, they're going to represent a threat
to us. And they in fact could be leveraged to
become what we refer to as zombies or botnets, where,
you know, somebody takes remote control of a system and
then uses it to attack others, to do denial of
service, to do other things like that. So we just
now, you know, the bad guys just got a lot
of fertile ground handed to them because whenever there are
new vulnerabilities that come out, those will be known publicly,
but there won't be patches to fix them. So in
a sense it'll be like zero day every day for
all of these systems. The unfortunate thing I think about
it is that the people that are least likely to
have the hardware capable to do the upgrade and least
motivated to do it will be the ones that will
be taken advantage of in this. They're not going to
know that they need to do it. In many cases
it's going to be someone that's the family PC and
they bought it and they don't want to upgrade every,
you know, year or two. It still works from their
perspective. So again, like Michelle's analogy with the car, not
everybody wants to buy a new car every couple of
years as long as it still works. So it's, it's
going to be a big issue. But I think organizations
and to a great extent even just individuals have to
start planning for obsolescence because the vendors are doing that,
you know, their intention is of course to keep
adding new features. And new features will require additional hardware,
additional capability. It's a big cost for Microsoft to continue
supporting down level software. At what point do they cut
it off? Are they supposed to still support Windows 3.1,
Windows 95? I mean at some point they say no,
but this does seem like this is cutting
off a lot of folks. There are estimates of 200 million
computers. That's a lot. That's a lot of potential zombies
that are about to come alive here in the next
few weeks and years to come. Yeah, that's a great
point. I think I even noticed a startling fact. I'm
not sure the validity of it, but 9% of Windows
users are still on Windows 7. This is not a
new topic. Right. Systems have gone out of support for
many, many decades. What we've seen is that there's a
90 day window period where attackers are stockpiling exploits for
this day. Right. Because not everybody upgrades right away. To
Michelle's analogy: sure, let me drive the additional two miles
before I do something about it. And that's a
window that attackers try to take advantage of to be
able to launch some attacks. So something to get started
right away as opposed to waiting. Yeah. I think
most normal people hate software upgrades. I don't put myself
in the category of normal. I'm always enjoying the new
beta. I want the newest features, the newest whatever. But
that's not normal. Most people in my family, when new
software comes out for their phones, they're the last ones
to update. They're years behind. And then when they come
and ask me, the family tech support guy to fix
the thing, I look and see what level of whatever,
you know, mobile operating system they're on, and it makes my
eyes pop out. But there's just, that's not what most
people want to do. Which means this stuff has to
be automated and it has to be easy, it has
to be frictionless. And it's just not when it comes
to these cases. And if you look at from the
economic standpoint, this is what economists would call an externality
for a software vendor. This is not something that benefits
them directly. It's something that, you know, why would they
want to upgrade all these old systems? It's harder for
them. It costs them more to support multiple levels of
operating system. They'd rather spend their time developing new features
and things like that. So there's a tension and it
won't just resolve itself naturally. Yeah, absolutely. And I think
when we talk about these topics there's two things we
can look at, right? We often speak in terms of
organizations and enterprises because at least for me, that's who
I'm often speaking to, right? Our clients. But then, you
know, Jeff mentioned and brought it really personal, right? Our
family members, our friends. And I think this is where
education comes into play and doing things like these podcasts
where we can talk directly to just individuals about your
own personal security hygiene and how important it is. Because
they might see then the benefits to doing the extended
warranty or, you know, upgrading to Windows 11. They can
now put it into context. Okay, this is significant, this
is serious, and this is why. Move along to our
next topic. How will AI transform the SOC? So more
and more organizations are introducing AI agents for the SOC.
IBM has ATOM, and Blumira just dropped one last week
called SOC Autofocus. Let's talk about more AI in the
SOC. Sridhar, why don't we move to you for the
first question. What is AI doing in the SOC now
and what might it do in the future? If you
look at the progression of the SOC, right? I mean,
we've been here for a number of years. We started
looking at manual logs, then SIEM rules, then machine learning
detection, and now we are getting into autonomous
agents. So it's a progression, the way I look at
it, in how SOC analytics are handling
the number of events that we're seeing every day.
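The progression described above, from static SIEM-style rules to learned baselines, can be sketched in a toy comparison. The thresholds and data here are invented for illustration, not any vendor's detection logic:

```python
# Toy sketch (not any vendor's product) of the progression from a static
# SIEM-style rule to a statistical, learned-baseline detection.
from statistics import mean, stdev

def siem_rule(failed_logins: int, threshold: int = 10) -> bool:
    """Classic static rule: alert only when a fixed threshold is crossed."""
    return failed_logins > threshold

def anomaly_score(today: int, baseline: list[int]) -> float:
    """Learned-baseline approach: score today against historical behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (today - mu) / sigma if sigma else 0.0

history = [3, 5, 4, 6, 5, 4, 5]   # failed logins per day over the past week
print(siem_rule(8))               # False: 8 never trips the static threshold
print(anomaly_score(8, history))  # ~3.5: far outside this host's baseline
```

The same event count sails under the static rule but stands out against the host's own baseline, which is the shift machine learning brought to detection.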
What AI helps us with, first of
all, is improving the accuracy
of detection, right, and increasing the speed of investigation so
they can respond faster. Now what we have to think
about is, with these agents like ATOM and Blumira and
other things, we go into the next level where some
of the detection is being done by the agents. So
the analysts have to move to the higher ground of
being able to either validate those or be able to
go and look at the events and incidents, which are
high value, where we may have to not only rely
on the speed and the efficiency of the AI, but
also couple that with the human ingenuity, creativity, the business
context to be able to make something relevant or not
relevant and take action quickly. I'll just say I think
this is necessary. This is a necessary evolution. The number
of events, the amount of data that's having to be
processed, the amount of time that we have to do
the analysis is not working in the favor of the
good guys. So we're going to need all the help
we can get. The bad guys have AI, the good
guys are going to have to use it better. So
this is not a surprise. And like Sridhar said, we've
been working in this space, developing tools in this space,
but it's also an area that's new, and we need
to go in with caution. So it's great. If you
have a system that can go do all of this
analysis and do this research and take what would have
been hours down into minutes, that's great. So now you've
basically got a security analyst that's working 24/7 that's always
up to date on the latest and greatest and can
advise you. That's all great as long as it's not
hallucinating. So we don't need a security analyst on LSD.
It needs to be grounded in truth. It needs to
be when it's seeing something, it needs to be real.
And we also have to realize that attackers will find
ways to inject information into our systems that will be
designed to confuse the AI systems that we're relying on
to do the diagnosis. That'll be really clever stuff. So
the arms race will continue in both of these cases.
And I think as much as the
temptation will be to say AI is doing such a
great job with this, let's just let it run, there are things
where we need to keep the human in the loop
when it comes to what systems do we shut down,
what things do we block, these kinds of
things? Otherwise, it's really easy for us to basically automate
a denial of service against ourselves. If we build a
system that says, every time I see an attack on
this port, shut down that port, well, then the bad
guy just has to attack you once on every port,
and you'll shut everything down for them. They don't have
to do it. So we've got to be intelligent with
this. And the AI is intelligent to an extent, but
there's judgment, and human judgment's still going to be needed
in these cases. Right, right. It's not, you know,
a full solve for all the problems in the SOC.
It's a tool and needs to be used as such.
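The over-aggressive automation Jeff warns about, auto-blocking every attacked port, can be illustrated with a toy sketch. The port list and action budget are hypothetical, not a real SOAR rule:

```python
# Hypothetical sketch of automating "a denial of service against ourselves":
# a naive auto-block rule versus one with an action budget and human review.
class ResponseEngine:
    def __init__(self, auto_block_budget: int = 3):
        self.blocked_ports: set[int] = set()
        self.pending_review: list[int] = []
        self.budget = auto_block_budget

    def naive_on_attack(self, port: int) -> None:
        # "Every time I see an attack on this port, shut down that port."
        self.blocked_ports.add(port)

    def guarded_on_attack(self, port: int) -> None:
        # Cap autonomous blocks; beyond the budget, escalate to a human.
        if len(self.blocked_ports) < self.budget:
            self.blocked_ports.add(port)
        else:
            self.pending_review.append(port)

naive, guarded = ResponseEngine(), ResponseEngine()
for port in (22, 80, 443, 3389, 8080):   # attacker probes each port once
    naive.naive_on_attack(port)
    guarded.guarded_on_attack(port)

print(len(naive.blocked_ports))   # 5: every service is now down
print(len(guarded.blocked_ports), len(guarded.pending_review))  # 3 2
```

One probe per port is all it takes to make the naive rule shut everything down; the guarded version limits the blast radius and keeps the remaining decisions with a human.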
Right? Yeah. I'm going to jump in on one thing
on the defender side, not just on the responder side, because
oftentimes what we're seeing are attackers dwelling in environments for
long periods of time. And I'm actually going to pull
in a relevant statistic from the cost of a data
breach, which I know my fellow panelists are very familiar
with. It's a study where IBM does analysis on data
and insights pulled from the Ponemon Institute. And this year's
report has a lot of great information, especially when it
comes to the use of AI in environments. And so
security teams using AI and automation extensively, that's another keyword,
extensively shortened their breach lifecycles by 80 days and lowered
their average breach cost by $1.9 million. And that's
compared to organizations that did not use AI at all.
So it's all about containing the incident. Well, first identifying
the incident. Right. And then containing and subsequently remediating the
incident. And as much as we can shorten that and
with AI as a tool in our belts, not to,
you know, completely erase the human oversight, but as a
tool to speed up that process. And that attacker
life cycle gets shorter and shorter. That's to our benefit.
Yeah, that's a great point. I think that the trend
that I'm seeing with all three of you is keeping
that human in the loop, not just removing the human
aspect from it. I read a bit about the AI
saving a lot of time and like you mentioned, Michelle,
80 days, so that's a lot less combing through the
logs. But we still need somebody there verifying anything
that the AI agent is finding. We touched a little
bit on concerns about introducing AI to the
SOC. I can already guess that there are some among
all three of you. Jeff, do you want to elaborate
a little bit more about some of the concerns that
you were mentioning? Yeah, so one concern, as
I mentioned, is someone being able
to inject information into the system. There's one type of
attack called evasion, where you're able to manipulate the inputs
in a way so that it doesn't confuse you or
me. Because we perceive things through certain processes, but AI
doesn't use exactly those same processes. So it might see
this as something completely different. And you and I would
filter some of that information out. So that's an example.
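A toy illustration of that evasion idea: an input that reads the same to a human but slips past a naive detector. The filter and phishing string below are invented for illustration, not a real product's logic:

```python
# Toy evasion example: a payload that reads unchanged to a human but slips
# past a naive keyword filter (the filter and payload are invented here).
ZWSP = "\u200b"  # zero-width space: invisible when the text is rendered

def naive_detector(text: str) -> bool:
    return "reset your password" in text.lower()

def normalizing_detector(text: str) -> bool:
    # Strip invisible characters before matching, as humans implicitly do.
    return naive_detector(text.replace(ZWSP, ""))

phish = "Urgent: reset your password now"
evaded = phish.replace("password", "pass" + ZWSP + "word")

print(naive_detector(phish))          # True
print(naive_detector(evaded))         # False: the evasion succeeds
print(normalizing_detector(evaded))   # True: normalization closes the gap
```

The human eye and the detector "perceive through different processes," which is exactly the gap an evasion attack exploits.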
If these systems are staying up to date, which we
hope they will, then they're going to be scanning the
Internet, they're going to be using other inputs and feeds
like that that are coming in. So again, an attacker
might put out say a new document that's a fake
document that talks about a new type of attack and
inside the document is an indirect prompt injection. And when
the AI ingests it, it starts reading instructions that cause it to
start exfiltrating data or doing other kinds of things like that.
You could plant things in information that later gets sucked
into models and then it stays in the model for
a long time until somebody realizes, oh, this model's been
poisoned. What are we going to do about that? So
those are just a few examples, but there's a lot,
and I think for a lot of organizations the temptation will
be, oh, we can save a lot of money, we
can just bring in this AI and we can start
laying people off. We don't need people because we've got
AI. And I'm just going to say resist that temptation.
There's always going to be enough work in the SOC,
I think, at least for the foreseeable future, regardless of
how good our tools get. I don't think we ever
get to a point where we say, okay, you know
what, everybody just take the week off, we're ahead. This
is a line of work you never get ahead in.
So you're always going to need to be focusing, using
human intelligence, augmenting it with AI just in order to
keep your head above water. That's not even getting ahead.
One thing I want to mention is I know we've
been heavily focused on the human and AI partnership, right?
The one thing
that I want to highlight is that even on the AI
side, right, we need to think about explainability.
The agent can't just say block this IP address; it should say block this
IP address because of this TTP, building on Jeff's
example, or this threat intel or this behavior pattern, so
that you can make an informed decision and not be
subject to some of the prompt injection attacks in between,
right? So yes, absolutely important for human and AI and
agent partnership, or what the industry is calling digital workers.
But at the same time the onus is on the
agents to ensure that they are not autonomously making decisions
without validation, not subject to malicious coercion by attackers,
and not drifting with time. AI, and agents specifically, are non-deterministic
and carry uncertainty. So it's possible, and Jeff was alluding
to some of the examples of evasion. There's poisoning, there's
model stealing. It's like the game of 20 questions that we play in
the car: you can ask the 20 questions of the
AI and figure out what the model is doing, and
then be able to make it drift. Right? So these
are the things that we should inherently protect the AI
in addition to humans. That's a great point. Michelle, any
final thoughts on this topic? What Sridhar mentioned about the
human AI connection or partnership I think really resonates with
a lot of security teams that are sort of trying
to find that balance between what does AI do versus
what does the human do. And we still need the
level 2 and level 3 analysts and above to be
able to learn from somewhere. So completely cutting
out that first level I don't think is going
to be ideal. I think we need to figure out
a way to marry those two things
and build that partnership, where we're able to develop our
human analysts to level 2, level 3 and beyond and not cut out that
first layer, which is so important. Where do you build
your skills from? It's from that entry level position as
well. Moving on, our next topic for today involves
Google introducing CodeMender, an AI agent for code security. Earlier
this month, Google introduced the agent, designed to find and
fix flaws in code. So our first question is, are
we ready to trust AI with that job? Jeff, let's
start it off with you. So this I think is
another necessary and foreseeable step in evolution that we're going
along here. AI, we've been using AI for a number
of years, machine learning and other types of technologies to
do identification of vulnerabilities in code, to scan source code,
to try to hack away at systems and things like
that. So that's not surprising and even have it suggest
replacements. Here I found a vulnerability. Here's a code snippet
that I recommend. This is the next step in that
where it's actually doing the repair, it's actually injecting the
new code and changing it around. And here's where,
I'm not going to say we shouldn't do it, I'm
going to say we should tap the brakes or at
least make sure we have good oversight. This is going
to be back to the human in the loop kind
of comment again. These
AIs are based on models. And again, if somebody got
in and was able to poison a model and make
it so that you now have a CodeMender, a piece
of code that's going to go fix your code, then
what if it injects what is a backdoor into your
system? If I'm blindly trusting that it found the vulnerability
and fixed it because in fact probably 90% of the
time it'll do great. And then there'll be these few
edge cases where, who knows, there's a backdoor, there's
a Trojan horse, there's some other unexpected behavior that might
happen from this. Or in some cases, again, we have
hallucination issues where it might not be intentional, it just
might be the nature of the situation of the system.
You know, foundation models, large language models, these kinds of
things. We still have not solved hallucinations, we've gotten them
better. But I don't want, just like I didn't want
a SOC analyst on LSD, I don't want a coder
on LSD either, writing code and putting that into my
system. Yeah, yeah, I'm against that. Let's just say in
this context. So I'm definitely not for it. So what
are we going to. So there needs to be some
oversight. Who's going to be watching this system? And the
longer term effect is the part that actually is more
troubling to me if we don't get it right. Because
this can be a double edged sword. It can help
us, it can hurt us. So as it gets better
and better, as I expect it will, we'll become more
and more dependent and then will we have enough people
that understand how to read this stuff? If I have
an AI system that is writing code, it'll be writing
the code for the next generation AI system, which then
will create the code for the next generation AI system.
These AI systems will be creating their successive generations, which
is great, but eventually it's going to start writing code
that might not make sense to us anymore. And now
we're having to trust a system that's very opaque and
we're going to use it to do its own debugging.
That's a lot of trust. So I would just say
we're going to need to adopt the kind of zero
trust mentality of verify, then trust, not trust and then
verify. Jeff is stealing the calendar that's in my head.
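The "verify, then trust" mentality for AI-generated patches might be sketched as a gate that grants trust only after every check passes. The check names below are hypothetical, not Google's actual review pipeline:

```python
# Minimal "verify, then trust" gate for AI-generated patches. The check
# names are hypothetical; a real pipeline would run actual test suites,
# static analysis, and red-team probes before any patch is merged.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PatchGate:
    checks: list[tuple[str, Callable[[str], bool]]] = field(default_factory=list)

    def add_check(self, name: str, fn: Callable[[str], bool]) -> None:
        self.checks.append((name, fn))

    def verify(self, patch: str) -> tuple[bool, list[str]]:
        """Grant trust only after every registered check passes."""
        failures = [name for name, fn in self.checks if not fn(patch)]
        return (not failures, failures)

gate = PatchGate()
gate.add_check("tests_pass", lambda p: "breaks_tests" not in p)
gate.add_check("no_new_network_calls", lambda p: "socket.connect" not in p)

ok, failures = gate.verify("fix buffer length check")
bad, why = gate.verify("fix plus socket.connect backdoor")
print(ok, failures)   # True []
print(bad, why)       # False ['no_new_network_calls']
```

The point of the design is the default: nothing is merged until verification succeeds, which is the reverse of "trust and then verify."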
So I actually. Oh, I'm sorry, I'm sorry. I hear
it. It's great. I'm glad we're thinking alike. But
I thought to myself, are we going from 0 trust
to 100 trust just because AI is involved? I don't
think so. Right. And I also thought if I could
phone a friend during this podcast, I would call our
X Force Red team, which is our pen testers, our
hackers, to kind of get their POV on this because
I'd be very interested. So I had to kind of
put this in perspective of like my wheelhouse and threat
intelligence. And when we do, for instance, a report for
a Client looking at their threat landscape based on their
industry, their geography. And what is our, I guess, risk
tolerance for putting out a report that maybe has a
missing Oxford comma. Right. There are a lot of people that
love their Oxford commas. And what if this report, because
AI looked at it or produced part of it, missed
a comma, but what if it made the wrong attribution
to the most likely threat actors targeting this organization? That's
a huge mistake. So I think again, what is
the risk tolerance? And I guess the theme for this
podcast is yes to AI, let's make sure there's human
oversight. We're not obsolete yet. I hope not. My pickleball
career has not taken off yet, so I don't know.
Okay. Okay, well, AI will free you up to play
more pickleball. Yeah, that's the hope. Right. I actually welcome,
like Jeff said. Right. I welcome the automation in
patch management. And not just automation, but being able
to go and scan with dynamic testing, static testing, and
eventually, I'm assuming, some blue teaming
and red teaming, or vice versa. Right. Purple
teaming concepts over here. I think both my colleagues said
things around human in the loop verification and all of
that is great. Right. A couple of things I would
mention: it's not just about the AI, but it's
also moving to trusting the processes. Do we have the
right processes in place so that we can do the
right level of validation? Because I think, Jeff, you were mentioning there's going to be code that we're not going to be able to look at at every granular level; there are going to be millions of lines of code. Even with what Google has claimed, I believe they've done close to what, 72 or around 75 fixes in six months across millions of lines of code. Right. Nobody's reading
that. But what's the process? What's the process to review?
What's the process to test it? What's the automation that
we put in there so that we can then go
validate that not by a human just clicking yes or
no, but having an automated process in place to match
the rate and pace of AI based patch management? That's
a great point and we touched on this a bit.
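The kind of automated validation Sridhar calls for, a process that checks an AI-generated patch rather than a human clicking yes or no, could be sketched roughly like this. This is an illustrative sketch only: the `Patch` fields and check names are hypothetical stand-ins for a real test suite, static scanner, and rollback tooling.

```python
# Minimal "patch gate" sketch: accept an AI-generated fix only when every
# automated check passes. All fields and checks here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Patch:
    diff: str
    tests_passed: bool          # result of running the full test suite
    static_scan_clean: bool     # result of static analysis on the patched code
    rollback_available: bool    # can we revert automatically if it misbehaves?
    findings: list = field(default_factory=list)

def validate_patch(patch: Patch) -> bool:
    """Run every automated check; record why a patch was rejected."""
    checks = [
        (patch.tests_passed, "test suite failed"),
        (patch.static_scan_clean, "static analysis flagged the diff"),
        (patch.rollback_available, "no automated rollback path"),
    ]
    for ok, reason in checks:
        if not ok:
            patch.findings.append(reason)
    return not patch.findings

good = Patch("fix buffer overflow", True, True, True)
bad = Patch("fix auth check", False, True, False)
assert validate_patch(good) is True
assert validate_patch(bad) is False and len(bad.findings) == 2
```

The point of the sketch is the shape, not the specific checks: the gate runs at machine speed, and the rejection reasons give a human something far smaller than millions of lines to review.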
But Google states that right now all the patches by CodeMender are reviewed by researchers before they're put into place.
How long do we think before researchers are no longer
taking a look at these and the AI system is
able to just go ahead and put the patches in place and move along without anybody reviewing them? Do we
have any estimates of how far off we might be?
I don't think anybody has looked at every line of
2.5 million lines of code. Right. There's got to be
a mechanism by which we have to evolve. Right. Humans put some automated ways of validation in place, for example. Right. And
we were talking about, Jeff was talking about poisoning and
evasion in the last topic. Let's pull on that thread. If AI goes rogue, we need a mechanism, to Michelle's point, to automate the red teaming: to create an agent which is a red team that can go and launch automated testing of the code at the entire system level, not just at the AI, not just at the MCP, not just at the server, but the entire end to end. Right. And once you do that, then
I foresee an automated blue agent which is able to go, based on what we found, automatically fix those, and then do it in a continuous
manner. How far are we from that? I think it's going to take some research, some validation and, more importantly, trust. I
think it actually could be done now, but like Sridhar
said, it's not ready for prime time. There are too
many gaps, too many things that we don't understand, too
many things that could be introduced that I wouldn't feel
comfortable with it. But here's the thing. Whenever people make
predictions about, well, AI is never going to do this,
it's doing this, but it'll never do that. I recently
did a video on this on the IBM Technology channel.
My take on that is don't bet against AI because
most of the things that people have predicted it was
never going to do, it's done. I'm not saying it'll
do everything and I'm not saying it'll do everything perfectly
and it certainly can't today, but it's getting better and
I think we will have to adapt and we'll have
to figure out how to use it as a tool,
but we're going to also have to make sure that
it's trustworthy. Just like you wouldn't hire an employee to
start writing code for you if you didn't trust them.
You'd need some reason to trust them and you might
want some oversight, you know, some way to test and
make sure that what they're introducing is not backdoors and
stuff like that. We're going to need the same capability.
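The continuous red-agent/blue-agent loop Sridhar foresees, one agent probing the end-to-end stack while another fixes and re-verifies, might look schematically like this. All the agent interfaces below are invented placeholders for illustration, not a real product; real agents would drive actual scanners and patch tooling.

```python
# Rough sketch of a continuous red-team/blue-team agent loop over a whole
# stack (AI model, MCP layer, server), not just the model. Illustrative only.

class RedAgent:
    """Probes the system and returns the components it could break into."""
    def probe(self, system: dict) -> list:
        return [name for name, hardened in system.items() if not hardened]

class BlueAgent:
    """Fixes what the red agent found; the loop then re-verifies."""
    def remediate(self, system: dict, findings: list) -> None:
        for name in findings:
            system[name] = True  # stand-in for applying a validated fix

def continuous_loop(system: dict, max_rounds: int = 5) -> int:
    red, blue = RedAgent(), BlueAgent()
    for round_no in range(1, max_rounds + 1):
        findings = red.probe(system)
        if not findings:
            return round_no  # a clean round: nothing left to exploit
        blue.remediate(system, findings)
    return max_rounds

# End-to-end scope: every layer gets probed, not just the AI.
stack = {"ai_model": False, "mcp_layer": False, "server": True}
continuous_loop(stack)
assert all(stack.values())
```

The recursion Jeff describes, AI testing AI, shows up here as the loop itself: the blue agent's fixes are only trusted once the red agent fails to break them again.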
My guess is we'll use AI to test AI, we'll
use another AI to do oversight, and then we have
to wonder who's watching the watchers. So this whole thing
becomes a nested problem that recurses on itself. But that's
where I think we're moving. Hope for the best, but
prepare for the worst with AI. Michelle, anything to add?
No, I just thought to myself, can AI be bribed? Because then we're in trouble. Yeah, right, sure. Yeah. That might be an interesting topic for one of our future podcasts. Yeah, that's right. Absolutely. We've been spending a
lot of time on trust in this, in this podcast.
Right. I think we, we do need to step back
and think about it as risk management. I think Jeff
said, right, don't shy away because it's not ready and
trust is going to come over time. It's something that is earned, not a checkbox that we have to tick. Right. So we all need to think about
risk management. Right. Whether it's a previous topic or this
topic or there's going to be concerns. And I think
both, Jeff as well as Michelle said that the patches
will introduce more patches because of the flaws they will
create. And we talk about red team and blue team,
But I think we need to, number one, think about
how do we evolve the AI, which is things like
root cause analysis, fix-rationale validation. That's all technology that
has to evolve. And then meanwhile, there's a human side
of it which also has to evolve. Not just going
and rubber stamping 2000 or 2 million lines of code
per se, but how do you go and evolve the
human interaction such that you're doing not only automated testing,
code review, but also runtime monitoring and rollback? Because what
happens if AI drifts and you realize that it was
okay, but it's not okay tomorrow morning? Right. We got
to be able to roll back. Right. So, so I
talk, I think about it as risk management. Finally, our
last topic of the day: payroll pirates target employees' salaries
in a new social engineering campaign. Scammers impersonate employees to
trick HR into routing their salaries into accounts that the
scammers control. Since this scam works so well, especially in
HR departments, what does it say about the state of
enterprise security? Michelle, do you want to lead us off
with this one? Yeah. So essentially in this campaign, it's actually an attack that we've seen for some time now, called an adversary-in-the-middle attack. And what's happening now is that attackers are
able to sort of get around some of the multi-factor authentication methods that are in place. And what we're
emphasizing is, and I'm glad Jeff Crume is on the
call because I'm going to direct to him next, we're
encouraging phishing-resistant MFA methodologies and authentication mechanisms. And so
I was so excited to see that Jeff was joining
the panel because I was like, this is a great
topic. We could just kind of bring up his
latest video on this and spend the next probably half
an hour, 40 minutes just on this topic alone. But
this is definitely something that we have seen increase as
far as incidents and our incident response engagements where attackers
are using this type of attack to sort of circumvent,
of course, risk mitigations that are in place. And this
is, you know, an ongoing battle between defender and attacker
where we're putting things in place and they're finding ways
around it. But never fear, there's something better that we
can be doing to protect our organizations. Jeff, why don't
you take it from here? It seems like a shameless video plug here. Yeah, yeah, exactly, sure. I don't
need to. I paid Michelle in advance to already plug
that video. There we go. You're awesome. Yeah, thank you.
Thank you. Check's in the mail. The thing I thought
was interesting about this particular article is it said that
they focused a lot on university environments. And I thought,
if you're an attacker. Look, I'm an adjunct professor at
NC State University and I do it for the love,
not for the money, because there's not a ton of
money in it. Right. And I would just think if
I was an attacker and I was trying to siphon
off payroll, I'd go somewhere else where there was higher
value targets. At least adjuncts are not being paid a
ton of money. So that was the first thing. The other thing I thought about: there is a
lot of discussion about multi-factor authentication. And it seems
in a lot of these cases the MFA was not
implemented. And my reaction to that is, did I just sleep through it and we're still in 2005? I thought we were in 2025. What are we still doing with systems that have any importance at all that don't have some form of multi-factor authentication? To me, there's no excuse for that anymore. We've had years and
years. The technology is mature. There's no reason for any
system of any real value not to have it. And
then, as Michelle said, I'm a big advocate of this
technology called passkeys as a replacement for passwords because they're
essentially phishing-resistant. I'm never going to say it eliminates
any attack, but it does a pretty darn good job
of reducing that likelihood. With these passkeys, there's a lot of misunderstanding, because the first generation of passkeys required a special device; now that special device can in fact just be your mobile phone. So the friction on passkeys has come way down. Look, the number one thing that phishing attacks
are going for is your password. Here's one way to
make sure they can't steal your password. Don't have one.
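One way to see why a passkey resists phishing, as Jeff describes it: the credential is bound to the site it was registered with, so a look-alike phishing domain has nothing to exercise. Here is a toy sketch of that origin binding, using an HMAC secret as a simplified stand-in for the asymmetric WebAuthn credential a real passkey uses; the class and method names are invented for illustration.

```python
# Toy illustration of origin-bound credentials (NOT real WebAuthn: passkeys
# use per-site asymmetric key pairs; an HMAC secret stands in here).

import hashlib
import hmac
import secrets

class Authenticator:
    """Stores one secret per (origin, user) and only signs for that origin."""
    def __init__(self):
        self._keys = {}  # (origin, user) -> secret bytes

    def register(self, origin: str, user: str) -> None:
        self._keys[(origin, user)] = secrets.token_bytes(32)

    def sign(self, origin: str, user: str, challenge: bytes):
        key = self._keys.get((origin, user))
        if key is None:
            return None  # no credential for this origin: nothing to phish
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

auth = Authenticator()
auth.register("bank.example", "alice")

challenge = secrets.token_bytes(16)
# The real site gets a valid assertion for its challenge...
assert auth.sign("bank.example", "alice", challenge) is not None
# ...while a look-alike phishing origin gets nothing usable.
assert auth.sign("bank-example.evil", "alice", challenge) is None
```

In real WebAuthn, the browser enforces the origin check and the authenticator signs with a per-site private key, but the effect is the same as the point Jeff makes: there is no shared password for the attacker to steal.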
So have a passkey which is not easily stolen. And
we know how to deal with this. We haven't solved
the porch pirate problem, but we can solve the payroll
pirate problem. I love the fact that both of you
are advertising my product, but unfortunately, this is not a technical issue. The technical issue
has been solved. I think there's multiple ways you can
solve this technical issue. It's purely an adoption issue. It's
a social engineering issue. We've all been here for a
number of years and attacks continue to get sophisticated and
humans always remain the weakest link. I think that's what we should be focusing on: phishing
resistance has been there for a while and I'm seeing
organizations talk about the adoption rate of less than 10%.
Right? That's my worry. That's my worry: how do we think about improving the adoption rate over here, and how do we make the adoption of these things more effective? Right? So I kind
of look at it at two aspects, Jeff and Michelle.
Right. One aspect is: fine, the technology is there. It's not
a new technology innovation over here. But number one, how
do we teach that? How do we teach that? How
do we gamify it, how do we make sure that
it is well trusted, et cetera. And the second thing
is, we hear you, Jeff: MFA is not a new topic. Like you said, it's been 10, 15 years, and adoption is
still low. So by this time we have to accept
to a certain extent some defeat and agree that we
need to evolve the technology to go really assume that
these bad things are going to happen. So things like
phishing attacks happen because of email. Do a better job of monitoring email, be able to go and put some rules in place and monitor those, be able to quickly monitor some of the HR systems so that we are able
to go and take a quick action. So I'm talking
about both education as well as evolution of technology to
assume that bad things are going to happen. Yeah, that's
a great point, and I'm glad you brought that up
with what Michelle said earlier about education. We can't remove
every bit of human error, but we can at least
lower that level of human error to the point where
it's not as destructive to our organization. We absolutely should
educate users. I think there's a limit to how much
we're going to be able to educate them because the
playing field, the attack surface, keeps changing every day and
they can't possibly keep up. AI is going to be
one of those things that changes it. We've seen from
the cost of a data breach report that Michelle mentioned
earlier, the last few years, the number one cause of
a data breach has been phishing attacks. Well, if we
keep having the same thing year after year, then that
means whatever we're doing is not working well enough. Maybe
it's working to some degree, but it's not working well
enough. We haven't slayed that dragon yet, and we should
be able to. We have technology that can help that's
not being deployed widely enough. We can train the heck
out of users, but, you know, for years and years on phishing attacks, the advice was: look for bad spelling, bad grammar, these kinds of things, and that will be your tip-off. Well,
if AI is writing the phishing attacks, which is what's going to be happening more and more going forward.
It's going to be in perfect English or Spanish or
Portuguese or whatever. So there's those kinds of things. We
actually need to go unlearn that from people. We need
to tell them, ignore that. Now, if you see really
bad grammar and it claims to be your bank, well,
then probably it's not your bank. But the bottom line
is we can't rely on a lot of those kinds of cues that we have trained people on in the past. We're going to have to solve a lot of
these things so that it's foolproof. Now, that's hard because
they keep making better fools all the time. But to
the best extent possible, we have to make it so
that a user can't do the wrong thing, because it's
going to be hard to just teach them not to do the wrong thing. And organizations still keep making these
same mistakes. Even if an organization said we're not doing
multi-factor, it's too expensive or whatever, well, I would disagree with them if they look at the cost of a data breach. Multi-factor is pretty darn
cheap compared to the alternative. But even if you're going
to keep using passwords, at least use the best practice
guidelines from someone like the US National Institute of Standards
and Technology, which in 2017 changed all the advice that
people are still following and enforcing: making people change their password on a regular basis? NIST says no, because that forces people to write them down. Making your password complex? NIST says no, because again, it's forcing people to write their passwords down. You know, length is strength when it comes to passwords, not complexity. Because we're looking at the math when we make these rules and ignoring the human element: if people can't remember a password, then they're going to use the same one everywhere or they're going to write it down or other kinds of things. So in the security department, we've got to do a better job of making it foolproof, so they can't
do the wrong thing. While we have a couple more
minutes left, I'd like to touch on something that Jeff
brought up earlier about targeting higher education institutions. Do we
think that maybe this is a personal vendetta? Because this
isn't a place, like Jeff mentioned, that you might be
getting the most bang for your buck, I guess, out
of an attack. Whereas maybe a larger organization or maybe
a VIP client or something would be a better target
if you're looking to get more money. My suspicion is
no. My suspicion based on working with a few higher
ed clients is that they don't have the best security.
So they're easy targets, they're soft targets, they might not
be the highest-value targets. But in the academic world, there's a kind of undercurrent of "information wants to be free." So we don't want to
put a lot of impediments, we don't want to put
a lot of barriers and security looks like a barrier.
So we're kind of against that. And I don't know,
maybe information wants to be free, but the company that
just paid you a ton of money for that grant,
they don't want it to be free. So if you
want to keep funding your research, you need to be
able to guard this stuff. But I know universities are
strapped, so they're doing the best they can. The university that I work for, I mean, they've had multi-factor authentication for at least the last five years that
I've been there. So good for them. And I think
the others should as well. One school of thought, right.
I mean, I work with a lot of universities as
well and you know, I understand resources and funding can
be challenged, but again, I don't think it's a technology issue. Right. I feel like this may be the onset of something bigger, in the sense that it is
easy to understand the current environment at a university because
it's open. The fact that there is a situation going
on, whether it is a political situation or an education
situation or a health situation, it's a lot more easily
available than what is going on within a company. So
being able to use that as a mechanism to go
create the phishing email so there's a higher chance of
being successful. The takeaway for me from this is that attackers are getting more contextual; attacks are getting more contextual in the phishing email. So we need to do a better job
of number one, educating and creating the education material which
is more contextual and increase the rigor of testing with
penalties to say if Jeff clicked on it, he's in the penalty box for 20 minutes. Right. Sorry, Jeff. Right. I mean, sorry, Jeff, but I gotta put somebody in the penalty box. If anybody deserves it, it's me. So I'll go. But at
the same time I'm talking about improving the technology of
doing a better job of monitoring if somebody's cutting through
the gate all the time, cutting through the fence all
the time. Do a better job of monitoring it as
an example. Right. Do a better job of making it
tougher and tougher and tougher because we as human beings
don't seem to learn even after this many years, less
than 10%. It just makes me sad. Michelle, final thoughts.
Great, thank you. Lots of pressure there with final thoughts.
Take your time. It's got to be you, because I'm in the penalty box. So. Yeah, that's a good point. Yeah. Just kind
of going off of what both Jeff and Sridhar said.
One size doesn't fit all when it comes to education.
So really understanding your threat landscape. And again, I guess
I'm talking to all the organizations out there, but actually
also on a personal level as well, knowing what is
your likely target and threat landscape in terms of who
is going to target you, what types of attacks. So
the example given with regards to this campaign against the
universities, do the HR and payroll departments of universities understand
that this is happening? Do they have this type of
threat intelligence that's disseminated to them, and is it incorporated
and integrated into their cybersecurity training program? So if you're
being trained on something that's likely not going to happen
in your environment, then maybe you need to reassess what
your training program looks like. Well, that's all the time
we have for today. Thank you for joining me, Michelle,
Sridhar and Jeff. Also, thank you to all our viewers
and listeners. Subscribe to Security Intelligence wherever podcasts are found,
and stay safe out there.