
AI Trust and Windows 10 End‑of‑Life

Key Points

  • AI is becoming increasingly capable, so organizations must adopt it as a tool while ensuring its trustworthiness, much like hiring an employee you trust to write code.
  • The upcoming end‑of‑life for Windows 10 forces individuals and businesses to decide whether to upgrade, extend security updates, or switch to a different OS, each carrying distinct security and continuity risks.
  • Planning early for OS transitions is essential; treating the end‑of‑life as a business risk and preparing a migration strategy helps maintain security and operational stability.
  • The podcast’s broader agenda includes exploring how AI will transform security operations centers, the reliability of AI agents for fixing vulnerable code, and emerging threats such as “payroll pirate” scams that divert employees’ salaries.

**Source:** [https://www.youtube.com/watch?v=sVxSCcwlxus](https://www.youtube.com/watch?v=sVxSCcwlxus) · **Duration:** 00:46:10

Sections

  • [00:00:00](https://www.youtube.com/watch?v=sVxSCcwlxus&t=0s) **AI Trust and Windows 10 Risks** - The episode introduction stresses the necessity of trusting AI as a security tool, previews discussions of AI’s transformation of SOCs and automated code fixes, and raises concerns about lingering Windows 10 deployments exposing users and organizations to threats.
  • [00:03:06](https://www.youtube.com/watch?v=sVxSCcwlxus&t=186s) **Risk‑Based Strategy for Legacy Systems** - The speaker argues that a risk‑based approach to deciding between patching, extended support, isolation, or extra monitoring is essential, since unsupported Windows 10 machines can become vulnerable “zombie” bots exploited by attackers.
  • [00:08:45](https://www.youtube.com/watch?v=sVxSCcwlxus&t=525s) **Evolution of AI in the SOC** - Sridhar explains how security operations have progressed from manual log analysis to machine‑learning detection and now to autonomous agents, emphasizing AI’s role in boosting detection accuracy and investigation speed, and the need for analysts to focus on high‑value, context‑driven decisions.
  • [00:11:55](https://www.youtube.com/watch?v=sVxSCcwlxus&t=715s) **Human Judgment Needed in AI Security** - The speakers stress that while AI aids cyber defense, human oversight remains crucial to prevent over‑aggressive automation and to counter long attacker dwell times.
  • [00:15:31](https://www.youtube.com/watch?v=sVxSCcwlxus&t=931s) **AI Model Poisoning & SOC Realities** - The speaker warns that malicious inputs can poison AI models and cautions against relying on AI to replace human security analysts, stressing the need for continuous human‑AI collaboration.
  • [00:21:38](https://www.youtube.com/watch?v=sVxSCcwlxus&t=1298s) **AI Code Generation Trust Dilemma** - The speaker warns that as AI systems increasingly write their own code and evolve autonomously, a lack of transparency and oversight could lead to opaque, untrustworthy software, necessitating a zero‑trust “verify‑then‑trust” approach.
  • [00:25:42](https://www.youtube.com/watch?v=sVxSCcwlxus&t=1542s) **Automating AI Patch Review Process** - The speakers debate how to replace human‑reviewed code patches with automated validation and red‑team testing to keep pace with AI‑generated fixes.
  • [00:29:04](https://www.youtube.com/watch?v=sVxSCcwlxus&t=1744s) **Recursive AI Oversight and Trust** - The speakers discuss the need for layered, AI‑driven oversight, trust, and risk management to prevent backdoors and address the recursive challenge of monitoring AI systems.
  • [00:33:24](https://www.youtube.com/watch?v=sVxSCcwlxus&t=2004s) **Critiquing University Security Practices** - The speaker rebukes an article’s focus on low‑value university payroll attacks, decries the ongoing lack of multifactor authentication, and champions passkeys as a more phishing‑resistant alternative.
  • [00:36:45](https://www.youtube.com/watch?v=sVxSCcwlxus&t=2205s) **Balancing Education and Technology in Security** - The speakers emphasize teaching users through engaging methods while evolving security tools, such as email monitoring and rapid‑response rules, to assume breaches will occur, noting that education can reduce but not eliminate human error amid a shifting attack surface and AI‑driven threats.
  • [00:41:54](https://www.youtube.com/watch?v=sVxSCcwlxus&t=2514s) **University Security vs Open Culture** - The speaker argues that universities’ cultural emphasis on free information creates soft phishing targets, contrasting the ideal of open access with funding pressures and advocating stronger safeguards such as multifactor authentication.
  • [00:45:04](https://www.youtube.com/watch?v=sVxSCcwlxus&t=2704s) **Tailoring Cybersecurity Training to Threat Landscape** - The speaker emphasizes aligning cybersecurity education with specific threat profiles, such as university HR and payroll units, to ensure training relevance and effectiveness.

Full Transcript
[0:01] Don't bet against AI. It's getting better, and I think we will have to adapt and we'll have to figure out how to use it as a tool. But we're also going to have to make sure that it's trustworthy, just like you wouldn't hire an employee to start writing code for you if you didn't trust them. All that and more on Security Intelligence.

[0:17] Hello, and welcome to Security Intelligence, IBM's weekly security podcast, where we break down the most important stories in the field with the help of our panel of experts. I'm your host, Brian Clark. I'll be standing in for Matt Kaczynski this week. Joining me today are three returning panelists: Michelle Alvarez, Manager, X-Force Strategic Threat Analysis; Sridhar Muppidi, IBM Fellow and CTO, IBM Security; and Jeff Crume, Distinguished Engineer, Master Inventor, AI and data security.

[0:54] All right, here's what we're talking about this week. How will AI transform the SOC? Can we trust AI agents to fix vulnerable code? And payroll pirates are sailing the high seas. But first: RIP, Windows 10. Hundreds of millions of people still use Windows 10, and many PCs don't meet Microsoft's strong requirements for Windows 11. This brings up the question: should I spend the money on a new PC, or save a couple of bucks in the short run but remain vulnerable to attacks? So the first question we'll start with is: what kinds of security issues might this introduce for people and organizations? Let's go to Michelle. I'd love to hear your thoughts on this one.

[1:34] Yeah, thank you, Brian. So hopefully last week wasn't the first time that organizations and individuals heard about the end of life. I think the announcement originally came out maybe over a year, year and a half ago. So hopefully organizations have been putting steps in place to prepare for this end of life.
So I like to liken it to something that everybody can relate to, and that's our cars, right? We have warranties on our cars that eventually come to end of life. So you have a choice. Do you extend the warranty? That would be the same as the extended security updates offered by Microsoft. Do you maybe trade in your vehicle for a newer model? That could be Windows 11. Or are you contemplating purchasing a whole new car, different make and model? That would be alternate operating systems. So I think organizations have a lot of things to consider when an operating system comes to end of life. But the important thing is to make sure that you're setting up for success, preparing early, so that business continuity is there when the platform becomes end of life.

[2:46] Exactly. Exactly. So there are a lot of different choices, but the main thing to take away is you need to do something. Right. Or you could do nothing. There's always that choice, do nothing. Right. That might not be the best choice, but that is a choice.

[3:00] Right. I think you probably start thinking about it as a business risk. Right. I mean, to your point, Michelle, it's about business. In some cases you may have to do something. In other cases it's probably okay to patch it, or buy extended support, or in some cases circumvent it with additional controls like network isolation or extra monitoring. But I think if you start looking at it with a risk-based approach, the choice becomes really clear on what the options are and how we go about it.

[3:32] I think it's appropriate, yeah, just in time for Halloween: now we get all these zombies. So that's what essentially these Windows 10 systems are going to be. They're going to be the living dead that are out there. They're still able to operate and do things, but in fact they're not up to date.
They're going to be vulnerable, they're going to represent a threat to us, and they in fact could be leveraged to become what we refer to as zombies, or botnets, where somebody takes remote control of a system and then uses it to attack others, to do denial of service, to do other things like that. So the bad guys just got a lot of fertile ground handed to them, because whenever there are new vulnerabilities that come out, those will be known publicly, but there won't be patches to fix them. So in a sense it'll be like zero day every day for all of these systems.

[4:25] The unfortunate thing, I think, is that the people that are least likely to have hardware capable of doing the upgrade, and least motivated to do it, will be the ones that will be taken advantage of. They're not going to know that they need to do it. In many cases it's going to be the family PC; they bought it and they don't want to upgrade every year or two. It still works from their perspective. So again, like Michelle's analogy with the car, not everybody wants to buy a new car every couple of years as long as it still works. So it's going to be a big issue. But I think organizations, and to a great extent even individuals, have to start planning for obsolescence, because the vendors are doing that. Their intention, of course, is to keep adding new features, and new features will require additional hardware, additional capability. It's a big cost for Microsoft to continue supporting down-level software. At what point do they cut it off? Are they supposed to still support Windows 3.1, Windows 95? At some point they say no. But this does seem like it's cutting off a lot of folks. There are estimates of 200 million computers. That's a lot.
That's a lot of potential zombies that are about to come alive here in the next few weeks and years to come.

[5:51] Yeah, that's a great point. I even noticed a startling fact, and I'm not sure of its validity, but 9% of Windows users are still on Windows 7. This is not a new topic, right? Systems have gone out of support for many, many decades. What we've seen is that there's a 90-day window period where attackers are stockpiling exploits for this day. Because not everybody is going to act right away; to Michelle's analogy, it's "sure, let me drive the additional two miles before I do something about it." And that's a window that attackers try to take advantage of to launch some attacks. So it's something to get started on right away, as opposed to waiting.

[6:34] Yeah. I think most normal people hate software upgrades. I don't put myself in the category of normal. I'm always enjoying the new beta; I want the newest features, the newest whatever. But that's not normal. Most people in my family, when new software comes out for their phones, they're the last ones to update. They're years behind. And then when they come and ask me, the family tech support guy, to fix the thing, I look and see what level of mobile operating system they're on and it makes my eyes pop out. But that's not what most people want to do. Which means this stuff has to be automated, and it has to be easy, it has to be frictionless. And it's just not when it comes to these cases. And if you look at it from the economic standpoint, this is what economists would call an externality for a software vendor. This is not something that benefits them directly. Why would they want to upgrade all these old systems? It's harder for them. It costs them more to support multiple levels of operating system.
They'd rather spend their time developing new features and things like that. So there's a tension, and it won't just resolve itself naturally.

[7:46] Yeah, absolutely. And I think when we talk about these topics there are two things we can look at, right? We often speak in terms of organizations and enterprises, because, at least for me, that's who I'm often speaking to: our clients. But then Jeff brought it really personal, right? Our family members, our friends. And I think this is where education comes into play, and doing things like these podcasts, where we can talk directly to individuals about your own personal security hygiene and how important it is. Because then they might see the benefits of doing the extended warranty, or upgrading to Windows 11. They can now put it into context: okay, this is significant, this is serious, and this is why.

[8:32] Moving along to our next topic: how will AI transform the SOC? More and more organizations are introducing AI agents for the SOC. IBM has ATOM, and Blumira just dropped one last week called SOC Autofocus. Let's talk about more AI in the SOC. Sridhar, why don't we move to you for the first question: what is AI doing in the SOC now, and what might it do in the future?

[8:57] If you look at the progression of the SOCs, right, we've been here for a number of years. We started with looking at manual logs, then SIEM rules, then machine learning in detection, and now we are getting into autonomous agents. So it's a progression, the way I look at it: the progression of how SOC analysts handle the number of events that we're seeing every day. And what AI helps us with is, first of all, improving the accuracy of detection, and increasing the speed of investigation so they can respond faster.
Now what we have to think about is, with these agents like ATOM and Blumira's and other things, we go to the next level, where some of the detection is being done by the agents. So the analysts have to move to the higher ground of either validating those, or going and looking at the events and incidents which are high value, where we may have to rely not only on the speed and the efficiency of the AI, but also couple that with human ingenuity, creativity, and the business context, to be able to decide whether something is relevant or not relevant and take action quickly.

[10:23] I'll just say I think this is necessary. This is a necessary evolution. The number of events, the amount of data that has to be processed, the amount of time that we have to do the analysis: it's not working in the favor of the good guys. So we're going to need all the help we can get. The bad guys have AI; the good guys are going to have to use it better. So this is not a surprise. And like Sridhar said, we've been working in this space, developing tools in this space. But it's also an area that's new, and we need to go in with caution. So it's great if you have a system that can do all of this analysis and do this research and take what would have been hours down to minutes. Now you've basically got a security analyst that's working 24/7, that's always up to date on the latest and greatest, and can advise you. That's all great, as long as it's not hallucinating. So we don't need a security analyst on LSD. It needs to be grounded in truth; when it's seeing something, it needs to be real. And we also have to realize that attackers will find ways to inject information into our systems that will be designed to confuse the AI systems that we're relying on to do the diagnosis.
That'll be really clever stuff. So the arms race will continue in both of these cases. […]

[12:02] […] in with caution. And I think, as much as the temptation will be, with AI doing such a great job with this, to just let it run, we need to keep the human in the loop when it comes to which systems we shut down, which things we block, those kinds of decisions. Otherwise, it's really easy for us to basically automate a denial of service against ourselves. If we build a system that says, every time I see an attack on this port, shut down that port, well, then the bad guy just has to attack you once on every port, and you'll shut everything down for them. They don't have to do it. So we've got to be intelligent with this. The AI is intelligent to an extent, but there's judgment, and human judgment is still going to be needed in these cases.

[12:45] Right, right. It's not a full solve for all the problems in the SOC. It's a tool and needs to be used as such. Right?

[12:56] Yeah. I'm going to jump in on one thing […] the defender side, it's not on the responder side. Because oftentimes what we're seeing is attackers dwelling in environments for long periods of time. And I'm actually going to pull in a relevant statistic from the Cost of a Data Breach report, which I know my fellow panelists are very familiar with. It's a study that IBM does, with analysis on data and insights pulled from the Ponemon Institute. And this year's report has a lot of great information, especially when it comes to the use of AI in environments. Security teams using AI and automation extensively (that's the keyword, extensively) shortened their breach times by 80 days and lowered their average breach cost by $1.9 million. And that's compared to organizations that did not use AI at all.
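The port-blocking pitfall described above (auto-response rules that let one attacker trick you into shutting down every port) can be made concrete with a small guardrail sketch: cap how many actions the automation may take on its own and escalate the rest to an analyst. This is a minimal illustration under stated assumptions, not the logic of any product mentioned in the episode; the class name, the three-blocks-per-hour budget, and the one-hour window are all invented for the example.

```python
from collections import deque
import time

class GuardedResponder:
    """Auto-block attacked ports, but cap automated actions so an
    attacker cannot turn our own automation into a denial of service."""

    def __init__(self, max_blocks_per_hour=3):
        self.max_blocks_per_hour = max_blocks_per_hour
        self.recent_blocks = deque()   # timestamps of automated blocks
        self.pending_review = []       # ports escalated to a human analyst

    def on_port_attack(self, port, now=None):
        now = time.time() if now is None else now
        # Slide the one-hour window: forget blocks older than 3600 s.
        while self.recent_blocks and now - self.recent_blocks[0] > 3600:
            self.recent_blocks.popleft()
        if len(self.recent_blocks) < self.max_blocks_per_hour:
            self.recent_blocks.append(now)
            return ("block", port)     # within budget: act autonomously
        # Budget exhausted: a human decides, so a mass attack on every
        # port cannot cascade into blocking everything.
        self.pending_review.append(port)
        return ("escalate", port)
```

With this guard, an attacker probing all ports at once gets at most three automated blocks; everything else queues for human judgment, which is the "human in the loop" point the panel keeps returning to.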
[13:53] So it's all about containing the incident. Well, first identifying the incident, right, and then containing and subsequently remediating it. The more we can shorten that, with AI as a tool in our belts (not to completely erase human oversight, but as a tool to speed up that process), the shorter that attacker life cycle gets. And that's to our benefit.

[14:20] Yeah, that's a great point. I think the trend that I'm seeing with all three of you is keeping that human in the loop, not just removing the human aspect from it. I read a bit about the AI saving a lot of time, and like you mentioned, Michelle, 80 days: that's a lot less combing through the logs. But we still need somebody there verifying anything that the AI agent is finding. We touched a little bit on concerns about introducing AI to the SOC, and I can already guess that there are some among all three of you. Jeff, do you want to elaborate a little more on some of the concerns you were mentioning?

[14:58] Yeah. So one concern I mentioned is that someone may be able to inject information into the system. There's one type of attack called evasion, where you're able to manipulate the inputs in a way that doesn't confuse you or me, because we perceive things through certain processes, but AI doesn't use exactly those same processes. So it might see this as something completely different, where you and I would filter some of that information out. That's one example. If these systems are staying up to date, which we hope they will, then they're going to be scanning the Internet, and they're going to be using other inputs and feeds like that that are coming in.
So again, an attacker might put out, say, a new document, a fake document, that talks about a new type of attack, and inside the document is an indirect prompt injection. And then […] injection, it starts reading instructions that cause it to start exfiltrating or doing other kinds of things like that. You could plant things in information that later gets sucked into models, and then it stays in the model for a long time until somebody realizes, oh, this model has been poisoned. What are we going to do about that? Those are just a few examples, but there are a lot.

[16:24] And I think for a lot of organizations the temptation will be: oh, we can save a lot of money, we can just bring in this AI and start laying people off. We don't need people, because we've got AI. And I'm just going to say: resist that temptation. There's always going to be enough work in the SOC, I think, at least for the foreseeable future, regardless of how good our tools get. I don't think we ever get to a point where we say, okay, you know what, everybody just take the week off, we're ahead. This is a line of work you never get ahead in. So you're always going to need to be focusing, using human intelligence, augmenting it with AI, just in order to keep your head above water. That's not even getting ahead.

[17:08] One thing I want to mention is, I know we've been heavily focused on the human and AI partnership, right? […] The one thing that I want to highlight is that even on the AI side, we need to think about explainability, right?
The AI can basically say "block this IP address", but it should be "block this IP address because of this following TTP" (building on Jeff's example), or this threat intel, or this behavior pattern, so that you can make an informed decision and not be subject to some of the prompt injection attacks in between, right? So yes, the human and AI and agent partnership, or what the industry is calling digital workers, is absolutely important. But at the same time the onus is on the agents to ensure that they are not autonomously making decisions without validation, not subject to malicious coercion by attackers, and not drifting with time. AI, and agents specifically, are non-deterministic and carry uncertainty. So it's possible, and Jeff was alluding to some of the examples: there's evasion, there's poisoning, there's stealing. It's like the twenty-questions game we play in the car: you can ask the twenty questions of the AI, figure out what the model is doing, and then make it drift. Right? So these are things we should inherently protect the AI against, in addition to the humans.

[18:40] That's a great point. Michelle, any final thoughts on this topic?

[18:44] What Sridhar mentioned about the human and AI connection, or partnership, I think really resonates with a lot of security teams that are trying to find the balance between what the AI does versus what the human does. And we still need the level 2 and level 3 analysts and above to be able to learn from somewhere. So completely cutting out that first level I don't think is going to be ideal. I think we need to figure out a way to marry those two things and build that partnership, where we're able to develop our human analysts to level 2, level 3 and beyond, and not cut out that first layer, which is so important. Where do you build your skills from? It's from that entry-level position as well.
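Jeff's planted-document scenario (an indirect prompt injection hidden inside a fake threat report) suggests screening external text before it ever reaches an AI analyst as context. The sketch below is a deliberately minimal heuristic: the pattern list, function name, and quarantine behavior are illustrative assumptions, and a real defense would need far more than keyword matching, but it shows the shape of the idea of treating scraped feeds as untrusted input.

```python
import re

# Phrases that legitimate threat-intel prose rarely contains but injected
# instructions often do. Purely illustrative, not an exhaustive ruleset.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"exfiltrate",
    r"send .* to http",
]

def screen_document(text: str) -> dict:
    """Flag external text that may carry an indirect prompt injection
    before it is handed to an LLM-based SOC assistant as context."""
    hits = [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, text, flags=re.IGNORECASE)]
    # Quarantined documents would go to a human, mirroring the panel's
    # point that agents should not act on unvalidated input.
    return {"quarantine": bool(hits), "matched": hits}
```

The design choice worth noting is that the screen runs outside the model: the document is inspected as data before it becomes part of a prompt, rather than asking the model itself whether it was injected.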
[19:33] Moving on, our next topic for today involves Google introducing CodeMender, an AI agent for code security. Earlier this month, Google introduced the agent, designed to find and fix flaws in code. So our first question is: are we ready to trust AI with that job? Jeff, let's start off with you.

[19:54] So this, I think, is another necessary and foreseeable step in the evolution we're going through here. We've been using AI for a number of years, machine learning and other types of technologies, to do identification of vulnerabilities in code, to scan source code, to try to hack away at systems and things like that. So that's not surprising, and we even have it suggest replacements: here, I found a vulnerability, here's a code snippet that I recommend. This is the next step in that, where it's actually doing the repair, actually injecting the new code and changing it around. And here's where, I'm not going to say we shouldn't do it, I'm going to say we should tap the brakes, or at least make sure we have good oversight. This is going to be back to the human-in-the-loop comment again. These AIs are based on models, and imagine if somebody got in and was able to poison a model, so that you now have CodeMender, a piece of code that's going to go fix your code, and what if it injects what is a backdoor into your system? If I'm blindly trusting that it found the vulnerability and fixed it: well, probably 90% of the time it'll do great. And then there will be these few edge cases where, who knows, there's a backdoor, there's a Trojan horse, there's some other unexpected behavior that might happen from this. Or in some cases, again, we have hallucination issues, where it might not be intentional; it just might be the nature of the system.
You know, foundation models, large language models, these kinds of things: we still have not solved hallucinations. We've made them better. But just like I didn't want a SOC analyst on LSD, I don't want a coder on LSD either, writing code and putting it into my system. Yeah, I'm against that, let's just say, in this context. So I'm definitely not for it. So there needs to be some oversight. Who's going to be watching this system? And the longer-term effect is the part that's actually more troubling to me if we don't get it right, because this can be a double-edged sword. It can help us, it can hurt us. As it gets better and better, as I expect it will, we'll become more and more dependent, and then will we have enough people that understand how to read this stuff? If I have an AI system that is writing code, it'll be writing the code for the next-generation AI system, which then will create the code for the next-generation AI system. These AI systems will be creating their successive generations, which is great, but eventually it's going to start writing code that might not make sense to us anymore. And now we're having to trust a system that's very opaque, and we're going to use it to do its own debugging. That's a lot of trust. So I would just say we're going to need to adopt the kind of zero-trust mentality of verify, then trust; not trust and then verify.

[23:09] Jeff is stealing what's in my head. So I actually... oh, I'm sorry, I'm sorry. No, it's great. I'm glad we're thinking alike. But I thought to myself, are we going from zero trust to 100% trust just because AI is involved? I don't think so. Right.
And I also thought, if I could phone a friend during this podcast, I would call our X-Force Red team, our pen testers, our hackers, to get their point of view on this, because I'd be very interested. So I had to put this in perspective of my wheelhouse, threat intelligence. When we do, for instance, a report for a client, looking at their threat landscape based on their industry and their geography: what is our risk tolerance for putting out a report that maybe has a missing Oxford comma? There are a lot of people that love their Oxford commas. What if this report, because AI looked at it or produced part of it, missed a comma? But what if it made the wrong attribution for the most likely threat actors targeting this organization? That's a huge mistake. So again, what is the risk tolerance? And I guess the theme for this podcast is: yes to AI, but let's make sure there's human oversight. We're not obsolete yet. I hope not. My pickleball career has not taken off yet, so I don't know.

[24:39] Okay. Okay, well, AI will free you up to play more pickleball. Yeah, that's the hope. Right.

[24:45] I actually welcome, like Jeff said, the automation in patch management. Right. And not just automation, but being able to go and scan with dynamic testing, static testing, and eventually, I'm assuming, some blue teaming and red teaming, or vice versa: purple teaming concepts over here. Both my colleagues said things about human-in-the-loop verification, and all of that is great. The couple of things I would mention: it's not just about trusting the AI, it's also about trusting the processes. Do we have the right processes in place so that we can do the right level of validation?
Because, Jeff, you were mentioning there's going to be code that we're not going to be able to look at at every granular level. There are going to be millions of lines of code. Even what Google has claimed, I believe, is close to 72 or around 75 fixes in six months, across millions of lines of code. Nobody's reading all of that. But what's the process? What's the process to review it? What's the process to test it? What's the automation that we put in there so that we can validate it, not by a human just clicking yes or no, but by having an automated process in place to match the rate and pace of AI-based patch management?

[26:20] That's a great point, and we touched on this a bit. Google states that right now all the patches by CodeMender are reviewed by researchers before they're put into place. How long do we think before researchers are no longer taking a look at these, and the AI system is able to just go ahead and put the patches in place and move along without anybody reviewing them? Do we have any estimates of how far off we might be?

[26:49] I don't think anybody has looked at every line of 2.5 million lines of code. Right. There's got to be a mechanism by which we have to evolve. Right. Human […] put some automated ways of validation in place, for example. And we were talking about, Jeff was talking about, poisoning and evasion in the last topic. Let's go down that thread. If the AI becomes rogue, we need a mechanism, to Michelle's point, to automate the red teaming: to create an agent which is a red team, that can launch automated testing of the code at the entire system level, not just the AI, not just the MCP, not just the server, but the entire end to end. Right.
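Sridhar's point that we must trust the process, not just the AI, could take the shape of a verify-then-trust gate for AI-generated patches: every automated check must pass before a patch is eligible to merge, and the per-check results are kept as an auditable rationale. This is a sketch under stated assumptions; the function and check names are illustrative placeholders standing in for a real regression suite, static analyzer, and automated red-team harness, not any vendor's pipeline.

```python
def validate_ai_patch(patch: dict, checks: dict) -> dict:
    """Verify-then-trust gate: run every check against the patch and
    approve only if all pass, keeping results as an audit trail."""
    report = {name: check(patch) for name, check in checks.items()}
    return {"approved": all(report.values()), "report": report}

# Illustrative checks; in practice each would invoke real tooling
# (test runner, static analysis, red-team agent) rather than read
# precomputed fields from a dict.
checks = {
    "tests_pass":      lambda p: p["tests_passed"],
    "no_new_findings": lambda p: p["static_findings"] == 0,
    "red_team_clean":  lambda p: not p["red_team_bypass"],
}
```

Keeping the per-check report alongside the approval matches the earlier explainability point: a rejected patch tells you which gate failed, instead of a bare yes or no.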
And once you do that, then I foresee an automated blue agent which is able, based on what we found out, to automatically fix those issues, and then do it in a continuous manner. How far are we from that? I think it's going to take some validation and, more importantly, trust.

I think it actually could be done now, but like Sridhar said, it's not ready for prime time. There are too many gaps, too many things that we don't understand, too many things that could be introduced that I wouldn't feel comfortable with. But here's the thing. Whenever people make predictions about AI — well, it's never going to do this; it's doing this, but it'll never do that — I recently did a video on this on the IBM Technology channel, and my take is: don't bet against AI, because most of the things that people have predicted it was never going to do, it's done. I'm not saying it'll do everything, and I'm not saying it'll do everything perfectly, and it certainly can't today, but it's getting better, and I think we will have to adapt and figure out how to use it as a tool. But we're also going to have to make sure that it's trustworthy. Just like you wouldn't hire an employee to start writing code for you if you didn't trust them. You'd need some reason to trust them, and you might want some oversight, some way to test and make sure that what they're introducing is not backdoors and stuff like that. We're going to need the same capability. My guess is we'll use AI to test AI, we'll use another AI to do oversight, and then we have to wonder who's watching the watchers. So this whole thing becomes a nested problem that recurses on itself. But that's where I think we're moving. Hope for the best, but prepare for the worst with AI. Michelle, anything to add?
No, I just thought to myself, can AI be bribed? Because then we're in trouble. Yeah, right, sure. That might be an interesting topic for one of our future podcasts. Yeah, that's right. Absolutely.

We've been spending a lot of time on trust in this podcast. I think we do need to step back and think about it as risk management. I think Jeff said, don't shy away because it's not ready; trust is going to come over time. It has to be earned, not like a box that we have to check. So we all need to think about risk management, whether it's the previous topic or this topic, because there are going to be concerns. And I think both Jeff and Michelle said that the patches will introduce more patches because of the flaws they will create. And we talk about red team and blue team, but I think we need to, number one, think about how we evolve the AI, which is things like root cause analysis and fix-rationale validation. That's all technology that has to evolve. And then, meanwhile, there's a human side of it which also has to evolve. Not just going and rubber-stamping 2,000 or 2 million lines of code, but evolving the human interaction such that you're doing not only automated testing and code review but also runtime monitoring and rollback. Because what happens if AI drifts, and you realize it was okay, but it's not okay tomorrow morning? We've got to be able to roll back. So I think about it as risk management.

Finally, our last topic of the day: payroll pirates target employees' salaries in a new social engineering campaign. Scammers impersonate employees to trick HR into routing their salaries into accounts that the scammers control.
Since this scam works so well, especially on HR departments, what does it say about the state of enterprise security? Michelle, do you want to lead us off with this one?

Yeah. So essentially, this is actually a campaign, or attack type, that we've seen for some time now. It's called an adversary-in-the-middle attack. And what's happening now is that attackers are able to get around some of the multi-factor authentication methods that are in place. What we're emphasizing — and I'm glad Jeff Crume is on the call, because I'm going to direct to him next — is phishing-resistant MFA methodology and authentication mechanisms. I was so excited to see that Jeff was joining the panel, because I was like, this is a great topic; we could just bring up his latest video on this and spend the next half hour, 40 minutes, on this topic alone. But this is definitely something that we have seen increase in our incident response engagements, where attackers are using this type of attack to circumvent risk mitigations that are in place. And this is an ongoing battle between defender and attacker, where we're putting things in place and they're finding ways around them. But never fear, there's something better we can be doing to protect our organizations. Jeff, why don't you take it from here?

Seems like a shameless video plug here. Yeah, yeah, exactly, sure. I don't need to; I paid Michelle in advance to plug that video already. There we go. You're awesome. Yeah, thank you. The check's in the mail. The thing I thought was interesting about this particular article is that it said they focused a lot on university environments. And I thought, if you're an attacker...
Look, I'm an adjunct professor at NC State University, and I do it for the love, not for the money, because there's not a ton of money in it. And I would just think, if I was an attacker trying to siphon off payroll, I'd go somewhere else where there were higher-value targets. Adjuncts, at least, are not being paid a ton of money. So that was the first thing.

The other thing I thought about: there is a lot of discussion about multi-factor authentication, and it seems in a lot of these cases the MFA was not implemented. My reaction to that is, did I just sleep through it? Are we still in 2005? I thought we were in 2025. What are we still doing with systems that have any importance at all that don't have some form of multi-factor authentication? To me, there's no excuse for that anymore. We've had years and years; the technology is mature. There's no reason for any system of any real value not to have it.

And then, as Michelle said, I'm a big advocate of this technology called passkeys as a replacement for passwords, because they're essentially phishing-resistant. I'm never going to say it eliminates any attack, but it does a pretty darn good job of reducing that likelihood. There's a lot of misunderstanding about passkeys, because the first generation required a special device — but that special device can in fact just be your mobile phone now. So the friction on passkeys has come way down. And look, the number one thing that phishing attacks are going for is your password. Here's one way to make sure they can't steal your password: don't have one. Have a passkey, which is not easily stolen. We know how to deal with this. We haven't solved the porch pirate problem, but we can solve the payroll pirate problem.
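Why passkeys are "essentially phishing-resistant" comes down to origin binding: the device signs the server's challenge together with the web origin the browser is actually on, so a signature captured on a lookalike site never verifies at the real one. The toy model below illustrates only that idea. It is not WebAuthn: real passkeys use public-key signatures with the private key on the device, whereas HMAC here is a symmetric stand-in just to keep the sketch self-contained, and the origins are invented.

```python
import hashlib
import hmac
import secrets

# Toy model of origin binding, the property that makes passkeys
# phishing-resistant. HMAC stands in for the real public-key signature.

REAL_ORIGIN = "https://payroll.example.edu"
device_key = secrets.token_bytes(32)  # in reality, never leaves the device

def sign_assertion(key: bytes, challenge: bytes, origin_seen: str) -> bytes:
    """What the user's device does: sign the challenge PLUS the origin
    the browser reports. The user cannot override or mistype this part."""
    return hmac.new(key, challenge + origin_seen.encode(), hashlib.sha256).digest()

def server_verifies(key: bytes, challenge: bytes, signature: bytes) -> bool:
    """The real site only accepts signatures bound to its own origin."""
    expected = hmac.new(key, challenge + REAL_ORIGIN.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser is on the real origin, so it verifies.
ok = server_verifies(device_key, challenge,
                     sign_assertion(device_key, challenge, REAL_ORIGIN))

# Phishing attempt: a lookalike origin produces a signature the real
# server rejects, even if the attacker relays it in real time.
phished = server_verifies(device_key, challenge,
                          sign_assertion(device_key, challenge,
                                         "https://payroll.examp1e.edu"))
```

There is no secret for the user to type and nothing reusable for the phishing page to capture, which is exactly the "don't have a password" point made above.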
I love the fact that both of you are advertising my product, but unfortunately, this is not a technical issue. The technical issue has been solved; there are multiple ways you can solve it. It's purely an adoption issue. It's a social engineering issue. We've all been here for a number of years, attacks continue to get sophisticated, and humans always remain the weakest link. That's what I think we should be focusing on: phishing resistance has been there for a while, and I'm seeing organizations talk about adoption rates of less than 10%. That's my worry. That's what I worry about focusing on: how do we improve the adoption rate, and how do we make these things more effective in terms of adoption?

So I look at it from two aspects, Jeff and Michelle. One aspect is: fine, the technology is there; it's not a new technology innovation. But number one, how do we teach it? How do we gamify it? How do we make sure that it is well trusted, et cetera? And the second thing is, as we hear from Jeff, MFA is not a new topic. Like you said, it's 10 or 15 years old and adoption is still low. So by this time we have to accept some defeat, to a certain extent, and agree that we need to evolve the technology to really assume that these bad things are going to happen. Phishing attacks happen because of email, so do a better job of monitoring email; put some detection rules in place and monitor them; quickly monitor some of the HR systems, so that we are able to take quick action.
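Sridhar's "monitor email and the HR systems, assume the bad thing happens" point could be sketched as a simple detection rule: hold any payroll-change request that mentions direct-deposit details but does not come from an internal address, and route it to manual verification. The keywords, field names, and domain below are all illustrative assumptions, not a description of any real product's rules — a deployed control would also handle compromised internal accounts, which this sketch does not.

```python
# Hypothetical mail-rule sketch: flag the specific pattern payroll-pirate
# campaigns use -- a salary/direct-deposit change request arriving from
# outside the organization. Domain and keywords are invented examples.

INTERNAL_DOMAIN = "example.edu"
PAYROLL_KEYWORDS = ("direct deposit", "bank account", "payroll", "salary")

def flag_for_hr_review(sender: str, subject: str, body: str) -> bool:
    """True if the message should be held for out-of-band verification
    (e.g. HR calls the employee back on a known number) before acting."""
    text = (subject + " " + body).lower()
    mentions_payroll = any(keyword in text for keyword in PAYROLL_KEYWORDS)
    is_external = not sender.lower().endswith("@" + INTERNAL_DOMAIN)
    return mentions_payroll and is_external
```

The design choice mirrors the discussion: rather than training every HR employee to spot every fake, the system itself stops the money-moving action and forces a verification step.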
So I'm talking about both education and the evolution of technology to assume that bad things are going to happen.

Yeah, that's a great point, and I'm glad you brought that up, with what Michelle said earlier about education. We can't remove every bit of human error, but we can at least lower the level of human error to the point where it's not as destructive to our organization. We absolutely should educate users. I think there's a limit to how much we're going to be able to educate them, because the playing field, the attack surface, keeps changing every day, and they can't possibly keep up. AI is going to be one of those things that changes it. We've seen from the Cost of a Data Breach report that Michelle mentioned earlier that over the last few years the number one cause of a data breach has been phishing attacks. Well, if we keep having the same thing year after year, then whatever we're doing is not working well enough. Maybe it's working to some degree, but it's not working well enough. We haven't slain that dragon yet, and we should be able to. We have technology that can help that's not being deployed widely enough. We can train the heck out of users, but only so far. In some cases, for years and years, the advice on phishing attacks was: look for bad spelling, bad grammar, these kinds of things, and that will be your tip-off. Well, if AI is writing the phishing attacks, which is what's going to be happening more and more going forward, it's going to be in perfect English or Spanish or Portuguese or whatever. So we actually need to go unlearn that from people. We need to tell them: ignore that. Now, if you see really bad grammar and it claims to be your bank, well, then it's probably not your bank.
But the bottom line is, we can't rely on a lot of those kinds of cues that we have trained people on in the past. We're going to have to solve a lot of these things so that it's foolproof. Now, that's hard, because they keep making better fools all the time. But to the best extent possible, we have to make it so that a user can't do the wrong thing, because it's going to be hard to just teach them not to do the wrong thing. And organizations still keep making these same mistakes. Even if an organization said, we're not doing multi-factor, it's too expensive, or whatever — well, I would disagree with them if they look at the Cost of a Data Breach report. Multi-factor is pretty darn cheap compared to the alternative. But even if you're going to keep using passwords, at least use the best-practice guidelines from someone like the US National Institute of Standards and Technology, which in 2017 changed all the advice that people are still following and enforcing. Making people change their password on a regular basis? NIST says no, because that forces people to write them down. Making your password complex? NIST says no, because again, it forces people to write their passwords down. Length is strength when it comes to passwords, not complexity. We're looking at the math when we make these rules and ignoring the human side: if people can't remember a password, then they're going to use the same one everywhere, or they're going to write it down, or other kinds of things. So in the security department, we've got to do a better job of making it foolproof, so they can't do the wrong thing.

While we have a couple more minutes left, I'd like to touch on something that Jeff brought up earlier about targeting higher education institutions. Do we think that maybe this is a personal vendetta?
Because this isn't a place, like Jeff mentioned, where you might be getting the most bang for your buck out of an attack, whereas maybe a larger organization or a VIP client would be a better target if you're looking to get more money.

My suspicion is no. My suspicion, based on working with a few higher-ed clients, is that they don't have the best security. So they're easy targets, they're soft targets. They might not be the highest-value targets, but in the academic world there's an undercurrent of "information wants to be free." So we don't want to put up a lot of impediments, a lot of barriers, and security looks like a barrier, so we're kind of against that. And I don't know, maybe information wants to be free, but the company that just paid you a ton of money for that grant, they don't want it to be free. So if you want to keep funding your research, you need to be able to guard this stuff. But I know universities are strapped, so they're doing the best they can. The university that I work for has had multi-factor authentication for at least the last five years that I've been there. So good for them, and I think the others should as well.

One school of thought: I work with a lot of universities as well, and I understand resources and funding can be a challenge, but again, I don't think it's a technology issue. I feel like this may be the onset of something bigger, in the sense that it is easy to understand the current environment at a university, because it's open. The fact that there is a situation going on — whether it is a political situation or an education situation or a health situation — is a lot more easily available than what is going on within a company.
So being able to use that as a mechanism to create the phishing email gives a higher chance of being successful. The takeaway for me from this is that attackers are getting more contextual; attacks are getting more contextual in the phishing email. So we need to do a better job of, number one, educating and creating education material which is more contextual, and increasing the rigor of testing, with penalties: if Jeff clicked on it, he's in the penalty box for 20 minutes. Sorry, Jeff, but I've got to put somebody in the penalty box. If anybody deserves it, it's me, so I'll go. But at the same time, I'm talking about improving the technology: if somebody's cutting through the gate all the time, cutting through the fence all the time, do a better job of monitoring that, as an example. Do a better job of making it tougher and tougher, because we as human beings don't seem to learn, even after this many years. Less than 10% adoption just makes me sad.

Michelle, final thoughts? Great, thank you. Lots of pressure there with final thoughts. Take your time. It's got to be you, because I'm in the penalty box. So. Yeah, that's a good point.

Just going off of what both Jeff and Sridhar said: one size doesn't fit all when it comes to education. So really understand your threat landscape. I guess I'm talking to all the organizations out there, but actually on a personal level as well: know what your likely threat landscape is, in terms of who is going to target you and what types of attacks.
So with the example given of this campaign against universities: do the HR and payroll departments of universities understand that this is happening? Do they have this type of threat intelligence disseminated to them, and is it incorporated and integrated into their cybersecurity training program? If you're being trained on something that's likely not going to happen in your environment, then maybe you need to reassess what your training program looks like.

Well, that's all the time we have for today. Thank you for joining me, Michelle, Sridhar, and Jeff. Also, thank you to all our viewers and listeners. Subscribe to Security Intelligence wherever podcasts are found, and stay safe out there.