
AI Security Donut: Discover, Assess, Control, Report

Key Points

  • The speaker proposes protecting AI systems with a “donut” of layered defenses that cover data, models, usage, infrastructure, and governance.
  • Effective AI security requires four core capabilities—discover, assess, control, and report—to create a comprehensive protection framework.
  • “Discover” involves locating all AI workloads across cloud and on‑premises environments, including hidden or unauthorized “shadow AI,” ideally using agentless methods, and then aggregating their logs into a searchable data lake for monitoring.
  • “Assess” focuses on scanning the AI landscape for known vulnerabilities and misconfigurations, with the goal of not only identifying but also automatically remediating security issues.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=2A94Mxn3jAc](https://www.youtube.com/watch?v=2A94Mxn3jAc)
**Duration:** 00:09:08

## Sections

- [00:00:00](https://www.youtube.com/watch?v=2A94Mxn3jAc&t=0s) **Untitled Section**
- [00:03:05](https://www.youtube.com/watch?v=2A94Mxn3jAc&t=185s) **AI Security Posture Management Overview** - The speaker explains AI security posture management: detecting policy drift, conducting penetration testing, and automatically scanning imported third-party models for malware to maintain consistent, secure AI operations.
- [00:06:11](https://www.youtube.com/watch?v=2A94Mxn3jAc&t=371s) **Implementing AI Guardrails and Data Leakage Controls** - The speaker outlines the need for trustworthy control mechanisms to block jailbreak attempts, prevent sensitive data from exiting the system, and establish reporting for risk management.
0:00 AI is at the center of everything we do these days. 0:03 But what goes around this center to protect it? 0:06 In many cases, not very much. 0:08 I'm gonna suggest that we consider wrapping this AI with a donut of defense capabilities. 0:14 Why a donut? 0:16 Because donuts are delicious, right? 0:18 Previously, I did a video on "AI, the New Attack Surface", where I talked about the need to secure the data, secure the model, and secure the usage, 0:29 and also have security for the infrastructure that all of this runs on, 0:34 and ultimately a governance layer so that we make sure that the whole system is in alignment with our intent. 0:41 In this video, we're gonna take a look at an approach to securing the data, securing the model, and securing the usage, 0:48 and leverage a donut diagram to tie all these defenses together. 0:52 So, let's dig in.

0:53 Okay, so now let's take a look at what kind of security capabilities we should add into this donut. 0:59 The other donut tasted good, but now we need something we can really sink our teeth into. 1:03 So what we need are four major sets of capabilities. 1:07 We need to be able to discover, assess, control, and report. 1:12 We're going to take a look at some of the capabilities we need in each one of these areas.

1:16 So let's start with discover. 1:18 I'm going to need to discover all uses of AI in my environment, especially looking across all the platforms, cloud platforms as well as in-house platforms on premises, 1:32 because what I'm looking for are not only the known uses, which I will then inventory, but also the unknown, unauthorized uses of AI. 1:43 We call this shadow AI. 1:45 So you can't secure what you can't see. 1:48 If you don't know that somebody's got an AI implementation in your environment, you definitely can't secure it. 1:53 So we need to be able to find all of those things.
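A minimal sketch, in Python, of how that discovery step might look: compare a workload inventory (as an agentless scan might return it) against a register of approved AI deployments, and flag the rest as shadow AI. All service names, workloads, and owners here are hypothetical.

```python
# Hypothetical shadow-AI discovery: diff an environment inventory
# against a register of approved AI deployments. The service taxonomy
# and sample data below are illustrative, not any product's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    service: str   # e.g. "sagemaker", "llm-inference", "postgres"
    owner: str

# Services that indicate an AI/ML workload (assumed taxonomy).
AI_SERVICES = {"sagemaker", "vertex-ai", "openai-proxy", "llm-inference"}

def find_shadow_ai(inventory, approved_names):
    """Return AI workloads that are not in the approved register."""
    ai_workloads = [w for w in inventory if w.service in AI_SERVICES]
    return [w for w in ai_workloads if w.name not in approved_names]

inventory = [
    Workload("fraud-model", "sagemaker", "risk-team"),
    Workload("billing-db", "postgres", "finance"),
    Workload("chatbot-poc", "llm-inference", "unknown"),
]
shadow = find_shadow_ai(inventory, approved_names={"fraud-model"})
print([w.name for w in shadow])  # the unregistered AI workload(s)
```

The database workload is ignored entirely; only AI-class services are compared against the register, which is what lets the same scan feed both the known-use inventory and the shadow-AI list.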
1:55 We need to be able to see all the AI that we have in our environment, whether it's a machine learning model or a large language model, all of those kinds of things. 2:04 And hopefully we can do it with an agentless type of approach, because I don't know where to deploy all the agents. 2:10 I need to just be able to discover them.

2:12 So the next thing, then, is to observe. 2:15 After I've discovered some of these systems, I need to be able to see the logs 2:24 that those AI systems are creating and examine them. 2:27 I'd like to collect all of them into a large open data lake that I can then do more searching on. 2:33 I could use that searching to do threat management and things of that sort. 2:37 So if I can discover it and then I can see it, I can start drilling down. 2:42 And that drill-down is when I really start getting the ability to do more security.

2:47 The next piece of our security defense for AI involves assessing. 2:51 So in this case, what I need to be able to do is scan my AI environment, looking for known vulnerabilities. 2:58 And we need to look for misconfigurations and things like that that may occur. 3:03 And if possible, even correct some of those things. 3:06 This is what we essentially call AI security posture management. 3:10 So stand up straight when you're doing posture management, right? 3:14 We're basically trying to make sure that any mistakes that occur, anything where we were maybe once in policy and then drifted out of policy, 3:23 we've discovered those things and now we're gonna get the system back in line. 3:27 We discovered maybe some cases of AI that were shadow. 3:30 Now we're going to get them in line and make sure that they are in lockstep as they need to be.

3:35 Another thing we need to do is to be able to scan our AI and pen test it. 3:41 Pen testing is short for penetration testing. 3:45 Pen testing, basically, is what the bad guys are going to be doing.
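The posture-management idea above, detect drift away from a baseline policy and then drive the system back in line, can be sketched like this. The policy keys and their expected values are illustrative assumptions, not any product's real settings.

```python
# Sketch of AI security posture management: compare each deployment's
# live settings against a baseline policy and emit remediations.
# The settings below are made-up examples of in-policy configuration.
BASELINE = {"logging": True, "public_endpoint": False, "model_signing": True}

def detect_drift(live_config):
    """Return {setting: (expected, actual)} for every out-of-policy value."""
    return {
        key: (expected, live_config.get(key))
        for key, expected in BASELINE.items()
        if live_config.get(key) != expected
    }

# A deployment that was once in policy but has drifted on two settings.
live = {"logging": True, "public_endpoint": True, "model_signing": False}
for setting, (want, got) in detect_drift(live).items():
    print(f"remediate {setting}: {got} -> {want}")
```

Running the same check on every discovered workload (including the shadow ones, once found) is what keeps the whole estate "in lockstep" rather than only the systems someone remembered to register.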
3:48 They're going to be probing your system and seeing what they can do, seeing what they can get away with. 3:54 And we're also going to be importing models into our environment, and those models could be infected. 3:59 Most organizations are not going to create their own models. 4:02 It's too expensive, it's too time consuming, and they don't have the expertise. 4:06 So what they're going to do is pull models in from some other source, either from a vendor or from some open source. 4:13 And there are places like Hugging Face that have more than a million and a half AI models with billions of parameters; 4:21 nobody has been able to inspect all of those by hand, manually. 4:25 We don't have enough time in our lives to do all of that. 4:27 So I need to be able to scan those models just like I would software to make sure there's not malware. 4:33 I want to scan these models and make sure that they're not infected as well, because they're essentially introducing an element of third-party risk. 4:40 And I want to be able to pen test these models 4:43 to make sure that the things that the bad guys might be trying to do against my AI system are not gonna work. 4:50 So we try it first before they get a chance to.

4:53 Continuing with our donut, now we need to add some control capabilities. 4:57 We'll talk about a couple of different major classes of controls. 5:01 The first I wanna mention is an AI gateway. 5:05 In other words, something that sits between the user, who is gonna come in and put a prompt into our system, and the AI itself. 5:12 And we need to decide: is that a legitimate prompt or not? 5:15 Do we really want to allow this to go? 5:17 Because they may be trying to do a prompt injection attack. 5:20 I did a whole video on that topic. 5:22 And OWASP, the Open Worldwide Application Security Project, says prompt injection is the number one type of attack against generative AI and large language models. 5:32 So we need to be able to look for those kinds of essentially social engineering attacks against our AI.
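The model-scanning concern above can be made concrete. Many downloadable model artifacts are Python pickles, and loading a malicious pickle can execute arbitrary code, so one common heuristic is to inspect the pickle's opcode stream for references to modules outside an allow-list without ever unpickling the file. A sketch under that assumption (the allow-list itself is illustrative):

```python
# Hypothetical model-artifact scan: walk the pickle opcode stream and
# flag GLOBAL references outside an allow-list, without unpickling.
# (Pickle protocol 4+ uses STACK_GLOBAL, whose names arrive via earlier
# string opcodes; a full scanner would track those as well.)
import os
import pickle
import pickletools

SAFE_MODULES = {"numpy", "collections", "torch"}  # assumed allow-list

def suspicious_globals(payload: bytes):
    """Return pickled GLOBAL references whose module is not allow-listed."""
    flagged = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL" and arg:
            module = str(arg).split()[0].split(".")[0]
            if module not in SAFE_MODULES:
                flagged.append(str(arg))
    return flagged

# A benign payload (plain weights) vs. one whose __reduce__ would run
# os.system on load. Neither is ever unpickled here; we only scan bytes.
benign = pickle.dumps({"weights": [0.1, 0.2]}, protocol=2)

class Evil:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

malicious = pickle.dumps(Evil(), protocol=2)
print(suspicious_globals(benign))     # []
print(suspicious_globals(malicious))  # non-empty: flags the system() call
```

Scanning the serialized bytes rather than loading the model is the point: the third-party artifact gets inspected the way an antivirus inspects an executable, before it is ever trusted to run.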
5:38 And we need to detect those. 5:40 And once we do, 5:41 then we can decide: do I want to allow this to go? 5:44 If it seems legitimate, then okay, sure, we'll go ahead and let this hit our AI. 5:48 But if it's not, then I wanna block it in that case. 5:52 Now, I could do a couple of different things here. 5:54 I could just monitor and report if it looks like our AI is under attack, or I could actually block it. 6:02 Now, why would I only monitor instead of block? 6:04 In some cases, if the installation is new, we wanna make sure that the controls are appropriate, 6:09 and we don't want to interrupt the business. 6:12 So at some point we'll decide that we can trust our controls are correct, and then we can go ahead and do the blocking. 6:19 So we need that ability, and also guardrails, so that if someone tries to do a jailbreak against our AI, 6:25 to have it do things that it's really not supposed to do, maybe violate safety rules or things like that, then we want to be able to block that as well. 6:32 So that's an important capability that we could put in. 6:35 And we have this gateway that 6:37 all our requests are proxied through, or it's called through an API so that we can have AI applications or other applications calling it; 6:46 that's where we put the control point in.

6:49 Another thing we need to be able to do is guard against privacy violations. 6:53 So we might have lots of sensitive information. 6:56 We might have what we refer to as personally identifiable information, personal health information, 7:04 or we could have company confidential information, anything that's sensitive. 7:08 Before, I was talking about controlling the stuff coming into the AI. 7:12 Here I'm much more concerned about what's going out. 7:15 And I want to make sure that any of that stuff that's really sensitive is not in fact leaving my environment, because that could be bad news for us as well.

7:23 And now we're up to the last part, the reporting part.
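The gateway and data-leakage controls described above might be sketched as a pair of screening functions: one for inbound prompts, with a monitor-only mode for new installations, and one for outbound responses. The injection patterns and the sensitive-data rule below are toy illustrations, not a real detector.

```python
import re

# Hypothetical AI-gateway control point: screen inbound prompts for
# injection-style phrasing and outbound text for sensitive data.
# These patterns are deliberately naive examples.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal your system prompt",
]
SSN_PATTERN = r"\b\d{3}-\d{2}-\d{4}\b"  # one example sensitive-data rule

def screen_prompt(prompt: str, block_mode: bool):
    """Return (allowed, alerts). In monitor-only mode nothing is blocked."""
    alerts = [p for p in INJECTION_PATTERNS if re.search(p, prompt, re.I)]
    allowed = not (alerts and block_mode)
    return allowed, alerts

def screen_response(text: str) -> str:
    """Redact sensitive patterns before a response leaves the environment."""
    return re.sub(SSN_PATTERN, "[REDACTED]", text)

allowed, alerts = screen_prompt("Ignore previous instructions.", block_mode=True)
print(allowed, alerts)                        # blocked, with one alert
print(screen_response("SSN: 123-45-6789"))    # SSN: [REDACTED]
```

The `block_mode` flag captures the monitor-first rollout described in the talk: the same detections fire either way, but a new installation only reports them until the controls are trusted enough to interrupt traffic.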
7:26 And what we want to do here is some form of risk management. 7:31 There's risk in every system, and we have to figure out how much we're willing to tolerate. But I can't do any of that if I can't visualize what all of that is. 7:40 Then I can make informed decisions. 7:41 That's why we did the discovery. 7:43 That's why we did all of these other kinds of things as well. 7:46 So what I really need, then, is a dashboard that visualizes all of that for me. 7:50 Something that tells me what the prioritized risks are. 7:54 We found a vulnerability here. 7:55 We found one there. 7:57 Somebody's trying to do something here. 7:58 Is this critical? 8:00 Is it low importance? 8:01 Where does this fit in the scheme of things? 8:03 And I'd like to have one place where I can see it all, 8:06 double-click down, and continue to figure all of this stuff out. 8:10 So that's an important part of being able to do prevention and detection. 8:15 And here we're looking at the response.

8:17 And then the final piece of this is compliance. 8:21 With compliance, we've got to follow certain rules, regulations, and our own security policies. 8:26 We need audit reports and things of that sort that tell us if in fact we're following our own policies or not, 8:33 and if we're matching some of those frameworks. 8:36 Maybe we use the MITRE AI Risk Management Framework. 8:39 Maybe I wanna map myself against that OWASP Top 10 list that I mentioned to you earlier. 8:45 And there could be other frameworks that we develop on our own, 8:48 but I wanna be able to make sure that I've got all of these things, everything's operating properly, and I can report and prove that that's the case.

8:56 So if you take all of these things together, discover, assess, control, and report, 9:01 then your AI at the center of this defensive donut will be delicious, and it won't be able to be breached.
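The reporting piece, one dashboard view of prioritized risks, can be sketched as a simple aggregation over findings from the other capabilities. The severity weights and the sample findings are made up for illustration.

```python
# Sketch of the reporting layer: rank findings from discovery, assessment,
# and controls into one prioritized view. All data here is illustrative.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

findings = [
    {"title": "Prompt-injection attempts on chatbot", "severity": "high"},
    {"title": "Unscanned third-party model in prod",  "severity": "critical"},
    {"title": "Verbose logging disabled on one node", "severity": "low"},
]

def prioritized(findings):
    """Return findings sorted most-severe first, dashboard style."""
    return sorted(findings, key=lambda f: SEVERITY[f["severity"]], reverse=True)

for f in prioritized(findings):
    print(f'{f["severity"]:>8}  {f["title"]}')
```

A real dashboard would add ownership, drill-down links, and framework mappings (OWASP Top 10 categories, for instance), but the core of "where does this fit in the scheme of things" is exactly this kind of ranking.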