Key Trust Principles in NIST AI Framework

Key Points

  • AI is reshaping sectors such as healthcare, finance, and defense, but its powerful capabilities also introduce significant risks that must be actively managed.
  • The U.S. National Institute of Standards and Technology’s AI Risk Management Framework provides a structured method to keep the risk‑reward balance in check.
  • For AI to be trustworthy, NIST outlines core attributes: validity/accuracy, safety, security and resilience, explainability/interpretability, privacy preservation, fairness, and accountability/transparency.
  • Without proper oversight, AI can magnify bias, breach security policies, and make catastrophic decisions, underscoring the need for robust risk controls.
  • Applying the framework helps ensure AI systems remain reliable, transparent, and aligned with societal values while delivering their promised benefits.

Full Transcript

Source: https://www.youtube.com/watch?v=0oeD2Wf25wY
Duration: 00:08:33

Sections

  • 00:00:00 Untitled Section
  • 00:03:06 Governance and Mapping in AI Systems: The speaker outlines how the governance layer sets system culture and compliance, while the mapping layer provides context, end-to-end risk visibility, goal alignment, and actor definition across the AI pipeline.
  • 00:06:13 Managing AI Risks in Practice: The speaker outlines the "manage" phase of the AI Risk Management Framework, emphasizing goal reassessment, risk prioritization, and selecting responses such as mitigation, acceptance, transfer, or insurance within a governance-driven, iterative lifecycle.
[0:00] Artificial intelligence is transforming everything, from healthcare and finance to national defense. But with great power comes, well, the risk of things going terribly wrong. That's why we need frameworks to manage these risks. And one of the most promising ones is from the US National Institute of Standards and Technology: it's called the AI Risk Management Framework.

[0:23] AI systems can offer massive benefits: speed, efficiency, and insight. But poorly managed, they can amplify bias, violate security policy, and make some pretty catastrophic decisions. So we need a structured way to manage these risks and keep the risk-reward scale in balance. That's where the AI Risk Management Framework comes in. Let's take a look.

[0:42] If we're going to trust AI, we need certain characteristics in place for it to be truly trustworthy. NIST defines these as follows. First, it needs to be valid; accurate, in other words. It needs to be reliable. If the information coming out of it doesn't make sense or isn't true, then the rest of it isn't going to be trustworthy.

[1:06] What else does it need? Well, it needs to be safe. That is, we want the AI not to endanger human life, property, or the environment.

[1:17] It also needs to be secure and resilient. We know that AI is going to hold things of value, so bad guys will try to break it. They're going to try to make it unavailable, make it leak information, or poison it and, again, make it untrustworthy.

[1:37] It needs to be explainable and interpretable. We need to be able to explain why it has done what it has done, why it's saying what it's saying, and what all of that means. And it ought to be interpretable by someone who's an expert in the field we're asking about, not necessarily a technology expert. So if I ask it a medical question, a doctor should be able to look at the answer and say, yes, that's an explainable result, and it's interpretable; I can see what it means.

[2:06] It needs to preserve privacy. It needs privacy-preserving, privacy-enhancing capabilities built in, so that everything I put into it doesn't just get blabbed to the rest of the world. You wouldn't trust someone who published a secret you told them on the internet. It's the same with AI.

[2:25] It needs to be fair. We don't want it to be biased for or against any particular population. That goes without saying; besides, if it were biased, we wouldn't get accurate information, and that would affect the validity of the system.

[2:40] And ultimately, it needs to be accountable and transparent. We want to be able to see into it; we can't have the whole thing be a big black box. We need to understand how it's working and what the technical underpinnings are. Put all of these together, and now you have a system that you can trust.

[2:59] The core of the NIST AI Risk Management Framework is made up of four functions: govern, map, measure, and manage.
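Before walking through the four functions, the seven characteristics above lend themselves to a simple review checklist. Below is a minimal Python sketch of that idea; the label names and the TrustReview class are illustrative inventions for this page, not identifiers defined by NIST.

```python
from dataclasses import dataclass, field

# Hypothetical checklist of the seven trustworthiness characteristics
# described above. The snake_case labels are illustrative, not NIST-defined.
CHARACTERISTICS = [
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "explainable_and_interpretable",
    "privacy_preserving",
    "fair",
    "accountable_and_transparent",
]

@dataclass
class TrustReview:
    """Tracks which characteristics a review of one AI system has demonstrated."""
    system_name: str
    satisfied: set = field(default_factory=set)

    def mark(self, characteristic: str) -> None:
        """Record evidence that one characteristic is satisfied."""
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"unknown characteristic: {characteristic}")
        self.satisfied.add(characteristic)

    def gaps(self) -> list:
        """Characteristics not yet demonstrated; each one is a trust gap."""
        return [c for c in CHARACTERISTICS if c not in self.satisfied]

review = TrustReview("triage-model")
review.mark("valid_and_reliable")
review.mark("privacy_preserving")
print(review.gaps())  # the five characteristics still to be demonstrated
```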
[3:06] Let's start with govern. The governance function is where we begin, by setting the overall culture for the system: how do we want to operate this thing, and what are we trying to do with it? Think of governance as a cross-cutting concern; everything we do in the governance layer affects the other functions as well. So this is what really lies at the core of everything. Another thing we need to consider, among many, is compliance: I need to make sure I'm in fact following all the rules of my organization, and if there are regulatory compliance requirements, those all have to be in place as well.

[3:47] Next, we take a look at the map function. Here is where we set context. Context in this case means there are a lot of different people involved in the AI pipeline: building it, using it, operating it, actually benefiting from it. And they don't all have visibility into what everyone else is doing. So we need this context to tie everything together; if we're going to assess risk, I need to be able to see it end to end. I'm also going to do goal setting: there's no point having a system if we don't know what it's supposed to accomplish. I want to know what my goals are and then see whether I'm actually mapping to them. We also need to define all the actors in the system: who's involved in doing what, who the stakeholders are, and what the different roles are, and from there assess how they operate with the system and how they may be introducing or reducing risk. And then, what is the organization's tolerance for risk? The tolerance one organization has may be very different from another's, and the tolerance in one particular application may be very different from another's. All of that goes into a better understanding of the risk.

[5:02] The third function of the AI Risk Management Framework is this business of measuring. We can measure with a lot of different kinds of tools, and there are a lot of different ways of considering and measuring the thing we care about, which in this case is risk. One school of thought says we want quantitative risk analysis; in other words, lots of numbers. Another school of thought says I want a more qualitative rating system, maybe high, medium, low. There are advantages to both, and I wouldn't want to be a slave to either. Sometimes numbers lead us to a false sense of security, to thinking we know more and have more precision than we actually do, and those errors can multiply if we're not careful. So perhaps a combination: tools that allow us to do both quantitative and qualitative risk measurements. Another thing is analysis: I want to analyze my system, see what the risk is, and see whether we're in fact meeting the goals we set before. And then there's Test, Evaluation, Verification and Validation: we want tools and procedures in place to measure across the whole lifecycle, so that we have eyes on the entire thing.
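The quantitative-versus-qualitative point above lends itself to a small worked example. This is a minimal sketch, assuming a classic likelihood-times-impact score mapped onto high/medium/low bands; the 0-1 scales and the thresholds are illustrative assumptions, not values the NIST AI RMF prescribes.

```python
# Combining quantitative and qualitative risk scoring. Scales and
# thresholds here are illustrative assumptions only.

def quantitative_score(likelihood: float, impact: float) -> float:
    """Classic likelihood-times-impact product; both inputs range 0.0-1.0."""
    return likelihood * impact

def qualitative_band(score: float) -> str:
    """Map a numeric score onto a high/medium/low rating.

    The thresholds are arbitrary; this is where the false precision the
    speaker warns about can creep in, so treat the bands as a
    communication aid, not extra information.
    """
    if score >= 0.5:
        return "high"
    if score >= 0.2:
        return "medium"
    return "low"

score = quantitative_score(likelihood=0.7, impact=0.8)
print(f"{score:.2f} -> {qualitative_band(score)}")  # 0.56 -> high
```

Using both views together, as the speaker suggests, keeps the numbers auditable while giving stakeholders a rating they can act on.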
[6:23] The fourth function of the core AI Risk Management Framework is manage. Remember the goals we talked about back in the map stage? We're going to go back and re-examine them and determine whether we met them or not, because that's important for us to understand. We're going to take the risks we found in the other phases and prioritize them; I need to know which is most important and which is least. Then I need to respond to the risks I've identified, and there are a lot of different responses I could take. In one case, I could mitigate a risk; in other words, put in some kind of compensating control so we don't have that problem anymore. But some risks we just have to accept, some we can transfer, and some we can buy insurance against and indemnify. So there are a lot of different responses here, but ultimately this is all about managing these risks.

[7:18] If you look back at the whole Risk Management Framework core, you notice that we start with govern, a cross-cutting concern that feeds into all the others, and that what we do in each function feeds into the other functions. So we end up with a virtuous cycle of continuous improvement, and hopefully a reduction of risk and a more trustworthy AI.

[7:43] Now, that's the core. After we've gone through this kind of exercise, what we actually want to do is develop profiles. Profiles are specific to a particular implementation, a particular environment, a particular use case, and we may have multiples of them. These are the instances where we've actually spelled out the details of what all of these things mean for us. So you've got the core and you've got the profiles; put those together, and you end up with something we think is a solution to this problem.

[8:16] In a world where AI is everywhere, trust is everything. The NIST AI Risk Management Framework helps us build that trust, not just in the technology, but in how we use it. If you work with AI or are affected by it, you now have a tool to help manage these risks.
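As a closing illustration of the manage function described above, here is a minimal sketch of prioritizing identified risks and recording a response for each. The example risks, their scores, and the three-way response set are assumptions made for illustration; the framework itself does not prescribe a data model.

```python
# The manage function in miniature: prioritize identified risks and record
# a response for each. All entries below are illustrative assumptions.

RESPONSES = {"mitigate", "accept", "transfer"}  # transfer includes insurance

risks = [
    {"name": "training-data bias", "score": 0.56, "response": "mitigate"},
    {"name": "model leaks private records", "score": 0.30, "response": "transfer"},
    {"name": "rare benign misclassification", "score": 0.05, "response": "accept"},
]

# Prioritize: handle the highest-scoring risk first.
for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    assert risk["response"] in RESPONSES, f"unhandled response: {risk['response']}"
    print(f"{risk['score']:.2f}  {risk['name']:<32} -> {risk['response']}")
```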