Key Trust Principles in NIST AI Framework
Key Points
- AI is reshaping sectors such as healthcare, finance, and defense, but its powerful capabilities also introduce significant risks that must be actively managed.
- The U.S. National Institute of Standards and Technology’s AI Risk Management Framework provides a structured method to keep the risk‑reward balance in check.
- For AI to be trustworthy, NIST outlines core attributes: validity/accuracy, safety, security and resilience, explainability/interpretability, privacy preservation, fairness, and accountability/transparency.
- Without proper oversight, AI can magnify bias, breach security policies, and make catastrophic decisions, underscoring the need for robust risk controls.
- Applying the framework helps ensure AI systems remain reliable, transparent, and aligned with societal values while delivering their promised benefits.
Sections
- Untitled Section
- Governance and Mapping in AI Systems - The speaker outlines how the governance layer sets system culture and compliance, while the mapping layer provides context, end‑to‑end risk visibility, goal alignment, and actor definition across the AI pipeline.
- Managing AI Risks in Practice - The speaker outlines the “manage” phase of an AI risk management framework, emphasizing goal reassessment, risk prioritization, and selecting responses such as mitigation, acceptance, transfer, or insurance within a governance‑driven, iterative lifecycle.
Full Transcript
# Key Trust Principles in NIST AI Framework

**Source:** [https://www.youtube.com/watch?v=0oeD2Wf25wY](https://www.youtube.com/watch?v=0oeD2Wf25wY)
**Duration:** 00:08:33

## Sections

- [00:00:00](https://www.youtube.com/watch?v=0oeD2Wf25wY&t=0s) **Untitled Section**
- [00:03:06](https://www.youtube.com/watch?v=0oeD2Wf25wY&t=186s) **Governance and Mapping in AI Systems**
- [00:06:13](https://www.youtube.com/watch?v=0oeD2Wf25wY&t=373s) **Managing AI Risks in Practice**

## Full Transcript
Artificial intelligence is transforming everything,
from healthcare and finance to national defense.
But with great power comes,
well, the risk of things going terribly wrong.
That's why we need frameworks to manage these risks.
And one of the most promising ones is from the US National Institute of Standards and Technology.
It's called the AI Risk Management Framework.
AI systems can offer massive benefits—speed,
efficiency and insight—but
poorly managed, they
can amplify bias, violate security policy
and make some pretty catastrophic decisions.
So we need a structured way to manage these risks
and keep the risk–reward scale in balance.
And that's where the AI Risk Management Framework comes in.
Let's take a look. If we're going to trust AI,
we need certain characteristics in place
for it to be a truly trustworthy AI.
NIST defines these as follows.
So it says, first of all, it needs to be valid.
In other words, it needs to be accurate.
It needs to be reliable.
If the information coming out of it doesn't make sense or isn't true,
well, then the rest of it isn't going to be trustworthy.
What else does it need? Well,
it needs to be safe.
That is, we want the AI not to endanger human life or property or the environment or anything like that.
It also needs to be secure and resilient.
In this case,
we know that AI is going to have stuff of value.
So that means bad guys will try to break it.
They're going to try to make it
where it's not available or where it's leaking information,
or where they poison it and again make it untrustworthy.
So it needs to be secure and resilient.
It needs to be explainable and interpretable.
We need to be able to explain why
it has done what it has done,
why it's saying what it's saying,
and what all of that means.
It ought to be able to be interpreted
by someone who's an expert in the field that we're asking these questions
of, not necessarily a technology expert.
So if I'm asking it a medical question,
a doctor would look at it and say, yeah,
that's an explainable result and it's interpretable.
I can see what that is. What else?
It needs to preserve privacy.
It needs to have privacy-preserving, privacy-enhancing capabilities in it
so that everything I put into it, it
doesn't just go blab to the rest of the world.
You wouldn't trust someone if you told them a secret
and then they published it on the internet.
It's the same way with AI.
So we want it to preserve privacy.
It needs to be fair.
We don't want it to be biased for
or against any particular population.
That goes without saying. Also,
if it were biased, we wouldn't get accurate information.
So that would affect the validity of the system.
And then ultimately, it needs to be accountable.
It needs to be accountable and transparent.
We want to be able to see into it.
We can't have this whole thing as a big black box.
We need to be able to understand how it's working.
What are the technical underpinnings?
You put all of these things together
and now you have a system that you can trust.
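The seven trustworthiness characteristics described above can be treated as a review checklist. A minimal sketch, assuming a simple evidence-tracking structure of my own invention (the characteristic names follow the transcript, but the checklist mechanics are illustrative, not part of the NIST framework):

```python
# Illustrative sketch: the trustworthiness characteristics as a checklist,
# so a review can flag which attributes still lack supporting evidence.
# The data structure is an assumption, not defined by NIST.

TRUST_CHARACTERISTICS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair",
    "accountable and transparent",
]

def missing_characteristics(evidence: dict) -> list:
    """Return the characteristics that have no supporting evidence yet."""
    return [c for c in TRUST_CHARACTERISTICS if not evidence.get(c)]

# A review that has so far only documented validity and safety:
review = {"valid and reliable": True, "safe": True}
print(missing_characteristics(review))
```

A review is only complete when the missing list is empty; until then, the remaining characteristics name the work left to do.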
The NIST AI
Risk Management Framework core is made up of four functions: govern, map,
measure and manage.
Let's start with govern.
The governance function is where we're going to start
by setting the overall culture for the system.
That is, how do we want to operate this?
What are we trying to do with this thing?
Also think about this as a cross-cutting concern.
In other words, all the things that we do in the governance layer,
they're going to affect these other components as well, these other functions.
So, this is going to be the thing
that really lies at the core of everything.
And the other thing we need to consider among many is compliance.
I need to be able to make sure
that I'm, in fact, following all the regulations of the organization.
If there are regulatory compliance issues that I need to follow, those
all have to be in place as well.
Next, we take a look at the map function.
Here is where we set context.
And context in this case means there are a lot of different people
that are using the AI system that are involved in the
AI pipeline—building it,
using it, operating it, actually benefiting from it.
And they don't all have visibility of what everyone else is doing.
So we need this context to sort of tie all these things together.
If we're going to assess risk,
I need to be able to see what it is end to end.
I'm also going to do goal setting.
No point having a system if we don't know what it's supposed to accomplish. So,
I want to see what my goals are
and then see if I'm actually mapping up to that.
Another thing is, we need to define
all the actors in the system—who's
involved in doing what.
What are those stakeholders? Who are the different roles that are involved?
And therefore, assess how they are operating with the system,
how they may be introducing risk or reducing it.
And then, what is the tolerance for risk within the organization?
The tolerance that one organization has for risk might be very different from another's.
And the tolerance for risk in one particular
application may be very different than another. So,
that all goes into a better understanding of what risk is.
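The outputs of the map function described above can be captured in one small record: goals, actors and their roles, and the risk tolerance for the use case. A minimal sketch, with field names and the example system invented for illustration:

```python
# Hypothetical sketch of a "map" output: a record tying together the
# system goals, the actors in the AI pipeline, and the organization's
# risk tolerance. All field names and values here are illustrative.

from dataclasses import dataclass

@dataclass
class SystemMap:
    goals: list            # what the system is supposed to accomplish
    actors: dict           # role -> responsibility across the pipeline
    risk_tolerance: str    # e.g. "low" for a safety-critical application

# Example: a hypothetical clinical assistant, where tolerance for risk
# is low because mistakes can endanger human life.
diagnosis_assistant = SystemMap(
    goals=["assist clinicians with triage"],
    actors={
        "builder": "trains and validates the model",
        "operator": "monitors the deployed system",
        "end user": "clinician interpreting results",
    },
    risk_tolerance="low",
)
```

Writing the map down this way gives every actor the end-to-end visibility the transcript calls for: each role can see who else is involved and what the system is meant to achieve.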
The third function of the AI Risk Management Framework
is this business of measuring.
Now, we can measure with a lot of different kinds of tools, and
there's a lot of different ways of considering
and measuring the things that we're trying to measure.
In this case, it's risk.
Well, one school of thought says we want to do
quantitative risk analysis.
In other words, we want lots of numbers.
Another school of thought says I want a more qualitative,
maybe a high, medium, low kind of rating system.
There's advantages to both.
I wouldn't want to be a slave to either.
Sometimes numbers can lead us to a false sense of security
that we know more and have more precision than we actually have.
and we can multiply those errors out if we're not careful. So,
maybe a combination of those, but tools that allow us to do
quantitative and qualitative risk measurements.
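The combination suggested above can be sketched in a few lines: compute a quantitative score, then band it into a qualitative rating so the numbers don't suggest more precision than we actually have. The scoring formula and thresholds below are illustrative assumptions, not values from the NIST framework:

```python
# Illustrative sketch: quantitative risk scoring banded into a
# qualitative high/medium/low rating. Scales and thresholds are
# assumptions chosen for the example.

def quantitative_score(likelihood: float, impact: float) -> float:
    """Classic quantitative measure: likelihood (0-1) times impact (cost)."""
    return likelihood * impact

def qualitative_band(score: float, medium: float = 1_000, high: float = 10_000) -> str:
    """Map a numeric score onto a high/medium/low rating."""
    if score >= high:
        return "high"
    if score >= medium:
        return "medium"
    return "low"

# Example: a hypothetical model-poisoning risk with a 5% annual
# likelihood and an estimated $50,000 impact.
score = quantitative_score(0.05, 50_000)
print(score, qualitative_band(score))  # 2500.0 medium
```

The banding step is deliberately coarse: it keeps the numbers available for comparison while forcing decisions to be made at a precision the inputs can actually support.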
Another thing is we want to do analysis.
I want to analyze my system.
I want to be able to see what the risk is
and see if we're, in fact, matching what these goals are that we set before.
And then another one is test,
evaluation, verification, and validation (TEVV).
We want to make sure that we've got tools and procedures
in place to measure across all of that lifecycle,
so that we can make sure we have eyes on the whole thing.
The fourth function of the
AI Risk Management Framework core is manage.
In this case, remember the goals that we talked about back in the map stage? Well,
we're going to go back and re-examine those.
And we're going to determine: Did we meet them or not?
Because that's going to be important for us to understand.
We're going to take some of these risks that we found in other phases,
and we're going to prioritize them.
I need to be able to know which is the most important,
which is the least important.
I need to be able to respond to those risks
that I have identified.
And there's a lot of different responses I could take here.
In one case, I could mitigate a risk.
In other words, put in some kind of compensating control
so that we don't have that problem anymore.
But some of these risks, we just have to accept;
some of them we can transfer; some of them
we can buy insurance and indemnify against.
So there's a lot of different responses that could actually occur here.
But ultimately this is all about managing these risks.
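The manage steps above — prioritize the identified risks, then pick a response for each — can be sketched as follows. The risk names, scores, and data shapes are invented for illustration:

```python
# Hypothetical sketch of the "manage" step: order identified risks by
# priority, then record a response (mitigate, accept, or transfer) for
# each. Risk names and scores are illustrative assumptions.

RESPONSES = {"mitigate", "accept", "transfer"}

def prioritize(risks: list) -> list:
    """Order risks highest score first, so the most important are handled before the least."""
    return sorted(risks, key=lambda r: r["score"], reverse=True)

def respond(risk: dict, response: str) -> dict:
    """Attach a chosen response to a risk, rejecting unknown response types."""
    if response not in RESPONSES:
        raise ValueError(f"unknown response: {response}")
    return {**risk, "response": response}

risks = [
    {"name": "bias in training data", "score": 8},
    {"name": "prompt injection", "score": 9},
    {"name": "model drift", "score": 4},
]

ordered = prioritize(risks)
handled = [respond(ordered[0], "mitigate"),   # top risk: add a compensating control
           respond(ordered[1], "mitigate"),
           respond(ordered[2], "accept")]     # low risk: accept it
print([r["name"] for r in ordered])  # ['prompt injection', 'bias in training data', 'model drift']
```

Buying insurance, as the transcript notes, is one concrete form of the "transfer" response: the risk remains, but its financial impact is indemnified.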
And then, if you look back
at the whole Risk Management Framework core,
what you notice is we start with govern,
and it's a cross-cutting concern that leads into all of these others.
Also, the things we do in each one of these phases
lead into the other phases.
So we end up with this virtuous cycle
of continuous improvement
and hopefully reduction of risk and a more trustworthy AI.
Now, that's the core function.
After we've gone through this type of exercise, what we actually want to do
is develop some profiles.
These profiles would be things that are specific
to a particular implementation,
to a particular environment, to a particular use case.
We may have multiples of these.
So these are the instances where we've actually spelled out details
as to what all of these things are.
So you've got the core, and you've got the profiles.
And you put all of those together, and then you end up with something
that we think is a solution for this problem.
In a world where AI is everywhere,
trust is everything.
The NIST AI Risk Management Framework helps us build that trust—not
just in the technology, but in how we use it.
If you work with AI or are affected by it,
you now have a tool to help manage these risks.