
AI Trust: Five Essential Pillars

Key Points

  • The AI trust framework currently centers on five evolving pillars—fairness, robustness, privacy, explainability, and transparency—though the field continues to change rapidly.
  • Fairness requires identifying and mitigating bias in both training data and model outcomes to avoid systematic advantages or disadvantages for any group, which can be defined by various sensitive attributes.
  • Robustness focuses on maintaining reliable model performance under exceptional conditions and over time, monitoring data and accuracy drift especially when external factors like a pandemic shift user behavior.
  • Privacy ensures that data and model insights remain under the control of their owners throughout the entire lifecycle, complying with data protection regulations from building to monitoring.
  • Explainability and transparency together demand that stakeholders can understand why a model makes specific decisions and have full visibility into its development—who built it, what data and algorithms were used, and how it was validated.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=_522RWxFS88](https://www.youtube.com/watch?v=_522RWxFS88)
**Duration:** 00:09:03

Sections

  • [00:00:00](https://www.youtube.com/watch?v=_522RWxFS88&t=0s) **Key Pillars of AI Trust** - The speaker outlines the five evolving trust pillars for AI—fairness, robustness, privacy, explainability, and transparency—defining each and highlighting the challenges in ensuring unbiased data, stable performance, and clear, accountable models.
  • [00:04:06](https://www.youtube.com/watch?v=_522RWxFS88&t=246s) **Scaling Trustworthy AI Across Organizations** - The speakers discuss how companies newly adopting large-scale AI can overcome trust challenges by implementing a governance framework and beginning with assessments or pilot use-cases to productionalize trustworthy AI throughout the enterprise.
So when we're talking about trust for AI, we hear about these five pillars, right: fairness, robustness, privacy, explainability, and transparency. So, what is all of this?

You're right, Aishwarya. At this point in time we usually talk about five different pillars, but keep in mind that this is a fast-evolving space; this field is changing rapidly. But at this point we usually talk about fairness, robustness, privacy, explainability, and transparency. Let's maybe talk about each of them quickly.

Fairness is probably obvious: it is to make sure that the models are not behaving in a biased way. Now, the challenges may start way before a model is built. It might be understanding whether the data itself is biased, and if it is, how do you deal with that? When you build a model, how do you make sure that the model is not systematically giving an advantage or a disadvantage to a certain group? The definition of the group varies by industry and by use case; it could be based on sensitive attributes like age, gender, and ethnicity, but may not be limited to any of those. You want to make sure that the system is not consistently favoring one group over another in an unfair way.

Robustness: you want to make sure that your models behave well in exceptional conditions. How do you make sure that the model performance is good over time? What is happening with the effect of data drift? For example, in the context of the pandemic, we know that customer behavior has changed, customer patterns have changed, customer touch points have changed. Is your model still behaving as expected? And if it is not, can you at least have an understanding of how the model behavior is changing, how data is drifting, how accuracy is drifting, and so on?
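The bias and drift questions raised above can be made concrete with two common metrics: a disparate-impact ratio for group fairness and a population stability index (PSI) for data drift. This is an illustrative sketch, not anything shown in the video; the function names, sample data, and rule-of-thumb thresholds (roughly 0.8 for disparate impact, roughly 0.2 for PSI) are assumptions for demonstration only.

```python
# Illustrative sketch: minimal fairness and drift checks of the kind
# discussed above. All names, data, and thresholds are hypothetical.
import numpy as np

def disparate_impact_ratio(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A common rule of thumb flags ratios below ~0.8 as potential bias.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    groups = np.asarray(groups)
    priv_rate = outcomes[groups == privileged].mean()
    unpriv_rate = outcomes[groups != privileged].mean()
    return unpriv_rate / priv_rate

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time sample and live traffic for one feature.

    Values above ~0.2 are often treated as significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero / log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical usage: loan approvals (1 = approved) by group.
approved = [1, 1, 0, 1, 0, 1, 1, 0]
group    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact_ratio(approved, group, privileged="a"))  # 0.5 / 0.75

# Hypothetical drift check: live scores shifted, e.g. post-pandemic behavior.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 1000)
live_scores = rng.normal(0.5, 1.0, 1000)
print(population_stability_index(train_scores, live_scores))
```

In practice these checks would run per sensitive attribute and per feature, with the thresholds chosen to fit the use case and any applicable regulation.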
Privacy: can you make sure that the data, the model that is built off of that data, and the insights from that model are all things that the model builder owns and retains control of? And how do you do this not just in terms of consumption of the output of the model, but across the life cycle? How do you make sure that data protection rules are in place through the model building, testing, validation, and monitoring stages?

Explainability is probably pretty obvious. How can you explain the behavior of a model? Why was someone approved for a loan, and why was someone rejected? When somebody applied for a job and that person was selected, but someone with very similar qualifications applied and that person was rejected, can you explain the behavior to the end user or to a decision maker?

Transparency: you want to be able to inspect everything about a model. Can you understand all the facts surrounding the model? Who built it, what data is being used, what algorithms and what packages are being used, who approved it, who validated it. All of these aspects of the model, the facts about the model, should be easily available. Just like when you buy a food product and there's a label on it with the nutritional facts, when it was manufactured, where it was manufactured, and all of that: for a model, you should be able to get the facts of that model very quickly. So these, I would say, are the fundamental pillars of Trustworthy AI. The challenge is making sure these can be done in a systematic way regardless of what tools are used to build the models and where the models are deployed.

So John, in the recent past we have seen that AI systems were new to a lot of organizations; organizations have very recently adopted such large-scale AI applications or systems in their workflow.
And that's where we started seeing these side effects of AI, right? And that's where we pinpointed that, hey, these were some of the aspects which we need to target to make sure that AI doesn't have an ill effect on the community, or an ill effect in a broader sense. So when we see that organizations are facing such challenges, when they are seeing such roadblocks with respect to building trust for AI, what is the recommended methodology for making sure that building such trusted AI systems is easily done throughout different business units of the organization, and isn't confined to just one particular department or team? How can we make it a big thing, and how can an entire organization productionalize this streamlined work?

So Aishwarya, you're talking about expanding this across a company, sort of setting up this governance framework, and that was one of the patterns we talked about. Many companies may not start there; they may start with one of the other patterns we talked about, which is to start with an assessment, or with building out a new use case, a new application that follows Trustworthy AI principles. But yes, some companies may want to look at a top-down approach and set up the governance framework, taking into account that there are multiple streams of data science and AI activities going on concurrently. But in all of these, regardless of which approach you take, I think three elements need to come together, and these three elements are technology, people, and process.

Technology is probably obvious: we need to have guardrails across each of the stages of the life cycle. When you're working with data, how do you check for bias in the data, and how do you correct that?
That's a guardrail at data exploration time. When you're building the model, you need a guardrail in place for model building: for checking the robustness of the model, for providing an explanation at development time. You need a guardrail which will allow you to go through validation into deployment. And you need an outermost guardrail, think of it as a runtime guardrail, which can continue to monitor your model and look at how it is behaving against thresholds, whether the thresholds are being breached, and so on. So technology provides these guardrails for all five of the pillars that we talk about.

Now, technology in itself is not sufficient; that's why I was mentioning people and process. People, because you need a set of skills to come together. It is not just data science skills: the MLOps paradigm requires you to have the operational skills come together with data science skills. You might have risk and compliance expertise coming into the picture, you might have business analysts and business stakeholders coming into the picture, and so on. So the right level of expertise, with personas who are collaborating to achieve this common goal, is important.

And then finally, process. People may not always like that term, but the reality is you need a set of best practices for each stage of the life cycle. Whether it is scoping and building, or validation, or deployment, or monitoring over time, you need a set of best practices. So technology, people, and best practices coming together make it possible to roll out Trustworthy AI at scale and operationalize it.
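The outermost monitoring guardrail described above, checking live metrics against thresholds and flagging breaches, can be sketched as follows. This is a minimal illustration, not the tooling discussed in the video; the metric names, threshold values, and alert format are all hypothetical.

```python
# Illustrative sketch: a minimal runtime monitoring guardrail of the kind
# described above. Metric names and threshold values are hypothetical.

ALERT_THRESHOLDS = {
    "accuracy": ("min", 0.85),          # alert if accuracy falls below 0.85
    "disparate_impact": ("min", 0.80),  # alert if fairness ratio drops below 0.8
    "psi": ("max", 0.20),               # alert if drift (PSI) exceeds 0.2
}

def check_guardrails(metrics, thresholds=ALERT_THRESHOLDS):
    """Return the list of breached guardrails for one monitoring window."""
    breaches = []
    for name, (kind, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported in this window
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            breaches.append(f"{name}={value:.3f} breached {kind} limit {limit}")
    return breaches

# Hypothetical monitoring window: accuracy has drifted below its floor.
window = {"accuracy": 0.81, "disparate_impact": 0.92, "psi": 0.05}
for breach in check_guardrails(window):
    print("ALERT:", breach)
```

A production version would run this on a schedule over each monitoring window and route breaches to the people and processes the speaker describes, rather than just printing them.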
Great, thank you so much, John. It was very insightful for me to understand the AI systems we build, from understanding them from a data science perspective to how they can be productionized and run successfully in large organizations. It is very important that organizations are responsible to the people who are using these systems, right? So it was really insightful that we got to learn so many different things from you. In the meanwhile, there are a lot of other resources available for us to dig deeper and learn about fairness, robustness, transparency, privacy, and explainability. So everyone who's watching this, you can find the right resources in the description below, and soon we'll be posting more videos in this series, going deeper into each of these pillars. Thank you so much.