
Trust, Transparency, and Governance in AI

Key Points

  • Trust is identified as the foremost prerequisite for deploying large‑scale generative AI in enterprises, as without confidence in model outputs the technology’s benefits cannot be realized.
  • The speakers highlight the prevalence of AI “hallucinations” and other toxic behaviors (e.g., bullying, gaslighting, copyright violations, privacy leaks) that erode trust and create fear among organizations.
  • Kush Varshney’s extensive background—hundreds of publications, open‑source fairness and explainability toolkits, a book on trustworthy machine learning, and leadership roles at IBM and the MIT‑IBM Watson AI Lab—underscores the depth of research effort behind trustworthy AI initiatives.
  • The discussion points out a gap between the rapid push to operationalize generative AI and the need for robust governance frameworks that address reliability, transparency, and ethical risks.
  • Real‑world examples, such as chatbots producing a mix of accurate and fabricated information, illustrate the concrete challenges of ensuring AI honesty and the necessity of systematic mitigation strategies.

Full Transcript

# Trust, Transparency, and Governance in AI

**Source:** [https://www.youtube.com/watch?v=odrD0OLPeiY](https://www.youtube.com/watch?v=odrD0OLPeiY)
**Duration:** 00:09:21

## Sections

- [00:00:00](https://www.youtube.com/watch?v=odrD0OLPeiY&t=0s) **Introducing Trustworthy AI Leadership** - In the opening of the AI Academy series, Kate Soule greets distinguished researcher Kush Varshney to highlight the paramount importance of trust, transparency, and governance for enterprise adoption of generative AI, emphasizing his extensive work and public impact in the field.
- [00:03:13](https://www.youtube.com/watch?v=odrD0OLPeiY&t=193s) **Navigating Trust in Generative AI** - The speaker stresses that AI development should prioritize trust, transparency, and governance, highlighting new risks such as hallucination, data leakage, and bullying that emerge with massive generative AI datasets.
- [00:06:16](https://www.youtube.com/watch?v=odrD0OLPeiY&t=376s) **Open Kitchen Analogy for AI Transparency** - The speaker argues that AI systems need transparent, open-concept "kitchens" revealing data sources, processing steps, testing, and auditing to build trust, and connects this need to fairness concerns such as stereotyping and toxicity that disproportionately harm vulnerable populations.

## Full Transcript
[AI Academy]
[Trust, transparency and governance in the age of generative AI]

[Chapter 1 - Introduction to trustworthy AI]

**Kate Soule:** Hello, and welcome to AI Academy. My name is Kate Soule. I'm a Business Strategy Senior Manager at IBM Research and the MIT-IBM Watson AI Lab, and this is my colleague, Distinguished Research Scientist Kush Varshney. Kush is an AI researcher with a focus on trustworthy AI. Kush, I'm really excited we get to have this conversation today. I've been working with clients and thinking about trustworthy AI from a business perspective for a while now, but I know you've been innovating in trustworthy AI from a research perspective for a number of years. When it comes to AI, I think you and I can both agree, trust is the number one important thing.

**Kush Varshney:** Yeah, it has to be. These models have billions of parameters and they're really huge, but until we have that trust, we can't really get the benefit of that AI in enterprises.

**Kate:** Now, you have quite a few accomplishments to your name in this space, right? You've published hundreds of papers, you have algorithms that are working in labs around the world. You're a sought-after speaker, right?

**Kush:** Yeah.

**Kate:** And I say this to emphasize that you have a big footprint in this space, a public footprint, and given your public accomplishments, I thought it might be interesting if I ask some consumer chatbots to learn a little bit more about some of the work that you're doing in trustworthy AI.

**Kush:** Yeah, that sounds like a fun thing to do.

**Kate:** So you published a book on trustworthy machine learning?

**Kush:** Yep, that's absolutely correct.

**Kate:** You were named an Elevate Fellow by the government of Ontario, Canada.

**Kush:** Um, I've never heard of that fellowship.

**Kate:** You're a co-founder of the Machine Learning for Good Social Foundation.

**Kush:** That's almost right. So I did found the IBM Science for Social Good initiative, so we're close.

**Kate:** You've created many open-source toolkits.

**Kush:** So we created the 360 toolkits: AI Fairness 360, AI Explainability 360, and some others, yep.

**Kate:** You have a PhD in electrical and computer engineering from the University of Illinois at Urbana-Champaign.

**Kush:** I went to MIT.

**Kate:** So Kush, what's going on with these chatbot responses here? Some of these are right, and some of them are complete fiction. What's going on?

**Kush:** So I would call that hallucination. That means that these AI systems will make some things up; they'll make associations that aren't exactly correct, and I think that's what happened in our last example. It kind of created an association that didn't exactly exist.

**Kate:** Got it. I think everyone is feeling the pressure of operationalizing generative AI as fast as possible, but when companies hear about AI hallucinating or other toxic behaviors like bullying or gaslighting, and there are other concerns around generative AI like copyright infringement or the revealing of personal or private information, it makes companies concerned and nervous and even fearful about adopting generative AI in their organization.

**Kush:** Yeah, and what we have to remember is that AI is not a race. It's a journey. We have to be careful, and anything that we want to get into enterprise AI has to have these principles of trust and transparency throughout. We have to slow down, put in all of these governance aspects, make sure that we're putting in safeguards, guardrails, and just doing the right thing.

[Chapter 2 - New risks with generative AI]

**Kate:** I know you and your team have worked on this for a while, right? How have the risks changed with the advent of generative AI compared to the risks we were seeing before with traditional machine learning?

**Kush:** Yeah, so predictive machine learning and generative AI are kind of two sides of the same coin, so a lot of the techniques are very similar, but there are differences. The hallucination that you mentioned, the leakage of private information, the bullying: all of those are new risks that we haven't seen before. We still have a lot of other risks that carry over, but the difference is mainly around the solutions. How do we address these issues? A lot of the reason we can't apply the same techniques from before is because of the huge data that we're dealing with now. It's just humongous, humongous datasets.

**Kate:** Yeah, can you talk a little bit more about that, specifically? So when we have these huge volumes of data, how does that impact our ability to trust a model?

**Kush:** Yeah, the data is so huge. We can put in data governance techniques. We can ensure that certain sites are not scraped, that certain filtering is done and so forth, but it's beyond the ability of any individual human or a team of humans to even read through every single piece of content, so that's where the challenge comes from.

[Chapter 3 - Elements of trustworthy AI]

**Kate:** Now, let's take a step back for a second and talk about trust as a concept. When I talk to clients about trust, most of the time their minds jump straight to accuracy: thinking about quality, and whether they can trust the model in the use case they're trying to deploy it in. How do you define trust?

**Kush:** Yeah, so I think the starting point is the quality, the accuracy, just the general performance of these models, because without that, nothing else follows. But that's just the starting point, right?

**Kate:** Yeah.

**Kush:** So there's all sorts of other considerations, whether it's reliability and robustness or fairness. Can we as humans understand how the model is working? Can we understand the entire process of how it came together? Can we ensure that the models, these AI systems, are working for our benefit, not doing something else?

**Kate:** Yeah, I think a valid criticism of AI in general, including generative AI, is that it can be a bit of a black box. Can you speak a little bit more about transparency as a dimension of trustworthy AI?

**Kush:** Transparency says it already, right? We think of these AI systems as black boxes in some capacity, and what we need is more openness. We need to shed light on them. What transparency allows us to do is understand what's going on from beginning to end. An analogy to that: let's say you're at a restaurant and it has an open-concept kitchen. You can see all the ingredients before they're chopped up. You can see what the chef is doing, and all of that gives you confidence that there's just general goodness happening. The same thing applies to AI systems. If we can know where the data came from, what sort of processing steps were performed, what sort of testing was done, what sort of auditing was done, all of that together gives us the understanding of what's going on.

[Chapter 4 - Fairness, bias and governance]

**Kate:** Now, Kush, you and your team have also spent a lot of time thinking about fairness. Can you speak a little bit more about that?

**Kush:** Yeah, fairness is a topic I'm really passionate about. In the traditional machine learning sense, we talked about fairness for hiring algorithms, for lending algorithms, these sorts of things, but when we moved to the generative AI world, things are a little bit different. The thing that we're most concerned about is stereotyping and other toxicity, because it's the most vulnerable members of society that suffer the most when these systems are actually doing things in a harmful way.

**Kate:** And this is one of the areas where I feel like generative AI and machine learning have a lot in common. At the end of the day, if they're trained on biased data, they're going to create biased outputs, and generative AI, for better or worse, is trained on human-created data, and humans have conscious and unconscious biases, and the data that they create can reflect that.

**Kush:** Yeah, absolutely. And it's the algorithms that just amplify all of those societal and cognitive biases as well.

**Kate:** So with all these risks and considerations around trust, how can clients adopt generative AI in a safe, responsible, and ethical way?

**Kush:** Yeah, I think the only word I need to say is governance, and AI governance really starts at the beginning. What is the intended use of these systems that we're creating? Where's the data coming from? Where's it sourced? How are we processing it? Putting in all these different checks and balances, and doing all of the testing in deployment as well. Can we continuously monitor how they're performing and step in if they go beyond those guardrails?

**Kate:** Absolutely. I think you put it really well. When the stakes are high, you need to be able to trust, but have that trust validated and verified, and not just trust for trust's sake.

**Kush:** Yeah.

**Kate:** Okay, it's time to wrap up. Thank you so much, Kush. And for everyone else, thank you for watching this episode of AI Academy. Please join us again for future episodes as we unpack some of the most important topics in AI for business.