
# AI Governance, Trust, and Business

**Source:** [https://www.youtube.com/watch?v=MluoD8Z1ARQ](https://www.youtube.com/watch?v=MluoD8Z1ARQ)
**Duration:** 00:26:58

## Summary

- The episode of “Smart Talks with IBM” spotlights AI as a transformative multiplier for business, featuring IBM’s Chief Privacy & Trust Officer and AI Ethics Board chair, Christina Montgomery.
- Montgomery explains that her role blends global data‑protection compliance with AI governance, positioning trust and transparency as a strategic competitive advantage for IBM.
- She argues that effective AI regulation should target specific real‑world use cases rather than trying to govern the technology in the abstract, emphasizing the need for clear foundational principles.
- Montgomery references her high‑profile congressional testimony from May, highlighting IBM’s leadership in shaping policy around AI ethics and privacy.
- While noting that the AI landscape has dramatically evolved since her 2021 interview, she stresses that the core principles of trust, transparency, and responsible governance remain unchanged.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=0s) **AI Governance, Privacy, and Ethics** - Malcolm Gladwell opens a Smart Talks episode in which IBM’s Chief Privacy & Trust Officer Christina Montgomery explains why businesses need core principles and use‑case‑focused regulation, and shares insights from her landmark congressional testimony on AI ethics.
- [00:03:23](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=203s) **Governance, Trust, and AI Bias** - The speakers describe their established governance approach that embeds transparency and accountability for powerful emerging AI models, and stress the regulatory imperative to mitigate bias by rigorously testing both the training data and model outputs.
- [00:06:44](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=404s) **Precision AI Regulation Explained** - In a Senate hearing, IBM’s Christina Montgomery outlines a “precision regulation” approach that applies stricter rules to high‑risk AI applications—such as medical diagnosis—while allowing lighter oversight for low‑risk uses like movie recommendations.
- [00:10:01](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=601s) **Authentic AI Leadership & Regulation** - The speaker discusses representing IBM’s historic AI expertise authentically and the personal, societal impact of that visibility, while Malcolm Gladwell emphasizes the swift, worldwide push for AI regulation that makes compliance a constantly evolving challenge for businesses.
- [00:13:11](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=791s) **AI Governance: Preserving Trust and Alignment** - The speakers stress that rapid AI adoption must be paired with a robust governance framework—covering explainability, data provenance, and continual model alignment—to prevent erosion of customer and stakeholder trust.
- [00:16:43](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=1003s) **Building Baseline Strategies for Global Compliance** - Experts explain that amid exploding data‑privacy and AI regulations worldwide, companies must adopt an operational approach—defining privacy and AI baselines—to consistently meet ever‑changing compliance demands across all jurisdictions.
- [00:19:51](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=1191s) **Creativity Fuels Trustworthy AI Strategy** - Christina Montgomery explains how creative, agile problem‑solving is essential for adapting IBM’s rapidly evolving technologies—cloud, AI, neuro‑tech, and quantum computing—to develop and implement trustworthy AI.
- [00:23:11](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=1391s) **Future AI Transparency & Regulation** - Christina Montgomery predicts that over the next five to ten years, mandatory transparency and explainability standards—driven by U.S., European, and voluntary frameworks—will spur research into trust mechanisms such as AI watermarking, with regulation shaping and accelerating these developments.
- [00:26:27](https://www.youtube.com/watch?v=MluoD8Z1ARQ&t=1587s) **Podcast Production Credits & Thanks** - This segment thanks Carly Migliori, Andy Kelly, Kathy Callaghan, the EightBar and IBM teams, and the Pushkin marketing team, while noting that Smart Talks with IBM is produced by Pushkin Industries and Ruby Studio at iHeartMedia and directing listeners to the podcast on iHeartRadio, Apple Podcasts, and other platforms.

## Full Transcript
[0:00] Malcolm Gladwell: Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM. I’m Malcolm Gladwell. This season, we’re continuing our conversation with New Creators—visionaries who are creatively applying technology in business to drive change—but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business.

Our guest today is Christina Montgomery, IBM’s Chief Privacy & Trust Officer. She’s also chair of IBM’s AI Ethics Board. In addition to overseeing IBM’s privacy policy, a core part of Christina’s job involves “AI governance”—making sure the way AI is used complies with the international legal regulations customized for each industry.

In today’s episode, Christina will explain why businesses need foundational principles when it comes to using technology, why AI regulation should focus on specific use cases over the technology itself, and share a bit about her landmark congressional testimony last May.

Christina spoke with Dr. Laurie Santos, host of the Pushkin podcast The Happiness Lab. A cognitive scientist and psychology professor at Yale University, Laurie is an expert on human happiness and cognition. Ok! Let’s get to the interview.

[1:39] Laurie Santos: So Christina, I'm so excited to talk to you today. So let's start by talking a little bit about your role at IBM. What does a chief privacy and trust officer actually do?

[1:48] Christina Montgomery: Yeah, it's a really dynamic profession. And it's not a new profession, but the role has really changed. I mean, my role today is broader than just helping to ensure compliance with data-protection laws globally. I'm also responsible for AI governance. I co-chair our AI Ethics Board here at IBM, and I'm responsible for data clearance and data governance as well, for the company.
So I have both a compliance aspect to my role—really important on a global basis—but also help the business to competitively differentiate, because really, trust is a strategic advantage for IBM and a competitive differentiator, as a company that's been responsibly managing the most sensitive data for our clients for more than a century now, and helping to usher new technologies into the world with trust and transparency. And so that's also a key aspect of my role.

[2:39] Laurie Santos: And so—you joined us here on Smart Talks back in 2021, and you chatted with us about IBM's approach of building trust and transparency with AI. And that was only two years ago, but it almost feels like an eternity has happened in the field of AI since then. And so I'm curious: How much has changed since you were here last time? The things you told us before, are they still true? How are things—

[3:01] Christina Montgomery: You're absolutely right. It feels like the world has changed, really, in the last two years. But the same fundamental principles and the same overall governance apply to the IBM program, for data protection and responsible AI, that we talked about two years ago, and—not much has changed there from our perspective.

And the good thing is, we've put these practices and this governance approach into place, and we have an established way of looking at these emerging technologies as the technology evolves. The tech is more powerful, for sure. Foundation models are vastly larger and more capable, and are creating, in some respects, new issues, but that just makes it all the more urgent to do what we've been doing and to put trust and transparency into place across the business—to be accountable to those principles.

[3:53] Laurie Santos: And so our conversation today is really centered around this need for new AI regulation. And part of that regulation involves the mitigation of bias.
And this is something I think about a ton as a psychologist, right? I know my students and everyone who's interacting with AI is assuming that the kind of knowledge that they're getting from this kind of learning is accurate, right? But of course, AI is only as good as the knowledge that's going in. And so talk to me a little bit about why bias occurs in AI and the level of the problem that we're really dealing with.

[4:25] Christina Montgomery: Yeah. I mean—well, obviously AI is based on data, right? It's trained with data, and that data could be biased in and of itself. And that's where issues could come up. They come up in the data. They could also come up in the output of the models themselves. So it's really important that you build bias consideration and bias testing into your product development cycle. And so what we've been thinking about here at IBM, and doing—some of our research teams delivered some of the very first toolkits to help detect bias, years ago now, right? And deployed them to open source.

And we have put into place for our developers here at IBM an “ethics by design” playbook that's sort of a step-by-step approach, which also addresses bias considerations very fully. And we provide not only, like, “Here's a point when you should test for it and consider it in the data.” You have to measure it both at the data level and the model or outcome level. And we provide guidance with respect to what tools can best be used to accomplish that. So it's a really important issue. It's one you can't just talk about. You have to provide, essentially, the technology and the capabilities and the guidance to enable people to test for it.

[5:42] Laurie Santos: Recently you had this wonderful opportunity to head to Congress to talk about AI.
And in your testimony before Congress, you mentioned that it's often said that innovation moves too fast for government to keep up. And this is something that I also worry about as a psychologist, right? Are policymakers really understanding the issues that they're dealing with? And so I'm curious how you're approaching this challenge of adapting AI policies to keep up with the sort of rapid pace of all the advancements we're seeing in the AI technology itself.

[6:08] Christina Montgomery: It's really critically important that you have foundational principles that apply to not only how you use technology, but whether you're going to use it in the first place and where you're going to use and apply it across your company. And then your program, from a governance perspective, has to be agile. It has to be able to address emerging capabilities, new training methods, et cetera. And part of that involves helping to educate and instill and empower a trustworthy culture at a company so you can spot those issues—so you can ask the right questions at the right time.

We talked about this during the Senate hearing—and IBM's been talking for years about regulating the use, not the technology itself—because if you try to regulate technology, you're very quickly going to find out that regulation will absolutely never keep up with that.

[7:01] Laurie Santos: In your testimony to Congress, you also talked about this idea of a “precision regulation approach” for AI. Tell me more about this. What is a precision regulation approach, and why could that be so important?

[7:13] Christina Montgomery: It's funny, because I was able to share with Congress our precision regulation point of view in 2023, but that precision regulation point of view was published by IBM in 2020.
So we have not changed our position that you should apply the tightest controls, the strictest regulatory requirements, to the technology where the end use and risk of societal harm is the greatest. So that's essentially what it is.

There's lots of AI technology that's used today that doesn't touch people—that's very low risk in nature. And even when you think about AI that delivers a movie recommendation versus AI that is used to diagnose cancer, right? There's very different implications associated with those two uses of the technology. And so essentially what precision regulation is: “Apply different rules to different risks,” right? More-stringent regulation to the use cases with the greatest risk.

And then also we build that out, calling for things like transparency. You see it today with content, right? Misinformation and the like. We believe that consumers should always know when they're interacting with an AI system. So: be transparent. Don't hide your AI. Clearly define the risks. So as a country, we need to have some clear guidance, right? And globally as well, in terms of which uses of AI are higher risk, where we'll apply higher and stricter regulation, and have sort of a common understanding of what those high-risk uses are, and then demonstrate the impact in the cases of those higher-risk uses.

So companies who are using AI in spaces where they can impact people's legal rights, for example, should have to conduct an impact assessment that demonstrates, you know, that the technology isn't biased. So we've been pretty clear about “Apply the most-stringent regulation to the highest-risk uses of AI.”

[9:16] Laurie Santos: So far, we've been talking about your congressional testimony in terms of, you know, the specific content that you talked about. But I'm just curious on a personal level, what was that like, right?
Like right now, it feels like at a policy level, there's a kind of fever pitch going on with AI right now. You know, what did that feel like, to kind of really have the opportunity to talk to policymakers and sort of influence what they're thinking about AI technologies, like in the coming century, perhaps?

[9:39] Christina Montgomery: It was really an honor to be able to do that, and to be one of the first set of invitees to the first hearing. And what I learned from it is essentially two things. The first is really the value of authenticity. So both as an individual and as a company, I was able to talk about what I do. I didn't need a lot of advance prep, right? I talked about what my job is, what IBM has been putting in place for years now. So this isn't about creating something. This was just about showing up and being authentic. And we were invited for a reason. We were invited because we were one of the earliest companies in the AI technology space. We're the oldest technology company, and we are trusted, and that's an honor.

And then the second thing I came away with was really how important this issue is to society. I don't think I appreciated it as much until, following that experience, I had outreach from colleagues I hadn't worked with for years. I had outreach from family members who heard me on the radio—my mother and my mother-in-law and my nieces and nephews and friends of my kids were all like, “Oh, I get it. I get what you do now. Wow. That's pretty cool.” You know, so that was really the best and most impactful takeaway that I had.

[11:02] Malcolm Gladwell: The mass adoption of generative AI happening at breakneck speed has spurred societies and governments around the world to get serious about regulating AI. For businesses, compliance is complex enough already.
But throw an ever-evolving technology like AI into the mix and compliance itself becomes an exercise in adaptability. As regulators seek greater accountability in how AI is used, businesses need help creating governance processes that are comprehensive enough to comply with the law but agile enough to keep up with the rapid rate of change in AI development.

Regulatory scrutiny isn’t the only consideration, either. Responsible AI governance—a business’s ability to prove its AI models are transparent and explainable—is also key to building trust with customers, regardless of industry. In the next part of their conversation, Laurie asks Christina what businesses should consider when approaching AI governance. Let’s listen.

[12:11] Laurie Santos: So what's the particular role that businesses are playing in AI governance? Like, why is it so critical for businesses to be part of this?

[12:17] Christina Montgomery: I think it's really critically important that businesses understand the impacts that technology can have—both in making them better businesses, but also the impacts that those technologies can have on the consumers that they are supporting. Businesses need to be deploying AI technology that is in alignment with the goals that they set for it and that can be trusted. I think for us and for our clients, a lot of this comes back to trust in tech.

If you deploy something that doesn't work, that hallucinates, that discriminates, that isn't transparent, where decisions can't be explained, then you are going to very rapidly erode the trust of your clients—at best, right? And at worst, you're going to create legal and regulatory issues for yourself as well. So trust in technology is really important. And I think there's a lot of pressure on businesses today to move very rapidly and adopt technology.
But if you do it without having a program of governance in place, you're really risking eroding that trust.

[13:21] Laurie Santos: And so this is really where I think strong AI governance comes in. You know—talk about, from your perspective, how this really contributes to maintaining the trust that customers and stakeholders have in these technologies.

[13:33] Christina Montgomery: Yeah, absolutely. I mean, you need to have a governance program because you need to understand that the technology you are deploying, particularly in the AI space, is explainable. You need to understand why it's making the decisions and recommendations that it's making, and you need to be able to explain that to your consumers. I mean, you can't do that if you don't know where your data is coming from, what data you're using to train those models, or if you don't have a program that manages the alignment of your AI models over time—to make sure, as AI learns and evolves over uses, which is in large part what makes it so beneficial, that it stays in alignment with the objectives that you set for the technology.

So you can't do that without a robust governance process in place. So we work with clients to share our own story here at IBM in terms of how we put that in place, but also in our consulting practice, to help clients work with these new generative capabilities and foundation models and the like, in order to put them to work for their business in a way that's going to be impactful to that business, but at the same time be trusted.

[14:46] Laurie Santos: And so now I wanted to turn a little bit towards watsonx.governance. So IBM recently announced their AI platform, watsonx, which will include a governance component. Could you tell us a little bit more about watsonx.governance?

[14:58] Christina Montgomery: Yeah.
I mean, before I do that, I'll just back up and talk about the full platform, and then lean into watsonx, because I think it's important to understand the delivery of a full suite of capabilities—to get data, to train models, and then to govern them over their life cycle. All of these things are really important.

From the onset, you need to make sure that you have—take watsonx.ai, for example; that's the studio to train new foundation models and generative AI and machine-learning capabilities, and we are populating that studio with some IBM-trained foundation models, which we're curating and tailoring more specifically for enterprises. So that's really important. It comes back to the point I made earlier about business trust and the need to have enterprise-ready technologies in the AI space.

Then watsonx.data is a fit-for-purpose data store, or data lake. And then there's watsonx.governance. That's a particular component of the platform that my team and the AI Ethics Board have worked really closely with the product team on developing. And we're using it internally here in the chief privacy office as well, to help us govern our own uses of AI technology and our compliance program here. It essentially helps to notify you if a model becomes biased or gets out of alignment as you're using it over time. So companies are going to need these capabilities. I mean, they need them today to deliver technologies with trust. They'll need them tomorrow to comply with regulation, which is on the horizon.

[16:52] Laurie Santos: I think compliance becomes even more complex when you consider international data-protection laws and regulations. Honestly, I don't know how anyone on any company's legal team is keeping up with this these days.
But my question for you is really, “How can businesses develop a strategy to maintain compliance and to deal with it in this ever-changing landscape?”

[17:11] Christina Montgomery: It's increasingly challenging. In fact, I saw a statistic just this morning that the regulatory obligations on companies have increased something like 700 times in the last 20 years. So it really is a huge focus area for companies. You have to have a process in place in order to do that. And it's not easy, particularly for a company like IBM, that has a presence in over 170 countries around the world. There are more than 150 comprehensive privacy regulations. There are regulations of nonpersonal data. There are AI regulations emerging. So you really need an operational approach to it, in order to stay compliant.

One of the things we do is we set a baseline—and a lot of companies do this as well. So we define a privacy baseline, we define an AI baseline, and we ensure, then, as a result of that, there are very few deviations, because everything incorporates that baseline. So that's one of the ways we do it. Other companies, I think, are similarly situated in terms of doing that.

But, again, it is a real challenge for global companies. It's one of the reasons why we advocate for as much alignment as possible—in the international realm as well as nationally here in the U.S.—to make compliance easier. And not just because companies want an easy way to comply, but because the harder it is, the less likely there will be compliance. And it's not the objective of anybody—governments, companies, consumers—to set legal obligations that companies simply can't meet.

[18:59] Laurie Santos: So what advice would you give to other companies who are looking to rethink or strengthen their approach to AI governance?
[19:04] Christina Montgomery: I think you need to start, as we did, with foundational principles. And you need to start making decisions about what technology you're going to deploy and what technology you're not—what are you going to use it for, and what aren't you going to use it for. And then, when you do use it, align to those principles. That's really important.

Formalize a program. Have someone within the organization—whether it's the Chief Privacy Officer, or some other role, a Chief AI Ethics Officer—but have an accountable individual, an accountable organization. Do a maturity assessment, figure out where you are and where you need to be, and really start putting it into place today. Don't wait for regulation to apply directly to your business, because it'll be too late.

[19:51] Laurie Santos: So as Smart Talks features New Creators—these visionaries like yourself who are creatively applying technology in business to drive change—I'm curious if you see yourself as creative.

[20:00] Christina Montgomery: I definitely do. I mean, you need to be creative when you're working in an industry that evolves so very quickly. So you know, I started with IBM when we were primarily a hardware company, right? And we've changed our business so significantly over the years. And the issues that are raised with respect to each new technology—whether it be cloud; whether it be AI, now, where we're seeing a ton of issues; or you look at emergent issues in the space of things like neurotechnologies and quantum computers—you have to be strategic, and you have to be creative in thinking about how you can adapt a company, agilely and quickly, to an environment that is changing so quickly.
[20:51] Laurie Santos: And with this transformation happening at such a rapid pace, do you think creativity plays a role in how you think about and implement, specifically, a trustworthy AI strategy?

[21:03] Christina Montgomery: Yeah, I absolutely think it does. Because again, it comes back to these capabilities. And I guess how you define “creativity” could be different, right? But I'm thinking of creativity in the sense of agility and strategic vision and creative problem-solving. I think that's really important in the world that we're in right now—being able to creatively problem-solve with new issues that are arising sort of every day.

[21:33] Laurie Santos: And so how do you see the role of Chief Privacy Officer evolving in the future as AI technology continues to advance? Like, what steps should CPOs take to stay ahead of all these changes that are coming their way?

[21:44] Christina Montgomery: So the role is evolving, in most companies, I would say, pretty rapidly. Many companies are looking to chief privacy officers, who already understand the data that's being used in the organization and have programs to ensure compliance with laws that require you to manage that data in accordance with data-protection laws and the like. It's a natural place and position for AI responsibility. And so I think what's happening to a lot of chief privacy officers is they're being asked to take on this AI-governance responsibility for companies—and if not take it on, at least play a very key role working with other parts of the business in AI governance.

So that really is changing. And if Chief Privacy Officers are in companies that maybe haven't started thinking about AI yet, they should, so I would encourage them to look at different resources that are available already in the AI-governance space.
For example, the International Association of Privacy Professionals—the 75,000-member professional body for chief privacy officers—just recently launched an AI-governance initiative and an AI-governance certification program. I sit on their advisory board. But that's just emblematic of the fact that the field is changing so rapidly.

[23:11] Laurie Santos: And so, speaking of rapid change—when you were back here on Smart Talks in 2021, you said that the future of AI will be more transparent and more trustworthy. What do you see the next five to 10 years holding? You know, when you're back on Smart Talks in, you know, 2026, you know, 2030, what are we going to be talking about when it comes to AI technology and governance?

[23:30] Christina Montgomery: So I try to be an optimist, right? And I said that two years ago, and I think we're seeing it now come to fruition. And there will be requirements—whether they're coming from the U.S., whether they're coming from Europe, whether they're just coming from voluntary adoption by clients of things like the NIST risk-management framework, a really important voluntary framework—you're going to have to adopt transparent and explainable practices in your uses of AI.

So I do see that happening. And in the next five to 10 years, boy, I think we'll see more research into trust techniques, because we don't really know, for example, how to watermark. We were calling for things like watermarking; there'll be more research into how to do that. I think you'll see regulation that's specifically going to require those types of things. So I think—again, I think the regulation is going to drive research. It's going to drive research into these areas that will help ensure that we can deliver new capabilities, generative capabilities and the like, with trust and explainability.
[24:39] Laurie Santos: Thank you so much, Christina, for joining me on Smart Talks to talk about AI and governance.

[24:44] Christina Montgomery: Well, thank you very much for having me.

[24:48] Malcolm Gladwell: To unlock the transformative growth possible with artificial intelligence, businesses need to know what they wish to grow into first. Like Christina said, the best way forward in the AI future is for businesses to figure out their own foundational principles around using the technology, drawing upon those principles to apply AI in a way that’s ethically consistent with their mission and complies with the legal frameworks built to hold the technology accountable.

As AI adoption grows more and more widespread, so too will the expectation from consumers and regulators that businesses use it responsibly. Investing in dependable AI governance is a way for businesses to lay the foundations for technology their customers can trust, while rising to the challenge of increasing regulatory complexity. Though the emergence of AI does complicate an already tough compliance landscape, businesses now face a creative opportunity to set a precedent for what accountability in AI looks like and to rethink what it means to deploy trustworthy artificial intelligence.

I’m Malcolm Gladwell. This is a paid advertisement from IBM.

[26:07] Smart Talks with IBM will be taking a short hiatus, but look for new episodes in the coming weeks. Smart Talks with IBM is produced by Matt Romano, David Zha, Nisha Venkat, and Royston Beserve, with Jacob Goldstein. We’re edited by Lidia Jean Kott. Our engineer is Jason Gambrell. Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, Kathy Callaghan, and the EightBar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.