
Open‑Source AI: Hugging Face & watsonx Collaboration

Key Points

  • Hugging Face, where Jeff Boudier is Head of Product and Growth, is the premier open‑source platform on which AI researchers share and access pretrained models, making it a central hub for data scientists and developers.
  • IBM’s watsonx partnership with Hugging Face integrates the company’s open‑source model repository into IBM’s AI suite, giving businesses the ability to fine‑tune models with proprietary data while leveraging a curated catalog of ready‑to‑use solutions.
  • Open‑source AI is presented as essential for innovation because it lets firms build customized, high‑quality models faster and more transparently than relying on a single, monolithic “omnipotent” AI system, which Boudier argues is a myth.
  • Jeff Boudier’s background in engineering and entrepreneurship (including co‑founding Stupeflix, later sold to GoPro) fuels his focus on the business side of technology, emphasizing how open‑source AI acts as a multiplier for enterprise growth and competitive advantage.


**Source:** [https://www.youtube.com/watch?v=THEQbJIAxvE](https://www.youtube.com/watch?v=THEQbJIAxvE)

**Duration:** 00:30:42

## Sections

- [00:00:00](https://www.youtube.com/watch?v=THEQbJIAxvE&t=0s) **Open‑Source AI Powers Business** - In this Smart Talks episode, Jeff Boudier of Hugging Face explains IBM’s watsonx collaboration and how open‑source AI models let companies tailor and accelerate intelligent solutions using their own data.
- [00:04:11](https://www.youtube.com/watch?v=THEQbJIAxvE&t=251s) **Explaining AI Models Simply** - Jeff clarifies that a model is a huge set of numerical parameters that process inputs, such as text prompts, audio, or images, to generate outputs, and notes that Hugging Face offers around 300,000 freely accessible models for tasks like transcription or thumbnail creation.
- [00:09:15](https://www.youtube.com/watch?v=THEQbJIAxvE&t=555s) **Open‑Source Transformers & the Hugging Face/GitHub Relationship** - Tim Harford asks what “open‑source” means for transformer model libraries and how Hugging Face relates to GitHub, and Jeff Boudier explains model and training‑data availability as well as the platforms’ collaborative roles.
- [00:12:38](https://www.youtube.com/watch?v=THEQbJIAxvE&t=758s) **Open‑Source Transformers Fuel the AI Boom** - Jeff Boudier explains how the Hugging Face Transformers library grew from a community effort and now powers AI development, with open‑source collaboration driving the rapid pace of innovation rather than merely keeping up with it.
- [00:15:48](https://www.youtube.com/watch?v=THEQbJIAxvE&t=948s) **IBM-Hugging Face Partnership Overview** - Jeff explains how IBM’s collaboration with Hugging Face combines open‑source AI models with the enterprise‑focused watsonx platform to let businesses customize ethical AI while meeting compliance requirements.
- [00:19:19](https://www.youtube.com/watch?v=THEQbJIAxvE&t=1159s) **Debunking the One‑Model Myth** - Jeff argues that relying on a single massive language model for all tasks is impractical, emphasizing the need for specialized, efficient models due to resource limits and use‑case requirements.
- [00:22:54](https://www.youtube.com/watch?v=THEQbJIAxvE&t=1374s) **Choosing Specialized Models with watsonx** - The conversation explains that smaller, task‑specific models can deliver superior results compared to large general models, and outlines how Hugging Face is integrating its extensive model library into IBM’s watsonx platform to simplify model selection and deployment for both individual users and complex enterprise environments.
- [00:26:58](https://www.youtube.com/watch?v=THEQbJIAxvE&t=1618s) **Empowering Enterprises with Their Own AI Models** - Jeff contrasts the vision of a single, all‑powerful AI with IBM’s approach of enabling each company to create, customize, and govern its own models using open‑source tools and the watsonx platform.
- [00:30:05](https://www.youtube.com/watch?v=THEQbJIAxvE&t=1805s) **Production Credits and Podcast Promotion** - The segment thanks specific contributors, identifies “Smart Talks with IBM” as a Pushkin Industries and Ruby Studio production for iHeartMedia, and invites listeners to find the podcast on iHeartRadio, Apple Podcasts, or any podcast platform.

## Full Transcript
Hugging Face and watsonx: why open source is the future of AI in business

Malcolm Gladwell: Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio and IBM. I’m Malcolm Gladwell. This season, we’re continuing our conversation with New Creators, visionaries who are creatively applying technology in business to drive change, but with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business.

Our guest today is Jeff Boudier [BOO-dee-ay], Head of Product and Growth at Hugging Face, the leading open-source and open-science artificial-intelligence platform. Before getting into the world of open-source AI, Jeff co-founded a company called Stupeflix, a video-editing software company that was eventually acquired by GoPro. An engineer by background, he has a self-professed obsession with the business of technology.

Recently, IBM and Hugging Face announced a collaboration, bringing together Hugging Face’s repositories of open-source AI models with IBM’s watsonx platform. It’s a move that gives businesses even more access to AI while staying true to IBM’s long-standing philosophy of supporting open-source technology. With open source, businesses can build better AI models that suit their specific needs, using their own proprietary data, while browsing a ready catalog of pretrained models.

In today’s episode, you’ll hear why open source is so crucial to the advancement of AI, how IBM’s watsonx interacts with open-source AI, and Jeff’s thoughts on why the singular, omnipotent AI model is a myth. Jeff spoke with Tim Harford, host of the Pushkin podcast Cautionary Tales. A longtime columnist at the Financial Times, where he writes “The Undercover Economist,” Tim is also a BBC broadcaster with his show More or Less.

OK, let’s get to the interview.
Jeff Boudier: Hi, I’m Jeff Boudier, and I’m a Product Director at Hugging Face.

Tim Harford: So I’m immediately intrigued. Hugging Face, is this a reference to the Alien movie, or something else?

Jeff Boudier: It is not. And it may not be obvious to a listener, but “Hugging Face” is the name of that cute emoji, you know, the one that’s smiling with its two hands extended to give you a big hug? That’s Hugging Face. So basically we named the company after an emoji.

Tim Harford: Okay. And it is, I saw your website, a very friendly emoji. So that’s nice. So tell us a little bit about Hugging Face and about what you do there.

Jeff Boudier: Of course. Well, Hugging Face is the leading open platform for AI builders, and it’s the place that all of the AI researchers use to share their work, their new AI models, and collaborate around them. It’s the place where the data scientists go and find those pretrained models and access them and use them and work with them. And increasingly, it’s the place where developers are coming to turn all of these AI models and datasets into their own applications, their own features.

Tim Harford: So like the Facebook group or the Reddit or the Twitter for people who are interested in, particularly, generative-language AI, or all kinds of artificial intelligence?

Jeff Boudier: All kinds of AI, really. And of course generative AI is this new wave that has caught the world by storm. But on Hugging Face you can find any kind of model. The new, sort of, transformer models can do anything from translation onward: if you wanted to transcribe what I’m saying into text, you would use a transformer model. If you wanted to then take that text and make a summary, that would be another transformer model.

If you wanted to create a nice little thumbnail for this podcast by typing a sentence, that would be another type of model. All of these models, and there are actually 300,000 that are free and publicly accessible, you can find on our website at huggingface.co and use with our open-source libraries.

Tim Harford: And so this is fascinating. So there are 300,000 models. Now, when you say “model,” I’m thinking in my head, “Oh, it’s kind of like a computer program.” There were 300,000 computer programs. Is that roughly right? Or not really?

Jeff Boudier: It’s the general idea. A model is a giant set of numbers that are working together to sift through some input that you’re going to give it. So think of it as a big black box filled with numbers. And you give it as an input maybe some text, maybe a prompt, so you’re giving an instruction to the model, or maybe you give it an image as an input. And then it will sift through that information, thanks to all of these numbers, which we call, in the field, “parameters,” and it will produce an output.

So when I told you, “Hey, we can transcribe this conversation into text,” the input would have been the conversation in an audio file, and the output would have been the text of the transcription. If you want to create a thumbnail for this podcast episode, then the input would be what we call the “prompt,” which is really a text description, like “French man in San Francisco talking about machine learning.” And the output would be a completely original image.

So that’s how I think about what an AI model is. And I think what we’re starting to realize is that this is becoming the new way of building technology in the world. It has been, for the field of understanding and generating text, for quite some time.
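Jeff’s “big black box filled with numbers” picture can be sketched as a toy in Python. This is purely illustrative, with four hand-picked parameters instead of billions of learned ones; it is not Hugging Face code or a real trained model:

```python
# Toy illustration of "a model is a big box of numbers":
# a fixed set of parameters "sifts" an input to produce an output.
# Real models learn billions of parameters from data; this one has four.

def tiny_model(inputs, parameters):
    """The simplest possible 'model': a weighted sum of the inputs."""
    return sum(x * w for x, w in zip(inputs, parameters))

# Parameters that would normally be learned during training (hand-picked here).
params = [0.5, -1.0, 2.0, 0.25]

# An "input": four numeric features, standing in for text, audio, or pixels.
x = [4.0, 1.0, 0.5, 8.0]

output = tiny_model(x, params)
print(output)  # prints 4.0
```

Training a real model amounts to adjusting those parameters until the outputs are useful; using a pretrained model means someone has already done that work for you.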
But now it’s sort of moving across every field of technology. We have models to create images, as I say, but also to generate new proteins, to make predictions on numerical data. So every field of machine learning is now using this new type of model.

But what’s interesting is that if you’re, say, a product manager at a tech company, and you say, “Hey, I want to build a feature that does this,” a few years ago the approach would have been to ask a software developer to write a thousand lines of code in order to build a prototype. And the new way of doing things today is to go look for an off-the-shelf, pretrained model that does a pretty good job of solving exactly that problem, so you can create a prototype of that feature fast. So it’s a new approach to building tech.

Tim Harford: I’m not a programmer, but I’m aware that there’s this idea of open-source code, and now we have open-source models. So what does it mean for something to be “open source”?

Jeff Boudier: Yes, “open-source AI” actually means a lot of different, specific things. It’s the open-source implementation of the model. So if you use the Hugging Face transformers library to use a model, you’re using an open-source code library to use that model.

Tim Harford: Just to interrupt on the “transformers”: these are these, kind of, ways of turning a picture of a dog into a text output that says, “Hey, this is a picture of a dog,” or turning French text into English text, or doing all of these things that you’ve been describing. The transformer is kind of the engine at the heart of that.

Jeff Boudier: Yes, exactly. And we call them “transformers” because they correspond to this new way of building machine-learning models that was introduced by Google, actually, with a very important paper called “Attention Is All You Need,” published in 2017 by researchers at Google.

Tim Harford: Wow, that’s just six years. That’s so new.

Jeff Boudier: It is very new, and ever since, the pace of innovation has really, really accelerated. But it really started from this inflection point that came from this paper and its implementation in what are now called “transformer models.” The transformer has conquered every area of machine learning since.

Tim Harford: Okay. Sorry to interrupt. So you’ve got this library of transformer models, and they’re open source, and that means what? Anyone can use them for free? Or that anybody can implement them for free? What does it mean?

Jeff Boudier: So again, there’s a lot that goes into it, but the most important thing is for the model itself to be available so that a data scientist or an engineer can download it and use it. And there are also a lot of considerations about how you make models accessible. A very important one is whether or not you give access to the training data: all the information that went into training that model and teaching it to do what it’s trained to do.

Tim Harford: I might have fed millions of words into a language transformer, or millions of photographs into a picture transformer. Yeah.

Jeff Boudier: Yes. And the accessibility of that training data is very, very important.

Tim Harford: What’s the relationship between the Hugging Face libraries and GitHub?
Which, if I understand GitHub correctly, is this repository of open-source code, lots and lots of lines of code and routines and programs that are shared and updated and tracked, and they’re all available on GitHub. Which sounds similar to what you’re doing with Hugging Face for AI. So what is the interaction or the relationship there?

Jeff Boudier: Yeah, I think you hit the nail on the head. So Hugging Face is to AI what GitHub is to code, right? It’s this central platform where AI builders can go find, and collaborate around, AI artifacts, which are models and datasets. So it’s different from software, but we play the central role in the community to share, collaborate on, and access all of those artifacts for AI, like GitHub offers for code.

Tim Harford: And that community must be incredibly important. I mean, open source is nothing if you don’t have a community of people working on it. So how have you been able to foster—

Jeff Boudier: Well, I think it goes to the origins of the transformer model, and how Hugging Face rolled into that. So when the first, sort of, open model came out, it was called BERT, and it came out of Google. The only way you could access it was to use a tool called TensorFlow. But it happened that most of the AI community was using a different tool, called PyTorch. And something that Hugging Face did was to make that new model, BERT, accessible to all PyTorch users. And they did it in open source. It was a project called “pytorch-pretrained-BERT.”

Tim Harford: So this is like being able to play my Zelda game on an Xbox or a PlayStation, right? Or am I not really understanding what’s going on?

Jeff Boudier: No, that’s exactly what it is. And the thing is, everybody was using the Game Boy, and so it became very popular. And from there, the community sort of gathered to make all of the other models that were then published by AI researchers available through that library, which was quickly renamed from “pytorch-pretrained-BERT” to “transformers” to welcome all of these different new models. And today, that open-source library, transformers, is what all AI builders are using when they want to access those models, see how they work, and build upon them.

Tim Harford: What’s striking about this field is that it’s changing so fast. It’s improving so quickly. So how do open-source models keep up with that? How do they get iterated and improved?

Jeff Boudier: Actually, it’s not so much that open source is keeping up; it’s open source that is driving this pace of change. And that’s because with open source and open research, data scientists and researchers can build upon each other’s work. They can reproduce each other’s work. They can access each other’s work using our open-source libraries, et cetera.

So in a sense, it’s not really that open-source AI is a new idea. It’s rather the opposite. There’s been a blip of time in which closed-source AI seemed to be the dominant way, but it’s really a blip. In fact, none of the incredible advances that we’re marveling at today would be possible without open source. We’re standing upon the shoulders of 50 years of research in open-source software. I think that’s really important; if it wasn’t for that, we’d probably be 50 years away from having these amazing experiences like ChatGPT or Stable Diffusion, et cetera.

So it’s really open source that is fueling this pace of change, all these new models, all these new capabilities. To give you an example: Meta released the LLaMA large language model just a few months ago.
And ever since, there’s been this Cambrian explosion of variations and improvements upon the original models. And today there are over a thousand of them that we host and track and evaluate. So, yeah, open source is really the gas and the engine for that.

Malcolm Gladwell: Jeff just made it clear that it is open source, not closed, that sets the pace for AI innovation. If that’s true, then forward-thinking businesses shouldn’t shy away from leveraging open-source AI to solve their own proprietary challenges. But how? Businesses can face serious obstacles when trying to adopt open-source technologies, like complying with government regulation or making sure their customers’ data stays protected.

In the next part of their conversation, Jeff and Tim discuss how IBM’s collaboration with Hugging Face empowers businesses to tap into the open-source AI community, and how the watsonx platform can enable them to customize those AI models to their needs.

Tim Harford: I just wanted to ask about the partnership between Hugging Face and IBM. How did that come about?

Jeff Boudier: Well, it came through a conversation: a conversation between our CEO, Clément Delangue, and Bill Higgins at IBM, who’s really close to all the amazing research work and open-source work that’s happening at IBM. And that conversation sort of sparked the evidence that we needed to do something together.

We share a lot of values: the importance of open source, which is fundamental to us, and the importance of doing things in an ethics-first way, to enable the community to incorporate ethical considerations in how they’re building AI. And we sort of have a different audience to start with: all the AI builders use Hugging Face today to access all the models we talked about, to use them through our open source, and to build with them.

And IBM has this incredible history of working with enterprise companies and enabling them to make use of that technology in a way that’s compliant with everything that an enterprise requires. And so being able to marry these two things together is an amazing opportunity. And now we can enable the largest corporations that have, sort of, complex requirements in order to deploy machine-learning systems, and give them an easy experience to take advantage of all the latest and greatest that AI has to offer through our platform.

Tim Harford: Let’s talk about this idea of a single model or a variety of models. Because what I’ve been hearing you say is, “Oh, there are lots of models; there are hundreds of thousands of models available on Hugging Face.” But you’ve also said, “There’s this single thing, the transformer, and they’re all transformers.” So if they’re all basically the same thing, why can’t you just build one super-clever model that can do everything?

Jeff Boudier: That’s a really interesting idea, and very much a new idea. And the reason we have over a million repositories, 300,000 of them free and accessible models, on the Hugging Face platform is that models are typically trained to do one thing, and they’re typically trained to do one thing with specific types of data.

And what became evident in the research that came out over the last couple of years is that if you train a big enough model with enough data, then those models start to have sort of general capabilities. You can ask them to do different things. You can even train them to respond to instructions. So with the same model you can say, “Hey, summarize this paragraph, translate this into English, start a conversation in French, and then pivot to German.” And so these are general, sort of, language capabilities.
And I think when ChatGPT came online, and the world sort of discovered these new capabilities, there was, at least for a short period, this sort of idea, this sort of myth, that the end game of all this is maybe one or a handful of models so much better than anything else that exists that they can do anything we ask them to do, and that’s the only model we will need.

And I, for one, think it is a myth. I don’t think it is practical, for a variety of reasons. Say you’re writing an email and you have this great suggestion of text to complete your sentence. Well, that’s AI. That’s a large language model. That’s a transformer model that does that. So there are a ton of existing use cases like this, and these use cases are powered by specific models that have been trained to do one thing well and to do it fast.

If you wanted to apply this sort of all-knowing, powerful, Oracle type of model, you would not be able to serve millions of customers through a search engine. You would not be able to complete people’s sentences, because the amount of money that you would need, the number of computers that you would need, to run such a service just exceeds what is available on the planet. So one reason it’s not a practical scenario is that it’s just very expensive to run those very, very large models.

Tim Harford: What I’m hearing is, it’s like, “Look, if you want to screw in a screw, you need a screwdriver.” You don’t want an entire toolshed full of tools if the task is to screw in a screw. And sure, you could bring the toolshed. All the tools are there; there’s a screwdriver in there. But it’s not necessary. It’s incredibly expensive. It’s incredibly cumbersome. And that cost exists even though, maybe, as the user who’s just typing into a prompt box, you may not see it. But it’s still very real.

Jeff Boudier: That’s right. And another reason is performance. So, taking the screwdriver example, and by the way, we’re not quite there at this moment where we have this all-knowing, powerful Oracle; that is still sort of a sci-fi scenario. But we have screwdrivers, and we also have the Leatherman, right? The multitool, the Swiss Army knife. And that’s sort of the moment we are in today.

But now if I’m trying to open up my computer, it turns out that it requires a specific kind of screw, like these tiny little Torx screws, and having a Torx screwdriver will get me much further than trying to use my Leatherman, where maybe I’ll get the knife blade out and it will mess up the screw, and maybe eventually I’ll get to what I need. My point is that if you take a very specifically trained model for a particular problem, it will work much better. It will give you better results than a very, very generalist big model that can do a lot of things. And so for things like search engines, for things like translation, for things that are very specific, companies are much better off using smaller, more efficient models that produce better results.

Tim Harford: That’s really interesting, and presumably then being able to know which model to use, or being able to know whom to ask which model to use, becomes a very important capability.

Jeff Boudier: Yes, and that’s what we’re trying to make easy through our platform.

Tim Harford: So tell me about how this works with IBM’s watsonx platform. How do you see Hugging Face’s customers benefiting from that?
Jeff Boudier: The end goal is to make it really easy for watsonx customers to make use of all the great models and libraries that we talked about, all the 300,000 models that are today on Hugging Face. And to do this, we need to collaborate deeply with the IBM teams that build the watsonx platform, so that our libraries, our open source, our models are well integrated into the platform.

If you’re a single user, if you are a data-science student and you want to use a model, we make it super easy, right? We have our open-source library. You can download the model on your computer and run with it. But in enterprises, there is a vast complexity of infrastructure and rules around what people can do and how the data can be accessed. And all this complexity is, sort of, solved by the watsonx platform.

Tim Harford: This season of the Smart Talks podcast features what we’re calling New Creators. Do you see yourself as being a creative person?

Jeff Boudier: I think it’s a requirement for the job. I mean, we’re in such a new and rapidly evolving industry that we have to be creative in order to invent the business models, the use cases, of tomorrow. My role within the company is really to create the business around all of the great work of our science, open-source, and product teams. And by and large, the business model of AI within the whole ecosystem is still something that companies are trying to figure out. So creativity is really, really important: to really have the conversation with companies, understand what they’re trying to do, and then build the right kind of solution. So that’s where creativity comes into play.

Tim Harford: And one of the things that you’ve been talking about is this growing number of models, this growing number of capabilities, this growing number of use cases. Enormously exciting, but also, I think, completely bewildering for most people who are trying to navigate their way through this maze of possibilities that is growing faster than they can even learn about it. So how are you helping people navigate and make choices in that environment? And how does the partnership with IBM help with that?

Jeff Boudier: Hmm. Well, as I said, our vision is that AI and machine learning are becoming the default way of creating technology. And that means every product, app, or service that you’re going to be using is going to be using AI to do whatever it does better, faster. And I guess there are two competing visions of the world coming from that.

There is this vision of the Oracle, the all-powerful model that can do everything. And our vision is different. Our vision is that every single company will be able to create their own models that they own, that they can use, that they control. And that’s the vision that we’re trying to bring to life through our open-source tools that make this work easy, and through our platform, where you can find all those pretrained models that are shared by the community.

So we really want to empower companies to build their own stuff, not to outsource all the intelligence to a third party. And the watsonx platform from IBM gives those tools to enterprise companies. So you can use the open-source models that Hugging Face offers. Then you can improve them with your own data without sharing that data with a third party. And then you can do all of this work in compliance with whatever governance requirements you have for your company.
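The single-user workflow Jeff describes, download a pretrained model and run with it, looks roughly like this with the open-source transformers library. This is a sketch under assumptions: it requires the `transformers` package plus a backend such as PyTorch to be installed, and the first call downloads the weights of the library’s default summarization model from huggingface.co.

```python
# Sketch of the "download a model and run with it" workflow:
# pick an off-the-shelf pretrained model and run it locally.
# Assumes `pip install transformers` (plus a backend such as PyTorch);
# the first call fetches the default summarization model's weights.

def summarize(text: str) -> str:
    """Summarize text with an off-the-shelf pretrained model."""
    # Imported inside the function so this sketch can be loaded
    # even where the package is not installed.
    from transformers import pipeline

    summarizer = pipeline("summarization")  # default model chosen by the library
    result = summarizer(text, max_length=60, min_length=10)
    return result[0]["summary_text"]

# Example usage (triggers the model download and inference):
# print(summarize("Hugging Face hosts hundreds of thousands of pretrained "
#                 "models that developers can download, run locally, or "
#                 "fine-tune on their own data."))
```

The enterprise path Jeff goes on to describe layers governance, data-access rules, and compliance on top of this same basic call, which is what the watsonx integration is meant to handle.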
Maybe you’re a financial services company and you have a specific set of rules. Maybe you’re a healthcare company and you have very strong privacy requirements for patients’ data. Maybe you’re a tech company and you hold your customers’, your users’, personal information. So you need to be able to do this work while respecting all of that.

Tim Harford: Jeff Boudier, thank you very much.

Jeff Boudier: Thanks so much, Tim. It was fun.

Malcolm Gladwell: To create the AI models of the future, we are going to need open source. That means there’s a place for business in the open-source community to harness the game-changing potential of AI innovation. Like Jeff said, businesses face unique challenges they need to solve at scale. Without proper support systems, tapping into open-source AI at the enterprise level is daunting: finding the right-sized model for the job, fine-tuning it for its purpose, all while addressing governance requirements around data, privacy, and ethics.

So for businesses, IBM’s collaboration with Hugging Face is a mark of progress, because it signifies that business can tap into open-source AI while preserving enterprise-level integrity. Businesses should embrace the open-source community and the AI future, much like Hugging Face (and its emoji namesake) suggests.

I’m Malcolm Gladwell. This is a paid advertisement from IBM.

Smart Talks with IBM is produced by Matt Romano, David Zha [JAH], Nisha [Nih-sha] Venkat, and Royston Beserve, with Jacob Goldstein. We’re edited by Lidia Jean Kott. Our engineers are Jason Gambrell, Sarah Bruguiere [Brew-Ghare (hard G!)], and Ben Tolliday. Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, Kathy Callaghan [Calla-Han], and the EightBar and IBM teams, as well as the Pushkin marketing team.

Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.