Open AI Transforming Enterprise Operations

Key Points

  • The episode explores “openness” in AI, examining how transparent, open‑source approaches are reshaping business models and expanding what enterprises can achieve with artificial intelligence.
  • Maram Ashuri, Director of Product Management for IBM watsonx, explains how IBM's foundation models, particularly the Granite family, enable faster, more accurate customer-care responses by leveraging internal company data while maintaining higher levels of model transparency.
  • She outlines the shift from mere generative‑AI experimentation to production‑grade deployments, noting challenges such as latency, cost, and energy use, and how smaller, proprietary‑data‑tuned models can deliver comparable performance at a fraction of the expense.
  • Ashuri highlights generative AI as a paradigm shift on par with the rise of the internet and personal computers, arguing that it will dramatically boost workforce productivity by automating routine tasks and freeing employees for higher‑value work.
  • With over 15 years of data‑driven technology experience, Ashuri’s role at IBM places her at the forefront of bringing enterprise‑grade AI capabilities to market, emphasizing the strategic importance of open, transparent models for future business innovation.
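The content-grounded question answering pattern highlighted above (connecting a model to internal company documents so answers stay grounded in them) can be illustrated with a minimal sketch. Everything here is hypothetical: the document names, the word-overlap scoring, and the prompt template are illustrative stand-ins, not IBM or watsonx APIs.

```python
# Minimal sketch of content-grounded Q&A: retrieve the most relevant
# internal document for a question, then build a prompt that asks the
# model to answer only from that document. Illustrative only.

def score(question: str, document: str) -> int:
    """Count words shared by question and document (crude relevance)."""
    q_words = set(question.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

def retrieve(question: str, documents: dict[str, str]) -> str:
    """Return the name of the best-matching internal document."""
    return max(documents, key=lambda name: score(question, documents[name]))

def grounded_prompt(question: str, context: str) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved text."""
    return (
        "Answer using only the context below. Cite the source.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

# Hypothetical internal knowledge base for a customer-care scenario.
docs = {
    "returns_policy": "Customers may return items within 30 days for a refund.",
    "shipping_policy": "Standard shipping takes 5 business days.",
}

question = "How many days do I have to return an item?"
best = retrieve(question, docs)            # picks "returns_policy"
prompt = grounded_prompt(question, docs[best])
```

In production this keyword overlap would be replaced by embedding-based retrieval, but the shape of the flow (retrieve, ground, answer, cite) is the same one described in the episode.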

Full Transcript

# Open AI Transforming Enterprise Operations

**Source:** [https://www.youtube.com/watch?v=YIIbugJbgpE](https://www.youtube.com/watch?v=YIIbugJbgpE)
**Duration:** 00:31:09
**Malcolm Gladwell:** Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season we're diving back into the world of artificial intelligence, but with a focus on the powerful concept of open: its possibilities, implications, and misconceptions. We'll look at openness from a variety of angles and explore how the concept is already reshaping industries, ways of doing business, and our very notion of what's possible.

On today's episode, Jacob Goldstein sat down with Maram Ashuri, the Director of Product Management and head of product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx foundation models. She is a technologist with more than 15 years of experience developing data-driven technologies. The conversation focused on how enterprises can use technology to build and deliver greater transparency in AI. With Granite, Maram explained how the models can be utilized to improve efficiency across various domains. She discussed how these models are being used in real-world business applications, particularly in areas like customer care, where AI can help enable quick, accurate responses based on internal company data. Maram provided a fascinating look into how enterprises have moved from mere experimentation with generative AI to actual production, navigating challenges such as increased latency, cost, and energy consumption. She highlighted how the emerging trend of smaller models customized with proprietary data can potentially deliver high performance at a fraction of the cost, marking a significant shift in how enterprises leverage AI. Whether you're an AI enthusiast or a business leader looking to harness the power of artificial intelligence, this episode is packed with valuable insights and forward-thinking strategies.

**Jacob Goldstein:** Let's just start with your background. How did you come to work at IBM?

**Maram Ashuri:** I joined IBM right after I graduated. I have an AI background, and throughout the years I've held many roles in design, engineering, development, and research, mostly focused on AI application development and design. In my current job, I'm the product owner for watsonx.ai, which is the IBM platform for enterprise AI. What excites me about this job, I would say, is the technology advancements over the last 18 months. We've been witnessing how generative AI has been changing the market. The way that I see it, gen AI has been perhaps one of the largest paradigm shifts when we think about productivity. The same way that the internet and personal computers impacted the productivity of the workforce, now we are witnessing another wave, with all the opportunities it can unlock, especially for enterprise AI, when it comes to enhancing the productivity of the workforce and freeing up time that can potentially be put into higher-value work for the enterprise. So that's the main reason I picked this team: to have an impact on the market and the community, but also, of course, to use the skills that I've gained through all these years at IBM to help establish IBM as the market leader for enterprise AI.

**Jacob Goldstein:** So you talked about gen AI as this sort of generational, transformational technological force, and I'm curious, in terms of how it's going to come into the world, how do you see market adoption of gen AI evolving from here?

**Maram Ashuri:** Well, last year was the year of excitement about generative AI. Most companies were experimenting and exploring with gen AI. We see that energy has shifted towards how to best monetize that technology. Almost half of the market has moved from investigation to pilots; 10% has moved to production. When you are exploring with this technology, you're looking for a wow factor, you're looking for an aha moment; that's why very large, general-purpose models shine. But as companies move toward production and scale, they soon realize the path to success is not that straightforward. For example, the larger the model, the larger the compute resources it requires. That translates to increased latency, that's your response time. That translates to increased cost. That translates to increased carbon footprint and energy consumption. So think about that at the scale of an enterprise in production; some of those can be a showstopper. Because of this, what we actually see emerging in the market is, instead of focusing on very large general-purpose models, companies coming back to very small, trustworthy models that they can customize on their own proprietary data, the data about their customers, the data about their specific domains, to create something differentiated that is much smaller and delivers the performance they want on a target use case for a fraction of the cost.

**Jacob Goldstein:** So let's talk a little more specifically about what you're working on. Let's talk about Granite. First of all, tell me: what is Granite?

**Maram Ashuri:** Granite is our industry-leading family of models, the flagship IBM models. These are the models that we train from scratch. When they're offered through our platform, we offer indemnification and we stand behind them. Today it comes in four flavors: language, code, time series, and geospatial models. The Granite language series covers English, Spanish, German, Portuguese, and Japanese. We have a combination of commercial and open-source language models in Granite. For example, we recently released the Granite 7B language model, a small, powerful English model. On the code front, our models are state-of-the-art, ranging from 3 billion to 34 billion parameters. These are very powerful models that match, or in some cases outperform, the popular open-source models in their weight class.

**Jacob Goldstein:** So I get the idea, big picture, about these models, but it would be helpful to get a sense specifically of what they're doing. Can you give me any specific examples of how these models are being used in businesses in the real world right now?

**Maram Ashuri:** Well, the top use cases for generative AI are really content generation, summarization, and information extraction. Perhaps the most popular use case that we are seeing in the enterprise is content-grounded question answering: using these models as a base, connecting them to a body of information, let's say their policies, their documents, whatever is internal to the enterprise, and getting the model to provide answers to questions based on that. One example is customer agents, customer care. When a customer asks a question, previously the agent responding to the customer had to answer the question, and if they didn't know the answer, escalate it to a product specialist, keeping people on hold on the line while they went to figure out the answer and came back. You can imagine the time it takes to resolve an issue. But now, with LLMs, we have an opportunity to automatically retrieve the information from the internal documents of the company, formulate an answer, and show it to the human agent, and if they verify it against the sources it's coming from, they can relay it directly to the customer. This is a very simple example of how it's impacting customer care.

**Jacob Goldstein:** So one big theme of this season is this idea of open, and one of the things that's interesting to me about the work you're doing is you are using not only Granite, this model IBM developed, but you're also using third-party models from other places. So tell me about that work and how it fits into your real-world enterprise gen AI work.

**Maram Ashuri:** When it comes to model strategy, our strategy is really focused on two pillars: multi-model and multi-deployment. It means that we don't believe one single model rules all the use cases, and I think at this point the market has also realized that. Enterprises on average today are using 5 to 10 different models for different use cases.

**Jacob Goldstein:** Oh, interesting.

**Maram Ashuri:** So in our portfolio, if you look into watsonx.ai today, we are offering a large set of high-performing, state-of-the-art models: open-source models, commercial models that we bring in through our partners, and also IBM-developed models. In addition to all of these, we also have an option to bring your own model from outside the platform. Let's say you have a custom model that you built yourself; you can bring it to the platform. We're really helping customers navigate through a wide range of models and find the right model for their target use case. Throughout that, we've been working heavily with our partners, and, you know, this is a market that is evolving rapidly. We've been at the forefront of speed to delivery. One example that I like to highlight: recently Meta released Llama 3.1 405B, such a powerful model. On the same day that it was released to the market, we made it available on our platform to our customers. And not only did we deliver it on the same day with competitive pricing, we also offer flexibility in where to deploy. We give enterprises the option to deploy these models on the platform of their choice, either multicloud, which can be GCP, AWS, Azure, or IBM Cloud, or on premises. The same for Mistral AI: Mistral AI recently released Mistral Large 2, and on the same day we delivered it through the platform. That's an example of a commercial model. Llama was open source, but Mistral Large 2 is a commercial model that we made available through the platform.

**Jacob Goldstein:** Great. So I want to talk about enterprise-grade foundation models. Just to get into it briefly: what's a foundation model?

**Maram Ashuri:** People associate foundation models with large language models, but large language models are really a subset of foundation models. Large language models are focused on language, but foundation models can be code generators, they can be focused on time series, as we talked about, they can be image models, they can be geospatial models. A foundation model, as the term suggests, is your foundation for creating a series of subsequent models that can be customized for a downstream use case, and that's why they're called foundation models. The LLM is a good example, as the subset for language: you can further customize it on your specific data to get the model to do other work. At their core, these foundation models are trained on an enormous amount of data, large data sets that most institutions today source from the internet. So you can imagine what can potentially go into those models, and then they come to the enterprise and companies start using them. For us, looking into this in particular was triggered by customers asking us to provide client protections on these models. We started thinking: let's look into how the models are trained and whether we are comfortable offering client protections on the models available in the market. And guess what: for a majority of these models there is absolutely no visibility into what data went into them, not much transparency into how the model was trained, and the responsibility lies on you as a customer when you start using those models.

**Jacob Goldstein:** So just to be clear, that is presenting real potential risk to a company that is using these models?

**Maram Ashuri:** It is. It is a potential risk, in particular for customers in highly regulated industries. So what we did for Granite was, when we started training these models from scratch, we went to the corpus of data that was available to us. For example, the very first version of Granite drew 20% of its data from finance and legal, because we have a lot of financial institutions as our clients. We worked directly with IBM Research to identify detectors for harmful information, like hate, abuse, and profanity detectors.

**Jacob Goldstein:** Okay, so we're talking about Granite, this set of models IBM has developed. Let's talk about using Granite on watsonx compared to downloading open-source models. How do those differ?

**Maram Ashuri:** By using Granite on watsonx, you get two things. The first one is the client protection and indemnification that we talked about; you get that if the model is consumed through our platform. And the second is the ecosystem of platform capabilities that we offer to help you create value on top of those models, for example, bringing your data to customize Granite for your own specific use case. One thing that I like to highlight in particular is the AI governance. When you get one of these pre-trained models and you put it in front of your own users, through the input and instructions the user provides, they can nudge the model to potentially create undesired behavior and change the behavior of the model. Because of this, it's extremely important to automatically document the lineage of who touched the model at what point, so that if something happens you can trace it back and see where it's coming from. That's what watsonx.governance offers: automatically documenting the lineage. When you use Granite within the platform, you get all of that. You get end-to-end governance, you get access to all the scalable deployment options available, to deploy on the platform of your choice as we talked about, either multicloud or on premises, and you get access to a wide range of model customization approaches: prompt tuning, fine-tuning, retrieval-augmented generation, agents. There is a whole series of them available to apply to your model.

**Malcolm Gladwell:** This distinction between large language models and foundation models is eye-opening. Maram emphasized that foundation models can be tailored to specific tasks, but with that versatility comes a significant challenge: the lack of transparency in how these models are trained. This poses a real risk, especially in highly regulated industries like finance. Essentially, by using Granite and watsonx together, enterprises get powerful and customizable tools.

**Jacob Goldstein:** So let's talk about the future a little bit. What do you think are some of the big developments we're likely to see in the realm of AI models?

**Maram Ashuri:** Very good question. I feel like the generative AI of the past was powered by large language models; the generative AI of the future is going to reason, plan, act, and reflect.

**Jacob Goldstein:** Huh. And in the context of Granite in particular, what are we likely to see, both in the near term and in the medium to long term?

**Maram Ashuri:** There are multiple elements to implementing the agentic workflow I just mentioned. One element is the LLM itself being able to do the planning and reasoning and acting, and to do something we call tool calling. Basically, a series of tools are made available to the model, and you ask the model to call them. For example, we can say, "Hey Granite, what is the weather like where Jacob lives?" It's going to connect to a web search API, look up your location, then connect to a weather API, get the weather, and come back and formulate an answer and respond. So during this process, it first has to plan how to answer the question, look into what tools are available to it, and call them, and that's a capability of the model. What we did with Granite was expand its capabilities to do function calling. For example, today we have an open-source Granite 20B function-calling model available on Hugging Face to try out; you can grab the model, and it has the capability to do tool calling. I'm anticipating that in the near future the planning, reasoning, acting, and reflecting capabilities of large language models are going to continue to evolve.

**Jacob Goldstein:** So thinking now from the point of view of buyers and users of AI, really the people who are listening from that perspective: as people are evaluating AI tools and solutions, what is the most important thing they should be thinking about? How do you think about that process?

**Maram Ashuri:** I think they should always start with the area they think would benefit from AI, and then, within that area, look into what data they have available to feed into those AI services. Do they have access to quality data? The second question they have to ask themselves is: do I have a trusted partner that can supply what I need to implement AI? That can be the collection of foundation models you're going to need, or the platform capabilities the trusted partner can offer you. The third thing is to go and evaluate the regulations: does regulation allow you to apply AI to the specific area you are investigating and targeting? And last but not least, go back to the principles of design thinking: what is the problem in that area I'm solving with AI, and is AI even appropriate? We want to make sure you use AI not just because it's a cool, hot toy in the market, but because you are convinced it can significantly enhance the experience of your customers in that area. Once you have answers to all four of those questions, then maybe you have a good candidate to start applying AI to.

**Jacob Goldstein:** And what about from the side of project managers who are trying to keep up with how fast things are changing, how fast innovation is happening? What advice would you give those people?

**Maram Ashuri:** My advice would be: focus on agility. This is a market that is evolving rapidly, and the winners will be those who are able to take advantage of the best the market can offer at any point in time. In order to do that, they need to be open to experimentation, to continuous learning, and to rapidly adopting new ideas.

**Jacob Goldstein:** And when you think about the future and gen AI, is there a particular problem that you are most excited to solve?

**Maram Ashuri:** I think that would be productivity. If you look at the stats out there, there are surveys suggesting that 60 to 70% of our employees' time could potentially be enhanced through the productivity gains of generative AI. For example, I personally use my own product for content generation a lot. The time it frees up can be put into higher-value work, and because of that I'm very excited about all the opportunities it represents for enterprises to dedicate their employees' time to higher-value items.

**Jacob Goldstein:** Great. Okay, a couple of Granite-specific questions. What are the key things you want the world to know about Granite?

**Maram Ashuri:** Granite is open, trusted, and targeted. There are two ways to think about openness: one, open as in open weights, available for the public to download; and two, open as in fewer restrictions on how customers can legally use these models for a range of use cases. We have released the Granite open-source models under the Apache license, which enables a large range of use cases. The second one is trusted. We talked about that: it's rooted in the trustworthy governance process we established around how we train these models and the responsibility we take for them. And the third one is targeted, targeted for the enterprise. We talked about exposing Granite to enterprise data, and about domain-specific Granite models, some of them, like COBOL-to-Java translation, targeting a specific enterprise need. So that's Granite: open, trusted, and targeted.

**Jacob Goldstein:** So there are a lot of models out in the world all of a sudden, right? It's a crowded market. Where does Granite fit in that universe? What is the market for Granite?

**Maram Ashuri:** We talked about the enterprise market shifting away from very large general-purpose models to targeted, smaller models. Granite is a small model that enterprises can pick up and customize on their proprietary data to create something differentiated for a target use case. So Granite is well suited as a small, domain-specific, business-ready model, tailored for business and trained on enterprise data to solve enterprise questions.

**Jacob Goldstein:** You mentioned small as one of the things that Granite is. Why is that useful in some contexts for enterprises, for businesses?

**Maram Ashuri:** The larger the model, the larger the compute resources it requires. That translates to increased latency, that's your response time; it translates to increased cost; and it translates to increased carbon footprint and energy consumption. At the scale of enterprise transactions, when you move to production and you want to scale, some of these challenges are multiplied. The cost can add up, the energy consumption can be a serious thing, and the latency, depending on the application, can be a showstopper and a blocker, because for larger, more powerful models it simply takes much longer to process and calculate the output for you.

**Jacob Goldstein:** We are going to finish up with a speed round, and I want you to answer with the first thing that comes to mind. Don't overthink these, okay? Complete this sentence: in five years, AI will be...

**Maram Ashuri:** Invisible.

**Jacob Goldstein:** Ah, I like that. What do you mean by that?

**Maram Ashuri:** Today AI is everywhere, but if you ask my kids at home, they know AI, yet if you ask, "Where is AI? How do you use AI?" they don't know the answer, because it's so blended into their life that they don't feel like it's something they are using; they're just used to it. So when I think of the next generation and the years to come, that generation is so used to AI being part of their life that they feel it's just there. That's one. The second is the simplicity of interaction with AI: you don't feel like you're interacting with a system, it's just there, you talk to AI and everything is automated. So I would say the simplicity, and being blended in to solve the right problems, is the part I'm referring to as invisible. Like the internet: it's everywhere and it's invisible, but we used to dial in. You remember the dialing sound to connect to the internet? It's gone. The internet is completely invisible today.

**Jacob Goldstein:** Right, like we used to talk about logging on, and you don't log on anymore because you're always logged on.

**Maram Ashuri:** Yep, you're always connected.

**Jacob Goldstein:** What's the number one thing that people misunderstand about AI?

**Maram Ashuri:** AI is inevitable, but it should not be feared.

**Jacob Goldstein:** What advice would you give yourself 10 years ago to better prepare you for today?

**Maram Ashuri:** I would say: develop a broad range of skills. Even if you think they will not help you today, they may be valuable in the future.

**Jacob Goldstein:** On the consumer side right now, we hear a lot about chatbots and image generators, but on the business side, what do you think is the next big business application?

**Maram Ashuri:** AI influencers generating content.

**Jacob Goldstein:** Huh. How do you use AI in your day-to-day life today?

**Maram Ashuri:** One simple example is LinkedIn posts. I love to just go to my own product. I'll give you an example, which is my favorite one: Llama 3.1 405B. The post I made on LinkedIn announcing that IBM was releasing the model on the same day was itself generated by Llama 3.1 405B. So, using the same model to generate its own announcement.

**Jacob Goldstein:** Very elegant. Is there anything else I should ask you?

**Maram Ashuri:** Oh, we didn't talk about InstructLab. When you grab a model, you start from the base model, but you then need to customize it on your proprietary data to create value on top of it. InstructLab gives you a method, based on open-source contributions, to collectively contribute to improving the base model. If you're an enterprise, you can leverage your internal employees to all contribute to improving the models. I'll give you an example of why it matters: if you go to Hugging Face today and look for Llama, about 50,000 different Llamas come up, and the reason is that there is no way to contribute back to the base model. If you're a developer, you have to clone a copy of the model and fine-tune it for your own purpose. We figured out a method, which we call InstructLab, to collectively gather all that information, contribute it to the base model, and enhance it. So that's InstructLab. I just wanted to highlight the value of being open, because that's another topic that has been emerging in the market over the past 18 months in particular. I believe the future of AI is open. We've been seeing how the open-source market has been changing how models are made accessible to a wider audience, and good things typically happen when you make technology accessible to a broader community to stress-test it. That's the direction we've been adopting with Granite, and I feel that's really the direction the market is going to move toward going forward.

**Jacob Goldstein:** Yeah, there's this interesting, maybe naively unintuitive, but it makes sense once you think about it, idea that open-source things are safer. You might naively think, oh no, put it in a box so nobody can see it and that'll be safer, but it turns out that if you let everybody poke at it, the world will find the vulnerabilities for you and you can fix them.

**Maram Ashuri:** Right, that's exactly what's going to happen.

**Jacob Goldstein:** Great. It was lovely to talk with you. Thank you so much for your time.

**Maram Ashuri:** Same here. Thanks, Jacob.

**Malcolm Gladwell:** And that wraps up this episode. A huge thanks to Maram and Jacob. Today's conversation opened my eyes to how open technology and AI are intersecting to create more transparent and efficient systems for enterprises. From the power of smaller, more targeted models like Granite to the importance of trust and governance in AI, these developments are reshaping how businesses operate at their core. As we continue to unpack the complexities of artificial intelligence, it's clear that openness, whether in data, technology, or collaboration, is not just a concept but a driving force that can unlock new possibilities.

Smart Talks with IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuade, and Jacob Goldstein. We're edited by Lydia Jean Kott. Our engineers are Sarah Bruguiere and Ben Tolliday. Theme song by Gramoscope. Special thanks to the 8 Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell.

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies, or opinions.
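The tool-calling flow Ashuri walks through in the interview ("Hey Granite, what is the weather like where Jacob lives?") can be sketched as a plan, act, respond loop. This is a mock: the tool functions, their names, and the hard-coded plan are illustrative stand-ins; a real function-calling model would emit the call sequence itself and the tools would wire into actual search and weather APIs.

```python
# Sketch of the tool-calling flow described in the episode: the model plans
# which tools to call, calls them in order, and formulates an answer from
# the results. Tools here are mocks, not real APIs.

def web_search(query: str) -> str:
    """Mock web-search tool: returns a location for a person."""
    return {"Jacob": "New York"}.get(query, "unknown")

def weather_lookup(location: str) -> str:
    """Mock weather tool: returns current conditions for a location."""
    return {"New York": "72F and sunny"}.get(location, "unavailable")

# The registry of tools the model is told it may call.
TOOLS = {"web_search": web_search, "weather_lookup": weather_lookup}

def run_agent(person: str) -> str:
    """Simulate the plan -> act -> respond loop for the weather question."""
    # Step 1: plan. A function-calling model would emit this call sequence
    # itself; here it is hard-coded to keep the sketch self-contained.
    plan = [("web_search", person), ("weather_lookup", None)]

    # Step 2: act. Execute each tool call, feeding each result forward
    # as the argument to the next call when none is given.
    result = None
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg if arg is not None else result)

    # Step 3: respond. Formulate a grounded answer from the tool outputs.
    return f"The weather where {person} lives is {result}."

print(run_agent("Jacob"))
```

The point of the sketch is the division of labor: the model's job is steps 1 and 3 (planning the calls and phrasing the answer), while step 2 is plain tool execution outside the model.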