Generative AI Takes Center Stage at IBM Think

Key Points

  • IBM Think’s research keynotes introduced a “new wave of computing” that expands beyond classical and quantum paradigms to include generative computing models.
  • The conference announced the launch of Watsonx Orchestrate, delivering more than 150 enterprise‑ready AI agents for immediate use.
  • The keynote’s light‑hearted moments, such as Arvind marching a mascot penguin across the stage, showed a playful side that resonated with the audience.
  • Tim Hwang’s Mixture of Experts podcast highlighted related AI news, including a New York Times story on AI hallucinations and recent OpenAI organizational moves.
  • Kate Soule revealed her new book, **AI Value Creators**, which expands on generative AI strategies discussed at Think, and offered a free download link for podcast listeners.


**Source:** [https://www.youtube.com/watch?v=5M-VD9F1W7A](https://www.youtube.com/watch?v=5M-VD9F1W7A)

**Duration:** 00:27:56

## Sections

- [00:00:00](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=0s) **Generative Computing Takes Center Stage** - Panelists highlight IBM Think’s research keynotes, celebrating the debut of generative computing, the launch of over 150 enterprise‑ready AI agents on watsonx Orchestrate, and the light‑hearted mascot moment that energized the audience.
- [00:03:05](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=185s) **From Prompt Engineering to Programmatic AI** - The speaker critiques massive, brittle essay prompts as unsustainable, advocating the adoption of software‑engineering abstractions and clear control flow to integrate LLM capabilities into maintainable, production‑scale systems.
- [00:06:11](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=371s) **Flexible Hybrid AI Model Overview** - The speaker highlights a modular approach to AI, describing IBM's new Granite hybrid expert models that are memory‑efficient, fast, and support long context lengths as a complement to larger models.
- [00:09:18](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=558s) **Rise of Hallucinations in Reasoning Models** - The hosts discuss a recent New York Times article highlighting a surge in hallucinations among newer reasoning AI models, reference model‑card data showing the trend, and acknowledge they lack a clear explanation for why it’s occurring.
- [00:12:22](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=742s) **Persistent Hallucinations in AI** - The speakers discuss the ongoing problem of AI model hallucinations, questioning optimistic predictions of a near‑term fix and concluding that hallucinations are likely to remain a recurring challenge despite future techniques.
- [00:15:30](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=930s) **Managing LLM Hallucinations in Business** - The speakers debate how hallucinations affect different downstream applications, argue that reliability needs depend on use‑case, and acknowledge that hallucinations will persist despite research advances.
- [00:18:36](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=1116s) **Grounded AI and Truth Constraints** - The speaker argues that as models get smarter their factual reliability drops, calling for new hybrid architectures that explicitly enforce truth constraints and noting the rumored $3 billion OpenAI acquisition of Windsurf as evidence that AGI hype may be more marketing than substance.
- [00:21:45](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=1305s) **Wrappers vs Integrators in AI** - The speakers debate whether emerging AI firms are merely GPT “wrappers” or genuine system integrators, noting OpenAI’s dominance in model building, the scarcity of robust integration, and how this perspective supports a $3 billion valuation.
- [00:24:51](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=1491s) **Vertical Integration and AI Moats** - The speakers argue that as AI models become commoditized, firms will pursue competitive advantage by building end‑to‑end, vertically integrated ecosystems that generate high switching costs, likening future AI companies to Apple’s hardware‑software model.
- [00:27:54](https://www.youtube.com/watch?v=5M-VD9F1W7A&t=1674s) **Next Week's Mixture of Experts** - The host announces that the upcoming episode of the Mixture of Experts series will air next week.

## Full Transcript
0:00What's the most exciting thing to come out of IBM Think this year? 0:03Kate Soule is Director of Technical Product Management for Granite. 0:06Kate, welcome back. 0:07Uh, what's your pick for IBM Think? 0:08My pick is the research keynotes. 0:10We talked about a new wave of computing, so we've got traditional classical 0:14computing, we've got quantum computing, and at Think we announced a new way of 0:18building with models: generative computing. 0:21It's really exciting. 0:21Kaoutar El Maghraoui is a Principal Research Scientist and Manager 0:25for Hybrid Cloud platform. 0:26Kaoutar, welcome back. 0:27What was your favorite? 0:28My favorite was also the generative computing part, but also the launch 0:32of a lot of AI agents on, uh, our watsonx Orchestrate 0:37platform, over 150 enterprise ready 0:39AI agents. 0:40That's really huge. 0:41Yeah, that is huge. 0:42And we will talk about that. 0:43And finally, last but not least is Skyler Speakman, Senior Research Scientist. 0:46Skyler, watching, uh, the conference, 0:49what was your favorite? 0:49Yeah, a non-technical take on this is just how much fun they 0:53were having during the keynote. 0:55Arvind marched a mascot penguin across the stage and the crowd loved it. 1:00Uh, so it was really cool to see people having fun, um, up on stage 1:04during his keynotes. Penguins, agents, and programming, all that 1:06and more on today's Mixture of Experts. 1:14I am Tim Hwang and welcome to Mixture of Experts. 1:16Each week, MOE brings together the smartest, most talented, most wonderful 1:20experts in all of artificial intelligence, uh, to talk a little bit about the 1:24biggest news, uh, in the sector. 1:26And this is a big episode. 1:27We've got a lot that we need to talk about as per usual, a really 1:30fascinating story coming outta the New York Times about AI and hallucination.
1:33A bunch of news coming out of OpenAI, uh, in terms of its corporate organization 1:37and its recent acquisition of Windsurf. 1:39But first, uh, I wanted to start with IBM Think, which was the 1:43big IBM conference of the year. 1:45Tons and tons of announcements and things to go through. 1:48But I think, uh, the one that was most important to me, of course, was that, 1:52and I do wanna start with, is Kate, I realize, uh, you have a book coming out. 1:55That was also kind of announced at IBM Think, so maybe I'll 1:58just start there for the plug. 2:00Yeah, no thanks Tim. 2:01So we did, uh, release a book. 2:03I've got it here with me. 2:04It's called AI Value Creators. 2:06Really excited, uh, to be able to share it more broadly. 2:08A lot of what we talked about at Think particularly, uh, in some of the future 2:13looking sessions like on generative computing, we actually have whole 2:16chapters dedicated to, in the book. 2:18It's really all about how can, you know, 2:21folks looking to not just build with generative AI, but kind of 2:24build a competitive moat with generative AI, get the most value 2:28and, and invest in strategic places. 2:30So really, really excited for folks to check it out. 2:32We actually have a download link for all of our Mixture of Expert 2:36listeners, so we'll include that in the show notes and would love any, 2:39uh, feedback the team has, uh, as they, they read through the content. 2:43That's great. 2:43And Kate, I guess for those who are kind of just getting their head 2:46around generative computing, 2:47what's the general concept there? 2:48Do you wanna give us like a little bit of a flavor of how, you know, it 2:51sounds like it's a big part of the keynote, it's a big part of the book. 2:53Just kind of interested in how all these pieces are fitting together and 2:56well, what is generative computing?
2:57Yeah, so I think at the end of the day, it's really just trying to 3:00bring some of generative AI back to the realm of computer science. 3:05You know, if you look at how we've emerged building, uh, applications and agents with 3:10LLMs today, it's all basically a form of 3:13prompt engineering where we end up with these really massive, you 3:17know, pages and pages of prompts. 3:19We call them essay prompts in our book, where it can be 3:22very difficult to maintain. 3:24These prompts are very brittle. 3:25You look at how they're written, they're kind of like over optimized and force 3:29fit for a specific model, and it's just not very, uh, sustainable, secure. 3:33There's all sorts of issues. 3:36If we think about how we build in a more computer science forward 3:39discipline, you know, there needs to be abstractions for key activities 3:43that we want a model to take on. 3:45And there needs to be ways to set, you know, clear control flow of how 3:49we build programs versus, you know, instead of asking a model, first do 3:53this, then do this, then do this. 3:55You know, we can actually 3:56build a lot of the same code. 3:57We don't need to ask a model to do everything. 4:00So it's really about how can we take some of these best practices from 4:04software engineering and computer science and bring in all the power 4:07that models have to be able to express, uh, natural language and run functions 4:13in natural language and bring them together in a much more maintainable way. 4:17Nice. 4:18Yeah, that really is, I think the future is just like now moving 4:20into like, how do we make this production, you know, at scale. 4:23So it's very exciting to see. 4:24Absolutely. 4:25And I think there's a lot also that goes on when you start to build 4:28things in a little bit more structure where you can take advantage of a 4:32lot of techniques that are coming out in the field around inference 4:35scaling and inference time compute. 
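The shift Kate describes, from one brittle essay prompt to small model calls composed with ordinary control flow, can be sketched in a few lines. This is a hypothetical illustration, not IBM's generative computing API; `call_model` is a stub standing in for any LLM client.

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM call (watsonx, a local model, etc.)."""
    return f"<model output for: {prompt[:40]}>"

def summarize(text: str) -> str:
    # One narrow, named responsibility per call, not "do everything".
    return call_model(f"Summarize in two sentences:\n{text}")

def classify_sentiment(text: str) -> str:
    answer = call_model(f"Answer only 'positive' or 'negative':\n{text}")
    # Program-level validation instead of trusting free-form output.
    return answer if answer in ("positive", "negative") else "unknown"

def triage(ticket: str) -> dict:
    # The control flow lives in code, not in a pages-long prompt that
    # tells the model "first do this, then do this, then do this".
    return {
        "summary": summarize(ticket),
        "sentiment": classify_sentiment(ticket),
    }

print(triage("The new release crashes on startup and support is unresponsive."))
```

Because each step is a plain function, it can be unit-tested, swapped to a different model, or replaced outright by deterministic code without touching the rest of the program.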
4:37So instead of running one big, massive prompt once, how do you 4:40break it up into smaller parts, run multiple generations and use that to 4:47create an even richer response often in far less time, far less compute. 4:51Uh, and so all of that and more we, we really get into in the book. 4:54That's great. 4:55Yeah. 4:55Well, I encourage everybody to check it out. 4:56Um, I think the next one I want to touch on is Kaoutar. 4:59You have already won the MOE award for mentioning agent first in the episode. 5:03Um, but, uh, but it is genuinely exciting. 5:05I mean, in some ways it's no surprise that IBM would be announcing a, a a a 5:09kind of like product leap in agents. 5:11But do you wanna talk a little bit about what's happening and, 5:13and why you find it exciting? 5:15Yes, definitely. 5:16So IBM, you know, at Think introduced, you know, over 150 pre-built AI agents, um, 5:22through the watsonx Orchestrate platform. 5:24And I, I thought that's really huge, you know, enabling, you 5:27know, basically enterprises to deploy AI driven workloads rapidly. 5:30So these agents, they're. 5:32They're designed to, to be kind of prebuilt, uh, uh, you can integrate them 5:37seamlessly with popular enterprise tools like Salesforce and Workday and Adobe, and 5:43allows, you know, businesses to automate tasks and enhance also productivity. 5:47So. 5:48And you know, this is, you know, kind of showcasing our approach, IBM's approach to 5:53support the creation of custom AI agents. 5:55I think, which is also very important, relying first on the Granite models 6:00as well as models from Meta and Mistral. 6:02So it's also modular approach that provides you flexibility, 6:06that also facilitates, you know, tailoring, you know, your solutions 6:09for diverse business needs. 6:11I think that that was also very, very important. 6:14Um. 
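One common form of the inference-time scaling Kate mentions (many small generations instead of one massive prompt) is self-consistency: sample several answers and take the majority vote. A minimal sketch; the sampling is stubbed with a deterministic stand-in so it runs without a model.

```python
from collections import Counter

def sample_answer(question: str, seed: int) -> str:
    """Stand-in for one sampled LLM generation (temperature > 0 in practice).
    Deterministic here: every fourth seed returns a stray answer."""
    return "41" if seed % 4 == 0 else "42"

def self_consistent_answer(question: str, n: int = 8) -> str:
    # Majority vote over n independent samples: aggregation turns several
    # cheap, noisy generations into one more reliable answer.
    votes = Counter(sample_answer(question, seed=i) for i in range(n))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))  # prints 42: stray samples are outvoted
```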
6:15So basically, you know, this flexibility that it provides is not just about, you 6:20know, one, you know, uh, one approach, but you know, you can integrate different 6:24models, you know, in a, in a flexible and modular way and allows you also 6:29to customize in addition to the prebuilt existing AI agents that 6:34you can just add and, uh, customize. 6:36Yeah, for sure. 6:37And I did wanna touch on that. 6:38I mean, Skyler, before we talk about the mascot. 6:40Which I do want to hear more about. 6:42But, um, I guess, uh, Kate, uh, the mention of Granite, I guess 6:45you've been name checked, so I do gotta kind of bring it back to you. 6:48Um, there, I understand there is an announcement coming out about 6:51Granite actually from IBM Think, 6:53so on Friday actually, 6:55we did a sneak, uh, preview. 6:57We didn't tell anyone we were gonna do this. 6:59We released a preview of our Granite 4 models, and we got to 7:02talk about them a lot at Think. 7:04That was also a really exciting part of the conference. 7:07These models, if you, we can, um, post a link to the blog that talks 7:10about the new architecture behind them. 7:12But basically they're a mixture of experts hybrid, uh, model. 7:17So they are very fast, very efficient. 7:21The tiny preview that we just released only takes 15 gigs of memory. 7:25So, uh, even running, you know, 128K context length 7:28with multiple concurrencies. 7:30So we think these models are gonna be really efficient and excellent 7:34counterpoints to complement much larger models that are being deployed. 7:37You know, having those bigger models and then the smaller efficient Granite 7:40models working together hand in hand. 7:42I really like the emphasis here on smaller domain specific and 7:45also the energy efficiency.
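The memory figures quoted above can be sanity-checked with back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter, with the KV cache for long contexts on top. The numbers below are illustrative, not official Granite specifications.

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone (KV cache excluded)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# fp16 weights take 2 bytes per parameter; 8-bit quantization takes 1.
print(f"7B at fp16:   {weight_memory_gb(7, 2):.0f} GB")    # single-GPU territory
print(f"20B at int8:  {weight_memory_gb(20, 1):.0f} GB")
print(f"175B at fp16: {weight_memory_gb(175, 2):.0f} GB")  # needs a multi-GPU cluster
```

This is why a small hybrid model whose weights plus long-context cache fit in roughly 15 GB can sit alongside much larger hosted models rather than replace them.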
7:48'cause you know, if you see these models, they, you know, the, the sizes, 7:50they range from three to 20 billion parameters as opposed to what you see, 7:54like, uh, you know, trillion parameters or many billion parameters in the other, 7:58in the open source or in other models. 8:00So it's, it's, you know, the, the, the, the key thing here is, you know, 8:04how do you build these things that are optimized for specific industries? 8:08And offering cost effective and efficient alternative to the 8:11larger general purpose model. 8:13So I really like, you know, the, uh, focus on the efficiency here. 8:16Yeah, for sure. 8:17So, Skyler, uh, curious if you wanna tell us more about the mascot, but I 8:20think in general, like, I, I thought what was very striking about your response 8:23was you're like, it's so much fun. 8:25Uh, which I think is actually like an important part of all this. 8:28Um, but exactly. 8:29To kind of hear what you saw. 8:30Yeah, I know. 8:30I think that just sort of captures it. 8:32They had kind of this transition from having these Ferrari race. 8:36Car team members up on stage talking about how they're using IBM Tech. 8:40And then there was this, uh, pivot to IBM's relationship with Red Hat, 8:44and of course, Linux more broadly, and a penguin mascot just starts 8:48walking across the back of the stage. 8:51Great. 8:51So hats off to whoever had that planned. 8:53Maybe it was last minute. 8:54Maybe that's been someone's dream for a, for a year. 8:57I don't know. 8:58But I thought it was, uh, I thought it was well done. 9:00Yeah, for sure. 9:01And I do like, it's like one of the things I'm really fascinated by is 9:03like how all the companies that are kind of in the AI space are kind of 9:07coming up with their own brands about how they present AI stuff, right? 9:10Like some companies are very serious and some companies are very 9:13technical, uh, uh, like in, like, in kind of like a very granular, 9:17kind of like almost academic way. 
9:18And it's, it's kind of fun seeing IBM kind of take like a certain 9:20level of fun in terms of like how to present and talk about this stuff. 9:24So it's very cool. 9:29I'm gonna move us on to, uh, our next topic. 9:32Um, super interesting article that kind of hit the New York Times, uh, 9:35I believe this week or last week. 9:37Um, focusing on sort of the kind of rise of hallucinations with, um, 9:43the emergence of reasoning models. 9:45Um, and we haven't talked about hallucinations on the show for a 9:47little while, but obviously it kind of remains a sort of big question 9:51and a big problem that people are sort of working on in the space. 9:54Um, and I guess maybe Skyler, maybe I'll stay with you, is do you have 9:58an intuition for why it seems so? 10:00The article seemed to argue that like reasoning models are like 10:03newly hallucinatory in a way that we are learning to deal with. 10:07And is that, is that the case? 10:09And do you have an intuition for 10:10why? Hallucinations themselves are not new. 10:13Um, it does appear that they are on the rise. 10:16There was this great contraposition of they had asked, uh, you 10:20know, a spokesperson for comment. 10:21They said, no, they're, they're not on the rise. 10:23But if you go and check the receipts and look at the model cards that 10:26OpenAI also produces, you do see o4-mini hallucinating more than 10:31o3 and o3 hallucinating more than o1, so it is like definitely on the rise. 10:35Yes it is. 10:36Um, and but they're also very clear to say they don't know why, and 10:41I, I'm also gonna draw a blank. 10:42Sorry. 10:43I'm not quite sure. 10:44I don't have any really gut instincts as to why those are 10:47increasing while accuracies are going up. 10:49Uh, they're getting better at math, but hallucinations are also increasing, 10:52so it is something that really does need a lot more attention paid to it.
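The model-card numbers referenced above come from evals that grade each response against a reference and report the flagged fraction. A toy sketch of that bookkeeping; the records and their grades below are invented for illustration.

```python
def hallucination_rate(records: list) -> float:
    """Fraction of graded responses whose claims were unsupported."""
    flagged = sum(1 for r in records if not r["supported"])
    return flagged / len(records)

# Invented eval records; in practice a judge model or human graders
# mark each response as supported by the reference or not.
results = [
    {"question": "Where was the CEO born?", "supported": True},
    {"question": "What year was the merger?", "supported": False},
    {"question": "Who founded the company?", "supported": True},
    {"question": "What is the town's population?", "supported": True},
]
print(f"hallucination rate: {hallucination_rate(results):.0%}")  # prints 25%
```

Comparing this single number across model generations is exactly how the rising trend shows up in the model cards.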
10:56Yeah, and I think this is one of the really interesting things is like, I feel 10:59like the AI era is teaching us all the ways in which intelligence is very lumpy. 11:03You know, like the model gets really good at one thing, but, and you kind of expect 11:06that it'll be good at everything else in a well-rounded way, but like that kind of 11:09doesn't seem to be the case. 11:11Um, I guess, uh, Kate, like I'm curious if you've got 11:15intuitions or similar like Skyler. 11:16You're like, I, I don't know. 11:17It's just weird. 11:19Yeah, I mean, I will, uh, give, give my thoughts obviously. 11:23I think there's a lot that's still left to be discovered, 11:26but to me it seems like it's a 11:29kind of classic example of just misaligned incentives. 11:32So we've got, you know, these models are going through extensive reinforcement 11:38learning pipelines in order to improve the model's verbosity among other 11:43things to get it to say more and to try and craft these well-rounded 11:48responses that humans will prefer. 11:51And, you know, there is some degree of, you know, 11:55any human likes to hear people who are persuasive speakers talk. 11:59We're not very good at fact checking things, and we don't 12:02naturally resonate with something that is just black and white. 12:05The answer is X. We wanna know why, we wanna hear more and more thought, 12:08and we question things less when we hear that, uh, thought, um, process. 12:14And that's a little bit counter to a different objective function 12:18that was originally solved for, which is much more: get the answer 12:21exactly correct. 12:22And that's how pre-reasoning models were. 12:25That was certainly the focus.
12:26And so I expect there's just some, you know, misalignment in those objective 12:30functions and we're trying to solve for a lot of different things and we're waiting, 12:34having these really verbose thought processes that are much harder to check 12:38for factual accuracy when that training data is created and that, you know, 12:43just innately are going to promote having more chances to hallucinate in any 12:48given response than, you know, "the answer 12:50is, the answer's X." 12:52Kaoutar. 12:52Are you, um, optimistic, uh, in the end with all this? 12:55I remember a few years ago I was talking to a researcher who is like, 12:58don't worry, in like 18 months 13:00there will just be no more hallucinations. 13:02We're gonna just crack the problem. 13:03It's solved, right? 13:04Like clearly there's gonna be less and less hallucinations 13:05and it's just gonna be done. 13:07And I guess kind of what's interesting about this article is almost the 13:09idea that like hallucinations might be kind of like a thing that keeps 13:12coming back as the technology advances. 13:15Um, and I guess from where you're sitting, I mean, do you feel like yeah, 13:18maybe in 2030, you know, we won't even be talking about hallucination anymore 13:21'cause it's kind of a solved problem? 13:23Or is this really something persistent that we're gonna be 13:24dealing with for a long time? 13:26Yeah, I think it's gonna be persistent. 13:28Uh, maybe we'll have, you know, different techniques or 13:31methods or maybe hybrid approaches where we need to do also factual checks. 13:35So what's happening here is these models, they use, you 13:40know, probabilities and not logic to predict these responses. 13:44And reinforcement learning helps in math and coding, but also causes the model, 13:49like Kate mentioned, to forget, you know, some of these, uh, consistencies. 13:53You know, the, the reasoning models.
13:55They take these multi-step approaches to the problem solving. 13:59Each step introduces also this compound effect of hallucination. 14:04So the tools today, they can't keep up. 14:06So of course a lot of work in research to build tools to trace, you know, the 14:10AI output back to the training data. 14:13But these systems are very, you know, complex, too, too 14:16large to fully understand. 14:18And the explanations even that are shown to the user sometimes 14:21they really don't reflect the model's actual internal process. 14:25So what are really these, the broad implications here? 14:28So accuracy is kind of eroding here. 14:30Even as the LLMs become more powerful in cognitive tasks, their grip on the factual 14:35reliability, you know, is loosening here. 14:38And of course this has a lot of enterprise concerns. 14:41And so I think the challenge still remains unresolved. 14:44You know, there's quite 14:45many efforts from OpenAI, Google, DeepSeek, and others, there is no clear fix. 14:50So hallucination appears to be, you know, kind of an intrinsic limitation 14:55of the current model architectures. 14:57So what I'm thinking is we need kind of hybrid approaches, not just relying 15:01on the model, but see if we can, 15:04you know, combine that with other systems to, to do these reasoning, symbolic 15:09reasoning, combine them with symbolic reasoning systems or factual checking. 15:13So hopefully that can kind of resolve these issues that we find. 15:17Yeah, and I did wanna get into that as like, I mean, you Kaoutar, I think 15:20you point out quite rightly, like from an enterprise standpoint, I'm 15:22a company that's about to implement this stuff, and I'm reading in the New 15:25York Times that like these great new models that people are trying to 15:27pitch me on, like, hallucinate more. 15:30I mean, Skyler, what's, what's to be done, right? 15:32I think
Kaoutar is kind of throwing out like maybe we need more symbolic approaches, like 15:36what is the kind of toolkit of things that we do to try to kind of deal with 15:39this, particularly in a setting where, you know, a business is trying to 15:42implement this, they need the reliability. 15:44I think that point right there at the end is very important. 15:46Which use case are these being built for? Hallucinations during your Google search, 15:52it's annoying, but it's not, not game breaking. 15:55Uh, using a tool in order to improve some sort of legal argument or medical 16:00diagnosis, incredibly important. 16:02So I, I think these, these hallucinations will always be with us. 16:06Um, I did think it would be on a downward trend. 16:08Tim, as you had said earlier, I am surprised they're going up because 16:12there are teams of researchers working on this problem and 16:15they seem to be falling behind the pace 16:18of progress of the LLMs if we're just kind of, you know, reading the 16:21hallucination rates as they increase. 16:23Um, so I think what's probably the most key important part here is 16:27what's your downstream use case? 16:29And, if hallucinations are game breaking in those, 16:34um, then, then there will be some serious pause about how you really 16:38roll out AI into your workflows. 16:40Um, if you're using it to, to speed up a, uh, internet query, 16:44um, I think we're gonna have some entertaining hallucinations for 16:47another five years to come yet. 16:49And if I can make a plug for generative computing, like I think this is exactly 16:54the type of thing we're trying to solve and to wrap our heads around 16:57for real deployed use cases, how do we set up workflows so that it's not 17:03just a model giving carte blanche to go and create tons of chain of thought, 17:08do a bunch of actions, hallucinate some things, give a response back, 17:12but instead, how can you have
17:14very programmatic control steps with checks where you're validating the 17:18outputs programmatically, uh, and where you really reduce the scope of 17:23what the model does at any one point in time so that you can really try 17:27and reduce your risks of hallucination and other safety issues. 17:32A key part of that is also bringing in additional layers of security. 17:35So for example, we've got Granite Guardian models, which can detect hallucinations 17:40in any grounded response or function call. 17:43So there's all sorts of tools that you can start to layer in if you're 17:46not taking what I call like the YOLO prompt approach where you just 17:50create one big approach, one big prompt, throw it at the model and you know, 17:53fingers crossed hope for the best. 17:55But if you start to break this out, it takes a little bit more work to set up, 17:58but it gives you so much more control over the risks and the performance at any 18:03given part in the process that I think it will be, you know, really critical for 18:08real-life enterprise deployments. 18:10Yeah. 18:10I think this is still like one of the kind of funniest ironies I think of the 18:13AI era is, you know, you've built a thing that's like, it's in the computer, but 18:18it doesn't really behave like computing. 18:20And like there's all this work now to kind of like put it back in the box and make it 18:23behave like a more traditional computer. 18:25'cause you need it for all sorts of like very practical, you know, reliability 18:28reasons, security reasons, safety reasons. 18:30Like there are prompts out there where it says in all caps, do not hallucinate. 18:35Like that's not computer science. 18:36Like this is, we've lost all, you know, uh, grounding to reality here. 18:42That's not how computer science is done. 18:44So we need to get to a better way of working. 18:46Yeah.
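The programmatic checks described above can be sketched as a small loop: a narrow generation step, a guard step, and a fail-closed fallback. Both model calls are stubs; a real deployment might pair a generator model with a detector such as Granite Guardian, but none of the names below are real APIs.

```python
def draft_answer(question: str, context: str) -> str:
    """Stub for a grounded generation call scoped to one narrow task."""
    return "Parts are covered for 12 months from purchase."

def guard_flags_hallucination(answer: str, context: str) -> bool:
    """Stub detector: flags the answer if its key claim is absent from the
    grounding context (a real detector would be a trained guard model)."""
    return "12 months" not in context

def answer_with_checks(question: str, context: str, retries: int = 2) -> str:
    # Each generation is validated before it is returned; on repeated
    # failure we fail closed instead of shipping an unchecked answer.
    for _ in range(retries + 1):
        answer = draft_answer(question, context)
        if not guard_flags_hallucination(answer, context):
            return answer
    return "Unable to produce a grounded answer."

context = "Warranty: parts are covered for 12 months from purchase."
print(answer_with_checks("How long are parts covered?", context))
```

The contrast with the "YOLO prompt" approach is that the failure mode here is an explicit refusal, not a confident fabrication.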
18:47It is the fact that we're seeing right now: the smarter these models 18:50are getting at reasoning, the less we can trust them on facts. 18:53So put in hallucinations, you know, they may require more than just reinforcement 18:57learning as it is being used today. 19:00So like, uh, Kate mentioned it, we really need new architectures and 19:04new programming paradigms that really explicitly encode truth constraints, 19:08or modular hybrid systems that combine LLMs with verifiable databases 19:14or symbolic logic engines, you know, and that's, you know, I think at the core of 19:18what generative computing is trying to do. 19:25I wanna move us to the last story of today. 19:28Uh, it was announced, or rather it was leaked ultimately, um, that OpenAI 19:32is about to make an acquisition of Windsurf, um, which is, uh, effectively 19:36kind of a coding environment. 19:38Um, and the number that has been leaked is that the 19:41acquisition would be $3 billion, right? 19:43Which would make it the biggest OpenAI acquisition to date. 19:46And obviously just like a gigantic acquisition, uh, in its own right. 19:50Um, and. 19:52You know, I guess maybe Kate, to go back to you, I like some people were saying 19:55online that this is kind of like, in some ways like evidence that a lot of 19:58this AGI stuff is marketing, right? 20:00Because if you really believe that AGI was about to come about, why would 20:04you spend $3 billion on, you know, essentially like a text editor with 20:08like some AI components added to it? 20:10Um, and, and so yeah, kind of curious about like how you size that up. 20:14Like do you buy that argument, which is like, yeah, it kind of seems 20:16like maybe OpenAI is speaking out of two sides of its mouth here, 20:19so. 20:20I think OpenAI probably is speaking out of many different 20:23sides of its mouth at all times. 20:24But, um, I do think that it makes a lot of sense and I don't 20:29think it's mutually exclusive.
20:31Uh, so if you look at how OpenAI became the behemoth it is today, 20:37they released a chat interface. 20:38They found a UI that all of a sudden made their models relevant to the 20:43mass consumers, and then they had 20:46millions of people all of a sudden using that interface, generating 20:51data that they use to bootstrap their way, like rocket ship their way 20:55into really high performance models. 20:57And I think what we're seeing is the killer use case of 2025 and probably 21:03for a while is coding assistance. 21:05And they don't have their own UI, their own access to developers in that arena. 21:10So they're losing that advantage that 21:12gave them this amazing starting point in position, and so I see it very much as 21:18their, you know, and it makes total sense. 21:20They would spend this type of money on it, their way to try and regain some of 21:23that advantage and to better understand how their users are using the models and 21:29figuring out how to continue to improve the models moving forward. 21:33Skyler, this is like a little bit of a weird outcome though, right? 21:36Because I, I could have remembered when like Chat GPT first came out and everybody 21:40was doing kind of like startups around AI, people were like, oh, well you're 21:43just like a thin wrapper around GPT. 21:45Or like, you know, that's not a real company, that's just a wrapper around 21:48GPT, but like $3 billion, like, it really does feel like these 21:52wrappers are, are quite valuable now. 21:54Right. 21:54And it's kind of almost like an inversion from what we thought, 21:57you know, earlier in the game. 21:58Um, is that the right interpretation? 22:00I think 22:00while we're talking about doublespeak or talking about both sides 22:04of your mouth, I think on one hand you can call it a wrapper. 22:07I think on another hand you can view Windsurf or some of these other, 22:10uh, companies as integrators, and OpenAI is great at model building.
22:16Um, but they haven't, as Kate's pointed out, they haven't really 22:19integrated into other spaces. 22:20They had a great chat bot interface. 22:22Um, and I think while these models are continuing to grow, integration is 22:28the complementary scarce factor that's lagging behind, and so, yes, wrapper 22:32or integrator, depending on which way you really view it. 22:35Um, I do think, I, I do think OpenAI knows where it sits in 22:39terms of the model building game. 22:41Um, and they probably saw a bit of a, a bit of a weakness in their 22:45own structure of how do we actually deploy this on people's machines? 22:48That's not 22:49a chat interface. 22:50And so again, maybe thinking of this more as, uh, integrating systems into 22:55the, uh, language models, uh, rather than a wrapper is probably why you 22:59come up with a $3 billion, as opposed to the, uh, just-a-wrapper take. 23:03Um, how, how it plays out, 23:05we don't know. 23:06Uh, but I do think there's this interesting take on the difference 23:10between building models and then actually integrating those into workflows. 23:13And this might be OpenAI covering its bases on the latter. 23:17Yeah. 23:17I love the idea that's kind of like, a valuable wrapper is an integrator. 23:20Yeah. 23:21It's like, yes. 23:21That's when, when once you get valuable enough, like that's what 23:24you've transformed into, um, Kaoutar, where's, where does this all go? 23:27Right. Because it kind of suggests 23:29like this sort of vertical integration in the space where, you know, coding 23:33assistance obviously is like a really big use case as Kate mentioned. 23:36And so it kind of makes sense that the model provider would eventually 23:38kind of like get one of those, right. 23:41And it would be vertically integrated. 23:42Like I'm kind of thinking about like are there other domains you think 23:45that an OpenAI might be interested in?
23:46Because I think what's interesting about AI, right, is of course 23:48that it can be applied across all these different domains. 23:51And so it's kind of like, well, maybe it's not gonna be a $3 billion 23:54acquisition, but like, where else could they be going, I guess, 23:57that they might want to kind of create this sort of, like, you know, where 23:59they both control the model layer and then also the application layer? 24:03Yeah, that's a very good point. 24:04And I think the, the example that Windsurf showed us here is they built 24:09this sticky developer workflow and, uh, additional trust layer over GPT. 24:14Like, you know, what we all were referring to as the wrapper. 24:17And here OpenAI's reaction 24:19is that they don't want just to own the model, but also the 24:23developer experience and the ecosystem. 24:25So it, it seems like we're 24:27entering here a phase where these verticalized copilots, for 24:30example, for finance, for law, for science, for medical, et cetera, 24:34they're the new battleground. 24:36And owning the UX layer is a very strategic approach here, and I think 24:40that's what's, you know, it's a smart play that OpenAI is doing, because 24:45as the model layer commoditizes here, 24:48the moat is the ecosystem and the developer tooling. 24:51And especially as we are moving more into this agentic AI, this vertical 24:54integration becomes very important if you really want to have a strategic advantage 24:59and be competitive in the marketplace. 25:01Yeah, and I think it kind of leads to a world, um, 25:04where it kind of feels like maybe OpenAI is gonna become, like, they're gonna 25:07almost like take the Apple model, right? 25:09Where like everything's vertically integrated, you know, they build the 25:11hardware, they have like, you know, apps that are like, definitely their 25:14apps, and it's just kind of end to end.
25:16Um, I mean, Kate, do you think that's gonna be the sort of future of AI, where 25:20you almost have like, kind of like some companies that are like Apple, and other 25:23companies that are just like, kind of, you know, it's like the ThinkPad, right? 25:25It's like a, a piece of a computer that you can run anything on? 25:28No, I, I definitely agree. 25:29And building on Kaoutar, I really like how you framed it. 25:32As you know, we're starting to see 25:33commoditization at the model layer. 25:35And I think for a lot of, you know, tasks like coding assistance, we are 25:41absolutely hitting a point where many models are gonna start to converge on 25:45very similar levels of performance. 25:47And so then how do you differentiate? 25:48You make really high switching costs. 25:50Or, how do you develop your competitive moat, 25:52rather? You make really high switching costs, so that once you're kind of 25:56in the ecosystem, you're not gonna switch over to whoever's, you know, offering the 26:00same thing for a few cents cheaper. 26:03And from that perspective, I think OpenAI, and I think other providers, are 26:07going to continue to invest in that. 26:09And that's why it's really important we continue to support a robust 26:13open source ecosystem, in order to make sure that we have kind of 26:18diversity of technology, of thought, and ultimately are optimizing the 26:23efficiency of generative AI and trying to continue to bring down costs and, 26:27and push advantages, and make sure that we don't just get kind of locked into 26:30these, uh, single provider ecosystems. 26:32Yeah, for sure. 26:33Skyler, any thoughts on this? 26:34An analogy I've heard once before was, I don't know, you go back 30 years and 26:38people defined their compute experience by what OS they used, you know, whether you 26:42were Windows or Mac, and then 26:44that converged, and then it was what browser you used that defined your user 26:48experience, and those have converged.
26:50Um, right now we're in the space where people, you know, swear by one 26:54particular, uh, LLM, and I do think that will eventually converge as well. 26:58There will be small nuances here and there, but at least from 27:01a consumer perspective, I do see the same, uh, convergence. 27:04Um, so yeah, we've seen it happen before across technology, 27:07where that sort of decision defined your compute experience, 27:12and then fast forward five years and you can see that actually a lot 27:15of the options are pretty similar. 27:17Um, I can see that sort of progression happening, um, here with 27:21your chatbot of choice. 27:23Yeah. 27:23It kind of makes me think a little bit about, if you remember 27:24that old commercial, like, oh, I'm, I'm a Mac, I'm a PC. 27:27Yep. It's like, I'm waiting for that commercial. 27:29That'll be like, 27:29I'm a, I'm an OpenAI coding assistant. 27:32You know, like, I'm an open source coding assistant. 27:35Uh, well, more to come soon. 27:37Um, as always, action packed, a lot to cover, uh, way more to 27:40cover than we have time for. 27:42Um, but as always, thanks for joining us. 27:43Skyler, great to see you again. Kaoutar, Kate, always great 27:46to have you, uh, on the show. 27:48And, uh, thanks to all you listeners. 27:49Uh, if you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, 27:52and podcast platforms everywhere. 27:54And we'll see you next week on Mixture of Experts.