AI Takes Over Hollywood

Key Points

  • The panel speculates that by 2030 most summer blockbusters will be fully computer‑generated, with mixed hopes that traditional filmmaking—especially directors like Tarantino—will still survive.
  • Guests Marina Danilevsky, Abraham Daniels, and Gabe Goodhart share contrasting views: Marina is upbeat, Abraham worries about losing real actors, and Gabe hopes AI‑generated animation still involves practical effects like bodysuits.
  • Host Tim Huang introduces “Mixture of Experts,” previewing topics such as the “end of Stack Overflow,” a new project called llm‑d, and Microsoft’s NLWeb release.
  • Google I/O’s headline AI announcements are highlighted, including a $250 “AI Ultra” subscription tier and the launch of VEO 3, a text‑to‑video (and now audio) generation model.
  • Abraham expresses skepticism that high‑quality, AI‑generated video can replace live‑action filmmaking, citing current limitations on video length and overall realism.

**Source:** [https://www.youtube.com/watch?v=rNk3OuUj1UA](https://www.youtube.com/watch?v=rNk3OuUj1UA)
**Duration:** 00:41:55

## Sections

- [00:00:00](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=0s) **AI‑Generated Summer Blockbusters 2030** - Experts humorously predict that by 2030 most summer blockbuster films will be entirely computer‑generated, while yearning for some live‑action elements amid a broader discussion of AI news.
- [00:03:07](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=187s) **Evaluating LLM Subscription Value** - The speaker questions the practicality and consumer appeal of high‑priced LLM subscriptions amid emerging open‑source alternatives and a challenging market environment.
- [00:06:13](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=373s) **Open Source Unbundling the AI Market** - The speaker compares the emergence of open‑source AI models and frameworks to early streaming services that dismantled bundled offerings, suggesting a competitive split between paid, bundled solutions and open alternatives will define the industry's future.
- [00:09:22](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=562s) **Search as Baseline, Google’s Momentum** - The speaker argues that reliable search is a fundamental requirement for any agentic AI framework and wonders whether recent events like Google I/O signal that Google is narrowing the competitive AI gap.
- [00:12:23](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=743s) **AI Threats to Stack Overflow** - The speaker cautions that generative AI could undermine Stack Overflow by replacing nuanced human expertise with homogenized answers and concentrating fresh knowledge in proprietary tools, and proposes anonymized AI‑generated contributions to preserve the platform’s knowledge base.
- [00:15:30](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=930s) **AI Content Shift & Collaboration** - The speaker notes how AI tools are siphoning traffic from traditional content creators, argues for shared high‑quality AI‑generated answers, and envisions subscription‑based platforms that curate expert‑driven responses.
- [00:18:41](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=1121s) **AI Threat to Stack Overflow** - The participants debate how rapidly improving coding AIs are diminishing Stack Overflow traffic—accelerating a pre‑existing decline—and prompting developers to migrate to Discord, WhatsApp, and similar chat platforms, raising questions about the future relevance of the site and its SEO impact.
- [00:21:48](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=1308s) **Introducing llm-d: Kubernetes Inference Stack** - The speakers unveil the open‑source, Kubernetes‑native llm‑d platform for distributed LLM inference and argue that as models become commoditized, developers will select them based on ecosystem support and performance‑to‑cost ratios.
- [00:24:51](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=1491s) **Prefix Caching and Request Routing** - The speaker explains how reusing pre‑computed token prefixes and intelligently routing requests to servers that already hold those prefixes minimizes redundant computation and maximizes GPU utilization, a strategy aimed at improving LLM serving efficiency especially for constrained enterprise environments.
- [00:28:03](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=1683s) **Red Hat's Open‑Source Support Play** - The speaker argues that open‑sourcing complex LLM technology creates a market for Red Hat to monetize by offering enterprise support, mirroring the proven business model of Kubernetes.
- [00:31:09](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=1869s) **Unified Agent Protocol for Web** - The speaker debates conversational interfaces becoming dominant, explains the MCP server concept that makes website content discoverable to agents via a standardized data handshake, and stresses that a unified protocol—not just chat—will enable search, scraping, and actions across the web.
- [00:34:10](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=2050s) **Toward Bidirectional AI Content Interaction** - The speaker argues that while a simple chat‑window UI and a unidirectional MCP protocol can conveniently expose site content to AI agents, true value requires a two‑way interaction model that lets sites act not just as data providers but as interactive AI applications.
- [00:37:14](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=2234s) **Anti-Agent Strategies and Open Protocols** - The speakers debate emerging anti‑agent tactics, the need for incentives that encourage creators to share content with AI, and whether open standards such as HTTP or a new MCP/Tim Context protocol will become dominant over walled‑garden alternatives.
- [00:40:16](https://www.youtube.com/watch?v=rNk3OuUj1UA&t=2416s) **Coalescing Web Protocol Evolution** - The speaker contends that emerging web protocols will succeed only if they combine solid creator‑side technology with reliable, user‑friendly consumption—potentially aided by AI and driven by engineers’ impatience with flaky implementations.

## Full Transcript
0:00It's 2030 are the majority of summer blockbuster films, 0:03entirely computer generated. 0:05Marina Danilevsky is a Senior Research Scientist. 0:07Marina, welcome back. 0:08What's your prediction? 0:09Summer blockbuster films may be submissions to Cannes, no. 0:12Okay, 0:13Cool. Sounds great. 0:14Abraham Daniels is a Senior Technical Product Manager for Granite uh, Abraham. 0:18Welcome back. 0:18We haven't seen you in a while. 0:19Welcome back to the show. 0:20Uh, what's your prediction? 0:21Uh, I really, really hope not. 0:22Um, I'm hoping that Tarantino still makes movies and he only 0:25does film, so cross my fingers. 0:28Keeping it strong. 0:28And Gabe Goodhart is Chief Architect AI Open Innovation. 0:32Gabe, welcome back! Movies in 2030 - 0:35what do you think? 0:35Uh, I only watch animated movies with my kids at this point. 0:38So yes, those ones are computer generated. 0:40My hope is that some of the ones that I don't get to watch are, uh, at least have 0:45somebody wearing a bodysuit behind them. 0:46All 0:47right, all that and more on today's Mixture of Experts. 0:55I am Tim Huang, and welcome to Mixture of Experts. 0:57Each week, MoE brings together the sharpest team of researchers, 1:01engineers, and product leaders that you'll find anywhere in podcasting 1:04to discuss and debate the biggest news in artificial intelligence. 1:07As always, we have a lot to talk about. 1:09We've got, um, uh, the end of Stack Overflow. 1:12We're gonna talk a little bit about, uh, a new project called llm-d, a new release 1:17from Microsoft called NLWeb. But first I want to start by talking about Google 1:21I/O. So Google I/O, if you don't know, is Google's annual developer conference. 1:26Uh, it happened this week and there was a raft of announcements, uh, to be expected. 1:31It was basically AI, AI, AI. 
1:33I dunno if you've seen the super cut of, uh, Sundar Pichai, the CEO of Google, 1:37just saying AI on a previous I/O um, but this year's I/O is actually no exception. 1:42Um, and, uh, perhaps one of the biggest announcements that I want to 1:46get into first with the panelists. 1:47Is, uh, the announcement of a $250 AI Ultra Plan, which now kind of 1:53joins the Anthropic plan and the OpenAI plan in terms of these like 1:57very highly priced subscriptions. 1:59Um, and then with that, the launch of, uh, VEO 3 which is 2:03their video generation model. 2:04And, uh, a lot of people have been having a lot of chatter about it. 2:07It's able to generate kind of text to video and then interestingly 2:10can do, uh, audio now as well. 2:14Um, and so I guess, uh, Abraham, maybe we'll start with you. 2:17Um, you had a little bit of skepticism or you hoped, in the very least, you 2:21know, that Hollywood stay would stay true to its roots and keep doing, 2:24you know, movies with real people. 2:25Um, curious about this though. 2:26I mean, this technology seems really good and, and it feels like 2:29if anything is in the cross hairs, it's basically movie production. 2:32Yeah. 2:33So in terms of the movie production aspect of it, yeah, I'm not super convinced. 2:37Just a couple reasons. 2:38One, in terms of the actual length of video you can generate, um, it 2:41may be high quality, but in terms of actually being able to, you know, 2:44cobble together an hour and a half movie via, you know, a very specific 2:49prop, like I don't buy that quite yet. 2:51Um, but I do think it'll be. 2:53You know, really cool in terms of adding to special effects or being 2:56able to enhance certain, you know, scenes or features within movies that 2:59I think it can really play a part in. 3:01Um, in terms of the, the price tag for the actual, you know, using their models. 
3:07Um, I, I think it's kind of interesting given, you know, you've got this, uh, you 3:12know, open source community that's really starting to, you know, catch wind in terms 3:16of a lot of the major developers kind of. 3:18You know, loading their models into, you know, Apache 2 MIT licensing. 3:22So, uh, I'm curious to see how that actually pans out. 3:26Uh, I wish I had more information on how, uh, OpenAI was kind of, you 3:29know, surfacing with their $200 per month subscription, but, um, you know, 3:34I, I guess everybody's kind of, kind of calling for dollars right now. 3:37It's a, it's a tough market to make money in right now in LLM development. 3:40So, you know, it's, uh, it's kind of a winner take all. 3:43Yeah, for sure. 3:44And Marina, I had kind of an interesting question. 3:46I was describing very excitedly to my partner last night. 3:48I was like, oh, look at this new Veo 3 thing. 3:51And, uh, she had this great response, which was like, what is this for? 3:55What are you gonna use this for? 3:57Why would you pay $200 for this per month or $250 for this one, for this per month? 4:02Is there really a consumer angle here, or is this kind of just like a fun toy? 4:05Like it feels like there's a mismatch between how much you would pay for, 4:09you know, one of these features, which presumably I think is like one of the 4:11features why you'd wanna buy their bundle and, and I guess what you'd even 4:15just use it for on a day-to-day basis. 4:17So putting 4:18my economist hat on for a second. 4:20I think there's something very nuanced to the fact that you can't 4:24really get these things separately. 4:25You have to get them in a bundle. 4:27So some of them, these features immediately are useful right now, 4:31and you could use them right now. 4:32Others you might just be playing around with. 4:34But again, to improve these things like Veo and whatever Google needs 4:37data. 4:38They need data. 4:39They need data. 4:40They need data. 
4:40They're gonna get it from you playing around with stuff while 4:43actually making immediate use of the things that are more advanced. 4:47So yes, to your point, it's a little bit hard to make money right now in LLM 4:51space, but also the fact that they're bundling several technologies that 4:53are at different degrees of readiness. 4:56It's kind of clever. 4:57I gotta say, because that is that exactly. 4:59They're gonna, you're gonna bring people in with something that's already 5:00working pretty well and then you're going to be able to that way get already 5:05an ecosystem of people and that's, that's gonna work to their advantage. 5:08Yeah. That bootstrapping is really interesting. 5:10I think another dynamic, um, and Gabe, curious, you have any thoughts about 5:13kind of like this pricing war that we're seeing now across all these 5:16companies is, I guess in effect people are gonna have to choose one, right? 5:20Like I think is where some of this is going. 5:22If they want all the features and so this kind of market for people who are 5:25willing to pay significant money on a month to month basis for these services 5:29almost will end up being a little bit, kind of like you have to choose one. 5:32Um, and, and yeah. 5:34I dunno, curious about how you think about that. 5:35Yeah, no, I mean, uh, to lean into what Marina said it, it's 5:37a little bit like cable bundling wars and cable providers, right? 5:41Like you've got different companies each competing to be the one with 5:44the best bundle of capabilities, and you gotta have an anchor show, or 5:49in this case feature that somehow differentiates itself from the pack. 5:53And then, you know, you hang all the rest of your shows off of that and hopefully 5:57a couple of them catch fire and, uh, you know, bring in some more eyeballs. 6:00But it's, uh, it'll be really interesting to see where this 6:03bundling thing, uh, sits. 6:05And then when slash if the equivalent of, you know, the streaming revolution. 
6:10Uh, comes around and starts to decompose things into a la carte. 6:13Um, whether that's may, maybe open source models and open source frameworks are the 6:18equivalent of, you know, early streaming services that take apart the bundling. 6:22Um, but it's all happening at the same time this time rather than, 6:25you know, an entrenched industry. 6:27So, uh, I think it's gonna be really interesting to see. 6:30How those two sides of the coin shake out. 6:33Well, one of those sides of the coin is, is bitterly 6:35fighting with its peers, right? 6:36So in the bundling market, in the paid for market, the peers are gonna be 6:39fighting, uh, and then the open source is gonna be the alternative to the whole 6:44paradigm, I guess. 6:45Yeah, that's right. 6:46And do you wanna go into that a little bit more? 6:47I mean, I know you spend a lot of time thinking about open here. 6:50Yeah. 6:51Obviously, I guess in, in my title it kind of states my bias 6:53here, but, um, yeah, that's right. 6:55I mean, like ultimately you think that they're open's gonna win, but is this 6:58sort of like the, it's the, you know, it's the exhaust event on the Death star, 7:02like open source is gonna be the thing that really kind of... 7:04No, I, you know, I think there, there are roles to be played for 7:08um, put it together yourself. 7:11And I think, you know, we've seen this in development 7:14for a long time. 7:15This is not new to AI. 7:17Um, I think back to early Visual Studio days where you had to pay for expensive 7:22subscriptions or at least expensive boxes of software with a CD inside to 7:26install a good IDE on your computer. 7:28Um, and then, you know, eventually things either caught up or surpassed 7:32an open source and we found that the tools themselves were not 7:36what people wanted to pay for. 7:37So I think software development in general has always sort of been, um, 7:42a game of searching for where there is value that is worth paying for. 
7:45And a lot of times the things that initially seem like extremely 7:48high value propositions eventually migrate to a commodity that people 7:52expect to be able to get for free. 7:54Uh, and then the value moves somewhere else. 7:56So I think in our space, uh, I'll, I'll win the mixture of experts game. 8:00I think we're gonna see that shifting in the agent's direction. 8:03Um, and uh, I think, I think we'll start to see, probably, 8:07vendored proprietary agents that people are willing to pay for, that gets you 8:11a higher entry point into the stack. 8:13Uh, and then the model itself is gonna be a bit more commodity, but that's my 8:17prognosis going forward. 8:18Yeah, and I really do want to talk about, I mean that was the kind of second 8:21set of things coming out of I/O that obviously announced a lot of things. 8:24But the other big thing was, well, we now really want search to be more agentic. 8:29You know, Gemini is gonna have agent mode that you can turn on. 8:33And I guess Abraham, I'm curious about your response to that is like. 8:35You know, maybe to put a finer point on is like, is is search the killer, 8:40you know, agent, uh, capability? 8:43Like, I think that's kind of what was being offered by Google here is to 8:46say, look, we do search really well. 8:47If we can do that in a more agentic way, then that's, that's 8:50the killer in the agent space. 8:52But I think you also see a lot of other companies kind of competing for this. 8:54So kind of curious, uh, what you think. 8:56Yeah. 8:56Um, I mean, it's a great question. 8:58I'm just kind of thinking about, you know, all the organizations that kind of 9:01pass search out of either like native, as part of their agent capabilities, 9:06you know, with, with, uh, ChatGPT and Perplexity AI, where it, it really 9:11is not something that's net new. 
9:12Um, with respect to agent capabilities, I, I think it's kind of gonna be table 9:17stakes where you, at the base minimum, you have to be able to support search 9:20as part of your agent framework. 9:22So it may not necessarily necessarily be something that is, you know, 9:26uh, you know, bleeding edge. 9:27Uh, but I think it's really like in order to be able to come to the table, um, and 9:33you know, be a player as part of the agentic framework. 9:35Search has to be one of the baseline, uh, you know, 9:37capabilities that you can support. 9:39And on top of that, you know, you can kind of decide what makes the 9:42most sense for your user base. 9:43But I think it really search is, is basically like a starting point 9:46that if you can't support that at the bare minimum, then um, 9:50you know, I, I, I think it, it, it just kind of raises some flags. 9:53Mm-hmm. 9:53Yeah, 9:54for sure. 9:55Um, Marina, I hate, I know you hate this kind of question, but 9:58I'm gonna ask it to you anyways. 9:59Is, um, is Google suddenly kind of catching up in this race? 10:04Uh, I know the mood of the conversation, you know, as last year was like, 10:08oh, they've fallen terribly behind. 10:10Like, one of the things I love about AI is like anyone who is 10:13up will be down in a few months. 10:15Anyone who is down will be up in a few months. 10:18Again, like kind of just like. 10:19I dunno, Google is suddenly like back on the board again and, and 10:21so I kept curious about like. 10:23In the kind of battle for market share for this whole scope of tools, like 10:27how, what should we take from io? 10:29Is like I/O kind of a sign of strength from Google, or do you feel like this 10:32still, they're still not quite getting it? 10:35I think what's interesting with Google actually, it's how much 10:37they're leaning into multimodality. 10:39Like two thirds of their announcements are about. 10:41Something that has to do with modalities besides text. 
10:45If you go and look like we touched on video, but look at all the stuff 10:48that they're doing with Project Astra and with the fact that you're gonna 10:51be able to search with what's on your camera and all these other things. 10:54So, um, I think that Google's definitely in a better position this year than 10:58they were last year, but again 11:01compare public perception with what's probably actually going on under the hood. 11:05They're, they're doing a pretty good job of saying, oh, don't worry guys. 11:08We, we still have things going on, but if even after all the talking 11:11we're doing right now, everybody's still using Google to search, whether 11:13you're in AI mode or not AI mode. 11:15And so they're continuing to get the data and get the data and, you know, 11:18this is like my favorite topic ever. 11:20So there's still gonna be, continue to be pretty ahead in. 11:24Data on which to train any kind of new models. 11:27So you're not just having people only interact with ChatGPT, you're 11:30having people interact with Google. 11:31Generally it's, it's a rich thing and I think there they 11:34continue to have an advantage. 11:40So piggybacking on that, uh, this is actually a great 11:43segue to our next segment. 11:45I wanna talk a little bit about Stack Overflow, which, uh, if as many of 11:48you may know, is a much loved forum for technical questions and answers. 11:53Founded in 2008, has become really like a pillar of. 11:56Being a technical person, uh, online. 11:58And the bad news is that the website is dying. 12:01Traffic has been dropping and has been dropping in particular, 12:05um, arguably because of AI. 12:07Um, whereas in the past you would, uh, have to go to this website to 12:11kind of look up the answer to your question on a, a coding issue. 12:14Um, a lot of that's being replaced now by, um, auto complete right code generation. 
12:19So I wanna talk a little bit about like what that means and what it means 12:22for traffic on the web as a whole. 12:23So I guess maybe Marina to kind of like give you kind of a concrete question. 12:27It's like, do you buy that sort of AI is killing Stack Overflow? And if so, 12:31you know, does it pose a danger to like even bigger places like, like Google? 12:35So the Stack Overflow story does make me sad, um, because that is, it's, I think 12:41we've all used it to a quite a decent amount of degree and it's really hard to, 12:45um, not realize that you can't replace human expertise in these kind of more 12:50nuanced questions with just what got 12:52autogenerated, which means once again, regression towards the mean, 12:55regression towards homogenous 12:57answers. 12:58But what's really terrifying is that like, look, software engineering always moves 13:01really, really, really, really quickly. 13:03Things are out of date 13:04immediately. You need more people asking about what is the newest 13:06thing, what is the latest thing? 13:07If everyone's asking Cursor and Cursor fixes it and you say, 13:10oh yeah, great, that was good. 13:11Who's got all that fresh data? 13:12Now only Cursor has it, or whoever it is that you're using. 13:15So the market share ends up being really 13:17important here. 13:18So something that I would wish for people to do is to put AI to 13:23good work and say, Hey, I accepted that answer you just gave me. 13:26Why don't you make an anonymized version of my code and post it in the internet? 13:30And now we keep Stack Overflow going so that other people can still have this 13:33data, can still use this data, keep it, you know, really frictionless way 13:36going as a repository of knowledge. 13:39There's often more than one answer to these kind of questions, like you 13:42really wanna continue to have these kind of barriers broke, broken down. 13:45I understand. 13:46Ease. 13:46I understand access. 13:47I understand. 
13:47It's right there. 13:48And it's nice to you and it doesn't have the moderators telling you 13:51this has been answered before. 13:52Why are you dumb? 13:53Sure. 13:54But it's, uh, it's short-term gains for a real long-term loss. 13:58Um, and so I really hope that we can not fall down that hole. 14:02That's right. 14:02And I don't know, I mean, I also like the angry mods. 14:05Like I feel like that's a key part of the experience is to feel, feel 14:08the burn of someone just being very angry about your question. 14:11Gabe. 14:12So I think, I mean, Marina's describing something, which I think is like 14:14potentially really important, right? 14:16Which is like, well, there are ways of architecting AI systems so they can 14:20feed back into a human system, right? 14:23But I think here, you know, I guess a cynic would say, well, someone like Cursor 14:27has no interest in doing that at all. 14:29Right. 14:29And I guess I'm kind of curious if you feel like. 14:32How we can change that, right? 14:33Like, if we feel like this is a good approach, like what do we do? 14:36It, it's interesting. 14:37I, I had exactly the same sort of paradigm idea in my head when I read 14:41about the demise of Stack Overflow. 14:43Um, which is that I think right now, you know, I. I think we talked in the last 14:48episode I was on about the, the state of Wikipedia transitioning to bot scraping. 14:52Um, and I think, you know, even some of the ones we're gonna touch later on in the 14:55episode are all about how does the content on the internet transition to an AI first 15:00world where the primary consumer is AI. 15:03And I think in this case, um, from a purely consumption 15:06standpoint, yeah, it's great. 15:08Like I don't need to spend time searching. 15:10I can just get the code snippet directly into my editor. 
15:13Um, or worst case, uh, a concise version with you know, no need to scroll 15:17through the comments to actually get, you know, a very clear representation 15:21of, of what I want in a, in a chat context or something like that. 15:25So from a user perspective, there's a clear win here, which 15:27makes it in some ways a no-brainer that this is going to happen. 15:30Like, I don't think we can stop it with that convenience, but I think 15:34it's a one-way street right now. 15:35Right. 15:35Um, I was actually talking with someone, um, who used to run a food 15:39blog and essentially her traffic is dead now because everyone just 15:43asked ChatGPT for the recipes. 15:45Um. 15:46And I think we're seeing this sort of, the state is all the 15:51data is going into the models. 15:53I. To some degree, like we talked about with Wikipedia, the, the state 15:58of affairs is still valid for recency and for rag type of use cases. 16:03Um, but it's really changing the shape of who's gonna use the data. 16:07And I think we'll probably see the shape of the data creation changing. 16:11And what I would love to see is exactly what you propose, marina, 16:13is that this kind of a collaborative effort where, um, when an AI usage 16:20performs well then, that performed well 16:22content then somehow, somewhere gets shared. 16:25And I think, um, you know, there's a lot of, you said, you know, someone like 16:31Cursor has no interest in that and I think they probably don't have interest 16:34in necessarily, I. Um, sharing it publicly, but I bet that they might have 16:39interest in trying to become the new Stack Overflow where they actually have the 16:43ability to subscribe to shared answers. 16:45And then if you subscribe to shared answers, you can get, 16:48uh, you know, well thought out. 16:51Results conversations had by other experts with their AI bots 16:55accessible in your experience? 16:56Right. 
16:57So there's probably a play to be made there for a vendored solution 16:59and then hopefully a open solution that comes along and does a similar 17:04thing, but with an actual, you know, fully open ecosystem that could look 17:07something like an agent running 17:09client side, you know, I know for example, the Continue team, uh, keeps all of their 17:13code open source for the client side. 17:15So you could imagine a plugin there that actually hosts this in some kind of a 17:18neutral vendor, third party type of space. 17:21Right? 17:22Um, so I think, you know, just like we're seeing with vendor AI versus 17:26open AI, open source AI, um, we'll probably see something similar hopefully 17:31emerge around, you know, vendored AI. 17:35Sharing AI content sharing versus open AI content sharing. 17:40Yeah, no, I think that's great. 17:41And I think one of the things is, I think the comparison to the 17:43recipe website is so interesting. 17:45'cause one thing I find particularly perverse about this is like how a bunch 17:49of sites had to construct themselves in a particular way to be sustainable. 17:53Right? 17:54So like the classic thing with the recipe site is like, it's got the long narrative 17:57thing and all of these ads and you know the reason you're doing this, you're 18:00trying to make a living being a recipe blogger and like you need to increase time 18:04on site and you wanna increase engagement and, but like, it's exactly that kind of 18:07stuff that has made them very vulnerable 18:09to say a chatbot where you just get the recipe and so there's kind of this 18:14weird cycle of the economics where, you know, all the decisions you made 18:17early on are now making, you know, your industry like kind of particularly 18:21vulnerable to what's what's happening. 18:23Exactly. 18:23And I, I think there's gonna be a shift in 18:26creating content that both appears well for AI, so the, the 18:30SEO for AI type of experience. 
18:32And then also creating content that hopefully can be monetized through AI 18:36so that the content creators, uh, the experts in whatever field it is, whether 18:40it's experts on Stack overflow or. 18:42Chefs, uh, can actually make a living here or actually have value 18:47placed on their contribution. 18:48Tim, that was my comment. 18:49I don't know how many episodes ago. 18:50Do you remember? 18:51SEO It's gonna be deeply affected by all this. 18:53That's right. The buy. 18:54SEO, you're ahead of the game. 18:56We go, um, Abraham, can I play like tech bro jerk for a bit, right? 19:01Like, I feel like there's one argument, which is, look, these 19:04models are getting so good at coding. 19:07That in a few years, why do we even need Stack Overflow anymore? 19:10Right? 19:10Like we're, we're past the world of Stack Overflow. 19:13Cogen is gonna just be able to happen in the future. 19:16And so like, it's very sad, you know? 19:18But I guess the tech pro view is like, isn't the technology making stuff 19:22like Stack Overflow kind of obsolete? 19:24Do you buy that? 19:25Um, well, yes and no. 19:27I think there's kind of two sides to it. 19:29Like when the read the article that you shared, if you notice from the peak of. 19:33Covid to, you know, the introduction of chatGPT, it was a pretty big drop. 19:38And obviously GPT kind of accelerated the decline of overflow traction. 19:43Um, more traffic. 19:44But I think it was already a, like, you know, something that was in progress. 19:48And kind of digging a little bit deeper, I saw that, you know, a lot of people 19:52moving from Stack Overflow we're going to Discord channels to be able to 19:55have conversations or WhatsApp groups. 19:56WhatsApp groups. 19:57So I think it really signaled that it wasn't necessarily, 20:00um, the immediacy that. 
20:02ChatGPT could provide, but more so like a, an on, like a dialogue that 20:06you can have in terms of navigating the problem as opposed to, you know, 20:10posting a problem, having a solution at certain points in time being answered. 20:14And then, you know, if you had another question to, to follow up on 20:17it was, there was an issue was just, you know, being able to have time 20:20to value in terms of using overflow. 20:22So I, I think it's, it's less of a, you know, you know, LLMs are 20:26going to, you know who, who cares? 20:28You know, LLMs are gonna, you know, remove the need for anything like this. 20:31And I think it's, how do we find ways to be able to have, you know. 20:35Developers or software engineers have more natural engagement when 20:39they're trying to navigate a problem? 20:40Um, I think it's less about, you know, being able to code, um, again as, um, 20:45I'm, I'm not an engineer so I say this lightly, but the, the being able to code 20:50is being democratized relatively quickly. 20:52I think it's actually having a, like, you know, understanding the strategy 20:55behind what you're actually coding that I think is a lot more valuable right now. 20:57And that takes a dialogue between yourself and whether it's an LLM or 21:01another individual in your space. 21:02And I think that's gonna be a really key, um, drive. 21:05Either for whatever becomes the next, you know, catalyst or focal point for 21:09how do we, you know, um, have a forum for, for these kind of conversations. 21:13So, um, yeah. 21:14So as from a tech bro perspective, I get it. 21:17Yes. 21:17It just makes it easier. 21:18But then from an actual, you know, user perspective, I think it's more about I 21:22want to be able to engage with somebody as I'm, you know, driving these projects. 21:25Yeah, for sure. 21:26Yeah. 21:26And I think that there is something there around kind of like. 
21:30You know, again, like with all the jokes aside on Stack Overflow being kind of 21:33occasionally sort of an unfriendly place, like actually, like part of like the idea 21:38is that you're like kind of communicating with others and solving a problem. 21:41And like that there may be some value that we are losing actually in that transition. 21:46Um, I think is, is sort of interesting. 21:48I guess the future may be that you're like arguing with your, uh, codegen tool 21:52on a particular implementation, who knows? 21:54Yeah. 21:58Codegen was like, did you read 21:59the documentation? 22:00Geez. 22:07I'm gonna move us on to our, uh, next topic. 22:10Um, a project launched, uh, with a number of collaborators 22:15from a couple different companies. 22:16An open source project called llm-d. 22:19And, uh, I wanna start the segment just by reading the description of llm-d. 22:23So llm-d "is a Kubernetes-native distributed inference serving stack, 22:28a well-lit path for anyone to serve large language models at scale 22:32with the fastest time to value and competitive performance per dollar 22:35for most models across most hardware accelerators". 22:40Gabe, what is this? 22:42What does it do if you don't know anything about Kubernetes or you dunno 22:45anything about distributed inference? 22:47Like what, what, what is this? 22:48Why should we care? 22:49Yeah. 22:50Okay. 22:51Um, so going back to, uh, something I said earlier, 22:55you know, I, I, I do believe that 22:58we're gonna approach a space where the models themselves are commoditized 23:01and individual models have some strengths, uh, over other models. 23:05Uh, but ultimately the, the model you choose is gonna have a lot to do with 23:08the ecosystem you can choose it with. 23:11Um, and if you are a model provider, the thing that makes 23:14you attractive is your price tag. 23:16Uh, and the thing that drives your price tag is the ratio 23:20of performance to dollars. 23:23Um, and so.
23:24There's a really big divide between, you know, sort of the open source developer 23:29tinkerers that wanna be able to load models on their laptop or connect to an 23:32API service and run occasional queries. 23:34I'm not worried about rate limiting and throughput 'cause I'm just 23:37dabbling, I'm just messing around here. 23:39And the people who are actually running the models, they really, really 23:42care that all of the GPUs that they spent millions of dollars to buy are 23:47actually getting used all the time. 23:50Um, I thought one of the things that the, the technical 23:53article about llm-d spelled out really well is that traffic for LLMs is very 24:01different than the assumptions that a lot of the internet is based on, right? 24:04So the internet is based on the assumption that most requests are 24:09roughly the same shape and size. 24:12Um, you know, sure you sometimes download big files, but by and 24:16large you've got small requests with small sizes, and so you can just do 24:20pretty naive, like spray 'em around to a bunch of horizontal replicas of 24:24your website and your website's the same no matter which server you hit. 24:27And, um, cool. 24:29Great! Problem solved. Round-robin 24:31load balancing for the win. 24:33For the win. 24:33Um, but 24:35uh, LLMs are very, very different, right? 24:38And we've talked about different use cases for LLMs, whether it's a huge 24:41amount of context for RAG type of, uh, scenarios or a huge amount of output 24:45for, um, thinking type of scenarios. 24:48Um, they all look and behave differently. 24:51And, um, another hugely important part is prefix caching, right? 24:55So, to get briefly technical here, you know, all of these models are auto 25:00regressive, which means they compute up to a certain point in the token sequence 25:03and then compute the next one. 25:05And the math for the next one is based on all of the 25:07math that was computed for the previous one.
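The routing tradeoff Gabe describes can be sketched in a toy simulation. Everything below is invented for illustration — a dictionary standing in for a KV cache, six interleaved conversations, four servers — and is not llm-d's actual scheduler:

```python
# Toy model of KV-cache reuse: a request is a (conversation, turn) pair,
# and a server scores a "cache hit" only if it already processed the
# previous turn of that conversation (its prefix is still in memory).
def simulate(requests, route, n_servers=4):
    caches = [{} for _ in range(n_servers)]  # conv_id -> last turn seen
    hits = 0
    for conv, turn in requests:
        server = route(conv)
        if caches[server].get(conv) == turn - 1:
            hits += 1  # prefix already computed here: cheap delta
        caches[server][conv] = turn  # server now holds this prefix
    return hits

# Six conversations, ten turns each, interleaved as they would arrive.
requests = [(c, t) for t in range(10) for c in range(6)]

# Prefix-aware ("sticky") routing: a conversation always lands on the
# same server, so every turn after the first reuses the cached prefix.
sticky_hits = simulate(requests, route=lambda conv: conv % 4)

# Naive round-robin: requests scatter, so the server holding the
# prefix is usually not the one that receives the next turn.
counter = {"i": 0}
def round_robin(_conv):
    counter["i"] += 1
    return counter["i"] % 4
rr_hits = simulate(requests, route=round_robin)

print(sticky_hits, rr_hits)  # sticky routing wins by a wide margin
```

Sticky routing reuses the cached prefix on every turn after the first, while round-robin keeps landing turns on servers that must recompute from the beginning — the wasteful operation described in the discussion.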
25:10Um, and this is great if you have one instance of your server running right? 25:14Uh, because you've already got all that math pre-computed, it's 25:17just sitting there in memory. 25:18You can just do whatever the little delta is to get the next token. 25:21But if you happen to somehow land on a different server that has not pre-computed 25:25all of that, you gotta go back to the beginning and start over again. 25:28And that's a really wasteful operation. 25:30So the thing that the llm-d team has really, really focused on is the 25:35routing of requests to make sure you're. 25:37Maximally taking advantage of all of the stuff that's already in 25:40memory, those common prefixes. 25:42Um, and then also maximally taking advantage of, you know, saturating those 25:47GPUs based on the shape, uh, and the expected output length of the requests. 25:52Um, which I think. 25:53You know, is really technical and nitty gritty in the details. 25:56But what it's ultimately gonna do is mean that for providers of LLMs, and this is 26:01not just hyperscalers to be clear, right? 26:03Hyperscalers are gonna build their own stacks. 26:05This is targeted at enterprises that have constrained environments 26:09where they wanna approve and manage and run their own models. 26:12Um, this is gonna give those enterprises the ability to actually. 26:16Run models that fit their business needs, uh, at a cost that is 26:20actually approachable to, uh, you know, adopt AI inside their space. 26:23Right. 26:24So that's the real target of a tool like this. 26:26Yeah, for sure. 26:27And Marina, I guess one question that's I think worth asking is, you know, like 26:31Gabe was saying, right, the hyperscalers are gonna do this in-house, um, and like. 26:38But like what's being described here, I mean this is a lot of work, right? 26:40To like get all of the routing to be optimized to maximize GPU usage 26:46and the end result is that you save people a lot of money, right? 
26:49Ultimately, um, why is this getting released in an open way, right? 26:53I think it's like another set of questions like what's the open source play here? 26:57'cause this would seem like the kind of thing that you'd want to 26:59keep in-house secret proprietary. 27:01I mean, the scale at which they would like this to work, you can't keep it in house. 27:06This is almost like having to re-figure out how you want to do work in the 27:10way that when we realized how to handle databases more efficiently, 27:13you wanna have a lot of basic ways of how do you represent data, how do you 27:17handle transactions, how do you handle collisions, and things of that nature. 27:21There was not just one company that was like, Nope, we're going to own 27:25databases, you can't do it. 27:27Not if you actually truly want something that is that widely adopted. 27:30So in this case, I'm gonna go hit back to what Gabe said before about 27:34agents, you know, being a new thing. 27:36I don't know what agents means either, and nobody knows what agents means, 27:39but it does mean complexity and it does mean lots of things, having to do lots 27:43of actions and take lots of choices. 27:44So now we're really getting back into database transactions, but for the 27:49gen AI world, and so this is really interesting and important work. 27:54To almost have a new set of standards across the board for everyone, 27:58no matter what particular agentic flow you are or are not using 28:02for your own particular use case. 28:04So, um, yeah, that's my perspective on it. 28:05To build on Marina's point, the complexity of llm-d, you know, open 28:10sourcing, it really leads to a, a market, like an opportunity for Red 28:13Hat to be able to provide support. 28:14So when you talk about from a commercial enterprise, how do you 28:16actually make money off of this? 28:18It's right in Red Hat's wheelhouse where, you know, you 28:20provide a very, you know, you. 
28:22An open source technology gets widely adopted and your commercial 28:26strategy is really to be able to provide support for it. 28:28So I think given the, you know, the consortium of organizations that 28:32are on board from a commercial to, you know, an educational standpoint, 28:37I think that's really the play here in terms of, okay, well, 28:40why do you wanna open source this? 28:41Well, because the complexity of it and the adoption of it will really 28:44drive support back to Red Hat. 28:46Yeah, a hundred percent. 28:46And I think, you know, the, again, this business model is also not new. 28:50Uh, I mean, how many technologies out there do people have as the cornerstone 28:53of their business where the, you know, play-around-with-it version 28:57of it exists on your laptop and the scale-it-up-to-production usage is 29:02complex enough that you either need to just hire somebody that has already done 29:06it and use it as a service, or you need to hire somebody to help you do it yourself. 29:09I think that's exactly the same thing here, right? 29:11I mean, it's the reason Kubernetes itself has traction, Kubernetes is open source. 29:17Um, but there's not a lot of people probably out there. 29:19Well, certainly home labbers might be running their own Kubernetes cluster, 29:22but when it comes time to running your business, you're generally gonna, 29:25you know, buy a cluster from, uh, somebody or, uh, if you are, you know, a privacy 29:31sensitive or otherwise constrained industry, you're gonna run it on-prem 29:34with either a big team in-house or with support from a company like Red Hat. 29:39Um, that, that does this for a living. 29:41Yeah. 29:41One of the things I love about this is, uh, and I guess it's true of the AI 29:44space in general, is like, you know, the technology itself is like so weird and so 29:49cutting edge and like, you know, people are like, we're building a machine god.
29:53And then you're like, actually, but the business model's B2B SaaS 29:56or like, actually, actually the business model is like Red Hat. 29:58You know? 29:59It's just like, it turns out like the way we monetize and build 30:01businesses around these technologies is like in some ways the same game 30:04we've always played, you know? 30:05Which I think is like very interesting. 30:12I'm gonna move us on to our last topic. 30:14Um, Microsoft, uh, did a fun little release that I do want to kind of talk 30:18about before we wrap up the episode. 30:19It's a project called NLWeb. 30:21Um, and it's an open project that, and again, I'm doing a lot of quoting this 30:24episode, but it's, there's good quotes. 30:26So quote, turn your website into an AI app, allowing users to query the contents 30:31of the site by directly using natural language, just like an AI assistant or 30:35Copilot. 30:36Um, and it actually comes with a little thing where you can set up your website 30:40as a Model Context Protocol server so that agents can interact with your site. 30:45And I guess maybe Marina, I'll kick it back to you. 30:49Um, this is kind of a fun project 'cause it envisions a version of the 30:52internet where like everything is just 30:54talking, like you're just talking to every single website. 30:57And indeed, websites can talk to one another in natural language. 31:01Um, and it seems to be in some ways a bet that like, actually these conversational 31:05interfaces become ubiquitous in a way that is almost like a little bit funny. 31:09It's just like, oh yeah, I'm gonna go have a conversation with Yelp 31:13and then I'm gonna go over there and have a conversation with, 31:15you know, Twitter or whatever. 31:17Um. 31:17Are we about to see conversational interfaces take over? 31:20I know we've debated it a little bit about like just how dominant that paradigm is 31:24gonna become for interacting with AI.
31:26I'm curious if you kind of buy that as a vision for where the web is going. 31:30I am gonna also quote something from what they wrote, which is every 31:33instance is also an MCP server, 31:36allowing websites to make their content discoverable and accessible to agents and 31:40other participants in the MCP ecosystem. 31:43And that was the thing that I caught onto, not the, you're talking to your website. 31:46It's, your website is now something that is going to be discoverable by agents 31:51and agentic flows that are gonna have one particular protocol to get stuff 31:55and information from your website. 31:57And that is the data. 31:57The data, data. 31:59Hello, once again: data, and a standard in how to handshake, 32:05is where I sort of zoned in on, um, I believe it was, uh, sort of 32:09intended more in that direction. 32:11So like is it fun to have that kind of a website? 32:15Yeah. 32:15But I think the deeper thing is not that we wanna have a conversational 32:19interface with everything. 32:19It's, we want this to be a way that the applications that you build on 32:22top, that, again, to, to search and to scrape and to take actions and whatever, 32:26there's a, a unified protocol. That, that was my perspective on this. 32:29That's right. 32:30And I think it's the direction we could take it, I mean. 32:32Like MCP, right? 32:34Could be the new like thing that just becomes standard built 32:38into every resource on the web. 32:40Um, you know, I guess like curious how you size that up, like we, we may be 32:44headed in that direction is that everybody eventually wants to make themselves 32:47very indexable, if you will, to agents. 32:50Um, so, uh, full transparency. 32:53I, I may not be the best person to ask the question, but just kind of after a few 32:57conversations about MCP with some people here in research, um, you know, there's 33:02a couple different protocols out there.
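The "query the site's contents in natural language, structured for agents" idea can be sketched minimally. The content index, the `ask_site` function, and the payload fields below are all hypothetical, with naive keyword matching standing in for a model — not NLWeb's or MCP's actual interfaces:

```python
import string

# Hypothetical site content index; a real deployment would crawl pages.
SITE_CONTENT = {
    "hours": "We are open 8am to 4pm, Tuesday through Sunday.",
    "menu": "Espresso drinks, teas, and fresh pastries.",
    "location": "123 Example Street, downtown.",
}

def ask_site(query: str) -> dict:
    """Answer a natural-language query from the site's own content,
    returning a structured payload that an agent (not just a human in
    a chat window) can consume. Retrieval is naive keyword overlap."""
    cleaned = query.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    answers = [
        {"topic": topic, "text": text}
        for topic, text in SITE_CONTENT.items()
        if topic in words or words & set(text.lower().split())
    ]
    return {"query": query, "answers": answers}
```

The point of the structured return value is Marina's: the same endpoint serves a chat widget for humans and a machine-readable answer for agentic flows.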
33:04If this is an agent play, I mean, I think there's room for, you know, agent to agent 33:10or IBM's agent protocol to be able to step in there and, you know, play a factor. 33:15Um. 33:16I, I, I might have to take a step back here and just better understand, you 33:19know, from Gabe or one of the team members, you know, do, do you foresee 33:22this as being actually the de facto protocol for, uh, for agents or for 33:27models being used across websites? 33:29Yeah, and I think to maybe do a twist on that for Gabe, it's like we were talking 33:32a little bit about like the economics of where attention flows and all that. 33:36It kind of feels like, 33:37if people don't get that right, 33:39I dunno, 33:39I, I might just say, I'm not, I'm not making my site easily interactable with agents. 33:44Like, I don't want them touching my stuff, you know? 33:46Yeah. Like, I don't want them getting my 33:47data, you know? 33:47That, that, that's, that's a great, yeah. 33:49I think, I think you might see one of two things, like, you know, like, uh, 33:52the equivalent of MCP's robots.txt, like, don't, don't use this for, for agents. 33:59Um, no, but I think, I mean, I think that's exactly right. 34:01I think this is an attempt 34:03to put a stake in the ground about standardizing the 34:07transition to an AI-first web. 34:10Right. 34:10I think, um, saying that, hey, this is a better user experience. 34:15You get a little chat window and now you don't have to go search around through 34:17navigations, like that's genuinely useful. 34:20Also, at the same time, you're exposing the same indexability to agents, um, and 34:28it's basically back to the same problem we talked about with Stack Overflow. 34:31It makes these things eminently consumable by AI. 34:35The question is, is there any way to give back or to sort of 34:39make it a two-directional street?
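Gabe's robots.txt analogy could look something like the sketch below. The "agents.txt" name and field format are hypothetical, borrowed from robots.txt semantics rather than any published standard:

```python
# Hypothetical "agents.txt" opt-out check, modeled on robots.txt:
# a site declares which agents (if any) may consume its content.
def agents_allowed(agents_txt: str, agent_name: str) -> bool:
    """Return True if `agent_name` may consume the site under the
    given policy. The format is invented for illustration."""
    allowed = True   # permissive by default, like robots.txt
    applies = False  # does the current record apply to this agent?
    for raw in agents_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and padding
        if not line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            applies = value in ("*", agent_name)
        elif field == "disallow" and applies:
            allowed = value != "/"  # "Disallow: /" blocks the whole site
    return allowed

policy = """
# Don't use this site for agents.
User-agent: *
Disallow: /
"""
print(agents_allowed(policy, "anybot"))  # False: the site opted out
```

Like robots.txt, enforcement here is purely voluntary, which is exactly the economics question the panel raises next.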
34:40I think, um, you know, and Abe, to your point about the multiple 34:44different protocols out there, I actually see this usage as exactly the 34:48right usage for MCP relative to the other protocols that are out there. 34:53Um, so I think, 34:54because this is unidirectional, this is like, I'm, it's, it's 34:58like an HTTP server, right? 34:59You know, you're gonna see mcp:// instead of 35:02https://, right? 35:04This is going to be like, go grab, uh, my information from my site 35:10for the sake of an agent rather than for the sake of a web browser. 35:12And I think, I think that's actually, to me, that's a very sensible 35:16technology direction for exposing content to something, whether 35:21that's something as a user or 35:22a user interface or an agent. 35:26But I do think, um, when you start getting 35:29into turning, the notion of turning a site into an AI app is more nuanced 35:34and interesting to me because that speaks to not just being a content 35:38provider, but also being an interactor. 35:41Uh, and when you start having that site then also want to go take 35:43action against, say, other sites, 35:45now you start getting into an agent to agent or an agent communication protocol, 35:49or something that actually has to allow the, the interaction: that came to me, 35:54now I'm gonna go choose to take some other action. 35:56It's that. 35:56It's that, um, having agency, shall we say, uh, that will make, make the 36:02site in fact actually take action and not just sort of respond to queries. 36:06I think that's the delta in my head between the different protocol groupings, 36:11and MCP makes a lot of sense as sort of a, a first step to turn a website 36:15from a pile of HTML into something that can be directly exposed to 36:21a consumer that has AI on the other end, a model. 36:24Yeah, definitely. 36:25I mean, it presages, like, I think, a really interesting world where
36:28There's like the human web that we all interact with, and then 36:30there'll be this like agent web that you kind of maybe never really see 36:34that's going on below the surface. 36:36And that like you actually may have sites that are like not 36:38even human readable, right? 36:39They're just like agent protocol. 36:41Yep, absolutely. 36:42Or the other way around, 36:42right? Like I think there's kind of like, and, 36:44and to your, to your question at the top of this, your twist on the 36:47top of this, I think it's gonna be an economics question, right? 36:49Like at some point, um, we will either see that there is a reasonable business 36:54model where folks creating that content can get some return on their investment. 36:58Um. 37:00Or we won't, at which point, you know, either people will explicitly not 37:04expose MCP servers or add obfuscation to the source code of their website 37:08so that it is terrible for crawling. 37:10Right. 37:10Like a whole bunch of gorpy extra tags in there that now the 37:13models have to learn to ignore. 37:14Yeah. I mean, the rise of 37:15like anti-agent technology is also something we're about to see, right? 37:18Which is like, just get your agent lost 37:20in this maze that it never gets out of. 37:22Exactly. Exactly. 37:22Go, go 37:22send them down some malformed XML and good luck. 37:25Right. 37:26So it'll be really interesting to see, you know, and as a, a collaborator by 37:30nature, I'd love to see something where there actually is a good incentive for, 37:34um, folks to make content available to AI and get some return on that. 37:38And I think that would make for a great user experience downstream. 37:42But as we know, you know, this is gonna be the wild west of the internet. 37:44Like there will be combativeness and, you know, people looking 37:48for the best advantage. 37:48So, uh, it'll be really interesting to see where that swings. 37:51Yeah, for sure. 37:52So Marina, maybe a last thought from you.
37:54Um, you know, I always think a little bit about how like HTTP and 37:58like open protocols in general are this kind of weird miracle that 38:01everybody just agrees to kind of like be interoperable with one another. 38:05Um, are you bullish on MCP becoming like a uniform standard across the board? 38:10Because I mean, what Gabe was saying, you could imagine a world 38:12where you say, look, it's gonna be the, the Tim Context protocol. 38:16And actually only sites that correspond to the Tim Context protocol will have 38:20agents that'll talk to one another. 38:21Um, and so it feels like there's a lot of incentives to kind of like break 38:24away and create these kind of like more walled-garden-type experiences. 38:28Um. 38:29I don't know. 38:29Do you think kind of open wins here, or is it anyone's guess? 38:32I mean, to some extent hard to predict the future, and I agree with how 38:36Gabe had described MCP as a protocol. 38:38It's not the only protocol out there, right? 38:39We've got FTP, we've got all these protocols for sharing different 38:42information in different ways, and very often it really does start because 38:45a small enough group of people that are really deeply in the middle of 38:50it get real annoyed and say, guys, we're just gonna agree on something. 38:54And because of that, as a result, they can actually get somewhere. 38:56Everybody else says, yeah, okay, great. 38:57We're, we're gonna adopt this protocol as well. 38:59Um, we need these protocols because otherwise we, you cannot continue to grow. 39:03And especially in the world of AI and agents, scale is more important than ever. 39:07HTTP, HTTPS, all right now, is just like serving a website. MCP, 39:11I don't know that it's gonna be all that useful unless you 39:13have enough people adopt it. 39:14So you really are gonna have to have, uh, ways to drive adoption. 39:18I agree with Gabe that everything's an economics question.
39:21At the end of the day, are you getting value from it, and are 39:23you gonna contribute or not? 39:24There's a lot of security questions here. 39:26Again, if you're gonna go and talk to a different website, are you 39:30gonna come back with, like, poison-pill actions to take on your own website? 39:34This all gets a little terrifying pretty quickly. 39:36Um, but I think that, yes, it is going to, uh, either this or a version of this 39:41is gonna have to be adopted, because you can't have proper scaling in the world 39:46without the infrastructure, the tubes, 39:49underneath it. 39:50So yeah, we're, we're gonna have some version of this. 39:52Absolutely. 39:53Well, and I think the flip side of that too is, if you look at browser 39:55technology, browsers are really good at handling really bad output, right? 40:00Yeah. They have to be. 40:01Um, it's not like, yeah, we, we're finally at a, a, a space where 40:05there probably is some, you know, 40:06coalescence around, you know, the actual source code of webpages that get served up 40:11to the browser so they can render it, but, like, man, handling malformed XML, you know? 40:15Yeah. It took some time. 40:17PHP, JavaScript, every other piece of web technology that's ever been thought 40:21of as a way to revolutionize, you know, what a server is hosting that gets 40:25eventually rendered in your browser. 40:26So it's a, it's a two-sided coin: like, one, there's some coalescence on the 40:29creator side, and then two, there's gotta be really good technology that is 40:33very good at handling edge cases and failure cases on the consumer side. 40:38So I think we'll, you know, whether we see MCP as the one and only protocol to 40:42win them all, or whether we see a handful of them that persist and eventually 40:46shake out, or whether we just see a bunch of like mediocre implementations 40:49of different protocols that all, you know, like drop the trailing curly 40:52brace on their JSON just for fun.
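The dropped-trailing-curly-brace joke describes a real failure mode for model-generated JSON, and "really good at handling really bad output" can be approximated on the consumer side. A toy, best-effort repair — it only appends missing closing brackets and assumes the text does not end mid-string — might look like:

```python
import json

def repair_truncated_json(text: str):
    """Best-effort fix for JSON that was cut off before its closing
    brackets, in the spirit of browsers tolerating malformed HTML.
    Tracks open braces/brackets (skipping string contents) and appends
    whatever closers are still missing, then parses normally."""
    stack = []          # closers we still owe, innermost last
    in_string = False
    escape = False
    for ch in text:
        if escape:
            escape = False
            continue
        if in_string:
            if ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            stack.pop()
    return json.loads(text + "".join(reversed(stack)))

broken = '{"model": "demo", "tags": ["a", "b"'
print(repair_truncated_json(broken))  # {'model': 'demo', 'tags': ['a', 'b']}
```

This is the "tolerant consumer" side of the two-sided coin: the producer may be sloppy, so the parser absorbs the common truncation case instead of failing.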
40:54Now, maybe the most likely scenario, you know, uh, well, you know, if, if 40:58the token doesn't get generated for the last curly brace, so 41:00be it. That's right. Right? 41:01So, um, I think 41:04it's gonna be a push and pull, and ultimately it all comes down 41:07to people, or, you know, people in conjunction with AI, writing software 41:11that can make this stuff usable. 41:13Right. 41:13And so I think, I think we'll see some emergence of coalescence 41:17on the consumption side as well. 41:18Probably. 41:19Um, so it'll all be driven by, uh, people tolerating, you know, 41:24gorpy experiences for only so long, and eventually they'll just snap 41:27and then do something about it. 41:27Yeah. Everything 41:28is driven by engineers getting real annoyed and going 41:32and writing some software. 41:33Yeah. 41:35Well, that's all the time we, we have for today. 41:37Uh, Gabe, Abraham, Marina, thanks for coming on the show. 41:40Always great to have you. 41:41Uh, and, uh, Abraham, Gabe, you should come by more often. 41:44It's good to see you. 41:45Um, and finally, uh, thanks for joining us. 41:47Listeners, if you enjoyed what you heard, you can get us on Apple 41:50Podcasts, Spotify, and podcast platforms everywhere, and we will see you all 41:53next week on Mixture of Experts.