
Open-Source Models Will Rule 2026

Key Points

  • The panel agrees that no single model will be universally “top” by 2026; instead, open‑source models are expected to become the most widely used across the industry.
  • DeepSeek‑V3‑0324 is being highlighted for its record‑breaking scores on the Artificial Analysis Intelligence Index, but its claim as the “best reasoning model” is contested.
  • Kate Soule argues that the marginal benchmark gains (often ≈0.01) offered by newer models rarely translate into meaningful improvements for real‑world tasks, so the “best” model is the one that performs best on a user’s specific workload.
  • The episode also teases additional AI news topics, including Gemini’s new release, a novel thermodynamic computing paradigm, and OpenAI’s latest image‑generation advancements.

**Source:** [https://www.youtube.com/watch?v=CgqHN38l6Ko](https://www.youtube.com/watch?v=CgqHN38l6Ko)
**Duration:** 00:41:33

Sections

  • [00:00:00](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=0s) **Untitled Section**
  • [00:03:06](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=186s) **Assessing Model Performance vs Cost** - The participants contend that meaningful benchmarks must balance performance with cost and that declaring any AI model "the best" depends on the specific metrics and use-case experiments employed.
  • [00:06:11](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=371s) **Rapid Iterations Reveal Team Agility** - The speakers discuss DeepSeek's swift model releases, emphasizing that performance metrics highlight the team's speed of improvement and the strategy of bootstrapping reasoning models to boost non-reasoning model performance.
  • [00:09:17](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=557s) **Hidden Usage Metrics & Google Canvas** - The speaker muses on the gap between hype and actual usage of popular AI models, then shifts to discuss Google's Gemini 2.5 release and the Canvas tool that offers live coding previews.
  • [00:12:21](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=741s) **Personalized Multi-User AI Canvas** - The speaker explores a future where each user customizes their own collaborative AI interface and questions whether this is the first widely released multiplayer AI platform, drawing parallels to tools like Google Docs and Mural.
  • [00:15:28](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=928s) **Gemini 2.5's Efficient Reasoning Test** - The speaker compares Gemini 2.5's concise, accurate reasoning on a simple arithmetic prompt to DeepSeek's overly verbose output, highlighting the value of lightweight qualitative evaluations.
  • [00:18:32](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=1112s) **Separate Guardrails Enable Custom Safety** - The speaker argues that organizations should combine safety alignment, input/output guardrails, rigorous data curation, and a dedicated guardrail model rather than picking a single solution, to retain flexibility and tailor safety measures to each application's unique requirements.
  • [00:21:38](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=1298s) **Thermodynamic Computing as Future AI Hardware** - The speakers discuss Extropic's investment in thermodynamic computing, compare its potential to GPUs and quantum approaches, and ask whether AI practitioners are monitoring such emerging hardware developments.
  • [00:24:44](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=1484s) **Embracing Chip-Level Randomness** - The speaker argues that for massive AI training, exact binary precision isn't crucial, so hardware should intentionally incorporate randomness at the chip level to better reflect data distributions.
  • [00:27:51](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=1671s) **Thermodynamics, Computation, and Analog Matrix Inversion** - The speaker links Maxwell's demon, Landauer's principle, and Bennett's information-energy argument to IBM's legacy, then proposes using simple capacitor-inductor circuits to perform energy-efficient matrix inversion.
  • [00:30:53](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=1853s) **Image Gen Trend Sparks Debate** - The speaker observes a flood of AI-generated images on social media, questions whether GPT-4o finally solves language-image multimodality, and reflects on shifting goalposts in AI progress.
  • [00:34:05](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=2045s) **Integrating Specialized Experts Early** - The speakers discuss shifting from late-stage tool calls to embedding multimodal expert modules directly within the model's architecture, noting that entrenched design practices and scaling considerations make this transition difficult.
  • [00:37:18](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=2238s) **Shift Toward Open Generative Visuals** - The speaker observes that companies are increasingly permissive with image-generation models, offering broader business applications and fewer legal restrictions than text models, and illustrates the technology's promise and subtle trade-offs through a personal example of converting a summer scene to winter.
  • [00:40:24](https://www.youtube.com/watch?v=CgqHN38l6Ko&t=2424s) **Safety Checks via Reasoning LLM** - The hosts note a brief mention of using a dedicated reasoning model to sift through edge-case content for safety, lament the lack of further details, and stress the need to monitor how much time the system spends policing its own output.

Full Transcript
0:00It's 2026 is the top model in the world an open source model? 0:04Kate Soule is Director of Technical Product Management for Granite. 0:07Kate, welcome to the show. 0:07What do you think? 0:08I don't know. I agree with that framing, Tim. 0:10I don't think any model is top. 0:11I don't think there'll be one model that is overall best at anything or 0:15that will rule them all, so to speak. 0:16Alright, uh, Kush Varshney, IBM Fellow AI Governance. 0:19Uh, Kush, welcome to the show. 0:21What do you think? 0:21I think Open is here already and Open's gonna dominate into 2026. 0:25All right, great. 0:26And Skyler Speakman, Senior Research Scientist. 0:28What's your hot take on this question, please. 0:30If you define the top as the most used, then definitely open models 0:34will be the most used models in 2026. 0:37All right, everybody's fighting my questions today and all that, and 0:39more on today's Mixture of Experts. 0:47I am Tim Hwang and welcome to Mixture of Experts. 0:48Each week, MoE brings you the best minds and artificial intelligence 0:52to walk you through the biggest headlines, uh, that are dominating news. 0:55Um, as always, there's a lot to cover. 0:57We're gonna be talking about Gemini's new release. 1:00We're gonna be talking about a new thermodynamic computing paradigm. 1:03We'll be talking about OpenAI's image gen. But first I really wanted to 1:06start by talking a little bit about DeepSeek-V3, and specifically not V3, 1:12but a checkpoint that DeepSeek released. 1:14Um, so to just give the full numbers if you're interested, is DeepSeek-V3-0324 1:19um, and, uh, there's a lot of kind of hype about this release because by some 1:24measures, um, one specific one is this artificial analysis intelligence index. 1:28It is now kind of the best reasoning model, the best 1:31model out there in the world. 1:32But maybe, Kate, I'll start with you. 1:34I know you kind of fought the premise of this question when I 1:36just asked you it a moment ago. 
1:37Um, should we think about models as being the best in the world? 1:40Like is that even a useful way of thinking about this space? 1:43Well, well, a couple of things. 1:44I think DeepSeek-V3 is a non reasoning model, so I think a lot of the press is, 1:48"best non reasoning model in the world," uh, according to, uh, reports like 1:52artificial analysis, you know, I, I think a lot of these analyses are 1:57trying to come up with tools to help people better evaluate models and pick 2:02ones to, um, use in, in production. 2:05The reality is these models all are differentiated by like 0.01. 2:10You know the differences in performance. 2:12Do we really think that tiny lift in performance of one benchmark 2:16is going to result in meaningful performance improvements on a RAG or 2:21even an agent based task that you're trying to deploy in production? 2:25I don't think so. 2:25I think there are great ways to give you a list of models to start to 2:29test, but ultimately the best model is the best model that does best 2:32on your task that you care about. 2:34And that could be any model, regardless of how it kind of scores on some 2:37of these top level benchmarks. 2:38You're almost saying like we're, like, almost post-benchmarking in some ways. 2:42Like all the models are so performant now that like, it's almost difficult to 2:45say like there's one absolute measure. 2:47I don't know if that's putting words in your mouth, but. 2:49I mean, I think different model providers have different priorities. 2:52I think DeepSeek is actively chasing OpenAI. 2:55They're trying to have the same, you know, pursuit of AGI, and so 2:59some of these benchmarks are being used as demonstrations of capability 3:03on that broader pursuit of AGI. 3:06That's fair. 3:07I don't think that means for like an everyday production task or use case 3:12that really reflects a, necessarily a, a meaningful difference in performance.
3:16I think there's a, you know, some of these big models are frankly overkill, uh, and 3:19so boosting it a little bit further isn't going to make a real actionable impact. 3:24And the benchmarks that matter the most if you're trying to deploy a model is. 3:28What is the performance for a given cost profile? 3:31And those are some things that you know, really you just have 3:33to test use case by use case, using information like artificial 3:37analysis to help you get started. 3:38But ultimately, you know, you have to run your own experiments. 3:41Kush, maybe I'll turn it over to you. 3:42I don't know if you agree. 3:42I guess there's a one way of reading your answer, which might be like, 3:45almost like a. Uh, a contrasting position, I guess, to Kate, right? 3:49Where I think the way I heard you sort of respond to the question was, 3:52by any measure, open is winning. 3:54And so like, it doesn't matter how you measure this, open will 3:57be the best in the world in 2026. 3:59Is that another way of thinking about it? 4:01Or maybe you were nodding, so maybe you actually agree. 4:04Violently with what Kate just said, 4:05um, with, uh, both Kate and Skyler on this point, that, I mean, there's 4:09different ways of measuring what is best and like even asking the question of 4:13what is best is probably, um, kind of not the, the right way to think about it. 4:17But, um, I think the main point is that 4:22Open is, uh, a way the, that the world is gonna move forward. 4:26So, um, whether we wanna count best or not best, um, or usage, or adoption or 4:31not adoption, um, I think open is, uh, gonna have a, a very strong sort of play. 4:37Um, uh, just continuing. 4:39Uh, so whether the, that number is, uh, a little bit above a little bit 4:43below, that's not the, the critical point, just that it's in the same 4:46ballpark as the, the important point. 
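Kate's advice earlier in the episode (don't chase a 0.01 leaderboard delta; score candidate models on your own task and weigh performance against cost) can be sketched as a tiny ranking helper. The model names, task scores, and prices below are invented placeholders, not figures from the episode:

```python
# Sketch of "run your own experiments": rank candidate models by task score
# per dollar instead of by leaderboard position. All names and numbers here
# are hypothetical placeholders.

def rank_by_value(task_scores, price_per_mtok):
    """Rank models by (accuracy on your eval set) / ($ per million tokens)."""
    return sorted(
        task_scores,
        key=lambda m: task_scores[m] / price_per_mtok[m],
        reverse=True,
    )

scores = {"big-flagship": 0.91, "open-mid": 0.90, "open-small": 0.86}
prices = {"big-flagship": 15.00, "open-mid": 1.20, "open-small": 0.30}

# A model trailing by "0.01" on a benchmark can still win on value.
print(rank_by_value(scores, prices))  # ['open-small', 'open-mid', 'big-flagship']
```

In practice the scores would come from a small eval set drawn from your actual workload (the RAG or agent task Kate mentions), not from a public leaderboard.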
4:48And, um, I think, uh, a couple of months ago when I was on the show, um, I was 4:53talking about just the culture of, um, how DeepSeek is doing their work. 4:57Um, the fact that they can rapidly iterate and, um, 5:00uh, kind of make this difference and, uh, and, and reach their goals, whatever 5:04those happen to be, uh, very quickly. 5:06And, uh, I think that's the continuing story in, in my mind that, uh, 5:11whatever happens, uh, I think DeepSeek will be able to adapt to, to the 5:15changing environment, whatever the needs happen to come across, 5:19uh, in the, in the actual world. 5:21So, um, uh, just in terms of, uh, the, the culture aspect, uh, I mean, 5:28Open culture is gonna be what's gonna dominate actually, um, 5:31not maybe the, the open model. 5:33So maybe I'll clarify that a little bit. 5:36I'd like to jump in on, on Kush's point there about the role of DeepSeek and 5:40for sure the headline that got me to click was Open Source is now Best. 5:45Uh, but below that headline 5:47was this really cool graphic that showed where DeepSeek from January was 5:52to where DeepSeek, DeepSeek-V3 in March now is, and that delta, I think 5:57is worth paying attention to. 5:59I agree about the difference from the other leaders. 6:01Depends what metric you're using at, et cetera. 6:04But for this same metric, the increase that DeepSeek has made from January to 6:08March, uh, really quite, quite impressive. 6:11So think about that change that has happened in that short period of time. 6:14And I think that just kind of echoes, uh, Kush's sentiment about 6:17the way DeepSeek is going about creating and releasing these models. 6:22Very cool 6:23release in January and a great follow up 3 months later. 6:27So that's how, that's, that's the really cool headline after 6:30the fact, after that line. 6:32Yeah, that's right. 6:32Yeah.
6:32I think it's like very interesting as a way of looking at these metrics is 6:35that we, we tend to think about them as like, is the model good or not? 6:38Right? 6:38And I guess, Skyler, kind of what you're seeing is like almost maybe like, well 6:41maybe that's not the real question. 6:42Like this is really useful for almost knowing 6:45how good the kind of team and their improvement method is, right? 6:49Like it's almost like how quickly can the team hill climb? 6:52That's the really interesting thing that's kind of revealed by these numbers 6:54more so than like the quality of the release in some objective, uh, sense. 6:58Well, and I also think there's something interesting going on 7:01about being able to kind of 7:03bootstrap reasoning models to improve non reasoning model performance. 7:07So the initial V-3 that was launched back in December, uh, DeepSeek had a 7:11internal version of R-1 , which was their reasoning model that they said they used 7:15to train it and then they released R-1 in January and that was, uh, you know, 7:19market moving and now they've released an updated version of V-3. And I 7:24think what, so part of that momentum, which is really exciting to Skyler's 7:27point, is that we see them able to kind of innovate on some of these core 7:31building blocks that they've released. 7:33And that's probably gonna unlock all sorts of ways that the 7:36broader open source community can also innovate given that they've 7:39released these building blocks out into the world, like the R-1 model. 7:42Yeah. 7:42There's a final theme I guess I wanna pick up on from Skyler's 7:45original response, which is, you said maybe one metric we should 7:48just look at is usage, right? 7:49Like, nothing beats usage, right? 7:51Like if there's a lot of adoption, you know, almost like we can not, 7:54we can debate what's better and what's not, but it's almost like 7:56it's the one that's being used. 7:57Uh, do you wanna talk a little bit about that? 
7:58Like, we don't really 8:00talk about that so much. 8:00I feel like often we're very obsessed with like, how did it do on this benchmark? 8:04But I wonder if like usage over time becomes like a more important 8:07way of kind of measuring, not model quality exactly, but like who's 8:10winning I guess in some sense. 8:12I think, 8:12I think downloads on Hugging Face is a thing, right? 8:15That's, that's kind of a stab at that idea of usage. 8:19And I, and I think that is something that these model developers, uh, 8:23keep track of it and watch over time. 8:24So, uh, no, I don't think we're too far off the mark by talking 8:28about adoption and, and usage. 8:30I will push back a little bit just because DeepSeek is a huge model. 8:34Like, you know, if we talk about downloads and usage, I think small models are 8:39gonna lead and win something a developer could literally download and run. 8:44Uh, the DeepSeek model is kind of a bear. 8:46I mean, it's 600, uh, 70 plus billion parameters that would have 8:50to be loaded in memory to run. 8:52So I, I think usage is really important, but I think usage for 8:56these larger models is going to be predominantly more in a, a hosted setup. 9:00Uh, and you know, there are interesting 9:03ways to look at demand based off of model size. 9:06Uh, and I think we see a lot of small models that are more cost effective, 9:10are gonna get more usage in 2025 and 2026, and some of the bigger models 9:15that are just monsters to, to run. 9:17Yeah, for sure. 9:18I, one of the things I've always been obsessed with is like, one of my data, 9:21like secret data points of the world that I would love to know is like, what's the 9:25book that's most downloaded on Kindle? 9:26That's like never read. 
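Kate's point above, that DeepSeek's 670-plus billion parameters are "a bear" to load in memory, comes down to simple arithmetic. A quick sketch (the ~671B figure is DeepSeek-V3's published total parameter count; the byte widths are standard numeric precisions):

```python
# Rough memory needed just to hold a model's weights (ignores KV cache,
# activations, and framework overhead).

def weight_memory_gb(n_params, bytes_per_param):
    return n_params * bytes_per_param / 1e9

n = 671e9  # DeepSeek-V3: ~671 billion total parameters
for name, width in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    print(f"{name}: ~{weight_memory_gb(n, width):.0f} GB")
```

At FP16 that is roughly 1.3 TB for weights alone, which is why usage of models this size tends to be hosted rather than self-run; and since V3 is a mixture-of-experts model, all weights must be resident even though only a fraction are active per token.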
9:28And I actually wonder if there's like, almost like a similar dynamic 9:30for language models where you have like these models that are widely hyped 9:33and very much downloaded, but the question is like, how much use are 9:36they actually getting in practice? 9:37And we have a lot more limited sense of like that. 9:40Right. 9:40And that, that's almost kind of like an invisible part of the, the question 9:44of like, you know, who's winning? 9:45Right. 9:45You know, and I think we're already attacking the premise of that question. 9:53Great. I'm gonna move us on to our next topic. 9:54Um, speaking about big models and, uh, and also, you know, 9:58the battles over benchmarks. 9:59Um, Google did another kind of raft of releases. 10:02They've, seems like they've really been picking up the pace. 10:04Um, there was an announcement for Google, uh, Gemini 2.5, um, and then also kind of 10:09the release of this sort of canvas feature that they've been playing around with. 10:13Um, and you know, because we've 10:15spoken just a lot about kind of models and benchmarks, 10:17I do wanna maybe start by talking about Canvas. 10:19Um, one of the really cool features about it I thought was actually the 10:23idea that you can kind of be coding and then also automatically see a 10:26preview of what you're building 10:27at the same time. And we've talked about this a little bit in the past about just 10:30like we're still trying to figure out how, 10:33you know, kind of AI assisted coding will look in the future and 10:36a lot of the innovation seems to be like on the interface level. 10:39Um, and so I guess maybe Kush, I'm curious to get kind of your thoughts on, you 10:43know, these types of approaches, right? 10:44Where like, it seems like we're moving away from pure, 10:47just kind of auto complete. 10:48Um, but um, just kind of interested in how you think about it, uh, as a 10:51researcher on some of these issues.
10:53Um, let me start, uh, with a little bit of a, of a history lesson, just 10:57if you'll, uh, accommodate that. 10:59So, um, of course there was an, uh, a person, an IBM Fellow, Irene Greif. 11:04And, uh, she was in our, uh, Cambridge lab. 11:06She pretty much started it. 11:08Um, and she founded the field of 11:10computer supported, uh, cooperative work and, uh, that, um, she started it in Lotus 11:17and then IBM acquired Lotus, which became part of IBM Research and, and so forth. 11:21And that field brought together all these different sort of things. 11:25It was the human factors sort of things, the distributed systems, I mean like a lot 11:30of different stuff of what it really means for humans to work together, supported 11:36by computers and computer technologies. 11:38And I think the paradigm is shifting a little bit, and it's more about individual work. 11:44So, um, and how that's supported by AI and, uh, the collaboration between 11:49humans and AI and, and doing that. 11:50So kind of the co creativity and, and, and these sort of things. 11:54And I think, uh, 11:56just the fact that this whole paradigm is changing is calling for 12:00exactly that, the innovations in the interfaces, in the, um, interactions. 12:05And, uh, I think there needs to be a lot more kind of control given to the 12:09user, the ability to tinker with the interface to make it what works for them. 12:14And the canvas is very much a, a great starting point, but I think, uh, because 12:19I mean, just a single chat box is not 12:21the answer. 12:22I, I think everyone can, uh, appreciate that. 12:24But um, uh, once we go beyond that, then the world opens up into 12:28lots of different possibilities and I think the canvas is one. 12:32Um, but why not just let me as the user determine what is 12:37the right interface for me? 12:38And maybe that'll actually be the, the next step. 12:40Oh, like that.
12:41The future will be like almost purely like everybody will have their own basically 12:44interface for this sort of stuff. 12:45It's very interesting. 12:46I guess my question to the rest of the panel, is this the first 12:51broadly released multiplayer AI? 12:54You know, the, the, the interface where you've got multiple people 12:57interacting at that with the same 12:59interfaces; have there been versions of this before? 13:02Is this the, is this the one to make the splash where we look back and say, this 13:05is the first time where people are going to be, uh, interacting together over 13:11the canvas, you know, over that case? 13:13Or am I, am I blanking on some, uh, examples? 13:17Previous examples? 13:18No. I mean, I think like the 13:19Google Docs, I mean, you're just all editing at the same time. 13:22And then you can have some AI, um, helping each of the people a 13:26little bit is in that same pathway. 13:29It's not like, uh, we haven't seen canvas type things when you, uh, we 13:34use mural for, um, design thinking sort of things then, and there's multiple 13:38people moving things around and our team, um, to develop kind of a AI 13:43Mural version sort of thing, uh, in, in our Cambridge Lab. 13:47So, um, yeah, a lot of things that are happening, but uh, yeah, 13:52it, it is a step I would say. 13:54Yeah. 13:54I think there's one question we've talked a little bit about in previous 13:56shows is, you know, it's kind of funny that in some ways because ChatGPT was 14:00like this big kind of moment for AI. 14:02All of the interfaces that kind of have followed since have like 14:05fallen into the gravitational well of everything needing to be chat. 14:08And it feels like maybe, I think what's exciting about Canvas is 14:10like, and you know, a bunch of other experiments as well is like kind of 14:13like finally people are trying to like, kind of stretch beyond that. 
14:16Um, and I think it's kind of an interesting debate on just like how much 14:19path dependence there is here, right? 14:20Like whether or not people will. 14:22Sort of, I, I mean myself, I'm kind of like, oh, there's no chat. 14:24Like, or like chat is like a less part of this interface. 14:26It feels like a little bit weird for me. 14:28Um, and, and I think that's like pretty interesting to see. 14:31Kate, any thoughts on this? 14:32I don't know if Canvas is like something you'd use or, uh, how you feel about it, 14:35kind of particularly from sort of thinking about this from a product standpoint? 14:38Yeah. 14:38I mean, in general, I'm, I'm always a fan of finding ways to move 14:42beyond kind of the initial chat based constraints. 14:45I think Canvas is probably more of a stepping stone than a final destination. 14:49I, I think it's got that a little bit of chat feel while still being different. 14:53For coding, I really think it's about being embedded in 14:56where developers are coding 14:58today versus having a standalone kind of canvas app where you 15:01kind of iterate in terms of where you'll get the most productive use. 15:05So, you know, I, I think it's a little bit more of a, a demo perspective 15:08there. From a product strategy standpoint, 15:10I think it's interesting to kind of look at how some of the 15:12big players are focusing on 15:15more of the endpoint side of usage, like Anthropic, I think 15:18is focused pretty heavily there, versus more the application side. 15:21Um, with UIs where, you know, it seems like Google's focusing a little 15:27bit more on that with this release. 15:28Um, certainly with some of these new features. 15:30Honestly, from my perspective, I was most excited by the 15:33Gemini 2.5 model simply from the reasoning. 15:38Uh, I do a, a basic sniff test, uh, for different reasoning models 15:42and just ask what is two plus two and see like how much thought will 15:45the model put behind this answer?
15:48Like, can it, can it figure out how not to reason if it's simple? 15:52I like that a lot. 15:52Yeah, and the, the model did actually pretty well, like compared to 15:55DeepSeek, where R-1 will give you 15:58five paragraphs of, okay, I've got 16:002 fingers on this hand and 2 fingers on this hand. 16:02And, you know, it goes way into it. 16:04Um, you know, uh, Gemini was able to give a very reasonable short 16:08response that was still correct. 16:10So, you know, I, I thought that boded well, I haven't done more, 16:13you know, exhaustive testing. 16:14Obviously that's just a quick sniff test, but that's the first time I've seen 16:18like a, a more practical, just like a... 16:20It is an easy question. 16:22I'm not gonna spend a million paragraphs and tokens trying to give you a response. 16:25Mm-hmm. 16:26Yeah. That's great. 16:26I love that. 16:27Is like, the idea is like, actually now we need to be doing simpler evals 16:31because the question is whether or not you're overcommitting resources. 16:33It's like death by reasoning. 16:34Very, very interesting. 16:35Yeah. 16:36Um. 16:37Kush, Skyler, other sniff tests, vibe checks on 2.5? 16:40I do think these qualitative like evals are pretty valuable, I think 16:43in terms of people like navigating, like, is this something I should 16:45spend time on or look into? 16:46Not in the last 36 hours, sorry. 16:48No. 16:48Okay. 16:49Same here. 16:51I also do, where is Rome? 16:53That's my other go-to. 16:54Uh, similarly, you know, paragraphs of debate on where 16:58Rome is compared to mm-hmm. 17:00a short 17:00response on Gemini. 17:02So I thought that was pretty good. 17:03Yeah, I will need to try that with DeepSeek. 17:04I just love the idea of like, kind of grinding away for 17:06like a very simple question. 17:08It is. And really stressing about the answer. 17:10It, it 17:10literally is like, okay, two fingers plus two fingers.
17:12But then if I have two toes plus two toes, how many toes do I have? 17:15Like it, it gets mind blowingly intricate. 17:19Um, I think one final thing I did wanna touch on and Kush, I 17:22think we should recognize that you're wearing a safety vest. 17:24Um, before I kind of tee up this session, do you wanna explain why you're 17:27wearing a safety vest on the show today? 17:29Yeah, this is, uh, a safety vest because, uh, IBM Research with our Granite 17:34program is, uh, very focused on safety, um, through our, uh, uh, red teaming, 17:39our, uh, Granite safety alignment and our Granite Guardian model. 17:42So yeah, that's, uh, just trying to represent that. 17:44Yeah, absolutely. 17:46And I did wanna finally just talk a little bit about, 17:48like, model safety here. 17:49Um, and you know, uh, I think one of the things we've talked a little bit about 17:53in the past is like how much safety is built into the model versus kind of a 17:56future where safety is kind of like a separate model that you're working on. 18:00Um, and I don't know, I guess Kush like looking at a release like this, 18:03it still feels like at least a lot of the big companies are still kind of, 18:06I would say, like at least kind of, you know, Google, like, let's just 18:09say, um, is kind of still chasing after this kind of like, well, it's just 18:13all gonna be embedded in the model versus kind of safety being outside. 18:16You wanna talk about kind of like the pros and cons of that? 18:18And I guess why, you know, Google isn't kind of like doing what a lot of 18:22other companies are doing is saying, well, like Meta or like IBM like, hey, 18:25we're gonna actually separately think about safety as a, as its own kind 18:28of like model construct in some ways. 18:30Um, just was curious to get your thoughts on that. 18:32I mean, Google does have, uh, something called ShieldGemma, so they do 18:36have, uh, uh, a player in, in this, uh, separate model sort of field.
18:40But, um, yeah, it, it's really not a question of like choosing between 18:45the, the different ways of doing it. 18:47I mean, you really should do everything because, um, 18:50uh, there's never any perfect sort of solution. 18:52So yes, do the safety alignment as best as you can, um, and then still have, uh, 18:58an input and output guardrail, um, because I think, uh, it's, uh, it's critical. 19:02And then even on the data curation side, I mean, um, uh, try to exclude as much of 19:06the, uh, the bad content, uh, as possible. 19:08And, 19:09uh, to me a big reason for keeping a separate guardrail model alive is 19:15because, um, uh, beyond the performance sort of question, um, where yes, 19:19I mean, that does show that, uh, you can do a little bit better. 19:23But, um, the other thing is customizability because, um, not every, 19:27um, sort of application, every use case is gonna be exactly the same. 19:31So, 19:32uh, the notion of safety, the notion of what is, uh, desired 19:35and undesired is gonna change. 19:37And so, uh, if you just bake everything in, uh, you don't 19:40have that flexibility anymore. 19:41So, uh, just, uh, we need to think that, uh, uh, every, uh, customer, every 19:47sort of application needs some level of customizability and that applies 19:52to the overall model, but uh, uh, also on the safety side. 19:55Yeah. 19:55And I do think that's actually a great way of sort of thinking about 19:57it, is you, you're sort of saying safety at every level, right? 20:00Was like do safety everywhere. 20:02Um, and it's like how we'll end up doing it. 20:03Kush, in 10 seconds. 20:05Could you compare and contrast safety and security? 20:08Uh, the reason I ask is the UK recently rebranded their AI Safety Institute 20:14into the AI Security Institute. 20:18Yeah. What's, what are your thoughts? 20:20Not necessarily on that particular rebrand, but along those two dimensions? 20:24Yeah.
20:24No, I mean, both of us were in San Francisco in November, right? 20:27When, uh, it was a convening of the AI Safety Institutes. 20:31Um, you were a part of the Kenyan delegation. 20:33Right. 20:34And, um, uh, the, yeah, things have changed a little bit. 20:38I think that's more politics, more just wording sort of things. 20:41But, um, to me, like security is 20:45at the application level. 20:46Um, that's, those are things that you do kind of, um, in a general sense. 20:51Um, and then the safety is at the model level. 20:54Um, things that you're trying to bake into the model or put a extra Guardian. 20:57And then when you kind of meet in the middle, um, the model, uh, comes 21:01up and the application comes down. 21:03Uh, that's where kind of the, the confusion might be a little bit, so it's 21:08security that's kind of becoming more AI-ish, and then safety that's 21:12become, or the AI model is becoming more secure in some capacities. 21:16So yeah, to me there's, uh, the general idea is just reducing the risk of 21:21harms and, um, uh, the more you can do that, uh, the, that's the, the goal. 21:30For our next topic, uh, I wanted to kind of bring us to a hardware 21:33story, a really interesting, uh, feature coming out in Wired this 21:36week on a company called Extropic. 21:38Um, and what Extropic is investing in is an idea called thermodynamic computing. 21:43Um, and I really want to kind of bring this up just 'cause, you know, a few 21:46episodes ago we talked about quantum. 21:48And these guys I think are really making the argument that like, well, 21:50it's not gonna be GPUs, it's not gonna be quantum, it's gonna be this new 21:54thing called thermodynamic computing. 21:56Um, and I think it's just really interesting as we kind of think 21:58about the ways in which hardware influences, uh, the work of AI. 22:02Um, and, uh, was kind of interested in, in like the, the takes of this 22:06group as the people who kind of like work in AI day in, day out. 
22:09You know, to what degree are you paying attention to 22:12these kinds of developments? 22:13Because I feel like one way of thinking about this company is 22:15that it's big if true, right? 22:17If you can actually do it, then maybe it's a really big deal. 22:19But we kind of don't know at this point. 22:21And so I'm curious, on a day-to-day level, are 22:25folks thinking about these alternative computing platforms? 22:28Or are they still so far in basic research that they're 22:31not impacting day-to-day thinking? 22:33Okay. 22:34Maybe I'll turn to you for the first take here. 22:36Yeah. 22:36I'm not an expert at all on chip design or hardware, but I 22:41think it's something that certainly IBM, where we have huge teams 22:47specialized in alternative chip design and AI accelerator chips, is paying 22:52really close attention to, and there's a lot of innovation going on in that space. 22:57So, you know, with some of these headlines, normally we let things 23:00mature a little bit before we start paying closer attention. But 23:04as a field and as a whole, I think there's a ton of opportunity 23:07to better optimize and redesign chips based off of the inference loads 23:12that we expect to see in the future, 23:14moving, for example, toward running smaller models more times at 23:19inference versus one big model one time at inference in order to improve 23:23performance, as everyone starts investing more heavily in a phenomenon 23:27we're calling inference-time compute. 23:29So I think 23:31there are just tons of opportunities in this space, 23:33and I'm certainly eager to see how Extropic evolves and whether something 23:37becomes mature enough that the field can take advantage of it.
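The inference-time-compute idea Kate mentions, trading one big-model call for several small-model samples, can be sketched as a simple majority vote. The `small_model` function here is a purely illustrative stub, not a real model call:

```python
from collections import Counter

def small_model(prompt: str, attempt: int) -> str:
    # Stub standing in for a cheap model call. A real system would
    # sample the model with temperature > 0 so attempts differ.
    canned = ["42", "42", "41", "42", "40"]
    return canned[attempt % len(canned)]

def self_consistency(prompt: str, n_samples: int = 5) -> str:
    """Run the small model n_samples times and majority-vote the answers."""
    answers = [small_model(prompt, i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # majority answer: "42"
```

The point of the sketch is the shape of the workload: many short, independent inference calls rather than one large one, which is exactly the load profile new accelerator designs could target.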
23:40And this is where I wanted to point the discussion, because I think 23:43in some ways the uniformity of 23:47GPUs, and even the uniformity of NVIDIA, has been in some ways 23:50really good for the AI space, just because 23:53there's been a common standard that people can build around on the hardware side. 23:57And one of the questions I'm curious about as this evolves is, if you have all these 24:03alternative computing platforms that end up being good ways of doing AI, 24:06does that fragment the space a little bit? 24:09I assume the way you would try to do AI on top of something like a 24:13thermodynamic computing chip or a quantum chip might look really, really different. 24:17And so, as you 24:20think about the future, maybe, Skyler, I'll turn to you: 24:23do we think there's going to be more fragmentation in the 24:24space, or, I don't know, 24:25maybe we'll find some way to just get CUDA to work on everything? 24:28You know, 24:29I'm not ready to invest in Extropic yet, but I do think they've 24:33got some interesting takes, and I was reading about it today. 24:38You don't want any randomness in your floating-point operations, 24:42our typical zeros and ones. 24:44But if you're doing billions and trillions of these floating-point 24:48operations, that noise is actually okay. 24:51That's the idea of AI and how we train these sorts of things: distributions of data. 24:55So the problem is, you don't want randomness in any individual 24:59calculation, but you want to simulate randomness at the larger scale. 25:03Their approach seems to be: let's not bother with zeros and 25:07ones anymore at the chip level.
25:09Let's embrace randomness down at the chip level, because that's where we're 25:13eventually going anyway, thinking always about distributions rather than the 25:17answer being, you know, four, for example. 25:19So I'm really glad people are asking those questions. 25:24Whether or not they'll be able to induce the desired distribution 25:28by passing electrons through a metal wafer, 25:33that remains to be seen. 25:35But I'm really glad that people are considering this idea of the extreme 25:40accuracy required for our zeros and ones, 25:42and that in the bigger picture we actually don't need that 25:47specific accuracy when you're talking about training these massive models. 25:51So it's a really cool tension, and I'm excited to see how it plays out. 25:54But like I said, I'm not 25:57taking my money there quite yet. 25:59Yeah, absolutely. 26:00I actually really love that you're, I think, revealing a bias in how I 26:03framed this segment, which is that hardware is the upstream thing and all 26:07the AI people have to dance depending on how the hardware evolves. 26:11Skyler, you're almost making the reverse argument, right? 26:13Which is that what we're seeing now, and I guess what this company is 26:16an example of, is an attempt to make the hardware match 26:20what we know about AI now. 26:23And so the power is actually going the other way, which 26:25is to say, in some ways, GPUs were always kind of an accident, 26:29and we're now trying to rebuild around that. 26:31Huh. 26:32That's a 26:32nice take. 26:33Kush, any final thoughts on all this? 26:34Sure. 26:34I can maybe go back to some more of my history lesson, if 26:38you guys are okay with that. 26:40So this is good.
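Skyler's noise-tolerance intuition, that per-operation errors wash out in aggregate, can be sketched directly: inject a small random error into every single operation, the way an analog circuit near its noise floor might, and check the aggregate result. The Gaussian noise model here is an assumption for illustration only:

```python
import random

random.seed(0)

def noisy_add(a: float, b: float, sigma: float = 0.01) -> float:
    # Every individual operation is slightly wrong, like an
    # analog circuit operating near the noise floor.
    return a + b + random.gauss(0.0, sigma)

# Accumulate 100,000 ones with per-operation noise.
n = 100_000
total = 0.0
for _ in range(n):
    total = noisy_add(total, 1.0)

# Independent errors grow like sqrt(n), so the relative error
# shrinks like 1/sqrt(n): individually noisy, collectively accurate.
relative_error = abs(total - n) / n
print(f"relative error: {relative_error:.2e}")
```

The relative error comes out orders of magnitude smaller than the per-operation noise, which is the statistical bet behind designs that give up exact zeros and ones.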
26:41I feel like, you know, 26:42we have Chris on for the crazy take. 26:44Yeah. 26:44And we have Kush on for the history and philosophy 26:48perspective. 26:49Right. 26:49So, I mean, what is thermodynamic computing? 26:54Right? 26:54I think it helps to understand a little bit of how this has 27:00come about, because you said it 27:05at the beginning, right, that there's some sort 27:08of hardware lottery. 27:10Sarah Hooker is a researcher who wrote an essay all about this: 27:13that whatever the hardware happens to be, that's what makes 27:19things go forward, and so forth. 27:21So even the whole IBM company: it started 27:24at a time when there was this guy, Herman Hollerith, and he was doing punch 27:27cards, and he did the US census in 1890, right? 27:33And that's a paper card with a hole in it. 27:36That's a very basic sort of technology. 27:38And then in the sixties, Bob Dennard here at IBM Research 27:42invented DRAM, which 27:44took a capacitor and a transistor, and you could do memory that 27:48way instead of through these hole-punching sorts of things. 27:51And then you get to the thermodynamics of it. 27:54So you have James Clerk Maxwell and the second law of thermodynamics, 27:59and he's trying to think about this demon that's trying to 28:03make heat flow without any energy expended, 28:08right? 28:08And there were these two researchers here at IBM, Rolf 28:12Landauer and Charlie Bennett. 28:14And what they figured out is how to argue against 28:19this Maxwell's demon thought experiment.
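Landauer and Bennett's counterargument comes with a concrete number attached: Landauer's bound puts a floor of k_B·T·ln 2 on the heat dissipated per bit of information erased, which is easy to sanity-check:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # roughly room temperature, K

# Landauer's principle: minimum energy dissipated to erase one bit.
e_min = k_B * T * math.log(2)
print(f"{e_min:.2e} J per bit erased")  # about 2.87e-21 J
```

Real chips today dissipate many orders of magnitude more than this per bit, which is part of why thermodynamic and reversible-computing ideas keep resurfacing.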
28:24Landauer showed that any sort of computation actually 28:29requires the use of energy; 28:31it dissipates heat, right? 28:34And then Bennett took that idea and said that this demon, which is sorting 28:40hot and cold molecules, must actually do 28:45some information processing, so it's actually using energy. 28:48And so the second law of thermodynamics must hold. 28:52So all of this is part of IBM's heritage as well. 28:56But this new thing, I think it's exciting. 29:00It's 29:00been in the works for a long time as well, 29:03these thermodynamic ideas. 29:04So the claim is that things like matrix inversion, which is a very 29:09important computation, and very expensive to do with large 29:14matrices, can be done naturally with this sort of approach. 29:18And I think that makes a lot of sense. 29:21Just take a capacitor and an inductor, 29:25and with those you can actually 29:30set up the matrix on the conductors, let it dissipate 29:35energy however it's supposed to, 29:37and then the correlation among these different circuits actually 29:41tells you the inverse of the matrix. 29:43So all of that is really cool stuff. 29:47And I don't see why we shouldn't be looking at those alternatives. 29:51A lens, we know, does the reciprocal operation. 29:55We know that resistors do this or that, right? 29:58So why not do it this way? We shouldn't be beholden to digital 30:03logic just because that's how it's happened over the years. 30:07With all of these things, you take a look back and 30:09it's always like, well, actually, it's been going on for decades, 30:11you know? 30:12I feel like all of these new developments, 30:15AI included, right?
30:16It's just part of this very long historical legacy. 30:24So for our final topic, this was kind of a fun thing that I 30:28did want to talk a little bit about. In a week that was just 30:31packed with different announcements, 30:33the one that seems to have taken the cake, at least in my social media feeds, 30:37has been the release of OpenAI's 4o image generator. 30:42Most importantly for me, I guess, this 30:47meme of rendering everything in a Studio Ghibli format, an anime format, 30:52has just taken over. 30:53My social media feed is nothing but these images right now. 30:56And so it's a kind of funny moment, I think, to take a step back and say, 31:01okay, image gen is suddenly trending 31:05again, in a way that almost dampened a bunch 31:09of the other announcements this week. 31:10And playing around with it, it 31:13is really quite impressive. 31:15So maybe, Skyler, I'll throw 31:18it to you for 31:19the vibe check: if you've played around with it, what do you think? 31:22And is this actually a big improvement? I mean, we've done 31:25style transfer in the past, right? 31:26So this is in some ways not new, but it seems to have really hit a 31:29nerve in a way that has not been the case for previous announcements. 31:33It has. 31:34I have not played with it. 31:35But again, okay, 31:36my feed 31:37has been filled with people re-memeing 31:42all of these different styles. 31:44And I think with this, are we in a position 31:49where multimodality, at least between language and images, has been solved? 31:54Are we going to move the goalposts further away, 31:58or can we say we have cracked it,
32:02GPT-4o has cracked multimodality? 32:05I think. 32:06I think they've done that. 32:07I think this is some really, really cool, impressive tech. 32:10So yeah, I don't know. 32:12Otherwise we're going to again say, no, but it can't do this, and we'll 32:16keep moving those goalposts. 32:17So I think it really is quite impressive, at least, again, 32:20from all my friends playing with it and sending images 32:23over social media. 32:25Yeah. 32:25In some ways, I think, having played around with it a little bit, it 32:29is sort of a triumph, not necessarily of 32:31images or text-to-image, but it's almost a triumph of the ability to 32:35correctly infer what someone is looking for when they ask, right? 32:41That's always kind of my reflection. Playing 32:43with older versions of Midjourney, 32:45it was like, oh, well, not quite this. 32:47Can you make this change? 32:48Can you make this change? 32:48And you finally get to the end. 32:50This one is kind of magical, because it's very one-shot. 32:53You're like, I want this, 32:54and it generates an image, 32:55and you're like, oh, that's 32:56kind of exactly what I was looking for. 32:58And I think that's really interesting, and I don't know, Kate, 33:01you're nodding, if there's a good name for that achievement. 33:04What's the big jump here, in some ways? 33:07Well, I think it's important to recognize where we were before, which was DALL-E 3, 33:11which was back in 2023. You know, ancient. 33:15Ancient in generative AI terms. Way outta date! 33:19So, you know, DALL-E 3 was more or less being called as a tool, 33:22being swapped in when it's called to generate 33:25an image based off a part of the conversation, and then turned off so that 33:28GPT-4o, or whichever model, can take back up the conversation.
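The DALL-E-3-era pattern Kate describes, the chat model pausing to "call a friend" and then picking the conversation back up, looks roughly like the sketch below. Everything here (`chat_model`, `generate_image`, the message format) is an illustrative stub, not OpenAI's actual API:

```python
def chat_model(messages):
    """Stub LLM: emits a tool call for drawing requests, plain text otherwise."""
    last = messages[-1]
    if last["role"] == "user" and last["content"].lower().startswith("draw"):
        return {"tool_call": {"name": "generate_image", "prompt": last["content"]}}
    return {"text": f"Here you go: {last['content']}"}

def generate_image(prompt):
    """Stub for the separate image model that gets swapped in."""
    return f"<image: {prompt}>"

def run_turn(user_msg):
    # The LLM decides to call the tool, is paused while the image
    # model runs, then resumes the conversation with the result.
    messages = [{"role": "user", "content": user_msg}]
    reply = chat_model(messages)
    if "tool_call" in reply:
        image = generate_image(reply["tool_call"]["prompt"])
        messages.append({"role": "tool", "content": image})
        reply = chat_model(messages)
    return reply["text"]

print(run_turn("Draw a cat in a Studio Ghibli style"))
```

The native approach discussed next dissolves this boundary: instead of a hand-off between two separate models at the orchestration layer, image generation lives inside the language model's own architecture.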
33:32And so, with what we're seeing here, obviously OpenAI does not 33:37share a tremendous amount of detail on their broader architecture and design, 33:41but based off of what I've read in their docs and release notes, 33:45they talk about this being a more native capability, embedded far 33:49deeper in the architecture of the system. 33:51And so I think what we're really seeing is some really exciting innovation in 33:55multimodality focused on system design: how can we bring some of these 34:01multimodal components far more core to where the language model operates? 34:05That could mean, for example, potentially sharing some parameters 34:09and being able to bring different components together much 34:14earlier in the process, rather than, at the very last minute, a 34:17tool call, you know, call a friend and then pick back up the conversation. 34:21And I think that is the future, not just for multimodality, but for all types of 34:27understanding and more specialized tasks: 34:29being able to have different experts, whether that's an expert on documents 34:33or an expert on images or an expert on audio, 34:37integrated at a systems level, far more internal to the model, and 34:41having a model, or an application like a chatbot, be far more of 34:45a systems-based approach versus here are some weights that we're 34:49calling for a given prompt. 34:51Yeah. 34:51And I guess, as someone who's been less involved in the 34:54day-to-day, Kate, could you explain a little bit 34:56why that has been hard? 34:58Moving from something that, it sounds like, was 35:00bolted on at the end to something fully integrated in the system, 35:05what makes that difficult? 35:07I think part of it is just the momentum of how things have been built the previous way.
35:13You know, starting with the release of the original GPT-3.5 35:17models and ChatGPT, the way to scale performance has been 35:22baking it into the model at training time: train on more data, have more parameters, 35:27and boost your performance by baking it all in in that upfront training step. 35:32And so I think a lot of the system design and architecture and applications have 35:35been focused on, okay, there's this big black box that we make a single 35:40call to, and we get a response back. 35:42And we're starting to see more of a shift. 35:45And, you know, I don't know that it's necessarily more difficult. 35:47I think in some ways it's actually a lot easier to innovate if we're innovating 35:51outside of that training, innovating more on the systems-based approach. 35:55But we do have to make a conscious shift 35:57to enable that. We don't have the same tools and capabilities available; 36:01the community needs to build those, 36:04particularly if you're talking about doing this in the open versus, you know, 36:07OpenAI doing it behind a closed, gated wall. 36:08They've got a whole inference orchestration layer that they haven't released to the broader world. 36:14So I think this is a big challenge that open-source models 36:17face in particular: being able to catch up to the same degree with 36:21this more systems-based approach, 36:22because we don't have the same infrastructure, or the same kind of 36:25revenue coming in, so to speak, to pay for that build-out, 36:28to enable that system. 36:31That's really helpful. 36:32Thank you. 36:33Kush, I'm going to call on you, not just as the history person here, 36:35but also as the safety person. 36:37I think one of the things I've observed in this wave, I 36:41mean, indeed even the Studio Ghibli meme, is something that I think 36:46companies have traditionally been a lot more restrictive on, right?
36:49To say, oh, you really don't want to copy a style. 36:51Right. 36:52I've also seen a number of image generations that are a 36:55little bit at the edge of what you would consider 36:58acceptable image generation. 36:59Mm-hmm. 37:00Do you think this also marks a shift in how companies 37:02are thinking about image gen? 37:04I think there's one way of reading this, which is that OpenAI is concluding 37:07that actually we should let up a little bit, we should allow 37:10people to use these image gen products more freely, even though they 37:14might occasionally generate some stuff that's offensive, harmful, toxic, and so on. 37:18And I wanted to just get a comment from you on 37:21the meta here. 37:22Are companies opening up in a way that they haven't in the past, 37:24and what are the trade-offs of that? 37:26Yeah, I think they are, and I think the image side of 37:31things is maybe a little bit more forgiving on this, because 37:35natural language text is used more in business 37:40applications; generative imagery 37:42is 37:46less legalistic, in some ways. 37:48So I do think that that is probably the case, for demonstration 37:52and for many other things. 37:53So yeah, I was actually playing around with this, and here's 37:59an example that my wife and I were running. She actually did her 38:02MFA in computer art a decade ago. 38:05And she took a class in digital matte painting. 38:08And one of the assignments, for a week, 38:10was to take an image, a summertime image, and 38:14change it so that it looked like a wintertime image of the same scene. 38:17And this thing does it really well.
38:20I mean, in a minute you have what you were looking 38:23for. But then what she was zooming in on was the windows of the building, 38:28which had some minor changes between the summer and winter images. 38:33And so, at first glance, I didn't notice it. 38:37I mean, she is an expert at this, so she was zooming in and going 38:41back and forth and really looking at whether something had changed or not. 38:45And from the safety perspective, those sorts 38:50of little minor things that 38:52someone like me doesn't notice are probably fine. 38:56But once you're at a very expert level, if you're an 39:00actual moviemaker doing digital matte painting or other 39:04stuff, then it becomes critical. 39:05So as a consumer tool, I think it's all good, but there's still a gap. 39:10And we have this researcher, Mauro Martino. 39:14He is a world-famous AI artist, and he just 39:18created a 12-minute-long, 39:20fully AI-generated film, and he couldn't use any of 39:24the tools that are out there. 39:25He had to innovate the tools and everything else. 39:28And this is being shown in Seville, Spain, these days. 39:31And this film is at such a professional level. You can 39:35imagine the difference between what 39:37this image generation stuff is able to do and what the 39:40professionals are truly able to do. 39:42So this is not the tool for them, but I think it's 39:46safe enough for all of us. 39:48I think that's the way to think about it. 39:51Mm-hmm. 39:51Yeah, that's really interesting. 39:52And I almost love this threshold of good 39:55enough to fool the amateurs. 39:57It's, I think, actually a really important threshold.
39:59I sent an image to a friend, being like, you know, it's really impressive 40:01that they get the fingers right now. 40:03And then he shot back with a zoomed-in version of the image to 40:06show that there was this little fingertip still hanging out somewhere. 40:09And I was like, oh no. 40:10Again, it was enough to get past my sniff 40:13test, but anyone with a keener eye would clearly 40:16have seen the problem. 40:18So, on OpenAI's blog release, they have a little paragraph about how 40:21they've used a reasoning LLM 40:24on the safety side of this generation. And I don't know if we'll get 40:28any more details beyond that paragraph, but I thought that was interesting. 40:32I don't know if they're, you know, covering themselves, or trying, but 40:37there was this very clear paragraph about how they've used their 40:40reasoning LLM to help parse through some of these more, 40:45you know, edge cases of questionable generation. 40:49So I'd love to see if we get more details about that going forward, 40:53and why 40:55they called that nuance out in particular. 40:58Yeah, that'll be really interesting. 40:59I missed that, and I think it's definitely worth keeping an eye on. 41:01And I think back to Kate's little reasoning vibe 41:04check: how much time does it spend thinking about whether or not something is 41:07a good thing or a violation of its content guidelines? That's very interesting. 41:11If it happens 41:11at training, I don't care how long it takes. 41:13That's true. Exactly. 41:13I don't need to see it. Take as long as you like. 41:15It's at inference time. 41:17Yeah. 41:17Well, as usual, so many things to cover, 41:19not enough time to cover it all. 41:21Kush, Kate,
41:22Skyler, thanks for joining us, and thanks to all you listeners 41:24for joining us as well. 41:25If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast 41:29platforms everywhere, and we will see you next week, as always, on Mixture of Experts.