
WWDC, AI Wars, and Quantum Advances

Key Points

  • The hosts debate Apple’s recent WWDC announcements, questioning the rushed design changes and speculating whether the new “glass” OS will become a “Windows Vista‑like” flop.
  • They analyze Meta’s strategic acquisition to secure its AI supply chain, emphasizing that infrastructure—training data, evaluation, and human feedback—is now the primary battlefield in the AI wars.
  • Tim Hwang assigns “homework” on Apple Intelligence and Sam Altman’s latest blog, highlighting how quickly the panelists dive into new papers and test emerging models like o3-pro.
  • The episode previews upcoming discussions on OpenAI developments, a major Scale AI partnership, and a breakthrough quantum‑advantage announcement slated for the show’s closing segment.


# WWDC, AI Wars, and Quantum Advances

**Source:** [https://www.youtube.com/watch?v=Y5yiQDnTimU](https://www.youtube.com/watch?v=Y5yiQDnTimU)
**Duration:** 00:52:02

## Sections

- [00:00:00](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=0s) **Untitled Section**
- [00:03:14](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=194s) **Apple AI Rollout: Hype vs Reality** - The speaker critiques Apple’s evolving partnership with OpenAI, notes Siri’s lag behind Meta and Google, comments on the new “liquid glass” UI and ultra-dark mode features, and highlights excitement for on-device large language model support.
- [00:06:20](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=380s) **Apple’s Last AI Play** - The speaker argues that after abandoning a larger, unready model, Apple’s only remaining strategy is to subtly integrate small on-device AI models across its ecosystem, a move that still trails Google’s more advanced Gemini models.
- [00:09:27](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=567s) **Disappointment with Siri's AI Strategy** - The speaker laments Apple's delayed Siri overhaul, attributing it to an overly cautious, privacy-first approach despite the company's strong hardware-software integration.
- [00:12:36](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=756s) **Unified Design System Across Devices** - The speaker praises the platform’s design system for delivering consistent UI across iPhone, iPad, and Mac, positioning it for future AI integration and better developer experiences, while acknowledging personal Apple fandom.
- [00:15:41](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=941s) **Apple’s Missed AI Opportunity** - A speaker criticizes Apple’s delay in releasing advanced generative AI features—such as photorealistic image creation and a more conversational Siri—arguing that the hold-up may stem from a desire to perfect the platform despite the missed market chance.
- [00:18:43](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1123s) **AI Optimism vs Real-World Challenges** - The speaker argues that while leaders like Sam Altman portray an overly optimistic, utopian AI future, significant issues—misalignment, job impact, unequal global access, and concrete technical shortcomings such as models failing basic tasks—remain unresolved.
- [00:21:47](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1307s) **Debating Apple’s AI Claims** - The speakers critique Apple’s reasoning paper, testing it with o3-pro, questioning its prompting strategy and emphasizing the importance of puzzle-based evaluations.
- [00:24:50](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1490s) **Debating LLM Reasoning vs Copying** - The speaker examines whether large language models are just stochastic parrots that copy diverse data or demonstrate genuine problem-solving reasoning, arguing that the utility of the models matters more than a strict definition of intelligence.
- [00:27:57](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1677s) **Debating AI Rights and Trust** - The speaker argues that AI displays reasoning abilities, launches a campaign against AI discrimination, and examines how companies might treat AI models as trusted employees comparable to human hires.
- [00:31:00](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1860s) **Meta's $15B Scale AI Bet** - The speaker explains Meta's massive $15 billion acquisition of data-annotation firm Scale AI as a strategic move to close the gap with rivals like Anthropic, Google, and OpenAI, secure talent, and accelerate its AI research pipeline.
- [00:34:04](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2044s) **Young Billionaire, Scale AI, Meta Hire** - The speaker lauds a 28-year-old billionaire’s talent, praises Scale AI’s synthetic-data capabilities and recent acqui-hire, and points out Meta’s unusual external senior hiring amid its AI organizational shifts.
- [00:37:13](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2233s) **Discussing AGI Investments and IBM Quantum Preview** - The panel talks about a massive $15 billion bet on high-quality AGI data, then previews an interview with IBM Quantum CTO Oliver Dial to discuss recent quantum computing announcements.
- [00:40:16](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2416s) **Surfaces, Errors, and Near-Term Quantum Advantage** - The speaker explains how surface-induced imperfections cause high error rates in quantum processors, how statistical mitigation allows useful computations in chemistry, optimization, and materials science, and predicts a demonstrable quantum advantage over classical computers within the next few years.
- [00:43:29](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2609s) **Introducing the Gross Quantum Code** - The speaker contrasts the surface code’s simple checkerboard layout—requiring thousands of physical qubits per logical qubit and thus impractical for current ~1,000-qubit devices—with a new “gross” code that abandons the nearest-neighbor constraint by adding long-range connections, enabling a far more efficient error-correction scheme.
- [00:46:35](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2795s) **Quantum Computing: New Problem Frontiers** - The speakers discuss how quantum computers are expanding the map of solvable problems—from modest chemical simulations to complex materials science and, most compellingly for clients, large-scale optimization—by offering efficiency gains impossible for classical machines.
- [00:49:38](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2978s) **Questioning the Quantum Roadmap** - The speaker outlines a modular plan for building large-scale quantum computers, acknowledges ongoing research and scalability challenges, and pushes back against hype that such systems will be practical within the next year.

## Full Transcript
0:00 I think WWDC would start being called "Why Was Design Changed?"
0:05 I really want Apple to focus on having a good platform, right?
0:11 The mobile devices, the iPad, the Mac all coming together, and I, I actually
0:15 think that's the bigger story of WWDC.
0:18 So I think Meta is really betting on securing its foundational AI
0:23 supply chain with this acquisition, which I think is, is very strategic
0:27 and it's the right move for them.
0:29 Infrastructure matters as much as models: training data, evaluation, human
0:34 feedback, and I think this is where the AI wars are being fought right now.
0:38 Tim gives us homework every time we appear.
0:40 He is like, you need to read the Apple Intelligence one.
0:42 You need to read, uh, Sam Altman's blog post, and you're like, okay,
0:46 I'm gonna stick it through o3-pro.
0:48 I might as well test that out.
0:50 13 minutes later.
0:52 13 minutes later for it to read the paper.
0:54 You're like, come on, dude.
0:56 I could have read that myself.
0:58 You're saying, like, quantum's, like,
0:59 already here.
1:00 It's kinda what you're saying.
1:01 We're saying you're probably gonna show quantum advantage, so
1:04 doing something better, faster, cheaper than a classical computer,
1:07 by 2026, and I actually think it's gonna be even sooner than that.
1:11 All that and more
1:11 on Mixture of Experts, a Think podcast.
1:20 I am Tim Hwang, and welcome to Mixture of Experts.
1:22 Each week, MOE brings together the sharpest team of researchers,
1:25 engineers, and product leaders that you'll find anywhere in podcasting
1:28 to discuss and debate the biggest news in artificial intelligence.
1:32 Today I'm joined by a rockstar cast: Chris Hay, distinguished engineer
1:35 and CTO of customer transformation; Shobhit Varshney, who is head
1:38 of data and AI for the Americas;
1:40 and Kaoutar El Maghraoui, principal research scientist
1:42 and manager for hybrid AI cloud.
1:45 As always, we have a ton to talk about.
1:46 We're gonna talk about a bunch of news out of OpenAI, a huge deal for Scale AI,
1:51 and you should stay tuned for the end, where we're gonna have a special
1:53 segment that's gonna focus specifically on a very interesting announcement
1:56 that just came out about quantum.
2:02 Before we get to all that, uh, I want to talk about, obviously, Apple's
2:06 WWDC, which is their annual developer conference, the big kind of showcase
2:12 for what Apple is going to do.
2:14 And there were a bunch of announcements, uh, that kind of came out.
2:17 But first I wanted to just go around the horn and get everybody's opinion on,
2:21 you know, what, coming out of, like, the keynote, I'd say, you know, are we gonna
2:24 still be talking about in six months?
2:26 Shobhit?
2:26 I'm curious what you think.
2:27 I think WWDC would start being called
2:31 "Why Was Design Changed?"
2:34 I think this may end up being a Windows Vista moment, but I just have been, I've
2:39 been, uh, I've been rocking the new glass, uh, OS for, for a while now, and it's just
2:44 not quite ready yet, and I am very confused: when the glass design is not quite ready,
2:50 they're comfortable shipping that.
2:52 They're saying that they can't ship AI because AI is not quite
2:54 ready, so they have to pick a lane.
2:56 Yeah, for sure.
2:57 Kaoutar, what's your reflection?
2:58 Anything that we're gonna be remembering this WWDC for?
3:02 I think the still highly competitive pressure from OpenAI,
3:06 Meta, Google, Samsung, uh, especially kind of emphasizing,
3:12 uh, Apple's late AI entry.
3:14 Uh, it puts this under a lot of pressure.
3:17 And, uh, you know, questions around: did Apple's partnership
3:20 with OpenAI evolve or backfire?
3:22 How does Apple's AI stack, you know, compare to what Meta and Google are doing with
3:27 their, you know, Llama, Gemini, and so on?
3:29 So, uh, and especially another, you know, I think, point of concern,
3:33 and disappointment for many.
3:36 It's, uh, you know, the overhaul around Siri, uh, and, you know, the,
3:41 the AI advancements, you know, that they're claiming here, which seem to
3:45 be kind of still lagging behind, you know, the, the competition out there.
3:50 So I also agree with Shobhit around the liquid glass
3:54 interface, uh, I think is a change.
3:57 The final dig in there is that, you know, it's really making, uh, the
4:02 reception and any potential flaws,
4:04 I think, immediately apparent, and maybe even more controversial here.
4:09 Yeah, for sure.
4:10 And, uh, last but not least, uh, Chris, what did you think of, uh, the show?
4:13 Even darker dark mode.
4:15 In fact, I didn't realize how un-dark my dark mode was, and then I was like,
4:20 oh, I can get dark mode even darker.
4:22 That's what I want.
4:23 So, so, I thank you, Apple, because I love my dark mode, and the even
4:28 newer, even darker, darker, dark, dark mode is gonna be awesome.
4:32 So I'm excited about that,
4:34 Apple; that's just welcome news.
4:34 Yeah, but on a more serious note, I actually think that the, the ability
4:38 to access LLMs on device is gonna be huge as part of the framework.
4:43 And I think, because otherwise there, there would've been a risk of
4:46 loads of apps trying to install small models onto your phone, et cetera.
4:51 And I know some people have been doing that already.
4:53 So actually just,
4:55 uh, ubiquitous access to on-device LLMs and being able to easily hook
4:59 that into your own applications.
5:02 Uh, we can argue about how good, uh, the AI on those devices is, but the fact
5:07 is Apple does have the best hardware.
5:09 Apple silicon is incredible.
5:11 So I'm excited to see new applications via these SDKs, and I think
5:17 that's a little bit of a glimpse into the future, because I think we're a
5:21 little bit too low level at the moment, and Apple's gonna lead a little bit
5:24 of that way on, uh, framework access.
5:27 Yeah, for sure.
5:27 And actually, that's a great place to kind of pick up, because I think that,
5:31 you know, I was looking at, say, The Verge coverage of WWDC, and there's
5:35 like a long list of announcements, but in pretty stark contrast to
5:39 where they were on the last WWDC,
5:42 like, the AI stuff is actually weirdly kind of not as prominent
5:46 or not as talked about.
5:47 Certainly people are complaining a lot more about the UI/UX designs, but if
5:50 there's anything that Apple seems to be doing on the AI side, it's like
5:54 opening up new surfaces for the AI to interact with.
5:57 Right?
5:57 So it's, one, any developer can go and play with their models.
6:00 There's also kind of this new idea that, like, Apple Intelligence will
6:03 be able to see what's on your screen and kind of interact with it, which
6:06 I think is also pretty interesting.
6:08 Um, and so I guess, Shobhit, maybe a question for you is, like,
6:12 Apple seems to almost be kind of admitting: look, we're not necessarily
6:14 gonna be a leader on the model game, but we, we might try to kind of use our unfair
6:18 advantage in terms of, like, our ecosystem.
6:20 Is that the right way of thinking about what they're trying to do here?
6:22 So I think it's, um,
6:24 it's their only play left, right?
6:26 And they, they, they had a lot of one-on-one sessions, post-event
6:30 announcements with different, uh, YouTubers, explaining and rationalizing
6:34 why they're missing out this year on AI.
6:37 They went through two different architectures, and the first one
6:40 could do a lot of the stuff they wanted to announce, but it was not quite
6:44 primetime ready for them to roll it out.
6:46 Right.
6:46 So their only other choice at this point was to make sure that we do subtle
6:49 things across the entire, uh, ecosystem.
6:52 Having, uh, Apple Intelligence move away from
6:56 becoming the oxymoron of the year
6:58 to actually starting to deliver some value.
7:01 And that is the future direction that they have to take.
7:03 And they need to come to a point where they, they either champion
7:08 the small models running on, uh, on device, and hence they can tout a
7:12 lot of security and stuff like that.
7:14 In the current phase, the Google Gemini, uh, the, the Gemma models, the 3 billion
7:18 parameter ones that run on device, uh, 4 billion parameters, those are actually
7:22 pretty good right now, and they're actually better than what Apple just released.
7:26 So Google is still ahead of the game, even on on-device.
7:29 It seemed like a lot of the stuff that they were announcing was catching up.
7:32 I was like, ooh, I can change the wallpaper in my iMessages, like
7:35 WhatsApp did, like, a decade back, or stuff like that. They
7:39 have to get away from these small incremental changes to something
7:43 that adds some real intelligence.
7:45 The amount,
7:45 there are very few companies where I would trust all of my
7:49 personal data, and Apple has definitely won that trust for me.
7:52 So I'm very comfortable with my kids having access to an Apple phone.
7:56 The parental, uh, security restrictions and stuff like that,
7:59 they've done a really good job around it, right?
8:01 So when we have, as a family, trusted Apple with so much, they have all
8:06 the data, and I would rather have them deliver personalized experiences to
8:10 me versus all the competitors, right?
8:11 It'll be a nightmare for me to even think about, say, DeepSeek
8:14 or some of the other competitors to have access to all of my data.
8:18 So we have made our choices. Email?
8:20 Gmail, right?
8:21 I'm okay with Google having images of my passport and my medical records.
8:25 All that goes through my emails and stuff, right?
8:28 Both Google and, and Apple have an
8:30 extraordinary opportunity to hyper-personalize intelligence
8:34 that OpenAI or AWS and stuff just cannot get close to.
8:38 But I think Apple, in this case, has, has to really double down.
8:42 They could not have afforded a gap year in AI, and that's exactly
8:46 what they're doing right now.
8:47 If I was in that, like, heads will, heads should roll on
8:50 these kinds of announcements.
8:52 Yeah, for sure.
8:52 Kaoutar, I think kind of one of the most interesting parts of this discussion, and
8:55 I think the last time we really talked about Apple, was the Daring Fireball,
8:59 kind of Gruber, takedown of Apple.
9:02 Um, and I think the narrative through all of this is, like, it's almost like
9:06 where Google was maybe six to 12 months ago, where everybody was like, how
9:09 can you not be winning in this space?
9:11 Chris just talked about this, like, incredible hardware
9:13 they have access to; Shobhit just mentioned all the incredible
9:16 trust they have access to.
9:18 What's your diagnosis of Apple?
9:19 Like, what's going wrong?
9:20 Like, by all rights, they should be crushing it.
9:23 Um, but, um, I'm curious, from your, your vantage point, like,
9:26 how you diagnose the issue.
9:27 Yeah, I agree.
9:28 I, I'm also, like, a bit disappointed in terms of, you know, this
9:31 underwhelming AI progress.
9:34 Um,
9:35 so, for Siri, you know, I think they delayed, you know, their Siri
9:39 overhaul, which is, you know, I think a big, significant point of disappointment.
9:43 I, I see that across many reactions, you know, online.
9:47 Uh, they had promised a much more, more versatile Siri.
9:50 But, you know, comments, you know, it's not ready.
9:52 I think they're trying to be very cautious, or maybe very, um,
9:56 kind of conservative in their approach.
9:59 Uh, is it because they're worried about, you know, they want this privacy-first,
10:02 uh, approach, where they're trying to be really careful and they don't
10:07 wanna, uh, kind of mess up things with the AI, uh, and the privacy
10:12 aspects that, you know, they take
10:14 so hard, you know, like Shobhit mentioned.
10:18 So we, we, I also trust their platform.
10:20 I love, you know, all the integration they have, you know, the, the control
10:24 that they have over the entire stack, you know, all the way, you know, from
10:26 the apps all the way down to the silicon.
10:28 It's a very strong position that they have.
10:31 But I feel they're overly cautious here in terms of their AI strategy, and, uh,
10:36 especially with all the hype happening,
10:38 uh, elsewhere, what's, what's, uh, holding them back?
10:42 I also don't understand, you know, what's, what's going on here, why they're
10:46 taking a very conservative approach.
10:48 And especially if you look also at the paper, I think we're gonna talk about it,
10:51 maybe, the paper that they released.
10:54 It was also timely, maybe, the timing of the paper right before
10:58 the WWDC, where they had all of this skepticism and so on about AI,
11:03 um,
11:04 kind of maybe to justify why they're very kind of incremental and
11:09 cautious and taking a slow move.
11:11 In the WWDC, they focused on a lot of other things other than AI.
11:15 They say, okay, there is AR, but there's all this other stuff, like
11:18 productivity and visualization and the integration and the liquid glass.
11:23 A lot of focus on that, but all other stuff that I think they're
11:27 positioning also is important, which is important, but
11:30 AI is also kind of going so strong elsewhere, and we need a
11:36 strong position from Apple on this.
11:38 So, I, this might have been just regrettable timing, who knows?
11:40 Right.
11:41 But basically, Apple also released a paper that got a lot of chatter this same
11:45 week, entitled "The Illusion of Thinking."
11:47 Um, and in some ways it's kind of a critique of what
11:49 reasoning models are doing.
11:50 And I, I think the, the narrative online, at least, was, well, it's weird that these
11:54 researchers don't even have confidence in the technology they're pushing.
11:57 I dunno, Chris, like, how you're gonna respond, but I think there is a question
12:00 of just, like, you know, does the company culture even really believe in the
12:03 technology that they're pushing here?
12:05 I like Apple's approach to this, and, and I, and I, I have to say, we are in a
12:11 hype cycle just now, and I know we'll get to the paper a little bit later, but
12:16 I, I really want Apple to focus on having a good platform, right?
12:22 The mobile devices, the iPad, the Mac, all coming together,
12:25 and I, I actually think that's the bigger story of WWDC, right?
12:29 Which is that
12:32 they've unified, at least on the numbering system for the operating system.
12:36 Right.
12:36 But actually, I think they are trying to make this a stack where I can easily
12:41 go from one device to another, and they have control of the full vertical device
12:45 and the platform, and they all connect, and, and we can mock the liquid glass.
12:51 Um, but actually they've done a very clever thing here, which is they've
12:55 built a design system that allows you to be able to, uh, have consistency
13:02 regardless of the form factor.
13:04 Now, that's gonna be important in the future, because actually we are gonna
13:09 wanna switch from our iPhone to our iPad to our Mac and not feel as if
13:12 we're on a different operating system.
13:14 So I, I weirdly think they're setting themselves up for the
13:18 future at a platform level.
13:20 And I also think they're setting themselves up from an AI perspective,
13:24 because they're focusing on the SDKs, the hooks, and how
13:27 people are gonna hook into that.
13:29 Yes, their AI
13:31 clearly is not matching up to, uh, the ecosystem and the platform
13:35 that they're building, but
13:37 they'll catch up on that.
13:38 So I, I would rather they fix all of these things anyway and prepare
13:42 for the future, and then we're gonna have great developer experiences.
13:46 I don't feel as if I'm missing any AI stuff on my phone.
13:49 If I need to go and speak to an LLM, I'll just bring up the ChatGPT app.
13:53 It's fine.
13:54 Right.
13:54 So I'm, I'm okay with the approach that they're taking, but full
13:58 disclosure, I'm a full-on Apple fanboy, so, you know, it's fine.
14:02 So,
14:02 so, Chris, um, I'll summarize what you just said.
14:05 iPad
14:06 is more like Mac.
14:08 Mac is now more like iPhone, and iPhone is more like Android.
14:13 Is that fair?
14:15 No, I am not, I'm not taking your Google-centric view on this show, but
14:20 with your love for Gemini, if you ask Gemini to code, then you might
14:24 as well get a three-year-old bashing at a typewriter, because you're gonna
14:28 get equivalent code coming out of it.
14:30 And, and, and I get that you love Gemma, and that fact,
14:33 and I do love Gemma as well.
14:34 But, but the, the reality is, you know,
14:37 these smaller models, Apple will catch up on, on that in time.
14:42 So I, I, I just think there's two approaches.
14:44 You can chase the AI part, and everybody is chasing that, and that's important, but
14:48 actually they're chasing their platform and trying to make that consistent.
14:52 And I don't think that's a
14:54 bad move.
14:54 So, uh, I think, uh,
14:57 Windows has done a better job at opening up the OS with more MCP-based
15:02 things that I can actually tap into.
15:03 So my apps that I'm building are actually being able to take actions.
15:07 I was hoping that, uh, Apple would create an ecosystem where we can
15:10 start to build apps on top of that.
15:11 I mean, we will, we'll see what comes out of this.
15:13 I think there's a huge potential that we can do there.
15:16 Second, my challenge right now is, if I use a different app,
15:20 say ChatGPT, on my phone, ChatGPT does not have the rich data about me.
15:24 I don't want ChatGPT to follow me and track my location preferences
15:28 and emails and all of that quite yet.
15:30 So instead of replicating the trust I built in Apple, or Gmail with Google and
15:35 stuff, or Google Photos, I don't want a third-party app to give me a service
15:39 that's not hyper-personalized to my needs.
15:41 Then there's a huge missed opportunity by Apple; hopefully next year.
15:44 Yeah, hopefully.
15:45 And I think also, you know, even, you know, I think in their
15:48 image, you know, playground, for example,
15:51 uh,
15:52 the, the features like the photorealistic image generation or the, you know,
15:56 these advanced generative features,
15:58 I think it would be nice, you know, for them to have that.
16:01 Uh, and of course on the Siri, uh, I agree with you, Shobhit, you know, because
16:06 of, you know, their hand on the data, the customization is, is gonna be a big leap.
16:11 You know, if you can integrate that all together with the, uh,
16:14 with their Apple Intelligence, with more advanced conversational
16:18 capabilities and generative AI features.
16:21 But hopefully, you know, I think they're delaying because they wanna get it right.
16:24 So that's, I think, what I'm thinking why it's taking, but I think Chris's
16:28 point on getting the platform right is also a very important point.
16:31 Yeah.
16:32 Well, we'll have to wait and see.
16:33 There's more coming, uh, and I mean, I think with Apple, they can just keep
16:36 taking more and more shots on goal, right?
16:38 I think that is one thing that people frequently miss: they can get it wrong
16:42 for a very long time before they get it right and still largely be okay.
16:50 Alright, I'm gonna move us on to our next segment.
16:52 Um, some interesting news coming out of OpenAI, uh, this week, both
16:56 of a product nature and also of a, a philosophical nature, I suppose.
17:00 And I kind of wanna talk about both stories together.
17:03 So one announcement, not unexpected, is OpenAI announcing the availability
17:08 of o3-pro, which is basically the more advanced version of their reasoning model
17:12 that has kind of taken the world by storm.
17:14 Uh, and also some kind of pricing updates there about how cheaply they're able to
17:18 offer a lot of their existing models.
17:20 Um, and then secondly, Sam Altman published this sort of essay called
17:24 "The Gentle Singularity," where he kind of, like, makes the argument
17:27 that we're already living through the singularity.
17:30 You know, it's not a future thing.
17:31 It's happening right now, and, you know, the world is gonna feel
17:34 very different over the next decade.
17:36 And maybe, you know, Kaoutar, maybe I'll throw it to you first: you
17:40 know, when you play with o3 and o3-pro, like, I guess I kinda
17:45 wanna ask the question, like, do you agree with Sam Altman that we're, like,
17:47 already living in the singularity?
17:49 Like, basically that, like, it's wild that we have
17:52 technologies that can do what o3-pro can do.
17:54 You know, how do you kind of think about that?
17:56 But, like, I think that's one reflection is, like, how much
17:58 we buy what Sam is telling us.
18:00 Yeah, very, a very good question here.
18:02 I think of course the o3 model has brought a lot of advanced features,
18:06 especially on the reasoning capabilities, the multimodalities, you know, I
18:10 think really nice features there,
18:12 uh, you know, especially focusing on the reasoning aspects.
18:16 But I think, uh, if you look at, you know, Sam's paper, the essay,
18:22 what I think is, it's, uh, you know, kind of optimistic, uh,
18:26 you know, he's overoptimistic.
18:29 Uh, he's, you know, softening the idea, you know, of this, uh,
18:32 runaway AGI; he's saying that
18:34 this won't be a Terminator, it'll be an assistant helping you solve hard problems,
18:39 the superintelligence that's gonna exceed our intelligence.
18:42 So it's a smart narrative shift.
18:43 You know, I think it's more politically and socially acceptable
18:47 than "machines can take over."
18:49 But, uh, I think, uh, maybe he's overall optimistic.
18:53 There are still a lot of challenges that we have to, uh, to solve, uh, and I think,
18:59 I think he's trying, especially, to align with OpenAI's, uh, vision.
19:04 You know, they, they're trying to deliver on the vision of the reasoning tools
19:07 that help humans think; it could reshape science, economics, and education.
19:12 But I think there are still a lot of issues around the misalignment,
19:16 around, you know, the impact of these things, you know, on jobs, around also
19:21 kind of the equal access and democratization of AI, because
19:26 this might also kind of intensify the division globally.
19:30 You know, who gets access to these technologies, and might privilege,
19:34 you know, certain people, or certain nations, versus others.
19:37 So there are a lot of other issues, I think, that are much deeper
19:41 than, you know, I think the nice
19:43 and utopian vision that Sam painted.
19:46 So I feel, you know, he's overoptimistic, but there are
19:50 a lot of issues that still need to be resolved.
19:51 Right.
19:52 You're almost like, it's still ahead of us in some sense.
19:55 Um, yeah, I mean, Shobhit, thoughts? Like, I don't know if you've played
19:57 with o3-pro, any impressions?
19:59 So the first thing I did
20:00 was look at Apple's paper that said, oh, these models can't think. They have
20:05 the full prompt for the Tower of Hanoi problem, where you move the disks over pegs.
20:10 I took the exact same thing, gave it to o3. Slam dunk, like,
20:14 it just crushed it right away.
20:15 So within a few days, OpenAI just came out swinging.
20:18 I was like, guys, stop making excuses.
20:20 AI is working really, really well.
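The Tower of Hanoi task the hosts keep returning to is attractive for puzzle-based evaluation precisely because a model's output can be graded mechanically: there is a known-optimal solver, and any candidate move list can be replayed and checked. As a minimal sketch in Python (my own illustration, not the prompt or harness from Apple's paper):

```python
def solve_hanoi(n, src="A", aux="B", dst="C"):
    """Optimal move sequence for n disks, as (disk, from_peg, to_peg) tuples."""
    if n == 0:
        return []
    return (solve_hanoi(n - 1, src, dst, aux)   # park n-1 disks on the spare peg
            + [(n, src, dst)]                   # move the largest disk
            + solve_hanoi(n - 1, aux, src, dst))  # stack the n-1 disks on top

def is_valid_solution(n, moves, src="A", aux="B", dst="C"):
    """Replay a move list, checking legality and that the tower ends on dst."""
    pegs = {src: list(range(n, 0, -1)), aux: [], dst: []}
    for _, frm, to in moves:
        if not pegs[frm]:
            return False  # illegal: moving from an empty peg
        disk = pegs[frm].pop()
        if pegs[to] and pegs[to][-1] < disk:
            return False  # illegal: larger disk on top of a smaller one
        pegs[to].append(disk)
    return pegs[dst] == list(range(n, 0, -1))

moves = solve_hanoi(5)
assert len(moves) == 2**5 - 1          # optimal length is 2^n - 1
assert is_valid_solution(5, moves)
```

A checker like `is_valid_solution` is what separates genuine problem-solving from plausible-looking move lists, which is the distinction this kind of evaluation probes.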
20:22 And if you think about this from an enterprise perspective, for us,
20:25 we are more, it doesn't matter that the world has somebody who won a Nobel
20:30 Prize or somebody who's, like, Einstein-level IQ and stuff like that, right?
20:33 As an enterprise, it's very important for me:
20:35 can I take that capability and apply it to a particular, uh, system?
20:39 Can I solve for a problem?
20:40 Can I deliver economic output?
20:42 And that requires us to have a hierarchy of intelligence.
20:46 Generally in our organizations, you have an obscenely overpaid CEO at the very
20:50 top who's an expert in multiple areas.
20:51 But by the time you get to the accounting department, the procurement
20:55 department, HR, and so on and so forth, you have a really small model that
20:58 went to a community college, that has been doing accounting really well,
21:01 and that's the right price point and the capability, intelligence, that we're
21:04 expecting for that particular job role.
21:06 So I don't think that o3 coming out with a brilliant model is something that's
21:12 gonna drastically change the narrative within the enterprises quite yet.
21:16 We can deliver an insane amount of value with the existing intelligence that
21:19 we already have, and it has to be more ROI-driven intelligence, versus o3-pro
21:24 being quite, quite expensive. But the fact that it just came and just rained
21:28 on Apple's parade: on one side, somebody saying, hey, AI ain't quite ready yet;
21:32 the other one is saying, hey, just,
21:33 come on, stop talking about, uh, like, this, this is crushing
21:36 through this very, very quickly.
21:37 Right.
21:38 Two ends of the spectrum of, uh, the same story.
21:40 That's right.
21:41 And I think, Chris, uh, Shobhit, I'm really glad
21:43 you brought up that Apple paper again,
21:44 'cause it kind of definitely looms large in my mind.
21:47 Right.
21:47 It definitely offers a challenge to what we're
21:51 seeing with the reasoning models.
21:53 And I guess, Chris, what's your take? Is Apple just wrong in what they're arguing here, given everything we're seeing out of OpenAI?

21:59 Well, I did exactly what Shobhit did. I took the paper and gave it to o3 Pro. That's what I love about this show — Tim gives us homework every time we appear. You need to read the Apple intelligence one; you need to read Sam Altman's blog post. And you're like, okay, I'm going to stick it through o3 Pro, I might as well test that out. Thirteen minutes later — thirteen minutes for it to read the paper. You're like, come on, I could have read that myself. So I love o3 Pro, but it does take a long time.

22:32 On the Apple paper: it's really interesting. Actually, if you ask o3 Pro about it, it says, yeah, the core of the paper is about right, but they've only gone with a single prompting strategy, et cetera — is that really going to pan out over time? So even o3 Pro was a little skeptical of the paper. I think there are some relevant points, though. Puzzle-based approaches — in the Apple paper they were doing things like the Tower of Hanoi — are important because there is a lot of contamination within these models. They have seen things before, so how do you distinguish between regurgitation and true thinking? But the flip side is, take code for example, and I'll go back to my favorite models, Claude 4 Opus and Claude 4 Sonnet.
23:38 Even without thinking, these models are able to just spit out unique code — end to end, thousands of lines — and it's just about perfect. Okay, you're going to have to change some stuff, et cetera, but it is incredible. So even if it's simulation, and they're saying the reasoning isn't going that far, the reality is it's able to do the tasks I want it to do, to some extent. And maybe that's the thing to focus on: is it providing value? The answer is yes.

24:14 Now, the other thing in that paper that I think is useful is where they say the models sort of collapse in on themselves after generating enough tokens. When reasoning about something, after a while the model goes into a bit of a spiral because it doesn't know the answer. And there's a fundamental flaw here, which is that it doesn't have access to tools in this setup, so there's no freshness of information coming in — it just spirals. So I understand the paper and I get where they're going with it. They're saying that if you take multiple samples, you can get — on the pass@k side of things — results very similar to what you get from reasoning. But I think the reality is that diversity of data then becomes more of a factor. So I'm somewhere in between on that paper.

25:08 Hmm. Yeah, that's actually a really interesting response. I'm reviving a phrase I don't think we use much anymore: people used to say these are stochastic parrots, right?
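Chris's point about multiple samples can be made concrete with the pass@k metric often used in code-model evaluation. Below is a sketch of the standard unbiased estimator (the numerically simple closed form): given n total samples of which c were correct, it estimates the probability that at least one of k randomly chosen samples is correct. The specific numbers in the example are illustrative, not from the episode.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k: the chance that at least one of k
    samples drawn without replacement from n attempts (c of them correct)
    solves the task."""
    if n - c < k:          # too few failures to fill k draws: certain success
        return 1.0
    # 1 - C(n-c, k) / C(n, k), i.e. 1 - P(all k draws are failures)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative run: 200 samples, 40 correct.
# pass@1 is exactly 40/200 = 0.2; pass@10 is much higher,
# which is why "just sample more" can rival single-shot reasoning.
print(pass_at_k(200, 40, 1))
print(pass_at_k(200, 40, 10))
```

The takeaway matches the discussion: enough independent samples can close much of the gap to a reasoning pass, at the cost of more inference calls.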
25:19 All they do is copy. And I guess my response is: I don't know if debating what intelligence really is, is very helpful. Because in some respects — okay, say it is copying — it's still incredibly useful for all the tasks I need, and it can have a huge impact even with that in mind. And Kaoutar, it looks like you might want to jump in, but I feel like that's one of the interesting tensions we have with this era of language models: they might just be copying in some respects, but it really feels pretty strongly like a kind of reasoning that solves problems, and I can't get too fussed about it.

25:58 Yeah. And if you look at this paper — the "Illusion of Thinking" paper — some of the things about the methodology they used: they cherry-picked these tasks. The logic of the puzzles that were chosen — even humans sometimes struggle with these things. So does that represent the full spectrum of reasoning tasks? And another thing: these artificial constraints. For example, in this paper Apple did not allow the models to use coding as a tool to solve the problems. For humans, and also for powerful AI models, writing code is a fundamental way to tackle complex logical problems. So these restrictions — restricting that capability — could artificially limit the models' performance and misrepresent their true reasoning potential.
26:58 There were also things like the token output limits: some of the assigned tasks reportedly exceeded the models' token output limits, essentially setting the models up for failure. And it's all-or-nothing grading — the strict "complete accuracy collapse" metric — which I think might be too harsh. So I feel there are some flaws in the methodology they used, and also some exaggerated claims or misinterpretation of the reasoning. For example, on the semantics of "reasoning": from the analysis I looked at, some people argue the paper's title and conclusions could be seen as inflammatory and misrepresent what LLMs are actually doing. While they might not reason in a human, conscious way, their ability to perform complex calculations, generate coherent logic, and use tools is a form of reasoning. Even if it's based on pattern matching, it is a type of reasoning, and it's going to evolve and improve over time.

28:20 So, Tim, I'm going to start a whole campaign: AI rights matter. We should not discriminate against AI. And the reason I say that —

28:30 I thought we started with o3-pro, we talked about the Apple paper, and now we're on Shobhit's campaign for AI rights. Here we go, Hermione Granger. Go on, Shobhit.

28:40 So here's my take on this. Recently Moderna combined the CHRO and CIO functions into one.
28:46 One of my CHRO clients got spooked, and she and I were sketching out what a trusted employee in a company looks like today — a human employee — and I was trying to extrapolate from there to what a trusted AI employee would be. When we're hiring people — newcomers to the company, fresh hires from the industry, or interns — we do a really good job of carving out the right set of tasks that we can delegate to an individual intern or new hire without having to give them step-by-step instructions and hand-hold them all the way through. We already have mechanisms for thinking about it that way. But somehow, when we get to an AI model performing at 95% accuracy, we're still not satisfied. I think we have to do a better job of appreciating the strengths and weaknesses of a new hire and allocating work accordingly, and the same thinking should extrapolate over to an AI model as well. There will be a place for a very high-end, PhD-level o3 Pro, and there will be a lot of places where we need a much, much smaller model. I'm running some multi-agent frameworks in production for clients. I need the orchestration agent to have the logic to think through how to delegate work down into smaller tasks. Right now, anything less than a 70-billion-parameter model is not working out well for me, so I'm struggling with how to influence the reasoning model to do things a certain way — and right now the bigger models are just completely slamming this through. We'll get to a point where, with smaller models in the enterprise environment, we'll be able to influence the reasoning and make it work.
30:17 There will be much, much smaller models — like interns — and we'll have a better sense of what accuracy to expect and how to measure the model against it.

30:24 And yeah, I think we're going to get asked about these discussions more and more. There's this weird line between "what is the model actually doing?" and, to your point, this battle between pragmatism and needing to know — I feel like that's a classic theme in the AI space.

30:47 I'm going to move us on to our third segment. One of the things I keep noticing every time one of these AI headlines comes out is that the transaction dollar amounts just keep getting bigger and bigger, and this week was no exception: a huge $15 billion transaction announced between Meta and Scale AI, which, if you're not aware, is one of the big players in the data annotation space. So this is a massive transaction. Chris, maybe I'll throw it to you. The question with all of these is: $15 billion — why is Meta so interested?

31:22 I think they want to try to get ahead in some way, shape, or form. The reality is, I love the Llama models — I think Llama 4 is great, et cetera — but I think there's a little frustration that they're not ahead of Anthropic, they're not ahead of Google, and they're not ahead of OpenAI at the moment. That said, they are great models and some of my go-to models all the time. So I think this is: let's put another bet in the ring, run something else in parallel, and hopefully get to where they want to go.
32:03 And the reality is, this is being touted as a winner-takes-all market. I'm not convinced it is, but you do have to spend that kind of money to be in the game — it's fierce competition. So if that means attracting new talent, bringing it in, and setting up a lab focused on superintelligence, I don't think that's a bad idea. There are probably two competing things going on at the moment: they need to get AI into their products and out to their consumers, and that's different from setting themselves up for the future. By splitting this up a little and saying, okay, you're going to be focused on superintelligence, and we're going to continue to get Llama out to folks and into our platform so people can use it and gain productivity benefits — then you don't have competing goals and intentions, and the teams can focus on different things: one on the research element, one on the product. So I don't think it's a bad thing.

33:14 Well, on that — and Shobhit, I think you want to jump in — what's curious about this transaction is that in the past, some of the really big deals have been built around technical leaders in the space. Noam Shazeer, right? A big acquisition built around him. And obviously the key leadership of OpenAI has been involved in all of these big deals. What's interesting about Wang is that he's not necessarily a noted machine learning expert or pioneer, Shobhit.
33:45 I think that almost suggests there are other things leading Meta to this acquisition. I don't know if you agree with that analysis.

33:50 So, Meta is trying to make sure that LlamaCon does not become a WWDC.

33:58 Sure. But Alexandr Wang and Lucy Guo co-founded Scale AI, and both of them are brilliant — among the youngest billionaires. He's worth four billion dollars plus at this point, and he's 28 years old. What a phenomenal story. I would argue he has done an exceptionally good job. He's been a math prodigy and a coding genius, so the street cred is there, and he comes from a very talented family as well. I think he has done it the right way. We've worked with Scale AI quite a bit — phenomenal assets. They do a really good job at creating synthetic data, and they've done a lot around getting data ready for training these models. There are very few competitors who can actually do what Scale AI does — there are maybe a dozen more players we look at and work with — but Scale AI has definitely done a really, really good job. And this was definitely an acqui-hire: there's no other way to get Alexandr to come and join. Meta has recently gone through a couple of changes in its org structure in its machine learning, generative AI, and AGI groups, too. They usually do not hire high-profile people into leadership roles; this is an exception, and it's a very senior role they're bringing an external person in for.
35:12 It just doubles down, I think, on what Chris was saying about where Meta is with respect to the market today. With OpenAI on one end, you have Apple saying, "Hey, this is all hype," and on the other end you have Sam saying, "Hey, we are already past the singularity." So Meta, somewhere in the middle, is saying: which side should I go with? I'm going to follow where the market is going. And the $15 billion — if you look at the overall CapEx Meta has committed to spending this year, they're looking at about $60 billion. They've been spending half of what AWS and Microsoft spend on CapEx every year for the last two or three years. So this year Meta is doubling down and catching up. Even if they spend about $60–70 billion, this $15 billion will come from that portion. Companies are getting very creative with how they handle talent, IP, and data. Scale AI, I think, is a really good acquisition — or at least a 49% acquisition — for Meta. They need it. We're still waiting for Behemoth, still waiting for a really good reasoning model from Meta. There's a long way they need to go, and they have to shake things up a bit.

36:17 Yeah, for sure. Kaoutar, final thoughts: if you had $15 billion, would you use it to recruit Alexandr Wang to your company?

36:24 Well, maybe I might not be as bold as Meta, but overall I think it's a strategic move, and it also shows a growing trend: infrastructure matters as much as models — training data, evaluation, human feedback. This is where the AI wars are being fought right now. So infrastructure plays in the AI race are becoming very important.
36:49 These investments, I think, highlight the broader industry trend: controlling the underlying data and the infrastructure for AI development is becoming as important as — if not more important than — just developing new models. So I think Meta is really betting on securing its foundational AI supply chain with this acquisition, which is very strategic and the right move for them. Of course, they're betting a lot, and they're also saying they want to secure high-quality data for AGI, which is one of their big bets for the future. This investment is about trying to secure that future.

37:33 Yeah, that's a great response, Kaoutar. I don't know if I would be as bold as Zuck to spend $15 billion. We'll keep an eye on how this deal goes and how the partnership evolves. As always, this was so good. Kaoutar, Shobhit, Chris, thanks for joining us, and stay tuned for our next segment: an interview with Oliver Dial, which I recorded earlier today, focusing on some recent announcements from IBM Quantum.

38:01 So, as I promised at the very top, we wanted to make some time at the very end to talk about a super exciting announcement coming out of IBM on quantum, and in some ways we have the perfect person to talk about it. Oliver Dial is joining us on the show — he's the CTO of IBM Quantum. Thanks for taking some time today; really appreciate it.

38:17 Absolutely, no problem.

38:18 You know, on MoE we don't usually talk too much about quantum, but we keep an eye on it because our scope is emergent technologies in general.
38:25 Unfortunately, most of the airtime is always taken up by AI, but we always want to keep an eye on what's happening in quantum, in part because there are just really exciting things happening. So, Oliver, really excited to have you on the show. The thing I always hear about quantum is people saying, ah, it's never going to happen — people have been talking about quantum forever. And I always push back: look, in the last 24 months there have been some really major leaps in quantum that make it feel like, even though everybody's complaining about how long it's taking, it might actually finally be around the corner. So I wanted to bring you on the show to talk first about the announcement that came out today from IBM on fault-tolerant quantum computing. Maybe let's just start with the problem. Oliver, can you walk us through why fault rates have been such a big problem for getting quantum to work practically?

39:24 Yeah. Well, at the end of the day, you're trying to do computation with some of the most sensitive systems anyone has ever made: you're trying to manipulate single quantum states and use them to store information and compute. And so the error rates we have with today's hardware are orders of magnitude larger than what you'd have on a classical computer — many orders of magnitude. In fact, for our two-qubit gates, which is what we usually talk about when we talk about error rates, our current failure rate is about one in a thousand.
39:53 So imagine trying to do computation where, one time in a thousand, the computer makes a mistake.

39:59 Yeah. And those errors are caused by things like being too close to an electrical field, right? It could be anything.

40:05 The biggest one for our qubits: we actually store the one or the zero in a single microwave photon, so anything that can absorb microwave energy can suck that one away and turn it into a zero.

40:16 Which is kind of like everything, right?

40:17 Everything, yeah — everything on the planet. The worst thing for us is surfaces, because anywhere there's a surface, there's a place where you can have a thin layer of oxide, contaminants, all kinds of things that absorb microwave energy. So you're trying to do computation with this fantastically high error rate. And the really amazing thing to me is that even today, despite that high error rate, we're already doing computations a classical computer can't run. That's what we call the era of utility, and the reason is that we have a lot of ways to statistically remove the impact of those errors. Today we play a game where we run a quantum circuit many, many times and statistically remove the errors to get an accurate answer out. Using that, we're starting to tackle some really interesting problems in chemistry, optimization, and materials science.

41:06 And you think quantum's already here, is kind of what you're saying.

41:10 We're saying you're probably going to see quantum advantage — doing something better, faster, or cheaper than a classical computer — by 2026. And actually, I think it's going to be even sooner than that.
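The one-in-a-thousand gate failure rate Oliver mentions can be put in perspective with a quick back-of-envelope calculation. This sketch assumes independent gate errors — a simplification, since real device noise is more structured — but it shows why raw error rates dominate the conversation.

```python
# Back-of-envelope: if each two-qubit gate fails with probability p,
# and errors are independent, a circuit of g gates runs cleanly with
# probability (1 - p)^g.
p = 1e-3  # roughly the one-in-a-thousand failure rate quoted above

for gates in (100, 1_000, 10_000):
    clean = (1 - p) ** gates
    print(f"{gates:>6} gates -> {clean:.2%} chance of an error-free run")
```

At a thousand gates the clean-run probability is already down near 37%, and at ten thousand gates it is essentially zero — which is why repeated runs plus statistical error mitigation are needed in the "utility" era.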
41:20 And you were talking about this cadence of news releases. That is going to be such an all-boats-rise moment, because it will say: okay, wait, this isn't something that's coming next month or next year — this is actually happening today, now. But the big thing with all the techniques we use today is that they have an overhead that scales exponentially with the size of the problem. So in the long term it's a bit of a losing proposition: we have this computer that's exponentially faster for some jobs, but we've added an exponential overhead.

41:53 Is it right to say the bottleneck is computation? You're basically saying we've got this noisy process and we de-noise it using computation, but it just can't scale the way we want it to.

42:05 Exactly. So when we talk about fault tolerance, we bring in another set of tricks: error correction. If you're doing communication between classical computers, you might include parity bits so you can detect whether the data got through correctly and retransmit it. We can pull the same kinds of tricks with a quantum computer. These error-correcting codes are really nice because, instead of correcting the errors after the computation is done — which is where that exponential scaling came from — you're actually correcting the errors on the fly as you do the computation. And so the overhead is no longer exponential.
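The classical parity-bit trick Oliver uses as an analogy can be sketched in a few lines. This is the textbook even-parity check, shown here only as an illustration of the idea that a redundant check bit exposes a single flipped bit:

```python
def add_parity(bits: list[int]) -> list[int]:
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word: list[int]) -> bool:
    """A single flipped bit makes the count of 1s odd, exposing the error."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(parity_ok(word))   # True: no corruption, the check passes

word[2] ^= 1             # flip one bit "in transit"
print(parity_ok(word))   # False: the check fails -> retransmit
```

Quantum error-correcting codes generalize this idea: syndrome measurements play the role of parity checks, run continuously while the computation is in flight rather than after it finishes.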
42:38 So if you can crack error correction in a way that lets you get to this fault-tolerant regime, then you can really do any computation you want — the cost of the computer scales only logarithmically, roughly, with how large you want the circuit to be. Now, not all error-correcting codes are the same. A lot of people have talked about using something called the surface code, which is a really neat code. You've got to remember that, at the end of the day, these qubits aren't virtual objects. They're physical things on a chip, with actual little superconducting wires connecting them together — when you look at these devices, they're really pretty. And those superconducting wires: if you're building an error-corrected chip, you really want them to exactly match the parity checks you want to run, because that means you can run those parity checks efficiently without introducing additional layers. For the surface code, the layout you want looks like a checkerboard — the qubits sit on a square lattice with nearest-neighbor connections. That's really neat: as a semiconductor manufacturing person, you can look at this and say, yes, I can see how to build it; it's not too bad. The problem with the surface code is that you're talking thousands or tens of thousands of physical qubits per logical qubit. So although it looks easy, it's not practical — it's too expensive, it's too large. To put that in perspective, the largest devices we've ever made are about a thousand qubits.

44:05 Okay, so you're priced out of doing this very quickly.

44:09 Exactly.
44:09 And those are also, by the way, the largest devices anyone has ever made.

44:14 Sure, right.

44:15 So part of this announcement is that we have a new code we're calling the gross code — and we call it that not because it's disgusting, but because dozens keep showing up whenever you talk about it, and a dozen dozen is called a gross. The gross code breaks the rule we had in the surface code: you can no longer lay the qubits out in a checkerboard. In fact, each qubit, in addition to that checkerboard, has two long-range connections that go to other qubits far, far away on the device — think of it like a highway overpass across the chip. What those let us do is create a code that is much more efficient. With the gross code, with about 300 physical qubits we can encode 12 logical qubits — so the qubits come in 12-packs, which is kind of neat. That's an order of magnitude fewer qubits than you need with the surface code to build devices like this.

45:07 That's fascinating, because the way you described the problem initially, my layman's view was: just shield it so you don't get so much noise. And what you're saying is that the solution is a physical solution, but it's aimed at making the computation process much more efficient. In some ways you're not trying to solve the fundamental problem that these are really delicate systems, subject to all sorts of interference from the environment.

45:34 I mean, we do want to solve the fundamental problems too, as well as we can.
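The overhead numbers quoted here can be compared directly. This sketch just does the arithmetic on the figures from the interview — roughly 300 physical qubits for 12 logical qubits with the gross code, versus a thousand-or-more physical qubits per logical qubit at the low end of the surface-code estimates:

```python
# Rough overhead comparison using the numbers quoted in the interview.
gross_per_logical = 300 / 12       # ~25 physical qubits per logical qubit
surface_per_logical = 1_000        # conservative low end of "thousands"

print(gross_per_logical)                         # 25.0
print(surface_per_logical / gross_per_logical)   # 40x fewer, at the low end
```

Against the "tens of thousands" upper end of the surface-code estimate, the ratio is correspondingly larger — consistent with Oliver's "order of magnitude" framing.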
45:38 What these error-correcting codes do is essentially exponentiate your error rate. The gross code is what we call a distance-10 code: basically, the logical error rate goes like your physical error rate to the fifth power. And that's really neat in the sense that if we make our physical error rate 10 times smaller, the logical error rate gets a hundred thousand times lower. You get this enormous lever arm on the physical error rate. So the incentive to keep making the underlying physical hardware better is still there — if anything, it's even stronger. But there are fundamental limits that we think are going to be really hard to exceed. You're never going to see a physical error rate of 10 to the minus 10 coming out of these systems; there are just too many sources of error.

46:21 Well, so one way of thinking about this: we've got existing computers — traditional computers, if you will — that can solve a certain landscape of problems. And what you have is quantum now catching up. You look across this landscape and more areas of the map are filled in by "oh yeah, quantum could do that" — and even new places on the map: "you could never solve that problem with a traditional computer, and quantum can do it now." Do you want to give our listeners a bit of intuition? With this kind of efficiency gain, I assume you have certain problems in mind where you say, oh, suddenly we could start to crack these. Maybe that makes it a little more tangible — to talk about where this goes.
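Oliver's "enormous lever arm" can be checked with one line of arithmetic. This sketch takes the fifth-power scaling at face value, with constant prefactors omitted, and the specific physical error rates chosen purely for illustration:

```python
# For a distance-10 code, the logical error rate scales roughly like the
# physical error rate to the fifth power (constant factors omitted).
def logical_rate(p_physical: float) -> float:
    return p_physical ** 5

base = logical_rate(1e-3)       # today's ~one-in-a-thousand gate errors
improved = logical_rate(1e-4)   # physical rate made 10x smaller

print(base / improved)          # the lever arm: ~100,000x lower logical rate
```

This is why a 10x hardware improvement translates into the hundred-thousand-fold logical improvement quoted in the interview.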
47:00 Suddenly you can approach someone and say, hey, maybe quantum applies to what you're doing. I'm curious what you think those problems are.

47:06 So today we're just barely beginning to be able to do small chemistry problems — a couple of atoms, a key part of a molecule — in a way that's compelling. With these fault-tolerant computers, really complex chemistry problems in catalysis and organic chemistry become possible, as well as a lot of really interesting problems in materials science: studying superconductivity, studying exotic states of matter. One of the original motivations for quantum computers, you've got to remember, was ultimately simulating quantum systems. But I think the thing that excites most of our clients the most is optimization. That's a place where having a wider logical register — more qubits that work better — is just a huge benefit, because with optimization problems, nobody really cares if you can solve a small one.

48:00 Because in practice they're not small.

48:02 Yeah, exactly. So even as we're working on better algorithms so we can solve these problems on near-term hardware, it's this fault-tolerant hardware that's really going to begin to unlock this space. When we're talking to financial institutions, when we're talking to logistics companies, that's where we get a lot of interest.

48:20 Yeah, I think that's one of the wonderful things about the current generation of all these emerging technologies. I joke about it all the time in the AI space: you've built a machine intelligence, right?
[48:33] And then you're like, actually, the primary application is that we use it to optimize customer service. I guess in some ways quantum almost has a very similar quality, right? You're literally playing with the fundamental building blocks of reality, and then it's like, we've got to solve the traveling salesman problem. That's kind of where you end up.

Yeah. I'm not sure the traveling salesman problem has that much money behind it. There are really good approximate answers there.

Yeah, that's right. Well, that's great. I guess maybe the last question is: now that you've worked on this, what's next? What do you think is the next big challenge the community's focusing on? Maybe give us a flavor of what we might expect in the next six to twelve months.

So the first big one is quantum advantage. We are going to see it, and like I said, it's just going to be the gunshot that gets heard around the world on quantum. The other one is, now that we've announced this roadmap to fault tolerance, I think we're going to start to see a lot of other competitors in the field updating their roadmaps to use some of the same ideas. And so we're really going to begin a second stage, this race to fault tolerance.

It's like game on, basically. For you guys, it's game on.

And remember, we have a plan. It's really neat that, in principle, we have solutions to all the problems that you need to solve to build a fault-tolerant quantum computer. To the extent that we're actually starting to build a data center in Poughkeepsie, New York, to house these things, that's where we are in the plan. But every stage of that can get better. We have a bare minimum.
[50:01] We are continuing to do research. We expect every aspect of our plan to continue to improve, and for these machines to only get more capable than what we're anticipating today.

I think maybe just one last thing, to end where we started. I've been hearing a lot from quantum people about how this is very soon, going to come in the next 12 months, and this is going to be a shot heard around the world. I don't want to be too mean, but why should we believe you that this time is different, right? What about your roadmap makes it an actual, practical thing?

Well, if you think about the way we build really complicated machines, the key is to be able to build them out of an individually testable and composable unit cell. Blue Jay is going to be about a hundred thousand physical qubits. If I needed to build you a hundred-thousand-physical-qubit chip, we would need aliens to come down to Earth and give us that technology. But the great thing about the gross code is that the module size is actually pretty small. Once we've added all the extra qubits we need for computation and for communication, we can build a module of about 500 qubits, and that module we can then step and repeat and connect together to build the system as a whole. And so the great thing about this design is that we don't need to build a hundred-thousand-qubit system. We need to build a 500-qubit system. We need to be able to manufacture it, we need to be able to test it, and we need to be able to connect a lot of those together. And so it's really that modularity that is bringing this into reach, and the fact that the module size is no bigger than what we've built in the past. Right? We built Condor.
[51:32] We have built a thousand-qubit device that was only a little bit less complex than what we need to build to make this code happen. And that's why we're saying it's practical.

That's awesome. Well, great. I'm really looking forward to having you back on the show, Oliver, and thanks for taking the time today.

No problem at all. Thank you.

Thanks to all our listeners for joining us. If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and podcast platforms everywhere, and we will see you next week on Mixture of Experts.
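The modularity argument near the end of the conversation reduces to simple arithmetic. A minimal sketch, using the round figures quoted in the episode (a roughly 100,000-physical-qubit Blue Jay built from roughly 500-qubit step-and-repeat modules; these are the guest's approximations, not published specifications):

```python
# Back-of-the-envelope check on the modular build described in the episode.
# Both figures below are approximate numbers quoted in the conversation.
import math

TARGET_PHYSICAL_QUBITS = 100_000  # approximate Blue Jay scale quoted by the guest
MODULE_QUBITS = 500               # approximate testable unit-cell size quoted by the guest

modules_needed = math.ceil(TARGET_PHYSICAL_QUBITS / MODULE_QUBITS)
print(modules_needed)  # 200
```

So the manufacturing problem becomes building, testing, and connecting a couple hundred copies of a module comparable in complexity to devices already built (Condor), rather than fabricating one hundred-thousand-qubit chip.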