WWDC, AI Wars, and Quantum Advances
Key Points
- The hosts debate Apple’s recent WWDC announcements, questioning the rushed design changes and speculating whether the new “glass” OS will become a “Windows Vista‑like” flop.
- They analyze Meta’s strategic acquisition to secure its AI supply chain, emphasizing that infrastructure—training data, evaluation, and human feedback—is now the primary battlefield in the AI wars.
- Tim Hwang assigns “homework” on Apple Intelligence and Sam Altman’s latest blog, highlighting how quickly listeners dive into new papers and test emerging models like O3‑Pro.
- The episode previews upcoming discussions on OpenAI developments, a major Scale AI partnership, and a breakthrough quantum‑advantage announcement slated for the show’s closing segment.
Sections
- Untitled Section
- Apple AI Rollout: Hype vs Reality - The speaker critiques Apple’s evolving partnership with OpenAI, notes Siri’s lag behind Meta and Google, comments on new “liquid glass” UI and ultra‑dark mode features, and highlights excitement for on‑device large language model support.
- Apple’s Last AI Play - The speaker argues that after abandoning a larger, unready model, Apple’s only remaining strategy is to subtly integrate small on‑device AI models across its ecosystem, a move that still trails Google’s more advanced Gemini models.
- Disappointment with Siri's AI Strategy - The speaker laments Apple's delayed Siri overhaul, attributing it to an overly cautious, privacy‑first approach despite the company's strong hardware‑software integration.
- Unified Design System Across Devices - The speaker praises a platform's design system for delivering consistent UI across iPhone, iPad, and Mac, positioning it for future AI integration and better developer experiences, while acknowledging personal Apple fandom.
- Apple’s Missed AI Opportunity - A speaker criticizes Apple’s delay in releasing advanced generative AI features—such as photorealistic image creation and a more conversational Siri—arguing that the hold‑up may stem from a desire to perfect the platform despite the missed market chance.
- AI Optimism vs Real-World Challenges - The speaker argues that while leaders like Sam Altman portray an overly optimistic, utopian AI future, significant issues—misalignment, job impact, unequal global access, and concrete technical shortcomings such as models failing basic tasks—remain unresolved.
- Debating Apple’s AI Claims - The speakers critique Apple’s intelligence paper, testing it with O3 Pro, questioning its prompting strategy and emphasizing the importance of puzzle‑based evaluations.
- Debating LLM Reasoning vs Copying - The speaker examines whether large language models are just stochastic parrots that copy diverse data or demonstrate genuine problem‑solving reasoning, arguing that the utility of the models matters more than a strict definition of intelligence.
- Debating AI Rights and Trust - The speaker argues that AI displays reasoning abilities, launches a campaign against AI discrimination, and examines how companies might treat AI models as trusted employees comparable to human hires.
- Meta's $15B Scale AI Bet - The speaker explains Meta's massive $15 billion acquisition of data‑annotation firm Scale AI as a strategic move to close the gap with rivals like Anthropic, Google, and OpenAI, secure talent, and accelerate its AI research pipeline.
- Young Billionaire, Scale AI, Meta Hire - The speaker lauds a 28‑year‑old billionaire’s talent, praises Scale AI’s synthetic‑data capabilities and recent acqui‑hire, and points out Meta’s unusual external senior hiring amid its AI organizational shifts.
- Discussing AGI Investments and IBM Quantum Preview - The panel talks about a massive $15 billion bet on high‑quality AGI data, then previews an interview with IBM Quantum CTO Oliver Dial to discuss recent quantum computing announcements.
- Surfaces, Errors, and Near‑Term Quantum Advantage - The speaker explains how surface‑induced imperfections cause high error rates in quantum processors, how statistical mitigation allows useful computations in chemistry, optimization, and materials science, and predicts a demonstrable quantum advantage over classical computers within the next few years.
- Introducing the Gross Quantum Code - The speaker contrasts the surface code’s simple checkerboard layout—requiring thousands of physical qubits per logical qubit and thus impractical for current ~1,000‑qubit devices—with a new “gross” code that abandons the nearest‑neighbor constraint by adding long‑range connections, enabling a far more efficient error‑correction scheme.
- Quantum Computing: New Problem Frontiers - The speakers discuss how quantum computers are expanding the map of solvable problems—from modest chemical simulations to complex materials science and, most compellingly for clients, large‑scale optimization—by offering efficiency gains impossible for classical machines.
- Questioning the Quantum Roadmap - The speaker outlines a modular plan for building large‑scale quantum computers, acknowledges ongoing research and scalability challenges, and pushes back against hype that such systems will be practical within the next year.
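The qubit-overhead claim in the surface-code section above can be made concrete with back-of-the-envelope arithmetic. A short Python sketch, assuming the commonly cited rotated-surface-code cost of 2d² − 1 physical qubits per logical qubit at code distance d (exact figures vary by architecture, so treat the numbers as illustrative):

```python
def surface_code_physical_qubits(distance: int) -> int:
    """Rotated surface code at distance d: d*d data qubits
    plus d*d - 1 ancilla (measurement) qubits per logical qubit."""
    return 2 * distance * distance - 1

# Higher distance suppresses logical errors further, but the qubit bill grows fast:
for d in (11, 21, 31):
    print(f"distance {d}: {surface_code_physical_qubits(d)} physical qubits per logical qubit")
```

At distance 31 this comes to 1,921 physical qubits for a single logical qubit, which is why a roughly 1,000-qubit device cannot host even one well-protected logical qubit under this scheme, and why codes with long-range connections, like the "gross" code discussed in the episode, are attractive.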
Full Transcript
**Source:** [https://www.youtube.com/watch?v=Y5yiQDnTimU](https://www.youtube.com/watch?v=Y5yiQDnTimU)
**Duration:** 00:52:02
Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=0s) Untitled Section
- [00:03:14](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=194s) Apple AI Rollout: Hype vs Reality
- [00:06:20](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=380s) Apple’s Last AI Play
- [00:09:27](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=567s) Disappointment with Siri's AI Strategy
- [00:12:36](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=756s) Unified Design System Across Devices
- [00:15:41](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=941s) Apple’s Missed AI Opportunity
- [00:18:43](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1123s) AI Optimism vs Real-World Challenges
- [00:21:47](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1307s) Debating Apple’s AI Claims
- [00:24:50](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1490s) Debating LLM Reasoning vs Copying
- [00:27:57](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1677s) Debating AI Rights and Trust
- [00:31:00](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=1860s) Meta's $15B Scale AI Bet
- [00:34:04](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2044s) Young Billionaire, Scale AI, Meta Hire
- [00:37:13](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2233s) Discussing AGI Investments and IBM Quantum Preview
- [00:40:16](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2416s) Surfaces, Errors, and Near‑Term Quantum Advantage
- [00:43:29](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2609s) Introducing the Gross Quantum Code
- [00:46:35](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2795s) Quantum Computing: New Problem Frontiers
- [00:49:38](https://www.youtube.com/watch?v=Y5yiQDnTimU&t=2978s) Questioning the Quantum Roadmap
I think WWDC would start being called “Why Was Design Changed?”
I really want Apple to focus on having a good platform, right?
The mobile devices, the iPad, the Mac all coming together, and I, I actually
think that's the bigger story of WWDC
So I think Meta is really betting on securing its foundational AI
supply chain with this acquisition, which I think is, is very strategic
and it's the right move for them.
Infrastructure matters as much as models: training data, evaluation, human
feedback. And I think this is where the AI wars are being fought right now.
Tim gives his homework every time we appear.
He is like, you need to read the Apple Intelligence one.
You need to read Sam Altman's blog post. And you're like, okay,
I'm gonna run it through o3-pro.
I might as well test that out.
13 minutes later.
13 minutes later for it to read the paper.
You're like, come on, dude.
I could have read that myself.
You're saying like quantum's like
already here.
It's kinda what you're saying.
We're saying you're probably gonna show quantum advantage, so
doing something better, faster, cheaper than a classical computer,
by 2026, and I actually think it's gonna be even sooner than that.
All that and more
on Mixture of Experts, A Think podcast.
I am Tim Hwang, and welcome to Mixture of Experts.
Each week, MOE brings together the sharpest team of researchers,
engineers, and product leaders that you'll find anywhere in podcasting
to discuss and debate the biggest news in artificial intelligence.
Today I'm joined by a rockstar cast, Chris Hay, distinguished engineer
and CTO of customer transformation, Shobhit Varshney, who is head
of data and AI for the Americas.
And Kaoutar El Maghraoui, principal research scientist
and manager for hybrid AI cloud.
As always, we have a ton to talk about.
We're gonna talk about a bunch of news out of OpenAI, a huge deal for Scale AI,
and you should stay tuned for the end where we're gonna have a special
segment that's gonna focus specifically on a very interesting announcement
that just came out about quantum.
Before we get to all that, I want to talk about, obviously, Apple's
WWDC, which is their annual developer conference, the big kind of showcase
for what Apple is going to do.
And there were a bunch of announcements, uh, that kind of came out.
But first I want to just go around the horn and get everybody's opinion on,
you know, what, coming out of the keynote, are we gonna
still be talking about in six months?
Shobhit?
I'm curious what you think.
I think WWDC would start being called “Why Was Design Changed?”
I think this may end up being a Windows Vista moment, but I've been rocking
the new glass OS for a while now, and it's just not quite ready yet. And I am
very confused: when the glass design is not quite ready,
they're comfortable shipping that, but
they're saying that they can't ship AI because AI is not quite
ready. So they have to pick a lane.
Yeah, for sure.
Kaoutar, what's your reflection?
Anything that we're gonna be remembering this WWDC for?
I think there's still highly competitive pressure from OpenAI,
Meta, Google, Samsung, especially kind of emphasizing
Apple's late AI entry.
It puts them under a lot of pressure.
And, you know, questions around: did Apple's partnership
with OpenAI evolve or backfire?
How does Apple's AI stack, you know, compare to what Meta and Google are doing
with their Llama, Gemini, and so on?
And especially, another point of concern, and disappointment for many,
is the overhaul around Siri, and, you know, the AI advancements
that they're claiming here, which seem to still be lagging behind
the competition out there.
So I also agree with Shobhit around the liquid glass
interface; I think it is a big change.
The final dig in there is that making it pseudo-transparent
makes any potential flaws more immediately apparent,
and maybe even more controversial here.
Yeah, for sure.
And uh, last but not least, uh, Chris, what did you think of, uh, the show?
Even darker dark mode.
In fact, I didn't realize how un-dark my dark mode was, and then I was like,
oh, I can get dark mode even darker.
That's what I want.
So thank you, Apple, because I love my dark mode, and the even
newer, even darker, darker dark mode is gonna be awesome.
So I'm excited about that.
Apple, it's just welcome news.
Yeah, but on a more serious note, I actually think that the ability
to access LLMs on-device is gonna be huge as part of the framework.
Because otherwise, there would've been a risk of
loads of apps trying to install small models onto your phone, et cetera.
And I know some people have been doing that already.
So actually, just ubiquitous access to on-device LLMs, and being able to
easily hook that into your own applications.
We can argue about how good the AI on those devices is, but the fact
is Apple does have the best hardware.
Apple silicon is incredible.
So I'm excited to see new and interesting applications via these SDKs,
and I think that's a little bit of a glimpse into the future, because
I think we're a little bit too low-level at the moment, and Apple's
gonna lead a little bit of the way on framework access.
Yeah, for sure.
And actually that's a great place to kind of pick up because I think that,
you know, I was looking at, say, The Verge coverage of WWDC, and there's
like a long list of announcements, but in pretty stark contrast to
where they were at the last WWDC.
Like the AI stuff is actually weirdly kind of not as prominent
or not as talked about.
Certainly people are complaining a lot more about the UI/UX designs, but if
there's anything that Apple seems to be doing on the AI side, it's
opening up new surfaces for the AI to interact with.
Right?
So it's one, any developer can go and play with their models.
There's also kind of this new idea that like Apple Intelligence will
be able to see what's on your screen and kind of interact with it, which
I think is also pretty interesting.
And so I guess, Shobhit, maybe a question for you is:
Apple seems to almost be kind of admitting, look, we're not necessarily
gonna be a leader on the model game, but we might try to kind of use our unfair
advantage in terms of our ecosystem.
Is that the right way of thinking about what they're trying to do here?
So I think it's their only play left, right?
And they had a lot of one-on-one sessions and post-event
announcements with different YouTubers, explaining and rationalizing
why they're missing out this year on AI.
They went through two different architectures, and the first one
could do a lot of the stuff they wanted to announce, but it was not quite
primetime-ready for them to roll it out.
Right.
So their only other choice at this point was to make sure that they do subtle
things across the entire ecosystem, having Apple Intelligence move away from
being the oxymoron of the year to actually starting to deliver some value.
And that is the future direction that they have to take.
And they need to come to a point where they either champion
the small models running on-device, and hence they can tout a
lot of security and stuff like that.
In the current phase, the Google Gemma models, the 3-billion- and
4-billion-parameter ones that run on-device, those are actually
pretty good right now, and they're actually better than what Apple just released.
It seemed like a lot of the stuff that they were announcing was catching up.
I was like, ooh, I can change the wallpaper in my iMessages, like
WhatsApp did like a decade back, or stuff like that. They
have to get away from these small incremental changes to something
that adds some real intelligence.
There are very few companies that I would trust with all of my
personal data, and Apple has definitely won that trust for me.
So I'm very comfortable with my kids having access to an Apple phone.
The parental security restrictions and stuff like that,
they've done a really good job around it, right?
So when we as a family have trusted Apple with so much, they have all
the data, and I would rather have them deliver personalized experiences to
me versus all the competitors, right?
It'll be a nightmare for me to even think about, say, DeepSeek
or some of the other competitors to have access to all of my data.
So we have made our choices. Email:
Gmail, right?
I'm okay with Google having images of my passport and my medical records;
all that goes through my emails and stuff, right?
Both Google and Apple have an
extraordinary opportunity for hyper-personalized intelligence
that OpenAI or AWS and stuff just cannot get close to.
But I think Apple, in this case, has to really double down.
They could not have afforded a gap year in AI, and that's exactly
what they're doing right now.
If I was in there, heads should roll on
these kinds of announcements.
Yeah, for sure.
Kaoutar, I think kind of one of the most interesting parts of this discussion, and
I think the last time we really talked about Apple was the Daring Fireball,
kind of Gruber, takedown of Apple.
Um, and I think the narrative through all of this is like, it's almost like
what Google was maybe six to 12 months ago where everybody was like, how
can you not be winning in this space?
Chris just talked about this, like, the incredible hardware
they have access to; Shobhit just mentioned all the incredible
trust they have access to.
What's your diagnosis of Apple?
Like, what's going wrong?
Like, by all rights they should be crushing it.
Um, but, um, I'm curious from your, your vantage point, like
how you diagnose the issue.
Yeah, I agree.
I, I'm also like a bit disappointed in terms of, you know, this
underwhelming AI progress.
So for Siri, you know, I think they delayed their Siri
overhaul, which is, I think, a big, significant point of disappointment.
I, I see that across many reactions, you know, online.
They had promised a much more versatile Siri,
but, you know, the comments are that it's not ready.
I think they're trying to be very cautious, or maybe very
conservative, in their approach.
Is it because they're worried, you know, they want this privacy-first
approach, where they're trying to be really careful and they don't
wanna mess up things with the AI and the privacy
aspects that they take so seriously, like Shobhit mentioned?
So I also trust their platform.
I love, you know, all the integration they have, the control
that they have over the entire stack, all the way from
the apps down to the silicon.
It's a very strong position that they have.
But I feel they're overly cautious here in terms of their AI strategy, and
especially with all the hype happening elsewhere, what's holding them back?
I also don't understand, you know, what's going on here, why they're
taking a very conservative approach.
And especially if you look also at the paper that they released, I think
we're gonna talk about it.
The timing of the paper was notable, right before
WWDC, where they had all of this skepticism and so on about AI,
maybe to justify why they're being very incremental and
cautious and taking a slow move.
At WWDC, they focused on a lot of things other than AI.
They say, okay, there is AR, but there's all this other stuff like
productivity and visualization and the integration and the liquid glass.
A lot of focus on that, and all that other stuff that they're
positioning is important.
But AI is also going so strong elsewhere, and we need a
strong position from Apple on this.
So I, this might have been just regrettable timing, who knows?
Right.
But basically, Apple also released a paper that got a lot of chatter this same
week entitled The Illusion of Thinking.
Um, and in some ways it's kind of a critique of what
reasoning models are doing.
And I, I think the, the narrative online at least was, well, it's weird that these
researchers don't even have confidence in the technology they're pushing.
I dunno, Chris, like how you're gonna respond, but I think there is a question
of just like, you know, does the company culture even really believe in the
technology that they're pushing here?
I like Apple's approach to this, and I have to say we are in a
hype cycle just now, and I know we'll get to the paper a little bit later.
I really want Apple to focus on having a good platform, right?
The mobile devices, the iPad, the Mac, all coming together,
and I actually think that's the bigger story of WWDC, right?
Which is that
they've unified, at least on the numbering system for the operating system.
Right.
But actually I think they are trying to make this a stack where I can easily
go from one device to another and they have control of the full vertical device
and the platform and they all connect, and we can mock the liquid glass.
Um, but actually they've done a very clever thing here, which is they've
built a design system that allows you to be able to, uh, have consistency
regardless of the form factor.
Now that's gonna be important in the future because actually we are gonna
wanna switch from our iPhone to our iPad to our Mac and not feel as if
we're on a different operating system.
So I, I weirdly think they're setting themselves up for the
future at a platform level.
And I also think they're setting themselves up from an AI perspective
because they're focusing on the SDKs, the hooks, and how
people are gonna hook into that.
Yes, their AI clearly is not matching up to the ecosystem and the platform
that they're building, but they'll catch up on that.
So I, I would rather they fix all of these things anyway and prepare
for the future, and then we're gonna have great developer experiences.
I don't feel as if I'm missing any AI stuff on my phone.
If I need to go and speak to an LLM, I'll just bring up the ChatGPT app.
It's fine.
Right.
So I'm, I'm okay with the approach that they're taking, but full
disclosure, I'm a full on Apple fanboy, so, you know, it's fine.
So Chris, um, I'll summarize what you just said: iPad is more like Mac,
Mac is now more like iPhone, and iPhone is more like Android.
Is that fair?
No, I'm not taking your Google-centric view on this show, but
with your love for Gemini: if you ask Gemini to code, then you might
as well get a 3-year-old bashing at a typewriter, because you're gonna
get equivalent code coming out of it.
And I get that you love Gemma,
and I do love Gemma as well.
But the reality is, you know,
these smaller models, Apple will catch up on that in time.
So I just think there's two approaches.
You can chase the AI part, and everybody is chasing that, and that's important.
But actually they're chasing their platform
and trying to make that consistent,
and I don't think that's a bad move.
So, I think Windows has done a better job than macOS at opening
up with more MCP-based things that I can actually tap into.
So my apps that I'm building are actually being able to take actions.
I was hoping that Apple would create an ecosystem where we can
start to build apps on top of that.
I mean, we'll see what comes out of this.
I think there's a huge potential that we can do there.
Second, my challenge right now is if I use a different app,
say ChatGPT: on my phone, ChatGPT does not have the rich data about me.
I don't want ChatGPT to follow me and track my location, preferences,
and emails and all of that quite yet.
So instead of replicating the trust I built in Apple or Gmail with Google and
stuff, or Google Photos, I don't want a third party app to give me a service
that's not hyper-personalized to my needs.
Then there's a huge missed opportunity by Apple, hopefully next year.
Yeah, hopefully.
And I think also, you know, even in their
Image Playground, for example,
the features like the photorealistic image generation, or
these advanced generative features,
I think it would be nice, you know, for them to have that.
And of course on Siri, I agree with you, Shobhit: because
of their hand on the data, the customization is gonna be a big leap,
you know, if you can integrate that all together
with their Apple Intelligence, with more advanced conversational
capabilities and generative AI features.
But hopefully, you know, I think they're delaying because they wanna get it right.
So that's what I'm thinking about why it's taking so long, but I think Chris's
point on getting the platform right is also a very important point.
Yeah.
Well, we'll have to wait and see.
There's more to come, and I mean, I think with Apple, they can just keep
taking more and more shots on goal, right?
I think one thing that people frequently miss is they can get it wrong
for a very long time before they get it right and still largely be okay.
Alright, I'm gonna move us on to our next segment.
Um, some interesting news coming out of OpenAI this week, both
of a product nature and also of a philosophical nature, I suppose.
And I kind of wanna talk about both stories together.
So one announcement, not unexpected, is OpenAI announcing the availability
of o3-pro, which is basically the more advanced version of their reasoning model
that has kind of taken the world by storm.
Uh, and also some kind of pricing updates there about how cheaply they're able to
offer a lot of their existing models.
Um, and then secondly, Sam Altman published this sort of essay called
The Gentle Singularity, where he kind of like makes the argument.
That we're already living through the singularity.
You know, it's not a future thing.
It's happening right now and you know, the world is gonna feel like
very different over the next decade.
And maybe, you know, Kaoutar, maybe I'll throw it to you first: when
you play with o3 and o3-pro, like, I guess I kinda
wanna ask the question, do you agree with Sam Altman that we're
already living in the singularity?
Like, basically, that it's wild that we have
technologies that can do what o3-pro can do.
You know, how do you kind of think about that?
But I think one reflection is, like, how much
we buy what Sam is telling us.
Yeah, a very good question here.
I think of course the o3 model has brought a lot of advanced features,
especially on the reasoning capabilities, the multimodality, you know,
really nice features there,
especially focusing on the reasoning aspects.
But I think if you look at Sam's essay,
what I think is, he's over-optimistic.
He's softening the idea of this runaway AGI.
He's saying that this won't be a Terminator; it'll be an assistant
helping you solve hard problems, the superintelligence
that's gonna exceed our intelligence.
So it's a smart narrative shift.
You know, I think it's more politically and socially acceptable
than “machines will take over.”
But I think maybe he's overly optimistic.
There are still a lot of challenges that we have to solve, and
I think he's trying, especially, to align with OpenAI's vision.
You know, they're trying to deliver on the vision of reasoning tools
that help humans think; it could reshape science, economics, and education.
But I think there are still a lot of issues around the misalignment,
around the impact of these things on jobs, and also around
equal access and democratization of AI, because
this might also kind of intensify the division globally:
you know, who gets access to these technologies, and it may
privilege certain people, or certain nations, versus others.
So there are a lot of other issues I think that's are much deeper
than, you know, I think the nice
and Eutopic vision that Sam painted.
So I feel, you know, he's over optimistic, but there are
a lot of issues that still needs to be resolved.
Right.
You're almost saying it's still ahead of us in some sense. Um, yeah. Shobhit, your thoughts? I don't know if you've played with o3 Pro, any impressions?
So the first thing I did was look at Apple's paper that said, oh, these models can't think. They have the full prompt for the Tower of Hanoi problem, where you move the disks over. I took the exact same thing, gave it to o3, slam dunk, it just crushed it right away. So within a few days, OpenAI just came in swinging, like, guys, stop making excuses. AI is working really, really well.
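The Tower of Hanoi task mentioned here is easy to state in code. Below is a minimal, hypothetical Python sketch of the classic recursive solution, just to illustrate the puzzle these reasoning benchmarks ask models to reproduce move by move (it is not the Apple paper's harness or prompt).

```python
def hanoi(n, source, target, spare, moves):
    """Recursively solve Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks out of the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top of it

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))  # a 10-disk puzzle needs 2**10 - 1 = 1023 moves
```

The move count grows as 2^n - 1, which is why the paper can scale difficulty just by adding disks.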
And if you think about this from an enterprise perspective, for us, it doesn't matter that the world has somebody who won a Nobel Prize or somebody with an Einstein-level IQ, right? As an enterprise, what's important for me is: can I take that capability and apply it to a particular system? Can I solve a problem? Can I deliver economic output?
And that requires us to have a hierarchy of intelligence. Generally, in our organizations, you have an obscenely overpaid CEO at the very top who's an expert in multiple areas. But by the time you get to the accounting department, the procurement department, HR, and so on and so forth, you have a really small model that went to a community college and has been doing accounting really well, and that's the right price point and capability of intelligence that we're expecting for that particular job role.
So I don't think that o3 coming out with a brilliant model is something that's gonna drastically change the narrative within enterprises quite yet. We can deliver an insane amount of value with the existing intelligence that we already have, and it has to be more ROI-driven intelligence, versus o3 Pro, which is quite, quite expensive. But the fact is it just came in and rained on Apple's parade. On one side, somebody is saying, hey, AI ain't quite ready yet. The other one is saying, hey, come on, stop talking like this, and crushing through it very, very quickly.
Right.
Two ends of the spectrum of the same story.
That's right.
And Chris, I think this is what's so useful about the show. I'm really glad you brought up that Apple paper again, 'cause it definitely looms large in my mind. It definitely offers a challenge to what we're seeing with the reasoning models. And I guess, I dunno, Chris, what's your take? Is Apple just wrong in what they're arguing here, given everything that we're seeing out of OpenAI?
Well, I did exactly what Shobhit did as well. I took the paper and gave it to o3. That's what I love about this show: Tim gives us homework every time we appear. It's like, you need to read the Apple Intelligence one, you need to read Sam Altman's blog post. And you're like, okay, I'm gonna stick it through o3 Pro, I might as well test that out. Thirteen minutes later. It took 13 minutes for it to read the paper. You're like, come on, dude, I could have read that myself. You know what I mean? So I love o3 Pro, but it does take a long time.
On the Apple paper, I think it is really interesting. Actually, if you ask o3 Pro about it, it's kind of like, yeah, the core of the paper is about right, but they've only gone with a single prompting strategy, et cetera. Is that really gonna pan out over time? So even o3 Pro was a little bit skeptical of the paper.
I think there are some relevant points, though. If you look at the Apple paper, they were talking about doing things like the Tower of Hanoi, et cetera. I think these puzzle-based approaches are important because there is a lot of contamination within these models. They have seen things before, and therefore how do you distinguish between regurgitation and true thinking?
But the flip of that is, especially if you take code, for example, and I'll go back to my favorite models, which are Claude 4 Opus and Claude 4 Sonnet, right? Even without thinking, these models are able to just spit out unique code, end to end, thousands of lines of code, and it's just about perfect. Okay, you're gonna have to change some stuff, et cetera, but it is incredible.
So even if it's simulation, and they're saying that the reasoning isn't going that far, the reality is it is able to do the tasks that I want it to do, to some extent. And maybe that's the thing to focus on: is it providing value? And the answer is yes.
Now, the other thing in that paper that I think is useful is what they're saying about how the models sort of collapse in on themselves after generating enough tokens. So when a model is reasoning about something, after a while it will go into a bit of a spiral, because it doesn't know the answer. And there is a bit of a fundamental flaw here, which is that it doesn't have access to things like tools in this case. So there's no freshness of information coming in, and it goes into a spiral.
So I understand the paper, and I get where they're going with this. Actually, if you take multiple samples, you can get results very similar to what you get from reasoning. But I think the reality is that diversity of data, in that case, becomes more of a thing. So I'm somewhere in between on that paper.
Hmm.
Yeah,
that's actually a really interesting response. And yeah, I'm just reviving a phrase that I don't think we use much anymore, which is, people used to say these are stochastic parrots, right? All they do is copy. And I guess my response to this is, I don't know if debating what intelligence really is, is very helpful. 'Cause in some respect, it's like, okay, say it is copying; it's still incredibly useful for all the tasks that I need, and can have a huge impact even with that in mind.
And Kaoutar, it looks like you might want to jump in, but I feel like that's one of the interesting tensions that we have with this era of language models: yeah, they might just be copying in some respects, but it really feels pretty strongly like a kind of reasoning that solves problems, and I kind of can't get too fussed about it.
Yeah, and if you look at this paper, the "Illusion of Thinking" paper, some of the things I noticed are about the methodology they used; they cherry-picked these tasks. For example, with the logic of the puzzles that were chosen, I think even humans sometimes struggle with these things. So does that represent the full spectrum of reasoning tasks?
And another thing is these artificial constraints. For example, in this paper, Apple did not allow the models to use coding as a tool to solve the problems. For humans, and also for powerful AI models, writing code is a fundamental way to tackle complex, logical problems. So restricting this capability could artificially limit the models' performance and misrepresent their true reasoning potential.
There were also things like the token output limits. Some of the assigned tasks reportedly exceeded the models' token output limits, essentially setting the models up for failure. And it's kind of an all-or-nothing grading, the strict "complete accuracy collapse" metric. I think it might be too harsh.
So I feel there are some flaws in the methodology that they used, and also some exaggerated claims or misinterpretations of the reasoning. For example, on the semantics of reasoning, from the analysis I looked at, some people argue that the paper's title and conclusions could be seen as inflammatory and misrepresent what LLMs are actually doing. While they might not reason in a human, conscious way, their ability to perform these complex calculations, generate coherent logic, and use tools is a form of reasoning. Even if it's built on pattern matching, it is still a type of reasoning, and that's gonna evolve and improve over time.
So, um, Tim, I'm gonna start a whole campaign: AI rights matter. Like, we should not discriminate against AI. And the reason I say that,
I thought we started with o3 Pro, we talked about the Apple paper, and now we're on Shobhit's campaign for AI rights? Here we go.
Hermione Granger here. Go on, Shobhit.
So here's my take on this, right? Recently we had Moderna combine the CHRO and CIO functions into one. So one of my CHRO clients got spooked, and she and I were sketching out what a trusted employee is in a company today, a human employee, and I was trying to extrapolate from there to what a trusted AI employee would be. All of a sudden, when we're hiring people, newcomers into the company, fresh hires from the industry, or interns, we do a really good job of carving out the right set of tasks that I can delegate to an individual intern or a new hire without having to give them step-by-step instructions and handhold them all the way through.
We already have mechanisms for thinking about it in that way, but somehow, when we get to an AI model that's performing at 95% accuracy, we're still not satisfied, right? I think we have to do a better job of appreciating the strengths and weaknesses of a new hire and allocating them work accordingly, and the same thing should extrapolate over to an AI model as well.
There will be a place for a very high-end, PhD-level o3 Pro. There will be a lot of places where we need a much, much smaller model. I'm doing some multi-agent frameworks in production for some clients. I need the orchestration agent to have the logic to think through how to delegate work down into smaller tasks. Right now, anything less than a 70-billion-parameter model is not working out well for me. So I'm struggling with how to influence the reasoning model to do things a certain way, and right now the bigger models are just completely slamming this through. We'll get to a point where, with smaller models in the enterprise environment, we'll be able to influence the reasoning and make it work. There will be much, much smaller, intern-like models, and we'll be in a better position on what accuracy to expect and how we should measure the model.
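The "hierarchy of intelligence" idea Shobhit describes can be sketched as a simple router: pick the cheapest model tier whose capability covers the task. This is a hypothetical illustration; the tier names, capability scores, and costs below are invented for the example, not drawn from any real deployment.

```python
# Hypothetical model tiers: (name, capability score, relative cost per call),
# sorted cheapest-first so the router prefers the smallest adequate model.
TIERS = [
    ("small-8b", 0.60, 1),              # the "community-college accountant"
    ("mid-70b", 0.80, 8),               # solid generalist
    ("frontier-reasoner", 0.95, 100),   # the "PhD-level" model
]

def route(task_difficulty):
    """Return the cheapest tier whose capability meets the task's difficulty."""
    for name, capability, cost in TIERS:
        if capability >= task_difficulty:
            return name
    return TIERS[-1][0]  # hardest tasks fall through to the top tier

print(route(0.5))   # easy task routes to the small model
print(route(0.99))  # very hard task falls through to the frontier model
```

The design point is exactly the one made above: the router's job is matching capability to the job role, so most calls never touch the expensive model.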
And yeah, I think we're gonna increasingly get asked these kinds of questions, right? There's this kind of weird line between, well, what is it that the model's actually doing? But to your point, it's this battle between pragmatism and having to know, and I feel like that's a classic theme in the AI space.
I'm gonna move us on to our third segment. You know, one of the things I keep noticing every time one of these AI headlines comes out is that the transaction dollar amount is just getting bigger and bigger and bigger, and this week was no exception: a huge $15 billion transaction announced between Meta and Scale AI, which, if you're not aware, is one of the big players in the data annotation space. So this is a massive transaction.
And I guess, Chris, maybe I'll throw it to you. The question always with these is: $15 billion. Why is Meta so interested in this?
I think probably they want to try and get ahead in some way, shape, or form, right? So the reality is, I love the Llama models. I think Llama 4 is great, et cetera, but I think there's a little bit of frustration that they're maybe not ahead of Anthropic, they're not ahead of Google, and they're not ahead of OpenAI at the moment.
However, with that being said, they are great models, and they're some of my go-to models all the time. So I think this is: let's put another bet in the ring and run something else in parallel, and then hopefully get to where they want to go. And the reality is, this is being touted as a winner-takes-all market. I'm not convinced it is, but I think the reality is that you have to spend that type of money to be in that game.
It is fierce competition. So if that means attracting new talent, bringing that in, and being able to set up a lab focused on superintelligence, I don't think that's a bad idea, because there are probably two competing things going on at the moment. They need to get AI into their products, they need to get AI out to their consumers, et cetera, and that's different from, how do I set myself up for the future? So by splitting this up a little bit and saying, okay, you're gonna be focused on superintelligence, and we're gonna continue to get Llama out to folks and into our platform so people can use it and gain productivity benefits, then you don't have competing goals and intentions, and the folks can be focused on different things. One is gonna be focused on the research element, one is gonna be focused on the product. And so I don't think it's a bad thing.
Well, and on that, Shobhit, you wanna jump in? What's curious about this transaction is that, in the past, some of the really big deals have been built around sort of technical leaders in the space, right? So, you know, Noam Shazeer, right? Big acquisition built around him. And obviously the kind of key leadership of OpenAI has been involved in all of these big deals. What's interesting about Wang is that he's not necessarily a noted machine learning expert or pioneer. Shobhit, I think that almost suggests there are other things leading Meta to this acquisition. I dunno if you agree with that analysis.
So, Meta is trying to make sure that LlamaCon does not become a WWDC.
Sure.
But Alexandr and Lucy co-founded Scale AI, and both of them are brilliant, both of them among the youngest billionaires. He's worth like $4 billion plus at this point, and he's like 28 years old. What a phenomenal story. I would argue that he has done an exceptionally good job. He himself has been a math prodigy, a coding genius, and whatnot, so I would say he has the street cred, and he comes from a very talented family as well.
So I think he has done it the right way. And we've worked with Scale AI quite a bit. Phenomenal assets. They do a really good job at creating synthetic data, and they've done a lot around getting data ready for training these models. There are very few competitors who can actually do what Scale AI is doing; there are maybe a dozen more players that we look at and work with, but Scale AI has definitely done a really, really good job.
And this was definitely an acqui-hire, right? There's no other way for you to get Alexandr to come and join. And Meta has recently gone through a couple of changes in their org structure in their machine learning, generative AI, and AGI groups, too. Clearly, they usually do not hire high-profile people into leadership roles. This is an exception, and it's a very senior role that they're bringing an external person in for.
I think it just doubles down on what Chris was saying in terms of where Meta is with respect to the market today. On one end you have Apple saying, "Hey, this is all hype," and on the other end you have OpenAI saying, "Hey, we are already passing the singularity." So Meta, somewhere in the middle, is saying, which side should I go? I'm gonna follow where the market is going. Right? So I think there's a lot there.
As for the $15 billion, if you look at the overall CapEx that Meta has committed to spending this year, they're looking at about $60 billion-ish. They've been spending half of what AWS and Microsoft are spending on CapEx every year for the last two or three years, right? So this year Meta is doubling down and catching up. Even if they spend about $60 to $70 billion, this $15 billion will come from that portion.
Companies are getting very, very creative with how they're handling talent and IP and data. Scale AI, I think, is a really, really good acquisition, or at least a 49% acquisition, for Meta as a company. They need it. We're still waiting for Behemoth, still waiting for a really good reasoning model from Meta. There's a long way they need to go, and they have to shake things up a bit.
Yeah, for sure.
Kaoutar, your final thoughts: if you had $15 billion, would you use it to recruit Alexandr Wang to your company?
Well, maybe I might not be as bold as Meta, but overall I think it's a strategic move, and I think it shows a growing trend: infrastructure matters as much as models. Training data, evaluation, human feedback, I think this is where the AI wars are being fought right now. So the infrastructure play in the AI race is becoming very important. These investments, I think, highlight the broader industry trend, where controlling the underlying data and infrastructure for AI development is becoming as important as, if not more important than, just developing new models. So I think Meta is really betting on securing its foundational AI supply chain with this acquisition, which I think is very strategic and the right move for them.
Of course, they're betting a lot, and I think they're also saying they want to secure this high-quality data for AGI, and AGI, I think, is one of their big bets for the future. This investment is trying to secure that future for them.
Yeah, that's a great response, Kaoutar. I don't know if I would be as bold as Zuck to spend $15 billion. We'll keep an eye on how this deal goes and how this partnership evolves.
As always, this was so good. Kaoutar, Shobhit, Chris, thanks for joining us, and stay tuned for our next segment: an interview with Oliver Dial, which I recorded earlier today, focusing on some recent announcements from IBM Quantum.
So as I promised at the very top, we wanted to make some time
at the very end to talk about a super exciting announcement that's
coming out of IBM on Quantum.
And in some ways we have I think, the perfect person to talk about it.
Um, Oliver Dial is joining us on the show.
He's the CTO of IBM Quantum.
Thanks for taking some time today.
Really appreciate it.
Absolutely, no problem.
You know, on MoE we don't usually talk too much about quantum, but we keep an eye on it because our scope is emerging technologies in general. Unfortunately, most of the airtime is always taken up by AI, but we always wanna keep an eye on what's happening in quantum, in part because there are just really exciting things happening. And so, Oliver, really excited to have you on the show.
You know, the thing I always get with quantum is that people are always like, ah, it's never gonna happen; people have been talking about quantum forever. I always push back and say, look, in the last 24 months there have been some really major leaps in quantum that make it feel like, even though everybody's complaining about how long it's taking, it might actually finally be around the corner. And so I wanted to bring you on the show to talk first about the announcement that came out today from IBM on the idea of fault-tolerant quantum computing. And maybe let's just start with the problem. Oliver, I'm curious if you can walk us through: why has the fault rate been such a big problem for getting quantum to actually work practically?
Yeah.
Well, at the end of the day, you're trying to do computation with some of the most sensitive systems anyone's ever made in the world. You're trying to manipulate single quantum states and use them to store information and compute. And so the error rates that we have with today's hardware are orders of magnitude larger than what you would have on a classical computer. Many orders of magnitude. In fact, for our two-qubit gates, which is what we usually talk about when we talk about error rates, our current failure rate is about one in a thousand. So imagine trying to do computation where, one time in a thousand, the computer makes a mistake.
Yeah.
And I think those errors are caused by, like, being too close to an electrical field or something, right? Like, it could be anything.
Um, so the biggest one for our qubits is that we actually store the one or the zero in a single microwave photon. And so anything that can absorb microwave energy can suck that one away and turn it into a zero.
Which is kind of like everything, right?
Like everything, yeah, everything on the planet.
Um, the worst thing for us is surfaces, because anywhere there's a surface, there's a place where you can have a thin layer of oxide, you can have contaminants, you can have all kinds of things that'll absorb microwave energy. So you're trying to do computation with this fantastically high error rate. And the really amazing thing to me is that even today, despite that high error rate, we're already doing computations that a classical computer can't run. Right? That's what we talk about as the era of utility, and the reason is we have a lot of ways that we can sort of statistically remove the impact of those errors. So today we play a game where we run a circuit, we run a quantum computation many, many times, and we statistically remove the errors to get an accurate answer out.
And using that, we're starting to be able to tackle some really
interesting problems in chemistry and optimization and material science.
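A toy version of that "run it many times and statistically remove the errors" game, assuming a simple symmetric readout-flip model (this is an illustration of the averaging idea, not IBM's actual mitigation pipeline): if each measured bit flips with a known probability p, the observed expectation value shrinks by a factor of (1 - 2p), so after averaging many shots you can invert that factor.

```python
import random

random.seed(0)
p_flip = 0.05            # assumed known readout error rate
true_expectation = 0.8   # ideal <Z> of the state: P(0) - P(1) = 0.9 - 0.1

def noisy_shot():
    """One measurement: sample the ideal bit, then flip it with probability p_flip."""
    bit = 0 if random.random() < 0.9 else 1
    if random.random() < p_flip:
        bit ^= 1
    return bit

shots = [noisy_shot() for _ in range(200_000)]
observed = 1 - 2 * sum(shots) / len(shots)   # noisy <Z>, shrunk toward zero
mitigated = observed / (1 - 2 * p_flip)      # invert the known shrink factor
print(round(observed, 2), round(mitigated, 2))
```

The key limitation Oliver goes on to describe is that the sampling cost of this kind of statistical recovery grows quickly with circuit size, which is the motivation for error correction proper.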
And you think quantum's, like, already here is kind of what you're saying?
We're saying you're probably gonna show quantum advantage, so doing something better, faster, or cheaper than a classical computer, by 2026. And actually, I think it's gonna be even sooner than that. And, you know, you were talking about this cadence of news releases. That is going to be such an all-boats-rise moment, because it's gonna say: okay, wait, this isn't something that's gonna happen next month or next year. This is actually really happening today, now.
Um, but the big thing with all these techniques we use today is that they have an overhead that scales exponentially with the size of the problem. And so, in the long term, it's a little bit of a losing proposition. You know, we have this computer that's exponentially faster for some jobs, but we've added an exponential overhead. And so, you know, it's a real...
Because, like, I think, is it right to say that the bottleneck is almost computation, right?
Mm-hmm.
Like, you're basically saying, we've got this noisy process, and we kind of de-noise it using computation, but it just can't scale the way we want it to.
Exactly.
So when we talk about fault tolerance, we bring in another set of tricks, and it's error correction. If you're doing communication between classical computers, you might include parity bits so that you can detect whether the data got through correctly and retransmit it. We can pull the same kinds of tricks with a quantum computer. These error-correcting codes are really nice because, instead of correcting the errors after the computation is done, which is where that exponential scaling came from, you're actually correcting the errors on the fly as you do the computation. And so the overhead is no longer exponential. So if you can crack error correction in a way that lets you get to this fault-tolerant regime, then you can really do any computation you want, and the cost scales sort of logarithmically in how large you want the circuit to be.
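The classical parity-check analogy is easy to demonstrate. Here's a minimal sketch (classical bits only, nothing quantum) showing how a single even-parity bit exposes a one-bit transmission error:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the word still has even parity (no single-bit error detected)."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(check_parity(word))  # transmitted cleanly: parity holds, prints True
word[2] ^= 1               # a single bit flips in transit
print(check_parity(word))  # parity check now fails, prints False
```

Quantum codes generalize this idea: they measure parity-like checks continuously during the computation, without reading out the encoded data itself.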
Now, not all the error-correcting codes that we can talk about are the same. A lot of people have talked about using something called the surface code, which is a really neat code. You've gotta remember, at the end of the day, these qubits aren't virtual objects. They are physical things on a chip, and there are actually little superconducting wires connecting them together. When you actually look at these devices, they're really pretty looking.
With those superconducting wires connecting things together, if you're building an error-corrected chip, you really want them to exactly match the parity checks that you want to be able to run, because that means you can run those parity checks really efficiently without introducing additional layers. For the surface code, the layout that you want for the qubits looks like a checkerboard: the qubits are on a square lattice with nearest-neighbor connections. So that's really neat. As a semiconductor manufacturing person, you can look at this and say, yes, I can see how to build this, it's not too bad. The problem with the surface code is that you're talking thousands or tens of thousands of physical qubits per logical qubit. And so, although it's easy looking, it's not practical. It's too expensive, it's too large. To put that in perspective, the largest devices that we've ever made are about a thousand qubits.
So, okay. So you're kind of priced out of doing this very quickly.
Exactly.
And, and those are also, by the way, the largest devices anyone has ever made, so.
Sure. Right.
Um, and so part of this announcement is we have this new code that we're calling the gross code. And we're calling it that not because it's disgusting, but because dozens keep showing up whenever you talk about it, and a dozen dozen is called a gross. So the gross code breaks the rule that we had in the surface code. You can no longer lay the qubits out in a checkerboard; in fact, each qubit, in addition to that checkerboard, has two long-range connections that go to other qubits that are far, far away on the device. Think of it as like a highway overpass across the chip. And what those let us do is create a code that is much more efficient. With the gross code, with 300 physical qubits, we can encode 12 logical qubits. So the qubits come in 12-packs, which is kind of neat. But it's an order of magnitude fewer qubits than you need with the surface code to build devices like this.
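Taking the figures from the conversation at face value (the surface-code number below is just the low end of the "thousands per logical qubit" Oliver quotes, used only for scale), the overhead gap is a quick back-of-envelope calculation:

```python
# Figures quoted in the conversation, used only for a rough comparison.
gross_physical, gross_logical = 300, 12  # the gross code's "12-pack" of logical qubits
surface_per_logical = 1_000              # low end of "thousands per logical qubit"

gross_per_logical = gross_physical / gross_logical
print(gross_per_logical)                        # 25 physical qubits per logical qubit
print(surface_per_logical / gross_per_logical)  # 40x fewer qubits at these figures
```

At the higher "tens of thousands" end of the surface-code estimate, the gap is correspondingly larger, which is why the 1,000-qubit devices that exist today are far from enough for a surface-code machine but within reach for a gross-code one.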
That's fascinating, because I think, with the way you described the problem initially, my layman's view is like, well, just try to shield it so you don't get so much noise. And what you're saying is that the solution is a physical solution, but it's for the purpose of making the computation process much more efficient. In some ways, you're not trying to deal with the fundamental problem that these are really delicate systems that are subject to all sorts of interference from the environment.
Kind of. I mean, we do want to solve the fundamental problems too, as well as we can. What these error-correcting codes do is they sort of exponentiate your error rate. The gross code, we call it a distance-10 code. Basically, the logical error rate goes like your physical error rate to the fifth power. And that's really neat, in the sense that if we make our physical error rate 10 times smaller, the logical error rate will get a hundred thousand times lower.
Right.
You get this enormous lever arm on the physical error rate. And so your incentive to continue to make the underlying physical hardware better is still there. It's, if anything, even stronger.
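That lever arm, a distance-10 code suppressing errors roughly like the physical rate to the fifth power, checks out with simple arithmetic. The constant prefactor is ignored here, so these are scaling numbers rather than real error rates:

```python
d = 10             # code distance of the gross code
exponent = d // 2  # errors suppressed roughly like p ** (d / 2)

def logical_rate(p_physical):
    """Scaling-only estimate: logical error ~ physical error to the fifth power."""
    return p_physical ** exponent

# 10x better physical error rate (1e-3 -> 1e-4) improves the logical rate by 10**5.
improvement = logical_rate(1e-3) / logical_rate(1e-4)
print(improvement)  # ~100,000x, the "hundred thousand times lower" from the conversation
```

This is also why, as Oliver says next, you never need (and will never get) a raw physical error rate of 1e-10: the code amplifies whatever hardware gains you do make.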
Um, but there are fundamental limits that we think are gonna be really hard to exceed. You're never going to see a physical error rate of 10 to the minus 10 coming out of these systems. There are just too many sources of error.
Well, so I think one way of thinking about this is, we've got existing computers, traditional computers if you will, that can solve a certain landscape of problems. And then what you have is quantum now kind of catching up, right? You look across this landscape, and more areas of the map are filled in: oh yeah, quantum could do that. And there are even new places on the map, right? Like, you could never solve that problem with a traditional computer, and quantum can do it now.
Um, do you wanna give our listeners a little bit of intuition? With this kind of efficiency gain, I assume y'all have certain problems in mind where you're like, oh, suddenly we could start to crack these. And I think it makes it a little more tangible to talk about where this goes, right? Suddenly you can approach someone and say, hey, maybe quantum is an application for what you're doing. I'm curious what you think those are.
So today we're just barely beginning to be able to do sort of small chemistry problems, like a couple of atoms, a key part of a molecule, in a way that's compelling. With these fault-tolerant computers, really complex chemistry problems in catalysis and organic chemistry become possible, as well as a lot of really interesting problems in materials science: studying superconductivity, studying exotic states of matter.
One of the original motivations for quantum computers, you've gotta remember, was ultimately simulating quantum systems. But I think the thing that excites most of our clients the most is optimization. And that's a place where having a wider logical register, more qubits that work better, is just a huge benefit, because of course, with optimization problems, nobody really cares if you can solve a small optimization problem, right? Because in practice they're not small.
Like, yeah, exactly.
And so,
you know, even as we're working on better algorithms so that we can solve these
problems on near term hardware, it's this fault tolerant hardware that is
really gonna begin to unlock this space.
Uh, and so when we're talking to financial institutions, when we're
talking to logistics companies, that's really where we get a lot of interest.
Yeah, I think that's like one of the wonderful things about the kind of
like current generation of all these like kind of emerging technologies.
'cause I joke about it all the time in the AI space, where you're like, you've built a machine intelligence, right? And then you're like, actually, the primary application is that we use it to optimize customer service.
I guess in some ways, like quantum almost has like a very similar quality, right?
Which is like you're literally playing with like the fundamental building blocks
of reality, and then it's like, we gotta solve the traveling salesman problem.
It's like kind of where you end up.
Yeah.
I'm not sure the traveling salesman problem has that much money behind it.
There are really good approximate answers there.
Yeah, that's right.
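The point about approximate answers can be made concrete: for the traveling salesman problem, simple classical heuristics already give decent tours, which is why that problem alone doesn't drive demand for quantum hardware. A minimal nearest-neighbor sketch in Python (city coordinates are invented purely for illustration):

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy TSP heuristic: always visit the closest unvisited city next."""
    unvisited = set(range(1, len(cities)))
    tour = [0]  # start at city 0
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour, including the return to the start."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Hypothetical example instance
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
tour = nearest_neighbor_tour(cities)
```

Heuristics like this (and much stronger ones, like Lin-Kernighan) get close to optimal in practice, which is the guest's point: the commercial interest is in large, messy optimization problems where such shortcuts fall short.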
Well, that's great.
Um, I guess maybe just the last question is, I'm curious about, now that you've
kind of worked on this, like what's next?
Like what do you think is the next big challenge that
the community's focusing on?
Maybe give us a flavor for like what we might expect in the
next, you know, six to 12 months.
So I think the first big one is quantum advantage. We are going to see it, and like I said, it's just going to be the shot heard around the world on quantum.
Um, the other one is, now that we've announced this roadmap to fault tolerance, I think we're going to start to see a lot of other competitors
in the field updating their roadmaps to use some of the same ideas.
And so we're really going to begin a second stage, this race to fault tolerance.
It's like game on, basically. For you guys, it's game on.
And remember, you know, we have a plan.
It's really neat that in principle we have solutions to all the problems that you need to build a fault-tolerant quantum computer. You know, to the extent that we're actually starting to build a data center in Poughkeepsie, New York, to house these things, like, that's where we are in the planning here.
But every stage of that can get better. We have a bare minimum.
We are continuing to do research.
We expect every aspect of our plan to still continue to improve and for
these machines to only get more capable than what we're anticipating today.
I think maybe just the one thing, maybe to kind of end where we started: you know, I've been hearing a lot from quantum people about how this is very soon and gonna come in the next 12 months, and this is gonna be a shot heard around the world.
I don't wanna be too mean, but kind of like why should we believe you guys that
this is, this time is different, right?
Like what about your roadmap makes it like an actual, practical thing?
Well, if you think about the way we build really complicated machines,
the, the key is to be able to build them out of an individually
testable and composable unit cell.
If we needed to build, well, Blue Jay is gonna be about a hundred thousand physical qubits, and if I needed to build you a hundred-thousand-physical-qubit chip, we would need aliens to come down to Earth and give us that technology.
Um, but the great thing about the gross code is the module
size is actually pretty small.
Once we've added all the extra qubits we need for computation and communication, we can build a module of about 500 qubits, and that module we can then step and repeat and connect together to build this system as a whole.
And so the great thing about this design is that we don't need to build
a hundred thousand qubit system.
We need to build a 500 qubit system.
We need to be able to manufacture it.
We need to be able to test it, and we need to be able to
connect a lot of those together.
And so it's really that modularity that is bringing this into reach and the
fact that the module size is no bigger than what we've built in the past.
Right. We built Condor. We have built a thousand-qubit device that was only a little bit less complex than what we need to build to make this code happen.
And that's why we're saying it's practical.
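The modularity arithmetic here is simple enough to sketch. Assuming the nominal figures from the conversation (about 100,000 physical qubits for Blue Jay, modules of about 500 qubits, and Condor at roughly a thousand qubits; the exact device counts differ slightly):

```python
# Rough arithmetic behind the "step and repeat" argument.
# All figures are nominal numbers taken from the conversation.
BLUE_JAY_PHYSICAL_QUBITS = 100_000  # target system size
MODULE_QUBITS = 500                  # one testable, composable unit cell
CONDOR_QUBITS = 1_000               # already-built device, approximate count

# Number of identical modules to step-and-repeat into the full system:
modules_needed = BLUE_JAY_PHYSICAL_QUBITS // MODULE_QUBITS

# Each module is smaller than hardware that has already been manufactured:
module_vs_condor = MODULE_QUBITS / CONDOR_QUBITS
```

So the manufacturing problem becomes building and testing one 500-qubit module, half the size of a device that already exists, and then connecting a couple hundred of them, rather than fabricating a single 100,000-qubit chip.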
That's awesome.
Well, great.
I'm really looking forward to having you back on the show, Oliver, and
uh, thanks for taking the time today.
No problem at all.
Thank you.
Thanks all your listeners for joining us.
If you enjoyed what you heard, you can get us on Apple Podcasts, Spotify, and
podcast platforms everywhere, and we will see you next week on Mixture of Experts.