AI Compute Unbundling Sparks Market Battles

Key Points

  • OpenAI is “unbundling” its AI stack—dropping Microsoft’s exclusive compute rights and sourcing chips from Oracle, Google, etc.—because the real bottleneck now is getting enough hardware into data centers, not model research.
  • The massive, growing demand for AI services shows the market isn’t in a bubble; companies are racing to build the infrastructure needed to satisfy a backlog of “near‑infinite” intelligence appetite.
  • Anthropic’s Claude was integrated directly into Excel, prompting Microsoft to launch its own “agent mode” (which actually uses Anthropic’s models) to keep AI capabilities within its Office suite and maintain Azure lock‑in.
  • Microsoft’s strategy is to be “good enough” for CTOs—offering functional AI tools that preserve cloud usage—rather than trying to be the outright best, even when that means embedding a competitor’s technology.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=8W_IUoSMvu0](https://www.youtube.com/watch?v=8W_IUoSMvu0)
**Duration:** 00:07:57

## Sections

- [00:00:00](https://www.youtube.com/watch?v=8W_IUoSMvu0&t=0s) **OpenAI Unbundles Compute Amid Chip Shortage**: The speaker clarifies that the headline-grabbing trillion-dollar IPO rumor is a distraction; OpenAI's real move is to unbundle its tech stack and source compute from any provider, underscoring that the next breakthrough in AI depends more on securing enough chips for data centers than on novel model research.
- [00:04:37](https://www.youtube.com/watch?v=8W_IUoSMvu0&t=277s) **Cursor vs Windsurf Agent Debate**: The speaker contrasts Cursor's multi-agent, long-running task model with Windsurf's ultra-fast single-agent approach, while noting the rise of multi-model support in platforms such as GitHub Copilot and Google AI Studio.
- [00:07:57](https://www.youtube.com/watch?v=8W_IUoSMvu0&t=477s) **Closing Toast**: The speaker concludes the discussion with a brief, celebratory "Cheers."
0:00 I spent over a dozen hours this week following AI stories, so you don't have to. Let's get what matters in ten minutes. Number one: OpenAI has a rumored trillion-dollar IPO, and Nvidia hits $5 trillion in market cap. What's the real story here? It's actually not the rumored IPO, although that was news that went around the world. The real story is that OpenAI is unbundling the tech stack, and that is part of how they are reaching this valuation. They have reached an appetite for compute that exceeds even Microsoft's cloud's capability to deliver. So they are unbundling: they have dropped Microsoft's first right of refusal on compute, and they can now get compute from anywhere, from Oracle, from Google, from elsewhere. That might seem strange, because some of those players, like Google, make their own models, but we already see Anthropic and Google working together. The bottom line is not who ends up in a deal with whom. The bottom line is that everybody is in a race to build infrastructure. And if you want to know when the next great model comes out from Gemini, OpenAI, or Anthropic, the answer increasingly depends not on researchers doing smart things with models but on people getting chips into data centers with power. Researchers and leadership at these companies keep communicating the same message: we're not blocked on progress, we're blocked on chips, on the ability to get enough chips into data centers to serve demand. As I called out earlier in the week, that incredible appetite for AI is part of how we know we're not in a bubble. This is getting built out to serve a backlog of existing demand, and that demand shows no signs of slowing down. It turns out the world has a near-infinite appetite for intelligence.
1:42 Story number two: getting that intelligence into a practical space. Anthropic has added Claude to Excel, and Microsoft has launched agent mode. This is super interesting because Microsoft actually uses Anthropic's models for agent mode while competing with Claude for Excel. Microsoft is really in a position where they just want to show that they provide good solutions to the CTOs who are purchasing Microsoft products, so they can preserve more of a lock-in around AI usage and, ultimately, cloud usage for Azure. They don't need to be the best; they need to be good enough. One of the things I pay attention to in this story is that because Anthropic has done such a great job with Claude for Excel, and the news has gone around the world (I've written about it, others have written about it), Microsoft feels pressure to bring that capability into its traditional Office suite. As far as I know, they have never before taken someone else's tool and embedded it natively in Office, but Claude was so good that they felt they were losing a step, and getting disintermediated, if they did not pull Claude directly into Excel. So I think that's a savvy strategic move, but it shows the pressure that can be placed even on traditional software makers when you have really excellent AI tooling.

2:59 Story number three: Meta is laying off folks in the AI division, 600 to be exact. And these are not cost-cutting measures, not really. Meta kept its hundred-million-dollar-plus researchers while cutting more than 600 other researchers. The way to think about this is that the skills that commanded a premium in 2023, like PyTorch experience or an NLP background, are now table stakes.
The market has aggressively split into commodity AI engineers who implement known techniques and a super-elite tier of researchers who discover new paradigms and get paid whatever they want. The challenge here is that every time I check the news, Meta is causing chaos on this AI team: hiring new people, picking a new leader, firing an old leader, firing 600 people. Teams need coherence and consistency to ship. Llama is already outdated. If we're Meta, we need the team to settle down and ship, and I have not seen that. I think that in the next 90 days, say by the holiday period in 2025, we need to see whether this expensive, multi-billion-dollar-contract, elite-researcher-led Meta team can actually ship, because right now they're not, and all we see is more chaos every time we look around. The longer that happens, the more you disrupt the team and the less likely it is to really come through.

4:30 Story number four is about the IDE wars. Cursor Composer and Windsurf SWE-1.5 both shipped, and they have very different approaches. Cursor is using an agentic approach where you can run and spawn multiple agents to tackle tasks; they're clearly starting to disintermediate the engineer from the file system. Windsurf is betting that you actually want iteration more than you want agents doing long-running tasks. So Windsurf came back and said: we are shipping an incredibly fast agent. It's still good, but the key thing is that you never get blocked, because this agent comes back so quickly. That is a really interesting dogfight, and I'm really unclear who is going to win.
5:12 Do you want to be in a position where you have multiple agents running long-running tasks, or, like Windsurf, would you prefer to develop with a super-fast agent that comes right back? Developers get that choice, and we'll see who wins.

5:23 Story number five is about GitHub Copilot and Google AI Studio. This sounds boring, but stay with me. Fundamentally, what's happening right now is that models are growing up, and some of the previously hard-to-build telemetry and evaluation that supports models is coming into standard tooling. For example, GitHub Copilot now offers multi-model support, and even though GitHub is owned by Microsoft, Microsoft can't stop you from using other models: the center of gravity around best practice is so strong that everybody needs to enable multi-model support, even these solely owned providers. There's some maturity in the stack coming through. With Google AI Studio it's a similar story, but on the observability side. When models commoditize, the reason you use something like Google AI Studio is that you're running production workflows, so Studio logging is really a feature that shifts the battleground from "which model is smartest" to "which platform makes debugging and iterative improvement of my agentic workflows easiest." The agent flows are growing up; that's the larger takeaway, I think.

6:36 Finally, OpenAI's Aardvark. Aardvark is an autonomous security agent in research preview right now. The exciting thing is that this is the first major model launch that addresses security specifically. Aardvark's entire job is to scan your code repositories, look for vulnerabilities, assess their severity, and then propose fixes, all by itself, entirely autonomously.
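Aardvark's internals aren't public, but the scan, assess, and propose loop the speaker describes can be sketched. The toy below stands in simple regex rules for the model, and the rule set, the `Finding` type, and `scan_source` are illustrative inventions of this write-up, not Aardvark's actual API:

```python
import re
from dataclasses import dataclass

# Toy vulnerability rules: pattern -> (severity, suggested fix).
# A real agent would reason with a model; these rules are illustrative only.
RULES = {
    r"\beval\(": ("high", "replace eval() with ast.literal_eval() for data parsing"),
    r"shell\s*=\s*True": ("high", "pass an argument list and drop shell=True"),
    r"verify\s*=\s*False": ("medium", "re-enable TLS certificate verification"),
}

@dataclass
class Finding:
    path: str
    line: int
    severity: str
    fix: str

def scan_source(path: str, text: str) -> list[Finding]:
    """Scan one file: flag matches, assess severity, propose a fix."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, (severity, fix) in RULES.items():
            if re.search(pattern, line):
                findings.append(Finding(path, lineno, severity, fix))
    return findings

if __name__ == "__main__":
    sample = "import subprocess\nsubprocess.run(cmd, shell=True)\n"
    for f in scan_source("app.py", sample):
        # prints: app.py:2 [high] pass an argument list and drop shell=True
        print(f"{f.path}:{f.line} [{f.severity}] {f.fix}")
```

The interesting part of the real product is the loop around this: running it continuously over whole repositories and proposing patches autonomously, which is exactly the "stay awake 24/7" advantage the transcript argues for.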
7:02 The fact that it's out now strongly suggests it will be out from multiple model makers by the end of the year. What that will do, collectively across all the solutions that get built, is start to put to bed the idea that AI code is insecure. If you can use AI as a weapon to actively build secure code, actively patch vulnerabilities, and do what engineers cannot do, which is stay awake 24/7 checking for security vulnerabilities, well, now you're in a position to argue that AI code is not only more efficient to write but also more secure, because of tools like Aardvark. That is a really big strategic shift in the landscape, and we're right on the cusp of it. And those are the stories that mattered. I hope you enjoyed it. I wrote up a prompt if you want to dig into what matters and why, so you can have a conversation with the news, which is one of the fun things about the world we live in. You don't have to just absorb it; you can actually have the conversation. So check it out. Cheers.