OpenAI's 2026 Strategy: Seats and Scarcity

Key Points

  • The conversation around AI should move beyond device-era comparisons of “who has the best product” and focus on the strategic direction OpenAI is taking into 2026.
  • OpenAI is operating under tight constraints, balancing a consumer-focused ChatGPT that attracts roughly a billion users but converts only about 5% to paid against growing market demand for enterprise “delegation engines” that deliver fully autonomous, high-quality work outputs.
  • The company’s emerging strategy appears to pivot toward selling inference‑based autonomous agents that enterprises can purchase to offload tasks, signaling a shift from pure chat experiences to monetizable enterprise workloads.
  • OpenAI’s resource limitation can be visualized as an airline with scarce seats: compute capacity must be allocated among low‑price consumer users, higher‑value enterprise customers demanding outcomes and governance, and investors who need cash‑flow positivity.
  • Despite these pressures, OpenAI currently enjoys a distribution advantage that positions it to shape the market while it navigates the trade‑offs between scaling compute, meeting enterprise expectations, and achieving profitability.

**Source:** [https://www.youtube.com/watch?v=2gt2Ugy1b6Q](https://www.youtube.com/watch?v=2gt2Ugy1b6Q)
**Duration:** 00:26:40

## Sections

- [00:00:00](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=0s) **Beyond Devices: OpenAI’s 2026 Strategy** - The briefing reframes the AI conversation from consumer-centric device hype to a strategic analysis of OpenAI’s constrained, multi-model platform evolution toward enterprise delegation engines and paid inference services by 2026.
- [00:03:18](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=198s) **OpenAI vs Gemini: Distribution Battle** - The passage explains how Google’s Gemini is rapidly expanding via Google’s platforms, compelling OpenAI to defend its user base while managing consumer-focused cost pressures and massive enterprise token demand, underscoring compute capacity constraints that go beyond simple model quality debates.
- [00:07:53](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=473s) **Research-Driven Roadmap at OpenAI** - The speaker explains that engineers and researchers control OpenAI’s direction, prioritizing ambitious science, medicine, and physics AI challenges over chat, reflecting a democratic, passion-driven focus on humanity-advancing work.
- [00:12:41](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=761s) **OpenAI Funding, IPO, and Compute Constraints** - The speaker argues that despite rumors of a cash crunch, OpenAI is planning massive fundraising and an IPO to bridge its capital-expenditure gap, but its growth remains limited by compute bottlenecks that could prevent meeting enterprise demand by 2026.
- [00:16:05](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=965s) **OpenAI Monetization Beyond Paid Users** - The speaker argues that despite OpenAI’s large user base, its lack of device-level distribution limits conversion, so future revenue must come from alternative streams like shopping assistance, ads, and commission models targeting the majority of non-paying users.
- [00:20:49](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=1249s) **Beyond Enterprise Seats: AI Adoption** - The speaker argues that simply providing premium AI “seats” to leaders leads to shallow, consumer-style usage, and true enterprise value requires reshaping employees’ mental models so they treat AI as an outcome-driven tool rather than a novelty.
- [00:24:00](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=1440s) **OpenAI’s 2026 Enterprise Crucial Test** - The speaker argues that by 2026 OpenAI must turn its compute advantage, financing flywheel, and consumer habit into profitable enterprise outcomes without compromising the product’s core value.

## Full Transcript
Most people are still talking about OpenAI the way they talked about Apple back in 2008, as if the whole story is who has the best device. Heading into 2026, I think that's the incorrect frame for the entire conversation around AI. So, in this executive briefing, I want to talk about the real question, the one that's strategic. If you assume that you have a multi-model stack baked in, which I talk about all the time and a lot of leaders are now getting, then ask yourself: what is OpenAI trying to become in 2026? What happens to everybody else if they succeed? And what happens to everybody else if they fail at that plan?

The cleanest description I've seen is this: OpenAI is behaving like a company operating under significant constraints, not necessarily like a company that has a single coherent product strategy to execute against. This has been really true in the last couple of months. The tension is fundamental at this point, given their success. ChatGPT is being optimized as an engagement container for a billion people, only 5% of whom are willing to pay, while the market's willingness to pay is shifting toward delegation engines: systems that enterprises can purchase where you hand off work and walk away. A lot of the Codex line of direction and strategy seems to me to be headed that way, where these are designed to be fully working autonomous agents with very, very high-quality inference. You'll pay for the inference, but you'll get excellent results, and if you prompt it properly, it will give you fully finished enterprise work product. Maybe that's code initially; I would not be surprised to see that branch out into other places given recent launches in late 2025 from OpenAI.
So, as a strategic diagnosis, this tells you what OpenAI is defending, what it's postponing, and it implies where the trade-offs are going to be when the system is pressured. So let's dig into that a little bit more.

To understand OpenAI's 2026 strategy, it helps to stop thinking in terms of the product as a singular entity and start thinking in terms of seats, because I think the right analogy is that OpenAI is running an airline with scarce inventory. It's like an airline running a popular route from New York to London that just cannot get enough seats on the airplane. In this case, compute is the scarcity, and they have to allocate seats on that jet between the consumer seat, where people are by and large not willing to pay a whole lot and defaults have to be cheap and fast; the enterprise seat, where outcomes and governance are demanded and there are a lot of standards; and the investor and capital seat, where the only real question is whether you have enough cash runway and compute deals in place to keep the machine flying until you get to cash-flow positivity, until you get to profitability.

And so the key thing I want to call out is that OpenAI currently has a distribution advantage. Now, Google can push Gemini, and is pushing Gemini, through Search, through Android, through Chrome, and they're growing faster than OpenAI at this point, but it's still true: OpenAI has the king of distribution advantages in the AI space. But to keep it that way, OpenAI is now in a position where they have to defend that territory. They have to earn and retain all of their users while growing at the margins, in a market that increasingly has people who have already picked AI systems other than OpenAI.
So now it's not just "can I introduce you to AI, pick one up." It's "can I introduce you to AI, specifically OpenAI's AI, and please don't use Gemini." That's a different proposition. In that world, compute is both a unit-economics constraint for consumers and a capacity constraint for enterprise. Think of it this way: the consumer cares and is price sensitive; maybe you tip more consumers over into paid if you can serve compute, serve intelligence, more cheaply. But from an enterprise perspective, you don't necessarily want the cheap intelligence. You want to burn tokens. Sam has said in a recent interview that he has enterprises knocking on his door saying, "we can ingest a trillion of your tokens, please give us a trillion of your tokens." There's a capacity constraint at that scale, where the question is how you develop the compute to serve that kind of capacity to enterprise.

This is the conversation people are missing when they talk about model quality, because OpenAI's most important shipping service is not the weights in the model. It's actually the allocation of compute. It's where they route queries from consumers. It's what the defaults are on your chat surfaces. It's what your plan limits are, and which experiences they make easy for you versus which stay hidden. And you can see that fundamental compute constraint leaking into a bunch of their recent product choices, right? The rollback of slower reasoning by default in ChatGPT 5.2 is arguably an assessment that for free users the cost and latency are not worth it, and that users prefer, frankly, faster, dumber models that are cheaper to serve. And this just underlines the thesis that chat is largely a saturated use case.
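The point that OpenAI's most important shipping service is "the allocation of compute" (routing, defaults, plan limits) can be made concrete with a toy sketch: requests are routed to model tiers by plan, and a fixed compute budget is spent on the highest-value allocations first. Everything here, the tier names, relative costs, and routing rules, is a hypothetical illustration, not OpenAI's actual logic.

```python
# Toy model of compute-as-allocation: plan-based routing under a capacity budget.
# Tier names, costs, and routing rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Request:
    plan: str     # "free", "paid", or "enterprise"
    tokens: int

# Relative cost per token for each hypothetical model tier.
TIER_COST = {"fast-cheap": 1, "reasoning": 5, "frontier": 20}

def route(req: Request) -> str:
    """Pick a model tier by plan: the defaults are where the strategy lives."""
    if req.plan == "enterprise":
        return "frontier"      # high-quality tokens, a paid allocation
    if req.plan == "paid":
        return "reasoning"
    return "fast-cheap"        # free users get the cheap, fast default

def serve(requests, capacity):
    """Spend a fixed compute budget on the costliest (highest-value) work first."""
    served, spent = [], 0
    for req in sorted(requests, key=lambda r: TIER_COST[route(r)], reverse=True):
        cost = req.tokens * TIER_COST[route(req)]
        if spent + cost <= capacity:
            served.append((req.plan, route(req)))
            spent += cost
    return served
```

Under a tight budget this sketch serves the enterprise request and drops the free one, which is the speaker's point: when compute is scarce, the allocation policy, not the model weights, is the product.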
The free user base that is going to be happy with dumber models is going to shape public perception of what AI is capable of, like it or not. And that's the world we all live in, including OpenAI. I saw a survey, I think in the last couple of days, that said 66% of people believe that an AI's answer is either a retrieval from a database or simply reading a pre-scripted response. Two-thirds. And these are people who use AI. This is why the free user base is having challenges understanding the capacity of AI. We are still in the fundamental product dilemma of what happens when you scale the power of your product many times over in two years but your chatbot looks the same: people just cannot figure out how to use it better, and they don't have the mental models to do that. And increasingly, the behavioral evidence suggests that OpenAI is not finding it economically useful to serve that audience, the 950 million people who are on the free plan, high-grade intelligence.

The plan is clearly to use that compute in two big plays in 2026. I think it's pretty clear. Number one is the ongoing deep inference research that will be needed to push out extremely intelligent models for science and medicine, which they're aiming at really aggressively. Number two is to push out a lot of very thoughtful, high-quality inference tokens and make them available to enterprise. Both of those are paid allocations, and the science and medicine one in particular aligns strongly with the long-term research vision that OpenAI has. I know that we talk about OpenAI as a company, and it is, but it started with a nonprofit sense of mission, and I think we are incorrect if we don't believe that that DNA is still strong, especially in the research part of the company. People believe in AGI.
They believe in it as if it is something worth doing on its own for the benefit of humanity. That is the level of passion they bring, and frankly that's what they need to bring to do a task that hard. And in that world, they are going to be interested in focusing on the medicine use cases, the science use cases, the physics use cases, the things that advance humanity. And I have been in enough organizations to tell you it is not necessarily true that leadership sets the roadmap. In many cases, when you have high-powered research and engineering organizations, research and engineering shape the roadmap, because if you are working on something that your engineers and your researchers actively think is antithetical to what the business is supposed to be doing, they'll just disagree and tell you they don't want to do it, and you can't replace them. So you'll end up working on what they want to work on, which is usually the harder, more interesting problem. And I don't know, but I suspect that there is a strong democratic component where researchers are leaning into working on interesting problems at OpenAI. And those interesting problems are leaning the company toward science, toward medicine, toward heavy-inference, superintelligent use cases that go way beyond what you need in chat. And this is why I've said chat is in many ways a side play for OpenAI, even though they have the biggest distribution advantage on the board right now.

So here's where 2026 gets really interesting. OpenAI is trying to win three different games at the same time. Three different chess games, right? They're trying to win the frontier-lab chess game. They're trying to win the mass-consumer-platform chess game. And they're also trying to win the enterprise-productivity chess game.
And the required trade-offs there conflict, and they conflict around compute. This is a three-game problem set, and it predicts the organizational behavior you'd expect. You are going to have what we hear described as code-red reallocations. And I think Sam was correct to say maybe that was overblown in the news, because to me what it read like was less code-red drama and more "we need to reallocate because we have a potentially dangerous chess position on one of our boards." In this case, it was the mass-consumer board. And when you are trying to reallocate resources and compute between three different games at once, you are going to have difficulty explaining the narrative as a whole, because the narrative is three-pronged. It can feel incoherent at times, because the company is repeatedly reprioritizing to protect the core usage habit loop that they need across all three. To be a winning frontier lab, people need to use your product. To be a mass consumer platform, people need to use your product. And to be enterprise productive, people in the enterprise need to use your product, too. So, if you've felt some whiplash in the last couple of quarters and wondered what OpenAI is emphasizing from quarter to quarter, what's shifting, I think this is the underlying cause.

A company that doesn't own the distribution truly cannot treat the consumer habit as optional. Keep in mind, Google owns distribution in a way that OpenAI does not. Apple owns distribution in a way that OpenAI does not. This is exactly why OpenAI would like to get into the device game. They would like to own distribution, because without owning distribution, their current footprint advantage, the distribution they have, is earned by the consumer habit loop.
It's not taken for granted the way Tim Cook can take the iPhone for granted.

Now, add capital to the picture. A lot of leaders will handwave and say this is the AI bubble, they can just raise money. I think it's not quite that simple. I agree they can raise, but I think that increasingly in 2026 we need a case for long-term profitability, and investors are going to start to expect it. From the conversations I've seen in public spaces, interviews Sam has given, other reports we've seen on OpenAI, I think the core flywheel, the core story around profitability, is likely that enterprise inference is the long-term profit engine. It's those business-class passengers that make the airline profitable. It is business class that is going to make OpenAI profitable. Compute scarcity does remain the binding constraint for the next few years, and they are betting that the enterprise paying for heavy token usage for inference, the high-quality tokens they need to do heavy work, is going to fund, at least in part, continued frontier model training to support even higher-quality inference for enterprise. And if you combine that with one or two big raises and an IPO bridge, you can get across the capex gap and get to profitability. That's essentially the bet.

Some of the math actually pencils out there. I know it's really become fashionable to say OpenAI is going to hit a cash wall, et cetera. It's not really that clear. If you think about it, Reuters reported OpenAI is in preliminary discussions to raise up to $100 billion at a valuation of, take your pick.
I've heard anywhere from $750 billion to $830 billion, alongside rumored IPO preparation that would value the company as high as a trillion, with a possible filing in the second half of next year. This is not background noise. This is capital strategy driving product strategy for all of us at OpenAI, because compute remains the bottleneck that determines what they can ship, to whom, and when. As a reminder, they have said repeatedly that they are not shipping their best models to us, the public, or to enterprises, because they are compute constrained: their best models internally are compute intensive, and that just remains a barrier. Recently Sam Altman told Big Technology that enterprises have been clear about how many tokens they want to buy. I think I referenced that one. And OpenAI is going to, as he put it, again fail in 2026 to meet enterprise demand. That is a high-quality problem to have, because that single sentence is a bridge between consumer demand and the reality that AI is here, that people are desperate for high-quality tokens, and that when that scarcity persists, you're going to have to keep making allocation decisions in ways that shape our pricing, our defaults, the policies OpenAI has, how a billion consumers experience this technology, and also how good the underlying models served to enterprise are.

Will we live in a world where Codex is available at high power only for select engineers at most enterprises, because Codex plans are expensive? Not because OpenAI wants to constrain it, but because the compute itself is constrained. I have wondered if that is part of the reason why Codex has leaned into the coding use case. And yes, you absolutely can use Codex for non-coding use cases. I have done it.
I recommend it. I love it. It is used that way at OpenAI; they recommend that too. But if you are in that spot and you're compute constrained, if you're selling the enterprise plan and you're sitting on the sales team at OpenAI, you may have fewer options on how far those plan limits can go for non-tech use cases if you remain, as Sam says, compute constrained through 2026. What do enterprise plan limits, and what does pay-as-you-go, look like in that world? And it's not like there's a free lunch other places. Anthropic is notoriously compute constrained. Google is definitely working on getting to the point where they have enterprise-scale product offerings, but a lot of what they're bringing to the table is tied into the Google office and productivity suite, similar to Microsoft with their models tied into the Microsoft productivity suite. And Google's also tied into the Google Cloud footprint. And so each of these players has different incentives around their unit economics that are shaping where constraints appear.

I do want to take a moment to talk about usage here, because I think this story gets a little uncomfortable for people who assume that OpenAI's current consumer dominance automatically translates to a durable advantage for the company. I talked earlier in this video about the idea that yes, OpenAI has a distribution edge today with a billion people, but they don't control distribution with a device the way Google does and the way Apple does. Conversion can be structurally difficult in a world where you've already hit 5% paid at scale.
A Reuters/The Information story cited internal modeling suggesting roughly a 60-70% upside, to 8-12% paid conversion by 2030, which, to me, having worked in consumer businesses, feels really reasonable. If you get to 8-12% paid conversion, you have a phenomenal product. It is not a knock at all. And so if OpenAI is looking for new monetization streams over the top for the 92-95% of consumers who will not pay, what does that look like? And how does that shape usage behavior? So let's talk about, perhaps, shopping assistance that can open up commissions and ads. Maybe separate from the chat, so you don't contaminate the chat with ads, but you have ads other places. Maybe spaces where consumers can essentially agree to pay attention with their time and in turn get useful work back from the agent. When conversion remains hard, the way we're talking about, moving from 5% to 8.5% over five or six years with plenty of hard work and great products, and you maybe have to monetize over the top with ads, your incentives are tough, especially in a company that has a passionate mission for a larger, more intelligent future that may not fit well in the chat, right? Because the company can be simultaneously pushed to defend engagement, to experiment with monetization, and also to continue to sustain the habit loop that you need to keep enterprises knocking on the door for those tokens. It's a fragile place to be, more fragile than people might think.

And that distribution pressure I talked about is showing up in growth rates. I think I mentioned earlier, Gemini grew 30% from August to November, and ChatGPT apparently grew about 5%.
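The conversion arithmetic mentioned above pencils out as follows. This is a minimal sketch; the one-billion free base and the 5% and 8-12% rates are the figures quoted in the talk, and the script simply computes the implied relative upside in paying users:

```python
# Illustrative arithmetic on the conversion figures quoted above.
# Assumed inputs: ~1B users, 5% paid today, 8-12% paid modeled by 2030.
free_base = 1_000_000_000
current = 0.05

for target in (0.08, 0.12):
    upside = (target - current) / current   # relative growth in paying users
    print(f"{target:.0%} conversion -> +{upside:.0%} paying users, "
          f"{int(free_base * target):,} paid seats")
```

The low end of the range (8%) corresponds to the cited ~60% upside; the 12% case would imply substantially more, so the quoted 60-70% figure presumably tracks the conservative end of the modeling.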
And so Gemini's faster growth is something that is going to be more of a story if it continues into 2026 and we start to see a situation where there are two dominant players: OpenAI remains very dominant, over a billion, but perhaps Gemini starts to hit those billion-person numbers as well.

So, why does this matter for all of us heading into 2026, assuming that we already have a multi-model stack as I've been preaching? Because even in a multi-model world, even if you're in an enterprise and you've set things up so you can swap your models in and out because you don't want to be dependent on one player, the default interface layer sets the mental model for your employees, for the stack, for the people you work with. And the mental model determines whether AI is a toy, a tool, or an operating system inside your business. And so to me, I think we still come back to what I talked about at the beginning: the chat box itself is illegible. If ChatGPT's mental model for a billion people, and Gemini's to some extent too, remains either "a chatbot I ask questions" or "a nice friend who makes me images," then the product is hiding tremendous capability breadth. It's diluting the peak value people believe they can extract from it. And that does include work implications. It means that your people at work are going to underuse it, undervalue it, and ultimately not sustain usage.

This is jumping over a bit, but you'll notice that Microsoft ran into this with Copilot. Microsoft is cutting Copilot sales targets because the people who pushed the button to adopt it, CTOs largely, are seeing their people not use it. I don't believe that's only a Copilot problem.
That is a larger problem with the way we enable chatbots at the enterprise level. People's mental models are sticky, and mental models don't stop at the office door. If you have a mental model of AI from your phone, guess what? It's the same mental model you bring to AI at work. This is why the enterprise seat can be misleading. Leaders may get premium treatment on their executive seats or whatever, but adoption is driven by thousands of employees who, regardless of the seat you may buy them, are doing the default. And this is why, whether you're using Copilot or Claude Enterprise or ChatGPT, if all you're doing is having your employees try it out, they are mostly just going to rewrite their emails with it. If the default teaches chat-for-quick-answers, and that's the default in the consumer world, you get shallow usage without sustained effort. If you're able to get to the point where you're an AI-native organization, you will be able to teach teams to delegate work and come back to outcomes. That is the bridge organizations will need to cross to move from that shallow usage pattern. But to do that, you have to deeply engage your teams in ways we've never had to for traditional software, because they have this mental model that's very sticky from their consumer devices. You have to convince them: regardless of what you use at home, this is how you work with AI at work.

And this tees up the crux. OpenAI's strategy only truly works if they're able to escape this engagement trap and become an outcome engine for enterprise. And so if distribution advantage plus a compute constraint is where we are living now, we're on a jet that is constrained by seats, but it's a popular plane. It's a popular route.
The winning way forward is to own extremely high-quality outcomes for the enterprise that drive those enterprise seats like crazy. Basically, you want to be in a position where the experience in business class on this jet is so good, you're going to get everybody to sign up for your airline to fly to London. So, that explains a lot, I think, of where they're going with Codex. You need to be able to run your tasks really efficiently for long periods of time. You're going to want the ability to run your tasks in parallel. You're going to want to return to finished work with a very predictable quality bar. You're going to want to wrap it in enterprise governance. You're going to want enterprise-grade code review and QA, which Codex really leans into.

It goes even further than this. If enterprise inference is really driving the funding engine, then a first-class delegation layer, where you allocate the compute, is essentially how you convert to paid outcomes at scale. And this is where there's a weird relationship between the decision to shift consumers onto a cheaper, faster model and the decision to allocate high-quality tokens to enterprise. They might look separate, but one, they're switching compute across them, and two, the habit loops are very entangled at the enterprise level. And there are some interesting feedback consequences to choosing to give people cheaper models to play with and then expecting them to magically know what to do when enterprises have fancier models at work.

That's why all of this matters for us heading into 2026. OpenAI, of course, is not just another model vendor in the portfolio. It is the company that made the ChatGPT moment.
It is the company that is most aggressively trying to become the default layer where work begins, while simultaneously financing a massive compute buildout that, frankly, is a meaningful chunk of the broader economy if it comes into play, and that buildout ends up being downstream of whether this whole approach of buying business-class seats actually holds up. And so when we talk about the AI bubble, part of why I don't necessarily buy it is that I agree with Sam: I don't see a shortage of demand from enterprises for high-quality inference tokens. I see a shortage of human capability in using those tokens, and I think that's a massive question for 2026, and we've talked about that here, but the demand is there. And so I think that's where OpenAI has a case for a financing flywheel that ends up in positive-cash-flow territory, ends up in profitability, ends up in the IPO space.

So if you want the overall takeaway here: 2026 is the year OpenAI needs to prove it can turn compute scarcity and capital and the consumer habit piece into enterprise outcomes. And it has to do that without letting the pressure of monetization, driven by compute constraints, deform or twist the product into the incorrect shape. And so the implication for all of us, for leaders, for builders, for people who are rank and file, all of us who are employees, is that multi-model isn't really the end of the strategy; it's just your starting condition. The real question we're wrestling with as we go into 2026 is who owns delegation, who owns governance, and who owns workflow outcomes on top of those models. Basically, if we are going to have really fancy, strong inference, how do we make sure our people are there so that they can own the allocation of the model?
So they can own the workflow outcomes we're able to drive. So they can delegate effectively to models. That is the question we all have to answer in return as we assess OpenAI's strategy. I hope this conversation on OpenAI's strategy has been useful. I've written some other pieces on how we need to scale up as teams. I think they're very relevant, and I think that OpenAI's strategy will continue to pose a strategic question for us: how do we scale up our people to meet the enterprise demand for high-quality inference that is driving this entire product strategy? Best of luck.