OpenAI's 2026 Strategy: Seats and Scarcity
Key Points
- The conversation around AI should move beyond comparing devices like “who has the best product” and focus on the strategic direction OpenAI aims to take by 2026.
- OpenAI is operating under tight constraints, balancing a consumer‑focused ChatGPT that attracts billions with low‑pay conversion against a growing market demand for enterprise “delegation engines” that deliver fully autonomous, high‑quality work outputs.
- The company’s emerging strategy appears to pivot toward selling inference‑based autonomous agents that enterprises can purchase to offload tasks, signaling a shift from pure chat experiences to monetizable enterprise workloads.
- OpenAI’s resource limitation can be visualized as an airline with scarce seats: compute capacity must be allocated among low‑price consumer users, higher‑value enterprise customers demanding outcomes and governance, and investors who need cash‑flow positivity.
- Despite these pressures, OpenAI currently enjoys a distribution advantage that positions it to shape the market while it navigates the trade‑offs between scaling compute, meeting enterprise expectations, and achieving profitability.
Sections
- Beyond Devices: OpenAI’s 2026 Strategy - The briefing reframes the AI conversation from consumer‑centric device hype to a strategic analysis of OpenAI’s constrained, multi‑model platform evolution toward enterprise delegation engines and paid inference services by 2026.
- OpenAI vs Gemini: Distribution Battle - The passage explains how Google’s Gemini is rapidly expanding via Google’s platforms, compelling OpenAI to defend its user base while managing consumer‑focused cost pressures and massive enterprise token demand, underscoring compute capacity constraints that go beyond simple model quality debates.
- Research‑Driven Roadmap at OpenAI - The speaker explains that engineers and researchers control OpenAI’s direction, prioritizing ambitious science, medicine, and physics AI challenges over chat, reflecting a democratic, passion‑driven focus on humanity‑advancing work.
- OpenAI Funding, IPO, and Compute Constraints - The speaker argues that despite rumors of a cash crunch, OpenAI is planning massive fundraising and an IPO to bridge its capital‑expenditure gap, but its growth remains limited by compute bottlenecks that could prevent meeting enterprise demand by 2026.
- OpenAI Monetization Beyond Paid Users - The speaker argues that despite OpenAI’s large user base, its lack of device‑level distribution limits conversion, so future revenue must come from alternative streams like shopping assistance, ads, and commission models targeting the majority of non‑paying users.
- Beyond Enterprise Seats: AI Adoption - The speaker argues that simply providing premium AI “seats” to leaders leads to shallow, consumer‑style usage, and true enterprise value requires reshaping employees’ mental models so they treat AI as an outcome‑driven tool rather than a novelty.
- OpenAI’s 2026 Enterprise Crucial Test - The speaker argues that by 2026 OpenAI must turn its compute advantage, financing flywheel, and consumer habit into profitable enterprise outcomes without compromising the product’s core value.
Full Transcript
# OpenAI's 2026 Strategy: Seats and Scarcity

**Source:** [https://www.youtube.com/watch?v=2gt2Ugy1b6Q](https://www.youtube.com/watch?v=2gt2Ugy1b6Q)
**Duration:** 00:26:40

## Sections (timestamps)

- [00:00:00](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=0s) **Beyond Devices: OpenAI’s 2026 Strategy**
- [00:03:18](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=198s) **OpenAI vs Gemini: Distribution Battle**
- [00:07:53](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=473s) **Research‑Driven Roadmap at OpenAI**
- [00:12:41](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=761s) **OpenAI Funding, IPO, and Compute Constraints**
- [00:16:05](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=965s) **OpenAI Monetization Beyond Paid Users**
- [00:20:49](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=1249s) **Beyond Enterprise Seats: AI Adoption**
- [00:24:00](https://www.youtube.com/watch?v=2gt2Ugy1b6Q&t=1440s) **OpenAI’s 2026 Enterprise Crucial Test**

## Full Transcript
Most people are still talking about
OpenAI the way they talked about Apple
back in 2008, as if the whole story is
who has the best device. Heading into
2026, I think that's the incorrect frame
for the entire conversation around AI.
So, in this executive briefing, I want
to talk about the real question, the one
that's strategic. If you assume that you have a multi-model stack baked in, which I talk about all the time and a lot of leaders are now getting, then ask
yourself, what is OpenAI trying to
become in 2026? And what happens to
everybody else if they succeed? And what
happens to everybody else if they fail
at that plan? The cleanest description
I've seen is this: OpenAI is behaving
like a company operating under
significant constraints, not necessarily
like a company that has a single
coherent product strategy to execute
against. This has been really true in
the last couple of months. The tension
is kind of fundamental at this point
given their success. ChatGPT is being optimized as an engagement container for a billion people, only 5% of whom are willing to pay, while the market's willingness to pay is shifting toward
willingness to pay is shifting toward
delegation engines, systems that
enterprises can purchase where you hand
off work and walk away. A lot of the Codex line of direction and strategy seems to me to be headed that way, where these are designed to be fully working autonomous agents with very high quality inference. You'll pay for the
inference, but you'll get excellent
results. And if you prompt it properly,
it will give you fully finished
enterprise work product. Maybe that's
code initially. I would not be surprised
to see that branch out into other places
given recent launches in late 2025 from
OpenAI. So, as a strategic diagnosis,
this tells you what OpenAI is defending,
what it's postponing, and it implies
where the tradeoffs are going to be when
the system is pressured. So let's dig
into that a little bit more. To
understand OpenAI's 2026 strategy, it
helps to stop thinking in terms of the
product as a singular entity and start
thinking in terms of seats, because effectively I think the right analogy is that OpenAI is running an airline with scarce inventory. In this case, it's
like you have an airline that's running
a popular route from New York to London
and you just cannot get enough seats on
that airplane. In this case, the compute
is the scarcity and they have to
allocate seats on that jet between the
consumer who's not willing to pay a
whole lot by and large where defaults
have to be cheap, where they have to be
fast, and the enterprise seat where
outcomes and governance are demanded.
You have a lot of standards here. And
then there's the investor and capital
seat on the plane where the only real
question is do you have enough cash
runway and deals in place on compute to
keep the machine flying until you get to
cash flow positivity until you get to
profitability. And so the key thing I
want to call out is that OpenAI
currently has a distribution advantage.
Now, Google can push Gemini and is
pushing Gemini through search, through
Android, through Chrome, and they're
growing faster than OpenAI at this
point, but it's still true: OpenAI has the king of distribution advantages in the AI space. But to keep it that way, OpenAI is now in a position where they have to defend that territory. They have to earn and retain all of their users while growing at the margins in a market that increasingly has people who have already picked AI systems other than OpenAI. So now the pitch is not just "can I introduce you to AI, pick one up." It is "can I introduce you to AI, specifically OpenAI's AI; hey, don't use Gemini." That's a different proposition. In that world, compute is both a unit economics constraint for consumers and also a capacity constraint for enterprise,
because you have to think of it like this: the consumer cares and is price sensitive, so maybe you tip over more consumers into paid if you can serve compute, serve intelligence, more cheaply. But from an enterprise perspective, you don't necessarily want the cheap intelligence. You want to burn tokens. Sam has said in a recent interview that he has enterprises knocking on his door saying, "We can ingest a trillion of your tokens. Please give us a trillion of your tokens." There's a capacity constraint at that scale, where the question becomes how to develop the compute to serve that kind of capacity to enterprise. This is the conversation
that people are missing when they're
talking about model quality because
OpenAI's most important shipping service
is not the weights in the model. It's
actually the allocation of compute. It's
where they route queries from consumers.
It's what are your defaults on your chat
surfaces. It's what are your plan limits
and which experiences do they make easy
for you versus which stay hidden.
And you can see that fundamental compute
constraint leaking into a bunch of their
recent product choices, right? Rolling back slower reasoning by default in ChatGPT 5.2 is arguably an assessment that, for free users, the cost and latency are not worth it, and that users frankly prefer faster, dumber models that are cheaper to serve. And this just underlines the thesis that chat is largely a saturated use case. The free user base, which is going to be happy with dumber models, is going to shape public perception of what AI is capable of, like it or not. And that's
the world that we all live in, including
OpenAI. I saw a survey, I think in the last couple of days, that said 66-67% of people believe that an AI's answer is either a retrieval from a database or simply a reading of a pre-scripted response. Two-thirds. And these are people who use AI. This is why the free user base is having challenges understanding the capacity of AI. We are still in the fundamental product dilemma of what happens when you scale the power of your product many times over in two years, but your chatbot looks the same, and people just cannot figure out how to use it better because they don't have the mental models to do that. And increasingly, the behavioral evidence suggests that OpenAI is not finding it economically useful to serve that audience, the 950 million people who are on the free plan, high-grade intelligence. The plan is clearly to use
that compute in two big plays in 2026, I think it's pretty clear. Number one is the ongoing deep inference research that will be needed to push out extremely intelligent models for science and medicine, which they're aiming at really aggressively, and number two is to push out a lot of very thoughtful, high-quality inference tokens and make them available to enterprise. Both of those are paid
allocations and the science and medicine
one in particular aligns strongly with
the long-term research vision that
OpenAI has. I know that we talk about
OpenAI as a company, and it is, but it started with a nonprofit sense of mission, and I think we are incorrect if we don't believe that DNA is still strong, especially in the research part of the company. People believe in AGI.
They believe in it as if it is something
that is worth doing on its own for the
benefit of humanity. That is the level
of passion they bring. That's frankly
what they need to bring to do a task
that hard. And in that world, they are
going to be interested in focusing on
the medicine use cases, the science use
cases, the physics use cases, the things
that advance humanity. And I have been
in enough organizations to tell you it
is not necessarily true that leadership
sets the road map. In many cases, when
you have high-powered research and
engineering organizations, research and
engineering shape the roadmap because if
you are working on something that your
engineers and your researchers actively
think is antithetical to what the
business is supposed to be doing,
they'll just disagree and tell you they
don't want to do it and you can't
replace them. So, you'll end up working
on what they want to work on, which is
usually the harder, more interesting
problem. And I don't know, but I suspect
that there is a strong democratic
component where researchers are leaning
into working on interesting problems at
OpenAI. And those interesting problems
are leaning the company toward science,
toward medicine, toward heavy inference,
super intelligent use cases that go way
beyond what you need in chat. And this
is why I've said chat is in many ways a
side play for OpenAI even though they
have the biggest distribution advantage
on the board right now. So here's where
2026 gets really interesting. OpenAI
is trying to win three different games
at the same time. Three different chess
games, right? They're trying to win the
Frontier Lab chess game. They're trying
to win the mass consumer platform chess
game. And they're also trying to win the enterprise productivity chess game. And the required tradeoffs there conflict, and they conflict around compute. This is a three-game problem set, and it predicts the organizational behavior that you'd expect. You are going to have
what we hear described as code red
reallocations. And I think Sam was
correct to say maybe that was overblown
in the news because to me what it read
like is less code red drama and more we
need to reallocate because we have a
potentially dangerous chess position on
one of our boards. In this case, it was
on the mass consumer board. And when you
are trying to reallocate resources and
compute between three different games at
once, you are going to have difficulty
explaining the narrative as a whole
because the narrative is three-pronged. It can feel incoherent at times because the company is repeatedly reprioritizing to protect the core usage habit loop that they need across all three. To be a winning frontier lab, people need to use your product. To
be a mass consumer platform, people need
to use your product. And to be
enterprise productive, people in the
enterprise need to use your product,
too. So, if you've felt some whiplash in
the last couple of quarters and you've
wondered what is OpenAI emphasizing from
quarter to quarter, what's shifting? I
think this is the underlying cause. The
company that doesn't own the distribution truly cannot treat the consumer habit as optional. Keep in mind, Google owns distribution in a way that OpenAI does not. Apple owns distribution in a way that OpenAI does not. This is exactly why OpenAI would like to get into the device game. They would like to own distribution, because without owning it, their current footprint advantage, the distribution they have, is earned by the consumer habit loop. It's not taken for granted the way Tim Cook can take the iPhone for granted. Now add in capital. A lot of leaders will handwave and say this is the AI bubble; they can just raise money. I think it's not quite that simple. I agree they can raise, but I
think that increasingly in 2026, we need
a case for long-term profitability, and
investors are going to start to expect
it. From the conversations that I've
seen in the public spaces, interviews
Sam has given, other reports that we've
seen on OpenAI, I think that the core story around profitability is likely that enterprise inference is the long-term profit engine. It's those business-class passengers that make the airline
profitable. It is business class that is
going to make OpenAI profitable. Compute scarcity does remain the binding constraint for the next few years, and they are betting that the enterprise, paying for heavy token usage for inference, the high-quality tokens they need to do heavy work, is going to fund, at least in part, continued frontier model training to support even higher-quality inference for enterprise. And if you combine that with one or two big raises and an IPO bridge, you can get across the capex gap and get to profitability. That's essentially the bet. Some of the math actually pencils
bet. Some of the math actually pencils
out there. I know it's really become
fashionable to say OpenAI is going to
hit a cash wall, etc. It's not really
that clear. If you think about it, Reuters reported OpenAI is in preliminary discussions to raise up to $100 billion at a valuation of, take your pick, I've heard anywhere from $750 billion to $830 billion, alongside rumored IPO preparation that would value the company as high as a trillion, with a possible filing in the second half of next year. This is not background noise.
This is capital strategy at OpenAI driving product strategy for all of us, because compute remains the bottleneck that determines what they can ship, to whom, and when. As a reminder, they have said repeatedly that they are not shipping their best models to us, the public, or to enterprises, because they are compute constrained, their best models internally are compute intensive, and that just remains a barrier. Recently, Sam Altman told Big Technology that enterprises have been clear about how many tokens they want to buy (I think I referenced that one), and OpenAI is going to, as he put it, again fail in 2026 to meet enterprise demand. That is
a high quality problem to have because
that single sentence is a bridge between the consumer demand, the reality that AI is here, and that people are desperate for high-quality tokens. When that scarcity persists, you're going to have to keep making those allocation decisions in ways that shape pricing, defaults, the policies OpenAI has, how a billion consumers experience this technology, and also how good the underlying models served to enterprise are. Will we live in a world where Codex at high power is available only to select engineers at most enterprises, because Codex plans are expensive? Not because OpenAI wants to constrain it, but because the compute itself is constrained. I have wondered if that is part of the reason why Codex has leaned into the coding use case. And yes, you can absolutely use Codex for non-coding use cases. I have done it. I recommend it. I love it. It is used that way at OpenAI; they recommend it too. But if you are in that spot and you're compute constrained, if you're selling the enterprise plan and you're sitting on the sales team at OpenAI, you may have fewer options on how far those plan limits can go for non-tech use if you remain, as Sam says, compute constrained through 2026. What do enterprise plan limits and pay-as-you-go look like in that world? And it's not like there's
a free lunch other places. Anthropic is
notoriously compute constrained. Google is definitely working on getting to the point where they have enterprise-scale product offerings, but a lot of what they're bringing to the table is tied into the Google office and productivity suite, similar to Microsoft, whose models are tied into the Microsoft productivity suite. And Google is also tied into the Google Cloud footprint.
And so each of these players has
different incentives around their unit
economics that are shaping where
constraints appear. I do want to take a
moment to talk about usage here because
I think this story gets a little bit
uncomfortable for people who assume that
the current consumer dominance for
OpenAI automatically translates to a
durable advantage for the company. I
talked earlier in this video about the
idea that yes, OpenAI has a distribution
edge today with a billion people, but
they don't control distribution with a
device the way Google does and the way
Apple does. Conversion can be
structurally difficult
in a world where you've already hit 5%
paid at scale. A Reuters/The Information story cited internal modeling suggesting roughly a 60-70% upside, to 8-12% paid conversion by 2030, which to me, having worked in consumer businesses, feels really reasonable. If you get to 8-12% paid conversion, you have a phenomenal product. It is not a knock at all. And so if OpenAI is looking for new monetization streams over the top for the 92-95% of consumers who will not pay, what does that look like? And how
does that shape usage behavior? So let's
talk about perhaps shopping assistance
that can open up commissions and ads.
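As a quick aside, the conversion arithmetic above is easy to sanity-check. Here is a minimal back-of-envelope sketch in Python, assuming the transcript's rounded figures (roughly a billion consumer users, 5% paid today, an 8-12% range by 2030); none of these are official OpenAI numbers:

```python
# Back-of-envelope check of the paid-conversion scenario discussed above.
# All figures are rounded from the transcript, not official OpenAI numbers.
BASE_USERS = 1_000_000_000  # ~1 billion consumer users (assumed round figure)

def paying_users(rate: float, users: int = BASE_USERS) -> int:
    """Paying users implied by a given conversion rate."""
    return round(users * rate)

today = paying_users(0.05)       # 5% paid today  -> 50,000,000
low_2030 = paying_users(0.08)    # 8% by 2030     -> 80,000,000
high_2030 = paying_users(0.12)   # 12% by 2030    -> 120,000,000

print(f"{today:,} -> {low_2030:,}..{high_2030:,}")
print(f"relative lift: +{low_2030 / today - 1:.0%} to +{high_2030 / today - 1:.0%}")
```

Note that the 8% end of the range is only about a 60% relative lift over today's 5%, while 12% would be a 140% lift; either way, the overwhelming majority of users stay unpaid, which is why over-the-top monetization streams still matter anywhere in that range.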
Maybe separate from the chat, so you don't contaminate the chat with ads, but you have ads in other places. Maybe looking at spaces where consumers can essentially agree to pay attention with their time and in turn get useful work back from the agent. When
conversion remains hard, the way we're talking about moving from 5% to 8.5% over five or six years with plenty of hard work and great products, and you maybe have to monetize over the top with ads, your incentives are tough, especially in a company that has a passionate mission for a larger, more intelligent future that may not fit well in the chat. Right? Because the company can be simultaneously pushed to defend engagement, to experiment with monetization, and also to continue to sustain the habit loop that you need to keep enterprises knocking on the door for those tokens. It's a fragile place to be, more fragile than people might think. And that distribution
pressure I talked about is showing up in growth rates. I think I mentioned earlier, Gemini grew 30% from August to November, and ChatGPT apparently grew about 5%. And so Gemini's faster growth is something that is going to be more of a story if it continues into 2026 and we start to see a situation where there are two dominant players, where OpenAI remains very dominant at over a billion users, but perhaps Gemini starts to hit those billion-person numbers as well. So, why
does this matter for all of us heading
into 2026, assuming that we already have
a multi-model stack, as I've been preaching? Because even in a multi-model
world, even if you're in an enterprise
and you've set this up so you can swap
your models in and out because you don't
want to be dependent on one player, the
default interface layer sets the mental
model for your employees, for the stack,
for the people you work with. And the
mental model determines whether AI is a toy, a tool, or an operating system inside your business. And so to me, I think we still come back to what I talked about at the beginning, where the chat box itself is illegible. If ChatGPT's mental model for a billion people (and Gemini's to some extent, too) remains either "a chatbot I ask questions" or "a nice friend who makes me images," then the product is hiding tremendous capability breadth. It's diluting the
peak value people believe they can
extract from it. And that does include
work implications. And it means that
your people at work are going to
underuse it, undervalue it, and
ultimately not sustain usage. This is
jumping over a bit, but you'll notice
that Microsoft ran into this with
Copilot. Microsoft is cutting Copilot sales targets because the people who pushed the button to adopt it, CTOs largely, are seeing their people not use it. I don't believe that's only a Copilot problem. That is a larger problem with
the way we enable chat bots at the
enterprise level. People's mental models
are sticky. Mental models don't stop at
the office door. If you have a mental
model of AI from your phone, guess what?
It's the same mental model you bring to
AI at work. This is why the enterprise
seat can be misleading. Leaders may get
premium treatment on their executive
seats or whatever, but adoption is driven by thousands of employees who, regardless of the seat that you may buy them, are doing the default. And this is why, whether you're using Copilot or Claude Enterprise or ChatGPT, if all you're doing is having your employees try it out, they are mostly just going to rewrite their emails with it. If the
default teaches chat for quick answers, and if that's the default in the consumer world, you get shallow usage without sustained effort. If you're able to get to the point where you're an AI-native organization, you will be able to teach teams to delegate work and come back to outcomes. That is the bridge that organizations will need to cross to move from that shallow usage pattern.
But to do that, you have to deeply
engage your teams in ways we've never
had to do for traditional software
because they have this mental model
that's very sticky from their consumer
devices. You have to convince them: regardless of what you use at home, this is how you work with AI at work. And
this tees up the crux: OpenAI's strategy only truly works if they're able to escape this engagement trap and become an outcome engine for enterprise. And so
if distribution advantage plus a compute
constraint is where we are living now,
we're on a jet plane that is like
constrained by seats, but it's a popular
plane. It's a popular route. The winning
way forward is to own extremely high
quality outcomes for the enterprise that
drive those enterprise seats like crazy.
Basically, you want to be in a position
where your experience in business class
on this jet is so good, you're just
going to get everybody to sign up for
your airline to fly to London. So that, I think, explains a lot of where they're going with Codex. You need to be able to run your tasks really efficiently for long periods of time. You're going to want the ability to run your tasks in parallel. You're going to want to return to finished work with a very predictable quality bar. You're going to want to wrap it in enterprise governance. You're going to want enterprise-grade code review and QA, which Codex really leans into. It goes even further than
this. If enterprise inference is really
driving the funding engine, then a first-class delegation layer, where you essentially allocate the compute, is how you convert to paid outcomes at scale. And this is where there's a weird relationship between the decision to shift consumers onto a cheaper, faster model and the decision to allocate high-quality tokens to enterprise. They might look separate, but one, compute is being switched across both, and two, the habit loops are very entangled at the enterprise level. And there are some
interesting feedback consequences to
choosing to give people cheaper models
to play with and then expect them to
magically know what to do when
enterprises have fancier models at work.
That's why all of this matters for us
heading into 2026. OpenAI of course is
not just another model vendor in the
portfolio. It is the company that made
the chat GPT moment. It is the company
that is most aggressively trying to
become the default layer where work
begins while simultaneously financing a
massive compute buildout that frankly
it's it's a meaningful chunk of the
broader economy if it comes into play
and it ends up being downstream of
whether this whole approach of buying
business class seats actually holds up
and so when we talk about the AI bubble
part of why I don't necessarily buy it
is I agree with Sam I don't see a
shortage of demand from enterp
enterprises for highquality inference
tokens. I see a shortage of human
capability in using those tokens and I
think that's a massive question for 2026
and we've talked about that here, but
the demand is there. And so I think
that's where OpenAI has a case for a financing flywheel that ends up in positive cash-flow territory, ends up in profitability, ends up in the IPO space. So if you want the overall takeaway here: 2026 is the year OpenAI needs to prove it can turn compute scarcity, capital, and the consumer habit piece into enterprise outcomes.
And it has to do that without letting
the pressure of monetization driven by
compute constraints deform or twist the
product into the incorrect shape. And so
the implication for all of us, for leaders, for builders, for rank-and-file employees, is that multi-model isn't really the end of the strategy; it's just your starting condition. The real question we're wrestling with as we go into 2026 is who owns delegation, who owns governance, and who owns workflow outcomes on top of those models. Basically, if we are going to have really fancy, strong inference, how do we make sure our people are there so that they can own the allocation of the models, own the workflow outcomes we're able to drive, and delegate effectively to models? That is the question we all have to answer in return as we assess OpenAI's strategy. I
hope this conversation on OpenAI's strategy has been useful. I've written some other pieces on how we need to scale up as teams. I think they're very relevant, and I think that OpenAI's strategy will continue to pose a strategic question for us as to how we scale up our people to meet the enterprise demand for high-quality inference that is driving this entire product strategy. Best of luck.