Gemini 3: The Next AI Reset
Key Points
- Gemini 3, the first non‑OpenAI state‑of‑the‑art model, is set to trigger the biggest AI “reset” since ChatGPT’s 2022 launch, reshaping how consumers, builders, engineers, and executives operate.
- The competitive landscape now hinges on five critical axes: frontier capability, default distribution, capital & compute resources, enterprise penetration/trust, and (implicitly) ecosystem integration.
- Distribution advantage is key: Google embeds Gemini across Android (≈½ billion users), Apple relies on ChatGPT as its default AI app, Microsoft leans on Copilot in Windows/Office, while Anthropic remains a niche, non‑default option.
- Capital dynamics differ sharply: OpenAI burns billions with profitability projected around 2030, whereas Google and Apple effectively have “infinite” cash for AI, and Anthropic is rapidly scaling to a multibillion‑dollar valuation but must manage frontier‑scale model costs.
- Enterprise adoption and safety reputation are decisive: Anthropic already serves 300,000 businesses with 80% of revenue from enterprise, while OpenAI enjoys massive usage but faces heightened regulatory and trust challenges.
Sections
- Gemini 3 AI Paradigm Shift - The speaker warns that Gemini 3, a non‑OpenAI state‑of‑the‑art model now embedded in Google’s platforms, will trigger the biggest AI reset since 2022, reshaping capabilities, distribution, and strategic priorities for users, developers, and executives alike.
- Gemini 3’s Potential Market Shift - The speaker speculates that a breakthrough Gemini 3 model, if licensed to Apple and baked into both Android and iOS, could turn Google into the default AI engine for the world’s two biggest mobile platforms, but warns that Google’s slower production rollout and potential Apple‑imposed constraints could limit its distribution advantage, allowing competitors like OpenAI to retain market lead while Apple aims to leapfrog its own AI lag.
- Anthropic's Enterprise Surge vs OpenAI Challenges - The speaker outlines hardware and UX setbacks limiting a new device’s rollout, warns that OpenAI faces a make‑or‑break need for massive scale or monopoly pricing, and highlights Anthropic’s rapid enterprise revenue growth and robust Claude model ecosystem as a contrasting success.
- Strategic Guidance for Enterprise AI - The speaker advises enterprises to avoid relying on a single model, focus on integration surfaces, treat Anthropic as the safety benchmark, and monitor OpenAI’s burn rate as the market consolidates around a few major providers.
- AI Orchestration and Vendor Strategy - The speaker urges professionals to shift from prompt‑crafting to mastering AI system orchestration—balancing cost, latency, quality, and security—while adopting a multi‑vendor portfolio and making explicit decisions between using OS defaults and building custom workflows.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=F-m4AIU8blY](https://www.youtube.com/watch?v=F-m4AIU8blY)
**Duration:** 00:19:36

## Sections

- [00:00:00](https://www.youtube.com/watch?v=F-m4AIU8blY&t=0s) **Gemini 3 AI Paradigm Shift**
- [00:04:35](https://www.youtube.com/watch?v=F-m4AIU8blY&t=275s) **Gemini 3's Potential Market Shift**
- [00:07:52](https://www.youtube.com/watch?v=F-m4AIU8blY&t=472s) **Anthropic's Enterprise Surge vs OpenAI Challenges**
- [00:13:03](https://www.youtube.com/watch?v=F-m4AIU8blY&t=783s) **Strategic Guidance for Enterprise AI**
- [00:16:07](https://www.youtube.com/watch?v=F-m4AIU8blY&t=967s) **AI Orchestration and Vendor Strategy**

## Full Transcript
I believe we're headed into the most
significant reset moment for AI since
2022, when ChatGPT launched. Why is
that? Because for the first time, we are
about to see a new state-of-the-art
model that has nothing to do with
OpenAI. That's Gemini 3, and it's going to
change everything. I want to give you
the strategic implications of that shift
today. And I want to lay out for you the
implications that you are going to see
as a consumer, as an AI enthusiast, as a
builder, as an engineer, and as an
executive. I want you to think about
this as a holistic shift in the
landscape. The first thing we're going to
talk about is the axes that matter in
the board game ahead, because I think
that these are not widely understood.
Number one, frontier capability is one
of the five axes that matter: raw
reasoning, how it does on benchmarks.
We're actually pretty familiar with this
one. I'm not going to take a lot of time
on it. The key thing to remember is that
for a long time now, OpenAI and Google
and Anthropic have all been neck and neck
around the top of the leaderboards, and
Chinese open-source models have been
competing just behind. Number two is
distribution and who gets to have
default status. Who owns the default
surface for billions of users? Google
has that on Android with Gemini
integrated throughout. It's one reason
why, and many people don't know this,
there are half a billion Gemini users.
Apple has this surface, but doesn't have
an OS that actually is intelligent. And so,
ChatGPT is functioning as Apple's
default right now, because that is the
primary app that iPhone users are using.
That's more vulnerable than you would
think. Microsoft has a stranglehold via
Copilot on a lot of the Windows-driven
Office experience. And Anthropic is
almost always in apps that you choose,
not defaults, and that's going to come
back up. The third axis is capital and
compute posture. So, OpenAI has
somewhere between a $12 and $20 billion
revenue trajectory, but it's burning $8
to $9 billion a year, and is projected to
spend another $15 billion through 2029,
with profitability not expected until 2030.
Google and Apple, you can effectively
think of them as having infinite cash
for our purposes. From their core
businesses, they're spinning off so much
cash that AI is a line item. It is not
an existential bet for them. I know that
sounds crazy, but it is true for them
both. Anthropic is at $5 billion
ARR in mid-2025, and it's scaling
extremely rapidly. It will probably be
valued at over $300 billion at its next
raise. The capital question is not can
they raise; it is can they sustain
frontier-scale model burn and keep unit
economics somewhat sane for enterprise.
Axis number four, enterprise penetration
and trust. So Anthropic has over 300,000
businesses now. 80% of their revenue is
from enterprise and they have a very
strong safety first brand that's helping
them. OpenAI has massive usage and high
ARR overall, but is also kind of the poster
child for regulatory scrutiny and for the
AGI risk and doom narratives. It has
some brand issues. Google is a trusted
infrastructure vendor for cloud already,
and has a long history of killing
products and moving slowly, which is not
helping it in this situation. Apple
maxes out the consumer trust axis but
has minimal existing enterprise AI
footprint and frankly minimal existing
consumer AI footprint. The fifth axis
around which everything moves is control
of the UX layer. Whoever owns what you
talk to wins a whole lot more than
whoever owns the model. So Apple is
trying to do this with Siri, but that's
been a disaster. Amazon tried to come in
and make a play for that with their
in-home assistants. That's been a
disaster. Google is trying this with
Android voice. OpenAI has ChatGPT as
a voice, but the voice model has not
necessarily kept up with the pace of GPT-5.1
and the march of the models that are
producing written text. Anthropic is
web and API only; you can kind of
compare it to having a strong brain, but
it doesn't have a lot of voice
integration. The reset moment is that
all five axes are about to move at once
instead of one at a time which is what
we've been seeing. Let's look at where
the players sit on the board before we
contemplate how the ball is about to
spin. Google and Gemini: from laggard to
OEM intelligence. So, their position now
is at the frontier. Gemini 2.5 Pro is
Google's top model. The company calls it
the most powerful AI model today. You
can make that claim, but whatever;
it's in there. It has a lot of breadth.
It has strong distribution, Android,
Chrome, Workspace, etc. What changes
with Gemini 3? If we assume, and it is
an assumption because it's not out yet,
that Gemini 3 is a clearly accepted,
big step-change, state-of-the-art model,
clearly better than everything out there
today on all accepted benchmarks and a
bunch of new ones; and if we assume it is
integrated by default into Android and
iOS via Apple licensing, because Apple
just cut a really big deal with Google,
then we're in a different game, because
Google shifts from being the third
contender in a race to being the AI
"Intel Inside" for the world's two largest
mobile platforms at once. Now, there are
risks here. There are constraints here.
If Apple wraps Gemini in its own UX, and
Apple wraps it in its privacy guarantees,
and Apple nerfs the model, then Google
risks being seen as just an engine;
Google risks their brand. And it may not
happen very fast. Also, Google is historically
slow at sort of productionalizing these
research models. And so, it may be that
we get Gemini 3 and it is incredibly
good, but the distribution is not great
and OpenAI is able to steal a march and
keep their distribution advantage with
the consumer. So, where is Apple?
Apple's opportunity is really to move
from AI laggard to potential leapfrog.
Apple's in-house models trail behind
everybody else on metrics, obviously,
but they're finalizing a deal to license
a really big Gemini model for Siri and
use that to power an Apple intelligence
revamp. The cost is reported to be
around a billion dollars a year. The
plan would be to run a custom Gemini
based model on Apple controlled cloud,
keep the privacy narrative intact and
use it to power a huge AI reboot for the
company that would enable Apple to get
frontier intelligence without eating
the full capital expenditure of training
frontier models. And they can afford the
cash, right? So they retain the OS
integration, they retain all of the
identity and the payment rails. They
retain all of the hardware margins and
the "your data stays on your device"
story. If Gemini keeps pace or wins on
quality, and if Apple can pull that
intelligence in at a steady pace and
refresh the experience so it stays
cutting edge, Apple could leapfrog
OpenAI on consumer UX, which none of us saw
coming. Now, the risk is pretty simple.
They're dependent on Google's road map.
Any safety issues with Gemini become
Apple's risk. Enterprise AI continues to
be largely untouched by any of this.
This is a consumer and ecosystem move.
It is not a cloud play. Meanwhile, if we
go to OpenAI's side of the chessboard,
they have very strong models; GPT-5 is
extremely strong on most benchmarks. It
is the default mental model for AI for
hundreds of millions of people. They've
raised $40 billion-ish in capital. I keep
turning around and they raise more
billions, so who knows where they're at
now. Their projected 2025 revenue is
somewhere between $12 and $20 billion,
give or take. And they're burning cash
massively, and they are trying to
translate that cash into a cutting-edge
frontier model position. So, OpenAI
effectively bought Jony Ive's hardware
startup to build a screenless AI device
of some sort. That venture has
reportedly hit technical and legal
snags. A court has ordered them to pause
marketing under the IO brand. There are
fundamental UX issues around how the
device speaks and what runs on-device.
There are leaks coming out of that team
basically saying it's very difficult.
Net-net, they're not shipping hardware
yet. So now you're in a situation where
you bought this device to help you
secure your advantage through this
cash-burn period, but it's just a lot of
capital expenditure. There's high
uncertainty on the form factor. You
haven't gotten to results yet. So, to
add it up, OpenAI
is simultaneously a frontier model lab,
a consumer app, and an infrastructure
provider. They are in a go-big-or-go-die
position. They need to either get to
monopoly-level pricing power, which,
given the extreme proliferation of AI,
is unlikely, or they have to go to
extreme scale with multiple massive
distribution partners: maybe Microsoft,
maybe Apple, maybe OEMs, whatever.
That's the only way they get to scale.
Meanwhile, Anthropic is quietly
attacking the enterprise jugular. They
have scaled their revenue super fast.
They're on track for, call it, $9-ish
billion by the end of this year and $20
to $26 billion next year. Their
valuation keeps exploding, and their
base is 300,000-plus business customers,
with the largest accounts in the six
figures. So, on the product stack:
Claude models are very strong.
They're near state-of-the-art. They're
efficient. They're safe. The ecosystem
is very strong thanks to Model Context
Protocol adoption and, now, Claude
Skills. Claude Code is very popular with
developers. Almost all their revenue is
enterprise, unlike anyone else in this
position. Distribution is via platforms
enterprises already use: AWS, Google
Cloud, direct API, SaaS integrations.
They have a very strong alignment-first
narrative, which helps with enterprises
focused on safety, and they have
economics that look much more
disciplined than OpenAI's.
Anthropic is essentially saying: let
OpenAI and Google fight over consumer;
we will own the budget lines at the
Fortune 500. It might work. So here's
what changes if Gemini 3 and Apple
actually come together. We will move
from a model arms race to a
distribution duopoly on
mobile. So instead of seeing a massive
arms race across the whole spectrum, we
will suddenly be in a world where Google
powers the iOS experience by default,
Google powers the Android experience by
default and Google wins just about no
matter what. We will also move from a
world where we ask who has the best
model all the time because they're so
tightly competitive to a world where we
ask who has the best UX and who has the
best data loops, because increasingly,
as models continue to get more
effective, we're not going to be asking
ourselves, "Is the model smart enough to
do it?" We're going to be asking, "Is
the UX easy enough for me to use, and is
the data loop in place where I can get
the data I need safely?" A dumber model with better
access to data is better today than any
other model out there. It's also true
that if the UX is terrible, you don't
get the distribution. And that is
actually the primary issue right now
with Gemini: Gemini's UX is not on par
with where Claude and OpenAI are.
Gemini continues to be
treated a little bit like a research
project from Google and that is the
historic risk of Google product
thinking. I don't want to lose the
narrative here either. If Gemini 3 is
the clear state-of-the-art, Apple can
credibly say, "We pick the best model."
That makes it not a defensive choice
anymore. Google will gain leverage
versus AWS and Microsoft when they sell
cloud AI because they can point to
consumer dominance and they can point to
state-of-the-art benchmarks and say they
have the best. OpenAI will lose some of
their halo as the default synonym for AI
unless they can deliver a model that
beats it. And this delivers a reset
moment where the crown of best model and
the crown of default assistant move at
once from OpenAI/Microsoft to Google/Apple,
at least in consumer. Now, if you layer
in OpenAI's current trajectory and you
look at their cash burn, this suddenly
begins to matter strategically because
if you're not obviously winning on
distribution, spending tens of billions
of dollars to stay at the frontier is
going to be less defensible. OpenAI
has a strategic imperative to continue
to win at distribution and there is a
real chance with the Gemini 3 moment
that they will lose that edge. So what
does this look like over the next couple
years? If we fast-forward, let's say
Gemini 3 comes out, it's what we expect,
and Apple is able to move quickly. These
are assumptions. They might not come
true, but if they do, and they're
reasonable, then we have scenario A:
Gemini is just
everywhere. It's Google winning all
the way. Gemini 3 is the clear
state-of-the-art. And then whatever
comes after it, Apple's able to ship
with the real brains of Gemini inside
it. Anthropic eats enterprise share, and
OpenAI remains a strong player on web
and app, but loses the default-AI
narrative, probably loses ground on
enterprise, and probably has some issues
with fundraising down the road. Scenario B is
a device reset. If OpenAI is able to
ship a compelling AI native device, they
could win the personal AI hardware
subscription battle, harvest a ton of
cash flow, and reset the bar for who is
able to access AI as default and who is
two hops away. Because if you can ship
an AI native device, and it becomes the
place where all of your voice is
captured, now you're in a position to
control the market. Scenario C is
enterprise carve-up and consumer chaos.
The consumer space may continue to
remain noisy, with iOS having multiple
players, Android having Gemini, multiple
assistants, competing apps, etc.
Enterprise buyers may consolidate on
just Anthropic and maybe OpenAI and
maybe Google and just pick between them,
which is a little bit like what I see
today, and that might continue. If that
happens, the winner is probably
Anthropic, because they thrive in a
multi-model scenario. And the losers
would be single-model SaaS vendors who
have thin moats, because this kind of
carve-up requires intense competition,
and thin-moat SaaS vendors are
vulnerable. So what are the strategic
implications here? Number one, stop
treating your best model as your core
bet. Assume that you need to swap
models. I've said this before. I'm
serious. Number two, optimize for
surfaces. Don't just optimize for model
IQ. Ask, where does my user's intent
originate? And then ask, how can I build
an opinionated workflow against that
surface where they are: against the
voice, against the Slack, against the
email; not a generic chatbot. If Apple
and Gemini become the default assistant,
you'll want to design flows where you
have that hot handoff from Siri or from
Gemini into your app for specialized
tasks. Number three, start to treat
Anthropic as the enterprise benchmark.
Like, take it seriously. The way they
invest in safety, the way they invest in
governance, is something that I think
sets expectations for a lot of
production workloads.
Number four, keep an eye on OpenAI's
burn rate and keep an eye on the
regulation and safety narrative at
OpenAI. There are risks there. What can
you expect to see changing depending on
your role? If you're an individual, and
I'll move up to executive from there. If
you're an individual, you should expect
your day-to-day tools will become more
opinionated and more embedded. The idea
of the best model is going to matter
less to you than how you can orchestrate
your tools around your work. And the
half-life of specific tool skills is
going to keep dropping. The half-life of
judgment and the ability to design
workflows is going to be very
persistent. So optimize for how to think
with AI, not a particular model. You
want to treat your assistants like
interchangeable contractors. And you
want to become the person who can
translate what leadership wants into
what this stack of tools can do. If
you're in the builder space, if you're a
founder or a PM or a product-y person,
you cannot bet on a single model vendor
or a single assistant app as a strategy.
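A minimal sketch of what that multi-vendor hedge can look like in code: every provider sits behind one interface, so swapping vendors is a config change rather than a rewrite. All provider names, model names, and functions here are hypothetical placeholders, not any real vendor's SDK.

```python
# Sketch: "don't bet on a single model vendor" as a thin routing layer.
# Every name below (vendor_a, frontier-1, etc.) is a made-up placeholder.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelConfig:
    provider: str   # hypothetical label, e.g. "vendor_a"
    model: str      # provider-specific model identifier

# In a real system these would call each vendor's SDK; here they are stubs
# that just echo which backend handled the request.
def _call_vendor_a(model: str, prompt: str) -> str:
    return f"[vendor_a:{model}] {prompt}"

def _call_vendor_b(model: str, prompt: str) -> str:
    return f"[vendor_b:{model}] {prompt}"

PROVIDERS: Dict[str, Callable[[str, str], str]] = {
    "vendor_a": _call_vendor_a,
    "vendor_b": _call_vendor_b,
}

def complete(cfg: ModelConfig, prompt: str) -> str:
    """Route a prompt to whichever provider the config names."""
    return PROVIDERS[cfg.provider](cfg.model, prompt)

# Swapping models is now a one-line change to the config, not a refactor.
primary = ModelConfig(provider="vendor_a", model="frontier-1")
fallback = ModelConfig(provider="vendor_b", model="workhorse-2")
print(complete(primary, "Summarize this contract."))
print(complete(fallback, "Summarize this contract."))
```

The point of the design is that application code only ever sees `complete()`; which lab's model answers is a deployment decision you can revisit every quarter.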
Instead, you need to architect for model
volatility. You need to pick a surface
and obsess over owning it. Maybe it's
spreadsheets, maybe it's email, maybe
it's the terminal, maybe it's the
calendar. Just own that. And then you
need to differentiate on your workflow
and on your proprietary data. So you
have to have hard-won process knowledge
plus proprietary data or labels that are
measurably better and then deliver into
a domain specific UX that really adds
value. Finally, financial discipline
around AI usage is going to matter. You
as a builder will have to make sure that
usage can explode without token costs
exploding. Your edge is going to be
owning a specific workflow on a specific
surface with a multimodel back end and a
believable margin story. That's
basically the big story you're going to
have. If you're an engineer, the
frontier model itself is less of a moat
and how you use it is more of a moat.
The stack is just going to get more
complicated. So you need to start to
learn to specialize in orchestration, to
specialize in systems, not just in
prompting. You need to design for tool
and provider churn when you're thinking
about your career and your systems. And
you need to get really, really good at
balancing cost, at balancing latency,
and at balancing quality. Being able to
show executives, "We can cut 60% of cost
with a 5% quality loss," is going to be a
big deal. And for better or worse, security
and data boundaries are part of your
job now. You have to understand
things like tenant isolation and PII flows.
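The cost/latency/quality balancing act described above can be sketched as a toy router: send easy requests to a cheap model, reserve the expensive frontier model for hard ones, and report the savings. The prices, tier names, and the "long prompt = hard" heuristic are all invented for illustration; a real system would use a learned classifier and measured quality deltas.

```python
# Toy cost/quality router: cheap model by default, frontier model for
# "hard" requests. All prices and names are hypothetical.
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class Tier:
    name: str
    cost_per_request: float  # hypothetical flat cost, in dollars

CHEAP = Tier("small-model", 0.002)
FRONTIER = Tier("frontier-model", 0.020)

def is_hard(prompt: str) -> bool:
    # Placeholder heuristic: treat long prompts as "hard". Real systems
    # might use a classifier, retrieval hit rate, or confidence scores.
    return len(prompt.split()) > 50

def route(prompts: Iterable[str]) -> Tuple[float, float, float]:
    """Return (actual spend, all-frontier baseline, % saved)."""
    spent = baseline = 0.0
    for p in prompts:
        tier = FRONTIER if is_hard(p) else CHEAP
        spent += tier.cost_per_request
        baseline += FRONTIER.cost_per_request
    savings_pct = 100 * (1 - spent / baseline)
    return spent, baseline, savings_pct

prompts = ["short question"] * 9 + ["word " * 60]  # 9 easy, 1 hard
spent, baseline, savings = route(prompts)
print(f"spent ${spent:.3f} vs ${baseline:.3f} baseline ({savings:.0f}% saved)")
```

Even this crude version makes the executive conversation concrete: the routing policy, not the model choice, is what moves the cost line, and the quality loss is something you measure per tier rather than assume.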
You need to expect customers to ask
about this and about which providers see
their data and under what terms. If
you're an engineer, your edge is going
to be turning unstable models into
stable systems that the business can bet
on. If you're a, you know, C-suite
executive, if you own outcomes, budgets
or teams, you are responsible for
results, but you cannot pick the winner.
You cannot pick a model. You have to
adopt a portfolio vendor strategy. You
need to plan on multiple primary model
partners. You need to decide explicitly
where you lean on OS defaults versus
where you build your own. So if it's
generic productivity, maybe the OS is
okay. If it's core workflows for your
business, maybe you invest in your own
orchestration. But that line has to be
an explicit choice. You need to start to
frame AI as a workflow transformation,
not software. I keep emphasizing this,
but when you approve an AI initiative,
the question is: which workflow are we
replatforming, and what metrics will
move? If people can't answer that, if
they can't talk about the workflow, they
shouldn't be partners with you on that build.
Governance and safety are going to be a
bigger and bigger deal. We saw the
Anthropic hack this week. We need to
have inventories of where models are
used, policies on data residency, all of
the stuff that goes with risk
management. It's also going to become a
sales enabler for you, because other
companies are going to take this seriously
too. On the talent side, you're going to
need to be looking for AI native
operators, not just prompt people. So
the most valuable hires can map your P&L
and ops to AI workflows, start to
prioritize those by impact, and then
work with technical teams to get them
live. Titles are going to vary, but
that's the capability that you're going
to want to find. Last, but not least,
you have to be disciplined with your
capital allocation. Do not fund in-house
model training, please, unless you have
very clear reasons. Default to renting
the intelligence and owning the data,
the workflows, and the customers.
Ultimately, your edge is going to be
turning AI from scattered experiments
into a coherent portfolio of bets that
you can actually measure ROI against.
This is what I want to leave you with.
The strategic insight that we are on the
verge of another reset, I think is
stable even if the Gemini 3 and Apple
story only becomes partially true. If
Gemini 3 is a state-of-the-art model,
but maybe not 20% better, maybe only 10%
better, this could still happen. If
Gemini 3 is embedded into Apple, but it
takes 6 months instead of 3 months, this
could still happen. The reason why is
driven more by the strategic position of
the players on the board where Anthropic
is, where OpenAI is, where Google is,
than it is by the exact timing of
individual model releases. And that's
what I think makes this a durable
thesis. And I think it's worth paying
attention to because we have not had a
shakeup like this. We have not had a
moment when OpenAI lost the crown, and
we're about to find out what that looks
like. So, get ready. The AI race is only
heating up. I hope that this gives you a
sense of where the market and where the
AI space is going in general, and what
you can do to take advantage, too.