AI Roadmap 2026: Compliance Opportunities
Key Points
- 2026 AI planning now requires anticipating five key trend drivers, starting with tightening regulatory enforcement worldwide.
- The EU AI Act will roll out enforcement from August 2025 to full compliance by August 2027, while California and over 45 U.S. states are passing AI bills that impose transparency, safety, and hefty penalty requirements.
- Rather than just a cost, compliance creates a new market opportunity, demanding robust measurement practices such as bias and performance testing, model cards, evaluation packs, and forensic audit capabilities.
- Smaller firms must scale compliance investments appropriately, while larger organizations and emerging vendors can capitalize by offering specialized compliance infrastructure and services.
Sections
- 2026 AI Roadmap: Regulatory Driver - The speaker urges businesses to start planning for 2026 by outlining five key AI trend drivers, beginning with the first—intensifying AI regulations like the EU AI Act and California's bill—that create new compliance market opportunities.
- Economic Scrutiny Drives Outcome Pricing - Growing macro‑economic pressure and the rise of AI agents will force vendors to prove ROI through outcome‑based pricing across sales, product, and engineering, or risk losing market share by 2026.
- Incumbent vs Disruptor ROI Strategies - The speaker contrasts how established firms can capitalize on preserved data and context to ensure smoother productivity gains, whereas disruptors must prove a dramatically higher, easy‑to‑switch ROI—often amplified by AI agents tied to high‑value workflows.
- Hybrid On‑Device AI Emergence - The speaker predicts that by 2026 Nvidia‑Intel investments will enable on‑device hybrid AI architectures, prompting split cloud‑local workloads, privacy‑centric LLMs, and a surge of new hardware startups.
- Premium AI as Competitive Edge - The speaker explains that purchasing OpenAI’s premium, high‑cost tiers grants individuals and companies vastly superior, near‑instant task automation—effectively letting them “be in two places at once”—creating a nonlinear competitive advantage over those on free or lower‑priced plans, and raising issues of access control, UI, and trust.
- Premium vs Commodity AI Strategies - The speaker outlines how to align tooling and pricing with customer willingness to pay, advocating cheap, fun “nano‑banana” AI for mass‑market adoption while reserving deep, high‑cost capabilities for premium, enterprise users.
- Vertical Expertise Over Horizontal Tools - The speaker warns that broad, generic solutions are increasingly at risk, emphasizing that deep vertical knowledge—such as specialized memory requirements and regulatory navigation—is the defensible advantage, urging companies to focus on a single vertical and highlighting related talent shortages.
- Anthropic, Microsoft, Google AI Strategies - The speaker explains Anthropic's R&D‑driven high‑end compute and Claude Code tools for enterprise work, notes Microsoft's concerns about primitive‑level tooling, and warns that Google must monetize Gemini to defend its search revenue as ad spend shifts toward AI chat platforms.
Source: https://www.youtube.com/watch?v=x_fsaOnqbeo
Duration: 00:28:49
Section timestamps
- 00:00:00 2026 AI Roadmap: Regulatory Driver
- 00:04:02 Economic Scrutiny Drives Outcome Pricing
- 00:08:27 Incumbent vs Disruptor ROI Strategies
- 00:11:34 Hybrid On-Device AI Emergence
- 00:15:54 Premium AI as Competitive Edge
- 00:19:09 Premium vs Commodity AI Strategies
- 00:23:05 Vertical Expertise Over Horizontal Tools
- 00:27:06 Anthropic, Microsoft, Google AI Strategies
Full Transcript
It is time to think about 2026. Whether you're building, working on roadmaps, or in leadership: everywhere I look across the business world, people are starting their 2026 planning right now, which means we need a view down the road of the AI trends that are going to matter the most. I've put together five drivers
for AI trends that I think you need to
be aware of. I'm going to pull them out
into specific implications as we go
through this video, and at the end I'm going to look at implications that cut across all of those drivers. Once you get done with this briefing, you are going to be well equipped to drive an AI-fluent planning process, whether you're a product person building roadmaps, in the C-suite looking at how to provision your team for next year, buying from a vendor, or an entrepreneur or builder figuring out what you want to build or pivot toward next year. Let's get into
it. Driver number one, regulatory
enforcement is getting teeth and that
creates a compliance infrastructure
market that you can benefit from.
Governance is rapidly becoming a launch
requirement. The EU AI Act's enforcement started in August of 2025 for new GPAI systems; high-risk system rules kick in in August 2026, with full compliance in August 2027. So, this is phasing in over
24 months, but you need to start
thinking of it now if you have
operations in Europe. Meanwhile,
California is working on an AI bill that has implications for everyone, because it is going to cover model makers with new transparency and safety requirements. That likely happens
in January of 2026. There are 45 US states currently with 550-plus AI bills somewhere in their legislatures. Penalties are going up too, so these are not toothless bills: penalties run up to 6% of global revenue in the EU, and the average compliance failure costs $9.2 million. These are issues with
real teeth. Now, there are ways that you
can derisk, but we need to plan for them
in advance. And so, I want to move from
the sort of scary compliance picture to
what I called out at the top, which is
that compliance is really an
opportunity. First, make sure that
you're measuring correctly. Don't just
measure adoption by logins. Don't just
measure vibes. Don't just measure
customer satisfaction. You want to be
looking at bias testing. You want to be
looking at performance testing. You want
to have model cards. You want to have
eval packs that might be required to
show the edges and red lines of your
capabilities and where they break down.
You want to have some kind of forensic
or audit capability in place for
incident investigation. Now, of course,
if you are a larger company, this is a
harder and harder requirement. If you
are a smaller company, this is something
that you have to invest in in a way
that's commensurate with your scale.
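The measurement stack described above (bias and performance testing, eval packs, an audit trail) can be sketched in miniature. The metric names and thresholds below are illustrative assumptions, not anything a specific regulation prescribes:

```python
import json
import statistics
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class EvalResult:
    check: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.value <= self.threshold

def bias_gap(scores_by_group: dict) -> float:
    # Simple disparity metric: spread between best- and worst-served group.
    means = [statistics.mean(scores) for scores in scores_by_group.values()]
    return max(means) - min(means)

def run_eval_pack(scores_by_group: dict, error_rate: float):
    results = [
        EvalResult("bias_gap", bias_gap(scores_by_group), threshold=0.10),
        EvalResult("error_rate", error_rate, threshold=0.05),
    ]
    # Forensic/audit record: timestamped and machine-readable, so an
    # incident investigation can replay exactly what was measured.
    audit = json.dumps({
        "run_at": datetime.now(timezone.utc).isoformat(),
        "results": [dict(asdict(r), passed=r.passed) for r in results],
    })
    return results, audit

results, audit = run_eval_pack(
    {"group_a": [0.91, 0.89], "group_b": [0.84, 0.86]}, error_rate=0.03
)
for r in results:
    print(r.check, "PASS" if r.passed else "FAIL")
```

The point is the shape, not the specific metrics: every check has an explicit threshold, and every run leaves a record you could show an auditor.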
This is where the builder's opportunity
comes in. There are suddenly going to be
a lot of companies that have questions
about compliance and governance with
their particular model installations.
And we already see vendors out there
peddling eval packs to help these busy
companies get from I don't know what my
AI is doing to I can show and measure
it. That market is going to grow. I
don't think many of the current vendors are well positioned, because they tend to be one-size-fits-all. Looking at the wide forest of the regulatory environment, with many different mini-environments across different US states and across the EU, we will need vendors, builders, and tools that offer very fine-grained eval and compliance capabilities. This is a tremendous
opportunity. If you're a builder,
vertical expertise is going to be
defensible. You are going to be able to
build in legal, in healthcare, in
finance, and other highly regulated environments, and it will be difficult to
unseat you if you have the reputation of being a highly trusted provider that helps with compliance in a way that really lifts the load for IT departments. So that's the first trend: compliance.
Driver number two is economic scrutiny
driven by the macro environment. We are
going to see more and more of a move
toward outcome-based pricing versus seat
licenses. Especially as we move toward
an agentic workforce, you already see
model makers leaning in on AI agents
doing this, that, and the other thing.
Proactive agents, agents that take hours
or 30 minutes or 40 minutes to do their
tasks, agents that can do work end to
end. That is going to put long-term
pressure on pricing plans to focus more
on outcomes. We already see that with
the way tools like Intercom's Fin do outcome-based pricing. I'm suggesting
that we are going to see CFO pressure
for ROI proof push more vendors toward
outcome based pricing beyond just
obvious CS use cases. So we'll see it in
sales, we'll see it in product. I think
we'll start to see it in engineering as
well eventually. If you aren't able to
show your ROI telemetry, if you aren't
able to show a proof of value in the
work that you're selling from an AI
perspective, you are going to have
trouble selling in 2026 because the
competition is going to get fierce. One
of the crosscutting trends here is that
the companies that were funded in 2025
are going to be hitting the ground in
2026 and they are going to be fiercely
competing for every sales dollar. And so
you want to be in a position where you
can argue that your product is not only compliant (driver one), but also able to show agent-aligned, measurable outcomes (driver two). If
you're deploying an agent in production,
if you are selling something that has an
agentic element, if you are buying
something that has agents included, you
want to make it clear why that extra
work, that extra token burn, that extra
expense is worth it. And the only way I can think of to really make that math work, to make 1 plus 1 equal 3, is to show that the agent can solve problems that traditional LLM tooling cannot.
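To make the seat-versus-outcome contrast concrete, here is a toy comparison; every price and volume below is hypothetical:

```python
def roi(value: float, cost: float) -> float:
    """Return on investment as a fraction of cost."""
    return (value - cost) / cost

SEATS, SEAT_PRICE = 50, 100.0                   # $5,000/month, regardless of results
OUTCOME_PRICE, VALUE_PER_OUTCOME = 25.0, 40.0   # pay $25 per $40-value resolution

for label, outcomes in [("busy month", 400), ("quiet month", 50)]:
    value = outcomes * VALUE_PER_OUTCOME
    seat_roi = roi(value, SEATS * SEAT_PRICE)
    outcome_roi = roi(value, outcomes * OUTCOME_PRICE)
    print(f"{label}: seat ROI {seat_roi:+.2f}, outcome ROI {outcome_roi:+.2f}")
```

Under seat pricing the quiet month goes ROI-negative; under outcome pricing, cost scales down with delivered value, which is exactly the kind of ROI telemetry a CFO will ask to see.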
That is what we are starting to see through OpenAI's launch of, for example, GDPval, which is the new benchmark for showing that agentic LLMs can solve real
world tasks in as good a manner or
almost as good a manner as experts in
that field. We will see more progress in
that direction and as we do there will
be an expectation from model makers that
they can charge commensurate to the
value they're providing. All of this
means that multi-model resilience is
going to be essential for cost
management. This was already the case,
but it's going to be especially the case
when you have agents driving variable
price outcomes. You need to be in a
place from an architectural perspective
where you can trade models in and out
very very easily and you do not have to
go through an expensive rearchitecture.
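Architecturally, "trade models in and out easily" is just an indirection layer. A minimal sketch, where the backend names and functions are invented placeholders rather than real SDK calls:

```python
from typing import Callable, Dict

# A registry maps a logical model name to an interchangeable backend, so
# swapping vendors becomes a config change rather than a rearchitecture.
ModelFn = Callable[[str], str]
REGISTRY: Dict[str, ModelFn] = {}

def register(name: str):
    def wrap(fn: ModelFn) -> ModelFn:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("cheap-local")
def cheap_local(prompt: str) -> str:
    return f"[local draft] {prompt}"    # stand-in for an on-device model

@register("premium-cloud")
def premium_cloud(prompt: str) -> str:
    return f"[cloud answer] {prompt}"   # stand-in for a hosted frontier model

def complete(prompt: str, model: str = "cheap-local") -> str:
    return REGISTRY[model](prompt)

# Swapping models is one string; the calling code never changes.
print(complete("summarize Q3 pipeline", model="premium-cloud"))
```

With this shape, switching providers really is something you can do "at the command line": change the configured name, not the architecture.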
That's also true if you're building
products off AI to sell. You don't want
to be in a position where you're locked
in. You want to be in a position where
you can switch stuff out at the command
line very very easily. Driver number
three, you need to be aware of the
competitive velocity from AI native
entrance. So remember how I said earlier
in this briefing, the dollars that were
funded in 2025 are going to come and hit
the revenue line in 2026. They are going
to hit the competitive landscape in 2026
as well. You should expect a forest of
new AI native competitors to spring up
all around you and launch their products
not just as demos but as fully fledged products in the first half of 2026. It
will feel like you are swimming in a red
ocean of competition. If you thought you
were competitive already, it's going to
get worse. You are going to see AI-native entrants claim that they can
persist memory and context, claim that
they can deliver 10x productivity gains,
claim that they can deliver training
advantages. And you know what's
interesting? You can also claim that as
a more established incumbent in the
space if you're thoughtful and build
now. So for example, consider stateless queries: when you ask a question, you have to load all of the context into the chat yourself. That is something you're more likely to have to do with a new entrant than with an incumbent.
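A minimal sketch of that contrast; the class names and the retrieval step are invented for illustration:

```python
class StatelessAssistant:
    """New entrant: the user pastes all context into every single query."""

    def ask(self, question: str, pasted_context: str) -> str:
        return f"answer({question}) from {len(pasted_context)} chars of pasted context"

class IncumbentAssistant:
    """Incumbent: context is the data the customer already trusted you with."""

    def __init__(self, stored_docs: list):
        self.stored_docs = stored_docs

    def ask(self, question: str) -> str:
        context = " ".join(self.stored_docs)  # retrieved, not re-pasted
        return f"answer({question}) from {len(context)} chars of stored context"

incumbent = IncumbentAssistant(["2024 roadmap", "support history", "CRM notes"])
print(incumbent.ask("what did we promise this customer?"))
```

The incumbent's `ask` takes one argument because the context already lives in the product; the entrant's takes two because the user has to carry it in every time.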
And so even if new entrants start to come in with their big memory features and whatnot, you can pull a page from Notion's playbook this week and argue that your AI solution reinforces the data that people have already trusted you with, which is exactly the play Notion used. And that is part of how
they're making a context play in a
competitive market. And context drives
productivity. And so if we go back to
driver number two and the idea of ROI,
one of the things that established
incumbents can emphasize is that by
preserving data and context in an
existing stack, you can ultimately get a
smoother path to real productivity in
the workforce. Now, if you are on the
other side of the table and you are a
disruptor, you are an entrepreneur, you
are a builder, the key here is getting a disproportionate gain over the existing incumbent. You have to be able to show that you are worth switching to, and that you are hands down the easiest solution to switch to. The bar is very high and
there will be fierce competition from
other disruptors as well. Simple value
propositions that offer tangible
multiples on current ROI are going to
break through the noise and that is the
key for disruptors this coming year. Now
if you are looking ahead this is also
where the agents tie in. So agents are
going to be able to drive a lot of that
10x value for disruptors and they are
going to be tied to golden workflows
that customers will pay for. And so if
you're an incumbent, invest in agents
across the workflows that you know your
customers value the most now. Because if
you don't, you run the risk of a
disruptor coming along and offering that
10x multiple on your current ROI for
customers by building an agent for that
quote unquote golden workflow that they
care about the most. Make sure you know
what those workflows are. Make sure you
map them and make sure you think in
terms of building agentic solutions that
are relevant in your area to stay ahead.
The last thing I will call out is that
the AI native velocity gap is real and I
don't see a path to it closing.
Traditional companies are not moving as
quickly as AI native companies and it's
an inherent risk. AI native startups are
hitting a million dollars in ARR in just
6 to 12 months versus 18 to 24 for
traditional SaaS. They operate with 50 to
80% fewer employees and they're
iterating 10x faster. It's an inherent
risk factor. And the only inoculation,
the only de-risking that I see as a
possibility is that you are going to be
able to anoint, bless, build an
entrepreneurial team that can act like
an AI native startup inside your
business. Otherwise, you're just going
to move slower and you're going to have
to depend on your data and your context
and your existing distribution
relationships as a moat. Let's get to
driver number four. The maturing
technical market for AI is enabling
production systems that were not
possible even in the first half of 2025.
So there are going to be specific
reasons why this statement is even more
true in 2026 and why that should inform
your road mapping your building plans
and the biggest reason is that
investment that Nvidia made in Intel. We
are moving to a world where we will have on-device hybrid architectures. So the
current state really is cloud first for
most consumers and also for most
businesses. Businesses clearly see
liability there. I have lost track of
the number of times I have had
businesses ask me about cloud-based
risk. There is appetite and interest in
local LLMs, but there hasn't been the
compute to support it. Now there will
be. You're going to see NPU laptops and phones with low latency and high privacy, and there will be cost pressure to bring prices down. You're going to see
workloads split likely between the local
and the cloud. You'll have faster UX,
somewhat dumber models on local, and you
will have smarter inference and hardened
pipelines on cloud. One of the things
that I expect to see is a new generation
of builds and startups associated with
the availability of local chips that can
power local LLM experiences. You know how we have tools right now that add a transparent layer and let the LLM look across your system? I expect winning builds that are entirely private, on-prem, local LLMs that also look across your system and are always on. They're going
to get to mobile and they're going to
get to the laptop. Now this generates a
lot of downstream opportunities if
you're building. Think about it. You don't have to assume that cloud inference gets baked into all of your costs. You can think about ways in which, in the second half of 2026 or early 2027, you are going to be able to make experiences
available that give users choices
between local LLM compute and cloud
compute. You're also going to start to
think about more agentic experiences
cross-pollinating with that local LLM
experience. And so you're going to have
these moments where you have a
human-driven experience in your head as
you start to plan for 2026 and the
customer experience you want. But the
customer is going to have a local LLM.
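One way to picture the local/cloud split described above is a router that keeps private or latency-sensitive requests on-device and sends heavy reasoning to the cloud. The fields and threshold here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_private_data: bool
    estimated_difficulty: float  # 0.0 = trivial .. 1.0 = hard reasoning

def route(req: Request) -> str:
    # Privacy-sensitive work never leaves the device.
    if req.contains_private_data:
        return "local"
    # Easy, latency-sensitive work stays local for fast UX.
    if req.estimated_difficulty < 0.4:
        return "local"
    # Hard reasoning goes to the smarter, hardened cloud pipeline.
    return "cloud"

print(route(Request("rewrite this email", True, 0.8)))   # private, stays local
print(route(Request("quick autocomplete", False, 0.1)))  # fast path, local
print(route(Request("plan a data migration", False, 0.9)))  # cloud
```

The interesting product decisions live in how `estimated_difficulty` and the privacy flag get set, and whether the user gets to override the routing.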
And that local LLM may act as an agent
for them, may purchase for them, may
shop for them, may develop proactive
recommendations for them. We're already
seeing the tip of the iceberg on that
with products like ChatGPT's Pulse,
which just launched this week. That is
very transparently an ad surface. It is
a surface where you want to start to
drive top-of-funnel consideration. Now,
couple that with agentic capabilities,
and you're going to start to see agents
reading ads, agents looking at content,
agents proactively taking action, and
those agents may be entirely private and
on the local machine. Lots of
interesting stuff going on there. If you
are thinking about what to build in that
world, build assuming that compute in
the cloud becomes something that is
optional for early adopters in 2026 and
becomes widespread as an optional
choice in 2027. Assume
that for enterprise and for businesses,
if that's who you're selling to, they
are going to have better local compute
options for their employees where they
can buy laptops by the dozen. And they
will be early adopters if that gives
them privacy they're looking for and
they will be aggressive in looking for privacy-favored solutions. And that's
going to affect the kinds of solutions
that are built because most of the
vendors out there are presuming that you
will want to go through them and
essentially rent cloud compute from one
of the major model makers. That world
may not last. Let's go to driver number
five. There's an increasing market
segmentation between commodity and
premium AI and it's only getting
reinforced. That Pulse update I told you about is only available on Pro. If you're on Plus, it's not clear when it's going
to be available. I mean, presumably if
they're going to pay for it with ads,
they're going to run it down market
eventually, but the larger trend is
clear. It's clear for Claude. It's clear
for Google. It's clear for Perplexity.
It's clear for OpenAI. You get what you pay for: the more you pay, the more you get.
And so customers, whether they're
individuals or employees or businesses
who are willing to pay into the hundreds
of dollars a month per person, are going
to get access to premium human
augmentation. They are going to get
access to the latest models that can
complete tasks for humans with a high
degree of accuracy and long-term agency.
And the payoff for businesses and for people who can afford that is disproportionate. It's nonlinear. It's a huge deal.
If you can afford premium AI and the
premium AI by next year is doing tasks
of four or six hours for you, tasks that
take half a workday, you are going to
have a tremendous advantage over
everyone else in your business if
they're not paying for it. You are going
to be able to be in two places at once
for the first time in the working world.
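Mechanically, "being in two places at once" is delegation: independent strands of work run concurrently while you supervise. A toy sketch with threads standing in for agents (the task names and durations are made up, and the sleeps are scaled down so it runs instantly):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def agent_task(name: str, hours: float) -> str:
    # Stand-in for a long-running agent job (research, a draft deck, etc.).
    time.sleep(hours * 0.01)  # scaled: 0.01s per "hour" of work
    return f"{name}: done"

tasks = [("market research", 4), ("draft board deck", 6), ("triage inbox", 1)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: agent_task(*t), tasks))
elapsed = time.perf_counter() - start

print(results)
# Wall-clock time is roughly the longest task, not the sum of all three.
print(f"elapsed {elapsed:.2f}s vs {sum(h for _, h in tasks) * 0.01:.2f}s serial")
```

That compression of wall-clock time, from the sum of the tasks down to roughly the longest one, is the nonlinear advantage being described.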
That's massive. That is why if you are
planning, you need to be planning for
people to essentially duplicate
themselves and be able to do multiple
strands of work simultaneously. So there are role-based access control and agency questions there. There are UI questions: how do you give read access to the agent? There are questions around purchase authority, wallets, and how you check in and provide a trustworthy experience. At the same time, if you're
going down market into the commodity
space, you still have ChatGPT building for a user base that is by and large on the free plan. And so even though 1%, 2%, maybe 5% at most of the market is super
premium and pays hundreds of dollars a
month, the vast majority, well over 95%
is paying 20 bucks a month or nothing.
And in that world, what do you do as a
business to figure out your positioning?
Do you want your employees to be
augmented at the 20 buck a month level
and you view AI as a tool add-on, or do
you want them to be able to effectively
duplicate themselves and do two or three
times the work, but it's going to cost
you more? The answer is not quite as cut
and dried as it seems. If you're doing
IT department planning, you have to ask yourself: if I get the budget, is everyone at my company a champion, able to adopt and use the superpowered AI the way it
should be? If the new AI that we're
getting in 2026, a ChatGPT 6 or ChatGPT 7, is like a Ferrari, can everyone at the company drive the Ferrari? Or
really, even if you gave them the keys
to the Ferrari, are they really going to
be able to use it and take it on the
corners the way it should be taken?
Probably not. And this is why I keep emphasizing the importance of this 12- or 18-month window in individual
levelups of AI capability. This is a
catch up, take the elevator to the
penthouse moment for individuals and
companies. It won't last forever. And
the longer we have market segmentation,
the more the bowling lanes in that
market are going to harden. It's going to be harder and harder to jump lanes
both as a person and as a company. And
so when you are planning, you have to be
thinking who am I marketing to? What
kind of tooling do I need internally?
And what is the tool set that is going
to enable me to do what I need to do and
what is the willingness to pay of my customer, and what does that imply about what I need to deliver as far as premium or commoditized AI? Because you
may be in a situation where your market
is kind of that 95%. And as much as you
want a roadmap for the cool stuff, maybe
only the internal champions at your
company get the fancy AI and you know
your market's not going to pay for it.
And so you're giving them nano banana,
right? Nano banana is for everybody.
That is why Google Gemini is pushing it
so hard. It is an adoption play to drive
familiarity with AI. That's why whenever
I open TikTok now, I see a nano banana
ad and it's always personal. It's always
about change your hair color, change
your style, change your vibe, take a
selfie, make it look different, and now
you can come up with a new look for the
winter. That is the kind of thing that
you have to commit to serving at scale
if you want to get into the commodity
space. It has to be fun. It has to be
personal. It has to be quality enough
that it delivers those delightful
moments that are beginning to mark AI
for the average consumer. But it doesn't
need to have the ability to do thinking
for 20 minutes and come back with an
amazing PowerPoint deck. That's the
premium side of things. Know your lane
and know where to build. Okay, we've
gone through the five key drivers. We've
gone through the market segmentation
piece. We've gone through technical
maturation and sort of how the
availability of chipsets is going to
change things in 2026. We've gone
through competitive velocity from AI-native entrants and what that means for
everybody. We've gone through how we are
dealing with ROI demands in uncertain
economic times and we've gone through
regulatory enforcement and the
regulatory market. Those are the five
big drivers. I want to close by talking
about a number of critical crosscutting
patterns that aren't drivers themselves
but that you should be thinking about as
you are road mapping, building,
planning, budgeting for 2026. Number
one, every driver above is going to
manifest differently by vertical. I
mentioned the specific vertical
implications in the first driver around
regulatory for healthcare, for legal,
for finance. That extends, right? If
you're in manufacturing compliance, it
looks different. If you're in robotics,
it looks different. If you are in B2B
SAS, it looks different. If you are in
the consumer space, it looks different.
You need to do the work to extend the
current thinking around drivers to your
space. And I'm going to prepare a prompt
for you that actually helps you do that,
helps you start to unpack that a little
bit, and I'll include it in this post so
that you can have some help as you start
to think through it. I'm a big fan of
prompts enabling us to kind of take
these articles in new directions. The
second thing I want to call out is that
memory is going to look different. This
is a crosscutting theme that gets at the
chipset driver. It gets at the
regulatory driver. What is allowed to be
remembered? Who do we record it for?
What are the rules around minors? Where
do we have chipsets that give us more
memory? One of the things that people
don't realize is that memory is scaling
more slowly than inference. And so
inference compute is really
debottlenecked right now comparatively,
but memory is not. Memory is not growing
as fast. We need better memory
solutions. Businesses that can deliver
seamless memory solutions in their area
are going to have an advantage because
they will close a habit loop much more
effectively. That's one of the things I
called out in my AI native velocity
driver that incumbents have potentially
an advantage on. If you can turn the data you have into memory, suddenly you have some stickiness there in the AI era. But
memory looks different. Memory for
clinics looks different than memory for
legal. You have different requirements.
Memory for maintenance looks different
from memory for a car rental shop.
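Those vertical differences could be expressed as per-domain memory policies. The rules below are invented examples for illustration, not legal guidance for any actual vertical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MemoryPolicy:
    retention_days: int        # how long remembered context may persist
    store_identifiers: bool    # may we keep names/IDs at all?
    requires_audit_log: bool   # must every recall be logged?

# Illustrative, invented defaults; real rules come from each vertical's regs.
POLICIES = {
    "clinic":     MemoryPolicy(retention_days=30,   store_identifiers=False, requires_audit_log=True),
    "legal":      MemoryPolicy(retention_days=3650, store_identifiers=True,  requires_audit_log=True),
    "car_rental": MemoryPolicy(retention_days=90,   store_identifiers=True,  requires_audit_log=False),
}

def may_remember(vertical: str, age_days: int, has_identifiers: bool) -> bool:
    p = POLICIES[vertical]
    return age_days <= p.retention_days and (p.store_identifiers or not has_identifiers)

print(may_remember("clinic", age_days=10, has_identifiers=True))   # identifiers barred
print(may_remember("legal", age_days=400, has_identifiers=True))   # long retention OK
```

A memory product that treats these policies as first-class configuration, rather than hard-coding one vertical's assumptions, is the kind of fine-grained build being argued for here.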
Everybody has different memory needs and
the startups that are building in this
space are by and large not recognizing
that yet. And so there's a lot of
opportunity to build around the memory
space whether you're an incumbent or a
startup. This gets at another insight I want to call out. We've been talking a lot about verticals in this crosscutting theme. Generic horizontal tools, which many vendors are peddling right now, are increasingly going to be at risk in a world where vertical expertise is becoming more defensible. One of the big
crosscutting themes here is that all of
these drivers reinforce the value of
vertical expertise. You need vertical
expertise to navigate regulatory. You
need vertical expertise to navigate what
customers need around local chips on
their machines for their local problems.
Expertise is going to matter more.
Expertise is defensible, and expertise is vertically tied. Which means when you are
thinking about what you build this
coming year, think about your vertical.
Take it seriously. This may not be the
year to try and jump two or three
verticals at once. I know people get
bold about that.
This may be the year to own the vertical
you're in. I will call out one more
piece here. There are talent
implications for the kind of world we're
talking about. And if you're building
for the team, a lot of people are asking
me where are the engineers? Where are
the architects? What does that look
like? I also want to ask you about the
next level of team. As you're thinking
about your budgeting, don't just think
about architects. Don't just think about
engineers. Think about implementation.
What does it take to invest in training?
What does it take to invest in
implementation specialists? What does it
take to invest in your culture so that
you have the right AI champions on the
team? Not necessarily just engineers and
architects who can drive the business
forward. This is one of the biggest
misses most businesses had in 2025. It's
one of the reasons why that infamous MIT
study showed up with a 95% fail rate.
There are talent issues at every single
business I consult at. Take the talent
more seriously next year. That is also a
crosscutting theme. If you get the
talent right, you can negotiate these
drivers more fluently because you have
talent that's running faster. You have
talent that knows how to replicate
itself with AI. So even if you're only
getting 10, 15, 20% of your team
superpowered on AI and you think that's
not that much, I've got news for you.
That's a lot more than what most people
have. Most people are struggling to get
1 or 2% of their team superpowered
on AI right now. So if you can get to 10
or 20, you're way ahead. Now, AI-native businesses have the luxury of having everybody be AI native and superpowered, but right now they're tiny, and that's the risk. That's always the risk with challengers, and that's always the advantage that incumbents have: their size, scale, and access to capital. The last thing I want to call out is that platforms are one of the biggest
question marks as you think through
these drivers strategically. Most of the things I am suggesting here argue against a strong year for platforms. The only thing running in favor
of a platform play from a road mapping
build perspective is the persistence of
brand. Chat GPT is a brand. Claude is a
brand. Gemini is a brand. These major
model makers are persisting as brands
around core work primitives. If you are
worried, and this is always something
that comes up, so let me close with
this. If you are worried about where you
are going to build next year because you
don't want to get run over by a new
release by Sam Altman, think about the incentive sets that each of these major model makers has. OpenAI: it's a consumer world. They're clearly moving
into ads. They are building for your
attention. They are building for habit
loops. They want to keep you in ChatGPT
similar to the way that Mark Zuckerberg
wants to keep you in the Meta ecosystem.
They also have a strong R&D arm. The
enterprise offerings they have will
bring more of that R&D capability, but
fundamentally they're in the business of
providing very high-end compute to
companies and very engaging social
habits to consumers. Anthropic is for work. Anthropic is interested in providing high-end tools to people who choose to engage with them. That's the positioning for Claude Code. That's why they've invested so heavily in an AI model that can build Excel and PowerPoint files well. And they are working specifically on connectors through MCP and work primitives, docs and sheets, making sure that the basics of work are things that you think about with Claude.
If you're Microsoft, you're worried. If
you're building tools that are beyond
primitives, it's somewhat less
concerning. Finally, Google. Google
needs to maintain search dominance next
year in a world where ad spend is going
to open up for ChatGPT. Those ad budgets aren't going to come from nowhere. Everyone's going to be looking
at their search revenue and asking, do
we spend this much on search or do we
switch this to ChatGPT? What do we do?
That's going to be the question for
marketers next year. And ChatGPT is going to be aggressively positioning chat as the super-premium, high-quality
intent option. Google needs an answer.
That is where Gemini is going. Gemini
will get into the ad space somewhere and
they will start to monetize because they
need it to defend Google's core search
revenue dominance. There you go. That's
your sneak peek at where the big players
are going. I hope you don't get squished
by them. Best of luck with your 2026
planning. I hope this overview of the
year ahead has helped you to get some
clarity. That's what we're after here: clarity. And of course, use those prompts to dig deeper into your own situation. Cheers.