# AI Startup vs Enterprise: Core Learnings

**Source:** [https://www.youtube.com/watch?v=ax8Oh5FCLh8](https://www.youtube.com/watch?v=ax8Oh5FCLh8)
**Duration:** 00:25:28

## Summary

- Startups and large enterprises operate under fundamentally different constraints, so the “right” AI strategy for each varies dramatically.
- Agile “vibe‑coding” and rapid, even risky, feature releases are viable for startups because they can personally manage a small user base, whereas enterprises must prioritize compliance, data security, and stability to avoid lawsuits and contract losses.
- High AI‑credit spending in a startup is comparable to hiring multiple full‑time developers, enabling fast experimentation toward product‑market fit, while enterprises face lengthy approval processes (e.g., for tools like GitHub Copilot) and strict governance.
- The key to success is playing the game your customers expect: startup founders may need to adopt enterprise‑level rigor when targeting large B2B clients, and enterprises should recognize the value of faster, more experimental approaches where appropriate.
- Understanding these divergent “rules of the sport” helps both sides set realistic expectations and informs where AI adoption will likely head throughout 2025.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=0s) **AI Startups vs Enterprises: Core Learnings** - The speaker contrasts the distinct speeds, tool stacks, and constraints of AI‑native startups and large AI enterprises, outlines six key insights drawn from both, and seeks to bridge the misconceptions separating founders and CEOs.
- [00:03:05](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=185s) **Startup DNA vs Enterprise Demands** - The speaker explains how founders must pivot their product focus and speed based on target markets—adopting compliance‑heavy practices for B2B clients while staying scrappy for AI‑savvy startups and consumers—illustrated by a Google ad subtly mocking Apple’s unfulfilled AI promises.
- [00:06:10](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=370s) **AI‑Generated Code in Enterprises** - The speaker explains that while AI can quickly create small, well‑defined software components, large companies often lack the time to experiment with tools like lovable.dev, leading to a speed gap with startups and raising concerns about accumulating engineering debt and reducing code understandability at scale.
- [00:09:16](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=556s) **AI’s Impact on Technical Debt** - The speaker explains that early‑stage startups deprioritize technical debt until scale or compliance demands it, but rapid advances in AI‑driven code refactoring will soon make fixing large codebases cheaper—though this won’t replace the broader, strategic work of senior engineers.
- [00:12:35](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=755s) **Driving Enterprise AI Through Pain** - The speaker argues that large companies lack the immediate pain that drives AI adoption in startups, so leaders must highlight the looming existential risk of ignoring AI to create the urgency needed for meaningful implementation.
- [00:16:06](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=966s) **AI Workflow Leverage and Resistance** - The passage explains that AI‑driven workflows have exponentially higher impact and require careful coordination at large enterprises versus startups, while previous disappointing AI experiences foster resistance, giving AI‑native newcomers a distinct career advantage.
- [00:19:30](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=1170s) **AI Accelerates Startup Disruption** - The speaker outlines six AI adoption principles and warns that AI‑native startups achieve dramatically higher development velocity than traditional enterprises, creating a growing disruption risk for slower organizations.
- [00:22:51](https://www.youtube.com/watch?v=ax8Oh5FCLh8&t=1371s) **Bifurcated Future of AI Coding** - The speaker argues that AI‑assisted code will dominate large enterprises while small startups will still hand‑craft most software, cautions against taking AI leaders’ timelines at face value, and urges companies to build realistic, customer‑focused solutions in this rapidly shifting, high‑stakes landscape.

## Full Transcript
You know, AI native startups and AI
enterprises have been moving at
different speeds, getting different
things done, using different tool
stacks. And for the most part, when I
look at the discourse online, when I
talk to founders in the small startup
category, when I talk to CEOs of large
companies, what I hear is two entirely
different worlds and frankly a lot of
disappointment in what's framed as the
other side. I would like to instead take
the time to look at six different core
learnings that have emerged from
studying both startups and enterprises
as they grapple with the transition to
AI at the same time but under very
different conditions.
So this should apply to you if you are
working in a large company environment
but equally if you are working in a
small company or even if you're just a
solo founder. If you've ever wondered
what is the difference between these two
environments besides size and how is
that impacting AI, besides the really
obvious one that big companies often go
slower and get mocked for it, and they
shouldn't be. Well, I'm here to
disentangle all of that, give you the
learnings and then have some reflections
on where this is going in the rest of
2025. So, let's get into those core
learnings. Number one: different
constraints will create different
correct answers. Startups and
enterprises just aren't playing the same
game. They're playing different sports
with different rules effectively.
When a startup founder ships a broken
feature to 10 customers, well, you can
personally call every one of them.
And so vibe coding is very viable. When
a PM ships to 10,000 healthcare company
customers, one data leak could trigger
lawsuits, lose the biggest contracts
that the business has, and eventually
destroy the company. The standards are
very different. The solo founder can
rebuild the entire codebase over a
weekend. The PM must maintain systems
with database columns that date back to
migrations from, oh, I don't know,
2008. The startup founder who burns
thousands or even tens of thousands of
dollars a month in AI credits isn't
actually being reckless. Even if a lot
of those credits get used and reused
across the same part of the codebase,
effectively he or she is buying the
equivalent of three or four developers
working 24/7
trying to solve the really difficult
problem of product market fit. The
enterprise on the other hand is going to
take several months to approve GitHub
Copilot, but they have SOX
compliance. They have enterprise
customers doing security audits. They
have a board that demands very
predictable quarterly results.
They don't have the same game or the
same rules. And my reflection for you on
this is that you play the game that your
customer wants you to play.
And this is actually more profound than
you would think because many times
startup founders end up having to act
more big company because they are in the
B2B space and they're trying to serve
larger and larger customers. And so they
end up getting themselves into this
world where they have to do compliance
audits and enterprise and this and that.
And that doesn't mean they lose their
sort of fast young scrappy DNA, but they
do have to start to shade more toward
the enterprise space faster. On the
other hand, if you are serving hungry
startups and that is your primary
customer base, well, yeah, you want to
be as hungry as you can. If you are
serving AI savvy builders, AI savvy
consumers, you want to be as quick as
you can and you want to actually make
sure that you are iterating faster than
anybody else. And finally, if you're
serving consumers as a whole, which is a
different demographic,
you want to be in a position where you
can make the change translatable and
manageable to them. I saw a great ad
from Google that really illustrates
this. I think the ad without generating
any liability for Google took a poke at
Apple and basically said without using
the word Apple that another big phone
company had made a lot of promises about
AI and hadn't kept them, which everyone
knows refers to Apple making a big
deal about rolling out AI and failing to
deliver it, and then suggested maybe the
step forward is to actually work with a
phone that can keep its promises. Right?
You need to make these big AI changes
feel natural and feel easy for consumers
and that leads to an interesting hybrid.
So the first sort of demographic change
is like if you're serving business
customers, you're going to be pushed
into a more business quarterly cadence
kind of workflow just naturally and you
have to fight that to go faster. On the
other hand, if you're serving super AI
conscious consumers or builders or solo
founders or tiny startups, you're going
to be pushed to go faster yourself. And
then in the third category, if you're
serving consumers, you are going to be
pushed
to make sure that you can translate the
change and make it easy to understand.
And that tends to lead you to a slightly
more unpredictable cadence. You
basically have to wait till you have the
thing that you know the consumer is
going to want and then double down
there. Those are three different worlds.
Essentially, what I'm suggesting to you
is that instead of assuming your given
size dictates the game and just letting
it be, think about it this way: your
customer would like you to work at some
ideal pace; what is that pace? And maybe see if you
can shift the constraint set in that
direction. Number two, AI changes what
building software is. We talk about it a
lot, but I want to simplify it a little
bit. AI changes building software to a
conversation.
I think Dan Shipper at Every is a good
example. He can have a conversation with
Claude Code that eventually becomes a
feature, and not just him: other people
in his company who are non-engineers can
do that too.
If you can get to a point where you can
describe what you want, increasingly, if
it is a fairly buildable, small-scale
piece of software, the AI can do it for
you. Maybe it's not Claude Code, maybe
it's Codex, but it can still do it for
you. Maybe it's lovable.dev, right? And
vibe coding sounds too good to be true
until you see it work. And this is
actually really important, so listen
up if you work in a bigger company: I
have had moments where I've talked with
directors and above at larger companies
and they just don't have time. They all
have meetings. They don't have time for
practicing vibe coding, for seeing how
lovable works, etc. Pull up lovable.dev
and show them how easy it is to build
something. I have seen jaws drop in a
sense. Part of the gap in AI is just
knowing what you can do and having the
time to try it. And so part of why
there's a speed gap that people assume
exists between startups and enterprises
is that the startups have nothing to
lose by dramatically shifting the way
they build software. And the enterprise
has a lot to lose. The enterprise has
engineering debt, and it's non-trivial. I
don't want to set this up as an
obvious choice to go with the startup
approach. As an example, if you are
shipping AI code changes, shipping
AI-generated code lines
over time, you run the risk of your
codebase at scale, at enterprise scale
becoming less understandable to you.
That in turn generates a tremendous
amount of cognitive debt, particularly
for senior engineers. And so it's not as
simple as saying, well, you have to get
with the program and just ship more AI
code and go faster. You actually have to
think about how organizational size
dictates different approaches to AI. I
would argue in this case AI is a
conversational process regardless, but
it's easier to start the conversation
for startups and for larger companies,
it's more difficult to start that
conversation. You may have elements that
feel conversational that only happen
after you've done all of the
requirements and compliance pieces. And
that's what I see anecdotally at larger
companies. They do all of those initial
steps first, and then the creation of
the software ends up being more
conversational wherever it can be,
right? If they're using Claude, if
they're using Cursor, or other tools.
Principle number three, technical debt
is increasingly optional.
So I think there's a number of dynamics
here. Another way I've talked about it
is that the cost of technical debt is
going negative.
Code quality at AI native startups
ranges widely. If you have a very strong
engineering founder on the team, it's
often higher, but it doesn't have to be
all that high to ship these days. There
are startups where no one knows how to
code and they're doing over a million
dollars in ARR. And I want to suggest
that what that implies is that you could
always buy your way out of technical
debt with time and with good engineers.
And increasingly, time is not the
factor, and neither is scale. You just
need enough scale to be able to afford
a production engineer to refactor
something if you need it. But that is
often so far down
the road that you kind of don't care for
a while. If you can hit a million
dollars off of vibe coding, and the
first startup that has hit a million
dollars off of Lovable literally already
exists, you don't really worry about
technical debt. You don't worry about
whether you fully understand the
codebase. You just worry about whether
you're shipping for the customer. Now,
if you have to pass compliance as an
enterprise company, you do worry about
the codebase. It's not optional to care.
Technical debt can become a legal
liability.
And so
I think that this is one of those areas
where we are living in a movable
constraint set and we need to understand
the next six or eight months. Code is
one of the spaces in AI where tech is
invested so heavily in leveling up the
ability of AI to agentically fix code
that we are going to get massive
breakthroughs in the next few months.
That suggests to me that the problem of
refactoring even in large code bases is
going to get progressively easier.
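The agentic code-fixing pattern the speaker is pointing at can be sketched as a simple propose-test-retry loop. This is a hedged illustration only: `propose_patch` is a toy stand-in for a real coding agent, and the single "no tabs" check stands in for a real test suite.

```python
# A minimal sketch of the agentic "propose, test, retry" loop that makes
# large-scale refactoring cheaper. The propose_patch callable is a
# hypothetical stand-in for a coding agent, not any vendor's actual API.

def agentic_refactor(code, checks, propose_patch, max_iters=5):
    """Repeatedly ask the agent for a patch until all checks pass."""
    for _ in range(max_iters):
        failures = [name for name, check in checks if not check(code)]
        if not failures:
            return code  # all checks pass; the refactor is done
        code = propose_patch(code, failures)
    raise RuntimeError(f"still failing after {max_iters} iterations: {failures}")

# Toy stand-ins: the "refactor" is just replacing tabs with spaces.
checks = [("no_tabs", lambda src: "\t" not in src)]

def propose_patch(src, failures):
    # A real agent would read the failing checks and emit a diff;
    # here we hard-code the fix for the single toy check.
    return src.replace("\t", "    ")

legacy = "def f():\n\treturn 1\n"
print(agentic_refactor(legacy, checks, propose_patch))
```

The loop structure, not the toy fix, is the point: as models get better at the `propose_patch` step, the same harness scales from style cleanups to cross-language migrations.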
I don't think that's the same thing as
saying that AI will be able to do senior
engineering work because senior
engineering work is a lot more than
refactoring code bases. But it is
something that we should keep in mind
because what it suggests is that the
cost of technical debt has fallen. Yes,
it is probably negative for startups,
but it is also falling for larger
companies very, very quickly.
This is the source of some of those big
headlines you see where Amazon or others
say publicly, we saved so many thousand
man years by using AI to transition our
code from this language to that
language. And they're doing it already,
right? They're going to do it even more
as the ability to handle context windows
gets better, as these AI agents become
more proactive and able to work around
problems, etc. This is a lot of the
reason why GPT-5 is agentic, because
it's designed to solve problems like
this. So what's interesting is technical
debt is becoming optional.
It's becoming something you don't have
to care about at a startup level. But I
think the thing that I take away from
that is that in a lot of ways startups
were always advised not to care about
tech debt in the first place. And so
what's changed is that they're still
told not to care about it, but now they
don't really have to pay to clean up the
mess either. Like, they don't have this
huge bill that comes due at Series A or
Series B to clean up all of their
architecture in the same way, because
it's so much cheaper and faster to
address it. Enterprises still have the
compliance burden, but the cost of
addressing it is rapidly improving as
well. And so this is one to watch. This
is one where I think we'd have a
different conversation in 6 months.
Number four, success starts with pain.
Every AI adoption story that you
see starts with a team or a founder
drowning in work that embraces AI as a
band-aid. That's a very consistent
pattern. It extends to the consumer,
too. People don't embrace stuff unless
it immediately solves a real problem.
And so, when you think about or want to
criticize how enterprises are adopting
AI,
one of the things that you need to think
about is how real is the pain at the
level of the team in the enterprise.
Does the team feel the pain of not
adopting AI?
Because startups feel that pain right on
the revenue line. If they're not moving
fast enough, if they're not shipping
fast enough, somebody doesn't get paid
that month. Whereas for larger
companies, where's the consequence of
half-heartedly adopting ChatGPT and
only using it to write your emails? It's
not a lot in most enterprises.
So, success has to start with pain. And
one of the things that I think that
we've done a poor job of in midsize and
above companies is really making it
clear that the pain may not be acute,
but it's real and is going to hit the
company in an existential way in
the next few years if they don't address
it. I do not believe in a world where an
enterprise can whistle by the graveyard
and skip AI and get away with it. Almost
every company is going to have to
confront this and the longer they wait
to confront it directly,
the worse off it will be for them. And
so this is a case where I think startups
should be kinder to enterprises in
mid-market because they should recognize
that getting an entire company to feel
pain is a very difficult art. It takes
real leadership. Steve Jobs had it. Not
everybody has that leadership.
If you are a leader in a midsize or
larger company, part of your job is to
get your team to feel pain until they
start to adopt AI. Number five, the
workflow matters more than the tools. I
have sat there in conference rooms and
talked to larger companies and they just
sort of make long faces and they're sad.
They say, "Well, we can't get
ChatGPT. We can't get Claude Code.
All we can do is use
Copilot." The workflow matters more than
the tools. The workflow matters more
than the tools. I have written an entire
guide for how to use Copilot. It exists.
It's out there. You can get it. It's
right on Substack. But it's not about my
guide. It's about the idea that you can
integrate any AI tool into a good
workflow. And a tool you actually use
will beat the smartest tool out there
that people don't integrate into their
workflows. Part of how startups have an
advantage here is because one brain can
hold the whole workflow. One brain. One
brain can hold the whole workflow, and
in a larger company that's not true.
You have to have a lot of
people working together to shift a
workflow and coordination problems take
time to solve. This is actually one of
the reasons why I think that the excited
guesses that AI would take a bunch of
jobs in the enterprise. If you're trying
to design workflows across multiple
teams, the sheer fragmented knowledge is
actually really, really hard to
overcome. It's actually not something AI
does a good job of. It's something
humans do a good job of. Humans
need to come together,
do AI sprints, do something that helps
you to figure out as a larger team how
to actually build an AI first workflow
that doesn't just stop at the level of
the individual or stop at the level of
the small team. And one of the things
that I noticed about this is that
startups assume, again, that their speed
is their advantage, which I suppose is
true, and that the other guys are bad at
what they're doing, which is not true.
Workflow has 100 to 1,000 times
more leverage at a larger company. I
could run workflows when I was at Amazon
that would shift the way thousands and
hundreds of thousands of titles worked.
Well, you can't do that at a startup
because you don't have the product for
it.
Workflow matters more than tools and
workflows are higher stakes and higher
leverage at bigger companies. And it is
more important to get them right and it
takes more people to get them right. And
so you have to work together to build AI
workflows at big companies in a way you
don't at startups. Number six,
experience tends to create resistance.
This is counterintuitive. We like to
think if only they could see AI
everything would be good.
My observation is that is not always
true. People will try AI. They'll have
one or two disappointing experiences. If
they have a prior suspicion of AI, they
will just use that suspicion and those
one or two experiences as a reason to
say no, I'm done. Especially if they're
worried about their role or they're
worried about having to change what they
know. I know developers who are walking
out of tech rather than deal with the AI
change. They're like, "I want to go do
something else," right? Like they can
take up a hobby, something else. This is
an area where if you are getting started
you can have an advantage in your career
by being AI native from the beginning
because there are enough senior people
who are not ready for this kind of
change and are voting with their feet to
say, "I'm done, I don't have to work on
this right now, I'm leaving." Well, as a
junior person you can step in. You can
be the one that shows that experience
doesn't have to create resistance. Again,
startups' advantage here comes from
having fewer people. You can handpick
your people. Your people all can be
people for whom experience creates
optimism and forward motion. People for
whom experience is a good thing. People
who are AI native. At a larger company,
you have to work with the people you
have, many of whom have domain knowledge
that is highly specialized that you
can't move out. And so the problem then
becomes how do we make sure that the
resistance tendency is downleveled and
minimized as much as possible across
this large, multi-thousand-person company.
That is a different kind of hiring
reality. You're looking at incentives.
You're looking at team leadership.
You're looking at how you can make
training matter to different teams with
widely different needs. You're looking
at honest conversations about what it
means to have career growth at a larger
company with AI. And I'm going to
remind you again,
the senior developers and the principal
developers at large companies on the
whole are higher quality engineers than
most of the engineers at smaller
startups that I've worked with. I've
worked with both. There's a reason I go
to startups: I've loved startups because
of how fast they've moved. But
the kinds of quality engineers that you
see who are lifers at these larger
companies are absolutely extraordinary,
and it is very rare to see them in the
startup world
unless they are founders. And so as much
as startups would like to say, well, we
can just pick good people and we're just
amazing. Like,
you know what? I doubt that most of your
engineers operate at the level of the
senior principal engineers at Meta or at
Apple or at Amazon or at Microsoft.
And so, in a sense, you have to pay the
piper. If people that good want to take
their time understanding AI, you have to
make sure that you bring them along. And
that's another difference. So, those are
six principles we've gone through,
right? Experience creates resistance.
Workflow matters more than tools.
Success starts with pain.
Technical debt is becoming more and more
optional. That one's really fluid. AI
changes what building software is all
about. It makes it a conversation.
And different constraints create
different correct answers.
Let's step back and let's look at the
future. What is actually
happening here? Number one, the velocity
gap is real and growing. I think that
that's something that I emphasize a lot
when I talk to leaders at midsize and
above businesses because disruption risk
is in some ways a function of velocity
and AI enables such a velocity
differential for AI native startups that
traditional enterprises should be more
worried. AI-native startups can be
orders of magnitude faster than the
regular startups of the 2010s. If you
can ship a feature in an hour that would
have taken a team of engineers two weeks
in 2018.
Well, you are definitely going faster
than startups went a decade ago and that
poses more disruption risk to enterprise
over the long term. These are
fundamentally different realities.
They're fundamentally different physics
for startups.
But velocity without direction is just speed.
And so the risk of startups shipping 20
versions a week is that they don't have
the discipline to learn and they are
just accumulating chaos. And that is the
one advantage enterprises bring. They
put so much intention into a ship. If
they do one solid version in six months,
but it actually builds on itself and
builds a flywheel of value for
customers, they might still win. So the
question really isn't who is right. The
question is what happens in a world
where startups are 10x faster than they
were and AI is desperately trying to
crack through in the enterprise.
Will startup velocity actually disrupt
enterprise reliability? Will enterprise
reliability and enterprise customer
distribution win? This is the question
we're facing. And this is the question
that AI adoption is actually driving
in both of these environments. It
really matters. Dario Amodei predicted
that AI would write 90% of code in
2025. Technically,
that is true at some startups and it is
becoming more true at enterprises and
those mean two different things. At
startups, I buy it. If AI writes 90-95%
of your code at a startup, you're going
fast in the way we've talked about. If
AI writes 90% or 60% or 70% of your code
at a large company, and I think the CEO
of Coinbase said that something like 60%
of the code at his company was written
by AI, I immediately ask questions,
because it's a different kind
of problem. Are you incentivizing your
engineers by lines of code at that
scale? Because you don't have that
problem at a startup. You just want to
ship the feature. But at scale where
people are incentivized by goals and
metrics, if you say it's lines of code,
will people just be incentivized to
write big bloated features to hit the
lines of code goal? Anecdotally, that is
absolutely happening. And so my question
at a large scale is should you be even
thinking in percentage terms?
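To make the gaming concern concrete, here is a toy sketch of how such a percentage might be computed, and how it inflates when engineers pad AI-authored commits. The commit records and the attribution field are invented for illustration; this is not a description of how Coinbase or anyone else actually measures it.

```python
# Toy illustration: "percent of code written by AI" rises when AI commits
# are padded with bloat, even though no extra product value shipped.
# The data and the ai_authored attribution field are hypothetical.

def ai_code_percentage(commits):
    """Share of added lines attributed to AI across a list of commits."""
    ai_lines = sum(c["lines_added"] for c in commits if c["ai_authored"])
    total = sum(c["lines_added"] for c in commits)
    return 100 * ai_lines / total

lean = [
    {"ai_authored": True, "lines_added": 40},
    {"ai_authored": False, "lines_added": 60},
]
# Same human work, but the AI commit is padded 5x to hit a metric.
bloated = [
    {"ai_authored": True, "lines_added": 200},
    {"ai_authored": False, "lines_added": 60},
]

print(ai_code_percentage(lean))     # 40.0
print(ai_code_percentage(bloated))  # about 76.9
```

The metric nearly doubles while the underlying human contribution is unchanged, which is exactly the incentive problem with goaling on lines of code at scale.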
And then two, is Dario simplifying the
world too much? Maybe what we're headed
for is a bifurcated world, where you
have AI-assisted code composition and
structuring at larger companies, and I
don't care or know what the percentage
is. It's a meaningful percentage of code
that you could say is written by AI, but
humans have a huge role to play in how
they structure it because of the
complexity of the system. And it's
nearly 100% of code written by AI at
small startups. That feels like a more
realistic world. I will also call out
this is continuing a trend where large
model makers are making predictions that
are directionally true. But when we come
up to the actual deadline and we look at
it, we're like, huh, it's a little early.
I think that one of the things I'm
learning to do with Sam and with Dario
and with other major model maker CEOs is
I discount their predictions by a year
or two. I take whatever they say and
think, give it another year or two
beyond that, because they live in this
world where they have better models than
are out in the public. They are immersed
in AI all the time. They may not have
the perspective to see this wider world.
So where does this leave us? We need to
be
building companies that reflect the
constraints of the game we're actually
in, that focus on what customers need in
a world where, frankly, customer
appetites and customer demand are
shifting really rapidly. And we need to recognize
that the stakes are existential. If we
as a startup, we as midsize, we as
enterprise don't figure out how to
meaningfully build AI into our
workflows, the disruption risk is real
and fast. It's like in Jurassic Park
where the raptors kept getting smarter.
Other businesses are like those raptors.
They keep getting smarter and keep
getting more AI-enabled, and they're
going to catch you unless you're able to
actually put AI into place. And so my
suggestion to you is this: if you are a
large company, look at those
small company learnings. Look at the
small company tool stack. See what you
can learn. See what you can learn about
speed. If you are a startup, if you're
in the small company category,
see what you can learn from the
reliability, the stability, the system
architecture of the big company. At
least acquire some sympathy for where
they're at, because that may be you,
particularly if you serve businesses.
Both parties here have a lot to learn
from one another and I sometimes feel
like I'm the only one talking to both
and they mostly want to throw rocks at
each other. So everybody's in this
together. We're all learning AI
together.
Let's see what we can