Nine Overlooked Lessons for AI Builders
Key Points
- Building AI‑driven products is challenging because each prompt is essentially a piece of the final system, and many developers overlook recurring pitfalls throughout the journey from chat interfaces to fully integrated apps.
- Chat models are “weakly intelligent”: they lack direct access to a user’s data environment, making them useful as rapid task starters but insufficient for high‑precision, end‑to‑end workflows.
- This weakness creates a strategic split: precise, enterprise‑grade AI solutions are needed for complex tasks, while casual‑use AI tools must fight against the dominance of the weak‑intelligence layer that already satisfies most everyday needs.
- The sticky, low‑bar nature of weak AI mirrors how Instagram rode the mobile phone wave, suggesting that despite its limitations, weakly intelligent chat can become a pervasive consumer platform and reshape how both builders and ordinary users engage with AI.
Sections
- Overlooked Lessons in AI Building - The speaker outlines nine common misconceptions about using chat AI—highlighting its weak intelligence, data isolation, and non‑specialized nature—to guide users from casual conversation to building robust AI‑powered applications.
- Rethinking Development Vocabulary for AI - The speaker argues that while AI accelerates coding, developers remain constrained by legacy, data‑centric design patterns and fragmented UI flows, necessitating a new building vocabulary that treats whole conversations as core user experiences.
- Planning Is the Real AI Leverage - The speaker argues that people underestimate AI because they ignore the outsized value of intentional, well‑planned work—enhanced by AI tools—where thorough planning, not the technology alone, yields the greatest impact.
- Talent-Driven Coding Tool Preference - The speaker argues that developers will select AI coding assistants (Claude, Cursor, Lovable) based on their skill level and existing talent, leading to brand loyalties akin to the 1990s Mac‑Windows rivalry rather than purely on technical capabilities.
- Data Middleware Bottleneck - The speaker explains how corporate lock‑ins and lack of a data‑middleware layer restrict AI access to enterprise data, creating costly token consumption and hindering AI’s effectiveness.
- Standardizing Work for AI Readability - The speaker explains that the growing value of AI is driving people to adopt uniform, tokenizable templates so machines can process their work, a shift that will blur the line between AI‑generated and human‑crafted output until a new professional standard emerges.
- Key Factors for Effective AI Agents - The speaker outlines nine critical considerations—including precise token‑depth control, aligned incentives, privacy‑driven data middleware delays, under‑invested distribution experience, and the rise of tokenizable templates—that shape intentional and successful AI agent deployment.
Full Transcript
# Nine Overlooked Lessons for AI Builders

**Source:** [https://www.youtube.com/watch?v=bjcDgqKgvho](https://www.youtube.com/watch?v=bjcDgqKgvho)
**Duration:** 00:25:26

## Sections

- [00:00:00](https://www.youtube.com/watch?v=bjcDgqKgvho&t=0s) **Overlooked Lessons in AI Building**
- [00:04:45](https://www.youtube.com/watch?v=bjcDgqKgvho&t=285s) **Rethinking Development Vocabulary for AI**
- [00:08:34](https://www.youtube.com/watch?v=bjcDgqKgvho&t=514s) **Planning Is the Real AI Leverage**
- [00:12:21](https://www.youtube.com/watch?v=bjcDgqKgvho&t=741s) **Talent-Driven Coding Tool Preference**
- [00:15:50](https://www.youtube.com/watch?v=bjcDgqKgvho&t=950s) **Data Middleware Bottleneck**
- [00:20:33](https://www.youtube.com/watch?v=bjcDgqKgvho&t=1233s) **Standardizing Work for AI Readability**
- [00:24:03](https://www.youtube.com/watch?v=bjcDgqKgvho&t=1443s) **Key Factors for Effective AI Agents**

## Full Transcript
Building with AI is hard. Every time we
prompt, we're building something. Even
if it's as simple as a conversation with
AI. These are the nine most overlooked
lessons that I have seen come out again
and again and again as I have coached
people through the journey of getting
from the ChatGPT interface to actually
vibe coding and building apps with AI.
I've done it for dozens and dozens and
dozens of people. I've done it at
company scale. This is what pops out to
me that people don't understand that I
wish they would. And if you think to
yourself, I'm not a vibe coder, Nate.
Why would I care? You care because this
shapes the entire strategic landscape.
And you care because this also shapes
the way you engage with chat today. Even
if you're just a casual chat interface
user. Number one, chat is not a
specialized tool. And that's a very
interesting problem. I have seen claim
after claim after claim that the chat
interface won't stay. It's heavy on UX.
It's not intuitive. How can it get 800
million users for ChatGPT? I have come
to a deeper understanding. Chat is
dangerous and it's a problem because it
is a weakly intelligent layer. And you
will tell me, well, it depends on the
model, Nate. I have a great model. Why
is it weakly intelligent? It is weakly
intelligent because the true
intelligence of the system depends on
the data inputs and most chat models are
strikingly isolated from the data
environment you operate in day-to-day. I
will stick to it. It is weakly
intelligent. It is good enough to be a
task starter. You can get going quickly
but it's not ultimately good enough to
finish the job. That is a lot of the
promise of AI agents that they will be
good enough to integrate with the data
layer. We have not seen that really
transpire yet in 2025. So what does this
mean for builders and users? Number one,
this incentivizes serious AI builders
that require precision because if you're
doing a big task that requires
precision, you cannot do it well in a
chat model. It's not good enough. On the
other hand, this disincentivizes
AI tools that fall in the casual use
category. If you are a casual use tool
builder, if you are interested in that
category, it is tougher because the
weakly intelligent layer will eat you
alive. People are habitually addicted to
weakly intelligent AI. It is good enough
for most things. Weakly intelligent AI
task saturates for most people. And so
I've been wondering, a lot of people
have been wondering, is there an
Instagram, a wide consumer success story
that should live off the top of a new
underlying technology? Insta lived off
the mobile phone. When the iPhone
took off, Insta was the runaway success.
I still remember that. Maybe ChatGPT is
the Instagram of this era. It is showing
that kind of consumer stickiness and it
is making me think that this kind of
weakly intelligent AI is surprisingly
sticky for casual application. I wish
people understood this better because it
actually suggests a lot of upside for
people building serious tools because
the weakly intelligent AI is just good
enough to get you started and I've seen
this happen over and over again. Anyone
working seriously with AI does not
finish the work in ChatGPT, in Claude, in
whatever tool you're using. They may
start there, but they're moving
elsewhere to get the job done if they're
real craftspeople. That's number one on
chat as a tool. Number two, one turn
versus multi-turn conversation. Almost
everyone that I encounter that's getting
started in a building journey or getting
started with AI thinks in one turn
conversations. You know what? That's
really natural, because AI is
reinforcement-learned: RL is built
for one-turn conversations. The AI
itself is architected for that. But the
value is in multi-turn. That's where you
focus. That's where you find. That's
where you refine. That's where you do real
intellectual work. And so if you think
about it, a good prompt is not
necessarily designed to get you a
one-shot answer for the whole thing. Most
people using AI seriously think about it
as I have an anchor prompt at the top
that shapes the parameters of this
thread and then the thread itself is
what exposes the intelligence I need to
do interesting work: the conversation, the
back and forth, the refinement. But casual
users don't understand that, and we don't
understand it as builders, because we tend
to build and assume these systems are
one-turn systems. One of the biggest gaps
in AI today is that we aren't building for
conversations. We're building for chats.
We should be thinking about how you
surface intelligence out of
conversations and how you make whole
conversations a fundamental unit of the
user experience because increasingly it
is. But we're stuck searching in the
sidebar for these chats, and it's just
really painful. It's really awful.
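The anchor-prompt pattern described above — one prompt at the top that shapes the whole thread, then refinement turns that do the real work — can be sketched as a data structure. This is a minimal illustration assuming an OpenAI-style message list; `new_thread` and `add_turn` are invented names, and no real model is called:

```python
# Sketch of "anchor prompt + multi-turn refinement" as a message list.
# The role/content shape mirrors common chat APIs; nothing here hits a model.

ANCHOR_PROMPT = (
    "You are helping me draft a product strategy memo. "
    "Keep answers under 200 words and end every reply with open questions."
)

def new_thread(anchor: str) -> list[dict]:
    """Start a thread whose first message shapes every later turn."""
    return [{"role": "system", "content": anchor}]

def add_turn(thread: list[dict], user_msg: str, assistant_msg: str) -> list[dict]:
    """Append one user/assistant exchange; the whole list is resent each turn."""
    thread.append({"role": "user", "content": user_msg})
    thread.append({"role": "assistant", "content": assistant_msg})
    return thread

thread = new_thread(ANCHOR_PROMPT)
add_turn(thread, "Draft an outline.", "1. Problem 2. Bet 3. Risks -- which market?")
add_turn(thread, "Focus on the SMB market.", "Revised outline focused on SMB...")

# Every turn after the first is a refinement, not a fresh one-shot prompt.
print(len(thread))  # anchor + 2 exchanges = 5 messages
```

The point of the sketch: the thread, not any single prompt, is the unit that accumulates intelligence.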
Number three, we need a new building
vocabulary if we are going to build
successfully. Most people who start to
build don't get stopped by AI. They get
stopped by 2000s and 2010s and maybe
earlier era building systems. If you
think about it, the fundamental way
we've coded has not evolved that much.
Yes, AI enabled coding is making us able
to produce that code faster, but the
vocabulary of building still revolves
around having a database, being able to
transmit data securely to and from the
database, doing operations with business
logic against the database. None of that
has changed. And so when people want to
vibe code and they run into trouble, it
is almost always not the AI. It is
almost always that they are struggling
with how to build a data-driven
application of some sort. Maybe it's the
transactions piece that messes them up.
Maybe it's login. Maybe it's some custom
integration they need. And what that
suggests to me is that our vocabulary of
building is due for a change. And there
is opportunity on the table for people
who can figure that out. With cloud as
an example, we still needed to know
files and file structures and code, but
we could care a lot less about where all
of that lived because cloud made it
possible to live anywhere on the planet.
You can put that code anywhere you want
and you'll be good to go. Now we need
the same kind of conversion to occur
around how we take the intelligence we
develop in conversational threads and
get that into a build environment that
abstracts away from the fundamentals
that are tripping up lots and lots and
lots of would-be builders. It's like we
are introducing people to a kitchen and
the first thing we're doing is saying
this is how a refrigerator works.
Architecturally, you need to understand
a heat pump to make this work. Or this
is how your burner processes natural gas
or your electric induction burner. This
is how it inducts. If you have to know
that to cook, you're not really getting
it done. Now, tools like Lovable have
done a great job trying to abstract
that, but it is not seamless yet. We
need tools that will take conversational
intelligence which I talked about
earlier in this video and will make that
the central unit of understanding. I think I
want to suggest something slightly
spicy. I think the conversation is due
to take the place of the file. Yes, we
may have files underneath. I'm not
saying we won't have an underlying
substrate that's files because that's a
surprisingly sticky workspace. But
systems need to be enabled that let you
build over the top and hide the
underlying complexity. Open the
refrigerator door, there's food there,
start cooking, right? That's the
analogy. And that is the dream of these
AI building tools. And I am here to say
they are only partway there. And I think
most of them would admit that they're
only partway there. It is still a case
where it is complex to hook up a real
application. You have to understand
something about data. You have to get
into the wires and integrate stuff. If
we really want to enable the power of AI
agents to build across conversations, we
have to make conversations a fundamental
unit of computing. And we haven't done
that yet. Number four, AI planning and
underestimation. Most people still
underestimate what AI can do
dramatically. And I think that that's
not a surprise. That's not the insight.
What's interesting to me is that you can
put a multiple on that underestimate if
you tell them that there's more value in
planning than in the AI itself. If I
plan my conversation for 2x the length
of time that I would have allocated
otherwise if I plan it for 20 minutes
instead of 10, I get far more value
because I took the time to think about
the conversation I wanted to have. Now,
we're all ripping off casual
conversations all day. That's not what
I'm talking about. I'm talking about an
intentional conversation where you're
trying to understand and build really
interesting intellectual work, a complex
document, a piece of code, whatever it
is. The planning matters and people
dramatically underestimate AI precisely
because they don't understand the
leverage that planning can provide. Now,
AI can help with planning. It's not just
me and my brain and a pencil anymore.
You can use AI and prompts and things
like that to help you get to clarity,
but you still have to invest in the
planning stage. And so when I talk about
people underestimating AI, yes, they
probably don't realize the power of the
newest models. Yes, they don't realize
that there are specialized tools that
enable you to do really cool stuff. All
of that is true, but they really really
don't understand, most people really
don't understand that the leverage is in
the planning if you're doing serious
work. And that leverage: you can say that
in management theory the leverage was
always in the planning, right? We were
always supposed to do the planning. But
with AI, power-law returns are
accelerated. In other words,
the things that were true before are
even more true now. And so planning has
more leverage because you have more
intellectual horsepower behind you when
you execute. So get the planning right
or you end up in really wrong territory.
Number five, build tools are in a really
interesting position right now and I
haven't had time to really dig into this
and unpack it, but it's important to
talk through. Fundamentally, you have
three classes of building tools. You
have cursor, which is a dedicated
development environment powered by AI.
You have Claude code, which is a
terminal that you can invoke Claude on,
and Claude will just build for you in
the background. You don't really touch
the files. And then you have the AI
powered prompting build tools like
Lovable. Those are your three basic
classes. What's interesting is Lovable
is aggressively scaling up its
capability set. Lovable is tackling
exactly what I'm describing here as they
enable you to write more and more
complex applications and finish them in
the next year or two with the power of
underlying models scaling up. Hello,
GPT-5. We don't know if it will work, but
maybe it will. They've certainly been
hinting that it's a strong model. But
the point is the models will get better.
Lovable is going to get better. Lovable
is going to eat Cursor's lunch from the
casual users' side because, before, casual
users eventually hit a point where they
had to graduate to a more complex
development environment to get something
done. I have seen personally how much
those tools have evolved. Replit is also
in that class. They've evolved to the
point where you could do some serious
work with those tools, and they
are continuing to evolve very rapidly
which means you can do more and more
serious work in vibe-coding tools. At the
same time, Claude Code is taking advantage
of the agent layer. Claude Code is saying:
you don't want to mess with files, you can
just type and I'll go do the work. And
because it's Claude Code, and it was
developed to be a development assistant
within Anthropic, it is an excellent
model at doing exactly that. The
Anthropic team built it to help
themselves, and boy, does it show. It's a
really good model. And so Cursor is
squeezed on the serious-builder side by
Claude Code and on the casual-builder side
by tools like Lovable. And the question becomes, is
there a middle ground? Is there a middle
ground for a development environment
where you're going to have some AI
agents, you're going to have some
conversations that lead to code, and
you're going to have some hand coding.
That is one of the biggest and most
interesting questions in tech, right?
And my thesis is that that is going to
be a talent-dependent heuristic. You will
pick the tool across those three
depending on the talent set you have.
And your top tier engineers eventually
are going to be moving to Claude Code or
something a lot like it because they
have the power of the LLM. They have the
agents that they can go after, and Claude
Code will evolve a way to get into
editing code very very easily. People
are already rolling their own. That's
just going to become part of the fabric
of the tool. And then the mid-tier
engineers that still want to have some
hands-on code, mid-tier talent are going
to stick with Cursor because it feels
like the development environment that is
familiar to them. And the people who are
aggressively getting into tech for the
first time who never went to engineering
school are going to be living and dying
on Lovable. And in a sense, it will be
less and less about the capabilities and
it will be more and more about the brand
affinities that these tools produce. We
are going to get into a place like the
Mac and Windows wars of the '90s where
people have a brand affinity for
something and they really believe in it
and they will go to the mat for it. But
if you dig underneath, it's because
their life experiences line them up
where it makes more sense for them to
use that tool.
All right, let's move on. Number six,
let's talk about token depth. People
don't realize that these tools that we
use, these AI tools don't have common
token depth. And so what I mean by that
is the amount of tokens they are willing
to burn to get a solution done varies
and is not transparent and is something
the model makers are incentivized to
constrain and people are incentivized to
get more of. When you talk about declining
quality: I see people claiming declining
quality of this tool, then that one; I
lose track. Cursor had it, Claude Code had
it, others too. What you see under
the surface is that people can't measure
the intelligence they're getting, but
they suspect, based on the fingertip feel
of what they're using, that something has
changed in the model. And if you look
under the surface, often what has
changed is that there are fewer tokens
available to be spent because tokens
aren't cheap. And so model makers are
incentivized to constrain them a little
bit if they get product adoption.
Ironically, token depth is nonlinear.
The primary value of agents is to
increase token depth because problems
tend to be token-fungible. Let me say
that again. The primary value of agents
is to increase token depth. That is
actually directly from a factorial study
that Anthropic conducted, which said that
the primary value of multi-agent systems
is to achieve additional token burn
against problems. And I wish people
understood that better because at the
end of the day, token depth is something
that it is very hard to control right
now as a user. And anyone who launches
something that goes beyond "think hard" as
a command or a click setting in a tool is
going to enable users to have a
previously unprecedented level of
control over how much they want to solve
a problem. I actually want to call out
Manus here, because Manus has done a
great job basically saying you pay as
you go. Now, there are other issues with
Manus, but they've done a phenomenal job
saying, "We want our incentives to be
aligned with yours." And so, if you want
an excellent AI agent that solves a real
problem, we can do that for you. You'll
pay this much. You pay as you go. When
you run out of tokens, you pay for more.
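That aligned-incentive, pay-as-you-go idea can be sketched as an explicit token budget that the user sets rather than a hidden provider limit. Everything here is hypothetical — `TokenBudget` and `run_agent` are invented for illustration and are not part of any real agent framework:

```python
# Sketch: token depth as a user-controlled dial. The agent runs steps
# (each with an estimated token cost) until the problem is done or the
# budget the user paid for runs out.
from dataclasses import dataclass

@dataclass
class TokenBudget:
    limit: int       # tokens the user has agreed to pay for
    spent: int = 0

    def can_spend(self, tokens: int) -> bool:
        return self.spent + tokens <= self.limit

    def charge(self, tokens: int) -> None:
        if not self.can_spend(tokens):
            raise RuntimeError("Budget exhausted: top up to continue.")
        self.spent += tokens

def run_agent(steps: list[int], budget: TokenBudget) -> int:
    """Run steps until done or out of budget; return steps completed."""
    done = 0
    for cost in steps:
        if not budget.can_spend(cost):
            break  # stop and ask the user to buy more depth
        budget.charge(cost)
        done += 1
    return done

budget = TokenBudget(limit=10_000)
completed = run_agent([3_000, 4_000, 5_000], budget)
print(completed, budget.spent)  # stops before the third step: 2 7000
```

The design point is simply that the budget is visible and chosen by the user, so the incentive to quietly throttle token depth disappears.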
Everybody's aligned. You don't have the
same issues because you can pay for the
token depth you need to solve the
problem. We need that kind of tooling
across the entire ecosystem. Number
seven, data incentives are finally
delaying the data middleware layer. It is
a real problem, and it is becoming a
worse problem. Let me explain
what I mean. Salesforce locked off
access to Glean. Why? Because Salesforce
is incentivized to keep Slack data
inside the house so Glean the AI tool
can't access it. That is happening
everywhere. The problem is that data
needs a middleware layer to actually
realize the value of AI. AI needs data
to work well. Remember when I talked
earlier in this video about the idea
that most of our chatbots are isolated?
That's not an accident. That's the
result of the fact that data middleware
doesn't exist. Data middleware to
translate large volumes of data into our
AI experiences is largely missing. There
are players that want to make that
happen. Lately, it's been agentic search
and the idea that you can use AI agents
to go and ferret out the data. That seems
like a very expensive way to solve that
problem. As I said earlier, agents cost
token burn. You'd rather just have the
data made available really easily and
you can review it. But either way, data
availability is more of a bottleneck
than data itself. Data is being incentivized to
be locked off because boardroom after
boardroom is being told don't let your
data out of the house. Everybody's
putting up walls and moats around their
data and the intelligence needs that
data to operate successfully. And so
there is a missing middle of data layer
companies that I really want to see
exist that would enable us to have
data-driven connections for our AI
tooling. Without that, it is very very
difficult to get effective AI agents
that operate as cross-functionally as
those demos would have us dream. It is
going to be difficult for ChatGPT to
launch a magical AI office assistant if
the data layer isn't fully integrated,
if the data layer isn't fully there. Now,
that's not going to stop them trying.
That's not going to stop them
negotiating with major players. Neither
will it stop Anthropic or others who are
in that race. The point is that we need
a data availability incentive shift.
People need to see that, maybe not releasing
confidential information, but releasing
some structured information in a data
stream is productive for everyone. It's
kind of like the HTML of the internet.
You want to have a web page. Sure, it
has company information, but you want to
have it. In the same way, you need to
have data availability that makes it
easy for AI agents and AI tooling to
operate against your data, not just your
graphical user interface. And now we
come to number eight. Distribution and
data user experience design. Yes,
distribution is king. We've talked about
that a lot. I'm not the only one to say
it. Distribution is king in the age of
AI. But the way to win distribution if
you're starting from scratch is through
seamless data experiences. And this is
where I see so many tool builders go
wrong. They invest in the isolated
experience of the CX and they don't
invest in the data integrations that go
with it. That is a big issue and it
means that people will feel like that
experience is more isolated. They have
to port more of the data in. It's a
heavier experience. They're not going to
come back. This is severely
underestimated in most MVPs. It's
underinvested. The data is there. The
question becomes how do you get it? And
right now, agentic search seems to be
one of the ways people are going after
it because they have to solve the
problem themselves in the absence of
that common norm and etiquette around
making data that is appropriate easily
available to agents. I do think we need
that latter piece as I've said, but in
the meantime, people are using something
like agentic search to go and get the data
and ferret it back. Perplexity has been
very vocal about doing that. So if you
want distribution, win it through
seamless data experiences. Last but not
least, templates and AI structure. This
is an observation about the way we will
be working in the future that I think is
really important because so many of our
problems are token-fungible, which I
talked about earlier in this video. The
world is going to norm to tokenizable
templates. In other words, when OpenAI
released agent mode and I found that it
couldn't handle my particular workflows
very well, other people rightly pointed
out, well, it handles mine fine. And you
know what was the common attribute of
the people who said it handled theirs
fine? They had tokenizable templates.
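To make "tokenizable template" concrete: a fixed, predictable layout is what lets a model (or even a trivial parser) read the work reliably. A toy sketch, with field names invented for illustration:

```python
# A rigid template: same sections, same order, every time. That
# predictability is what makes the document easy for machines to consume.
TEMPLATE_FIELDS = ["Objective", "Inputs", "Method", "Result"]

def render(report: dict) -> str:
    """Render a report into the fixed, machine-friendly layout."""
    lines = []
    for field in TEMPLATE_FIELDS:
        lines.append(f"## {field}")
        lines.append(report.get(field, "TBD"))
    return "\n".join(lines)

def parse(text: str) -> dict:
    """Recover the fields -- possible only because the layout is fixed."""
    report, current = {}, None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:]
        elif current:
            report[current] = line
            current = None
    return report

original = {"Objective": "Cut churn 5%", "Inputs": "Q3 cohort data",
            "Method": "Discount test", "Result": "TBD"}
assert parse(render(original)) == original  # round-trips cleanly
```

A 25,000-row ad-hoc spreadsheet has no such fixed structure, which is exactly why agents struggle with it.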
They had workflows that were fairly
standard that would have been
reinforcement-learned against things
like a very standard discounted cash
flow sheet. Yeah, I bet agent mode can
do that. It is standard. It is middle of
the distribution. It is not too hard. AI
is not going to handle the 25,000 row
spreadsheet that someone in your office
maintains to keep all of marketing
operations online. We are not close to
that. What's interesting about that is
that because AI is so valuable, we see a
pull factor toward those more standard
templates over time. It's not just that
your manager will tell you, hey, we have
to use the normal template today. It's
that you yourself want to use
it because you want AI to be able to
read it and help you. I see this in
myself. I am using more normal and
standard templates where I can because I
want AI to have an easier time reading
it. I think in terms of chunks of
information that AI can consume. Our
work is becoming tokenizable templates
because we need to make it AI readable
for a bit. This is going to make it
harder to distinguish between AI slop
and real good work because they're both
going to start using the same template
for a while. That's going to be one of
the confusing things about the next year
or two. That being said, we will
eventually reach a professional standard
where we say this is what good work
looks like. Yes, it's AI readable, but
humans with taste also contributed
because even if you have standardized
pieces of work, the taste and craft that
enables someone to define a usable
business model is something that humans
are going to be bringing to the table.
The skin in the game, the ownership
sense. These are things that humans are
going to be bringing to the table, but
they may be bringing them, they may be
doing this work through easily
tokenizable templates. And so I think
that our work norms and artifacts are
due for a shift. And yes, if you want to
go there, maybe this means there's a
shift in tooling. Maybe this means that
Word is ripe for disruption. But boy,
have I heard that before. Some of these
tools are surprisingly sticky. So there
you have it: nine different principles
that I have seen come up over and over
and over and over again as I look at the
journey people take from individual
contributor chatting with ChatGPT or
Copilot or whatever you have, to "I'm
using it seriously as a workflow tool;
maybe I'm vibe coding and building, or
maybe I'm using it to create complex
documents, but it's a big workflow."
Those are the things that most people
don't recognize and have trouble
articulating, that I wish we knew and
talked about more. I'll review them briefly here. One,
chat is not a specialized tool. Chat is
weakly intelligent, and that produces
really interesting incentives for
builders and we should be aware of it as
users. Two, people think in one turn
conversations. They should think in
multi-turn conversations. Three, we need
a new build vocabulary. We should think
about conversations as a fundamental
unit of computing, not files. Number
four, AI planning and underestimation.
Most people dramatically underestimate
the leverage they get from planning
because there's a power law in execution
with AI and you will go dramatically off
the rails or dramatically on the rails
if you get it right. Number five, tool
wars: build tools are
converging on the middle. Lovable is
getting better. Claude Code is getting
better. Cursor is stuck in the middle. I
think we are headed toward an OS style
brand war like the 1990s between Windows
and Mac users where you will have an
affinity but it's actually your own life
experiences that shape that and we
should be aware as users of where we may
fit best. Number six, hidden layers and data
incentives: token depth is nonlinear.
The primary value of agents is to
increase token depth; problems are
token-fungible. And so what we need to get to
is control settings that enable us to
set token depth more precisely,
reliably. And we need agents that enable
us to accelerate that token depth to
solve hard problems. I gave Manus as an
example of at least aligned incentives
because model makers are not always
aligned with users. Number seven,
data incentives are delaying the data
middleware layer. Privacy incentives in
particular and the concern about leaking
data are keeping a data middleware layer
from existing that we need for agents to
succeed. Number eight, distribution and
data user experience. Yes, distribution
is king, but if you're not building for
that data integration, it won't feel
seamless enough even if your AI is
intelligent. And that is a severely
underinvested experience. That
integrations piece is underinvested in
most MVPs. And number nine, templates
and AI structure. The world is norming
to tokenizable templates because AI can eat
them. Which means we have this weird
period coming where it's going to be
harder to distinguish AI slop from
really good work. But eventually our
work patterns will get more structured
and we'll be able to start to see this
is an example of an artifact that has AI
influence but that also has human craft
and taste over the top. So there you go.
The nine things. I hope they're helpful.
I hope they made you think. I hope they
make you use AI with more intention.
Cheers.