Nine Patterns of AI Adoption Failure
Key Points
- AI adoption frequently fails, so the speaker outlines nine common failure patterns to give organizations a clear vocabulary for diagnosing and fixing problems.
- The first pattern, the “integration tarpit,” occurs because budgets focus on development costs while ignoring the extensive coordination, legal, and compliance work required for deployment; the remedy is to treat stakeholder approval paths as a core part of the project, often by assigning a dedicated deployment PM to manage those processes.
- The second pattern, a “governance vacuum,” emerges when security and red‑team findings expose vulnerabilities that were never overseen by a formal AI governance framework; establishing clear policies, oversight bodies, and continuous review processes is essential to close this gap.
- A recurring theme across all patterns is that organizations assume technical success guarantees smooth rollout, but hidden organizational and process complexities make failures sticky unless they are deliberately budgeted for and addressed up front.
- Ultimately, the fix for each failure pattern involves recognizing and budgeting for the non‑technical side of AI—process, people, and policy—as seriously as the code itself.
Sections
- (00:00:00) AI Integration Tarpit Failure - The speaker outlines nine common AI adoption failure patterns, beginning with the “integration tarpit,” where rapid engineering outpaces slow sales, legal, and compliance processes due to budgeting for code rather than coordination, making technically sound prototypes hard to deploy.
- (00:03:06) AI Governance Void in Security - The transcript highlights how the absence of a dedicated owner for AI‑related vulnerabilities creates a governance gap that stalls incident response, especially in high‑compliance industries, and calls for specialized talent and tools to treat AI governance as a first‑class security function.
- (00:06:10) Hidden Review Burden in AI - The speaker warns that flashy AI demos conceal the hidden cost of human review, urging designers to build human‑in‑the‑loop systems that limit review burden, mitigate security risks, and account for AI’s unreliable performance.
- (00:10:10) Avoiding AI Scaling Pitfalls - The speaker warns that without seamless AI‑human handoffs and careful measurement, scaling pilots too quickly creates bottlenecks at the edges, exploding support costs, and degraded quality.
- (00:13:14) Avoiding the Mechanical Horse Fallacy - The speaker warns against simply digitizing existing processes—calling it the mechanical‑horse fallacy—and urges teams to reimagine work, define outcome‑focused north‑star metrics, and prototype zero‑process workflows to ensure AI adds real value.
- (00:16:21) Balancing Bets Across AI Horizons - The speaker advises cautious firms to treat AI initiatives as a diversified portfolio—allocating funds to fast‑payoff, medium‑term, learning, and scaling projects with clear milestones—so they can track trajectories, double down on successes, and avoid paralysis in decision‑making.
Full Transcript
# Nine Patterns of AI Adoption Failure

**Source:** [https://www.youtube.com/watch?v=9m1Bd6cxYBk](https://www.youtube.com/watch?v=9m1Bd6cxYBk)
**Duration:** 00:20:32

## Full Transcript
It seems like AI adoption fails more
often than it goes right. I want to take
today's video and talk specifically
about the nine different AI failure
patterns that I've seen in organizations
over the last few months in 2025. I want to get at not just what happened, but what the root cause is, what makes that failure pattern sticky, and what the actual fix is that unsticks an organization and gets it back on track. I don't think we talk enough about the categories of AI adoption failure, and I want to lay them out really cleanly so that we have a vocabulary to talk about them, to address them, and ultimately to get back on track. Let's start with number one: the integration tarpit.
Let's say engineering ships working AI
code in weeks, but sales and legal and
compliance cycles were never meant to
run that fast. They stretch into months
or longer. Cross-team stakeholder meetings are multiplying all over the business, talking about policies and approvals. The root cause here is that
the organization's budget for AI
development was structured in terms of
dollars and cents and not in terms of
coordination cost. When you are working
with a prototype, it's very simple. But
a prototype does not equal a system that
fits data architecture, compliance, and
politics together. Yes, organizational
politics are relevant. So why is this
sticky? Everyone assumes that if it works technically, then deployment is going to be easy. But integration complexity becomes visible only after the build is complete. Right? You can see it working, so you assume that deployment will be easy. Executives tend not to understand why it's not being used, why we're stuck on paper. The committees will make sense, the IT policy will make sense, and none of it actually delivers value. The fix is pretty simple: take approval paths and policy paths as seriously as you take writing code. You want to plan and pre-wire how you are fast-tracking adoption through the organization before
you jump in and start just saying we can
ship this thing quickly. Treat the human problem as seriously as the code problem. You want to assign someone, maybe a deployment PM, whose entire job, separate from the engineering piece, is just to ask: do we have all of our
get this into people's hands? Do we have
data support and approval? Do we have
legal clearance? Do we have any concerns
around compliance we need to address? Do
we have any concerns from HR that we
need to address? They're not asking,
does the model work? That needs to be
somebody else's job. All they're trying
to do is to wrangle the stakeholders and
move them quickly along. You have to
budget for the organizational side to
get out of the integration tarpit. There is no shortcut. Failure number two: the
governance vacuum. Let's say red teams
find vulnerabilities. We actually had
that happen this week, with teams finding vulnerabilities in AI-powered browsers.
Security will flag an unapproved
architecture when a red team
vulnerability is found, but there's no
owner for what happens if AI does
something. And so in this situation, if
your ordinary red-team or security issues are triggered by AI, you often run into a governance issue. Really, there's no directly responsible individual who treats AI governance as a first-class object. And that is your core problem. That is why, when small vulnerabilities are sometimes found in your agentic systems, you get stuck. When they're found in your implementation of a custom ChatGPT, you get stuck. You have to treat governance as a first-class object. And that is
especially true if you are in a high-compliance industry. And here's the trick: you probably knew that if you were in high compliance. But what you
may not have realized is that the skill set for AI governance is different from the skill set for a lot of typical IT security projects. And that is why this problem is sticky: people often try to address it by saying, let's give it a security review, let's give it a software review. That feels like bureaucratic slowdown. It looks like bureaucratic slowdown. The teams that are addressing it don't have the tools to do it right. So teams just kind of go hands-off, and one incident ends up freezing everything. A governance vacuum
ends up grinding your system to a halt.
The fix is simple. You need to embed the
right talent with the right tools to
make security a day-zero problem. That means you have to think about what the AI agent can access, what it does, what its blast radius is, what failure modes look like, how you architect security rather than making it the agent's problem to decide, how you can reliably evaluate whether the agent is doing the right thing, and how you test in production a range of utterances, words from the user, so that if there are things the system should not be responding to, prompt injection attacks and the like, you can prove you're addressing them correctly at the desired rate of success. If you don't make all of that somebody's problem, you're going to be in trouble. And it is, again, not a traditional security software purview; it's a new set of skills. Failure number three:
the review bottleneck. AI will generate output so fast. I've talked about this before, but human review doesn't speed up just because AI generation does. And so output quality starts to vary wildly, and engineers or other job families end up babysitting AI systems.
The root cause here is that you have
stuck AI as an engine onto the wrong
part of your workflow, generation
instead of judgment. So, organizations
will usually measure success by how much
you can produce. And so, the instinct is
to just stick a bolt-on engine of AI
generation onto the generative part of
your process. Maybe it's making social
media stuff. Maybe it's writing
documents or breaking out tickets. Maybe
it's writing code for code reviews. We
think how much it can produce matters.
It's sticky because impressive demos look so good when they show speed of generation, and a lot of people are stuck in the mindset that that is the KPI that matters, while review burden stays hidden. You don't see review burden in a demo. You need to design systems that are human-in-the-loop from the start.
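One way to sanity-check a human-in-the-loop design is simple arithmetic: does AI generation throughput exceed the real review capacity of the humans behind it? This is a minimal sketch of that check; the function name and every number are invented for illustration, not taken from the talk.

```python
def review_backlog_per_week(drafts_per_week: int,
                            review_minutes_per_draft: float,
                            reviewer_hours_per_week: float,
                            num_reviewers: int) -> float:
    """Drafts per week that exceed review capacity (negative means spare capacity)."""
    capacity = num_reviewers * (reviewer_hours_per_week * 60 / review_minutes_per_draft)
    return drafts_per_week - capacity

# Illustrative: AI emits 400 drafts/week, each needing 20 minutes of expert
# review, and two reviewers each have 10 real hours/week for review work.
print(review_backlog_per_week(400, 20, 10, 2))  # 340.0 drafts/week pile up or get rubber-stamped
```

If that number is large and positive, the system is designed for generation, not judgment, and the review bottleneck the speaker describes is already built in.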
I've said this before: your best humans should feel more fingertippy on your work because of AI, not fighting AI. So AI should be able to draft useful pieces of work, whether that's code or something else, and a human should have comfortable capacity to review that work. That means being clear about what the scope of your AI assistant for a given task actually is. And it means getting serious about how much of a review burden your AI system imposes. If you have someone whose whole job is just hitting merge on AI-generated pull requests in your codebase, you are extending vulnerability into your system because you refused to think about the review bottleneck. There are real security implications, and yes, systems really do that. Please, please, please take the time to look at whether you are architecting your system for review, for putting expert humans in touch with the work, or not. Number four: the
unreliable intern. Let's say AI handles
80% of a task perfectly and it fails
catastrophically on the last 20%. And
you can't predict when failures are
going to occur. Supervision costs may
approach the cost of just doing the work
at that point. The root cause here is that AI lacks judgment, memory, and context for what you need specifically, and organizations keep trying to deploy AI on tasks that aren't AI-ready yet as a result. Part of the risk here, part of why this stays sticky, is that the 80% success rate in this situation, which is real, feels close enough to keep trying. Teams assume just one more tweak is going to fix that issue. But the real fix is actually simple: you intentionally audit the task for intern suitability before you decide if it's AI-ready. In other words, you ask yourself: would I give this to a smart but forgetful intern who can't learn? If I give them a clear task, clear context, and a clear structure for the ask and output, could they do it?
Break complicated tasks into subtasks.
You want AI to do the retrieval and
formatting. You want AI to do the sequential steps that are clear, and you want humans to be able to offer that review, as I said earlier. So be really explicit when you're going through this audit. I know this doesn't sound fun, but part of the ask when you do AI automation, when you are trying to unstick adoption blockers, is to extend organizational intent. You have to be clear about your intention for what tasks are suitable. And there is no substitute for going into the nitty-gritty and looking at those one by one. Failure number five is the handoff
tax. AI can handle one step in a multi-step process, but handoffs between AI and human are not fully worked out in the system design, and overall cycle time barely improves. Sometimes it gets worse. The root cause is that, again, you automated the wrong part of your workflow. You optimized for one bottleneck and created two new ones on either side, because you didn't think about your on-ramps and off-ramps. This is sticky because the per-step improvement looks great in the KPIs. Wow, we took our per-step time for this drafting stage down by 200%. Well, you have to take full cycle time for workflows very seriously, or you are going to discover how bad this is too late. The fix is simple, and again it comes down to intent. You have to map the full intended workflow before deploying AI.
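As a sketch of why the full map matters, here is an illustrative comparison showing how a dramatic speedup in one step can barely move end-to-end cycle time once the edges absorb the handoff tax. The step names and hours are entirely made up for the example.

```python
# Per-step hours for a hypothetical document workflow, before and after
# bolting AI onto the drafting step. All names and numbers are invented.
before = {"intake": 2.0, "draft": 8.0, "review": 3.0, "handoff": 1.0}
after  = {"intake": 4.0, "draft": 0.5, "review": 6.0, "handoff": 3.0}

def cycle_hours(steps):
    """End-to-end cycle time is the sum of every step, not just the AI step."""
    return sum(steps.values())

print(cycle_hours(before))  # 14.0 hours end to end
print(cycle_hours(after))   # 13.5 hours: a 16x faster draft step, almost no overall gain
```

The drafting KPI improved enormously, but the inflated intake, review, and handoff edges ate nearly all of it, which is exactly the pattern the speaker is warning about.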
Redesign it so AI can handle the on-ramps and off-ramps of all the components it needs to touch, so that you are not creating new bottlenecks at the edges of AI systems. And then you want to measure cycle time for the whole process. If cycle-time improvement is not moving, you probably have an issue with the edges of your AI system and how it hands off to humans. And yes, that is going to include training your humans in new patterns of work. Number six: the premature scale trap. Let's say you have a successful pilot and it's pushed rapidly companywide. You want to double down on what's working. Edge cases will immediately multiply. Support costs are going to explode, and quality is going to degrade. You, it turns out, were not ready for a companywide rollout.
The root cause here is that you usually have a much more controlled environment for pilots, with motivated users and very clean data. This is almost a function of the purpose of the pilot: you pick these pilots because they're easier, and people magically forget that when they go to roll out wide. The pilot team probably understood AI limitations and worked around them in a way the broader org doesn't and can't. This is sticky because leadership is just wired to seek a quick win on AI. They want to capture value fast, and it feels like testing and doing a gently scaled rollout is just unnecessary delay, not moving quickly in the age of AI. Well, I've got news: sometimes slow is smooth and smooth is fast. The fix is honestly to document what fundamental differences exist between the pilot environment and the real environment. So document all of the workarounds that your pilot team used to achieve their results; those become training. Test with skeptical users, not with enthusiastic ones. How do they use the tool? Research and understand: is it working? Make sure that you try a second pilot on a hard problem in the messiest part of the organization. Does that still work and deliver value? Then start to dial up in stages. Maybe you go to 100 people, and then 500 people, before you hit 50,000. You want to build support infrastructure: people, learning and development opportunities, very clear channels for approval, disapproval, and bug reports, lots of feedback opportunities for the software as you roll out. And then you want to monitor. If you go from five or 25 people in a pilot to 500 and your support tickets per user are increasing, then you are not ready to go farther. You have found an edge case you need to resolve. Take the time to do it. There are no shortcuts.
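One way to operationalize that monitoring is a simple gate on support tickets per user before each expansion stage. This sketch is illustrative only; the function, the numbers, and the tolerance threshold are assumptions for the example, not recommendations from the talk.

```python
def ready_to_expand(pilot_tickets: int, pilot_users: int,
                    stage_tickets: int, stage_users: int,
                    tolerance: float = 1.25) -> bool:
    """Gate: expand only if tickets-per-user at the current stage stays within
    `tolerance` times the pilot baseline; otherwise stop and fix edge cases."""
    baseline = pilot_tickets / pilot_users   # e.g. 10 tickets from 25 pilot users
    current = stage_tickets / stage_users
    return current <= baseline * tolerance

print(ready_to_expand(10, 25, 200, 500))  # True: 0.4 tickets/user, rate held steady
print(ready_to_expand(10, 25, 350, 500))  # False: 0.7 tickets/user, edge cases multiplying
```

A gate like this forces the "are we actually ready?" question to be answered with data at 100 and 500 users, before the 50,000-user rollout makes the answer expensive.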
Number seven: the automation trap. Let's say AI speeds up existing processes. Great, but it doesn't change outcomes. Activity increases; results don't. You have successfully automated inefficiency. Congratulations. The root cause here is that you deployed AI before asking whether the process should exist at all. You automated approval workflows that maybe shouldn't require approval. There are a lot of other examples of this. This is sticky because of the mechanical horse fallacy: the idea that a new technology should look like the previous one, in the way that a car should look like a horse. No. I know it's easier to automate what you're already doing than to reimagine work. But the value comes from reimagining work. Before deploying, ask: should we be doing this at all? And then you want to look at outcomes that will stay steady regardless of the process behind them. Those are your north stars. So you want to look at customer satisfaction. You want to look at the efficiency with which you can do a job, in a way that is steady regardless of the particular technology used. And you want to look at the business metrics you can drive given a piece of workflow you'll automate. Whatever it is, make sure that you prototype as close as possible to a zero-process version. Ask yourself: what if AI dropped this workflow? Would it work? You may not find that it does. You may find you need the workflow, or that AI can only take certain pieces of it. That's a great answer. But if you don't ask the question, you run the risk of a mechanical horse. You run the risk of an automation trap. And then ask yourself how you'll know when it's time to go to the next step. That's how people start to really build value. They look at these AI agent systems as evolving. They look at this north star of customer satisfaction or top-line revenue and they say: AI can't drop this process yet. It can do two parts out of six. We are going to come back in a quarter and see if we can get the whole thing, because AI is getting better.
Number eight: existential paralysis. Leadership is debating whether AI will cannibalize the core business, and you get conflicting directives from senior leaders. You have strategy discussion after strategy discussion, looping without decisions. This happens a little less often than some of the others, because I think the FOMO and the bias for action are real in this space. But fundamentally, I have been in these rooms. I have seen people worry about the risk of AI to the point where they take no action. The root cause is that AI's pace of change is dramatically outstripping traditional corporate strategic planning cycles. And so, by the time you have built your careful five-year AI strategy that feels steady, the landscape has shifted and it's already outdated. That has happened multiple times to organizations in the last two years. It is part of the reason organizations are regretting building custom models back in 2022: they launched them in 2023 and 2024, and now they're regretting it because the cloud-provided models are so much better. Their AI strategy stood still because it was on a corporate planning cycle, and the market shifted. Existential paralysis killed them.
And this is sticky because outcome unpredictability makes every single decision feel really high-stakes, so more analysis feels much safer than making a bold commitment. Well, the fix is simple. If you're a conservative organization, if you're not ready to make a truly bold burn-the-boats move, which is a way that startups are addressing this and having great success (I don't want to fail to call that out), you can adopt a portfolio approach. If you're feeling more conservative, you can allocate your budget across different horizons: a fast-payoff mode, a two-to-three-year bet mode, and so on. And you don't have to predict which one wins. You can diversify your bets. You can set speed targets, like getting complicated AI questions answered in Slack within 90 days, or getting to truly agentic CRM automation for leads in eight months. You can have different horizons in different bets and measure them differently. You also want to be clear, in the portfolio-bet world, that you can have learning investments and scaling investments, and that you have clear gates to get to scale.
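As a rough illustration of the portfolio structure, the horizons, budget shares, and milestone gates can be tracked in something as simple as a table. Every allocation, milestone, and date below is invented for the example; the talk prescribes the structure, not these values.

```python
# Hypothetical AI bet portfolio: budget shares across horizons, each with a
# milestone and a review date that forces a double-down-or-kill decision.
portfolio = {
    "fast_payoff": {"budget": 0.40, "milestone": "AI answers in Slack",        "due_days": 90},
    "medium_bet":  {"budget": 0.30, "milestone": "agentic CRM automation",     "due_days": 240},
    "learning":    {"budget": 0.20, "milestone": "pilot learnings written up", "due_days": 120},
    "scaling":     {"budget": 0.10, "milestone": "gated scale-up of a winner", "due_days": 180},
}

# Shares should account for the whole AI allocation.
assert abs(sum(p["budget"] for p in portfolio.values()) - 1.0) < 1e-9

def bets_due_for_review(portfolio, elapsed_days):
    """Horizons whose milestone date has arrived and need a decision now."""
    return [name for name, p in portfolio.items() if elapsed_days >= p["due_days"]]

print(bets_due_for_review(portfolio, 120))  # ['fast_payoff', 'learning']
```

The point of the gate list is that review is scheduled, not ad hoc: on day 120 two bets demand a verdict, which is the "different decision-making cycle" the speaker calls for.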
Essentially, what I'm saying is: if you are not a burn-the-boats organization, if you are a more cautious organization, which is where this happens, then you should be thinking about it as an investment in a series of equities, where you don't know which one is going to be a runaway success. But, you know, failing to invest will certainly prevent you from getting a runaway success. And so you need to balance your bets across all of the different equities you've got, watch their trajectories, and double down where they're working. And that requires a different decision-making cycle from leadership. That is the only way I've seen these kinds of existentially paralyzed organizations start to get themselves together. Finally, number nine: the
training deficit and the data swamp. Two sides of the same coin. You have low adoption despite tool availability. Users revert to old workflows. Do you know why? Because AI can't access the needed data, and data quality issues only surfaced after you deployed the tool. The root cause here is that you deployed the tool and taught people to use the tool, and never bothered to think about the data issues the tool was surrounded by, because it looked okay in training. Data infrastructure work is not fast. It doesn't ship in weeks. It's typically boring. It's expensive. It's slow. It's very difficult to fix data problems, and most organizations opt to skip it if they can. This makes it sticky because training is treated as just one-time onboarding, and you're not really thinking about how you build your employees' capability to solve problems with data using AI tools. So
there's a mindset shift you have to have, along with a commitment to data integrity. And so you have to think about how you're upshifting the data to meet AI needs, and also how you're upshifting training so your team can take advantage of the AI tooling once it's connected to data. I know AI deployment is exciting and fast, but if you deploy without paying attention to the training and the data availability, you're going to be in trouble. You should allocate, and I'm not kidding, three to six months of expected training at enterprise scale before you start to think about ROI. You want to train on workflows, not tools. So you want to ask, "How can I teach people to research competitive intelligence using AI?", not, "How do you use ChatGPT? Here's a handy two-line hack for a competitive intelligence research prompt." It's a deeper conversation. You can't assume the tools will be the same over time. So, as you're starting to build people up, focus your training on your AI champions, the ones who can teach their peers, because that is going to enable you to trigger network effects that will let AI adoption spread faster. On the data side, you're going to need to do a full data audit. You're going to need to prioritize data access, and you're going to need to assign clear data ownership, so that someone is accountable for making sure data is available for the
AI. What do we learn as we look across
all nine of these? I'll tell you the one biggest takeaway that I have after seeing all nine of these play out in organizations over the last few months: AI adoption failure remains a preventable problem. If you're having issues with your AI adoption, it's on you. Leadership is responsible for establishing the kind of intentful, thoughtful best practices that I'm describing here, the practices that keep you out of these failure modes. And when you run into them, you've got to be honest. You've got to say, "Here's the root cause. Here's what's making it sticky. Here's what we can do to get ourselves out of the mess." That's why I made this video.