Learning Library

← Back to Library

Nine Patterns of AI Adoption Failure

Key Points

  • AI adoption frequently fails, so the speaker outlines nine common failure patterns to give organizations a clear vocabulary for diagnosing and fixing problems.
  • The first pattern, the “integration tarpet,” occurs because budgets focus on development costs while ignoring the extensive coordination, legal, and compliance work required for deployment; the remedy is to treat stakeholder approval paths as a core part of the project, often by assigning a dedicated deployment PM to manage those processes.
  • The second pattern, a “governance vacuum,” emerges when security and red‑team findings expose vulnerabilities that were never overseen by a formal AI governance framework; establishing clear policies, oversight bodies, and continuous review processes is essential to close this gap.
  • A recurring theme across all patterns is that organizations assume technical success guarantees smooth rollout, but hidden organizational and process complexities make failures sticky unless they are deliberately budgeted for and addressed up front.
  • Ultimately, the fix for each failure pattern involves recognizing and budgeting for the non‑technical side of AI—process, people, and policy—as seriously as the code itself.

Sections

Full Transcript

# Nine Patterns of AI Adoption Failure **Source:** [https://www.youtube.com/watch?v=9m1Bd6cxYBk](https://www.youtube.com/watch?v=9m1Bd6cxYBk) **Duration:** 00:20:32 ## Summary - AI adoption frequently fails, so the speaker outlines nine common failure patterns to give organizations a clear vocabulary for diagnosing and fixing problems. - The first pattern, the “integration tarpet,” occurs because budgets focus on development costs while ignoring the extensive coordination, legal, and compliance work required for deployment; the remedy is to treat stakeholder approval paths as a core part of the project, often by assigning a dedicated deployment PM to manage those processes. - The second pattern, a “governance vacuum,” emerges when security and red‑team findings expose vulnerabilities that were never overseen by a formal AI governance framework; establishing clear policies, oversight bodies, and continuous review processes is essential to close this gap. - A recurring theme across all patterns is that organizations assume technical success guarantees smooth rollout, but hidden organizational and process complexities make failures sticky unless they are deliberately budgeted for and addressed up front. - Ultimately, the fix for each failure pattern involves recognizing and budgeting for the non‑technical side of AI—process, people, and policy—as seriously as the code itself. ## Sections - [00:00:00](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=0s) **AI Integration Tarpit Failure** - The speaker outlines nine common AI adoption failure patterns, beginning with the “integration tarpit” where rapid engineering outpaces slow sales, legal, and compliance processes due to budgeting for code rather than coordination, making technically sound prototypes hard to deploy. - [00:03:06](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=186s) **AI Governance Void in Security** - The transcript highlights how the absence of a dedicated owner for AI‑related vulnerabilities creates a governance gap that stalls incident response, especially in high‑compliance industries, and calls for specialized talent and tools to treat AI governance as a first‑class security function. - [00:06:10](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=370s) **Hidden Review Burden in AI** - The speaker warns that flashy AI demos conceal the hidden cost of human review, urging designers to build human‑in‑the‑loop systems that limit review burden, mitigate security risks, and account for AI’s unreliable performance. - [00:10:10](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=610s) **Avoiding AI Scaling Pitfalls** - It warns that without seamless AI‑human handoffs and careful measurement, scaling pilots too quickly creates edge bottlenecks, exploding support costs, and degraded quality. - [00:13:14](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=794s) **Avoiding the Mechanical Horse Fallacy** - The speaker warns against simply digitizing existing processes—calling it the mechanical‑horse fallacy—and urges teams to reimagine work, define outcome‑focused north‑star metrics, and prototype zero‑process workflows to ensure AI adds real value. - [00:16:21](https://www.youtube.com/watch?v=9m1Bd6cxYBk&t=981s) **Balancing Bets Across AI Horizons** - The speaker advises cautious firms to treat AI initiatives as a diversified portfolio—allocating funds to fast‑payoff, medium‑term, learning, and scaling projects with clear milestones—so they can track trajectories, double down on successes, and avoid paralysis in decision‑making. ## Full Transcript
0:00It seems like AI adoption fails more 0:02often than it goes right. I want to take 0:04today's video and talk specifically 0:06about the nine different AI failure 0:09patterns that I've seen in organizations 0:11over the last few months in 2025. I want 0:14to get at not just what happened, but 0:16what is the root cause? What are the 0:18things that make that pattern sticky, 0:20that fail pattern sticky, and then what 0:22is the actual fix that unsticks an 0:24organization and gets it back on track. 0:26I don't think we talk enough about the 0:28categories of AI adoption failure and I 0:31want to lay them out really cleanly so 0:32that we have a vocabulary to talk about 0:34them to address them and ultimately to 0:36get back on track. Let's start with 0:38number one, the integration tarpet. 0:41Let's say engineering ships working AI 0:43code in weeks, but sales and legal and 0:46compliance cycles were never meant to 0:48run that fast. They stretch into months 0:50or longer. Cross team stakeholder 0:52meetings are multiplying all over the 0:54business talking about policies and 0:56approvals. The root cause here is that 0:59the organization's budget for AI 1:01development was structured in terms of 1:03dollars and cents and not in terms of 1:06coordination cost. When you are working 1:08with a prototype, it's very simple. But 1:11a prototype does not equal a system that 1:15fits data architecture, compliance, and 1:17politics together. Yes, organizational 1:20politics are relevant. So why is this 1:23sticky? Everyone assumes if it works 1:25technically then deployment is going to 1:28be easy but integration complexity 1:30becomes visible only after the build is 1:32is complete right you can see it working 1:35you can see that deployment is easy 1:37executives tend to not understand why 1:39it's not being used why we're stuck on 1:42paper the committees will make sense the 1:44IT policy will make sense and none of it 1:47actually delivers value the fix is 1:49pretty simple take approval paths take 1:52policy paths as critically as you take 1:55writing code. You want to fix and 1:57pre-wire in how you are fasttracking 2:00adoption through the organization before 2:03you jump in and start just saying we can 2:06ship this thing quickly. Treat the human 2:09problem as significant as the code 2:11problem. you want to assign someone, 2:14maybe it's a deployment PM whose entire 2:17job separate from the engineering piece 2:19is just to say, do we have all of our 2:23ducks in a row process-wise to actually 2:24get this into people's hands? Do we have 2:26data support and approval? Do we have 2:28legal clearance? Do we have any concerns 2:31around compliance we need to address? Do 2:34we have any concerns from HR that we 2:36need to address? They're not asking, 2:38does the model work? That needs to be 2:40somebody else's job. All they're trying 2:41to do is to wrangle the stakeholders and 2:44move them quickly along. You have to 2:46budget for the organizational side to 2:48get out of the integration tarpet. There 2:50is no shortcut. Failure number two, 2:54governance vacuum. Let's say red teams 2:57find vulnerabilities. We actually had 2:59that happen this week with teams finding 3:01vulnerabilities in AI powered browsers. 3:03Security will flag an unapproved 3:06architecture when a red team 3:07vulnerability is found, but there's no 3:09owner for what happens if AI does 3:12something. And so in this situation, if 3:14your ordinary red team, your ordinary 3:16security issues are triggered by AI, you 3:20often run into a governance issue. 3:22Really, there's no one who's a directly 3:24responsible individual who treats AI 3:26governance as a first class object. And 3:28that is your core problem. That is why 3:31when small vulnerabilities are sometimes 3:33found in your agentic systems, you get 3:35stuck. When they're found in your 3:37implementation of a custom chat GPT, you 3:40get stuck. You have to treat governance 3:43as a first class object. And that is 3:45especially true if you are in a high 3:47compliance industry. And here's the 3:49trick. You probably knew that if you 3:52were in high compliance. But what you 3:53may not have realized is that the skill 3:55set for governance and AI is different 3:58from the skill set for a lot of typical 4:01IT security projects. And that is why 4:03this problem is sticky because people 4:06often try and address it by saying let's 4:08give it a security review. Let's give it 4:10a software review. That feels like 4:12bureaucratic slowdown. It looks like 4:14bureaucratic slowdown. The teams that 4:16are addressing it don't have the tools 4:17to do it right. So teams just kind of go 4:20hands off and one incident ends up 4:22freezing everything. A governance vacuum 4:25ends up grinding your system to a halt. 4:27The fix is simple. You need to embed the 4:30right talent with the right tools to 4:33make security a day zero problem. That 4:36means that you have to think about what 4:38the AI agent can access, what it does, 4:41what its blast radius is, what failure 4:43modes look like, how you architect 4:45security rather than making security the 4:47agents problem with decisioning, how you 4:50can reliably evaluate whether the agent 4:52is doing the right thing, how you test 4:55in production systems a range of 4:58utterances or words from the user that 5:01let you know that if there are things 5:03the system should not be responding to, 5:05prompt injection attacks, etc. You can 5:08prove that you're addressing them 5:10correctly at the desired rate of 5:12success. If you don't make all of that 5:14somebody's problem, and it is again not 5:16a traditional security software purview. 5:19It's a new set of skills, you're going 5:20to be in trouble. Failure number three, 5:22the review bottleneck. AI will generate 5:25output so fast. I've talked about this 5:26before, but human review doesn't get 5:28shorter just because AI gets longer. And 5:30so output quality starts to vary super 5:34wildly. and engineers or other job 5:37families end up babysitting AI systems. 5:39The root cause here is that you have 5:41stuck AI as an engine onto the wrong 5:43part of your workflow, generation 5:46instead of judgment. So, organizations 5:48will usually measure success by how much 5:50you can produce. And so, the instinct is 5:53to just stick a bolt-on engine of AI 5:55generation onto the generative part of 5:57your process. Maybe it's making social 5:59media stuff. Maybe it's writing 6:01documents or breaking out tickets. Maybe 6:03it's writing code for code reviews. We 6:06think how much it can produce matters. 6:07It's sticky because impressive demos 6:10look so good when they show speed of 6:12generation. And a lot of people are 6:14stuck in the mindset that that is the 6:16KPI that matters and review burden is 6:19hidden. You don't see review burden in a 6:21demo. You need to design systems that 6:24are human in the loop from the start. 6:26I've said this before. Your best humans 6:28should feel more fingertippy on your 6:31work because of AI, not not fighting AI. 6:35So AI should be able to draft useful 6:37pieces of work, whether that's code or 6:39something else. And a human should have 6:42comfortable capacity to review that 6:45work. That means being clear about what 6:47your AI scope actually is, what the 6:51scope of your AI assistant for this task 6:53actually is. And it means getting 6:56serious about how much of a review 6:59burden your AI system imposes. If you 7:02have someone and all they're doing is 7:05just hitting merge on AI generated poll 7:08requests on your codebase, you are 7:10extending vulnerability into your system 7:13because you refused to think about the 7:15review bottleneck. And there are real 7:17security implications and yes, systems 7:19really do that. Please, please, please 7:22take the time to look at whether you are 7:25architecting your system for review and 7:27for putting expert humans in touch with 7:29the work or not. Number four, the 7:32unreliable intern. Let's say AI handles 7:3580% of a task perfectly and it fails 7:38catastrophically on the last 20%. And 7:41you can't predict when failures are 7:43going to occur. Supervision costs may 7:45approach the cost of just doing the work 7:46at that point. The root cause here is 7:49that AI lacks judgment and memory and 7:51context for what you need specifically 7:54and organizations keep trying to deploy 7:57AI on tasks that aren't AI ready yet as 8:00a result. Part of the risk here, part of 8:02why this stays sticky, is that the 80% 8:05success rate in this situation, which is 8:07like real, it feels close enough to keep 8:10trying. Teams assume just one more tweak 8:13is going to fix that issue. But the real 8:15fix is actually simple. You 8:17intentionally audit the task for intern 8:21suitability before you decide if it's AI 8:23ready. In other words, you ask yourself, 8:25would I give this to a smart but 8:27forgetful intern who can't learn? If I 8:29give them a clear task and clear context 8:32and a clear and a clear structure for 8:35the ask and output, could they do it? 8:37Break complicated tasks into subtasks. 8:40You want AI to do the retrieval and 8:42formatting. You want AI to do these 8:45sequential steps that are clear and you 8:47want humans to be able to offer that 8:49review as I said earlier. So be really 8:52explicit when you're going through this 8:54audit. I know this doesn't sound fun, 8:56but part of the ask when you do AI 8:59automation, well, when you are trying to 9:01unstick adoption blockers, it's an ask 9:04to extend organizational intent. You 9:07have to be clear about your intention 9:09for what tasks are suitable. And there 9:12is no substitute for going into the 9:13nitty-gritty and looking at those one by 9:16one. Failure number five is the handoff 9:18tax. AI can handle one step in a 9:20multi-step process. Handoffs between AI 9:23and human are not fully worked out and 9:25the system design and overall cycle time 9:28barely improves. Sometimes it gets 9:30worse. The root cause is you again you 9:32automated the wrong part of your 9:33workflow. You optimized for one 9:35bottleneck and you created two new ones 9:37on each side cuz you didn't think about 9:40your on-ramps and off-ramps. This is 9:42sticky because the per step improvement, 9:45the KPIs are going to look great. Wow, 9:47we took our per step for this drafting 9:49stage down by 200%. Well, you have to 9:52take full cycle time for workflows very 9:55seriously or you are going to discover 9:58how bad this is too late. The fix is 10:01simple and again it comes down to 10:02intent. You have to map the full 10:05intended workflow before deploying AI. 10:08Redesign it so AI can handle the 10:10on-ramps and off-ramps of all the 10:12components it needs to touch so that you 10:15are not creating new bottlenecks at the 10:18edges of AI systems. And then you want 10:21to measure cycle time for the whole 10:22process. If cycle time improvement is 10:25not moving, you probably have an issue 10:27with the edges of your AI system and how 10:30it hands off to humans. And yes, that is 10:32going to include training your humans in 10:34new patterns of work. Number six, the 10:37premature scale trap. Let's say you have 10:39a successful pilot and it's pushed 10:41rapidly to companywide. You want to 10:43double down on what's working. Edge 10:45cases will immediately multiply. Support 10:48costs are going to explode and quality 10:51is going to degrade. You, it turns out, 10:53were not ready for companywide roll out. 10:56The root cause here is that you usually 10:58have a much more controlled environment 11:00for pilots with motivated users and very 11:04clean data. This is almost a function of 11:06the purpose of the pilot. You pick these 11:08pilots because they're easier and people 11:10magically forget that when they go to 11:12roll out wide. The pilot team probably 11:14understood AI limitations and worked 11:16around them in a way the broader org 11:18doesn't and can't. This is sticky 11:20because leadership is just wired to seek 11:23a quick win on AI and they want to 11:25capture value fast and they're 11:26optimizing and it feels like testing and 11:29doing a gently scaled roll out is just 11:32unnecessary delay and not moving quickly 11:34in the age of AI. Well, I've got news. 11:37Sometimes slow is smooth and smooth is 11:39fast. The fix is honestly to document 11:43what fundamental differences exist 11:46between the pilot environment and the 11:48real environment. So document all of the 11:50workarounds that your pilot team used to 11:52achieve the results. Those become 11:54training. Document with skeptical users, 11:58not with enthusiastic ones. How do they 12:00use the tool? Research, understand, is 12:02it working? Make sure that you try a 12:04second pilot on a hard problem in the 12:07messiest part of the organization. Does 12:08that still work and deliver value? Then 12:11start to dial up in stages, right? Maybe 12:13you go to a 100 people and then 500 12:14people before you hit 50,000. Right? You 12:17want to build support infrastructure in 12:19terms of people, learning and 12:21development opportunities, giving very 12:24very clear approval, disapproval, bug 12:26reports, lots of feedback opportunities 12:29for the software as you roll out. And 12:31then you want to monitor, right? If you 12:33get from five people or 25 people in a 12:35pilot to 500 and your support tickets 12:38are increasing per user, then you are 12:41not ready to go farther. You have found 12:43an edge case you need to resolve. take 12:45the time to do it. There's no shortcuts. 12:48Number seven, automation trap. Let's say 12:50AI speeds up existing processes. Great, 12:54but it doesn't change outcomes. Activity 12:57increases, results don't. You have you 12:59have successfully automated 13:01inefficiency. Congratulations. The root 13:03cause here is you deployed AI before 13:07asking whether the process should exist 13:09at all. You automated approval workflows 13:11that maybe shouldn't require approval. 13:13Right? There's a lot of other examples 13:14of this. This is sticky because we have 13:17the mechanical horse fallacy. That's the 13:19idea that a new technology should look 13:22like the previous one in the way that a 13:24car should look like a horse. No. I know 13:26it's easier to automate what you're 13:28already doing than to reimagine work. 13:30But the value comes from reimagining 13:32work. Before deploying, ask, should we 13:35be doing this at all? And then you want 13:38to look at outcomes that will stay 13:39steady regardless of the process behind 13:42them. Those are your north stars. So you 13:44want to look at customer satisfaction. 13:46You want to look at the efficiency with 13:48which you can do a job in a way that is 13:50steady regardless of the particular 13:53technology used. And you want to look at 13:55the perhaps business metrics that you 13:58can drive given a piece of workflow 14:00you'll automate. Whatever it is, make 14:02sure that you prototype as close as 14:05possible to a zero process version. Ask 14:08yourself, what if AI dropped this 14:10workflow? Would it work? You may not 14:12find that it does. You may find you need 14:14the workflow. N I can only take certain 14:16pieces of it. That's a great answer. But 14:18if you don't ask the question, you run 14:21the risk of a mechanical horse. You run 14:23the risk of an automation trap. And then 14:26ask yourself how you'll know when it's 14:28time to go the next step. That's sort of 14:29how people start to really build value. 14:32They look at these AI agent systems as 14:34evolving. They look at this north star 14:36of customer satisfaction or topline 14:38revenue and they say AI can't drop this 14:40process yet. It can do two parts out of 14:42six. We are going to come back in a 14:45quarter and see if we can get the whole 14:46thing because AI is getting better. 14:48Number eight, existential paralysis. 14:50Leadership is debating whether AI will 14:52cannibalize the core business and you 14:54get conflicting directives from senior 14:56leaders. You have strateg strategy 14:58discussions after strategy discussions 15:00that loop without decisions. This 15:02happens a little bit less often than 15:04some of the others because I think the 15:06FOMO and the bias for action are real in 15:08this space. But fundamentally, I have 15:10been in these rooms. I have seen people 15:12worry about the risk of AI to the point 15:15where they take no action. The root 15:17cause is that AI's pace of change is 15:19dramatically outstripping traditional 15:21corporate strategic planning cycles. And 15:23so, by the time you have built your 15:25careful 5-year AI strategy that feels 15:28steady, the landscape has shifted and 15:30it's already outdated. That has happened 15:32multiple times to organizations in the 15:34last two years. It is part of the reason 15:36why organizations are regretting 15:38building custom models back in 2022 15:40because now they've launched them in 23 15:4224 and now they're regretting it because 15:44the the cloud provided models are so 15:46much better. Their AI strategy stayed 15:49still because it was on a corporate 15:51planning cycle and the market shifted. 15:53Exist existential paralysis killed them. 15:55And this is sticky because outcome 15:57unpredictability makes every single 15:59decision feel really high stakes. So 16:01more analysis feels much safer than 16:04making a bold commitment. Well, the fix 16:06is simple. You can take if you're a 16:08conservative organization, if you're not 16:10ready to make a truly bold burn the 16:12boats move, which is a way that startups 16:14are addressing this and having great 16:15success. I don't want to not call that 16:17out. You can adopt a portfolio approach. 16:19If you're feeling more conservative, you 16:21can allocate your budget across 16:22different horizons, a fast payoff mode, 16:24a two to threeear bet mode, etc. And you 16:27don't have to predict which ones wins. 16:29You can diversify your bets. You can set 16:31speed targets like getting to 16:33complicated AI questions answered in 16:36Slack within 90 days and getting to 16:40truly agentic CRM automation for leads 16:43in 8 months, right? Like you can have 16:45different horizons in different bets and 16:47measure them differently. You want to 16:49also be clear in the portfolio bet world 16:52that you can have learning investments 16:55and scaling investments and that you 16:57have clear gates to get to scale. 16:59Essentially what I'm saying is if you 17:01are not a burn the boats organization, 17:03if you are a more cautious organization, 17:05which is where this happens, then you 17:07should be thinking about it as an 17:09investment in a series of equities and 17:11you don't know which one is going to be 17:13a runaway success. But, you know, 17:15failing to invest will certainly prevent 17:17you from getting a runaway success. And 17:19so, you need to balance your bets across 17:21all of the different equities you've 17:23got, watch their trajectories, and 17:25double down where they're working. And 17:26that requires a different 17:27decision-making cycle from leadership. 17:29And that is the only way I've seen that 17:32these kinds of existential paralysis 17:34organizations start to get themselves 17:36together. Finally, number nine, the 17:38training deficit and the data swamp. Two 17:40sides of the same coin. You have low 17:42adoption despite tool availability. 17:45Users revert to old workflows. Do you 17:47know why? Because AI can't access needed 17:51data and data quality issues only 17:53surfaced after you deployed the tool. 17:55The root cause here is that you deployed 17:57the tool and taught people to use the 17:59tool and never bothered to think about 18:01the data issues that the tool was 18:03surrounded by and it looked okay in 18:06training. Data infrastructure work is 18:08not fast. It doesn't ship in weeks. It's 18:10typically boring. It's expensive. It's 18:12slow. It's very difficult to fix data 18:14problems and most organizations opt to 18:16skip it if they can. This makes it 18:18sticky because training is treated as 18:20just one-time onboarding and you're not 18:22really thinking about how you build the 18:24capability to solve problems with data 18:28using AI tools for your employees. So, 18:31there's a mindset shift you have to have 18:33along with a commitment to data 18:34integrity. And so, you have to think 18:36about how you're upshifting the data to 18:39meet AI needs. and also how you're 18:42upshifting training so your team can 18:44take advantage of the AI tooling once 18:46it's connected to data. I know AI 18:48deployment is exciting and fast, but if 18:50you deploy without paying attention to 18:52the training and the data availability, 18:54you're going to be in trouble. You 18:55should allocate like I'm not kidding 3 18:58to 6 months of expected training at 19:00enterprise scale before you start to 19:02think about ROI. You want to train on 19:04workflows, not tools. So, you want to 19:07ask, "How can I teach people to research 19:09competitive intelligence using AI?" Not, 19:12"How do you use chat GPT?" Here's a 19:14handy twoline hack for a research 19:16competitive intelligence prompt. It's a 19:18deeper conversation. You can't assume 19:20the tools will be the same over time. 19:22So, you want to focus in your training 19:24as you're starting to build people up, 19:26focus on your AI champions, focus on the 19:28ones who can teach their peers because 19:31that is going to enable you to trigger 19:32network effects that will enable AI 19:34adoption to spread faster. On the data 19:37side, you're going to need to do a full 19:39data audit. You're going to need to 19:41prioritize data access and you're going 19:43to need to assign clear data ownership 19:46so that someone is accountable for 19:48making sure data is available for the 19:51AI. What do we learn as we look across 19:54all nine of these? I'll tell you the one 19:57biggest takeaway that I have after 19:59seeing all nine of these play out in 20:01organizations over the last few months 20:02is that AI adoption remains a 20:06preventable problem. If you're having 20:08issues with your AI adoption, it's on 20:10you. Leadership is responsible for 20:13establishing the kind of intentful, 20:15thoughtful best practices that I'm 20:17describing here that keep you out of 20:20these failure modes. And when you run 20:22into them, you got to be honest. You got 20:24to say, "Here's the root cause. Here's 20:26what's making it sticky. Here's what we 20:28can do to get ourselves out of the 20:29mess." That's why I made this video.