Scaling Prompt Mastery for Enterprise Success

Key Points

  • Individual prompt‑mastery alone won’t scale; to succeed you must turn personal AI hacks into repeatable, team‑wide learning systems that deliver measurable business value.
  • A recent MIT study (August 2025) found that 95% of enterprise AI projects generate zero ROI within six months, sparking headlines that exaggerate AI’s failure but miss the nuanced reasons behind those outcomes.
  • The study’s framing is flawed because it surveys executives about builders’ actions, overlooking the disconnect between leadership’s AI expectations and the day‑to‑day practices of prompt‑focused contributors.
  • By aligning builders, leaders, and organizational workflows around proven principles—rather than chasing every new feature—you can join the 5% that turn AI pilots into sustainable, ROI‑driving initiatives.

**Source:** [https://www.youtube.com/watch?v=zw39KBZkPeA](https://www.youtube.com/watch?v=zw39KBZkPeA)
**Duration:** 00:19:20

## Sections

- [00:00:00](https://www.youtube.com/watch?v=zw39KBZkPeA&t=0s) **Scaling Prompt Mastery to Business Impact** - The speaker argues that personal prompt expertise alone won’t drive AI adoption, and outlines how to convert individual hacks into a systematic, team‑wide learning process that aligns leadership, secures budget, and delivers measurable business value.
- [00:03:16](https://www.youtube.com/watch?v=zw39KBZkPeA&t=196s) **Critiquing Misguided AI Study Reactions** - The speaker argues that the internet’s panic, methodological nitpicking, and simplistic buy‑versus‑build advice over the MIT AI pilot study miss its nuance, even though the study still offers useful insights.
- [00:06:32](https://www.youtube.com/watch?v=zw39KBZkPeA&t=392s) **Enterprise AI Needs Feedback Loops** - The speaker stresses that effective AI adoption in businesses demands ongoing effort, creating feedback loops, retraining pipelines, and maintaining context persistence, rather than expecting a free lunch, and critiques studies that ignore these practical implementation challenges.
- [00:09:39](https://www.youtube.com/watch?v=zw39KBZkPeA&t=579s) **Instrumenting AI Projects for Success** - The speaker advocates using detailed instrumentation and leading‑indicator metrics to assess AI system quality and align builder goals with leadership expectations.
- [00:12:51](https://www.youtube.com/watch?v=zw39KBZkPeA&t=771s) **Mapping AI Builder Skills to Influence** - The speaker explains how core AI principles translate into practical contributor skills, like shadow‑AI detective work, guardrail engineering, friction design, learning system architecture, health monitoring, and prompt library creation, to build influence and integrate AI across the business.
- [00:16:25](https://www.youtube.com/watch?v=zw39KBZkPeA&t=985s) **Intelligent Hybrid Workflow Strategies** - The speaker explains how using confidence scores, smart overrides, and hybrid architectures can formalize guerrilla workflows, audit shadow IT, embed selective friction for security, and turn personal mastery of prompts and APIs into a competitive business advantage.

## Full Transcript
In the next few minutes, I'm going to save you at least six hours of work. I'm going to help you turn your prompt mastery (let's say you've been following my videos and you feel like you know prompting) into a recipe for organizational AI success. What does it take to go from being a prompt ninja, perfecting ChatGPT prompts, Claude workflows, Claude Artifacts, whatever it is, chasing every new feature announcement? Claude code interpreter came out this week. At the same time, you're in a world where your company or the team around you is not on the same page. Even in AI-first businesses, there is a wide range of AI adoption patterns. Some people are still using manual workflows. And so you see your company's AI pilot stall out. Budgets might get slashed. Executives will say AI is a fad, or that things aren't working, or that they aren't seeing return on investment. Here's the secret that isn't getting told enough: the individual prompt mastery practice you're doing doesn't scale. But the answer is not just "get your leaders on board with AI." To be in the 5% who succeed, you need to figure out, as a builder, as someone who may not even be a director or a VP, how to level up your personal AI hacks into a system of learning, something that can help your team deliver business value. And that's what we're about today. Why am I talking about this now? Because the number one study circulating on the internet right now is the infamous "95% fail" study. It was published in August 2025 by MIT, reporting that 95% of enterprise AI initiatives deliver zero measurable ROI within six months. 95%. That's based on 150-plus executive interviews and $30 to $40 billion in represented AI spending. It sparked global headlines.
I can tell you that the first page of Google search results is all disaster headlines. Everything is bad. LinkedIn doom loops: people sharing the headlines, not reading them. Very few people have actually read this study. This is part of how I'm saving you time. All of this surface-level narrative overlooks some of the key nuances that separated the winners and the losers. And I did the digging so that you don't have to. Number one, nobody is talking about this: the frame for this study is mostly incorrect. This study is asking executives what builders are doing. And anyone who has worked in a business will tell you that executives' pictures of AI adoption and AI fluency differ dramatically from what builders on the ground are doing. And that's why I'm talking in this video to you. If you are building, if you are prompting, if you're excited about prompting, if you're a founder, a solo builder, whatever it is, you have a chance to change this narrative. And I'm going to give you specific principles that popped out from hours of study: looking at the MIT study, the people talking about it, everything else. First, what did the internet get wrong in its reaction? Then we'll get into how we dig in further. Number one, the executive panic is misplaced. We've got executives asking, "Is AI a bubble?" We've got stock crashes. We've got boardroom jitters. It's just not the right focus; it misses the nuance and the detail. We've got methodology debunking. I don't want to go into it. There are entire subreddits dedicated to debunking this particular study, saying you can't draw big conclusions because it's such a small study based on interviews and so on. Look, I understand statistics. I could go there. You don't have time for it. I don't have time for it.
We're going to save the time. Let's just say maybe the study is flawed, but there's a lot we can learn, and we don't have to worry about it. There's a lot of copy-paste journalism; we're not going to waste time on that. And there are a lot of really binary conclusions. One of the things the study came out with is an opinion on the binary buy-versus-build question, where the MIT study basically said you were much more likely to succeed in your AI pilot if you bought versus if you built. It's kind of convenient that they're saying that, because the people running the study are selling something. So yeah, I'm shocked that they came out in favor of buying. But wave that aside; let's say they have good intentions and they're honestly presenting what they see. It's still oversimplified advice that misses the realities I see diving into organizations daily, and so it's still wrong. So let's dive in and figure out what the real takeaways are, so you don't have to spend six hours staring at all this stuff. Number one, the MIT study actually measures only profit-and-loss impact over a 12-to-18-month period. That is it. It is a very narrow measure of success, and it's the first thing to be aware of. Number two, as I said, it only talks to execs. Number three, it only gives execs a buy-or-build choice. It's a very binary conversation; we talked about that. Number four, it talks a lot about workflows, but because they're talking with execs, it's too high-level to offer specific guidance on how to build those workflows in ways that actually work. This is where we're going to start to close the gap and give you takeaways that matter. So, what actually drives success? Let's turn it around and ask: what if you want to be in the 5%?
And you don't have the power of a vice president. You're a prompt expert, you're a team influencer for AI, maybe you're a leader on your team. How do you actually start to drive success? I want to suggest that we builders know technical patterns that come up again and again as success indicators, patterns that did not show up in the MIT conversations because execs aren't aware of them. So let me name them and share them, because so many of us are discovering them by running into them. We've got our eyes closed, we're feeling around in the dark, and we're discovering these principles. Let me just name them so that we're all talking off the same page. Hybrid architectures matter. We didn't have that choice in the MIT conversation; they didn't write it up. Every single time I have talked to businesses that are actually implementing AI, they have hybrid architectures. They are combining best-in-class models with custom workflow logic. They're not just doing roll-your-own versus buy. They're taking the best of both worlds, and they are recognizing the work it takes to do that. There is no such thing as a free lunch. One of the things the MIT study missed is that even if you buy, you are buying work. And again, executives don't see that. Number two, learning systems are how you should think about installing AI. You want to build feedback loops. You want to retrain your pipelines. You want to have context persistence. It's builders who understand this and who are stumbling into it.
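As an illustration of the feedback-loop idea the speaker describes, here is a minimal sketch. The names here (`PromptVariantSelector`, `record_outcome`) are my own for illustration, not from the video: the simplest learning system just records whether each prompt variant succeeded and routes future traffic toward the variant that performs best.

```python
import random

class PromptVariantSelector:
    """Minimal feedback loop: track outcomes per prompt variant,
    prefer the variant with the best observed success rate."""

    def __init__(self, variants):
        self.stats = {v: {"wins": 0, "trials": 0} for v in variants}

    def record_outcome(self, variant, success):
        # The "success" signal could come from human review or instrumentation.
        s = self.stats[variant]
        s["trials"] += 1
        if success:
            s["wins"] += 1

    def best_variant(self, explore_rate=0.1):
        # Occasionally explore so newer variants still collect data.
        if random.random() < explore_rate:
            return random.choice(list(self.stats))
        return max(
            self.stats,
            key=lambda v: self.stats[v]["wins"] / max(1, self.stats[v]["trials"]),
        )

selector = PromptVariantSelector(["concise", "step_by_step"])
selector.record_outcome("step_by_step", True)
selector.record_outcome("concise", False)
print(selector.best_variant(explore_rate=0.0))  # → step_by_step
```

In a real pipeline the success signal would come from the human review gates or instrumentation the speaker discusses, but the shape of the loop is the same: act, record the outcome, adjust.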
When I wrote my guide on RAG, when I wrote my guide on chunking data, it was all about how you start to take the data you have in the business, surface it, and make it available so that AI workflows can actually use it in feedback loops that allow the business to learn and get better next time at completing tasks that matter. This is one of the key findings from the study, but it was never pulled through. They never figured out how to pull it through because, again, they were talking to execs. So the generic conclusion of the study was, "Oh yeah, it would be good if AI was able to adapt to enterprise workflow realities." Yeah, I mean, you know, sliced bread is cool, too. Isn't that great? The reality is that the only way you get that done is by actually building feedback loops with persistent context and being willing to retrain your pipelines until it works. This is hard work. When I talk about what it takes to have an agentic workflow, even at an individual level, let alone a team level, people give this big heavy sigh. They're like, "That's a lot of work." And I'm like, "Yeah, that's why you should pick problems that matter. You're going to have a lot of work either way; you're going to have to put your shoulder to it, and you're going to harvest value at a disproportionate ratio if you have a better goal." So if you have a goal that's big, that's audacious, and you want to learn a lot and break through for the business: let's say your choice is between a RAG system for an HR policy manual and a RAG system that allows you to maintain context across deals for the sales team. Pick the sales-team option. You put in the same amount of work either way, and you get so much more value from the sales one.
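To make the chunking point concrete, here is a minimal sketch of fixed-size overlapping chunking, the usual starting point for a RAG pipeline. The function name and the 200-character/50-character defaults are illustrative assumptions, not taken from the speaker's guide.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size chunks that overlap, so facts near a
    boundary appear intact in at least one chunk for retrieval."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "x" * 500
print(len(chunk_text(doc)))  # → 3 chunks of 200 chars, overlapping by 50
```

Real pipelines usually chunk on sentence or section boundaries rather than raw character counts, but the overlap idea carries over unchanged.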
Build learning systems that matter and that are aligned to your goals. Number three: this one almost didn't get talked about, but the study did mention it. It said that building intelligent friction mattered for successful organizations. People think you want to make AI as easeful as possible. That's not true. You want to embed smart friction. And I'm going to give you specifics the study didn't. You want to embed confidence thresholds. What if you were able to show, in a printed-out response from an LLM, in red or green or yellow, the confidence the LLM has in each token it's presenting? Then you could see low-confidence tokens that might indicate hallucinations. What if you had human review gates where humans could go back and retune, where they could say, "You know what, I want a more aggressive LLM pass. I want a less aggressive LLM pass. I have sliders to tell it how to adjust; I don't just have a yes or no." Is that more friction? Yes. Is it something that's actually going to help you build a more useful system long term, because that friction is smart and reinforces your learning system? Also yes. Again, not something the MIT study jumped into, because they did not talk to builders. Instrumentation. You want to be looking really carefully at your accuracy, your latency, your error rates, your override metrics. The reason I'm saying this is not that I want you to invest in metrics for their own sake. If you haven't built a system yet, I want you to invest in making sure you know whether the model is solving a meaningful problem and what the quality of the solution it proposes is. And the reason I think this matters is that if you let execs determine ROI as the measure, that's fine, but it's down the road and you have no leading indicators.
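The confidence-threshold idea can be sketched directly. Several LLM APIs can return per-token log probabilities; the function below maps each token's probability to the red/yellow/green signal the speaker describes. The 0.5/0.8 thresholds and the function name are my illustrative assumptions, not from the study or the video.

```python
import math

def confidence_color(logprob, low=0.5, high=0.8):
    """Map a token's log probability to a traffic-light confidence color."""
    p = math.exp(logprob)  # convert log probability back to probability
    if p < low:
        return "red"      # low confidence: possible hallucination
    elif p < high:
        return "yellow"   # medium confidence: worth a second look
    return "green"        # high confidence

tokens = [("Paris", -0.05), ("was", -0.25), ("founded", -1.2)]
print([(t, confidence_color(lp)) for t, lp in tokens])
# → [('Paris', 'green'), ('was', 'yellow'), ('founded', 'red')]
```

Rendering those colors inline in the response is exactly the kind of smart friction described above: it slows the reader down precisely where the model was least sure.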
Instrumentation is a way to get actionable leading indicators that let builders actually drive success for AI projects, in a way you can then report up to leadership. If we don't talk about instrumenting these projects, we're going to let leadership dictate what success looks like, and we actually have a chance to influence that. Now, if you're wondering what instrumentation looks like beyond the technical metrics, I will say this: it is more useful to agree with leadership on a general goal and problem you want to solve, show the problem is being solved well, and then show the direction for extending the solution, than it is to talk about vanity metrics. And I think the most persistent vanity metrics are adoption and time saved. You'll notice I didn't mention those; they aren't technical metrics. What I find is that when execs see those, they think builders like you and me are trying to position success outside of ROI. Whereas with technical metrics, you have to explain them so execs understand what they are, but once you explain them, no one mistakes them for the end goal. And that matters. Another principle that really matters, one that doesn't get talked about at all (MIT didn't get to it) and that I think is really important for landing in that successful-AI category: shadow AI mining. You need to be the one who formalizes the guerrilla AI use cases your team depends on. Is there a GPT you're passing around that works? Is there a use of Perplexity that works? Whatever it is, mine the shadow AI for behaviors that work. And by the way, product managers, this is a hint for you.
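Returning for a moment to the instrumentation point above, here is a minimal sketch of what it could look like in code. The class and metric names are assumptions for illustration, not from the video: track accuracy, error rate, override rate, and latency per call, so you have leading indicators to report before anyone asks about ROI.

```python
from dataclasses import dataclass, field

@dataclass
class AIMetrics:
    """Leading-indicator counters for an AI workflow."""
    calls: int = 0
    correct: int = 0
    errors: int = 0
    overrides: int = 0
    latencies: list = field(default_factory=list)

    def record(self, latency_s, correct=False, error=False, overridden=False):
        self.calls += 1
        self.latencies.append(latency_s)
        self.correct += correct      # bools count as 0/1
        self.errors += error
        self.overrides += overridden

    def report(self):
        return {
            "accuracy": self.correct / max(1, self.calls),
            "error_rate": self.errors / max(1, self.calls),
            "override_rate": self.overrides / max(1, self.calls),
            "p50_latency_s": sorted(self.latencies)[len(self.latencies) // 2],
        }

m = AIMetrics()
m.record(0.8, correct=True)
m.record(1.2, correct=True, overridden=True)
m.record(2.0, error=True)
print(m.report())
```

A rising override rate, for example, is exactly the kind of early warning an ROI number arriving twelve months later can never give you.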
If you're in the B2B space, your customers have shadow AI use cases. If you can get those out of them and build for them, that is gold. That is gold. So do some shadow AI mining. Formalize those use cases and see if you can build them into actual workflows that work. If you're building a B2B product, you're going to make the business happy, because it's going to drive value. Look for the AI that's happening in the shadows, which, again, execs aren't going to be aware of. So you've seen some of these principles, the technical elements that drive success. I want to transition now to skills. If you want to translate individual contributor skill sets, the things that make you successful as an AI builder, into influence, these skills actually map back to some of the principles I just talked about. They're parallel; this is not a whole new set of things to remember. You can do the shadow AI detective work, and then you become someone who's known for systematizing workflows in a way that brings AI out of the shadows and into the business. That's influence. You can be known as someone who can engineer guardrails that build trust through transparency. Doesn't that sound good? Well, guess what you're doing? That's friction design; we just talked about that. You can become known as someone who designs AI products that improve with each interaction. That's learning system architecture; again, a way to develop influence from the same set of principles I just gave you. You can learn how to show the connection between engineering KPIs like accuracy and business ROI. Suddenly, you're known for technical health monitoring of AI systems.
That sounds like influence, too. You can learn to develop prompt libraries and templates that are tailored to diverse team needs, and architect those libraries in ways that enable other people to jump onto them. That's an example of context translation, and it is part of the hybrid architecture that shapes good AI systems. Let me give you just some examples of how you can go forward, how you can be the one who takes this 95% study, turns it on its head, and says: you know what, this is a study for builders, no one said so, and we can actually be much more influential. Product managers: you can run shadow AI surveys. You can build features that improve on the guerrilla workarounds. You can figure out where to put intelligent friction in your products. You can figure out how to communicate instrumentation to execs in ways they can understand, ways that give you leading indicators so they won't just go to an MIT study and say, "Well, there's no ROI." Solo founders and entrepreneurs: you can focus on narrow workflows that allow you to customize to the business. If there's anything you're hearing here, it's that hybrid architectures work. Businesses that buy AI solutions are taking on a tremendous burden. If you focus narrowly on workflows you can tailor, you can deliver deep value, help lift the load for those businesses, and earn trust, and that will enable you to go after adjacencies over time. Engineers: you can get good at instrumenting. And I realize, by the way, that this is a new skill set. There is a degree to which instrumenting AI is different from any other kind of instrumentation. Arguably, one of the biggest skill sets engineers need to pick up is data science.
How can they start to bring more data science into their instrumentation? Because these are non-deterministic models, and it's complicated to measure them. You can learn, you can scale up, you can do your best to instrument, you can do your best to automate. And in particular, this is an area where I think engineers can lead the way in the company. You can lead the way on context management and what it means for the business. You can lead the way on explaining why the difference between a bad work result and a good work result can be as simple as clean context: not putting the kitchen sink of the wiki into the prompt and wondering why it doesn't work. UX designers: there is so much here for you if you're anywhere in the design space. Surfacing confidence scores, figuring out how to help people doing QA loops do so intelligently, offering people great override options, figuring out how to take guerrilla workflows and formalize them without losing the value. Security and compliance: if you're working in that space, you definitely want to be auditing shadow usage, and you can only do that if you earn trust. You definitely want to be at the forefront of explaining how hybrid architectures can actually be more secure, by pushing the shadow IT footprint to the edges and speeding up the value the business gets from the install. You can figure out where to embed friction for sensitive approvals. There's so much here. I'm just giving you a few examples; you can fill in the rest. Your competitive advantage is that you know the prompts. You may be digging into the APIs as a builder.
You know the hidden workflows on your team that work, that really work, that the execs don't. This playbook I'm giving you here, this open door into the MIT study, what went wrong with it, and where we need to actually build, is going to help you bridge the gap between your personal mastery, your sense of achievement, and how business actually gets done. I guarantee you: if you start to think about how to build hybrid architectures, how to build systems that learn, how to have intelligent friction, how to think about buy versus build not as a binary tradeoff, and how to think about instrumentation that leads to good ROI outcomes without just measuring the dollar and without cheaping out and measuring adoption, you are going to be so far ahead of most individual contributors, like 99% of them, and it's going to give you options to drive good outcomes for the business. I don't know what your career goals are. Maybe you're looking for a promotion; this sure seems like a pathway there. Maybe you're looking to just extend your influence and you want to avoid a promotion; you can probably dictate your terms if you do something like this. This is a skill set. What the MIT study missed is that the people with the skill sets to do this are developing them on their own and reinventing the wheel time after time after time. I see this, and I want to lay it out in the open. These are principles I see being redeveloped over and over that the MIT study missed. They are reasons for success. So learn from these principles. Don't reinvent the wheel.
Know that other people are struggling alongside you, and that if you see a headline like that, you have a surprising amount of influence, even if you are not in leadership, to change the outcome of the business you work with. You are not powerless. You can build in ways that avoid that 95% headline failure outcome, which, by the way, I don't even know that I believe is correct; that's another story. I hope this has been helpful. I hope you see some of the pathway forward to go from individual prompt mastery to something more, to something that enables you to influence the business. Let me know what you think. Pop it in the comments.