Managing AI Skills for Real Value

Key Points

  • The rapid, unchecked adoption of AI tools—like Claude’s new “Skills” feature—can create a chaotic, unmaintained sprawl of custom solutions that add activity but no real value.
  • Organizations often rush to deploy AI (custom GPTs, Zapier, N8N, etc.) to appear innovative, yet without disciplined governance these projects fade as day‑to‑day priorities take over, leaving only vague time‑saving claims.
  • Effective AI integration requires leaders to develop new fluency skills that focus on maintaining, tracking, and measuring AI assets rather than simply proliferating them.
  • Empirical studies show AI can boost team productivity dramatically (40‑50% up to several hundred percent), confirming that the technology’s potential is real when applied correctly.
  • The core challenge for leaders is identifying whether obstacles are talent, culture, process, or tooling—and then implementing the three key principles of AI fluency to turn activity into measurable business value.

**Source:** [https://www.youtube.com/watch?v=rlJmALoNl5g](https://www.youtube.com/watch?v=rlJmALoNl5g)
**Duration:** 00:20:13

## Sections

- [00:00:00](https://www.youtube.com/watch?v=rlJmALoNl5g&t=0s) **Managing AI Skill Sprawl** - The speaker warns that unchecked proliferation of AI tools and "skills" can create chaotic, low-value activity and stresses the need for new leadership capabilities to keep AI initiatives organized and effective.
- [00:04:36](https://www.youtube.com/watch?v=rlJmALoNl5g&t=276s) **Infrastructure Boundaries vs Gatekeeping Culture** - The speaker explains that regulated-data constraints act as a secure infrastructure boundary that fosters creative AI skill development through test cases and documentation, while warning that excessive approval gates and review boards create a restrictive, culture-killing environment.
- [00:07:43](https://www.youtube.com/watch?v=rlJmALoNl5g&t=463s) **Essential AI Prompting Skills** - The passage outlines crucial, transferable abilities such as breaking complex problems into AI-sized chunks and deciding when to iterate versus restart prompts to achieve rapid, effective AI-driven solutions.
- [00:10:49](https://www.youtube.com/watch?v=rlJmALoNl5g&t=649s) **Deciding When to Use AI** - The speaker explains that discerning whether to begin a task with AI or handle it manually is a learnable meta-skill involving problem decomposition, context awareness, and examples of successes and failures.
- [00:14:57](https://www.youtube.com/watch?v=rlJmALoNl5g&t=897s) **Start Simple, Add Infrastructure** - The speaker urges teams to prioritize delivering value with minimal infrastructure, adding complexity only when necessary, to foster safe experimentation, skill growth, and effective AI problem-solving.
- [00:19:12](https://www.youtube.com/watch?v=rlJmALoNl5g&t=1152s) **Avoiding AI Infrastructure Overkill** - The speaker warns against building unnecessary AI infrastructure, often driven by vanity or hype, and urges teams to prioritize delivering real value, iterating based on breakages, and evolving toward genuine fluency and multiplicative business impact.

## Full Transcript
You know what's undertalked about? We don't talk about what happens when you have everybody using AI at work and your whole team is not building velocity. They're not building value. You can't tell the difference. It's a bunch of activity. And in fact, by some measures, maybe you slowed down. That does happen, and it happens by default. So I want to talk a little bit about the new kinds of skills that we as leaders need to be building in order to make sure that organizations take AI aboard and are able to actually get leverage, actually get value out of it.

What's the trigger for this one? Well, it's the launch of Skills. Skills was the big launch this week from Claude; I made a whole video about it. And the thing with Skills is that it has all the hallmarks of a brilliant technology that people who don't understand it can turn into a complete spaghetti-code activity mess in your organization. What I mean by that is: fast-forward three or four months, and you now have 5,000 skills in your organization for a team of 300 people (or pick your number), and no one's maintaining them. No one can track where they all are. They're in some sort of enterprise instance. Do you use the Excel skill version two, or version three, or the "Excel NATE" version? It's going to be a complete mess.

And when I think about that, it's not the first time I've seen this. We had this same problem with custom GPTs. We have it with AI integrations like Zapier. We have it with N8N agents. There's this excitement that comes when these AI tools burst onto the scene: executives greenlight it,
teams want to get a lot done, and they want to show they're doing the AI, so they build a bunch of stuff. But they don't necessarily deliver value, and those things gradually fall by the wayside as the evergreen priorities keep the team busy. So long term, three, four, five, six months out, the team is busy, they have AI, and they'll report on surveys that they're saving time, but you never see that value come through anywhere.

I think there are three key principles for building AI fluency. The organizations that do add value, that do figure out how to make AI actually work, figure these principles out. Most organizations that don't figure them out end up in that activity bucket.

Before we get into them, I do want to underline this for you: the case for AI productivity at the team level is closed. I'm 100% confident. We've seen the studies; it can enhance productivity at the team level hugely. We're talking a range from 40 to 50% all the way up into the multiple hundreds of percent. It is a massive breakthrough. So far, we mostly see that super bullish, optimistic case in small startups that are AI native. And a lot of the question we've been wrestling with as leaders has been: is that a talent issue, a culture issue, a process issue, or a tools issue? I want to start to break that logjam for us. I'm going to start with the assumption that if you have rolled out AI, and it has been enthusiastically received by at least a corner of the company, then your first question should not be about talent, and it should not be about tools. You should be asking yourself about fluency. And that gets a little bit at that culture piece. So let's dive in: what are the principles for building AI fluency that we don't often talk about?
Number one: enabling constraints rather than processes. Before we go further: "constraints" is a loaded word. A lot of people think constraints are bad because the word has a negative connotation. And if you're in IT and security, you think constraints are something that enables you to control, right? You can surround the risk and control it with constraints. I don't mean either of those things. I don't think it's a bad word, and I don't think it enables you to control and manage all risk. That's not my point here. My point is to set structure and boundaries that make good work and healthy work patterns with AI feel natural, and that make bad, unhealthy work patterns with AI feel hard. We are setting the incentives. The goal is not to control what people build. It's to set the constraints so people feel good building in healthy patterns.

What do I mean by that? One good example: every skill you build includes a test case. Another good example: every skill has a named maintainer in our business. Another example, outside of Skills: AI cannot access regulated data outside our virtual sandbox, where the data is secure; it's not even possible, you can't go and get it. In other words, these constraints are a building block that enables you to be as creative as you want within that space. "Skills include a test case" doesn't mean you can't create a skill. It just means: think a little, produce a test case that goes with the skill so we can see how it works, document it, include that markdown in the file. "AI can't access regulated data" might seem like a negative or controlling sort of case, but it's not. It's just an infrastructure boundary, a business rule encoded as infrastructure. You can do whatever you want inside. What is the opposite?
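As an aside, the "every skill ships with a test case and a named maintainer" rule above is the kind of constraint that can be encoded as infrastructure rather than as an approval process. A minimal sketch, assuming a hypothetical convention where each skill lives in its own folder with a `SKILL.md` manifest and a test-case file (the layout and the `Maintainer:` field are illustrative, not Claude's actual spec):

```python
from pathlib import Path

# Hypothetical convention checked here:
#   skills/<name>/SKILL.md     - manifest, containing a "Maintainer:" line
#   skills/<name>/test_case.*  - at least one test case shipped with the skill

def check_skill(skill_dir: Path) -> list[str]:
    """Return a list of constraint violations for one skill folder."""
    problems = []
    manifest = skill_dir / "SKILL.md"
    if not manifest.exists():
        problems.append("missing SKILL.md")
    elif "Maintainer:" not in manifest.read_text():
        problems.append("no named maintainer in SKILL.md")
    if not any(skill_dir.glob("test_case*")):
        problems.append("no test case shipped with the skill")
    return problems

def lint_skills(root: Path) -> dict[str, list[str]]:
    """Scan every skill folder under root; report only the ones that break a rule."""
    return {
        d.name: issues
        for d in sorted(root.iterdir())
        if d.is_dir()
        if (issues := check_skill(d))
    }
```

A check like this can run in CI: it never stops anyone from creating a skill; it just makes the healthy pattern the path of least resistance.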
What does a process-killing, negative control case look like? There are three types I want to call out here, all bad examples where you're using constraints to try to control people, and where you're going to kill the culture. Number one: "AI skills require IT approval." That's gatekeeping, and I see it a lot. Or: "this AI tool requires IT approval." The more you do that, the worse it gets. Gatekeeping creates a gatekeeping culture, and it drives out value. Another one: "you need to submit your skills or your prompts to the review board, and the review board will approve them." I've heard stories of people trying to argue about whether prompts are somehow copyrightable intellectual property inside the company, so everything has to go through the lawyers. No. Bureaucracy will kill value too. It kills the culture; it kills the ability to build. That's not the constraint you want. A third example of a constraint you don't want: "use only approved patterns for prompts. Use only approved patterns for skills. Only these templates." No. Again, you don't want to kill people's creativity.

So the way to think about it is this: enabling constraints raise the floor for the team. They make it easier for the team to move at their best. Process lowers the ceiling. It makes it hard for the team to excel. So the next time you think about a constraint (and I'm not saying don't do it; I'm actually asking you to have healthy constraints), think about finding constraints that raise the floor.

Principle number two for AI fluency: learn problem-solving skills that are AI-fungible. This is a huge one. It is under-discussed, and I want to get into it a little, because I think most people hear this and think I mean "learn prompting." I don't.
The people getting 10x better are not magically learning tools. They're not magical prompters by default. They aren't only learning prompting, and they aren't only learning Skills from Claude. They're learning the judgment that transfers across AI systems. And that is not something that is easy to learn on the open web, because most of the videos are made for clicks and entertainment, and the hard stuff, like how to decompose complex problems, doesn't get as many views, does it? So let me go into the specific skills that I have seen transfer. This would probably be a whole other hour of video, but we'll get a start and give you a sense of what those skills are, and we can get further into it in later editions of this executive briefing.

What transfers? The skill of decomposing complex problems into AI-sized pieces is a new skill in our world. It's a big deal, and if you have it, it's one of those universal skills that transfers across tools and prompts everywhere you go. Think about it: it helps you with prompting. It helps you with context engineering. It helps you know which tool to use. Understanding how to decompose a problem into AI-sized pieces means that you understand how AI models work, you understand your problem, and you are experienced enough at articulating problem framing that you can break the problem into separate chunks and then put the chunks into the model. It's a very advanced skill when I talk about it out loud. It is one of the underlying skills that the teams delivering 300%, 400%, 500% speedups have.

Let me give you another one: when to iterate versus when to start over. That one's a little easier; I think it's more of an easy-mode skill. It's learning, in any given LLM interaction:
When do you wipe the context window, and when do you not? When do you provide a course correction, and when do you just say, "you know what, we're going to start over with a better prompt"?

Let me give you another one: how do you recognize, intuitively, when AI is confident and incorrect? That's an extremely high-value skill. It is also a very fuzzy skill. You have to know enough about your domain. You have to know enough about how AI speaks and the utterances it uses. You have to know enough about the relationship between AI language and AI truth claims that you can read a particular statement and say: I've seen several hundred or several thousand AI statements before, and this one feels more like a false AI statement, because it doesn't quite ring true for my domain, and because I've noticed that when AI is hallucinating in this model, in this chat, it sounds more confident precisely when it's actually doubtful. Something like that, as an example. I'm not entirely making that up: I caught Claude hallucinating today in a very similar situation. I stared at it and said, I think 60% of this is incorrect; it doesn't smell right against my intuition of the domain. It also feels like Claude in particular likes to get weirdly specific and weave it into prose, because Claude is a great writer. I thought, this just feels wrong, so I challenged it, and I caught it. That's an advanced skill. That's a skill that teams that go fast have.

Another one: what kinds of context actually matter for a problem? That's a very advanced skill. Does quantitative data matter here? What cut of quantitative data? How clean does the data need to be to be good enough? What kinds of context will help the LLM make the next step, but maybe not all the steps?
How do I chunk that context? These are hard questions when I start talking about them, aren't they? I'll give you a couple more, and then some counterexamples. Here's one: when do I use AI versus doing it myself? When do I start with AI versus starting myself? When do I start myself and then extend into AI right away? These are all different questions, but they're related: you're trying to figure out the real working relationship for a given task, a given intent, a given domain. And that is a skill, an actually learnable skill. That's one of the things I want to emphasize. I've talked about decomposing problems, iterating, recognizing when AI is wrong, the kinds of context that matter, and choosing between using AI and doing it yourself. Those are all skills your team can learn. And in my experience, the more we are able to name them, which is exactly what I'm doing here, the more we are able to start a conversation about successful cases in our organizational context and unsuccessful cases, things that didn't go well. That really matters, because if teams can't see good and bad examples of these skills being practiced, it is hard to get them into their heads. These are meta-skills, right? They're advanced skills. You can't just go to one LLM, practice one, and have it work.

What isn't one of these skills? What's a counterexample, something that is not AI-fungible? I'll give you a real example: how to structure a specific prompt for a given language model. "This is how you hack GPT-5 with this magical prompt." No. That's not a transferable skill. "This is the exact workflow to use in N8N." No, not a transferable skill.
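To make the decomposition meta-skill above concrete, here is a minimal, model-agnostic sketch. `ask_model` and `review_document` are hypothetical names standing in for any LLM call and any oversized task; the transferable part is the structure, which works the same whichever tool sits behind it:

```python
from typing import Callable

# A toy illustration of "decompose complex problems into AI-sized pieces".
# Rather than one giant "review this whole document" prompt, each section
# becomes its own scoped request, and a final call synthesizes the findings.

def review_document(
    sections: list[str],
    ask_model: Callable[[str], str],
) -> str:
    """Review a document section by section, then merge the findings."""
    findings = [
        ask_model(f"List factual or logical problems in this section:\n{s}")
        for s in sections
    ]
    combined = "\n".join(f"- {f}" for f in findings)
    return ask_model(f"Merge these per-section findings into one summary:\n{combined}")
```

With a real model behind `ask_model`, each scoped request fits comfortably in a context window, and a weak answer on one section can be retried without restarting the whole job. The same shape carries over to Claude, ChatGPT, or an N8N workflow, which is exactly what makes the skill fungible.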
You need to think about: can your people get similar results in different AI tools? Can your people preferentially choose AI tools not based on familiarity, but based on these kinds of meta-skills? Can someone tell you, "when I decompose my problems, I find that this particular context window and this model work better for this type of problem, and here is why"? If you have 10 or 20 people with that skill set in an organization of 500, you will probably make more of a difference to your business than if you have all 500 trained on ChatGPT. I'm not kidding. The nonlinear unlock you get from small teams that operate like this is enormous.

Let's go to the third fluency principle: do not over-infrastructure your AI. In fact, where you can, adopt the rule of thumb that says: we don't add AI infrastructure until our workflows break. There's a real pattern right now, driven by vendors, in AI adoption: teams see the power of AI, and they immediately start building infrastructure to contain it. I've seen teams go off the rails where they have barely any users and they're building custom harnesses for agent orchestration, complicated RAG systems for knowledge management of a dirty wiki, elaborate frameworks for prompt management when nobody's prompting at work, and sophisticated toolchains for AI workflows. You get the idea. Often this is premature. If you are building, don't do anything but start to build and see how far you can get. If I can tell one thing, especially to engineers, it is this: start simple, and add infrastructure when the simple approach breaks. And that works for CTOs as well, right?
If you're thinking about build versus buy and vendor solutions, I would encourage you to frame it as: are your people in a place where their workflows are breaking despite current use of AI, so that they need this tool to unbreak the workflow? There are absolutely cases like that. You can get into more complicated orchestration, more complicated memory-management scenarios that are naturally required for a given use case. If you have a complicated data lake, and you're going to need to migrate it somewhere you have a very strong use case for AI, and you know the traditional data lake architecture won't work, sure, you're going to need to build some infrastructure. I'm not saying don't do it. I'm saying don't start by building infrastructure. Start by building value, and then build the infra when workflows actually break. Build it when you really need it.

So start simple. I think this will help the many teams I have seen go off the rails early, because the instinct to complicate as soon as you see something like AI is strong. I don't quite know why that is. I have sometimes wondered if it is because it gives teams an illusion of certainty around a brand-new technology. But regardless of the inner reason, just start simple. Add infrastructure when the simple approach breaks. That's it. It's not that complicated.

And you know, the three principles I've spent some time outlining here work together. If you enable constraints that raise the floor, you let people experiment safely without having to get permission. That means people actually can get better as they start to work within structures that push them toward healthy work habits.
And that applies to Skills, this week's release, as much as it applies to anything else. Minimal infrastructure keeps the focus on developing judgment, not on managing systems. So if you start simple, you are encouraging people to spend more time learning the art of problem solving (principle number two): the art of tackling increasingly complex problems with AI. And that, my friends, is where the real value lies. If you look at the teams that are getting 2x improvements and ask yourself where this massive multiple is coming from, it is coming from the ability to tackle much, much harder problems with AI, order-of-magnitude, 10x harder problems, than people who don't have these skills can tackle. That is why I spent so much time talking about problem-solving skills with AI. It is a new class of problem solving, understanding it is a big deal, and we are not teaching it well today.

My goal with this video has been to give you a sense of how you start to structure your team, your organization, and your incentives so that people are focused on that: on how they can start to learn to solve problems differently. And I'll remind you again, this doesn't need to be your whole organization to deliver extraordinary value. You can get extreme value from a group of 10 or so people putting this together. It's a big deal.

So my encouragement to you is pretty simple. This week, if you are a team leader, a director, an executive, anyone with people responsibilities, there are three questions that I think would be really productive to engage your team on. Number one: what are our enabling constraints?
What boundaries, here at work, on our team, in our org, make good AI work feel natural? And if the answer is "we don't know what good AI work feels like," there's your answer, right? That's where you need to dig in, peel the onion back, and start to figure out what good AI workflows look like. Someone on your team, some champion, has a good example. Go find them.

Question two: how do we develop good judgment in problem solving? Are we just training to the tools, or are we truly building the capability to problem-solve with AI? Are we multiplying our problem-solving muscles because people understand how to structure pieces of work in ways that AI can use and assist them with? It's essentially learning the skill of playing with robots: you have to learn to work side by side with a robot, to pass the work over, let the robot take a turn, and then come back. That is literally what we're doing. It is a new skill, and it is not the same as passing work to a human. People have to learn the difference. That is why I spent so much time in this video talking through that skill set; it's a critical one. So ask your team: how are we doing at developing judgment? Ask your learning and development team, if you have one: are we just training to the tools? The vendors will encourage that; they want you to train to the tools and buy more tools. Or are we training to the skill, to the capability? And if you're confused, you can always ask me.

Finally, ask: where are we overbuilding? What infrastructure have we been tempted to add before we know we need it? Is it a vanity project? Is it something for the board? Is it something we said we'd commit to because we saw a LinkedIn post? I've seen that done.
Most of us have. But seriously: what infrastructure don't we really need for AI? Can't we just focus on building the value and find out what breaks? It's really important to think that way. You will have things break, and you can add necessary complexity and infrastructure at that point, but then you're not wasting your effort.

So there you go. Those are my words of wisdom for you. I think this is especially important in an era when we are going to get release after release that feels a lot like Skills from Claude: democratizing, empowering, everyone loves it. Soon it will proliferate across your business, and it may not deliver actual value. And so this piece is all about how we move from activity to true fluency and multiplicative value, for teams and for the business.