
When to Use AI vs Agents

Key Points

  • The video introduces a four‑category decision framework for choosing between plain data processing, classical predictive ML, generative AI, and AI agents, helping viewers know exactly when each approach is appropriate.
  • Category 1 (plain data processing) covers simple cleaning, aggregation, and reporting tasks—any problem that can be expressed as a basic math formula should **not** use AI or agents because it’s slower, costlier, and less reliable.
  • Category 2 (classical predictive machine learning) remains valuable for structured historical data with a clear target variable (e.g., demand forecasting, fraud detection, churn prediction), despite current hype around large language models.
  • The later categories (generative AI/large language models and AI agents) are suited to tasks requiring natural‑language understanding, creative content generation, or autonomous workflow orchestration—situations where pattern‑based ML or simple queries fall short.
  • The presenter also provides practical scripts and principles for pushing back on unrealistic AI requests from stakeholders, ensuring solutions are matched to the right technology category.

Sections

**Source:** [https://www.youtube.com/watch?v=1FKxyPAJ2Ok](https://www.youtube.com/watch?v=1FKxyPAJ2Ok)
**Duration:** 00:22:01

- [00:00:00](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=0s) **When to Use (or Skip) AI** - The speaker outlines a four‑category decision framework for data and insight problems, emphasizing that simple data processing tasks—like basic reporting or aggregation—should never involve AI, agents, or generative models.
- [00:03:24](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=204s) **LLM Applications for Unstructured Tasks** - The speaker explains how large language models are suited to generating textual (and image‑based) outputs from mixed, unstructured data such as summaries, drafts, and descriptions, while noting challenges like hallucinations, compute cost, and latency.
- [00:06:49](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=409s) **Choosing the Right AI Ladder** - The speaker stresses selecting the simplest effective AI solution—using a four‑rung ladder from basic data operations to AI agents—to avoid costly, overengineered implementations like using an AI agent for a simple sales sum.
- [00:10:27](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=627s) **Cost‑Benefit Language for AI Projects** - The speaker stresses the importance of framing AI initiatives in terms of data‑operations value and compute costs to persuade executives, highlighting generative AI's expensive token usage, comparatively cheaper machine learning, and the relative ease of building and maintaining data pipelines.
- [00:13:45](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=825s) **When to Choose AI Over Simpler Solutions** - The speaker urges evaluating AI projects by first exploring non‑AI options and adopting agents or large language models only if they deliver at least a tenfold improvement in accuracy, speed, or user experience; otherwise, stick with simpler approaches and clearly articulate that rationale to leadership.
- [00:18:08](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=1088s) **When AI Really Replaces Humans** - The speaker warns that many touted AI solutions still rely on humans, urging organizations to rigorously scope problems, benchmark transitions, and prioritize ROI‑driven value over feature lists to ensure genuine automation.
- [00:21:21](https://www.youtube.com/watch?v=1FKxyPAJ2Ok&t=1281s) **Building Trust Through Tool Choice** - The speaker stresses earning executive, personal, and customer trust by selecting appropriate technologies—agents, generative AI, data pipelines, and machine learning—and recognizing which problems each is best suited to solve.

Full Transcript
[0:00] Nate, when do I use AI? When do I use AI agents? I get that question a lot. This video is for you. If you've ever wondered how to know when to use agents, or when to use generative AI and large language models, this is going to show you. We're going to go through the four categories you can choose between when you make decisions about data and insights. We're going to give you a concrete decision framework and the principles behind it, so you understand how to recognize these problems elsewhere. And I'm even going to give you scripts for when an investor or your boss is pushing a solution on you that you know won't work: how do you push back in a way that makes sense?

[0:37] First, let's understand the four categories we're working with. Number one: plain old data processing. It's not new, it's not fancy, it's not AI. It's the simplest possible thing: data cleaning, aggregating data up, building simple reports. If you just need a very simple sales report, aggregating across clients and regions over a particular time period, do not use AI. Repeat after me: don't use AI. Don't use agents. Don't use generative AI. Don't believe anyone who tells you to. If you're on an e-commerce site and you just need to look at your payment volumes over the last quarter, don't use AI. If you just need to know how many SKUs you have for sale, don't use AI. Are you getting the idea? If it's the kind of thing you could write out as a math problem, x + y = z, don't use AI. It's not worth it. It's going to be much more expensive, it's going to be less dependable, and it's a waste of everybody's time. Let's go to bucket number two.
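To make category 1 concrete: the monthly-sales aggregation just described is a few lines of plain Python, with no AI anywhere. The record shape and field names here are hypothetical, just to sketch the idea:

```python
from collections import defaultdict

def monthly_sales_by_region(sales):
    """Sum sale amounts by (month, region) -- plain data processing.
    `sales` is a list of dicts with hypothetical fields
    `month`, `region`, and `amount`."""
    totals = defaultdict(float)
    for sale in sales:
        totals[(sale["month"], sale["region"])] += sale["amount"]
    return dict(totals)

sales = [
    {"month": "2024-01", "region": "EMEA", "amount": 1200.0},
    {"month": "2024-01", "region": "EMEA", "amount": 800.0},
    {"month": "2024-01", "region": "APAC", "amount": 500.0},
]
print(monthly_sales_by_region(sales))
# {('2024-01', 'EMEA'): 2000.0, ('2024-01', 'APAC'): 500.0}
```

In practice this is a one-line SQL `GROUP BY`; the point is that nothing here needs a model.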
[1:35] Classical predictive machine learning. This one has almost disappeared from the conversation because there's been so much hype around large language models. So let's talk about what it actually is and where to use it, so we don't lose the value, because we put decades of work into developing classical machine learning. I've built classical machine learning systems myself at scale. It is important to understand where it is still valuable versus large language models, but almost nobody thinks about it because of the hype cycle, which is why we have to make videos like this.

[2:01] If you have rich historical data, a clear target variable to optimize against, and something very specific to predict, like seasonal Q4 demand, fraud detection, or churn prediction, machine learning excels at pulling the patterns in structured data to light when you have a clear goal like that. Now, this takes training data. It takes evaluation metrics. It takes monitoring. It's a bit more complex than just running a SQL query. But if you want to predict next quarter's sales based on past trends and promotions, a lot of people are using large language models for this, and the correct tool is not large language models. The correct tool is traditional machine learning. Traditional machine learning is designed for situations where you have structured data and a problem with a single variable you're optimizing toward. Let it do its job.

[2:54] And you see the difference, right? I want to make sure you understand the difference. If you're doing very simple sums, reports, and aggregations, that's not machine learning; that's the plain old data processing category.
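To see the shape of that category, here is a deliberately minimal sketch of "predict next quarter's sales from the past trend": structured history in, a single predicted number out. A real project would use a proper library (e.g. scikit-learn or statsmodels) plus the training data, evaluation metrics, and monitoring the presenter mentions; this closed-form least-squares fit is only an illustration:

```python
def fit_trend(y):
    """Ordinary least-squares line through (0, y[0]), (1, y[1]), ...
    Returns (slope, intercept)."""
    n = len(y)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(y) / n
    slope = (sum((x - mean_x) * (v - mean_y) for x, v in zip(xs, y))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def forecast_next(y):
    """Predict the next point by extending the fitted trend line."""
    slope, intercept = fit_trend(y)
    return slope * len(y) + intercept

quarterly_sales = [100.0, 110.0, 120.0, 130.0]  # a perfectly linear history
print(forecast_next(quarterly_sales))  # 140.0
```

The target variable is explicit and single: exactly the situation where classical ML (here, trivial linear regression) beats reaching for an LLM.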
[3:05] If you want to predict the performance of a single variable and you have structured data, that's not large language models, and it's not "AI" the way most people talk about it. It's machine learning, or traditional artificial intelligence from before ChatGPT. Generative AI and large language models: now, that's our third bucket.

[3:22] Let's say you have a data set that's mixed: some numbers, some structured data, some text. And you also need to generate text in the answer. Maybe you need to generate concrete summaries of the quarterly marketing report, and you want to generate text, and maybe an image, to go with it. The problem involves summarizing something that is not necessarily numeric. It involves translating things. It involves drafting content. It has a lot of words in it. Well, large language models are probably your best tool for that at this point. They're flexible, but you also have to take into account hallucinations, which I've talked about a fair bit, higher compute costs, and latency, the unpredictable gap in response time.

[4:06] So, if you want to auto-draft customer support responses based on the text of the customer support manual, that is a great example of a large language model task. If you want to auto-generate product descriptions, that is a great example of a large language model task, and you can even do it when the model is looking at an image and writing the description from it. LLM tasks are characterized by wordiness. They're characterized by unstructured data, and they often have multi-threaded output: you're not optimizing for a single variable in a structured data set, you're actually trying to get multiple outputs.
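The auto-draft support-response case can be sketched with the model hidden behind a plain callable, which keeps the orchestration visible without tying it to any vendor's API. Every name and the prompt shape here are illustrative assumptions, and the stand-in "model" just echoes text so the sketch runs offline:

```python
def draft_support_reply(question, manual_excerpt, llm):
    """Auto-draft a customer-support reply grounded in the support manual.
    `llm` is any callable mapping a prompt string to generated text; in
    production it would wrap a real model API, with hallucination guards
    and human review layered on top."""
    prompt = (
        "Using ONLY the manual excerpt below, draft a short, polite reply.\n"
        f"Manual excerpt:\n{manual_excerpt}\n\n"
        f"Customer question: {question}\n"
    )
    return llm(prompt)

def fake_llm(prompt):
    # Stand-in model so the sketch runs without network access:
    # it just echoes the last line of the prompt.
    return "DRAFT: " + prompt.splitlines()[-1]

reply = draft_support_reply(
    "How do I reset my password?",
    "Resets: see Settings > Security.",
    fake_llm,
)
print(reply)
```

Injecting the model as a parameter is also what makes the guardrails testable: you can unit-test the prompting and review flow with a fake model before paying for tokens.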
[4:41] You might have an image output in some cases and a text output in others. And you value that so highly that you are willing to put up with the risk of hallucinations, the guardrails you put in to minimize it, and all the investment that goes with that. In other words, generative AI is more expensive to build and maintain, so you have to do the math to decide that it's worth it. We'll get into more of that ROI math later in this video.

[5:02] The fourth bucket is AI agents. It's the most complex bucket; that's why I put it fourth. Use it when tasks involve dynamic, multi-step workflows with clear decision points. That's critical for agents: decision points where you can describe the criteria, describe the scope of the decision, and give the agent all the context it needs to make a good choice. Things like scheduling, follow-ups, and data retrieval across systems all fall into the agent bucket. As an example: an agent that books conference rooms, notifies attendees, and automatically adjusts schedules when conflicts arise. That's an AI agent problem. It's not traditional machine learning, and it's not generative AI either; it's an agent problem. Agents can orchestrate complicated tasks autonomously, but you have to have very careful error handling, good observability so you can see what they did, and humans who know how to debug them. So there's a human talent question with agents as well.

[6:03] It's worth thinking about, though, because as we've gone through these four buckets, what you should be thinking is leverage. These buckets are not linear. These buckets are disproportionate. There's a power law return here.
[6:17] If you get a 1x return on the simple x + y = z problem, the simple monthly sales by region where you write a SQL query and get it back, you get a 10x return on solving a problem with machine learning if it's machine-learning-susceptible, you get a 100x return on generative AI, and you can get a 1,000x return on agents. Now, those numbers are somewhat illustrative; I'm not saying every single project falls exactly on them. But in my experience, and the experience of a lot of others who have implemented these in practice, that is how it works. These are not stairs. It's like a roller coaster to heaven: a crazy gain in leverage as you move up, but also a crazy gain in cost and maintenance. You have to design the more advanced systems very intelligently and target them at the right problem, which is exactly why we have this video. Because you can imagine the expense of building a generative AI system, or an agentic system, against a problem set that didn't need it. What if you built an AI agent workflow to sum monthly sales by region? Is it possible? Yes, absolutely. Is it like bringing a bazooka to kill a fly? Yes, it is. It's ridiculously expensive, you don't need to do it, and it would be a terrible waste to try.

[7:30] And that brings me to the idea of the ladder. Pick the simplest solution on the ladder. Imagine four rungs: basic data operations, traditional machine learning, generative AI, and finally AI agents. Pick the lowest rung on the ladder you possibly can, and I'm going to show you how to think that through. You could call this developing engineering taste, but it's not just for engineers.
[7:57] So many of these skills have been sequestered away in engineering conference rooms for too long, and I want to bring them out, because they're not actually too technical and we really need them in the age of AI. So the first skill you need to navigate this ladder correctly is to focus on pattern recognition over hype. Did you notice how I talk about the type of pattern? That is a skill you can learn: focus on problem structure, not on buzzwords. You can learn to ask: what needs solving? Is it a decision we need to solve for? A prediction? A generation? If it's a decision in a workflow, it might be an agent. If it's a prediction, that might be traditional machine learning. If it's a generation problem, that might be a large language model problem. If it's just a report, that might be traditional data operations. That kind of thinking, that sober focus on problem structure, is going to help you resist the AI-for-everything temptation.

[8:55] Sometimes just having a SQL script that runs will solve almost your entire problem. And by the way, I said "almost" on purpose. Do not give in to the temptation to make something a generative AI project or an agent project if only 5 or 10% of the value is coming from that agentic or generative piece and most of the value is coming from SQL. If your boss says, "I want a quarterly marketing report, and I want it to have this fancy insight into why we performed the way we did, and I want it in text, and I want an illustration of our top-selling product," you could look at that and say, "Well, there's some stuff here that is generative AI."
[9:38] So it's probably a generative AI problem, or maybe it's a mixed problem where you use two or three rungs on that ladder: the data processing I talked about, generative AI, maybe even some prediction from machine learning. I would not look at it that way. Instead, I would look at it and say: why the heck do we need the picture? Why do we need the text? Don't we get the business value to make good decisions out of traditional data operations? If you get 90% of the value for 5% of the cost, the business should take that trade all day, and you should be able to articulate that in dollars and cents very, very clearly.

[10:08] It is worth asking where the leverage in the problem lies. That is my point. Think of the problem as a distribution spread along the four rungs of that ladder. If the distribution is bumpy and skewed heavily toward one rung, you should take that pretty seriously. You should say: maybe this is fundamentally just a data operations problem, because most of the problem's value is there, and maybe we should cut the 5 or 10% of value you're talking about, make that a later choice, and just do the thing we can get away with now on a single rung, the simplest one.

[10:42] You need to understand how to speak the language of cost-benefit to make these kinds of claims, because that's how executives speak. If your boss or your investor is telling you to invest in AI, the only way they will really hear you is if you come back with cost-benefit. So start to learn how to talk about compute costs. Generative AI and agents are both extremely expensive in tokens. It is not cheap to run those pipelines.
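Returning to the value-distribution idea for a moment: the "which rung is the value skewed toward" check can be made mechanical. The rung names and the 90%-of-value framing come from the discussion above; the function itself is an illustrative sketch, not a formula from the video:

```python
LADDER = ["data_ops", "machine_learning", "generative_ai", "agents"]

def simplest_sufficient_rung(value_share, threshold=0.9):
    """Given the estimated share of business value attributable to each
    rung (summing to ~1.0), return the lowest rung whose cumulative
    share meets the threshold -- i.e., the simplest solution that
    captures most of the value."""
    cumulative = 0.0
    for rung in LADDER:
        cumulative += value_share.get(rung, 0.0)
        if cumulative >= threshold:
            return rung
    return LADDER[-1]

# The boss's "fancy report": mostly a SQL problem with a generative garnish.
share = {"data_ops": 0.9, "generative_ai": 0.1}
print(simplest_sufficient_rung(share))  # data_ops
```

The point of writing it down is the discipline: you have to estimate where the value sits before picking a technology, rather than starting from the technology.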
[11:06] If you make a mistake with that architecture, you can be out thousands, or tens of thousands, more. Machine learning is a lot cheaper, but it takes expertise to set up, and it's not free. And data processing is the cheapest of all: at this point almost anybody with an engineering degree can set up a data processing pipeline without any kind of issue, and most of us who don't have engineering degrees can figure it out with ChatGPT.

[11:25] The maintenance burden is also non-trivial, and I want to call that out. Remember how I talked about power-law returns and that roller coaster stretching upward, as a way of showing how much potential there is in some of the agentic and generative AI use cases? The maintenance scales with that. An agentic workflow is going to have significantly more maintenance cost than a large language model workflow, which in turn has significantly more than a machine learning workflow, which is still more expensive than a data pipeline workflow. It's not 1, 2, 3, 4 in costs; it's more like 1, 2, 4, 8. These costs get much more expensive. And part of the reason is that agents and LLM systems have to be maintained in production. The leader who charges you with building these systems has to know that the dollars keep going out the door on time spent supporting them after they launch. So instead of thinking about an agent workflow like traditional software, you have to think of it as continually maintained, almost like a little employee you have to pay every month.

[12:36] The last thing I want to call out from a cost-benefit perspective is time to value.
[12:40] As you would imagine, with those compute costs and that maintenance burden, complexity increases as you move up the ladder, and that takes increasing time and talent. Just about anybody can set up a data pipeline in a few days, given the data. A machine learning model, with a data scientist working alongside you, might take just a couple of weeks if you have everything ready. Generative AI prototypes really vary: if it's out of the box and super simple, it can be a couple of hours; if it's a full production pipeline, it's multiple weeks, often months. I know people at scale who are building LLM pipelines that aren't even agentic, and it's still taking them months. This is not easy to do.

[13:20] Agents: if you're starting from scratch, you're a scrappy startup, and you're well-funded in the Valley, sure, can you knock up some agents over a weekend? Absolutely. You have the right talent and a clean slate to work with. If you're an existing company trying to get this done, it is even harder than LLMs. It is quite difficult to do well, quite difficult to sustain well, and you have to recruit the talent for it. The months stretch out into six months or more very, very quickly. And it's your job, if you are given this assignment or asked to do a project that involves AI, to find a way to articulate that: to say, "Yes, we could do this with agents, but it seems like what you're really optimizing for is getting the data into a report, and there could be a simpler way to get you that value much, much faster."
[14:06] "We can find another use for AI, one you can tell the board about." So, in sum: if someone comes to you, or you're asking yourself, "When do I use AI? When do I use agents?", then before proposing AI, identify the simplest non-AI solution, and evaluate whether AI measurably improves accuracy, speed, or user experience against that solution. And by measurably, I mean it has to improve significantly. My rule of thumb for using a large language model or an agentic workflow is that if it isn't 10x versus the baseline, it probably isn't worth it, because there's adoption, there's talent, there are systems to maintain. So 10x is my rule of thumb, and if you don't hit that, stick with a simpler approach. It's worth it.

[14:49] I want to suggest a few scripts you can use if you get stuck with leaders who just aren't believing you; they will help you work through this. Then, at the end of the video, we'll get into some contrarian insights, things that go a bit deeper. But first, to sum up this piece around cost-benefit and communication, here's how you answer. Let's say your VP says, "We need to use ChatGPT for our report." Common. I've seen it happen. You could say this: Thanks for asking. I researched three different approaches for the report problem. With a data pipeline, it's going to take us two days to set up, cost us about $200, and give us 100% accuracy on our known metrics. You can have it by Friday. If we used a machine learning model, which is not AI, Mr. VP, that's going to be two and a half weeks with our data science team and push back another project.
[15:38] I suspect we'll get to 80% accuracy on prediction initially and have to work up from there; total cost, I want to say maybe $15,000. If it's generative AI, a large language model prototype, I would guess that out of the gate you're going to have inaccuracies across all of your key metrics. It will take us a few days to get it set up and probably three or four months to root out enough of the hallucinations to make the report really worthwhile. I would recommend option one, because we can get reliable results by Friday and it's the cheapest overall. If we really need predictive insights, we can add that machine learning model as a fast follow.

[16:15] You see how that's something that speaks executive? It talks cost. It talks time. It doesn't say no directly; it reframes the problem and helps them understand what's going on.

[16:26] Okay, now let's get to some of the contrarian insights you need to have in your head when you're facing these "when do I use agents, when do I use AI" problems. Number one: data quality is going to beat model complexity every time. Garbage in, garbage out, right? If you are introducing AI and you have bad data, you are pouring money down the drain and lighting it on fire. Fix your data pipelines before reaching for models. And if you have great-quality data, you can use cheaper and cheaper and cheaper models; make sure you take advantage of your data quality. If you don't have data quality, make sure you make it a priority and fix it. You do get real leverage from that fix. That is how you unlock the 100x and 1,000x use cases, because agents also struggle if the data is bad quality.
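"Fix your data pipelines before reaching for models" can start with checks as boring as these. The field names are hypothetical and the checks are a minimal sketch, not a full data-quality framework:

```python
def data_quality_report(rows, required_fields):
    """Cheap sanity checks to run before any modeling work:
    count rows missing required fields and exact-duplicate rows.
    Garbage in, garbage out."""
    missing = sum(
        1 for row in rows
        if any(row.get(f) in (None, "") for f in required_fields)
    )
    seen, dupes = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable fingerprint of the row
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"rows": len(rows), "missing_required": missing, "duplicates": dupes}

rows = [
    {"id": 1, "region": "EMEA"},
    {"id": 1, "region": "EMEA"},   # exact duplicate
    {"id": 2, "region": ""},       # missing region
]
print(data_quality_report(rows, ["id", "region"]))
# {'rows': 3, 'missing_required': 1, 'duplicates': 1}
```

If a report like this comes back ugly, that is the pipeline work to fund first; every rung above it inherits the problem.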
[17:09] Number two: boring solutions win. A clear, well-designed BI dashboard that just works outperforms a fancy AI model that nobody understands and that can't be audited. I love AI; AI has a lot of use cases. But I have seen too often, now that we're in this hype cycle for AI, people throwing out the boring solutions that work. Don't do that. Don't be that person. Find ways to reframe the narrative, like we talked about.

[17:33] Number three: human in the loop first. When you're deploying LLMs, when you're deploying agents, start by surfacing suggestions for humans to vet, build trust, gather feedback, and then automate. Now, if you're a startup and you have nothing to lose, sure, give it a shot; see if you can automate it and make it work in a weekend. If you're a company with stakes, you have to take seriously the idea that humans need to be involved from the beginning, helping get the model to a place where it works. And you need to assume that cost. Who's going to staff for that? Who's going to pay for it? How long are you going to use the humans? How do you know whether the humans can transition to AI systems?

[18:10] One of the dirty secrets in AI is that sometimes the transition never happens, and then there's a scandal. For example, with the Amazon Just Walk Out checkout stores, the company would trumpet that it was AI they were using, but the reality was that people were looking through the cameras, checking all the work, because they were never able to transition out of humans in the loop. AI is hard. Don't believe every press release you see. Look at whether you can reasonably hand off human work to AI. Sometimes you can.
[18:38] There are real wins out there, but think about it, make an intentional plan, benchmark yourself, and see if you're actually able to do it successfully. This comes down to scoping the problem very, very precisely around the value you intend to deliver.

[18:51] Number four: stay ROI-focused, not feature-list-focused. Remember when I said earlier in this video that you want to mentally assess the problem space and look at where the value is spiky against that ladder, whether that's simple data operations, machine learning, generative AI, or agents? Focus on that value piece. Feature lists will spread out all over the place, and if you focus on feature lists, you're going to be extremely inefficient. Focus on where the leverage is. Focus on the value, and frame AI as a means to specific business outcomes. It should deliver reduced costs, faster decisions, better customer satisfaction. The point is not AI itself; the point is the value it delivers. And so if you need to push back when someone makes the case that you should use an AI agent, and you have used this framework and you know you shouldn't, push back on ROI. Push back and say, "I want to deliver reduced costs, and this isn't going to do that. I want to make sure we make good decisions that actually improve customer satisfaction, and this agent workflow is not going to do that on the timeline you need."

[19:52] So, let's close with a 30-second decision tree that will help you the next time you face this. Does the problem have deterministic rules? It's a data processing problem. Does the problem need to predict an outcome? It's a traditional machine learning problem.
[20:07] Does the problem need to generate novel tokens, novel words, novel content? Generative AI and large language models. Finally, do we need a workflow with autonomous decisions and multi-step orchestration? That's an agent problem.

[20:20] The cross-cutting factor here is talent. As you go up these four rungs, the talent needs get bigger. The same person who can do data processing usually can't do autonomous multi-step orchestration with agents, not at production scale. So you also have to be aware of how to advocate for the talent you need in order to deliver against these outcomes. It is not fair to ask an engineer who has never worked with large language models to immediately build an autonomous multi-step orchestration agent that handles 95% of customer service tickets. That is unlikely to go well, and the companies that tend to ask for that are setting themselves up for grief.

[20:57] I hope this has been helpful. My goal has been to help you develop a sense of taste by focusing on problem structure, not problem hype and not solution hype in AI. Start simple, and remember to climb the complexity ladder as you go. Frame things in terms of ROI; that's going to help executives really make sense of what you're saying. And build trust first, which, by the way, is one of the underlying themes all the way through. You have to build trust with your executive by speaking their language. You have to build trust with yourself by making decisions that actually lead to sustainable software. You have to build trust with your customers by making sure you focus on quality for them and use the right tool. Data pipelines, machine learning, generative AI, and agents each have their place.
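The 30-second decision tree, together with the earlier 10x rule of thumb, reduces to a few conditionals. This is a sketch of the framework as described in the video, not a substitute for judgment; the parameter names are my own labels for the four questions:

```python
def choose_bucket(deterministic_rules, needs_prediction, needs_generation,
                  needs_autonomous_workflow, improvement_vs_baseline=1.0):
    """Walk the decision tree top-down; for the two AI rungs, apply the
    10x rule of thumb against the simplest non-AI baseline."""
    if deterministic_rules:
        return "data processing"
    if needs_prediction:
        return "machine learning"
    if needs_generation or needs_autonomous_workflow:
        if improvement_vs_baseline < 10:
            return "stick with a simpler approach"
        return "agents" if needs_autonomous_workflow else "generative AI"
    return "data processing"

print(choose_bucket(True, False, False, False))   # data processing
print(choose_bucket(False, True, False, False))   # machine learning
print(choose_bucket(False, False, True, False,
                    improvement_vs_baseline=12))  # generative AI
print(choose_bucket(False, False, False, True,
                    improvement_vs_baseline=3))   # stick with a simpler approach
```

Note the ordering encodes the ladder: the cheaper rungs win ties, and the expensive rungs only fire when the improvement clears the 10x bar.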
[21:38] You need to know when to pick up the right tool from the toolbox. I hope this video has given you a sense of the kinds of problems that are susceptible to agents, the kinds of problems that are susceptible to generative AI, and the frankly fairly wide class of problems that is neither of those things, that we still have in business today, and that still needs a good solution. Good luck out there.