Learning Library


Automate the Edges First

Key Points

  • Focus on automating the “edges” of a workflow—data preparation, QA, synthesis, and handoffs—because AI can cut cycle times by 70‑90% there, delivering the biggest immediate ROI.
  • Core processes are often riddled with ambiguity, exceptions, and tribal knowledge, so trying to automate them first leads to stalled agents, scope creep, and frustrated teams.
  • Treat edge automation as a low‑risk entry point: evaluate how you currently collect, clean, and normalize context, and let LLMs handle those repetitive steps before tackling the full workflow.
  • Leverage LLMs for quality checks and summarization tasks such as consolidating ticket discussions, templating outputs, or grouping relevant information—high‑value work that’s simple for AI but time‑intensive for humans.
  • Once edge automation proves effective, use the streamlined outputs to package deliverables (briefs, reports, etc.), creating a repeatable pipeline that can later support more ambitious core‑workflow automation.
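
The QA edge described in these points, using an LLM as a judge against a fixed checklist before a handoff, can be sketched roughly as below. This is a minimal sketch, not a prescribed implementation: `call_llm` is a hypothetical stand-in for any chat-model client (ChatGPT, Claude, Gemini), stubbed here so the sketch runs offline, and the rubric questions are illustrative.

```python
# Sketch of an "edge" QA gate: an LLM-as-judge rubric applied to a draft
# deliverable before handoff. `call_llm` is a hypothetical hook that is
# assumed to return one verdict string per rubric question.

RUBRIC = [
    "Is the summary complete (covers every decision in the thread)?",
    "Is the output consistent with the template fields?",
    "Are there obvious errors (broken links, empty sections)?",
]

def build_judge_prompt(draft: str) -> str:
    """Assemble a single judging prompt from the rubric and the draft."""
    checks = "\n".join(f"- {q}" for q in RUBRIC)
    return (
        "You are a QA reviewer. Answer PASS or FAIL for each check.\n"
        f"Checks:\n{checks}\n\nDraft:\n{draft}"
    )

def qa_gate(draft: str, call_llm) -> bool:
    """Return True only if the judge passes every check."""
    verdicts = call_llm(build_judge_prompt(draft))
    return all(v.strip().upper().startswith("PASS") for v in verdicts)

# Offline stub standing in for a real model call.
fake_llm = lambda prompt: ["PASS", "PASS", "PASS"]
print(qa_gate("Example draft...", fake_llm))  # True
```

The point of the gate shape is that it wraps the workflow without replacing it: a failing verdict routes the draft back to a human, exactly the cheap, recoverable error path the talk recommends.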

Full Transcript

# Automate the Edges First

**Source:** [https://www.youtube.com/watch?v=B3rSU7XROrg](https://www.youtube.com/watch?v=B3rSU7XROrg)
**Duration:** 00:08:05

## Sections

- [00:00:00](https://www.youtube.com/watch?v=B3rSU7XROrg&t=0s) **Automate the Edges First** - The speaker urges teams to focus AI automation on peripheral tasks such as data preparation, quality assurance, synthesis, and handoffs—where 70‑90% cycle reductions are possible—rather than tackling the core, ambiguous workflow, thereby avoiding stalled agents, bloated scope, and frustrated stakeholders.
- [00:03:13](https://www.youtube.com/watch?v=B3rSU7XROrg&t=193s) **Automating High‑Friction Workflow Edges** - The speaker argues that AI agents should initially focus on the coordination‑heavy, high‑friction edges of a process—tasks that are low‑judgment, data‑ready, and easily recoverable—because improving these spots delivers immediate value without disrupting the core workflow.
- [00:06:35](https://www.youtube.com/watch?v=B3rSU7XROrg&t=395s) **Automating Workflow Edges for Trust** - The speaker suggests that real AI transformation comes from first automating peripheral tasks like intake, data pulling, QA checklists, and synthesis to build reliability and trust before attempting to replace core processes.

## Full Transcript
I want to let you in on a little secret around AI automation and agents: automate the edges first. And I'll get into what I mean there. Most teams burn months trying to automate the core of their work, the thing the humans already do pretty well. The real leverage often comes from automating the edges: the data preparation, the QA, the synthesis, the handoffs. AI can quietly compress cycles here by 70, 80, 90%, but most people don't start here.

I want to note that this is different from the problem space you pick. So if you're saying, "Nate, I thought you told us to pick something important to work on," 100%, I do. I think you need to pick things that matter for AI. I'm saying once you do, think about the edges of the work, because there's tons of leverage around that valuable problem space in the edges of the work.

And so I get the automate-everything vision, especially if you have a core workflow. But keep in mind that most core workflows, when you face them, start out containing ambiguity. They contain exceptions. They contain tribal knowledge. Teams underestimate the hidden state and tend to overestimate model reliability, especially if you haven't built an AI agent automation before. What does this lead to? It leads to stalled agents. It leads to bloated scope. It leads to frustrated leadership, frustrated engineers, endless QA. If you are trying to automate the core first, it's kind of like trying to build a self-driving car before you've invented cruise control.

My challenge for you: when you pick a valuable workflow to automate, if this is your first AI agent job, figure out the edges of your workflow and just test, just see if there is something here that gives you a lot of bang for your buck. Look at data preparation. How do you collect context for this workflow today?
How do you clean your data inputs? How do you normalize your formats today? Is that a manual process before you even get into the core workflow?

Look at QA. How are you checking for doneness, completeness, quality, consistency, obvious errors? That is something an LLM-as-judge can perhaps easily do that doesn't require doing the whole workflow.

Synthesis is another great example. Let's say that all you're trying to do is not automate the full workflow, but you're picking a valuable part of it and you're saying, "I just need to summarize information to date. I want to summarize the discussion thread in the Jira ticket and update the description." I want to summarize and synthesize information that is relevant in the workflow and communicate it over here. That can also look like grouping information. It can also look like templating output, so you have the information and you're just writing it to a template. Super valuable work: it often takes a lot of human time, but it is not super hard for the LLM, and it is a valuable edge to go after.

Another edge to go after: the packaging of the work. How do you convert the work into deliverables once it's done? How do you get it into a brief? How do you get it into a report? Especially now, with the advent of Nano Banana, with the advent of Gemini 3, with Opus 4.5 working on PowerPoint skills for longer and harder, you have options to get all the way to a finished deliverable that you did not have three months ago. That is another edge that you can start to look at.

Coordination is another edge that often has a ton of value, especially in tribal-knowledge situations. Coordination often resides in someone manually pulling information here, talking to someone, then putting it over here.
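
The synthesis edge just mentioned, collapsing a ticket's discussion thread into a templated description, can be sketched as below. This is a minimal offline sketch, not the speaker's implementation: `summarize` is a hypothetical hook where the LLM call would go, and the comment fields and template are assumptions for illustration.

```python
# Sketch of the synthesis edge: collapse a ticket's comment thread into a
# templated description. `summarize` stands in for an LLM call and is
# stubbed here so the sketch runs offline.

from string import Template

DESCRIPTION_TEMPLATE = Template(
    "Summary: $summary\nParticipants: $people\nComments reviewed: $count"
)

def synthesize_ticket(comments: list, summarize) -> str:
    """Summarize a thread and write the result into a fixed template."""
    people = sorted({c["author"] for c in comments})
    thread_text = "\n".join(c["body"] for c in comments)
    return DESCRIPTION_TEMPLATE.substitute(
        summary=summarize(thread_text),
        people=", ".join(people),
        count=len(comments),
    )

# Illustrative data and an offline stub for the model call.
comments = [
    {"author": "ana", "body": "Root cause is the retry loop."},
    {"author": "raj", "body": "Agreed; fix ships Friday."},
]
stub = lambda text: "Retry-loop bug; fix ships Friday."
print(synthesize_ticket(comments, stub))
```

Writing into a fixed template is the "templating output" pattern from the talk: the model only fills slots, so a human can scan the result in seconds before it lands back in the ticket.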
If you can pick up that piece, where you have the information and you just need to get it from point A to point B, that is often very, very valuable.

So why do I suggest the edges of the work? They're high friction, because typically a workflow is least frictional at the core and most frictional at the edges. That's just a general observation anyone who has done workflows will tell you: it's the edges that are often the worst. It's also often a low-judgment task, because all of the inputs are ready, which is perfect if you're just starting out on AI agents. And that means they are perfect for LLMs, even when LLMs are imperfect. You should not assume that your first AI agent is perfect. You should assume it's imperfect and that it needs to deliver value anyway. This also means that errors are often recoverable and cheap, because the humans doing the core of the workflow were doing those edges before, and if an exception occurs, they can pick it up easily. You have the chance to look at the data. You have the chance to fix it. And you have the chance to come back and make your agent better.

This also means, by the way, that you are not abandoning the core of the workflow. If your goal ultimately is to have an AI agent sit at the heart of the workflow, you get a clean path into that by attacking the edges. If you own QA, if you own handoff, if you own data inputs and data preparation, you are well positioned to have the knowledge you need to build the AI agent at the heart of the workflow, which may be your ultimate goal. You position yourself by being at the edges of a core, valuable workflow. You position yourself to attack the heart of that workflow next, and then to snowball those gains across the org. Because really, what you're doing is twofold.
You're not just going after this core workflow. You are teaching yourself, and teaching the org, how AI automation ought to work. And this is the part that almost nobody says out loud: you are not just doing a technical project, you are doing an upskilling project. Not just for the engineers building the agents, but for the humans involved. And the humans involved tend to have a lot of tribal knowledge. They tend to be fingertippy on the work. If it's a valuable workflow, they need to be able to be confident that your AI automation work with the agent will not cost them that fingertippy feeling for the work. They are craftspeople. Make sure they know where their craft can be practiced.

If the part of the work that is highly valuable in this workflow is the high-level understanding of the customer history over multiple years and how you nuance a particular response to the customer (that's a customer success example), you want to automate around that, so that the customer service agent can apply that knowledge efficiently, with their full intuition and their full human memory of the relationship, and not be distracted by other stuff.

And so when you start by attacking the edges, you are reminding the people doing the work that their fingertippy feeling for the work is valuable, that they are worth having involved in the work because of the craft they bring. That is critical, because if you lose that trust, they will not be inclined to share with you all of the secrets of the art that you need for the rest of the workflow. You need to look at AI agent building as an exercise in trust. There is no substitute.

And so I'm going to argue that the real leverage hides outside the core. It hides in stuff like intake, in data pull, in QA checklists, in synthesis, in packaging. You get the idea.
And when you do this, reliability can go up. You have less risk. You're attacking a core workflow. You're showing gains, and you're earning the trust of everyone involved to get where you want to go. This leads to teams winning fast.

So if you want to apply this tomorrow: pick a workflow that you touch every single week that's valuable. Map the edges. Where do you waste time prepping? Where do you check for errors? Where do you hand off repeatedly? Where do you summarize over and over? Pick the simplest edge. Get into ChatGPT, into Claude, into Gemini, and focus on thinking about how you build a simple solution. It's okay if it's semi-manual to start and you automate from there. That's fine. The point is that you're approaching it correctly, and then you can build the automation edge-inward.

Automation does not start with replacing the core, unless you have a very experienced engineering team. It starts with reclaiming the edges. So automate three or four edges in a row until you're starting to feel good; you don't need the full grand vision. The workflow itself will reveal the answer to the correct place of automation and the correct place of human expertise. And that's how real AI transformation happens. And I wish we talked about it more. You tell me: where are you looking to automate?
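
The "map the edges" exercise above can be made concrete by scoring each candidate edge on the criteria the talk gives: high friction, low judgment, inputs ready, errors recoverable. This is an illustrative sketch only; the `Edge` fields, weights, and example edges are assumptions, not anything prescribed in the talk.

```python
# Sketch of the "map the edges" exercise: score candidate edges on the
# talk's criteria and start with the best-scoring one. Weights are
# illustrative, not canonical.

from dataclasses import dataclass

@dataclass
class Edge:
    name: str
    friction: int      # 1-5: how painful is this step today?
    judgment: int      # 1-5: how much human judgment does it need?
    data_ready: bool   # are the inputs already collected and clean?
    recoverable: bool  # can a human cheaply catch and fix errors?

def score(e: Edge) -> int:
    # Reward friction, data-readiness, and recoverability;
    # penalize required judgment (bools count as 0 or 1).
    return e.friction - e.judgment + 2 * e.data_ready + 2 * e.recoverable

edges = [
    Edge("ticket summarization", friction=4, judgment=1,
         data_ready=True, recoverable=True),
    Edge("core triage decision", friction=3, judgment=5,
         data_ready=False, recoverable=False),
]
best = max(edges, key=score)
print(best.name)  # "ticket summarization"
```

Even a rough scoring pass like this surfaces the talk's conclusion mechanically: low-judgment, data-ready, recoverable edges beat the ambiguous core for a first agent project.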