One-Way vs Two-Way AI Decisions

Key Points

  • AI demos often feel magical, but real‑world deployments falter because businesses can’t afford the mistakes that are acceptable in a controlled demo environment.
  • The true bottleneck isn’t model intelligence but trust, which hinges on how risky a decision is and how easily it can be undone.
  • Reversible, “two‑way‑door” tasks (e.g., scheduling, file organization) benefit from fast, frictionless AI assistance, while irreversible, “one‑way‑door” tasks (e.g., sending customer communications, granting access) require deliberate checks and approvals.
  • Decades of software design have turned many business actions into easy‑to‑reverse decisions, creating the fertile ground that now lets AI agents handle low‑stakes work effectively.
  • Scaling AI agents across enterprises will depend on building governance structures that embed the right amount of friction for high‑stakes decisions while preserving speed for low‑stakes ones.

**Source:** [https://www.youtube.com/watch?v=7NjtPH8VMAU](https://www.youtube.com/watch?v=7NjtPH8VMAU)
**Duration:** 00:19:50

## Sections

- [00:00:00](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=0s) **From Demo Magic to Real-World Trust** - The speaker explains that the real barrier to deploying AI agents isn't model intelligence but the lack of trust stemming from business risk and decision-making structures.
- [00:03:15](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=195s) **Safety Infrastructure Enables Rapid Change** - The speaker describes how modern software delivery relies on reversible, automated, and closely monitored change processes that lower risk and allow huge engineering teams to innovate quickly, an advantage not yet common in most other business areas.
- [00:06:20](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=380s) **Beyond Tool Access: Trust and Safety** - The speaker argues that while model context protocols enable agents to use tools, the real business challenge lies in ensuring safe, reversible actions and robust error recovery rather than simply providing connectivity.
- [00:10:11](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=611s) **Applying Software Primitives to Business Ops** - The speaker proposes adopting software engineering practices (drafting before execution, visual previews, and bounded time windows) to make business automation more transparent and trustworthy.
- [00:13:41](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=821s) **Agentic Delegation in Back-Office Ops** - Applying software-engineered controls such as drafts, approvals, and logging enables AI agents to responsibly automate procurement, support, access, and financial tasks, making back-office operations the first wave of accountable, low-risk AI delegation.
- [00:17:53](https://www.youtube.com/watch?v=7NjtPH8VMAU&t=1073s) **Redesigning Decisions for Safe AI** - The speaker urges leaders to prioritize redesigning decision processes, making actions reversible and auditable, so that delegating to AI agents becomes a safe, intentional design choice rather than the default.

## Full Transcript
[0:00] I think most of us have had this moment in the last year. You watch an AI demo. It looks magical, right? It writes the document, it updates the spreadsheet, it navigates the website. You think, okay, so we're basically at the point where this thing can just run parts of my business. And then you try to deploy it, and it immediately turns into something much less dramatic. It's a drafting assistant. It's a chat widget. It's a tool that helps but doesn't really do. It's not because the technology got worse. It's because the demo happened in a world where mistakes don't matter much, and your business happens in a world where mistakes can cost real money, real trust, and real careers. And that gap, between what agents can do in a controlled setting and what we're willing to let them do in the real world, is what this executive briefing is all about. That's the most important story of the next two years. And it has a root cause that most of us are not naming. I want to talk about it today because I haven't heard it discussed like this anywhere else.

[0:56] We keep talking as if the bottleneck is intelligence: if the models keep improving, they're going to take on more responsibility, right? But the deeper bottleneck is trust. And trust is not about how smart your agent is. Trust is about the structure of decisions in the business environment. In plain language: how bad is it if you're wrong, and how can you undo it if you are? Every meaningful decision in your business lives on one of those two axes, right? Some decisions are low stakes and easy to reverse. If you pick the wrong time for the meeting, you just reschedule it. No big deal. If you organize a file incorrectly, no big deal, you just move it. If a draft memo is off, you can edit it. Right?
[1:37] These are the reversible ones. These are what Amazon would call two-way doors. You can walk through, you can walk back. You want speed here. Overthinking is a waste. You don't want to have people asking for permission. Just do it. Other decisions are high stakes and hard to reverse, but they're really important. If you send the wrong customer message, you cannot unsend it. If you grant the wrong person access to sensitive systems, the damage might already be done. If you commit to a vendor contract, unwinding that can be really expensive or even impossible. Those are one-way doors. You actually, intentionally, want friction there, right? You want reviews, you want approvals, you want people slowing down on purpose to take time and think about it.

[2:17] But this is the part that people don't realize, the part that makes the whole agent ecosystem actually click. Software became the easiest place for AI to do work precisely because we spent decades turning a huge number of software decisions into two-way doors. It's like we forgot that we had 30, 40, 50 years of experience turning software into a series of easy-to-reverse decisions. Things like GitHub are inventions. They're inventions because we wanted reversibility. It's not because software is inherently easy, people. It's not because engineers are gullible early adopters. It's because the environment in which we engineer software has been deliberately designed to make software mistakes survivable.

[3:05] Think about how modern organizations ship digital products. They don't treat every change like a permanent, irreversible commit. They treat changes like proposals that can be tested, monitored, and rolled back. If something goes wrong, the system is designed to recover really quickly.
[3:23] There are built-in steps that lower the consequences of being wrong. Changes are reviewed, often tested automatically, released gradually, watched very carefully, and then reversed if needed. The entire culture of modern software delivery is basically one massive project in how do we move fast without breaking everything. This is the only way you can have multi-thousand-engineer developer footprints and get anything done. You have to have reversible decisions.

[3:50] And this is the part that most of us miss as leaders. That safety infrastructure for software is a huge and hidden reason why agentic progress has felt so fast in engineering in 2025. In software work, there's a universal expectation that you can make a change, see what you changed, test it out, and undo it if it causes harm. That expectation is not a given anywhere else in the rest of your business. It's the result of tens of millions, perhaps billions, of human hours invested across the software ecosystem over decades, in processes and tools that are all designed to compound and make change less scary. We have spent decades honing the flywheel of software engineering so it's less scary to change things.

[4:35] But now look up from software. Look at the wider world: commerce, operations, finance, HR, legal, compliance, education, healthcare, the physical world. Most of it doesn't work like this. Many actions are not easily reversible. And even when they are theoretically reversible, the reversal is messy, because it involves people, it involves exceptions, it involves negotiations, it involves reputational repair. Buying a car is one example. If you buy the wrong car, you can't just undo it. You might be stuck. You might have to sell it at a loss. You might have to fight through a return process.
[5:13] You might have to eat financing fees. In a small number of cases, you get a generous return policy, CarMax style, where the market has a built-in escape hatch. But that escape hatch is itself not free. It exists because companies invested in policies, in logistics, and in fraud controls that make the reversal possible.

[5:30] Now take the car idea and expand it. Look at business decisions. Most of your organization's important decisions are effectively one-way doors, or at least one-way doors once you pass a certain point. Think of it as a commitment curve: early steps can be undone, later steps cannot. A vendor selection can be revisited until you sign and begin working together, and then you have to wait for the term, or there's an escape clause, or whatever. A pricing change can be revised until customers receive it and it becomes a public promise. An access change can be corrected, unless data has already been downloaded. A compliance filing can sometimes be amended, but you still have to create a record and a paper trail and an adjustment.

[6:14] This is where we see tool calling and standards like Model Context Protocol in a really different light. People sometimes treat Model Context Protocol as a kind of universal USB plug that lets models connect to many different systems. That is true, and it does matter, but it's really solving a narrower problem: how to make it technically possible for an agent to take action across tools. That's important, but it does not solve the bigger business problem, which is how do you make it safe to delegate actions across tools where actions are hard to reverse? Essentially, the challenge I have for you: I'm not saying don't build MCP. What I'm saying is tool access does not create trust.
[6:58] And once you see that, a lot of the AI market's recent behavior becomes very obvious to read. This is why so many agents are really co-pilots. They draft, they propose, they fill forms, they generate the plan, and they stop before the point of no return, because they're not trusted. That design choice isn't just cautious product managers covering their behinds. It's an admission: the real world doesn't have an undo button, and the vendor cannot take responsibility for irreversible mistakes.

[7:28] This also explains why the AI safety conversation often feels abstract or over-moralized when the practical issue is much more mundane: error recovery. In software, error recovery is super normal. It has a name. It has metrics. Great organizations measure how often changes cause problems, and then they measure how quickly they can recover when they do. In the rest of the business, recovery is often improvisational. Someone is scrambling. Someone is escalating. Someone is doing damage control. And that works when humans are the throttle and we can manage all of that ourselves. It does not work when actions can happen at machine speed, which is the world we're entering in 2026.

[8:07] This is the pivot where the story of agents turns from a technology story into an institutional and organizational change story. The question becomes: what should remain a one-way door in our businesses, and what should become a two-way door? And the uncomfortable truth is, we haven't really had to answer that question until this year. For all of corporate history, humans were slow enough that we could make the one-way doors work. And so we're all going to invent the answer next to each other in 2026. Humans naturally introduce friction, right? Humans hesitate. Humans double-check.
[8:47] Humans feel social anxiety. Humans worry about embarrassment and reputational loss. All of that has acted like an informal safety system in our societies, and also, more recently, in the corporation. It is inefficient, but it has acted as a risk-avoidance brake. Agents remove that informal safety system. If you give an agent the ability to take an action, like sending a message or changing a record or approving a request or moving money, the agent has no reputational risk on the line. The agent doesn't feel a sense of anxiety and go back and triple-check. And so it's up to you to either redesign the process so it's safe and reversible, or to keep the agent confined to drafting forever. There's not really a stable middle ground between those two. Either the agent can take the action or it cannot.

[9:38] So what does redesigning so it's safe actually mean in a business context? I think it means building a set of very practical, non-technical primitives that make more of our actions as a business reversible, or at least safely correctable, inside our businesses. These are absolutely robbed from software engineering culture, and this is part of my larger thesis: the story of 2026 is that there is no technical and non-technical anymore. It is all blurring. It is people using tools to solve problems. So we're going to steal some of these software engineering principles, because they've worked to help software engineering accelerate. We see agents working in software systems. Let's steal those primitives for other parts of the business.

[10:21] The first primitive is drafting first. Nothing important should go straight from idea to done. It should go into a proposed state first.
[10:29] A proposed refund, a proposed access change, a proposed vendor onboarding, a proposed customer communication. That captures a lot of the value of agency without necessarily crossing the one-way door. Yes, this is not fully the agent taking action, but if you are trying to make progress with your agents, having your agents jump to a draft, and having that draft pass all of your evaluations, is a great way to get started.

[10:56] The second primitive is preview as a primitive. Before any action becomes final, your system should be able to show what the change will look like in plain English. Which customer records will be updated? Which emails will be sent? Which accounts will be changed? Which permissions will be granted? What data will be shared? In the software world, people are used to seeing "here's what changed, here's the diff." In business operations, we rarely get that clarity, and it's one reason that leaders have trouble trusting automation. We need to build for that as system designers, and we need to insist on it as leaders.

[11:28] The third primitive, I think, is time windows. Many actions feel irreversible because they become final instantly. But you can manufacture reversibility by delaying final settlement. You can schedule customer emails with a recall window. You can make refunds pending for an hour or a day unless the amount is small. You can grant sensitive access as time-limited by default, so it expires automatically for an agent unless it's renewed. This is a big unlock, because it's mostly process and configuration. It's not fancy AI. And I have two examples here that I think are really relevant. One: Amazon does this with orders.
[12:02] People don't know this, but when you place an order on Amazon, it is intentionally delayed in processing for about half an hour, because they want to give you the option to reverse the order as a customer without consequence. They could pick it up and take it right away. They opt to wait half an hour and give you a time window to reverse.

[12:23] The second example is from the app Superhuman. They know that people tend to read emails and do their checks, as humans, in reality, after we send. So what they do is build in a 10- or 15-second reversibility window. They pop up an undo button as soon as you hit send, and you can choose to hit undo right away, because you're suddenly reading the email for the first time, because that's how humans work. We check it after it goes live. Just having that little time-window delay is massively helpful.

[12:54] The fourth primitive I would talk about is repair plans. When something truly cannot be undone, you're going to need a standard playbook for repair: refunds, apologies, reversing the accounting, rotating your credentials out, notifying affected teams. You get the idea. In business, this is often handled like a fire drill. It's handled ad hoc. For agents to act at machine speed, we have to think about how repair becomes a systematic thing. It doesn't have to be perfect, but it does have to be consistent and systematic.

[13:24] The fifth primitive is a permanent record. Every agent-driven action should leave behind a simple, queryable history: what the agent was trying to do, what information it used, what it changed, what tools it touched, who approved the final step. The purpose of this is not bureaucracy and filling log books.
[13:43] It's to make sure that you have accountability and you have learning over time. I think if you tackle those five primitives, suddenly a lot more of your organization becomes an agent-friendly substrate, even though it has nothing to do with software engineering. You're pulling software engineering principles into a non-software context. As an example, you'll be able to let agents handle procurement requests up to a threshold X automatically, because purchases start as drafts and require approval to commit, or because you've approved and seen the accuracy up to X dollars and you're comfortable with the risk. Another example: you can let agents triage your support tickets and draft the responses, because sends are staged and gated. Or you can let them draft and send responses up to a certain customer tier. You can let agents handle access requests, because access is time-limited and logged. You can let agents prepare financial close packages for the markets, because nothing posts without a human commit.

[14:42] This is why back-office operations, I think, are likely to be the first major wave of real agentic delegation. It's not because finance and HR are the most exciting things for AI to do. It's because those workflows happen entirely inside systems that you control. You can add drafts. You can add approvals. You can add time windows. You can add logs. You can create two-way doors inside the enterprise envelope. Obviously, you can't always do that across the open market.

[15:10] Now take that thought experiment and look at the one I shared earlier: what would it take for an agent to buy a car? Look at how we can apply those same primitives at a larger scale.
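Taken together, the five primitives amount to a small state machine around every consequential action. Here is a minimal sketch in Python; all class names, fields, and states are illustrative assumptions, not anything specified in the talk:

```python
import time
from dataclasses import dataclass, field
from enum import Enum


class State(Enum):
    DRAFT = "draft"          # primitive 1: everything starts as a proposal
    APPROVED = "approved"
    COMMITTED = "committed"
    CANCELLED = "cancelled"


@dataclass
class ProposedAction:
    description: str
    changes: dict            # field name -> (old value, new value)
    hold_seconds: float      # primitive 3: time window before the action is final
    state: State = State.DRAFT
    audit_log: list = field(default_factory=list)  # primitive 5: permanent record
    approved_at: float = 0.0

    def _record(self, event: str) -> None:
        self.audit_log.append((time.time(), self.state.value, event))

    def preview(self) -> str:
        """Primitive 2: a plain-English diff of what would change."""
        lines = [f"Proposed: {self.description}"]
        for name, (old, new) in self.changes.items():
            lines.append(f"  {name}: {old!r} -> {new!r}")
        return "\n".join(lines)

    def approve(self, approver: str) -> None:
        assert self.state is State.DRAFT
        self.state = State.APPROVED
        self.approved_at = time.time()
        self._record(f"approved by {approver}")

    def cancel(self, reason: str) -> None:
        """Walking back through the two-way door is always allowed pre-commit."""
        assert self.state in (State.DRAFT, State.APPROVED)
        self.state = State.CANCELLED
        self._record(f"cancelled: {reason}")

    def commit(self) -> None:
        assert self.state is State.APPROVED
        if time.time() - self.approved_at < self.hold_seconds:
            raise RuntimeError("hold window still open; action is not final yet")
        self.state = State.COMMITTED
        self._record("committed")
```

A refund, an access grant, or a vendor onboarding could all flow through the same states; primitive 4 (repair plans) would then attach a standard playbook to anything that does reach the committed state.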
[15:24] Look at how you would need new market primitives if you really wanted to change this. I'm going outside the enterprise now. If we want to step outside back-office operations, if we want to see how the market would change, if we want to see what it would take to have a widely agreed substrate for agentic commerce, we would need market primitives like standard hold periods, standardized cancellation terms, delayed title transfer, clear dispute resolution, liability allocation, and machine-readable contracts that remove ambiguity. Those are not really model features. They're not agent tool features. They're not prompt features. They're institutional upgrades. Our marketplace is going to need to become agentic. And only a few very large companies have the ability to shift the market in this way, to say this is the new marketplace norm, we want to make this a marketplace norm for commerce, and we will shift the market as a result.

[16:20] And this is where the story gets really big and really interesting. The agentic era is forcing a question that we've been able to avoid for a long, long time as a species: how much of our world ought to be designed around reversible commitments versus irreversible ones? For thousands of years, we have made that decision based on what is socially acceptable to us and what is risk-avoidant. For the first time in our species' history, with machine speed and machine intelligence, we can now intentionally choose the correct allocation of reversible and irreversible commitments. Some irreversibility is essential, right? It creates trust. It prevents fraud. I'm not saying don't make decisions that are irreversible. I think we have to have them.
[17:05] It makes promises meaningful. But a lot of irreversibility ends up being an artifact of our history, not intentional. As an example, test scores for small children should be more reversible than they are in most cases, especially if the child is dedicated to learning and is retaking the test. That is good for society as a whole, because your goal, as an outcome, is learning. A lot of irreversibility can be changed if we put the effort in as a society. We don't have to tolerate the legacy processes. We don't have to tolerate paper-era institutions. And we don't have to tolerate a world where coordination is measured by the speed at which an envelope runs through a mail system. Right? I still have to deal with aspects of government through the mail. And yes, we've talked about that in the digital era, but really I'm talking about it in the machine intelligence era. How can we make those kinds of things more reversible?

[18:00] And I am intentionally zooming out here. I know what leaders can do in their businesses is the most controllable thing, but I want you to get the larger vision, because really, all we're doing as we build these businesses is starting to change our norms as a species around how corporations behave, and, really long-term, around how we expect society to behave. And that's why we're zooming out. Agents are a forcing function here, because they remove the human throttle. They make it obvious when the world cannot safely absorb mistakes, which is a good thing, because then we'll name it as that. But they also make it obvious where we can redesign our systems, so more of reality has that sort of software-style safe commit phase.
[18:41] And that means that we stop treating irreversible action as our default way of interacting with the world, and we start to treat it as an intentional choice, a design choice that we make in our systems. So if you're a leader watching this unfold, the practical takeaway is surprisingly direct. Don't start with "where can we deploy agents?" Start with "where can we redesign our decisions so that delegation becomes a safe thing for us to do with agents?" Audit your recurring actions. Identify where you have one-way doors in the system. Build your drafts, your previews, your time windows, your durable records, like I was describing. Create thresholds intentionally for when you think humans ought to approve. I would label all of that as building the decision infrastructure that agents can then operate against.

[19:31] In the end, the organizations that win are not necessarily going to be the ones with the flashiest AI demos or the smartest models. We're all going to have the same models. They'll be the ones that make agent actions boring, predictable, bounded, repairable. Software learned this over decades.
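The "thresholds for when humans ought to approve" mentioned above can start as something as simple as a routing policy. A minimal sketch in Python; the action names, the door classifications, and the dollar threshold are all illustrative placeholders, not values from the talk:

```python
def route_action(action_type: str, amount: float = 0.0) -> str:
    """Decide how an agent-proposed action is handled.

    Returns "auto" (the agent may act) or "human_approval" (a person signs off).
    The sets and the spend threshold below are illustrative; each organization
    would classify its own actions and tune its own limits.
    """
    # Two-way doors: cheap to reverse, so let the agent act without friction.
    two_way_doors = {"reschedule_meeting", "move_file", "draft_reply"}
    # One-way doors: hard to reverse, so always require deliberate human review.
    one_way_doors = {"sign_contract", "grant_admin_access", "send_customer_email"}

    if action_type in one_way_doors:
        return "human_approval"
    if action_type in two_way_doors:
        return "auto"
    # Everything in between: a spend threshold decides.
    return "auto" if amount <= 100.0 else "human_approval"
```

So `route_action("issue_refund", 500.0)` would route to a human, while `route_action("move_file")` stays frictionless; the policy itself becomes an auditable artifact that can be reviewed and tightened over time.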