Learning Library

← Back to Library

Governance of Agentic AI

Key Points

  • Agentic AI represents a new class of autonomous systems that set goals, make decisions, and act without direct human oversight, distinguishing them from traditional predictive models.
  • This autonomy introduces heightened risks—including underspecification, long‑term planning errors, goal‑directed misbehavior, and impacts without a human in the loop—amplifying issues like misinformation, security vulnerabilities, and decision‑making flaws.
  • Effective governance of agentic AI requires layered safeguards such as interruptibility, human‑in‑the‑loop checkpoints, confidential data handling, risk‑based permissions, and robust auditability to trace decisions.
  • Ongoing monitoring, performance evaluation, and clear organizational accountability structures are essential to manage the evolving risks and ensure responsible deployment of autonomous AI systems.

Full Transcript

# Governance of Agentic AI

**Source:** [https://www.youtube.com/watch?v=v07Y4fmSi6Y](https://www.youtube.com/watch?v=v07Y4fmSi6Y)
**Duration:** 00:06:45

## Sections

- [00:00:00](https://www.youtube.com/watch?v=v07Y4fmSi6Y&t=0s) **Risks of Autonomous Agentic AI** - The speaker explains how agentic AI differs from traditional models by operating autonomously toward goals, outlining four risk‑linked characteristics (underspecification, long‑term planning, goal‑directedness, and impact directedness) and warning that increasing autonomy amplifies potential harms.
- [00:03:36](https://www.youtube.com/watch?v=v07Y4fmSi6Y&t=216s) **Governance and Safeguards for Agentic AI** - The speaker outlines accountability concerns (responsibility, regulation, and vendor liability) and details a multi‑layer technical safety framework (model‑level checks, orchestration loop detection, tool‑level RBAC), augmented by rigorous red‑team testing and continuous monitoring to ensure compliant, reliable AI agent deployments.

## Full Transcript
[0:00] AI is evolving at an unprecedented pace, and we're entering a new frontier: agentic AI. These aren't just chatbots or recommendation engines. These are AI systems that can set goals, make decisions, and take actions autonomously. This shift brings massive opportunities, such as automating complex workflows and accelerating innovation, but it also introduces serious risks. What happens when AI makes decisions without human oversight? How do we govern AI that thinks and acts for itself? That's exactly what we're here to discuss.

[0:36] Let's start with why agentic AI is different from traditional AI. Unlike classical machine learning models, which take inputs and produce predictable outputs, agentic AI takes the output from one AI model and uses it as the input for another. There are four key characteristics, all stemming from autonomy, which amplifies new forms of risk. First, there's underspecification: the AI is given a broad goal but no explicit instructions on how to achieve it. Long-term planning: these models make decisions that build on previous ones. Goal-directedness: instead of simply responding to inputs, they work toward a goal. And then there's directedness of impact: some of these systems operate without any human in the loop.

[1:36] What I want you to remember is that autonomy itself equals increased risk, and I'm going to put three exclamation points on that. That's the issue: as autonomy increases, so do risks like misinformation, decision-making errors, and security vulnerabilities. Many organizations are still catching up with generative AI risks, and agentic AI just amplifies them. Note that with outcomes like these, there are even fewer humans in the loop.
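The chained pattern described above, where one model's output becomes the next call's input, can be sketched as follows. This is a minimal illustration with a stubbed model, not a real agent framework; `call_model` and the step names are hypothetical stand-ins.

```python
# Minimal sketch of an agentic loop: each step's output is fed back in as
# the next step's input until the model signals completion. `call_model`
# is a hypothetical stub standing in for any real LLM call.

def call_model(goal: str, context: str) -> str:
    """Hypothetical model call: returns the next action, or 'DONE'."""
    steps = ["plan", "act", "DONE"]
    steps_taken = context.count("->")
    return steps[min(steps_taken, len(steps) - 1)]

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Chain model calls; the hard step cap is one simple risk control."""
    context = goal
    for _ in range(max_steps):
        output = call_model(goal, context)
        if output == "DONE":
            return context
        context = f"{context} -> {output}"  # output becomes the next input
    return context  # gave up after max_steps

print(run_agent("summarize report"))
```

Even in this toy version, the `max_steps` cap illustrates why autonomy needs explicit bounds: without it, a model that never returns `DONE` would run (and spend) forever.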
Fewer domain experts making course corrections. Look, we don't have time to define each and every one of these risks for you; we could record a show on every single one. But we do want you to see this impressive list of risks that are amplified or net new with agentic AI, because we want you to understand why governance is so critical.

[2:45] Now, let's talk about how we actually govern this technology. Effective governance for agentic AI requires a multi-layered approach. It starts with technical safeguards, guard rails like interruptibility: can we pause or shut down specific requests, or even the entire system? Human in the loop: when does AI require human approval, and is the agent able to stop and wait for that input? And confidential data treatment: do we have adequate data sanitization, like PII detection and masking, to avoid sensitive information disclosure?

[3:15] Additionally, we have process controls: things like risk-based permissions (what actions should AI never take autonomously?), auditability (if an AI arrives at a decision, can we trace back how it made that choice?), and monitoring and evaluation, because AI performance needs constant oversight. And lastly, accountability and organizational structures: who takes responsibility when AI decisions lead to harm? What regulations apply to your AI use cases? And how do we hold our vendors accountable for the AI's behavior?

[3:53] Now let's dive into the technical safeguards. Any organization deploying agentic AI needs guard rails at each of the main components of an agent, the first being the model layer. This is to check for bad actors who are trying to have the agent take actions that are not aligned with your organization's policies or guidelines, or even human ethical values.

Absolutely.
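Two of the process controls above, risk-based permissions and a human-in-the-loop checkpoint, can be sketched in a few lines. This is a minimal illustration under assumed names; the action lists and the `approve` callback are hypothetical, standing in for a real policy store and review workflow.

```python
# Sketch of risk-based permissions plus a human-in-the-loop checkpoint.
# Action names and the approval callback are illustrative assumptions.

FORBIDDEN = {"delete_database", "wire_funds"}      # never autonomous
NEEDS_APPROVAL = {"send_email", "modify_record"}   # pause for a human

def execute_action(action: str, approve) -> str:
    if action in FORBIDDEN:
        return "blocked"             # risk-based permission: hard stop
    if action in NEEDS_APPROVAL:
        if not approve(action):      # agent stops and waits for input
            return "rejected"
    return "executed"

# Usage: the lambdas stand in for a real human-review UI.
print(execute_action("wire_funds", approve=lambda a: True))   # blocked
print(execute_action("send_email", approve=lambda a: False))  # rejected
print(execute_action("summarize", approve=lambda a: True))    # executed
```

The key design choice is that the forbidden list is checked before the approval hook: a human can approve risky actions, but no approval path exists for actions the organization has ruled out entirely.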
[4:18] The next layer is the orchestration layer. Here you're going to want infinite-loop detection, not only to maintain an enjoyable user experience but to avoid very costly failures. Then, at the tool layer, we want to limit each tool to a specific agent, giving it appropriate usage and keeping it from going outside its predefined areas. We do that via role-based access control.

[4:46] And how do we know all of this fits together? We need to rigorously test the system. We highly recommend red teaming so we can expose any vulnerabilities before deployment. And once we do deploy, we want to make sure we are continuously monitoring, with automated evaluations to catch any hallucinations or compliance violations.

[5:11] The most successful organizations are already leveraging advanced tools and frameworks to ensure safe and effective AI deployment. These include models and guard rails designed to detect and mitigate risks in AI-generated prompts and responses; agent orchestration frameworks that enable the safe coordination of workflows across multiple AI systems; security-focused guard rails that help enforce policies and protect sensitive data during interactions; and observability solutions that provide insight into system behavior, helping teams monitor and understand what's actually happening under the hood.

[6:03] Agentic AI is here. It's powerful. It's evolving fast. And organizations that don't take governance seriously today will regret it tomorrow. Governance is not just about security; it's about control. AI should empower organizations, not create unmanaged risks. So here's our challenge to you.
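The tool-layer RBAC and orchestration-layer loop detection described above can be sketched together. This is a minimal illustration; the role names, tool names, and repetition threshold are assumptions, not a real agent framework's API.

```python
# Sketch of two guard rails from the transcript: tool-level role-based
# access control, and orchestration-level detection of an agent stuck
# repeating the same tool call. All names are illustrative assumptions.

from collections import Counter

# Each tool explicitly lists the agent roles allowed to invoke it.
TOOL_ROLES = {
    "search_docs": {"support_agent", "research_agent"},
    "issue_refund": {"billing_agent"},
}

def can_use(agent_role: str, tool: str) -> bool:
    """RBAC check: unknown tools and unlisted roles are denied by default."""
    return agent_role in TOOL_ROLES.get(tool, set())

class LoopDetector:
    """Flag an agent that repeats the identical tool call too many times."""
    def __init__(self, limit: int = 3):
        self.limit = limit
        self.calls = Counter()

    def record(self, tool: str, args: str) -> bool:
        """Return True once this (tool, args) pair hits the limit."""
        self.calls[(tool, args)] += 1
        return self.calls[(tool, args)] >= self.limit

print(can_use("support_agent", "issue_refund"))  # False: outside its role
detector = LoopDetector(limit=3)
print([detector.record("search_docs", "q=refund") for _ in range(3)])
```

Note the deny-by-default choice in `can_use`: a tool absent from the registry grants access to no one, which keeps newly added tools outside every agent's reach until a role is explicitly assigned.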
[6:28] Before you let AI act on your behalf, make certain you have the right guard rails in place. Because in the age of agentic AI, responsibility doesn't just fall on the machine. It falls on us.