Human vs AI Agent Identities
Key Points
- The speaker introduces a discussion on AI agents and “agentic identities,” inviting open, non‑adversarial feedback from the audience on emerging industry questions.
- Human employees are framed as physical beings belonging to organizational structures who follow a task lifecycle: receive → assess → plan steps → execute → learn and improve.
- Traditional non‑human identities (NHIs) are described as purely digital, deterministic entities that perform tasks in a fixed, unchanging manner.
- By contrasting human and non‑human identity models, the speaker sets the stage for exploring how AI agents might blend or redefine these characteristics in corporate settings.
Sections
- Opening Discussion on AI Agent Identities - The speaker invites an open, non‑adversarial dialogue about unresolved industry questions surrounding AI agents and their identities, framing it against the concrete, physical identities humans hold within corporations.
- From Deterministic Bots to Adaptive Agents - The speaker contrasts static, deterministic digital entities with AI agents that assess prompts, orchestrate task flows, and iteratively retrain based on performance metrics, making them behave more like humans.
- Treating AI Agents as Coworkers - The speaker interrogates whether organizational agents that perform support tasks should be regarded, managed, and listed in directories like human employees.
- Cost and Governance of Scalable Agents - The speaker highlights the CPU, network, and data costs of persistent agents, warns that scaling to thousands of agents will overwhelm current identity‑governance systems, and questions whether the existing IGA solution can handle the resulting approval and entitlement workload.
Source: https://www.youtube.com/watch?v=D9fp2OVeAIo
Duration: 00:11:52
Timestamps:
- 00:00:00 Opening Discussion on AI Agent Identities
- 00:03:03 From Deterministic Bots to Adaptive Agents
- 00:06:24 Treating AI Agents as Coworkers
- 00:09:31 Cost and Governance of Scalable Agents
Full Transcript
Howdy, everyone.
We're going to do something a little bit different today.
We're still going to talk about AI and agents,
and we've had a lot of conversations about that.
But when we start thinking about AI and AI
identities, there's actually some open questions that are still being debated out
amongst companies and out in the industry.
So I want to pose a set of questions today
and engage everyone in feedback and comments
so that we can actually have an open
bit of a conversation about some of these questions and
and invite you to give me your thoughts on how you think things are going.
I don't want to make this a debate.
I really am not looking for,
you know, people to chime in just to argue against something.
I really am inviting everybody to just have an open conversation
and think about some of the questions
that are actually being posed out now
about agents and agentic identities.
All right. Before we get to that,
I do want to revisit a little bit about identities
and set that up as kind of a foundation for our questions and our conversation.
So the first thing I want to talk about is humans
and how we are represented
as identities in corporations.
Now, the first thing we have to understand,
and this one is obvious,
is that we have a physical existence.
We are here in a physical world,
in organizations performing tasks and jobs, right?
So we actually have a physical existence. Now, and
I kind of just said this, the
next thing is that we belong to organizations.
So we belong to an enterprise or a company.
We belong to a department, we belong to a business unit.
But we do belong, in some way,
to an organization within an enterprise.
The next thing is
I want to start thinking about how we perform tasks
as employees.
We perform tasks.
And so the first thing that we typically do is we're given a task.
And we have to assess
what it is that we're supposed to do.
And once we assess it, we actually start breaking down the problem
into a set of steps,
especially for complex
tasks like, okay, what is it that I need to do?
What are the steps that I need to take?
Once we've assessed the task, and
we break that down, the
next thing we do is we execute. We
actually perform the task. And
then finally, we come back and we learn. All right.
Did that work? How did I do at the task? Could
I have done it better?
Is there some way to change? And we learn from that.
And we apply that the next time we do a task.
All right, so that's humans.
The next thing is our traditional non-humans.
Or what we call
NHIs, non-human identities.
Now they're a little bit different.
Obviously, they're digital.
They have a digital existence.
And traditionally, they also, when they perform a task,
they're very deterministic.
We know exactly what they're going to do and how they're going to do that.
And it is typically unvarying, right?
We really don't change
how a non-human entity or identity works.
It just does the task.
So now let's think about agents.
So we've been talking about agentic flows and agentic systems.
I'm not going to talk about assistants.
I really want to leave this to agents.
When we think about AI and we think about agents.
Now first obviously, they're not physical, right?
They're digital, just like other non-human identities.
But we really largely think they're going to belong as part of an organization.
They're going to be in a business unit or a department performing tasks.
And when they do perform a task, they're going to assess.
This is really what we're trying to do with AI and with agents.
They look at what the prompt or the ask is.
They assess what it is.
They break it down into tasks. How
am I going to orchestrate this flow? How
am I going to perform the steps that I need?
They execute on that flow, and then, they actually learn.
We actually will retrain them.
We'll look: what was the accuracy of the task they performed?
Was it 70%? Was it 80%? Was it 90%?
If it wasn't high enough, we'll go back and learn. So,
in this case, they really start operating
and behaving a lot like a human.
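The assess → plan → execute → learn loop described above could be sketched roughly like this. All function names, the step breakdown, and the scoring logic are hypothetical, not taken from any specific agent framework:

```python
# Rough, hypothetical sketch of the agent task lifecycle described in the
# talk: assess the ask, break it into steps, execute, then score and learn.
# All names and logic here are illustrative only.

def assess(prompt):
    # Work out what the ask actually is.
    return prompt.strip().lower()

def plan(goal):
    # Break the goal down into an ordered flow of steps.
    return [f"step {i} of: {goal}" for i in (1, 2, 3)]

def execute(step):
    # Perform one step of the orchestrated flow.
    return ("done", step)

def run_agent_task(prompt, threshold=0.9):
    goal = assess(prompt)
    results = [execute(step) for step in plan(goal)]
    # Score the task, e.g. the fraction of steps that completed.
    accuracy = sum(1 for status, _ in results if status == "done") / len(results)
    if accuracy < threshold:
        pass  # in a real system, this is where retraining would happen
    return results, accuracy

results, accuracy = run_agent_task("Summarize this support ticket")
```

The point of the sketch is only the shape of the loop: unlike a deterministic bot, the agent's behavior flows from assessment and planning, and the measured accuracy feeds back into learning.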
And so when we talk about that, then
so, as we start embracing agents into a workplace,
there are questions that start arising around that.
And so these are the questions that I want to pose
to everyone to, kind of, chime in and give me your thoughts on this.
The first one is:
When we think about agents, are agents
just software?
Now, obviously they are, right? We say they're digital.
We know that they're living in IT systems.
And there are actually applications that we have running.
But they're also learning and they're assessing
and they're thinking about tasks.
And so, while yes, they are just software,
they're also kind of more.
They're somewhere
starting to behave
the ways that humans would behave when they're taking on tasks.
And that's the artificial intelligence piece of this, right?
So, so the question is: are agents, are they just software?
Are we just going to look at them like traditional
non-human identities and applications,
and that's all they are, right?
So that's the first big question I have.
The second question then is: should we
or do we recognize
agents as coworkers?
Now this is actually a point of a lot of debate.
There's a good use case from a year or so ago
where an organization actually treated an agent,
from an HR perspective,
as just another worker.
And there were actually, you know, some documented things about that
that didn't really work out so well from a human,
you know, from an HR perspective.
But this is not that.
This is not the question I'm asking here.
This is: if an agent is performing
tasks for an organization, for a business unit,
should we treat them like they are just another coworker?
And part of the reason I ask that is
if we think about a case where we have
agents that are supplementing or extending,
virtually extending, a support team. And
they're going to pick up and do tasks similarly,
or maybe even the same way, that a human would process
support requests, find answers, and report on those answers.
If we're virtually treating them like a workforce,
do we recognize them as a coworker?
So this is the next question that I posed to everyone is:
do we recognize them as a coworker?
The third question that I really want to ask then
is, and all of these are very much related,
but should an agent,
do we put an agent
into the directory?
And you know, this can be
your Active Directory, could be an enterprise directory.
It could be a lot of things.
And the reason I pose this question
is that in one of my earlier videos, we
were talking a lot about agent identities and governance.
And one of the comments
actually came and said: Please don't tell me that
we're now going to put agents in the Active Directory.
That's actually the question.
Should we be doing that?
Should we be putting them there, if they're coworkers,
if we think of them as supplementing our workforce
because they're performing tasks similar to the way we do?
Should they be in the directory?
So I posed that question out to everyone.
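As a purely illustrative sketch of what "agents in the directory" could look like, here is a hypothetical directory entry for an agent, modeled as a Python dict. The object class and attribute names are made up (loosely styled after common LDAP-like directory conventions), not drawn from any real Active Directory or enterprise directory schema:

```python
# Purely illustrative: what a directory entry for an agent might contain if
# we treated agents like coworkers. The "aiAgent" object class and the
# attribute names here are hypothetical, loosely LDAP-flavored.

agent_entry = {
    "cn": "support-agent-017",                     # common name
    "objectClass": ["top", "aiAgent"],             # "aiAgent" is made up
    "department": "Customer Support",              # belongs to a business unit
    "manager": "cn=jane.doe,ou=people,dc=example,dc=com",
    "lifecycle": "ephemeral",                      # persistent vs ephemeral
    "entitlements": ["ticket-read", "kb-search"],  # what it may access
}
```

Note that the entry carries the same kinds of facts we track for human coworkers: who it reports to, which business unit it belongs to, and what it is entitled to access.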
The next question I'd like to ask
is: are agents
persistent
or ephemeral?
In other words,
Me, as a human worker, I have a physical existence, right?
So I'm persistent. I come to work,
I am there,
and I perform the tasks that are coming my way.
Agents, going back to the first question,
are, you know, software.
So we can actually, we can provision them.
We can de-provision them. We can spin them up, or we can spin them down.
So the question is: should an agent be persistent?
Should it always be waiting around
to take on a task? Or
should it just be brought up to do that task, and
then, we bring it down?
And there's actually some cost implications with this.
There is a cost associated from an IT
perspective of running agents. You
know, if they're running in a container or a pod or whatever,
however you're doing that, the question is,
there's a cost associated with that CPU usage,
you know, network traffic, data consumption, whatever that is.
So if they're persistent, they could then
be continually consuming resources.
And do we want that or not?
So this is the next question I pose.
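One way to picture the ephemeral option is a spin-up/spin-down lifecycle, where the agent only exists, and only consumes CPU, network, and data, for the duration of a task. This is a hedged sketch with invented names, not a real provisioning API:

```python
# Hypothetical sketch of an ephemeral agent lifecycle: provision the agent
# for one task, then de-provision it so it stops consuming resources.
# The context manager and field names are invented for illustration.

import contextlib

@contextlib.contextmanager
def ephemeral_agent(name):
    agent = {"name": name, "running": True}  # spin up / provision
    try:
        yield agent
    finally:
        agent["running"] = False             # spin down / de-provision

with ephemeral_agent("support-agent-017") as agent:
    during_task = agent["running"]           # True: consuming resources here

after_task = agent["running"]                # False: no longer consuming
```

A persistent agent, by contrast, would sit outside any such scope and keep consuming resources while waiting for work.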
The final question I want to pose to everyone
is: does our current IGA system,
our identity governance and administration,
do what we need?
In other words,
is it enough?
And the reason I say that is that if we think about
agents being ephemeral, if they are supplementing workforces,
if we need to spin them up, we could actually make
thousands and thousands of these agents.
There's actually multiplier
numbers out there now that are all over the place.
Whether you need 5x agents to perform tasks,
or there's a number
that says that however many people you have in your organization,
you're ultimately someday going to have 45x that in agents. Whatever,
however you do your multiplier,
we know there's going to be a lot of agents.
And that means from a governance perspective,
a lot of approvals, a lot of entitlements,
a lot of annual validations.
And if we think about the number of potential agents that could come out of this, the
question is: our current way that we handle identities
is really built around human identities.
And if we're drawing some parallels here,
is that enough?
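To get a feel for the scale in question, here is some back-of-envelope arithmetic. The 5x and 45x multipliers are the ones mentioned above; the employee count and the entitlements-per-agent figure are made-up assumptions for illustration:

```python
# Back-of-envelope governance arithmetic. The 5x and 45x multipliers come
# from the talk; employees and entitlements_per_agent are invented here.

employees = 10_000          # hypothetical organization size
entitlements_per_agent = 5  # hypothetical average per agent

validations = {
    multiplier: employees * multiplier * entitlements_per_agent
    for multiplier in (5, 45)
}
# 250,000 annual entitlement validations at 5x; 2,250,000 at 45x.
```

Even under these modest made-up assumptions, the annual validation workload lands in the hundreds of thousands to millions, which is the kind of volume the question about current IGA systems is really about.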
So these are the questions.
These are the questions I am posing to everyone.
And what I really would like you to do
is, as mentioned, join in the conversation.
Put some comments in.
Tell me what you think about one or several
or all of these questions.
Other people can chime in and let's have a conversation.
I do not want this to be a debate.
It's not about right or wrong.
It's not about whether it is or is not.
It's really a conversation about where
we think things are heading.
So please engage and I look forward to reading your comments.