Turn Your Current Role into an AI Job
Key Points
- The crucial mindset shift is to ask how you can turn your existing role into an AI‑enhanced one rather than hunting for a separate “AI job.”
- In 2025 AI moved from being a superficial chat/assistant layer to becoming a core infrastructure layer that underpins everyday workflows.
- A standardized agent architecture emerged—defining agents as goal‑driven loops with context gathering, reasoning, action, and observation—and introduced maturity models and design principles for multi‑agent systems.
- Security moved from a theoretical concern to an operational reality, as “shadow IT” and personal AI tools exposed organizations to risks that must now be governed.
- The practical path forward for 2026 is to leverage your company’s AI infrastructure and the new agent frameworks to redesign how work gets done in your current position.
Sections
- Make Your Current Job AI‑Native - The speaker urges professionals to stop seeking separate AI positions and instead transform their existing roles by leveraging AI as an infrastructure layer—a shift that occurred in 2025—and outlines the new mental models and practical steps needed to become an AI‑native worker by 2026.
- AI Agents: 2025 Lessons - The speaker explains how 2025 clarified that AI agents deliver strong ROI when confined to repetitive, verifiable tasks, highlighting the need for IT partnership, new deployment skills, and careful safety considerations.
- AI Agent Governance Essentials - The speaker stresses the need to understand where AI agents store logs and metrics, how they communicate via protocols, enforce security roles, and integrate governance as a core operating system for credible, auditable AI adoption.
- Designing Human‑AI Collaborative Workflows - The speaker urges professionals to transition from repetitive tasks to strategic, people‑centric roles by mapping their existing workflows and prototyping AI‑augmented processes, positioning themselves as designers and supervisors of AI agents.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=gtkRAXQf49k](https://www.youtube.com/watch?v=gtkRAXQf49k)
**Duration:** 00:13:29

## Sections

- [00:00:00](https://www.youtube.com/watch?v=gtkRAXQf49k&t=0s) **Make Your Current Job AI‑Native**
- [00:03:56](https://www.youtube.com/watch?v=gtkRAXQf49k&t=236s) **AI Agents: 2025 Lessons**
- [00:07:36](https://www.youtube.com/watch?v=gtkRAXQf49k&t=456s) **AI Agent Governance Essentials**
- [00:10:42](https://www.youtube.com/watch?v=gtkRAXQf49k&t=642s) **Designing Human‑AI Collaborative Workflows**

## Full Transcript
My inbox and my DMs are full of people
saying, "Can I get an AI job? How do I
get an AI job?" And that is the wrong
question, people. The right question is,
"How do I turn my current job into an AI
job?" I'm dead serious. And I'm going to
talk about it here. Your goal in 2026 is going to be much more specific than dreaming of another job. It's not changing careers, and it's not becoming a prompt engineer; it's changing the way work actually gets done in your current role, using the AI infrastructure your company is already rolling out. I
am telling you for 95% of us that is the
way AI is going to come. And we don't
talk about it. We talk about changing
jobs all the time, but that's a tiny sliver of the world. For most of us, it is not about that. I'm actually
going to focus on what changed in AI in
2025 underneath the hype, the new mental
models that you need to understand what
matters in 2026, particularly around AI
agents, and a practical path to making
your existing job an AI native job. So,
what actually changed in 2025? Underneath the hood, underneath all the hype, stepping back, the first thing you
need to recognize is that AI moved from
a chat interface into being an
infrastructure layer this year. So for
the last two years, for most of us, the
experience of AI has been it's a chat
box, it's a writing assistant, maybe it
does some code completion. That is now
the most superficial layer of AI.
Underneath the surface, three big shifts
happened in 2025 that changed the game
on AI. Number one is that architecture
started to get standardized. Google's recent "Introduction to AI Agents" paper is just the latest example of this. The
larger perspective if you step back is
that we have started to get a clear
industry definition around an agent as a
loop. An agent has a goal, gathers
context, it reasons, it acts, it
observes. And we have patterns now for
multi-agent systems that include planner
agents, retriever agents, executor
agents, etc. We also have the beginning of an industry model for agent
maturity from simple tool calling all
the way up to self-improving systems
which nobody has or almost nobody has.
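The loop just described (a goal, then gather context, reason, act, observe, repeat) can be sketched in a few lines of Python. Everything here is a toy stand-in for illustration; a real agent would call a model and real tools.

```python
# Toy sketch of the agent loop: goal -> gather context -> reason -> act
# -> observe, repeating until the goal is met. All names are illustrative.

def run_agent(goal, tools, max_steps=10):
    """Drive a simple goal-directed loop over a set of tools."""
    state = {"done": False, "history": []}
    for _ in range(max_steps):
        context = gather_context(state)          # what do we know so far?
        action = reason(goal, context, tools)    # pick the next action
        result = act(action, tools)              # execute it
        state = observe(state, action, result)   # fold the result back in
        if state["done"]:
            break
    return state

def gather_context(state):
    return state["history"]

def reason(goal, context, tools):
    # Toy policy: keep incrementing until the counter reaches the goal.
    current = context[-1] if context else 0
    return "stop" if current >= goal else "increment"

def act(action, tools):
    return tools[action]()

counter = {"value": 0}

def increment():
    counter["value"] += 1
    return counter["value"]

def observe(state, action, result):
    if action == "stop":
        state["done"] = True
    else:
        state["history"].append(result)
    return state

tools = {"increment": increment, "stop": lambda: None}
final = run_agent(goal=3, tools=tools)
# The agent stops once the counter reaches the goal.
```

The point of the sketch is the shape, not the toy tools: swap `reason` for a model call and `tools` for real integrations and the loop is the same.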
And finally, we have design principles around how we think about issues like budgetary authority for agents, boundaries for agents, and security identity for agents. Still evolving, but it's starting to come into place. The
reason you need to care about this is
that until we had that architecture,
agents were mostly theoretical or they
were point solutions to problems.
Because of the work done in 2025,
because that architecture is more
standardized, we are now set up to do
much more interesting things, much more
comprehensive work with agents in 2026.
The second big piece in 2025 is that
security is no longer a hypothetical.
2025 was the year of shadow IT: bring your own AI to work. Maybe security won't check. Maybe your chief information security officer won't notice you brought your personal ChatGPT. That is
increasingly going to be out of bounds,
caught and not allowed. And the reason I
say that is because these CISOs and information officers have had a year to get their teams in gear to approve a bunch of tools like Claude Code, like ChatGPT, like Lovable. And so
increasingly the tools that are allowed
are inside the fences now. And the
critical thing that you need to be aware
of is that the security focus is now
moving into that agent space. And so
more and more the real meaningful shifts
are going to be done in partnership with
your security teams at work. It's not
going to be just the marketing team
setting up their individual little tool
and hoping and praying nobody notices.
More and more, that's going to require partnership with the rest of the IT department, and that is something I will absolutely get into. But it's a skill we need to develop that most of us haven't had to use before, because frankly the ability to deploy technical agents to do this work is brand new. The
third major change in 2025 is that
enterprises learned where AI agents
actually work. This is probably the
biggest one. I can't underline this one
enough. Across hundreds of deployments, the pattern is annoyingly consistent.
Agents are reliable and deliver really
good ROI on work tasks when they are
bounded in scope, when they are
objectively verifiable, when they are
repetitive, and when they have clearly
defined inputs and outputs. So you can
think back office operations, triage
operations, claims, lead qualification,
document checks, basic compliance,
customer support flows. It is not "invent our product strategy for us, AI agent." It is "hey, can you execute this same process we do 10,000 times a week, and please don't get bored." That's where AI agents are
going. So 2025 gave us a lot of clarity, and that shapes how we prepare ourselves in our roles for AI agents. And yes, it will touch all of us. It gave us clarity on what agents are, how they operate at scale, where they're safe, where they're useful, and where they're dangerous if you're sloppy. This all
lays the foundation for what comes next.
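Those four conditions (bounded scope, objectively verifiable, repetitive, clearly defined inputs and outputs) make a handy screening checklist. A small illustrative sketch, with made-up example tasks:

```python
# Screen a task against the 2025 pattern for reliable agent work.
# The four criteria come from the talk; the scoring is an illustration.

CRITERIA = ("bounded_scope", "verifiable", "repetitive", "defined_io")

def agent_ready(task):
    """Return (score, missing): a task is a strong candidate at 4/4."""
    missing = [c for c in CRITERIA if not task.get(c)]
    return len(CRITERIA) - len(missing), missing

# Hypothetical tasks: claims triage fits the pattern; strategy does not.
claims_triage = {"bounded_scope": True, "verifiable": True,
                 "repetitive": True, "defined_io": True}
product_strategy = {"bounded_scope": False, "verifiable": False,
                    "repetitive": False, "defined_io": False}

print(agent_ready(claims_triage))    # scores 4 of 4, nothing missing
print(agent_ready(product_strategy)) # scores 0 of 4
```

Anything scoring below 4/4 is a signal to tighten the scope or add a verification step before handing it to an agent.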
If you're looking ahead to 2026, these
are the three mental models that you
need to survive in your career as we
start to have AI agents more and more in
the workplace. Number one, AI is a
collaborator on structured work. It is
not a magic brain. So, I'm going to say
it again, LLMs are pattern machines.
They're very, very good at transforming
text and code. They can map messy
inputs to structured outputs very well.
They follow explicit instructions
increasingly well and they can do the
same thing a thousand or 10,000 times
and never get bored. But they are not
inherently good at making high stakes
decisions with very ambiguous
trade-offs. They don't understand your
organization's politics or background
well. They don't know your context
unless you give it to them. And they are very, very bad at respecting boundaries
that you have not defined previously.
And so the right question is not "can AI do my job?" I hear that a lot, and it's the wrong question to ask given what we know about AI agents today. Instead it is: which parts of my job are repetitive, checkable, describable, or verifiable, and how do I turn those into workflows that AI can run or assist with? How do I
begin to take charge of how AI shapes my
job? And if you can't describe the work clearly, that's something you're going to have to learn to do; the AI just doesn't have a chance otherwise. The second major
mental model is agents plus
orchestration are becoming the new
middleware. And if that sounds abstract,
the key thing to understand is that
middleware has always existed in our
software stacks. In between backend and
front end, there has always been a piece
of the stack that translates. That part
of the stack now got intelligent. It got
intelligent because agents are
increasingly going to be that
middleware. All an agent is is a loop
around a model. It has tools. It has
some kind of state that it's working
with and it has decision logic. That's
it. The important part here isn't that
we label this middleware. It's that we
understand that this orchestration layer
is going to be driving a lot of how we
do productivity. And we need to take
charge of what that looks like. So: what tools is it allowed to use? Under what identity does it run, and with what budget? Where are the logs and the metrics stored? What does it do when it doesn't know? This is the part that most people
don't see or think about. But you need
to think about it if you want to have a
productive relationship with AI agents
in your role. You need to at least
understand the vocabulary: how models talk to tools and data, maybe through the Model Context Protocol (MCP), maybe other ways. What are agent-to-agent protocols? How do teams of agents coordinate? And how can you talk about that at a high level even if you're not an engineer?
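Those orchestration questions (allowed tools, identity, budget, logging, and the fallback when the agent doesn't know) can be sketched as a small policy wrapper. This is an illustrative toy, not a real gateway or control-plane API; all names and numbers are assumptions:

```python
# Toy policy wrapper around agent tool calls: which tools are allowed,
# under which identity, with what budget, logged where, and what happens
# when the agent can't proceed. Illustrative only.

class AgentPolicy:
    def __init__(self, identity, allowed_tools, budget):
        self.identity = identity
        self.allowed_tools = set(allowed_tools)
        self.budget = budget          # remaining spend, arbitrary units
        self.log = []                 # where the logs and metrics live

    def call(self, tool, cost, fn, *args):
        if tool not in self.allowed_tools:
            return self._refuse(tool, "tool not allowed")
        if cost > self.budget:
            return self._refuse(tool, "over budget")
        self.budget -= cost
        result = fn(*args)
        self.log.append((self.identity, tool, cost, "ok"))
        return result

    def _refuse(self, tool, reason):
        # "What does it do when it doesn't know?" Escalate, don't guess.
        self.log.append((self.identity, tool, 0, f"escalated: {reason}"))
        return None

# Hypothetical service identity with one allowed tool and a small budget.
policy = AgentPolicy("svc-marketing-agent", {"summarize"}, budget=10)
summary = policy.call("summarize", 3, lambda text: text[:20],
                      "Q3 campaign results ...")
blocked = policy.call("send_email", 1, lambda: "sent")  # refused: not allowed
```

In a real deployment these decisions live in a gateway or control plane, not in your code; the value of the sketch is knowing which knobs exist so you can discuss them with the team that owns that layer.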
Control planes, gateways. What are the
choke points where organizations are
going to enforce security policies and
observe behavior? How do you ensure that
the agents that are built have the right
roles and permissions? I am not
expecting you to implement this
yourself. Most people won't. But if you
want to be taken seriously, you do need
to be able to talk at a high level about
AI workflows in your area in these terms
because that makes you translatable.
That makes you accessible to people who
will be building this for you and you
will want that skill. The third major
mental model for 2026 is governance.
It's not a bolt-on; it's going to be the new operating system, guys. AI is growing up. If your AI adoption
story doesn't include security and
privacy and auditability and all of that
stuff that seems boring, it's not going
to be taken seriously. And so you need to be providing proactive answers: in your domain, where would you allow AI to act autonomously? Where would you allow
it to only draft? Where would you
require a human approver? How do you
shut it down safely? This is no longer
just your chief information security
officer's problem. It is becoming
everyone's problem because AI agents
will not roll out successfully if they
do not know your local information and
data. So where will AI actually reshape
your job keeping all of that in mind?
Fundamentally, you need to think of your
job as a stack of workflows. Your job is
going to be decomposed and you need to
take charge of what that looks like. So
don't think of it as doing marketing.
Think of it as you run campaigns, you
create briefs, you analyze performance,
you manage stakeholders. Those are
workflows. You don't do product
management. Instead, you collect
requirements. You prioritize. You write
specs. You coordinate launches.
Workflows. Again, you don't do finance.
Instead, you reconcile. You forecast.
You analyze variance. You produce
reports. Again, workflows. Each of these can be decomposed into triggers (what starts the work), inputs (what you look at), transformations (what you do with it), decisions, outputs, and checks to know if it's correct. AI slots into a
structure like that. AI will handle the
boring and repetitive parts of those
workflows. It is up to you to figure out how that actually takes shape in your role.
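That decomposition (trigger, inputs, transformation, decision, outputs, check) can be written down as a data structure. A sketch, with a made-up finance reconciliation workflow as the example:

```python
# One workflow from the decomposition above, written out explicitly.
# The reconciliation example is invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Workflow:
    name: str
    trigger: str                   # what starts the work
    inputs: list                   # what you look at
    transformation: str            # what you do with it
    decision: str                  # the call that gets made
    outputs: list                  # what comes out
    check: Callable[[dict], bool]  # how you know it's correct

reconcile = Workflow(
    name="monthly reconciliation",
    trigger="month-end close",
    inputs=["bank statement", "ledger export"],
    transformation="match transactions between the two sources",
    decision="flag unmatched items above a threshold",
    outputs=["reconciliation report", "exception list"],
    check=lambda r: r["unmatched_total"] == 0 or r["flagged"],
)

# A clean result passes the check; an unflagged discrepancy fails it.
print(reconcile.check({"unmatched_total": 0, "flagged": False}))
print(reconcile.check({"unmatched_total": 125, "flagged": False}))
```

The `check` field is the part AI needs most: if you can't state how you'd know the output is correct, an agent can't be trusted with the workflow.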
Across industries, the same categories
keep getting automated or heavily
assisted. Triage tasks, routing tasks,
summarization tasks, synthesis tasks,
policy and rule tasks, repetitive
document workflows like pulling data
from forms, glue work across tools,
moving information from Excel into Word
or vice versa. If you look at your job
honestly, for most of us, a non-trivial
percentage is in one of those buckets.
And that is what is going to move first.
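One way to make the earlier governance questions concrete for these buckets: a policy table mapping task categories to autonomy tiers, defaulting unknown work to the most restrictive tier. The specific mapping is an illustrative assumption, not a standard:

```python
# Illustrative autonomy tiers: where AI may act on its own, where it may
# only draft, and where a human must approve. The mapping is hypothetical.

AUTONOMY = {
    "summarization": "autonomous",
    "triage": "autonomous",
    "document_extraction": "draft_only",
    "customer_reply": "draft_only",
    "payment_release": "human_approval",
}

def tier_for(task_type):
    # Unknown work defaults to the most restrictive tier.
    return AUTONOMY.get(task_type, "human_approval")

print(tier_for("triage"))        # autonomous
print(tier_for("set_strategy"))  # falls back to human_approval
```

Writing the table down, even roughly, is the "proactive answer" the speaker asks for: it shows you've already decided where drafting ends and approval begins in your domain.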
Now, the parts that stay human for a
long time to come are parts around
negotiation, around trust building,
around politics, around deciding which
problems to solve, around setting
strategy, around being accountable when
things go wrong. So, I don't want you to hear "Nate is proposing that I AI away my job." I want you to hear that AI drains
repetitive and checkable work out of
your role. You should be in charge of
what that looks like or someone else
will do it for you. And your value is
going to shift toward defining
workflows, supervising them, handling
exceptions, choosing what to build,
touching the work that matters. And so
when you think about what to do in the
next few weeks as you head into 2026, if
you want to get a running start, number
one, map your work as if you were a
systems designer. I've given you a cheat
code here. Write down your workflows.
Write down what triggers them, what
inputs there are, what outputs there
are, what decisions there are. Learn to express those workflows with the tools you already have. Try something, even if it's just a rough prototype, in ChatGPT Enterprise or in Copilot or in Gemini, to get the idea of what that workflow would look like, so you can show a workable prototype when an AI agent initiative comes along. I'm not saying spin up rogue
infrastructure. I'm saying try and
prototype something so you get a living
feel for what AI agents working with you
would look like and then be in the
driver's seat when you have these
conversations with your engineering
teams. I would also encourage you to
build a relationship with the people
championing AI in your org. Maybe that's you, because you watch this channel, or maybe
it is someone else who is responsible
for the technical side. But either way,
make sure that you are finding the right
team who is responsible for AI
initiatives in your area. That you are
showing them you've done your homework
and you're thinking inside the existing
organizational guard rails. You're
thinking about workflows. You're
thinking about patterns and tools. At
that point, you're no longer a random
person. You're a valuable champion and
an ally who speaks both languages: the messy reality of the business, and the constraints of the platform that the technical teams think about. And you are in a position to be a fluent
translator of AI and drive how AI agents
work with you in your role. That is a
very valuable position and that is what
you need to be able to do to be in the
driver's seat. This is what I wish I
could tell 95% of people who are not
going to switch jobs in the next year
for AI roles. This is what you need to
know to be in charge of AI in your role.
So, I hope this has been helpful.
There's a lot more written up on the Substack, including some prompts to help
you think through this. My goal is to
give you a guide so that you can
meaningfully engage with your existing
role and prepare for it now, before we go into 2026 and AI agents are absolutely everywhere. Good luck.