AI Execution: Cheaper Yet Riskier
Key Points
- AI is dramatically lowering the cost of execution across functions—from product management to engineering to customer success—by enabling faster, higher‑volume work.
- Paradoxically, this cheaper, faster execution spawns new jobs focused on quality assurance and security because AI‑generated code and outputs introduce “dirty” code, hallucinations, and prompt‑injection vulnerabilities.
- Real‑world examples illustrate the risk: engineers grapple with low‑quality AI‑written code, and sales teams inadvertently rely on hallucinated AI‑crafted decks, leading to potential misinformation and contractual errors.
- Addressing these challenges requires specialized safeguards such as red‑team testing, robust prompt engineering, data chunking controls, and mechanisms to constrain LLM responses within trusted answer distributions.
- The core tension now lies between the benefits of accelerated AI‑driven productivity and the need to manage the accompanying quality and security nightmares.
Sections
- AI Execution Dynamics and Job Impact - The speaker outlines two opposing dynamics—AI making execution cheaper, which threatens jobs, and the resulting quality and security challenges that actually generate new employment opportunities.
- The Human‑AI Boundary Crisis - The speaker argues that exploding compute costs demand new infrastructure roles, while a looming “human‑AI boundary crisis” will spawn billion‑dollar businesses to define, debug, and manage ambiguous AI behaviours like hallucinations.
- AI, Chaos, and Trust in Product Management - The speaker argues that amidst AI‑driven chaos, product and program managers can differentiate themselves by earning trust and retaining accountability, even as AI automates many routine tasks.
- Software Engineering Evolving With AI - The speaker urges engineers not to abandon the field despite AI hype, emphasizing the need for robust, scalable design to manage the surge of low-quality AI‑generated code.
- Middle Managers' Future in AI Era - The speaker contends that AI will erode the traditional information‑filtering role of middle managers, expanding directors' strategic accountability and span, and stresses the need for leaders who excel in ownership and AI‑enabled decision‑making.
- Aligning Deployment & AI UX - The speaker emphasizes coordinating secure deployment practices with human‑centric AI interface design, arguing that polished, innovative UI features like Perplexity’s in‑task messaging set products apart.
- AI Security & Cloud Infrastructure Opportunities - The speaker emphasizes that the expanding AI attack surface creates demand for security talent to “jailbreak” and protect models, while the massive growth of AI data centers makes cloud AI infrastructure engineers—specialists in GPU arbitrage and cost‑optimal pipelines—extremely valuable for cutting trillion‑dollar spend.
- AI-Driven Shift for Solutions Engineers - The speaker stresses a required mindset change—especially among QA professionals—as AI now lets sales and forward‑deployed engineers rapidly prototype and personalize B2B SaaS solutions, provided they understand technical feasibility and the lowered cost of coding.
- Emerging AI Workforce Roles - The speaker outlines a suite of upcoming high‑demand AI specialties—including behavioral data extraction, context supply‑chain management, human‑factor tuning, power‑efficient scheduling, regulatory compliance, synthetic data generation, edge inference/robotics, and AI psychology—that will shape the future talent landscape.
- Future of Jobs in AI - The speaker questions how AI will affect their career and asks viewers to share any jobs not yet discussed in the comments.
Full Transcript
# AI Execution: Cheaper Yet Riskier **Source:** [https://www.youtube.com/watch?v=CNw443X6dB0](https://www.youtube.com/watch?v=CNw443X6dB0) **Duration:** 00:31:47 ## Summary - AI is dramatically lowering the cost of execution across functions—from product management to engineering to customer success—by enabling faster, higher‑volume work. - Paradoxically, this cheaper, faster execution spawns new jobs focused on quality assurance and security because AI‑generated code and outputs introduce “dirty” code, hallucinations, and prompt‑injection vulnerabilities. - Real‑world examples illustrate the risk: engineers grapple with low‑quality AI‑written code, and sales teams inadvertently rely on hallucinated AI‑crafted decks, leading to potential misinformation and contractual errors. - Addressing these challenges requires specialized safeguards such as red‑team testing, robust prompt engineering, data chunking controls, and mechanisms to constrain LLM responses within trusted answer distributions. - The core tension now lies between the benefits of accelerated AI‑driven productivity and the need to manage the accompanying quality and security nightmares. ## Sections - [00:00:00](https://www.youtube.com/watch?v=CNw443X6dB0&t=0s) **AI Execution Dynamics and Job Impact** - The speaker outlines two opposing dynamics—AI making execution cheaper, which threatens jobs, and the resulting quality and security challenges that actually generate new employment opportunities. - [00:03:33](https://www.youtube.com/watch?v=CNw443X6dB0&t=213s) **The Human‑AI Boundary Crisis** - The speaker argues that exploding compute costs demand new infrastructure roles, while a looming “human‑AI boundary crisis” will spawn billion‑dollar businesses to define, debug, and manage ambiguous AI behaviours like hallucinations. - [00:06:46](https://www.youtube.com/watch?v=CNw443X6dB0&t=406s) **AI, Chaos, and Trust in Product Management** - The speaker argues that amidst AI‑driven chaos, product and program managers can differentiate themselves by earning trust and retaining accountability, even as AI automates many routine tasks. - [00:09:57](https://www.youtube.com/watch?v=CNw443X6dB0&t=597s) **Software Engineering Evolving With AI** - The speaker urges engineers not to abandon the field despite AI hype, emphasizing the need for robust, scalable design to manage the surge of low-quality AI‑generated code. - [00:13:19](https://www.youtube.com/watch?v=CNw443X6dB0&t=799s) **Middle Managers' Future in AI Era** - The speaker contends that AI will erode the traditional information‑filtering role of middle managers, expanding directors' strategic accountability and span, and stresses the need for leaders who excel in ownership and AI‑enabled decision‑making. - [00:17:12](https://www.youtube.com/watch?v=CNw443X6dB0&t=1032s) **Aligning Deployment & AI UX** - The speaker emphasizes coordinating secure deployment practices with human‑centric AI interface design, arguing that polished, innovative UI features like Perplexity’s in‑task messaging set products apart. - [00:21:19](https://www.youtube.com/watch?v=CNw443X6dB0&t=1279s) **AI Security & Cloud Infrastructure Opportunities** - The speaker emphasizes that the expanding AI attack surface creates demand for security talent to “jailbreak” and protect models, while the massive growth of AI data centers makes cloud AI infrastructure engineers—specialists in GPU arbitrage and cost‑optimal pipelines—extremely valuable for cutting trillion‑dollar spend. - [00:24:45](https://www.youtube.com/watch?v=CNw443X6dB0&t=1485s) **AI-Driven Shift for Solutions Engineers** - The speaker stresses a required mindset change—especially among QA professionals—as AI now lets sales and forward‑deployed engineers rapidly prototype and personalize B2B SaaS solutions, provided they understand technical feasibility and the lowered cost of coding. - [00:28:27](https://www.youtube.com/watch?v=CNw443X6dB0&t=1707s) **Emerging AI Workforce Roles** - The speaker outlines a suite of upcoming high‑demand AI specialties—including behavioral data extraction, context supply‑chain management, human‑factor tuning, power‑efficient scheduling, regulatory compliance, synthetic data generation, edge inference/robotics, and AI psychology—that will shape the future talent landscape. - [00:31:42](https://www.youtube.com/watch?v=CNw443X6dB0&t=1902s) **Future of Jobs in AI** - The speaker questions how AI will affect their career and asks viewers to share any jobs not yet discussed in the comments. ## Full Transcript
I want to talk about AI and jobs. It's
the number one thing I get asked, Nate,
what about my job? And those jobs are
all distinct and unique. So, we are
going to get into that level of detail.
But before I do that, I want to talk
about dynamics. What are the big
dynamics and how are they shifting? The
first one I want to call out is one that
you're going to be familiar with
probably because it's the one that makes
headlines. It's the idea that execution
is getting cheaper. So if you see an
expectation as a PM that you should
prompt engineer so you can do twice as
many things, guess what? That's
execution getting cheaper. If the
expectation from an engineering
perspective is you can use cursor and
ship twice as much code, I don't know if
that's better, but that's execution
getting cheaper. If the expectation is
customer success can now do uh you know
other things because AI can do a lot of
the customer success, that's the same
theme, execution getting cheaper. We're
going to take that one as red because
you've probably read those headlines.
So, what's dynamic number two? That's
the interesting one because it is in
conflict with dynamic number one.
Dynamic number two says execution
getting cheaper creates jobs because of
quality and security nightmares. I know
engineers who will tell you that they
hate touching vibecoded stuff because
the vibe code is so dirty. I know other
engineers who say it's okay that it's
dirty. I write it from scratch anyway.
It's a nice idea. The point is there are
new security and quality challenges with
fibe coding. You look across the board
with AI that is true. There are
challenges with AI security around red
teaming around prompt injection around
all kinds of security concerns when you
frontload AI into public facing
websites. If you have bad chunking on
your data, which I talked about last
time, you're going to have issues with
hallucinations that are very difficult
to trace. If you cannot hedge your LLM
so that it only answers within a
specified distribution range of answers
and if if you ask something wild, if you
try to inject it, it's not going to
respond. If if you can't figure out how
to do that, you are going to be in big
trouble. And so those are just two
simple examples. There are a lot of
other examples where people even
internally are misusing AI and creating
security nightmares. I'll give you one
example on the internal one. This is a
fun one for me. Sales will sometimes use
chat GPT before a sales call and they
will feed Slack and they will feed decks
and so on into chat GPT to try to make
sense of it. What they don't do is they
don't prompt carefully all the time and
they will sometimes get to a
hallucinated deck and that's very bad
because then the company is committed to
something that was made up by AI. You
see that kind of dynamic all the time.
And so you have these twin competing
dynamics and we're stuck in the middle.
You have execution getting faster and
you have speed creating huge quality
security nightmares. But we're not done.
There's two other dynamics I want to
cover. Number three, compute costs are
absolutely exploding. And when compute
costs explode, it means there's a whole
forest of downstream jobs that come with
that exploding compute. And so that one
is much more of a straightforward. If we
have this many GPUs, we have downstream
roles. So, for example, being able to
get into tuning costs is a very
lucrative business because everyone is
spending so much on AI. And so, I want
to call that out because that doesn't
get talked about a lot. A lot of the
time people talk about AI as almost a
scale-free thing. Like we we'll do the
AI. The AI isn't free as you scale it.
You not you're not just spending on
GPUs. you're spending on the ability to
scale up your model, scale up your
inference cost, scale up your prompt and
context engineering if you're serving a
model. All of this comes down to compute
costs are exploding. How do we manage
our infrastructure? There are whole
forests of new roles there that people
don't talk about much. Dynamic number
four, the human AI boundary crisis. You
can have perfect tech and you can still
have very angry users. And I think one
of the challenges right now is that
humans and AI don't have norms to
interact yet. And you have vast
confusion between the two. And so there
are going to be roles that develop just
to figure out how to manage that event
horizon. In fact, there's going to be
entire billion-dollar software
businesses built around managing the
human AI boundary crisis. As an example,
suppose someone tells you that AI is
hallucinating. That sounds clear to
them. But if you peel that back, it
takes someone very very technical to
understand and debug that. They have to
understand what is meant by
hallucination because it's a very vague
human term. Is it an undesired response?
Is it a lack of a response? Is it a
partial response? Is it an overcomplete
response? Is it a response that somebody
noticed and that's the only reason it's
being reported? But there's like 15
other examples that are more
complicated. Just that one statement
from a human reveals the tension in the
boundary crisis. Again, where there are
problems like that, there are jobs. And
that is the thing I want to call out for
all four of these dynamics. These
dynamics all create problems. People get
paid to solve problems. And so AI, if
you want to look for where the jobs in
AI, they are where the problems are
moving toward. So let's go from here
from these four dynamics around the
human AI boundary crisis around
infrastructure exploding around trust
deficits as speed creates quality
nightmares and finally automation and
how fast it's happening. Those are the
four dynamics. What does that mean for
call it 15 of the top roles in tech?
Number one, not because it's special but
because I did it a lot product manager.
What does that mean for product
management? PM is right in the middle of
the automation and trust crisis. PMs
simultaneously
need to be scaling up their ability to
manage agents. They need to be becoming
more technical and at the same time they
have never had more value in figuring
out how to scale trust. And so if you
are a PM and you can figure out how to
take all of the ideas that the
organization is generating through vibe
code and you can figure out how to
filter that and you can be technical
enough to have a perspective when you
talk to your engineering team about AI
models and you can articulate this is a
path forward. This is something that
creates trust and then even more
important than all of that deliver
quality models in production. Now you're
talking about value. By the way, this is
underlining farther the whole age-old
debate about PM and MBAs. MBAs aren't
learning this stuff. If you need to
learn hard skills in AI, the only way to
learn it is by building an AI now
yourself or by learning it on the job.
There's not really another way to do it.
Academics isn't keeping up. But the
thing I want to call out, everyone's
going to tell you to build as a PM. You
can learn to do that. You probably
should. What people aren't telling you
and what I look for is the ability to
earn trust in the middle of the chaos.
PM as a role has always been about
managing chaos and earning trust. How
can you do that better with AI now that
AI itself is a chaos creator? You have
even more opportunity to earn trust
amidst organizational chaos because AI
is multiplying that chaos. And so that
is where I see really compelling PM
action and nobody is talking about it.
Role number two, program and project
manager. These guys, when I talk to
them, they're very nervous. They worry
that AI can plan out an entire program
in a doc better than they can. AI can
write the Slack messages. AI can write
the email updates. All the individual
pieces that they did, AI can do. AI
agents can schedule the calendar
meetings, right? Like where where does
the program manager go? You know what
program managers really do? They're
accountable for delivering against time
and budget and resources. That point of
accountability LLMs are not taking. And
so, yes, do you want to be the program
manager who can probably execute better
because you have AI tooling that can
build Gant charts? Sure, you do want
that. Do you want to be the program
manager fluent enough in how AI works
that you can marshall resources and
manage AI projects effectively because
they're exploding? 100% you want to be
doing that. But the heart of that role
as someone who has worked with program
managers is accountability. That isn't
going anywhere. And great program
managers know that accountability is the
beating heart of the role and will stick
to it. Accountability is not going to go
out of style. You still need people who
are accountable, especially as things
like infrastructure costs for AI are
exploding. There's more money pouring in
than ever to the AI space. We need
people who can hold teams accountable to
their use of resources. Rule number
three, customer success. This one always
gets the black flag, right? Like Sam
Alman will talk about it as this is just
going to go away. Just totally gone.
Wonder why. I really wonder why. Because
the customer success people who are
absolutely brilliant that I know, their
success does not depend on their ability
to answer tickets. Their success depends
on their ability to hold relationships.
That is a human thing. You can't get an
LLM to hold a relationship with you. And
so my bet is that actually customer
success is sticking around and leaning
into customer relationship management.
That is where I think it's going because
the customer relationship person who can
who can advocate for the customer
internally aggressively with those pesky
PMS who can talk with sales about
expansion revenue. That's not something
we're talking about automating with AI.
It's the ticket stuff that we're talking
about automating with AI. Well, great.
Fantastic. Let let them do that piece.
The the beating heart of the role is in
the relationship because that directly
extends the lifetime of the customer.
Role number four, software engineering.
Boy, I see people, frankly, people who
are just out of college advocating that
no one studied computer science anymore
and that people run away from
engineering. Don't run away from
engineering right now. You may have to
change how you do engineering, but this
role, the role of software engineer has
evolved more times than I can count over
the course of my career in tech, let
alone the 70 years or so the role has
been active. Software engineering is by
definition a compute enabled role and
it's going to keep evolving. And so, of
course, it's going to evolve in the age
of AI. That's not a reason to walk away
from it. Do you know how many people are
going to need all of their vibecoded
work cleaned up? Insane amounts of code
are being generated with security holes,
quality issues. It gets back to that
dynamic I called out is that the fact
that we can go faster is creating
massive speed and quality issues. Now AI
in some cases makes you not care if you
want the prototype out there. If you
want to initiate the idea, the the debt
is not a debt. It's an asset. It helps
you go faster. I get that. At the same
time, if you're production deploying,
you have to build well. You know where
the beating heart of a good engineer is?
It's someone who understands how to
design durable technical systems,
especially ones that scale. Yes, you
have some engineers who can lean farther
toward the prototyping side, and that's
fantastic. Some who are going to be able
to code something up in a weekend that
shows how it could work. And there is
going to be tremendous value there,
especially if you can do that with real
life data. Because a lot of these
prototyping ideas your PMs are handing
to you, they don't have real data on
them. And so if you can knock together,
and I know engineers who do this,
they're brilliant. You knock together
something with real data and you knock
it up in a weekend, you say, "Ah, it's a
Tracer bullet. It's fine." And people
use it and they're just wowed. Not a a
skill that's going out of style. If you
can production deploy something to a 100
million boxes, not a skill that's going
out of style. And so the challenge if
you're getting into engineering is you
need to recognize that the way you work
may be evolving with AI, but those
fundamentals are not changing with good
engineering. And you should not let the
fact that AI can write some code confuse
you. You must learn how technical
systems get put together because that is
actually the path toward career
leadership. And a lot of senior
engineers are very worried about junior
engineers coming in and overdepending on
AI because they don't understand the
fundamentals. So if I actually wanted to
call out a risk here, it is not that
there will not be jobs. It is that there
will not be qualified people for jobs
because people are expecting and reading
the hype and believing that AI will just
write the code for them. Not true.
Executive leadership. Should managers be
more worried? I think I want to talk
especially to senior managers and
directors here. I've sat in those
chairs. Those chairs are at risk. And I
know people who feel that in their
bones. And I got to tell you, you're
kind of right. Because look at these
dynamics. None of them play out in favor
of senior managers and directs. If you
can execute faster, it doesn't really
help a senior manager or director whose
core job isn't execution. If you have
speed, trust, quality issues, that
doesn't really help you because you're
the one that has to deliver anyway.
Middle managers are fundamentally
information bottlenecks. Their entire
job for most of the history of
corporations has been to filter
information. Well, guess what? OM are
already really good at filtering
information. And so, people joke around
about the CEO really should be AI. I
think it's more the middle manager is at
risk. Not of being an AI agent. I don't
buy that. I know that uh there's that
throwaway line in Project Vend about
Claudius being a middle manager. That's
not the future I'm talking about. I
think it's more likely that the role is
limited and somewhat endangered. We will
still have directors. We will still have
senior managers. Their spans will be
much bigger. They will be more stressed.
They will depend more on AI tooling to
help them ladder up all that information
flow. and they will chiefly exist as a
strategic point of accountability. And
so if the company is executing a
strategy, you want to hand a big piece
to someone who's accountable for it.
That's the director. You're going to
hand that to the director, not an AI
agent. And so if you want to get ready
for that, it's sort of like the PM and
the program manager. Get good at
accountability. get good at saying I can
take this strategy and I can put legs on
it with the people and resources I have
with almost no direction from my VP or
SVP. I can just go and do it. That is
the heart of being a director. If you
are good at that, if you are good at
building AI cultures for your team,
you're probably going to be okay. But
don't expect that role to grow. There
are not going to be lots more directors
out there because the dynamics aren't in
favor of job growth on this one. Number
six, data scientists. This is a really
interesting one because the the demand
is skyrocketing for this. People worry
that like data scientists might not be
doing well because there's like research
scientists for AI now and maybe data
scientists are out. It's not really
working out that way because people have
so many needs for data science related
to preparing their data for the AI age.
In a sense, this is one of the most
blessed roles in the age of AI because
at the end of the day, there's so much
data in the world that has to be got
ready for AI and there's so much custom
work that needs to be done at the
enterprise level to suit models to data
sets etc. The data scientists are just
never bored and the heart of the role is
a design. It's a creative role. People
think it's not creative, it's creative.
I worked with data scientists. It's a
really thoughtful role. It is not a role
that is easy to automate and it is a
role where quality matters. All of those
things strongly argue for this trend of
like boosted demand for data science
being durable. I'm quite bullish on data
science, DevOps or machine learning
operations. Demand is exploding
especially for machine learning ops.
People don't know how to implement
machine learning pipelines and
operations. If you as a DevOps person
can go from how do we help developers
deploy software effectively to how do we
help AI engineers effectively deploy and
maintain models which is really like
it's automation out of new chaos
patterns but that is what you do right
like what you do in DevOps is
fundamentally taking the herd of cats
that is a bunch of developers and
figuring out how to get it into a clean
production pipeline. Well, quite
similarly, you have a herd of cats that
are now ML engineers or or AI engineers
and you have to herd them into an
effective deploy pipeline and
effectively manage the model as it's in
production. If you are in DevOps or in I
guess maybe you'll call it machine
learning ops now the beautiful thing is
you are solving human problems and
engineers as we've talked about are not
going out of style and so you are still
going to be needed to solve those human
problems and yes there are going to be
AI tools that help you but the heart of
this is helping to align the complex
work of building good software to
production value. So when do you deploy?
Why do you deploy? How do you fix? What
do fixes look like? How do you deploy
securely? what are your different
environments look like etc etc etc
tooling you know this better than me the
heart of it is getting all of that
aligned so you deliver the value to the
customer and so that you solve the
problems for engineers and they can
focus on building software those are
human problems you're solving those
aren't going out of style number eight
UX human AI interaction design you
remember when I talked about the
problems of the human AI boundary that's
real that is that is something that I
see happening all the time the current
AI interfaces are deeply imperfect and
create a lot of confusion. We need to
understand that as execution gets
faster, UI craft is becoming more
valuable because the cheap stuff is
becoming more commoditized. So if you
have really really polished UI, it's
going to stand out more because the sea
of the internet is going to be a bunch
of vibecoded stuff that is not
wellcrafted. And so let me give you an
example that just came out. Perplexity
has done a really good job with UX
interaction. They just launched
something today, yesterday, that I think
is really, really cool. They launched
the ability to pass a message to the AI
as you read the chain of thought in the
middle of a research task. As far as I
know, no one else lets you do that yet.
It's brilliant because how many times
have you as a user sat there and typed
out a prompt and then you're like, "Ah,
what did I for I forgot to say this,
right? Like, I have to add this." And
then you have to sit there and it's a
research prompt. You have to wait. Not
anymore. Now with perplexity, you just
pass the AI a note and it modifies. That
is AI, but it's also UX, human
interaction design. It is solving some
of the human AI boundary issues because
you're recognizing the old truth that
humans are better at correcting mistakes
we have made than checking our work.
Which is why good email systems will
often give you a delay on send and an
undo button because they know you
instinctively check your work after you
send. That's UX design. So if you are
designing for human AI interaction, your
world is getting richer and richer and
richer. If you are designing for humans
using AI systems, it's the same kind of
problem. I'm just calling it a different
name, right? Like because all of the
systems we're using now are getting
rapidly AI enabled. And that's why when
people say, well, I'm not designing for
AI. I I I'm like, but really because
almost everything is getting AI at a
tremendous rate. And so if it's not true
for you now, it probably will be true
because your board is going to ask you
to do it soon. The trend is is
unbelievably pervasive. And so I think
this is not a case where UX has to go
and get AI experience because the AI
experience is largely speaking going to
come to you. People are going to be
asking you to do this. And the challenge
for you is to think deeply about human
AI design and figure out how to build
trust through interactions. I'll give
you another example that we haven't
solved and leave you with that to to
think about in the in the UX space. How
do you take the models we have which are
not good at taking accountability and
build in interaction dynamics that track
accountability over time for models? If
I tell the model that is incorrect, do
not do that again. How can you signpost
that and indicate to the user that you
have instructed the LLM in a specific
way of behavior? And then on the back
end, can you work with AI engineers to
pass that as a prompt to remind the LLM,
thus improving the experience of even
simple chatting because you're actually
reinforcing accountability from the user
there. There are a hundred different
ideas like that that you can come up
with around UX human interaction. It's
tremendous, huge opportunity. Number
nine, security and red team. I don't
think I need to do a ton of this. There
is a new AI jailbreak issue almost every
day. red teaming and security work is
there are just not enough of folks in
the world. If if you are able to start
playing with jailbreaking, start playing
with LLMs as attack services, start
looking at prompts for potential
vulnerabilities,
start looking at systematic
vulnerability databases, start reviewing
vibecoded pieces of work for security
issues, you will never be out of work.
That is an absolutely huge area of work.
And you know what? It is the same set of
instincts that has made security people
do what they do well. I I knew grey hat
people back in the day. It's the same
instinct to go and try and mess with it
and break it. Well, it turns out we have
a whole new intelligence surface and we
have to jailbreak that to make it more
secure. Huge opportunities there. Number
10, cloud AI infrastructure engineers.
This is the infrastructure exploding
piece. You pay for yourself in this role
by cutting spend. If you can find ways
to master GPU arbitrage, to master the
way you pass calls to the GPUs, to
master your cloud infrastructure build,
you're literally optimizing for
collectively speaking the largest
infrastructure build in human history.
AI data centers are on track for like
trillions of dollars in compute capital
expenditure by 2030. Trillions, like
six, seven trillion, something like
that. It'll probably be higher by the
time we get there. They need cloud AI
engineers to avoid spending more money
than they have to. At that level, an
engineer that can do his or her job well
pays for their salary 10 or 100 times
over in the way they handle these larger
and larger fleets of GPUs. It's an
incredibly valuable occupation. And if
you were already in cloud as an
engineer, you're prepared for it. Data
engineering, figuring out how to get
from ETLs into AI pipelines. Listen, you
may think, no, what are we going to do
here? AI is going to come for like
automated pipeline builds etc etc. I
don't think that you are realizing how
much data, this is the same thing with
data science, how much data is going to
be needed and how much data preparation.
I will tell you again, I think most of
the failures I see in AI projects come
from the data side. If you are good at
figuring out feature store governance,
at figuring out vector ETLs, if you're
good at figuring out how new data types
can be made accessible and useful for
business use cases, that's where the
value is. Extraordinary data engineers
have always been distinguished by
understanding the technical side of the
business and also the customer use case.
In this case, understand the AI customer
use case, how AI is changing what
customers are expecting, the kinds of
queries that are coming through. And
then understand how the technical side
enables that. Understand how vectorizing
data is different than storing data
traditionally. It's the same job. It's
just a new technology stack. QA and AI
quality. This is an interesting one
because this is an area where we are
fundamentally seeing a transformation
and I don't know of anyone talking about
it enough. Right now we are putting most
of our energy into QAing software before
it launches. With AI we need to shift
and put much more of our energy into
QAing as a durable quality threshold
that is always on in production. Why?
Because these systems produce
probabilistic responses. You cannot
deterministically test all this
software. In a sense, the value in QA
now is sustaining the value of the
software and guarding it over time. That
again is the heart of QA is sustaining
the quality of the software, but you get
even more to do because there's more of
it because it just sustains it over
time. You can't just launch and forget
the way you did with deterministic
software. Now, this is a major mindset
shift. Most QA people that I talk to are
not ready for this world. they are used
to P 0, P1, P2, do the test and launch.
That mindset won't work. And I do worry
a little bit, not because the jobs won't
be there, but because the QA people I
know aren't really thinking that way.
And so this is an area where there's a
mindset shift that I think is important.
Number 13, sales and solutions
engineers. These can be very popular.
Forward deployed engineers is another
word for it. U some people say that's
different, but it's it's very simple uh
and very similar. This is a case where
AI is a powerful enabler for this job.
You can code something up very very
quickly that demonstrates a personalized
solution for the customer effectively.
The challenge is you also have the
quality piece. It's on you as the
forward deployed or solutions engineer,
the sales engineer to know what is
actually doable from your product
technically and to vibe code or quickly
code only those things that you can
actually reliably deliver. And you are
also on the front lines of one of the
most interesting trends in B2B SAS
because because speed and execution are
getting better at the code level. It is
possible to extend SAS frameworks in
ways that weren't possible before. When
I was coming up in product, we were
always taught to say no. PM says no,
right? We were always taught to say no
because you couldn't extend the software
because it was so expensive to code.
It's not expensive to code anymore. It's
cheap. If it's not expensive to code
anymore, then you should be able to
extend and personalize the software
more, which means more sales and
solutions engineers as long as you are
careful about quality. And so that's the
thing where I know a lot of solutions
engineers that want to advocate for the
customer and lean in on customization.
Make sure that you know the software
stack you're working with and you don't
overcommit. Number 14, edge engineers.
People who can put intelligence into
smaller devices. This one is brand new.
This is not a role that exists right
now. There are absolutely indie hackers
out there who love to build LLMs onto
small devices. Let me run it on my
laptop, right? Let me quantize it and
run it down on my phone. Whatever it is,
let me compress this vision model. If
you are that person, this role is going
to exist for you. And I know I know a
lot of people who just it's like the
Unix tinkerers in the 1990s and 2000s
and the Linux tinkerers. They just can't
stop tinkering and playing with it.
That's a fantastic preparation for this
kind of role. We are going to want
intelligence in everything. And if you
think you don't, somebody is going to
hire you to do it. Like someone is going
to hire you for the smart refrigerator
and the smart toaster and the smart home
robot that folds your laundry and the
smart washing machine and this and that.
All of them are going to take little
large language models that can fit and
that need to be secure and that need to
fit uh you know on prem on the device.
Anyone who can figure out how to deploy
intelligence at the edge is going to if
you have your ability to talk about use
cases and you're not just interested in
your own work, you're going to have
work. You're going to have roles. Number
15 and number 15 is the last one. Vector
database and retrieval engineers. This
is exploding. This this is exploding. No
one can get their hands on these people.
If you work with rag, you are in one of
the most valuable places in tech. which
is why I've called out in the past
understanding how rag works is one of
those cheat codes right now in the job
market. If you are an engineer who works
with rag even more, you are even more
valuable than you were before. It's
incredible. Okay, so we've gone through
these 15 roles. I want to just call out
that there are some really interesting
things that are coming up as dynamics
that don't yet have role titles that you
should have your eyes on if you're
looking five or 10 years down the road
in your career. Agent fleet
orchestration is one. How do you manage
fleets of agents? People have talked
about that one. People talk less about
number two, the simulation economy. How
do you simulate more things? I talked
about this in my digital twins video.
Getting behavioral data out of
simulations is going to be a big deal.
Number three, understanding context and
context supply chain. We don't really
have names for that role, but that's
going to become big. Number four,
figuring out how to tune the human
factor in AI modeling. It's almost like
you're designing AI models with cultural
bias. you're designing AI models and
understanding human preference
distributions. It's going to be a mix of
like anthropology and psychology and
deep technical understanding. Number
five, figuring out how to work extremely
efficiently with power. If you are
scheduling jobs, how do you maximize
efficiency on GPUs? We haven't had to
get to this level before, but we haven't
had a capex spend this big in compute
before. It's going to become a big deal.
I also want to call out AI risk and
compliance is just starting to come up.
It's going to be absolutely massive. the
EU AI act, SEC disclosure rules, GDPR
implications for training data. It's
everywhere and it's going to get bigger.
Synthetic data is one we don't talk
about. People who are good at producing
very high quality synthetic data are
going to be in demand. Edge inference
optimizers, people who not only can put
stuff on devices, but figure out how to
make it reason and figure out how to
make it into robotics. It's like a
crossover between robotics and
inference. It's going to be a big deal.
calling out that the idea of an AI
psychologist sounds like science
fiction, but it may help with security
and red teaming. You're going to see
psychologists on red teams to help debug
LLMs. And then the last one I want to
call out, business process designers.
Like figuring out how business process
designers are able to take AI and design
an endto-end human and AI process loop.
That's going to be a huge deal. We don't
know how to work well with AI.
Businesses are complex. If you can
zeroot a business process as a designer,
it's going to be extremely valuable as a
skill. Okay, how do you navigate all of
this? I want to give you just a couple
of things at the end here that will help
you put this together. First, if you're
stuck, if you're overwhelmed, look at
the survival level. Look at how you can
identify tasks in your current role that
you can automate so you are more
effective. How can you set up AI powered
email filters? All you know, all the
stuff that people talk about, right? Use
chat GPT for first drafts. Get to the
survival level first. Then get to the
adapt level. Then figure out how you
move into the kind of role I talked
about here. How do you get a
complimentary technical skill set going?
How do you build a portfolio project?
How do you demonstrate that you are
competent in where the role is going?
And finally, you get to the lead level.
Finally, you get to understanding where
the new risk areas are, where there's
frameworks that others can adopt in your
job field, where you have new tools you
can build to solve problems or you can
get others to build, where you can
establish industry standards. because
for some of this stuff it's so new there
isn't industry standards. Okay, wrapping
this all up, we have talked, frankly
fairly exhaustively about the key
dynamics driving AI jobs. I've called
out 15 key jobs I want you to think
through. I've talked about how you can
survive, how you can adapt, how you can
lead in those jobs. And I've even talked
about future job dynamics that will
become jobs in the next 5 years or so. I
hope this has been helpful. I don't want
to overwhelm you, but I do believe this
degree of specificity is necessary to
honestly answer the question, where is
my job going? So, that's it. This is it.
Where is my job going in the age of AI?
Let me know. If you found a job that I
haven't covered yet, put in the
comments.