Assessing Your Job in the AI Revolution
Key Points
- The speaker has distilled hundreds of AI‑related inquiries into 12 core questions and will also share “bonus” topics nobody asks about.
- To gauge whether your job is at risk, break the role into individual tasks, estimate how much AI could automate, and then consider the “glue work” that ties those tasks together—if removing 30% of tasks leaves you with a hollowed‑out role, you should be concerned.
- Customer success illustrates the paradox: AI can handle large chunks of routine communication, yet many firms are pulling back because nuanced, context‑dependent handoffs still require human empathy and judgment.
- Real‑world experience shows AI chatbots often fall short of the personal, humor‑laden help provided by seasoned human reps (e.g., the speaker’s Amazon contact “Thor”), underscoring the lasting value of human touch in support roles.
- The talk promises a final segment with additional insights on AI topics that people rarely ask, offering extra strategic perspective.
Sections
- [00:00:00] Assessing Job Risk with AI - The speaker offers a heuristic for gauging whether your position will be eliminated or merely streamlined by AI, by dissecting the role into tasks, estimating the share AI can handle, and evaluating the remaining “glue work.”
- [00:04:46] Uncertain Timeline for White-Collar Cuts - The speaker highlights that experts disagree on when AI‑driven white‑collar reductions will materialize, foreseeing notable role disruption within the next two to three years but emphasizing that widespread layoffs and severe unemployment remain speculative.
- [00:08:05] AI’s Limits and Emerging Roles - The speaker argues that despite AI’s growing capabilities, it struggles with ambiguity and high‑liability tasks—illustrated by surgery—yet this shift is spawning new, poorly defined opportunities like AI architects for professionals at all career stages.
- [00:11:38] Showcasing Public Work for Career Edge - The speaker advises professionals—technical and non‑technical alike—to build visible, tangible projects (e.g., GitHub repos, storytelling content) and pursue part‑time apprenticeships with indie founders to prove real competence beyond AI‑generated output.
- [00:14:59] Five Core LLM Skills - The speaker outlines five essential abilities for working with large language models: prompt engineering, retrieval‑augmented generation, vector‑database mastery, lightweight agent orchestration, and data storytelling to polish model output.
- [00:18:31] Leverage Domain Expertise with AI - The speaker urges professionals to pair their deep industry knowledge and tacit expertise with large language models, turning themselves into indispensable AI translators who can productize insights and command premium roles.
- [00:22:00] Human Skills in Regulated AI Niches - The speaker argues that AI will augment rather than replace work in high‑risk, slow‑procurement sectors like energy, healthcare, defense, and specialized professional services, emphasizing that trust‑building, problem framing, and ambiguity‑handling will remain essential human skills.
- [00:26:03] Remote AI Networking Strategies - The speaker advises remote AI enthusiasts to build digital communities by sharing public artifacts, engaging on platforms like Discord or X, and, if feasible, relocating to a tech hub to access higher‑quality training and networks.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=7RZlxqMcObE](https://www.youtube.com/watch?v=7RZlxqMcObE) | **Duration:** 00:30:02
I get hundreds and hundreds of questions
a month. I get them through my contact
form. I get them on my comments. I have
distilled them down into 12 high-level
questions that punch at the hardest
things about this AI revolution. I want
to give you my answers to those right
here. And then at the end, as a bonus,
cuz we do bonuses around here, I want to
give you the things that people don't
ask that I would be thinking about. So,
let's get to that at the end. First 12
questions. Number one, Nate, I see
headlines about AI layoffs all the time.
How can I tell if my role is next or if
it's just getting rewired and like I'll
be okay? I want to suggest that the
heuristic that you can use, the rule of
thumb that you can use is to look at
your role as a series of tasks and then
look at what percentage of those tasks
AI can take and then you give it a
discount. And the reason why you give it
a discount is because I have said over
and over that roles are not just bundles
of tasks. Roles have glue work. And so
what you need to be asking yourself is
if you took away 30% of the tasks in the
role, could you leverage yourself to be
more effective at accomplishing the team
mission at the company because you had
less busy work to do? Or would it feel
like it was just eating away at and
hollowing out the role and there wasn't
really a lot left to do. If it's the
latter, if it feels like it's just
hollowing out, that is when you should
get concerned. So, I'm going to give you
a couple of specific examples. I think
customer success has been one of the
hardest hit roles, but it also shows
where you still need to have hope. So
customer success is an example of
something where big Silicon Valley names
including Sam Altman himself have said
there just won't be CS jobs anymore. And
yet at the same time we see major
companies who tried to roll CS jobs to
AI, like Klarna, roll back because they
realize they need good handoffs and they
need humans in the loop who can actually
help customers because customer help
turns out to be a very context dependent
thing. I have navigated AI menu after AI
menu, AI chat after AI chat because
everyone's rolling them out. You
probably have too. The experience has
not been as good as working with my
buddy Thor at Amazon. And that is a real
name of a customer service rep at Amazon
who I worked with a decade plus ago. And
I got my questions answered. Thor had a
great sense of humor and we all had a
great time. I've never had that kind of
an experience with a customer success
robot. And so I think CS is going to
change. I think it's an example of a
case where you can argue that large
pieces of those tasks are going to get
picked up by AI. It's just too easy for
AI to write text based on databases and
it will get more personable. Probably
not as personable as Thor. But I would
argue that if you look at it, you should
be able to architect systems. You should
be able to see places where you can lean
in as a CS rep and deliver extraordinary
value. I know CS reps that drive
expansion revenue for businesses because
they are so good at what they do. An AI
agent is not going to be as good at
driving expansion revenue for
businesses. It just won't. And so part
of the answer is looking at your task
load versus your mission. Where is your
mission aligned to versus where are your
tasks aligned to? The other part is
something you can't control. And so part
of how you tell if your role is next is
frankly if your leadership understands
AI. Does your leadership talk about AI
in a nuanced way the way I'm talking
about it? Or is your leadership out
there saying, you know, AI is a cost
cutter? I'm happy to just dump these
roles because they might be wrong about
that. They probably are. People who tend
to dump quickly tend to regret later. We
have big stories about that. I just
mentioned one, Klarna. But if that's the
way they're thinking, it pays to watch
leadership and find another role or go
hunting for a different career path
because of leadership's attitude, not
because of AI. And I do want to
distinguish those two. So if you want to
tell the answer from a task perspective,
look at it as what are the tasks that
are being automated? Discount for the
bundling, discount for the glue work you
can do, discount for your mission
alignment. If you want to look at it
from a company perspective, look at your
own leadership. Look at whether they are
willing to actually acknowledge the
nuance of AI or if they're just looking
at this as a cost cutting machete.
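The task-audit heuristic above can be sketched as a toy calculation. The task list, hours, automatable shares, and discount weights below are all made-up illustrations, not the speaker's numbers:

```python
# Toy sketch of the role-as-a-bundle-of-tasks heuristic: estimate how
# much of a role AI could take, then discount for glue work and mission
# alignment. Every number here is an illustrative assumption.

def role_risk(tasks, glue_discount=0.2, mission_discount=0.1):
    """tasks: list of (hours_per_week, ai_automatable_share) pairs."""
    total_hours = sum(h for h, _ in tasks)
    automatable = sum(h * share for h, share in tasks)
    raw = automatable / total_hours               # naive "AI can take X%"
    discounted = raw * (1 - glue_discount) * (1 - mission_discount)
    return raw, discounted

# Hypothetical CS-rep week: (hours, share AI could plausibly take)
tasks = [(10, 0.8),   # routine ticket replies
         (8, 0.3),    # nuanced escalations and handoffs
         (6, 0.1),    # relationship building / expansion revenue
         (6, 0.5)]    # reporting and documentation
raw, discounted = role_risk(tasks)
print(f"raw automatable share: {raw:.0%}, after discounts: {discounted:.0%}")
```

The point of the discount terms is the speaker's argument: even when a raw share looks scary, the glue work and mission-aligned work that remain decide whether the role is hollowed out or leveraged.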
Number two, Nate, I
need dates. When do experts say that
white collar cutbacks are going to start
to bite? Experts disagree on this one.
They really do. There is no one answer.
I wish I could give you an answer. There
are lots of people who claim to know.
People claim to know that it will be
2027. People claim to know it will be
2030. People claim to know it will be
2028. People claim it will happen and
then there's a camp of people who aren't
sure if that's the case yet. It depends
on your attitude. If I were you, I would
assume that there will be significant
restructuring of roles and a significant
disruption to every role in white collar
in the next two to three years. That is
different from assuming that white
collar cutbacks will mean mass layoffs
across all of those job roles. I don't
think that is baked into the empirical
evidence. Will we do our jobs differently?
Absolutely. And in fact, we're just
getting started with that. But "most of us
won't have jobs anymore"? It's not clear
yet that that is happening. It's not
clear in the data. It's not clear given
the capabilities of the AI systems and
the direction they're growing. Will
there be some layoffs? Yes. Will we see
more chaos in 2026 and 2027 as job
disruption starts to hit? Yes. But I
think that when we talk about this, we
often confuse breadline level chaos
where it's like 30, 40, 50%
unemployment, it's a doomsday scenario,
we all have to go on universal basic
income, etc. with technological change
level chaos that's compressed where you
have a technological change equivalent
to the steam engine being invented or
equivalent to the internet being
invented and you have to negotiate that
change very quickly because unlike past
revolutions, this is all happening now.
But we don't really articulate those as
two different futures. My bet is sort of
that this is like other technological
changes, but it's very very compressed.
So the shocks are going to feel more
dramatic for the next few years. Other
people are betting on a more doomsday
scenario. For folks who are betting on a
more doomsday scenario, they tend to say
words like 2027 and 2028 a lot. The good
news about that is that we will find out
real fast if they're right or they're
wrong. It will not take that long. It is
well within even the entry-level part
of an initial career path. Which
suggests that if you want to plan for
your future, you should plan for them to
be wrong because it does not hurt to
plan to build your skills in case
they're wrong. And if they're right,
it doesn't matter. So, you might as well
build your skills anyway. And that one
often surprises people. Number three,
Nate, I want work that AI cannot
cannibalize. How do I spot really
durable roles before everybody else
piles in? It feels like there's so much
hype people are just running back and
forth. I want to suggest to you that
there are certain things that do not go
out of style. Understanding how to
broker trust does not go out of style.
Understanding how to build trust in
business contexts will never go out of
style. It will never be taken by AI.
Understanding how to work in high
context situations where you have to be
aware of wide rapidly changing contexts
doesn't go out of style. It doesn't
disappear. AI doesn't take that because
AI is not good at tokenizing that. AI is
not good at tokenizing trust. You can't
really tokenize trust. Trust is a human
transaction. Understanding how to handle
high ambiguity situations where things
are gray and shifting all the time.
Those are not things that AI is super
good at either. In fact, one of my
biggest frustrations with AI models is
as their capabilities have increased,
they have not gotten better at handling
ambiguity. Arguably, they've gotten
worse because they're better at being
specific. And so look for places with
really messy real world constraints.
Look for places where deep relationships
are required. Look for places where you
need to deliver outcomes, especially if
you need to deliver outcomes against
liability. A good example of this,
people have been saying for a while that
robots are going to take over surgeons'
roles. Surgeons have liability. Surgeons
can be sued. Surgeons must get it right.
Surgeons have skin in the game. Robots
don't. And so I think surgeon,
ironically, is a role that may transform
and shift as we get robotics involved in
the surgery room, and it already is,
but it doesn't mean
that surgeons themselves are going to
disappear. And you'll see similar roles
across tech. Does that mean that these
are only available for seniors and
people who are deep in their careers? I
don't think so. I think one of the
really interesting things about AI is it
is upending so many of our assumptions
about jobs that there are all kinds of
tail opportunities opening up that
people haven't fully defined yet. AI
architect, it's a brand new role. We
haven't fully defined it. Yes, it
probably takes some degree of experience
with AI and understanding systems, but
it's an example of a role that's very,
very new. Another role that's new, AI
engineer. What does it mean to be a good
AI engineer? There's lots of other roles
beyond that. There's roles that we don't
really have good words for. We don't
really know where product management is
going or how it's disrupting, but it's
an example of a role where you need less
technical knowledge than an engineer,
probably more than you used to have, and
you need a totally different mindset in
a world where you might not be driven by
a road map anymore. And so I think that
the opportunity here is to look for
those durable, relational,
low-transactional structures.
So high context, high ambiguity, high
trust intersections, places where it's
not super transactional, places where
you have to be relationship oriented,
places where you have to be deep in
context to understand things. And if you
think, by the way, that AI engineer and
AI architect don't have to be deep on
trust and ambiguity and context. I've
got news for you. Places where you have
to deliver outcomes against liability.
Chase problems with unstructured data.
Chase problems that aren't easily
tokenized yet. AI can't eat it if it
can't ingest it. So, you want to look
for those spaces. And the thing is, I
can't name all of them for you because
they're still coming into being. That's
one of the really interesting things
about the next two or three years. And
so, I'm trying to give you the
principles to spot them for yourself.
Okay. Number four. Nate, I'm a new grad
and entry-level roles seem to be
evaporating. Where do I earn real
experience now? How do I get onto this
ladder? This doesn't seem fair. Well,
2008 was also really rough, let me tell
you. So, first off, I think that part of
the challenge is that you are getting
hit with the job application broken
pipeline harder than anybody else
because other people can lean on
previous work experience, but it's
harder if you haven't had that. I think
there's a couple of things that help,
but the one thing that I've seen that is
most reliable is just going to require
relentless execution on your part. It
ties into number three. So, the thing
that I think helps the most is treating
projects like the new resume. You've got
to be able to ship things. You've got to
be able to show what you're building.
You've got to be able to show you can
connect with community needs and build
something in response. If you're in tech
and you're building anything that leaves
a public artifact, if you're in
marketing, if you're trying to tell a
story, you have to be able to start
telling stories now. You want to leave a
public footprint of what you're working
on that is hard to replicate. If you
have a bunch of storytelling TikToks,
if you have a strong GitHub that you
have actually delivered working code
against, it actually works. It's not
just, you know, a bunch of broken
projects. It's at least something that
people can look at and investigate. And
then the question becomes not did AI do
all of this for you, but do you
understand the principles of building
for the role you're asking for? Because
sometimes like people assume like you
have to have a GitHub if you're an
engineer and you shouldn't have a GitHub
if you're not. Those rules are shifting
like yes engineers probably should still
have a GitHub but people who are not
engineers need to be able to also talk
about technical topics now and so if you
have an opportunity to build something
and you're not a technical person don't
be afraid of that. I also suggest that
you look for something that is like a
fractional apprenticeship. Small
part-time gigs for founders that need
problems solved for them. There's so
many indie founders out there. Every
single one of them does not have the
time to automate as much as they want
to. They do not have the time to build
as much as they want to. Go help them.
They can refer you. Go help them. You
will get something you can build and
show. And how do you get that? You're
like, "Well, who's going to pay
attention to me?" You should have
projects you can show and say, "This is
why you should come to me. Look, I can
show you my work." And so the ladder
that is there is changing because the
roles themselves are changing. And that
is part of why hiring is so broken right
now: people, even hiring
people, are trying to figure out and
project what they will need in 24 months
and hire for that. I will also say part
of that chaos means that there are roles
opening up targeted toward new grads
that weren't there before. And so, for
example, there are roles that are
targeted at entry-level folks coming in
where you need to articulate your AI
fluency from the get-go so that you can
help bring AI fluency to the team
you're with. That's new. You know, those
roles didn't exist before. And so part
of it is figuring out, you know, if you
were tracking towards some of the steady
tech jobs from the 2010 era, maybe those
are changing really fast, but there's
other ones that are opening up. And so I
would say look at your public artifacts,
look at fractional apprenticeships
wherever you can get them and pitch for
them. You don't wait for them to open
up, go get them. Go cold DM. And
then make sure that you're aware of the
fact that there are roles opening up
that may not have conventionally been in
the middle of your degree path, but they
are now.
Okay. Number five, Nate, I can't waste
cycles. Which AI skills do I need to
learn this year? I got to tell you,
there's a few that do come up over and
over again. I do think there's a clear
answer. And I just want to go through it.
If people have these five big buckets
covered, they are already ahead of most
folks. And I have written up a ton of
these already on the Substack. So,
number one, prompt architecture:
understanding how prompts work. I
think it's one of the universal
skills. Number two, understanding how
retrieval-augmented generation, or RAG,
works and where it doesn't work, which is
critical. Number three, basic vector
database hygiene: understanding
embeddings, understanding refresh
pipelines, how you build a vector
database. Even if you're not building one
yourself, understanding how they work so
that your eyes don't glaze over really
helps. Number four is lightweight
agent orchestration. So understand how
tools like n8n or LangGraph enable you to
wire tasks together, and then it can be a
public artifact. Wire things together,
automate. And then last one, number
five, data storytelling. Understand how
to turn a raw model output into
something that is polished. That is a
meta skill. That is not necessarily just
a technical skill. People who copy and
paste are doomed. I don't say doomed
very often, but you're doomed. People
who are able to polish model output, to
think critically, to engage with model
output. That goes back to one of those
larger skills I called out. Look for
places AI can't cannibalize. Well, I got
to tell you, polishing model output and
knowing how to make it sharp is exactly
the kind of high ambiguity, high context
work I'm describing. So, get good at
data storytelling with LLMs. That's skill
number five. So, to go through the five
again: prompt engineering, or
context engineering if that's the
popular term now; RAG; understanding
how vector databases work, which is
related to RAG but slightly different
because it's a little bit of a level
down from a structural perspective.
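That vector-database layer can be sketched minimally. Here a bag-of-words count vector stands in for a real embedding model, and a plain list stands in for a real vector store; both are illustrative assumptions, not how production RAG systems are built:

```python
# Toy sketch of the vector-database idea behind RAG: embed documents,
# embed the query, retrieve the nearest document by cosine similarity.
# The "embedding" is a bag-of-words Counter purely for illustration.
import math
from collections import Counter

def embed(text):
    # Stand-in for a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy for damaged items",
    "how to reset your account password",
    "shipping times for international orders",
]
index = [(d, embed(d)) for d in docs]   # the "vector database"

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("I forgot my password"))  # → ['how to reset your account password']
```

The structural point survives the toy example: retrieval quality depends on what got embedded and how fresh the index is, which is exactly the "hygiene" and refresh-pipeline concern.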
Understanding agent orchestration,
number four, and then data storytelling
with LLMs, number five. All right, next
question. Nate, the stack flips every
six months. How do I stay ahead when the
tools will not sit still? Look, the best
way that you can do this is to schedule
Google-style 20% time. I'm not saying actually
spend 20% of your time on this. I know
we don't all have that luxury, but the
stack itself is built on fundamentals
that don't change as quickly as all that.
The transformer architecture underlying
this entire AI revolution hasn't
changed. And so understand the things
that don't change. I call them out
really really frequently in my content.
And then be disciplined about forming
hypotheses about what you want to bet on
and explore in a particular month, and
create that in line with your larger
intent and goals, your mission. We've
talked about this idea of being mission
aligned when we talked about career path
and like are you able to contribute to
the team mission, the company mission,
etc. What about your personal mission?
Are you able to articulate these are the
things that I really want to get done?
These are the high ambiguity or high
trust problems I dream of getting into.
Then walk back from that, and by the way,
AI is a tool for that and figure out
which technical skills or which AI tools
are in line with that larger mission and
then focus there and do it in a time
boxed way. Say I'm going to take 4 hours
a week for a month and I'm really going
to do it. I'm going to set a timer. I'm
going to sit down. I'm going to do it. I'm
not going to scroll TikTok. I'm not
going to watch Netflix. I'm actually
going to do it. I'm not going to tweet
about shipping. I'm actually going to do
it. and then come back and see if your
skills have grown in the direction you
want. See if you've made progress in a
month. It's like any other habit. You
have to build it. And so my advice is
basically the tools will not feel like
they're moving so much if you have a
compass. So develop that compass. Number
seven, Nate, I am mid-career. How do I
translate what I already know into an AI
adjacent role without starting from
nothing? Look at your domain advantages.
Where do you already have strong domain
expertise, regulatory fluency, customer
access, legacy data, storytelling,
polishing capabilities? Now, pair that
with an LLM and become the bridge that
other people can't easily replace
because you have that deep domain
expertise. I have people telling me that
they desperately want their existing
senior employees to lean in more on AI
and they worry because they don't. Don't
be that person. You have the domain
expertise. You have the advantage:
productize tacit knowledge. You can
think about if and this is if you want
to go into a consulting, if you want to
go into an indie role or whatever you
have dreamed up for you for the next
half of your career, you can productize
that tacit knowledge into something that
helps people who are climbing the career
ladder earlier than you to get up faster
and learn those domain secrets quicker
than you had to learn. Eventually, you
should be in a position whether you're
internal or whether you're doing some
sort of independent role where you
can act as an AI translator in your
vertical. You should be able to command
a premium because the untranslatable, the
hard to understand, the difficult
expertise that comes from years of
knowledge is something you carry with
you and you have now successfully
coupled it with AI. And so I would
actually say look at it as my domain
gives me an incredible starting point to
get to an AI adjacent role without
starting from zero. All I need to do is
to dive in on AI literacy. The things
that I just called out, the basic
pieces that I described a couple of
questions ago. Understand agents,
understand RAG, understand data
storytelling with AI. Those are things
that if you can start to get them down,
if you can start to get prompt
engineering down, you are going to be
formidable. You're going to be a very
strong candidate. Number eight, I'm
using ChatGPT at work. What's career
safe usage before legal gets involved?
The answer is you must mask red data.
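In practice, masking confidential strings before they reach an AI tool might look like this toy sketch. The regex patterns and placeholder tokens are my own illustration, not the speaker's recommendation, and real deployments rely on DLP tooling and company policy rather than a handful of regexes:

```python
# Toy redactor: mask obvious confidential patterns before text ever
# reaches an AI tool. Patterns and placeholders are illustrative only;
# treat this as a sketch, not a compliance control.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def mask(text):
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(mask("Contact jane.doe@example.com, SSN 123-45-6789."))
# → Contact [EMAIL], SSN [SSN].
```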
Red data is anything your company
considers personal or confidential. Just
don't put it into any AI. Just don't do
it. You don't want the risk. You can
mask it, and masking means
obscuring all of the confidential
information. I know people do
this anyway; there's a massive shadow IT
problem. But the risk to you individually
is disproportionate. The company can
come after you for using AI
inappropriately at work and I am
expecting a court case in that vein to
come out in the next 6 months. It is
going to happen. People will leak
something that they should not have
leaked. There has already been an
instance where Claude ended up
apparently disclosing material
non-public information to an investor
that did not come from any discernible
source, and it's inferred that it
almost certainly came from a board
meeting that that company had. I ran
across that story last week. Not going to
reveal the name of the company. It is
not common. That is the first of those
stories that I have heard. But it does
happen and that is the thing that the
company worries about. So, just
don't do it. Number nine, Nate, I need a
5-year road map. What industries look
stable? Well, I got to tell you, I think
road maps are changing. I think that we
should think about long-term bets on
these durable task areas that are human
friendly, like high ambiguity areas,
high trust areas. And so, I don't know
that industries are necessarily the
right lens, but I will take your
question seriously and I will answer it.
I think regulated high-risk verticals
with slow procurement cycles are going
to be fine. Energy, healthcare, defense,
AI is going to augment there way before
it does any kind of replacement.
Look at places where atoms come ahead of
bits and how you can get involved. Now
you're jumping into the robotics
revolution there, but advanced
manufacturing, grid infrastructure,
supply chains, and then look at longtail
professional services, specialized
legal, complex insurance, bespoke
financial, things where models in
general are going to have a hard time
being as useful as your specific
expertise. There are going to be other
places. Like I said, I think there are
niches in every single industry. I don't
see industries being taken over by AI in
the same way. It's not like we'll have
nobody working in B2B SaaS and it
will all be AI. I mean, some of us would
say that was the dream, but like the
reality is there will be places in all
of these industries for people who can
earn trust and solve hard problems. But
that's my take. If you want to look at
industries, energy, healthcare, defense,
I think supply chain, grid,
infrastructure, those are all relevant.
Number 10, Nate, I bank on human skills.
Which ones will matter when the machines
do the grunt work? So, I said this a
little bit earlier. I talked about
problem framing or I talked about
building trust. I talked about making
sure that you understand how to handle
high ambiguity situations. But if you
want to like boil that into skills, I do
actually think problem framing is a
piece of it. That's why it came to mind.
So problem framing is the act of turning
something ambiguous into something
solvable. It's actually one of the core
skills PMs bring to the table if they're
good. Taste gets talked about a lot,
but for good reason. It's the instinct
to choose what is good. When we talk
about LLM driven storytelling and you
have to polish, it's taste that helps
you polish. Narrative persuasion,
figuring out how to craft a story that
aligns stakeholders. That's not always
intuitive. That's not always obvious.
Especially if you are in leadership, if
you are in sales, like narratives matter
a lot. In marketing, narratives matter a
lot. In product, frankly. Judgment under
uncertainty: that's the skill
that goes with high-ambiguity
navigation. Deciding when 78% confidence
is good enough to ship. AI doesn't have
skin in the game. AI is not going to
make that call. And so look for those
kinds of skill sets. The skill sets that
matter because they are attacking the
non-tokenized parts of the distribution.
So problem framing, taste, narrative
persuasion, and judgment are all good
examples, but it's not an exclusive
list. Number 11, Nate, I can't afford a
pricey boot camp. Where are the
affordable options to start learning?
Well, YouTube. I actually did a whole
sort of Substack piece on YouTube, but I
also will say: look up AI leaders
like Andrej Karpathy on YouTube and watch
what they say. And I say Andrej because
he is a gifted teacher and he's also
extremely technically fluent. He is a
technical founder in the AI space. And
if you want to learn, that's an example
of a place you can go to learn. But you
don't have to just do that. If you say
that's too technical for me, you can
pick the keyword or topic you want to
get better at, aligned to your north-star
mission, and go dig up 30-, 40-, or 50-minute
videos on YouTube about it most of the
time. Now, I will say honestly, part of
what makes YouTube annoying is that
there are also a bunch of clickbait
videos. There are videos that are like,
you know, they're going to show you a
special thumbnail and you're going to
get six minutes of hype and like 30
seconds of insight. That's not really
going to be worth it for you. So you're
going to have to find, in your particular
area of interest, the YouTube
videos that are useful. But that becomes
a window into the rest of the learning
portfolio, because they will reference
other sources: they'll reference books,
they'll reference courses, maybe free
courses. There are so many university AI
courses that you can audit, and so I
actually do think there are a lot of
affordable options for reskilling.
And the last thing I will say is
that AI is an experiential technology. You
can reskill experientially, and you
should, and you should use AI to help
you. I've written prompts for that. Use
AI to help you learn AI. Number 12.
Nate, I live far away from San
Francisco. How on earth do I get high
quality AI training or get plugged into
networks? It seems like it's impossible.
Well, if you can and you want to move to
a tech hub, there's often a lot of
upside there. So, I will say like we'll
just put that on the table. If that's
something that's an option for you,
think about it. If that's something
that's not an option for you, maybe it's
because you don't want to. You like the
peace and quiet. I get it. I don't live
in San Francisco either. Then you want
to be in a place where you are building
strong online communities around
collaborative problem solving. Part of
why you put public artifacts out there,
which I said in one of my earlier
answers, is because it enables you to
form online communities around areas
you're interested in. And if you can do
that, if you can collaborate with other
people building in the space, talk to
them, engage with them, whatever social
platform they're on, maybe they're on
Discord, maybe they're on X, who knows?
Find the people working on the problems
you're interested in,
and let them guide you to other people,
and hop, hop, hop. Now, there's a whole
art to cold DMing if you want to raise
capital, if you want to go places.
That's not what this is about. This is
about building networks digitally when
you can't be somewhere physically. And I
would say start from that common area of
interest. Start from where you're
actually building. Put out public
artifacts. Start talking about it. Start
finding people building. Start engaging
with them. And you'll start to build
that web really organically and it won't
feel fake. Last but not least, what are
two things that were not on this list
that I wish people would talk about?
Number one, I wish people would talk
more about the execution gap. There has
never been more capability to build,
learn, and leverage yourself with AI. The
hype is deafening, but I see real
struggles with actually executing. And I
think part of the challenge is the start
stop problem. It is really easy to start
on something with AI, but that
ease is deceptive. It is actually
very hard to go through the S-curve of
learning with AI because it's undefined.
If you're typing in the chat
window and you don't know what to ask,
you feel stuck. That mental block is
really big and you don't know how to
keep moving forward. The answer happens
to be try anything in the direction
you're wanting to go and iterate from
there. But you have to get over the fear
that it's going to be the wrong thing,
that you're going to learn the wrong thing,
that you're going to focus in the wrong
place. And so I think the execution gap
doesn't get talked about enough. People
who execute reliably on AI, even if
they're just learning AI and they're
beginners, are rare. The second thing
that I want to call out is that people
don't talk enough about the kinds of
problems that they are interested in
solving that weren't solvable before.
I'm interested in that. I'm fascinated
by that. I can't stop thinking about it.
What are the kinds of problems that we
couldn't solve before that are solvable
now? And I think that we've been so
blinded by the success of ChatGPT that
we sometimes assume that all the
problems have vanished into the ether and
that there's really nothing left to do.
But I don't think that's true. We're not
short of problems. We have whole new
classes of problems that have opened up
that we can now come up with solutions
for. As an example: there still is no
good way for me to organize my library
with AI. Believe me, I've tried. Even
the best image recognition that o3
offers is not good enough to hold all my
books in memory, recognize all the
titles, reliably find them, reliably
list them, and help me organize my
library. I have to do it by hand. And
you know, people can say they enjoy
doing that by hand, but if you're
organizing a lot of books, that's a
legitimate problem. That's just one
example. I'm not saying that's an
example that matters a ton. I'm saying
it's an example of something where it's
a real AI problem. AI is supposed to be
good at it. AI may be good at it with a
specialized tool, but it doesn't exist
yet. There are hundreds, even thousands, of
problems like that. I wish we talked
about them more. So, there you go. 12
answers to the questions I get asked the
most and two final reflections that I
wish people would ask more. Cheers.