
Assessing Your Job in the AI Revolution

Key Points

  • The speaker has distilled hundreds of AI‑related inquiries into 12 core questions and will also share “bonus” topics nobody asks about.
  • To gauge whether your job is at risk, break the role into individual tasks, estimate how much AI could automate, and then consider the “glue work” that ties those tasks together—if removing 30% of tasks leaves you with a hollowed‑out role, you should be concerned.
  • Customer success illustrates the paradox: AI can handle large chunks of routine communication, yet many firms are pulling back because nuanced, context‑dependent handoffs still require human empathy and judgment.
  • Real‑world experience shows AI chatbots often fall short of the personal, humor‑laden help provided by seasoned human reps (e.g., the speaker’s Amazon contact “Thor”), underscoring the lasting value of human touch in support roles.
  • The talk promises a final segment with additional insights on AI topics that people rarely ask, offering extra strategic perspective.

**Source:** [https://www.youtube.com/watch?v=7RZlxqMcObE](https://www.youtube.com/watch?v=7RZlxqMcObE)
**Duration:** 00:30:02

Sections

  • [00:00:00](https://www.youtube.com/watch?v=7RZlxqMcObE&t=0s) **Assessing Job Risk with AI** - A heuristic for gauging whether your position will be eliminated or merely streamlined by AI: dissect the role into tasks, estimate the share AI can handle, and evaluate the remaining "glue work."
  • [00:04:46](https://www.youtube.com/watch?v=7RZlxqMcObE&t=286s) **Uncertain Timeline for White-Collar Cuts** - Experts disagree on when AI-driven white-collar reductions will materialize; the speaker foresees notable role disruption within the next two to three years but stresses that widespread layoffs and severe unemployment remain speculative.
  • [00:08:05](https://www.youtube.com/watch?v=7RZlxqMcObE&t=485s) **AI's Limits and Emerging Roles** - Despite AI's growing capabilities, it struggles with ambiguity and high-liability tasks (surgery is the example), while the shift spawns new, loosely defined roles such as AI architect for professionals at all career stages.
  • [00:11:38](https://www.youtube.com/watch?v=7RZlxqMcObE&t=698s) **Showcasing Public Work for Career Edge** - Professionals, technical and non-technical alike, should build visible, tangible projects (GitHub repos, storytelling content) and pursue part-time apprenticeships with indie founders to prove real competence beyond AI-generated output.
  • [00:14:59](https://www.youtube.com/watch?v=7RZlxqMcObE&t=899s) **Five Core LLM Skills** - Five essential abilities for working with large language models: prompt engineering, retrieval-augmented generation, vector-database fundamentals, lightweight agent orchestration, and data storytelling to polish model output.
  • [00:18:31](https://www.youtube.com/watch?v=7RZlxqMcObE&t=1111s) **Leverage Domain Expertise with AI** - Pair deep industry knowledge and tacit expertise with large language models to become an indispensable AI translator who can productize insights and command premium roles.
  • [00:22:00](https://www.youtube.com/watch?v=7RZlxqMcObE&t=1320s) **Human Skills in Regulated AI Niches** - AI will augment rather than replace work in high-risk, slow-procurement sectors like energy, healthcare, defense, and specialized professional services; trust-building, problem framing, and ambiguity handling remain essential human skills.
  • [00:26:03](https://www.youtube.com/watch?v=7RZlxqMcObE&t=1563s) **Remote AI Networking Strategies** - Remote AI enthusiasts should build digital communities by sharing public artifacts, engaging on platforms like Discord or X, and, if feasible, consider relocating to a tech hub for higher-quality training and networks.

Full Transcript
I get hundreds and hundreds of questions a month. I get them through my contact form; I get them in my comments. I have distilled them down into 12 high-level questions that punch at the hardest things about this AI revolution. I want to give you my answers to those right here. And then at the end, as a bonus, because we do bonuses around here, I want to give you the things that people don't ask that I would be thinking about. So let's get to that at the end. First, the 12 questions.

Number one: Nate, I see headlines about AI layoffs all the time. How can I tell if my role is next, or if it's just getting rewired and I'll be okay? The heuristic, the rule of thumb you can use, is to look at your role as a series of tasks, then look at what percentage of those tasks AI can take, and then give it a discount. The reason you give it a discount is because I have said over and over that roles are not just bundles of tasks. Roles have glue work. So what you need to be asking yourself is: if you took away 30% of the tasks in the role, could you leverage yourself to be more effective at accomplishing the team mission at the company because you had less busy work to do? Or would it feel like it was just eating away and hollowing out the role, and there wasn't really a lot left to do? If it's the latter, if it feels like it's just hollowing out, that is when you should get concerned.

So I'm going to give you a couple of specific examples. I think customer success has been one of the hardest hit roles, but it also shows where you still need to have hope. Customer success is an example of something where big Silicon Valley names, including Sam Altman himself, have said there just won't be CS jobs anymore.
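The task-breakdown heuristic described above can be sketched as a quick back-of-the-envelope calculation. Everything numeric here is an illustrative assumption (the hours, the automatable fractions, and the 30% glue-work discount are ours, not the speaker's); the point is only to make the reasoning concrete.

```python
# Back-of-the-envelope sketch of the job-risk heuristic: list the role's
# tasks, estimate how automatable each one is, then discount for the
# "glue work" that ties tasks together. All numbers are hypothetical.

def role_risk(tasks, glue_discount=0.3):
    """tasks: list of (hours_per_week, automatable_fraction) pairs.
    Returns the share of the role AI could plausibly absorb, after
    discounting for the glue work that survives automation."""
    total_hours = sum(hours for hours, _ in tasks)
    automatable_hours = sum(hours * frac for hours, frac in tasks)
    raw_share = automatable_hours / total_hours
    return raw_share * (1 - glue_discount)  # glue work stays human

# Hypothetical customer-success role, broken into tasks:
cs_tasks = [
    (10, 0.8),  # routine status emails: highly automatable
    (8, 0.3),   # escalation handoffs: context-dependent
    (6, 0.1),   # relationship building / expansion revenue: human
]
share = role_risk(cs_tasks)
print(f"Discounted automatable share: {share:.0%}")
```

None of this is precise. The useful question is the one the speaker poses: if the discounted share is large and what remains is mostly glue, the role is being hollowed out rather than rewired.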
And yet at the same time, we see major companies who claimed to roll CS jobs over to AI, like Klarna, roll back, because they realize they need good handoffs, and they need humans in the loop who can actually help customers, because customer help turns out to be a very context-dependent thing. I have navigated AI menu after AI menu, AI chat after AI chat, because everyone's rolling them out. You probably have too. The experience has not been as good as working with my buddy Thor at Amazon. And that is the real name of a customer service rep at Amazon who I worked with a decade-plus ago. I got my questions answered, Thor had a great sense of humor, and we all had a great time. I've never had that kind of an experience with a customer success robot.

And so I think CS is going to change. I think it's an example of a case where you can argue that large pieces of those tasks are going to get picked up by AI. It's just too easy for AI to write text based on databases, and it will get more personable. Probably not as personable as Thor. But I would argue that if you look at it, you should be able to architect systems. You should be able to see places where you can lean in as a CS rep and deliver extraordinary value. I know CS reps that drive expansion revenue for businesses because they are so good at what they do. An AI agent is not going to be as good at driving expansion revenue for businesses. It just won't.

And so part of the answer is looking at your task load versus your mission. Where is your mission aligned versus where are your tasks aligned? The other part is something you can't control. Part of how you tell if your role is next is, frankly, whether your leadership understands AI. Does your leadership talk about AI in a nuanced way, the way I'm talking about it?
Or is your leadership out there saying, you know, AI is a cost cutter, I'm happy to just dump these roles? Because they might be wrong about that. They probably are. People who dump quickly tend to regret it later. We have big stories about that; I just mentioned one, Klarna. But if that's the way they're thinking, it pays to watch leadership and find another role, or go hunting for a different career path, because of leadership's attitude, not because of AI. And I do want to distinguish those two. So if you want to tell the answer from a task perspective, look at what tasks are being automated. Discount for the bundling, discount for the glue work you can do, discount for your mission alignment. If you want to look at it from a company perspective, look at your own leadership. Look at whether they are willing to actually acknowledge the nuance of AI, or if they're just looking at this as a cost-cutting machete.

Number two: Nate, I need dates. When do experts say that white-collar cutbacks are going to start to bite? Experts disagree on this one. They really do. There is no one answer. I wish I could give you an answer. There are lots of people who claim to know. People claim to know it will be 2027. People claim to know it will be 2030. People claim to know it will be 2028. People claim it will happen, and then there's a camp of people who aren't sure that's the case yet. It depends on your attitude. If I were you, I would assume that there will be significant restructuring of roles and a significant disruption to every white-collar role in the next two to three years. That is different from assuming that white-collar cutbacks will mean mass layoffs across all of those job roles.
I don't think that is baked into the empirical evidence. Will we do our jobs differently? Absolutely. And in fact, we're just getting started with that. Will most of us not have jobs anymore? It's not clear yet that that is happening. It's not clear in the data. It's not clear given the capabilities of the AI systems and the direction they're growing. Will there be some layoffs? Yes. Will we see more chaos in 2026 and 2027 as job disruption starts to hit? Yes. But I think that when we talk about this, we often confuse breadline-level chaos, where it's like 30, 40, 50% unemployment, a doomsday scenario, we all have to go on universal basic income, et cetera, with compressed technological-change-level chaos, where you have a technological change equivalent to the steam engine being invented, or equivalent to the internet being invented, and you have to negotiate that change very quickly, because unlike past revolutions, this is all happening now. But we don't really articulate those as two different futures. My bet is that this is like other technological changes, but very, very compressed. So the shocks are going to feel more dramatic for the next few years. Other people are betting on a more doomsday scenario. Folks who are betting on a more doomsday scenario tend to say words like 2027 and 2028 a lot. The good news is that we will find out real fast whether they're right or wrong. It will not take that long. It is well within even the entry-level part of an initial career path. Which suggests that if you want to plan for your future, you should plan for them to be wrong, because it does not hurt to build your skills in case they're wrong. And if they're right, it doesn't matter. So you might as well build your skills anyway.
And that one often surprises people. Number three: Nate, I want work that AI cannot cannibalize. How do I spot really durable roles before everybody else piles in? It feels like there's so much hype; people are just running back and forth. I want to suggest to you that there are certain things that do not go out of style. Understanding how to broker trust does not go out of style. Understanding how to build trust in business contexts will never go out of style. It will never be taken by AI. Understanding how to work in high-context situations, where you have to be aware of wide, rapidly changing contexts, doesn't go out of style. It doesn't disappear. AI doesn't take that, because AI is not good at tokenizing it. AI is not good at tokenizing trust. You can't really tokenize trust. Trust is a human transaction. Understanding how to handle high-ambiguity situations, where things are gray and shifting all the time: those are not things AI is super good at either. In fact, one of my biggest frustrations with AI models is that as their capabilities have increased, they have not gotten better at handling ambiguity. Arguably they've gotten worse, because they're better at being specific. And so look for places with really messy real-world constraints. Look for places where deep relationships are required. Look for places where you need to deliver outcomes, especially if you need to deliver outcomes against liability. A good example of this: people have been saying for a while that robots are going to take over surgeons' roles. Surgeons have liability. Surgeons can be sued. Surgeons must get it right. Surgeons have skin in the game. Robots don't.
And so I think surgeon, ironically, is a role that may transform and shift as we get robotics involved in the operating room (and it already is shifting), but it doesn't mean that surgeons themselves are going to disappear. And you'll see similar roles across tech. Does that mean these are only available to seniors and people who are deep in their careers? I don't think so. I think one of the really interesting things about AI is that it is upending so many of our assumptions about jobs that there are all kinds of tail opportunities opening up that people haven't fully defined yet. AI architect: it's a brand new role. We haven't fully defined it. Yes, it probably takes some degree of experience with AI and understanding systems, but it's an example of a role that's very, very new. Another role that's new: AI engineer. What does it mean to be a good AI engineer? There are lots of other roles beyond that. There are roles we don't really have good words for. We don't really know where product management is going or how it's being disrupted, but it's an example of a role where you need less technical knowledge than an engineer (probably more than you used to have), and you need a totally different mindset in a world where you might not be driven by a roadmap anymore. And so I think the opportunity here is to look for those durable, relational, low-transactional structures. So: high-context, high-ambiguity, high-trust intersections. Places where it's not super transactional, places where you have to be relationship-oriented, places where you have to be deep in context to understand things. And if you think, by the way, that AI engineer and AI architect don't have to be deep on trust and ambiguity and context, I've got news for you.
Places where you have to deliver outcomes against liability. Chase problems with unstructured data. Chase problems that aren't easily tokenized yet. AI can't eat it if it can't ingest it. So you want to look for those spaces. And the thing is, I can't name all of them for you, because they're still coming into being. That's one of the really interesting things about the next two or three years. And so I'm trying to give you the principles to spot them for yourself.

Okay, number four: Nate, I'm a new grad and entry-level roles seem to be evaporating. Where do I earn real experience now? How do I get onto this ladder? This doesn't seem fair. Well, 2008 was also really rough, let me tell you. So, first off, I think part of the challenge is that you are getting hit with the broken job-application pipeline harder than anybody else, because other people can lean on previous work experience, but it's harder if you haven't had that. I think there are a couple of things that help, but the one thing I've seen that is most reliable is just going to require relentless execution on your part. It ties into number three. The thing that I think helps the most is treating projects like the new resume. You've got to be able to ship things. You've got to be able to show what you're building. You've got to be able to show you can connect with community needs and build something in response. If you're in tech, build anything that leaves a public artifact. If you're in marketing, if you're trying to tell a story, you have to be able to start telling stories now. You want to leave a public footprint of what you're working on that is hard to replicate.
If you have a bunch of storytelling TikToks, or if you have a strong GitHub that you have actually delivered working code against (it actually works; it's not just a bunch of broken projects), it's at least something people can look at and investigate. And then the question becomes not "did AI do all of this for you?" but "do you understand the principles of building for the role you're asking for?" Because sometimes people assume you have to have a GitHub if you're an engineer, and you shouldn't have one if you're not. Those rules are shifting. Yes, engineers probably should still have a GitHub, but people who are not engineers need to be able to talk about technical topics now too. So if you have an opportunity to build something and you're not a technical person, don't be afraid of that. I also suggest that you look for something like a fractional apprenticeship: small part-time gigs for founders who need problems solved for them. There are so many indie founders out there. Every single one of them does not have the time to automate as much as they want to. They do not have the time to build as much as they want to. Go help them. They can refer you. Go help them. You will get something you can build and show. And how do you get that? You're like, "Well, who's going to pay attention to me?" You should have projects you can show and say, "This is why you should come to me. Look, I can show you my work." And so the ladder that is there is changing, because the roles themselves are changing. And that is part of why hiring is so broken right now: people, even hiring managers, are trying to figure out and project what they will need in 24 months and hire for that.
I will also say part of that chaos means there are roles opening up targeted toward new grads that weren't there before. For example, there are roles targeted at entry-level folks coming in where you need to articulate your AI fluency from the get-go, so that you can help bring AI fluency to the team you're with. That's new. Those roles didn't exist before. And so part of it is figuring out: if you were tracking toward some of the steady tech jobs from the 2010 era, maybe those are changing really fast, but there are other ones opening up. So I would say: look at your public artifacts, look for fractional apprenticeships wherever you can get them, and pitch for them. Don't wait for them to open up; go get them. Go cold DM. And then make sure you're aware that there are roles opening up that may not have conventionally been in the middle of your degree path, but they are now.

Okay, number five: Nate, I can't waste cycles. Which AI skills do I need to learn this year? I've got to tell you, there are a few that come up over and over again. I do think there's a clear answer, and I just want to go through it. If people have these big buckets covered, they are already ahead of most folks. And I have written up a ton of these already on the Substack. So, number one: prompt architecture. Understanding how prompts work. I think it's one of the universal skills.
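Prompt architecture can be made concrete with a small sketch. The section layout below (role, context, task, output format) is one common way to structure a prompt, not a prescription from the talk, and the example strings are ours.

```python
# Illustrative prompt structure: separating role, context, task, and
# output format makes prompts easier to reason about, review, and reuse.
# The section names and example content are illustrative assumptions.

PROMPT_TEMPLATE = """\
You are {role}.

Context:
{context}

Task:
{task}

Output format:
{output_format}
"""

def build_prompt(role, context, task, output_format):
    # Fill the template; each section stays independently editable.
    return PROMPT_TEMPLATE.format(
        role=role, context=context, task=task, output_format=output_format
    )

prompt = build_prompt(
    role="a customer-success assistant for a SaaS product",
    context="The customer is on the Pro plan and reported a billing error.",
    task="Draft a short, empathetic reply proposing next steps.",
    output_format="Three sentences, plain text.",
)
print(prompt)
```

Treating the prompt as a template with named slots, rather than one ad-hoc string, is the kind of habit that transfers across tools even as the models underneath change.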
Number two: understanding how retrieval-augmented generation, or RAG, works, and where it doesn't work, which is critical. Number three: basic vector database hygiene. Understanding embeddings, understanding refresh pipelines, how you build a vector database. Even if you're not building one yourself, understanding how they work so that your eyes don't glaze over really, really helps. Number four is lightweight agent orchestration. Understand how tools like n8n or LangGraph enable you to wire tasks together, and then it can be a public artifact. Wire things together; automate. And then the last one, number five: data storytelling. Understand how to turn raw model output into something that is polished. That is a meta skill; it is not necessarily just a technical skill. People who copy and paste are doomed. I don't say "doomed" very often, but you're doomed. People who are able to polish model output, to think critically, to engage with model output: that goes back to one of those larger skills I called out. Look for places AI can't cannibalize. Well, I've got to tell you, polishing model output and knowing how to make it sharp is exactly the kind of high-ambiguity, high-context work I'm describing. So get good at data storytelling with LLMs. That's skill number five. To go through the five again: prompt engineering (or context engineering, if that's the popular term now); RAG; understanding how vector databases work, which is related to RAG but slightly different, because it's a level down from a structural perspective; agent orchestration; and data storytelling with LLMs.

All right, next question. Nate, the stack flips every six months. How do I stay ahead when the tools will not sit still?
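The retrieval idea behind skills two and three above can be illustrated with a toy sketch. The bag-of-words "embeddings" here are a deliberate simplification standing in for a real embedding model, and the in-memory list stands in for a real vector database; only the shape of the retrieve-then-generate step is the point.

```python
# Minimal sketch of the retrieval step in RAG, with toy bag-of-words
# "embeddings" in place of a learned embedding model and an in-memory
# list in place of a vector database. Illustrative only.
import math
from collections import Counter

def embed(text):
    # Toy embedding: a word-count vector (stand-in for a real model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy for damaged items",
    "how to reset your account password",
    "shipping times for international orders",
]
index = [(d, embed(d)) for d in docs]  # the "vector database"

def retrieve(query, k=1):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("I forgot my password"))
```

In a real pipeline the retrieved passages would be stitched into the prompt before generation, and the "refresh pipelines" the speaker mentions are what keep that index from going stale.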
Look, the best way you can do this is to schedule Google-style 20% time. I'm not saying actually spend 20% of your time on this; I know we don't all have that luxury. But the stack itself is built on fundamentals that don't change as quickly as all that. The transformer architecture underlying this entire AI revolution hasn't changed. So understand the things that don't change. I call them out really, really frequently in my content. And then be disciplined about forming hypotheses about what you want to bet on and explore in a particular month, and create that in line with your larger intent and goals: your mission. We've talked about this idea of being mission-aligned when we talked about career paths: are you able to contribute to the team mission, the company mission, et cetera? What about your personal mission? Are you able to articulate: these are the things that I really want to get done, these are the high-ambiguity or high-trust problems I dream of getting into? Walk back from that (and by the way, AI is a tool for that) and figure out which technical skills or which AI tools are in line with that larger mission, and then focus there, and do it in a time-boxed way. Say: I'm going to take four hours a week for a month, and I'm really going to do it. I'm going to set a timer. I'm going to sit down. I'm not going to scroll TikTok. I'm not going to watch Netflix. I'm actually going to do it. I'm not going to tweet about shipping; I'm actually going to do it. And then come back and see if your skills have grown in the direction you want. See if you've made progress in a month. It's like any other habit: you have to build it.
And so my advice is basically: the tools will not feel like they're moving so much if you have a compass. So develop that compass.

Number seven: Nate, I am mid-career. How do I translate what I already know into an AI-adjacent role without starting from nothing? Look at your domain advantages. Where do you already have strong domain expertise, regulatory fluency, customer access, legacy data, storytelling and polishing capabilities? Now pair that with an LLM and become the bridge that other people can't easily replace, because you have that deep domain expertise. I have people telling me that they desperately want their existing senior employees to lean in more on AI, and they worry because they don't. Don't be that person. You have the domain expertise; you have the advantage. Productize tacit knowledge. If you want to go into consulting, if you want to go into an indie role, or whatever you have dreamed up for the next half of your career, you can productize that tacit knowledge into something that helps people who are climbing the career ladder earlier than you to get up faster and learn those domain secrets quicker than you had to. Eventually, you should be in a position, whether you're internal or doing some sort of independent role, where you can act as an AI translator in your vertical. You should be able to command a premium, because the untranslatable, the hard to understand, the difficult expertise that comes from years of knowledge is something you carry with you, and you have now successfully coupled it with AI. And so I would actually say: look at it as "my domain gives me an incredible starting point to get to an AI-adjacent role without starting from zero."
"All I need to do is dive in on AI literacy." The things I just called out, the basic pieces I described a couple of questions ago: understand agents, understand RAG, understand data storytelling with AI. Those are things that, if you can start to get them down, if you can start to get prompt engineering down, you are going to be formidable. You're going to be a very strong candidate.

Number eight: I'm using ChatGPT at work. What's career-safe usage before legal gets involved? The answer is you must mask red data. Red data is anything your company considers personal or confidential. Just don't put it into any AI. Just don't do it. You don't want the risk. You can mask it; masking means obscuring all of the confidential information. And I know people do this anyway. There's a massive shadow IT problem, but the risk to you individually is disproportionate. The company can come after you for using AI inappropriately at work, and I am expecting a court case in that vein to come out in the next six months. It is going to happen. People will leak something that they should not have leaked. There has already been an instance where Claude ended up apparently disclosing material non-public information to an investor; the information did not come from any discernible source, and it's inferred that it almost certainly came from a board meeting that company had. I ran across that story last week. I'm not going to reveal the name of the company. It is not common; that is the first of those stories I have heard. But it does happen, and that is the thing the company worries about. So just don't do it.

Number nine: Nate, I need a five-year roadmap. What industries look stable?
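The red-data masking advice from question eight can be sketched as a simple preprocessing step. The regex patterns below are illustrative assumptions, not a complete data-loss-prevention solution; a real confidential-data policy needs proper tooling and review.

```python
# Sketch of masking "red data" before pasting text into an AI tool.
# The patterns are illustrative only; real DLP needs dedicated tooling.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN shape
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),     # card-like digits
]

def mask_red_data(text):
    """Replace each matched pattern with a placeholder label."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

print(mask_red_data("Contact jane.doe@acme.com about card 4111 1111 1111 1111"))
```

Even with masking, the safest reading of the speaker's advice stands: if the data is red, the default is to keep it out of any AI tool entirely.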
Well, I've got to tell you, I think roadmaps are changing. I think we should think about long-term bets on these durable task areas that are human-friendly, like high-ambiguity areas and high-trust areas. And so I don't know that industries are necessarily the right lens, but I will take your question seriously and I will answer it. I think regulated, high-risk verticals with slow procurement cycles are going to be fine: energy, healthcare, defense. AI is going to augment there way before it does any kind of replacement. Look at places where atoms come ahead of bits and how you can get involved. You're jumping into the robotics revolution there, but: advanced manufacturing, grid infrastructure, supply chains. And then look at long-tail professional services: specialized legal, complex insurance, bespoke financial work, things where models in general are going to have a hard time being as useful as your specific expertise. There are going to be other places. Like I said, I think there are niches in every single industry. I don't see industries being taken over by AI wholesale. It's not like nobody will be working in B2B SaaS and it will all be AI. I mean, some of us would say that was the dream, but the reality is there will be places in all of these industries for people who can earn trust and solve hard problems. That's my take. If you want to look at industries: energy, healthcare, defense, and I think supply chain and grid infrastructure are all relevant.

Number 10: Nate, I bank on human skills. Which ones will matter when the machines do the grunt work? So, I said this a little bit earlier. I talked about problem framing, and I talked about building trust.
I talked about making sure that you understand how to handle high-ambiguity situations. But if you want to boil that into skills, I do actually think problem framing is a piece of it. That's why it came to mind. So problem framing is the act of turning something ambiguous into something solvable. It's actually one of the core skills PMs bring to the table if they're good. Taste gets talked about a lot, but for good reason. It's the instinct to choose what is good. When we talk about LLM-driven storytelling and you have to polish, it's taste that helps you polish. Narrative persuasion: figuring out how to craft a story that aligns stakeholders. That's not always intuitive. That's not always obvious. Especially if you are in leadership, if you are in sales, narratives matter a lot. In marketing, narratives matter a lot. In product, frankly. Judgment under uncertainty, that's the skill that goes with high-ambiguity navigation: deciding when 78% confidence is good enough to ship. AI doesn't have skin in the game. AI is not going to make that call. And so look for those kinds of skill sets, the skill sets that matter because they are attacking the non-tokenized parts of the distribution. So problem framing, taste, narrative persuasion, and judgment are all good examples, but it's not an exclusive list.

Number 11: Nate, I can't afford a pricey boot camp. Where are there affordable options to start learning? Well, YouTube. I actually did a whole Substack post on YouTube, but I also will say: look up AI leaders like Andrej Karpathy on YouTube and watch what they say. And I say Andrej because he is a gifted teacher and he's also extremely technically fluent. He is a technical founder in the AI space.
And if you want to learn, that's an example of a place you can go to learn. But you don't have to just do that. If you say that's too technical for me, you can pick the keyword or topic you want to get better at, align it to your north-star mission, and go dig up 30-, 40-, 50-minute videos on YouTube about it most of the time. Now, I will say honestly, part of what makes YouTube annoying is that there are also a bunch of clickbait videos. There are videos that are going to show you a special thumbnail, and you're going to get six minutes of hype and like 30 seconds of insight. That's not really going to be worth it for you. So you're going to have to find, in your particular area of interest, which YouTube videos are useful. But that becomes a window into the rest of the learning portfolio, because they will reference other sources, other references. They'll reference books. They'll reference courses that may be free; there are so many university AI courses that you can audit. And so I actually do think there are a lot of affordable options for reskilling. And the last thing I will say is that AI is experiential technology. You can reskill experientially, and you should, and you should use AI to help you. I've written prompts for that. Use AI to help you learn AI.

Number 12: Nate, I live far away from San Francisco. How on earth do I get high-quality AI training or get plugged into networks? It seems like it's impossible. Well, if you can and you want to move to a tech hub, there's often a lot of upside there. So I will say, we'll just put that on the table. If that's something that's an option for you, think about it.
If that's something that's not an option for you, maybe it's because you don't want to. You like the peace and quiet. I get it. I don't live in San Francisco either. Then you want to be in a place where you are building strong online communities around collaborative problem solving. Part of why you put public artifacts out there, which I said in one of my earlier answers, is because it enables you to form online communities around areas you're interested in. And if you can do that, if you can collaborate with other people building in the space, talk to them, engage with them, whatever social platform they're on. Maybe they're on Discord, maybe they're on X, who knows? Find the people working on the problems you're interested in, and let them guide you to other people, and hop, hop, hop. Now, there's a whole art to cold DMing if you want to raise capital, if you want to go places. That's not what this is about. This is about building networks digitally when you can't be somewhere physically. And I would say start from that common area of interest. Start from where you're actually building. Put out public artifacts. Start talking about it. Start finding people building. Start engaging with them. And you'll start to build that web really organically, and it won't feel fake.

Last but not least, what are two things that were not on this list that I wish people would talk about? Number one, I wish people would talk more about the execution gap. There has never been more capability to build, learn, and leverage yourself with AI. The hype is deafening, but I see real struggles with actually executing. And I think part of the challenge is the start-stop problem. It is really easy to start on something with AI, but the easefulness is deceptive.
It is actually very hard to go through the S-curve of learning with AI because it's undefined. If you're typing in the chat window and you don't know what to ask, you feel stuck. That mental block is really big, and you don't know how to keep moving forward. The answer happens to be: try anything in the direction you're wanting to go and iterate from there. But you have to get over the fear that it's going to be the wrong thing, that you're going to learn the wrong thing, that you're going to focus in the wrong place. And so I think the execution gap doesn't get talked about enough. People who execute reliably on AI, even if they're just learning AI and they're beginners, are rare.

The second thing that I want to call out is that people don't talk enough about the kinds of problems that they are interested in solving that weren't solvable before. I'm interested in that. I'm fascinated by that. I can't stop thinking about it. What are the kinds of problems that we couldn't solve before that are solvable now? And I think that we've been so blinded by the success of ChatGPT that we sometimes assume that all the problems are gone into the ether and that there's really nothing left to do. But I don't think that's true. We're not short of problems. We have whole new classes of problems that have opened up that we can now come up with solutions for. As an example: there still is no good way for me to organize my library with AI. Believe me, I've tried. Even the best image recognition that o3 offers is not good enough to hold all my books in memory, recognize all the titles, reliably find them, reliably list them, and help me organize my library. I have to do it by hand.
And you know, people can say they enjoy doing that by hand, but if you're organizing a lot of books, that's a legitimate problem. That's just one example. I'm not saying it's an example that matters a ton. I'm saying it's an example of a real AI problem: something AI is supposed to be good at, something AI may be good at with a specialized tool, but that tool doesn't exist yet. There are hundreds and thousands of problems like that. I wish we talked about them more. So, there you go: 12 answers to the questions I get asked the most, and two final reflections that I wish people would ask about more. Cheers.