
Engineering Still Essential Amid AI Revolution

Key Points

  • The speaker argues that AI actually heightens the importance of engineering because AI‑generated code can produce far‑reaching failures, requiring skilled engineers to oversee and safeguard systems.
  • While AI can automate boilerplate and produce working code, creating robust, production‑ready engineered systems remains a distinct, human‑driven discipline.
  • Engineers’ roles are shifting toward greater responsibility and partnership with AI, rather than being replaced; however, those who lack deep engineering understanding may be displaced.
  • Talent variability among engineers has always existed, and the rise of AI merely amplifies the need for high‑impact engineers who can deliver value and navigate the complexities of AI‑augmented development.


**Source:** [https://www.youtube.com/watch?v=gXbTh70m_q0](https://www.youtube.com/watch?v=gXbTh70m_q0)
**Duration:** 00:20:06

## Sections

- [00:00:00](https://www.youtube.com/watch?v=gXbTh70m_q0&t=0s) **AI Elevates Engineering's Crucial Role** - The speaker passionately defends engineering, asserting that AI amplifies its importance, warns junior engineers against misplaced fears, and stresses that AI‑generated code lacks the rigor of true engineered systems, raising failure risks.
- [00:03:57](https://www.youtube.com/watch?v=gXbTh70m_q0&t=237s) **From Coding to Engineering** - The speaker asserts that the digital divide is shifting from merely being able to code to possessing engineering expertise—skills such as system integration, security, and production deployment that can be acquired outside traditional computer‑science education and are what enable faster, more effective software development.
- [00:07:29](https://www.youtube.com/watch?v=gXbTh70m_q0&t=449s) **From Probabilities to Contracts** - The speaker argues that AI's inherently probabilistic behavior requires engineers to impose deterministic contracts, probability budgets, and safeguards to control variance and emergent risks, especially when scaling to massive deployments.
- [00:10:36](https://www.youtube.com/watch?v=gXbTh70m_q0&t=636s) **Emerging AI Engineering Disciplines** - The speaker explains how engineering is evolving with AI by introducing new fields such as semantic engineering—debugging meaning flow and building semantic firewalls against injection attacks—and boundary engineering, which bridges probabilistic language models with deterministic system expectations.
- [00:14:14](https://www.youtube.com/watch?v=gXbTh70m_q0&t=854s) **Engineering Empathy in AI Systems** - The speaker argues that senior engineers must blend empathy, judgment under uncertainty, and sophisticated orchestration of complex, contract‑less AI components to prevent large‑scale failures in today's high‑stakes, LLM‑driven landscape.
- [00:17:30](https://www.youtube.com/watch?v=gXbTh70m_q0&t=1050s) **Three Laws of AI Engineering** - The speaker outlines three core principles (measurability in production, mandatory observability and forensic telemetry, and the ability to explain failures) to ensure accountability and reliable deployment of AI systems beyond demo prototypes.

## Full Transcript
This is a love letter to engineering. I firmly believe that AI makes engineering more essential, not less. And I'm going to tell you why in detail, because I think that people who aren't engineers don't understand this. And I think increasingly junior engineers are afraid, because they have not experienced what it's like in detail to work with senior engineers at scale. I have. I've worked with senior engineers. I work with principal engineers. I know what it feels like to have a very strong engineering mind, or collection of minds, in the room reviewing a technical specification. I want to tell you in this video why I am highly convicted that engineering isn't going anywhere as a discipline. And in fact, I will go farther. I will say engineering is more important now than it was before the age of AI.

The fear is real, but it's backwards. Yes, absolutely, boilerplate code can be generated automatically. I don't doubt that. I see it all the time. Yes, AI can write working code from natural language. That's true. But working code and engineered systems are worlds apart. They're not close to the same thing. In fact, most people are finding that out as they vibe code systems that may be ready to launch to friends and family, but are not ready for actual production. I would also argue that one of the reasons why engineering matters more now is precisely because the AI can write code. The blast radius of AI-generated failures is exponentially higher when the AI can write the code for you. So we're not being replaced. Engineers are not in danger of being replaced, because engineers are being asked to take positions of greater responsibility over AI, with AI, in partnership with AI, and we'll get into that.
But I want to lay that out as a contention, because I think we just need to say it out loud. My position is yes, there will be some engineers who don't understand how engineering works that will absolutely lose their roles in the age of AI, and frankly, many of them probably would have lost their roles previously. One of the interesting things when you work with a lot of different engineers, as I have, is that you realize how variable the talent mix is across engineers. Engineers at the same level can be worlds apart in terms of actual capability and impact to the business. And anyone who's worked with engineers will tell you the same thing. I had an intern when I worked at Amazon who did more work and delivered more value than senior engineers I knew there. It just was the way it was. He was motivated. He had a great assignment. He was able to get something into production, and he did a great job. Needless to say, he got an offer, right? The point is that talent is variable. Talent has always been variable. And we shouldn't mistake the fact that engineering is hard and talent is variable for the impact that AI is having on the engineering discipline. There is absolutely an impact on the engineering discipline from AI, and I'm going to get into it in the rest of this video, but it's not as simplistic as saying AI equals bad for engineering, which is what I see mostly. And I'm tired of it. And so that's why I'm basically making a video as a love letter to engineering.

So let's bite off the first piece. I talked about AI-generated code. Let's talk about the vibe coding piece of this, right? People talk about vibe coding as replacing engineering.
Now anybody can speak an intent into lovable.dev and they can get working software back, or at least that's the idea. AI tools, I would argue, create a multiplier effect for trained engineers more than they create democratization of code. And I know they democratize code. I know people who have never coded before who were able to do some coding. Now, engineers can do even more. Engineers can compress their expertise to carry architectural intent much more easily than non-engineers, because they understand the underlying technical systems. Non-engineers, frankly, when they get the chatbot and they get the ability to vibe code, I've seen them build stuff, but I've also seen them very frequently get just enough rope to hang themselves. Engineers understand the limitations that coding brings. They understand how to read code. They understand how the system components work together, and they're able to go faster as a result. If the non-engineers get rope to hang themselves, the engineers get rocket fuel.

So the digital divide is shifting very rapidly from who can code to who can engineer. And I want to pause there, because I actually don't believe that only people from conventional computer science backgrounds can engineer. If you understand how software components go together, that is increasingly the essential skill. If you can engineer systems so they're efficient, if you understand how the backend works with the frontend, if you understand what attack surfaces look like from a security perspective, if you understand how to move software into production and what that requirement looks like, those are not exclusively skills you learn in computer science. And in fact, many engineers will tell you they didn't learn them in their computer science majors.
They learned classical computer science. Engineering is something we have frequently learned on the job. But it's not something limited only to software engineers. A lot of people can learn engineering principles. And what I've observed is that part of the reason why I'm saying software engineers go faster with vibe coding is because they have already inculcated, absorbed into themselves, these principles of engineering, so the principles feel native. And that is really the differentiator. It's not that they have a computer science degree. It's not that they know every single bit of JavaScript or TypeScript or whatever it is. It's that they know how to engineer. And that's encouraging, because it means if you're trying to also learn how to build efficiently in the age of AI, you can do so faster by just learning some of the skills of engineering. And I'm going to lay out what I view as some of the new core skills for engineering, based on lots of work with engineers through this AI transition.

So here's the thing I want to leave you with as we move on from the AI coding piece and get into some of the other parts of this engineering domain that we're exploring in this video: effective prompting is an engineering skill. And I have taught courses, right? I've taught courses on effective prompting. And increasingly I think that this is a truth that we don't admit. Effective prompting is an engineering skill that requires some degree of engineering understanding. And the more engineering understanding you have, the more effective your prompting is going to be.
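As a small illustration of that claim (my own example, not from the talk), compare a "vibe" prompt with a prompt that encodes engineering knowledge. The helper and its wording below are entirely hypothetical; the point is that the second prompt names an interface, failure behavior, and a security constraint that only someone thinking in systems would specify.

```python
# A "vibe" prompt versus an engineering-informed prompt for the same task.
VIBE_PROMPT = "Write me a function that fetches a user from our API."

# Hypothetical sketch: the engineer specifies the interface, failure modes,
# retry policy, and a security constraint, instead of leaving them to chance.
def engineering_prompt(endpoint: str, timeout_s: float) -> str:
    return "\n".join([
        f"Write a Python function get_user(user_id: str) that calls GET {endpoint}/{{user_id}}.",
        "Requirements:",
        f"- Time out after {timeout_s} seconds and raise TimeoutError.",
        "- Retry at most twice on HTTP 5xx responses, never on 4xx.",
        "- Return the parsed JSON dict on 200; raise KeyError('user') on 404.",
        "- Never log the Authorization header.",
    ])

print(engineering_prompt("https://api.example.com/users", 2.0))
```

The engineering understanding lives in the requirement lines, not in any magic prompt phrasing.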
All right, let's go from that sense of vibe coding, and getting rid of the fear around engineering, into the human responsibilities that I don't think change and that I think still sit with engineers. Now, some of these will sit with engineers more at scaled systems, like at Amazon, at Google, etc., and some will still apply to smaller teams as well. But I wanted to call out the human responsibilities here, because I think that we spend a lot of time talking about AI responsibilities and not a lot of time talking about the human component. And the human component is, I would argue, getting more important as we multiply our code output.

Number one, it is a human responsibility to translate intent into correct specification. So name the invariants, name the hazards, name the success criteria. Translate human needs into system edges and boundaries. Decide not just whether we can do something, but whether we should do it. You are carrying the weight of systems that, if you're working at a large company, can affect billions of people. And so translating intent to specification implies a degree of skin in the game that AI systems don't have.

The second one I want to call out is that humans are responsible for writing guarantees on probabilistic systems. AI at scale is, behaviorally, a probabilistic system. It's not always X or Y. It is a probability at scale. You are turning likelihood into contracts if you are engineering systems. You have to be able to guarantee outcomes. You have to be able to guarantee edges. You have to be able to guarantee security to some degree. Fundamentally, a lot of the human job is taking these probabilistic systems and writing contracts against them that you can uphold.
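The talk stays conceptual here, so this is my own minimal sketch of what "turning likelihood into a contract" can look like in code, with the model faked by randomness: the probabilistic component is wrapped so that callers always see a valid result, malformed outputs consume a bounded retry budget, and exhausting the budget degrades to a defined fallback instead of letting variance escape.

```python
import json
import random  # stand-in for a real model call; everything here is illustrative

ALLOWED = {"pos", "neg", "neutral"}

def flaky_model(prompt: str) -> str:
    """Hypothetical LLM stand-in: usually valid JSON, sometimes garbage."""
    if random.random() < 0.2:
        return "Sure! Here's your answer: {broken"
    return json.dumps({"sentiment": random.choice(["pos", "neg"]), "confidence": 0.9})

def classify(prompt: str, max_retries: int = 3) -> dict:
    """Deterministic contract: always returns a dict with a valid 'sentiment',
    no matter what the probabilistic component emits."""
    for _ in range(max_retries):          # the "probability budget": bounded retries
        raw = flaky_model(prompt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue                      # malformed output spends budget, never escapes
        if out.get("sentiment") in ALLOWED:
            return out
    # Budget exhausted: degrade deterministically instead of propagating variance.
    return {"sentiment": "neutral", "confidence": 0.0, "degraded": True}

result = classify("Was this review positive?")
assert result["sentiment"] in ALLOWED     # the invariant the caller can rely on
```

The caller-facing guarantee holds even though every individual model call may fail.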
So you have to be able to create boundaries that are deterministic, not probabilistic. You have to define probability budgets that work at scale across pipelines. You have to ensure that what must never happen really doesn't ever happen. And I've talked about scale a lot, but this is true even at a small scale. ChatGPT will not give you the same response if you give it the same prompt. There will be subtle differences. Your job as an engineer, the role of engineering, is to ensure that that kind of variance is not toxic to the system.

It is also a human responsibility to think at scale. You have to understand emergent behaviors when you scale up to a very, very large footprint. And this one, I think, is specific to big companies. But understanding emergent behaviors at 100 million boxes is its own skill set. It is a human skill set that very few humans have. Knowing when algorithms become bottlenecks, and where they bottleneck, is a human skill, and it gets at something that is essentially a risk with AI: AI does much, much better at writing code than deleting code. And one of the things that you see with really good engineers at scale is they know what they can remove. Being able to intuit how the system works at scale, being able to intuit where phase transitions from stable to chaotic occur in complicated systems: I've seen principal engineers do that. It's a remarkable skill. It's a human skill, and it means that they understand how to effectively deal with a world where one-in-a-billion events are actually things that happen on a regular basis, because of the trillions of events that they're processing. And I don't want to terrify you if you are an engineer who has not worked at that scale.
You'll notice that this is just one of a very large array of skills that I'm talking about in engineering. My intent here is not to convey that only those engineers who work at 100-million-box scale will survive. Instead, I'm trying to call out that this is one aspect of engineering that remains very human, even if AI increasingly assists in helping us understand these systems.

The last human skill that I want to call out is economic engineering. You have to be able to manage intelligence like a utility. You have to be able to optimize latency and quality and cost through trade-offs. You have to be able to design degraded experiences that prioritize value even with additional constraints. You have to understand where inefficiency matters and how it impacts margins. You have to engineer systems, especially in an age when tokens are intelligence and tokens cost money. How do you deliver intelligence cost-effectively? That's a human skill, and that's a skill that is scale-invariant. You should care about that even at a small scale.

Now, I've talked about just a few; there are more human responsibilities. But as we're touring across the engineering domain that I love so much, I also want to talk about some of the new disciplines that are emerging, because it's absolutely true that engineering is changing, and I don't want to pretend it's not. So I'm going to suggest for you a few of the ways that we see engineering starting to shift in the age of AI, and then we'll come back to the human skills and revisit them.

So, semantic engineering. That's a new discipline, right? How do you debug meaning flow, not just data flow? How do you build semantic firewalls against injection attacks?
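The talk doesn't prescribe an implementation, but the smallest version of a "semantic firewall" I can sketch looks something like this: scrub characters that can hide text from humans (zero-width and other invisible format characters), screen for instruction-shaped content, and fence retrieved text so it is presented to the model as data, never as instructions. The patterns, tag names, and error behavior below are all illustrative assumptions, not a vetted defense.

```python
import re
import unicodedata

# Toy heuristics only; real systems layer classifiers and policy checks on top.
INSTRUCTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def scrub_untrusted(text: str) -> str:
    """Drop invisible format characters (Unicode category Cf, e.g. zero-width
    spaces) that are often used to smuggle hidden instructions."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

def looks_like_injection(text: str) -> bool:
    """Cheap screen for instruction-shaped content in retrieved data."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INSTRUCTION_PATTERNS)

def fence(untrusted: str) -> str:
    """Mark external content as data inside the prompt; refuse obvious injections."""
    cleaned = scrub_untrusted(untrusted)
    if looks_like_injection(cleaned):
        raise ValueError("blocked: retrieved content resembles an instruction")
    return f"<untrusted_content>\n{cleaned}\n</untrusted_content>"

print(fence("A nice article about engineering."))
```

The refusal path is the point: the system is engineered to decline rather than pass suspicious content through.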
I saw a new injection attack just today, where someone can use the name field in ChatGPT to prompt-inject something. Injection attacks come in all shapes and sizes. People are able to put injection attacks in white text on a white background on Reddit boards now, because the system can't distinguish between your prompt and the content it's reading. It's up to engineers to figure out how to address this stuff. It's up to engineers to design systems that will appropriately refuse to act. And no, it is not just model makers. Engineers installing these systems have the accountability to act as well.

Boundary engineering. Engineers have to architect the space between the probabilistic world of the LLM and the deterministic world that we expect with software. They have to create interfaces that feel consistent. And yes, I am going to go out on a limb and say not all interfaces are going to be created by AI on the fly. I don't think that's true. They have to be able to maintain human-AI boundaries in ways that preserve human agency. Increasingly, part of the engineering responsibility is figuring out how to map that boundary between human and LLM collaboration in software at scale.

Memory and knowledge engineering is another one. How do you build institutional memory for AI system failures? How do you version data and prompts, even model weights, with rigor? How do you manage context windows economically? How do you build semantic forensics? How do you debug a system that's in production when you could not fully debug it beforehand? And that gets into safety and assurance engineering. How do you create live evaluation cultures?
How do you build safety cases that have explicit maps between hazards, mitigations, and evidence chains for audit? How do you design for hostile inputs as an assumption? How do you show what the system thought when the system is probabilistic? These are new skills for a reason. We don't fully have the answers here, but today's engineers are being tasked with using that core engineering skill set I talked about to attack these kinds of problems in the age of AI.

So, let's revisit and ask ourselves: in that world, with these kinds of new engineering skills emerging, what human skills really pop out? We talked about some initially. We talked about the importance of intent to specification. We talked about probabilistic systems and how you write guarantees against them, about thinking at scale, about economic engineering. There are some other human skills that I want to call out here that are going to be useful regardless of scale and regardless of where you are across these new engineering disciplines. I wanted to give you a flavor of what's new, and now we're going to come back to what stays the same.

System intuition stays the same. Good engineers sense bottlenecks. They sense problems to solve. They recognize emergent failures. Empathy is an engineering skill, because empathy requires you to bridge between the precision that machines require and the ambiguity that humans deal with. And effectively, that's what we're all doing in the age of AI. Empathy requires you to understand how millions of users will misuse your API. It requires understanding how to build systems that account for human nature. Judgment under uncertainty. That's another engineering skill.
It requires you to make expensive decisions on very incomplete information. It requires you to know when good enough beats perfect, which, by the way, is one of those distinguishing characteristics of really good senior engineers. It requires you to choose appropriate trade-offs within constraints, to decide when randomness is helpful versus when it's unhelpful.

Another human skill is the orchestration of complexity. You have to be able to coordinate tool chains, to conduct symphonies of intelligence involving multiple LLMs that actually work and deliver value. You have to be able to manage distributed systems where, increasingly, the components don't have pre-written contracts. You have to understand that semantic composability doesn't follow the traditional rules of software engineering, and you're going to have to help create those rules. There's a lot of complexity to orchestrate. Now, we've always had to orchestrate complexity from the engineering perspective. It just gets harder now.

Why? Why does all of this matter? The truth is, if we didn't have engineers, we would be in real trouble. The stakes have never been higher. AI makes it trivial to ship failure at scale. As I said, systems will now accept paragraphs of instructions from the open internet. The number of attack vectors is so high. Model rot can corrupt systems without any warning at all. The reality is that we have to recognize that engineering is what enables us to take this wild world, where LLMs can speak language and where they're probabilistic and where they can bring intelligence to bear, and wrestle it into an operational, stable production system. Engineers help to bring observability to those systems. They help to debug those systems.
They help to figure out the energy and compute footprint that's appropriate for those systems. And engineers ultimately are cultural architects. They help us to design workflows that preserve human judgment. They help us to build interfaces that make AI reasoning inspectable. They help us to prevent automation bias and skill atrophy, if they're designing systems well. Ultimately, they help us maintain dignity, because they can build systems that have to admit ignorance. Engineers have more responsibility now, not less. And that's one of the takeaways that I want you to sit with.

I'm going to close with what I would suggest are three new laws of engineering in the age of AI. And I chose these three for a reason.

Number one: if you can't write what is invariant, then you have not engineered the system. This captures the fundamental difference between vibe coding and engineering. In the age of AI, you have to understand that an LLM will give you likelihood, not correctness. And so an invariant, something that doesn't change, is what separates engineering from gambling, and a lot of vibe coders are gambling. It makes you ask what properties survive when the probabilistic components do unexpected things. It forces you to do resilience engineering. It's the difference between "make it work" and "define what working means," between "it seems right" and "here's what must always be true."

The second law of engineering, for lack of a better term: if you can't measure it in production, then you didn't really build it. This requires you to go beyond the demo culture that AI enables. It's really easy to generate prototypes now. AI makes demos almost free, but production is different.
Production means real users doing really weird things. It means scale effects. It means edge cases. It means model drift. Engineering insists on observability, telemetry, semantic forensics. You can't just ship code that worked once in a notebook. And we've always insisted on production as a bar for engineering. It is harder to hit now, and that's why I'm reiterating it in this second law.

The third law: if you can't explain why it failed, you haven't owned the system. Again, we've always emphasized accountability in engineering, but it is more important now. AI systems are entering regulated spaces, spaces where they have to be able to explain what happened. Human responsibility requires us to own the explanation, the accountability, and where the buck stops. If you can't explain what happened in your system to a very smart non-engineer, then you probably don't really understand your own system, and you probably didn't really engineer it.

And so these three laws are actually designed to fit together. They're designed to be three pieces of the engineering life cycle, the new engineering life cycle in the age of AI. Number one is specification: what we promise when we build a system, how we write contracts that stick regardless of probabilistic components. That's the invariant piece. Number two is verification, or measurement: how we prove that we delivered something in production. And number three is accountability, or explanation: how you take ownership of outcomes. Ultimately, the engineers that succeed are going to be engineers who think before they build, who validate in production, and who own the consequences. And you know what? That's not a new skill.
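Those three laws compress into surprisingly little code. This wrapper is my own sketch, not something from the talk: the assertion is the written invariant (law one), the telemetry record emitted on every request is the production measurement (law two), and the recorded failure reason is the raw material for explaining what went wrong (law three).

```python
import json
import time
import uuid

def serve(request: str, model_call) -> str:
    """Wrap one probabilistic call in the three laws (illustrative sketch)."""
    trace = {"trace_id": str(uuid.uuid4()), "ts": time.time(), "input_len": len(request)}
    try:
        answer = model_call(request)
        # Law 1: the invariant is written down, not implied.
        assert isinstance(answer, str) and answer.strip(), "invariant violated: empty reply"
        trace.update(status="ok", output_len=len(answer))
        return answer
    except Exception as exc:
        # Law 3: capture *why* it failed, so the failure can be explained later.
        trace.update(status="error", reason=repr(exc))
        raise
    finally:
        # Law 2: every request emits telemetry, success or failure.
        print(json.dumps(trace))

reply = serve("hello", lambda r: "hi there")
```

A real system would ship the trace to a telemetry backend rather than printing it, but the shape is the same: specify, measure, explain.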
That is part of why I created this video, circling all the way back to the beginning. This is a love letter to engineering. Because even though engineering is evolving in the age of AI, and I hope I've given you a sense of that here, engineering principles are remarkably constant. The need to design systems that work isn't changing. If you walk away with anything, I want you to walk away with the recognition that computing requires engineering. Engineering isn't going out of style. And if anything, the increased complexity of computing, the 100x, the 1000x complexity of computing that we get in the age of AI, is going to increase the need for skilled engineers. So there you go. That's why I think engineers aren't going anywhere. And that's why I think we need to appreciate them more.