Engineering Still Essential Amid AI Revolution
Key Points
- The speaker argues that AI actually heightens the importance of engineering because AI‑generated code can produce far‑reaching failures, requiring skilled engineers to oversee and safeguard systems.
- While AI can automate boilerplate and produce working code, creating robust, production‑ready engineered systems remains a distinct, human‑driven discipline.
- Engineers’ roles are shifting toward greater responsibility and partnership with AI, rather than being replaced; however, those who lack deep engineering understanding may be displaced.
- Talent variability among engineers has always existed, and the rise of AI merely amplifies the need for high‑impact engineers who can deliver value and navigate the complexities of AI‑augmented development.
Sections
- AI Elevates Engineering's Crucial Role - The speaker passionately defends engineering, asserting that AI amplifies its importance, warns junior engineers of misplaced fears, and stresses that AI‑generated code lacks the rigor of true engineered systems, raising higher failure risks.
- From Coding to Engineering - The speaker asserts that the digital divide is shifting from merely being able to code to possessing engineering expertise—skills such as system integration, security, and production deployment that can be acquired outside traditional computer‑science education and are what enable faster, more effective software development.
- From Probabilities to Contracts - The speaker argues that AI’s inherently probabilistic behavior necessitates engineers to impose deterministic contracts, probability budgets, and safeguards to control variance and emergent risks, especially when scaling to massive deployments.
- Emerging AI Engineering Disciplines - The speaker explains how engineering is evolving with AI by introducing new fields such as semantic engineering—debugging meaning flow and building semantic firewalls against injection attacks—and boundary engineering, which bridges probabilistic language models with deterministic system expectations.
- Engineering Empathy in AI Systems - The speaker argues that senior engineers must blend empathy, judgment under uncertainty, and sophisticated orchestration of complex, contract‑less AI components to prevent large‑scale failures in today’s high‑stakes, LLM‑driven landscape.
- Three Laws of AI Engineering - The speaker outlines three core principles—measurability in production, mandatory observability and forensic telemetry, and the ability to explain failures—to ensure accountability and reliable deployment of AI systems beyond demo prototypes.
Source & Timestamps
- **Source:** [https://www.youtube.com/watch?v=gXbTh70m_q0](https://www.youtube.com/watch?v=gXbTh70m_q0)
- **Duration:** 00:20:06
- [00:00:00](https://www.youtube.com/watch?v=gXbTh70m_q0&t=0s) AI Elevates Engineering's Crucial Role
- [00:03:57](https://www.youtube.com/watch?v=gXbTh70m_q0&t=237s) From Coding to Engineering
- [00:07:29](https://www.youtube.com/watch?v=gXbTh70m_q0&t=449s) From Probabilities to Contracts
- [00:10:36](https://www.youtube.com/watch?v=gXbTh70m_q0&t=636s) Emerging AI Engineering Disciplines
- [00:14:14](https://www.youtube.com/watch?v=gXbTh70m_q0&t=854s) Engineering Empathy in AI Systems
- [00:17:30](https://www.youtube.com/watch?v=gXbTh70m_q0&t=1050s) Three Laws of AI Engineering
Full Transcript
This is a love letter to engineering. I
firmly believe that AI makes engineering
more essential, not less. And I'm going
to tell you why in detail because I
think that people who aren't engineers
don't understand this. And I think
increasingly junior engineers are
afraid. Because they have not
experienced what it's like in detail to
work with senior engineers at scale. I
have. I've worked with senior engineers.
I work with principal engineers. I know
what it feels like to have a very strong
engineering mind or collection of minds
in the room reviewing a technical
specification. I want to tell you in
this video why I am highly convicted
that engineering isn't going anywhere as
a discipline. And in fact, I will go
farther. I will say engineering is more
important now than it was before the age
of AI. The fear is real, but it's
backwards. Yes, absolutely. Boilerplate
code can be generated automatically. I
don't doubt that. I see it all the time.
Yes, AI can write working code from
natural language. That's true. But
working code and engineered systems are
worlds apart. They're not close to the
same thing. In fact, most people are
finding that out as they vibe code
systems that may be ready to launch to
friends and family, but are not ready
for actual production. I would also
argue that one of the reasons why
engineering matters more now is
precisely because the AI can write code.
The blast radius of AI generated
failures is exponentially higher when
the AI can write the code for you. So
we're not being replaced. Engineers are
not in danger of being replaced because
engineers are being asked to take
positions of greater responsibility over
AI with AI in partnership with AI and
we'll get into that. But I want to lay
that out as a contention because I think
we just need to say it out loud. My
position is yes, there will be some
engineers who don't understand how
engineering works that will absolutely
lose their roles in the age of AI and
frankly many of them probably would have
lost their roles previously. One of the
interesting things when you work with a
lot of different engineers as I have is
that you realize how variable the talent
mix is across engineers. An engineer at
the same level can be worlds apart in
terms of actual capability and impact to
the business. And anyone who's worked
with engineers will tell you the same
thing. I had an intern when I worked at
Amazon who did more work and delivered
more value than senior engineers I knew
there. It just was the way it was. He
was motivated. He had a great
assignment. He was able to get something
into production and he did a great job.
Needless to say, he got an offer, right?
Like the point is that talent is
variable. Talent has always been
variable. And we shouldn't mistake the
fact that engineering is hard and talent
is variable from the impact that AI is
having on the engineering discipline.
There is absolutely an impact on the
engineering discipline from AI. And I'm
going to get into it in the rest of this
video, but it's not as simplistic as
saying AI equals bad for engineering,
which is what I see mostly. And I'm
tired of it. And so that's why I'm
basically making a video as a love
letter to engineering. So let's
bite off the first piece. I talked about
AI generated code. Let's talk about the
vibe coding piece of this, right? People
talk about vibe coding as replacing
engineering. Now anybody can speak an
intent into lovable.dev and they can
get working software back, or at least
that's the idea. AI tools, I would
argue, create a multiplier effect for
trained engineers more than they create
democratization of code. And
I know they democratize code. I know
people who have never coded before who
were able to do some coding. Now,
engineers can do even more. Engineers
can compress their expertise to carry
architectural intent much more easily
than non-engineers because they
understand the underlying technical
systems. Non-engineers, frankly, when
they get the chatbot and they get the
ability to vibe code, I've seen them
build stuff, but I've also seen them
very frequently get just enough rope to
hang themselves. Engineers understand
those limitations that coding brings.
They understand how to read code. They
understand how the system components
work together and they're able to go
faster as a result. If the non-engineers
get rope to hang themselves, the
engineers get rocket fuel. So the
digital divide is shifting very rapidly
from who can code to who can engineer.
And I want to pause there because I
actually don't believe that only people
from conventional computer science
backgrounds can engineer. If you
understand how software components go
together, that is increasingly the
essential skill. If you can engineer
systems so they're efficient, if you
understand how backend works with front
end, if you understand what attack
surfaces look like from a security
perspective, if you understand how to
move software into production, what that
requirement looks like, those are not
exclusively skills you learn in computer
science. And in fact, many engineers
will tell you they didn't learn them in
their computer science majors. They
learned classical computer science.
Engineering is something we have
frequently learned on the job. But it's
not something limited only to software
engineers. A lot of people can learn
engineering principles. And what I've
observed is that part of the reason why
I'm saying software engineers go faster
with vibe coding is because they have
already inculcated, absorbed into
themselves, these principles of
engineering. So those principles feel
native and that is really the
differentiator. It's not that they have
a computer science degree. It's not that
they know every single bit of JavaScript
or TypeScript or whatever it is. It's
that they know how to engineer. And
that's encouraging because it means if
you're trying to also learn how to build
efficiently in the age of AI, you can do
so faster by just learning some of the
skills of engineering. And I'm going to
lay out what I view as some of the new
core skills for engineering based on
lots of work with engineers through this
AI transition. So the thing I want to
leave you with as we move on from sort
of the AI coding piece and get into some
of the other parts of this
engineering domain that we're exploring
in this video, effective prompting is an
engineering skill. And I have
taught courses, right? I've taught
courses on effective prompting. And
increasingly I think that this is a
truth that we don't admit. Effective
prompting is an engineering skill that
requires some degree of engineering
understanding. And the more engineering
understanding you have, the more
effective your prompting is going to be.
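As a sketch of what "prompting is an engineering skill" can mean in practice, here is a hypothetical example (not from the talk; all names are illustrative): the prompt states an output contract explicitly, and the caller enforces that contract rather than trusting the model's formatting.

```python
import json

def build_extraction_prompt(ticket_text: str) -> str:
    """Treat the prompt like an interface spec: state the output
    schema, the allowed values, and the fallback behavior explicitly,
    instead of hoping the model guesses the format."""
    return (
        "Extract fields from the support ticket below.\n"
        "Respond with ONLY a JSON object with exactly these keys:\n"
        '  "category": one of ["billing", "bug", "feature", "other"]\n'
        '  "urgent": true or false\n'
        'If a field cannot be determined, use "other" / false.\n'
        f"Ticket:\n{ticket_text}\n"
    )

def validate_extraction(raw: str) -> dict:
    """Enforce the contract the prompt declared; reject anything else."""
    obj = json.loads(raw)
    assert set(obj) == {"category", "urgent"}, "unexpected keys"
    assert obj["category"] in {"billing", "bug", "feature", "other"}
    assert isinstance(obj["urgent"], bool)
    return obj
```

The engineering understanding shows up in the validator as much as in the prompt: the prompt is a specification, and the code holds the model to it.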
All right, let's go from just sort of
that sense of vibe coding and getting
rid of the fear of engineering into the
human responsibilities that I don't
think change and that I think still sit
with engineers. Now, some of these will
sit with engineers more at scaled
systems like at Amazon, at Google, etc.
And some will still work for smaller
teams as well. But I wanted to call out
the human responsibilities here because
I think that we spend a lot of time
talking about AI responsibilities and
not a lot of time talking about the
human component. And the human component
is I would argue getting more important
as AI multiplies the code we produce.
Number one, it is a human responsibility
to translate intent to correct
specification. So name the invariants,
name the hazards, name the success
criteria, translate human needs into
system edges and boundaries. Decide not
just whether we can do something but
should we do something. You are carrying
the weight of systems that if you're
working at a large company can affect
billions of people. And so translating
intent to specification implies a degree
of skin in the game that AI systems
don't have. The second one I want to
call out is humans are responsible for
writing guarantees on probabilistic
systems.
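One minimal sketch of what writing guarantees on a probabilistic system can look like, as a hypothetical wrapper (not from the talk): the probabilistic component is retried against a deterministic validator, and the boundary fails closed rather than letting a bad output escape downstream.

```python
def guarded_call(model_fn, validate, max_attempts=3):
    """Wrap a probabilistic component in a deterministic contract:
    keep sampling until the output passes validation, and fail
    closed (raise) rather than let a bad output escape downstream."""
    last_err = None
    for _ in range(max_attempts):
        out = model_fn()        # probabilistic: may differ each call
        try:
            validate(out)       # deterministic: pass or raise
            return out
        except ValueError as err:
            last_err = err      # likelihood missed; try again
    raise RuntimeError(
        f"contract not met after {max_attempts} attempts: {last_err}")
```

The point of the design is that the contract holder, not the model, decides what crosses the boundary.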
AI at scale behaves as a
probabilistic system. It's not always X
or Y; it is a distribution at scale. You
are turning likelihood into contracts if
you are engineering systems. You have to
be able to guarantee outcomes. You have
to be able to guarantee edges. You have
to be able to guarantee security to some
degree. Fundamentally, a lot of the
human job is taking these probabilistic
systems and writing contracts against
them that you can uphold. So you have to
be able to create boundaries that are
deterministic, not probabilistic. You
have to define probability budgets that
work at scale across pipelines. You have
to ensure that what must never happen
really doesn't ever happen. And I
mean, I've talked about scale a lot, but
this is true even at a small scale.
ChatGPT will not give you the same response
if you give it the same prompt. There
will be subtle differences. Your job as
an engineer, the role of engineering is
to ensure that that kind of variance is
not toxic to the system. It is also a
human responsibility to think at scale.
You have to understand emergent
behaviors when you scale up to a very
very large footprint. And this one I
think is specific to big companies. But
understanding emergent behaviors at 100
million boxes is its own skill set. It
is a human skill set that very few
humans have. Knowing when algorithms
become bottlenecks and where they
bottleneck is a human skill and it gets
at something that is essentially a risk
with AI. AI does much much better at
writing code than deleting code. And one
of the things that you see with really
good engineers at scale is they know
what they can remove. So is being able
to intuit how the system works at scale,
and to intuit where phase
transitions from stable to chaotic occur
in complicated systems. I've seen
principal engineers do that. It's a
remarkable skill. It's a human skill and
it means that they understand how to
effectively deal with a world where one
in a billion events are actually things
that happen on a regular basis because
of the trillions of events that they're
processing. And I don't want to terrify
you if you are an engineer who has not
worked at that scale. You'll notice that
this is just one of a very large array
of skills that I'm talking about in
engineering. My intent here is not to
convey that only those engineers that
work at 100 million boxes scale will
survive. Instead, I'm trying to call out
that that is one aspect of engineering
that remains very human even if AI
increasingly assists in helping us
understand these systems. The last human
skill that I want to call out is
economic engineering. So, you have to be
able to manage intelligence like a
utility. You have to be able to optimize
latency and quality and cost through
trade-offs. You have to be able to
design degraded experiences that
prioritize value even with additional
constraints. You have to be able to
understand where inefficiency matters
and how it impacts margins. You have to
engineer systems especially in an age
when tokens are intelligent and tokens
cost money. How do you deliver
intelligence economically, cost
effectively? That's a human skill and
that's a skill that is scale invariant.
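As a hedged illustration of economic engineering (not from the talk; the model names, prices, and quality scores below are invented), here is a sketch of routing a request to the cheapest model that meets a quality bar within a per-request cost budget, with a degraded fallback.

```python
# Hypothetical per-1K-token prices and quality scores; real numbers
# vary by provider and change frequently.
MODELS = {
    "small": {"price_per_1k": 0.0005, "quality": 0.70},
    "large": {"price_per_1k": 0.0150, "quality": 0.95},
}

def estimate_cost(model: str, tokens: int) -> float:
    return MODELS[model]["price_per_1k"] * tokens / 1000

def route(tokens: int, budget_usd: float, min_quality: float):
    """Pick the cheapest model that meets the quality bar and fits
    the per-request budget; degrade gracefully otherwise."""
    candidates = sorted(MODELS, key=lambda m: MODELS[m]["price_per_1k"])
    for m in candidates:
        if (MODELS[m]["quality"] >= min_quality
                and estimate_cost(m, tokens) <= budget_usd):
            return m
    # Degraded experience: cheapest model that fits the budget, if any.
    for m in candidates:
        if estimate_cost(m, tokens) <= budget_usd:
            return m
    return None  # nothing fits; caller must queue, defer, or refuse
```

The trade-off the speaker describes, latency versus quality versus cost, shows up here as an explicit policy rather than an accident of whichever model was wired in first.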
You should care about that even at a
small scale. Now, I've talked about just
a few; there are more human
responsibilities. But as we're touring
across the engineering domain that I
love so much, I also want to talk about
some of the new disciplines that are
emerging, because it's absolutely true
that engineering is changing, and I
don't want to pretend it's not. So I'm
going to suggest for you a few of the
ways that we see engineering starting to
shift in the age of AI, and then we'll
come back to the human skills and
revisit them after we look at that.
So, semantic engineering, that's a new
discipline, right? How do you debug
meaning flow, not just data flow? How do
you build semantic firewalls against
injection attacks? I saw a new injection
attack just today where someone can use
the name field in ChatGPT to prompt
inject something. Injection attacks come
in all shapes and sizes. People are able
to put injection attacks in white text
on a white background on Reddit boards
now, because the system can't
distinguish between your prompt and the
context it's
reading. It's up to engineers to figure
out how to address this stuff. It's up
to engineers to design
systems that will appropriately refuse
to act. And no, it is not just model
makers. Engineers installing these
systems have the accountability to act
as well. Boundary engineering. Engineers
have to architect the space between the
probabilistic world of the LLM and the
deterministic world that we expect with
software. They have to create interfaces
that feel consistent. And yes, I am
going to go out on a limb and say not
all interfaces are going to be
created by AI on the fly. I don't think
that's true. They have to be able to
maintain human AI boundaries in ways
that preserve human agency. Increasingly
part of the engineering responsibility
is figuring out how to map that boundary
between human and LLM collaboration in
software at scale. Memory and knowledge
engineering is another one. How do you
build institutional memory for AI system
failures? How do you version data,
prompts, even model weights, with rigor?
How do you manage context windows
economically? How do you build semantic
forensics? How do you debug a
system in production that you
could not fully debug beforehand? And that
gets into safety and assurance
engineering. How do you create live
evaluation cultures? How do you build
safety cases that have explicit maps
between hazards and mitigations and
evidence chains for audit? How do you
design for hostile inputs as an
assumption? How do you show what the
system thought when the system is
probabilistic? These are new skills for
a reason. We don't fully have the
answers here, but today's engineers are
being tasked with using that core
engineering skill set I talked about to
attack these kinds of problems in the
age of AI. So, let's revisit and ask
ourselves in that world with these kinds
of new engineering skills emerging, what
human skills really pop out. We talked
about some initially. We talked about
the importance of intent to
specification. We talked about
probabilistic systems and how you write
guarantees against them, about thinking
at scale, and about economic engineering.
There's some other human skills that I
want to call out here that are going to
be useful regardless of scale and
regardless of where you are across these
new engineering disciplines. I wanted to
give you a flavor of what's new. And
then we're going to come back here to
what stays the same. System intuition
stays the same. Good engineers
sense bottlenecks. They sense problems
to solve. They recognize emergent
failures. Empathy is an engineering
skill because empathy requires you to
bridge between precision that machines
require and the ambiguity that humans
deal with. And effectively that's what
we're all doing in the age of AI.
Empathy requires you to understand how
millions of users will misuse your API.
It requires understanding how to build
systems that account for human nature.
Judgment under uncertainty. That's
another engineering skill. It requires you
to make expensive decisions on very
incomplete information. It requires you
to know when good enough beats perfect,
which by the way is one of those
distinguishing characteristics of really
good senior engineers. It requires you
to choose appropriate trade-offs within
constraints to decide when randomness is
helpful versus when it's unhelpful.
Another human skill is the orchestration
of complexity. You have to be able to
coordinate tool chains to conduct
symphonies of intelligence involving
multiple LLMs that actually work and
deliver value. You have to be able to
manage distributed systems where the
components don't have pre-written
contracts increasingly. You have to
understand that semantic composability
doesn't follow traditional rules of
software engineering and you're going to
have to help create them. There's a lot
of complexity to orchestrate. Now, we've
always had to orchestrate complexity
from the engineering perspective. It
gets harder now. Why? Why am I writing
this? Why does all of this matter? The
truth is, if we didn't have engineers,
we would be in real trouble. The stakes
have never been higher. AI makes it so
trivial to ship failure at scale. As I
said, systems will now accept paragraphs
of instructions from the open internet.
The number of attack vectors is so high. Model
rot can corrupt systems without any
warning at all. The reality is that we
we have to recognize that engineering is
what enables us to take this wild world
where LLMs can speak language and where
they're probabilistic and where they can
bring intelligence to bear. Engineers
help wrestle that into an operational
stable production system. They help to
bring observability to those systems.
They help to debug those systems. They
help to figure out the energy and
compute footprint that's appropriate for
those systems. And engineers ultimately
are cultural architects. They help us to
design workflows that preserve human
judgment. They help us to build
interfaces that make AI reasoning
inspectable. They help us to prevent
automation bias and skill atrophy if
they're designing systems well.
Ultimately, they help us maintain
dignity because they can build systems
that have to admit ignorance. Engineers
have more responsibility now, not less.
And that's one of the takeaways that I
want you to sit with. I'm going to close
with what I would suggest are three new
laws of engineering in the age of AI.
And I chose these three for a reason.
Number one, if you can't write what is
invariant, then you have not engineered
the system. So this captures the
fundamental difference between vibe
coding and engineering. In the age of
AI, you have to be able to understand
that an LLM will give you likelihood, not
correctness. And so an invariant,
something that doesn't change, is what
separates engineering from gambling,
and a lot of vibe coders are gambling.
It makes you ask what properties
survive when the probabilistic
components do unexpected things. It
forces you to do resilience engineering.
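A hypothetical illustration (not from the talk; the domain and names are invented) of what "writing what is invariant" can look like in code: the invariants are executable, so the probabilistic components upstream can do unexpected things without the system silently violating its promises.

```python
from dataclasses import dataclass

@dataclass
class Order:
    paid: float
    currency: str

@dataclass
class Refund:
    amount: float
    currency: str

def check_refund_invariants(order: Order, refund: Refund) -> None:
    """Invariants: properties that must always hold, no matter what
    an LLM-proposed refund looks like. Violations fail loudly at the
    boundary instead of propagating downstream."""
    assert refund.amount >= 0, "refund can never be negative"
    assert refund.amount <= order.paid, "refund can never exceed payment"
    assert refund.currency == order.currency, "no cross-currency refunds"
```

Writing the invariants down as code is the "define what working means" step: the checks state what must always be true, independent of any one output.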
It's the difference between make it work
and define what working means between it
seems right and here's what must always
be true. That brings me to the second
law of engineering, for lack of a better
term: if you can't measure it in
production, then you didn't really build
it. This
requires you to go beyond the demo
culture that AI enables. It's really
easy to generate prototypes. Now AI
makes demos almost free, but production
is different. Production means real
users doing really weird things. It
means scale effects. It means edge
cases. It means model drift. Engineering
insists on observability, telemetry,
semantic forensics. You can't just ship
code that worked once in a notebook. And
we've always insisted on production as a
bar for engineering. It is harder to hit
now. And so that's why I'm reiterating
it in this second law of engineering.
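As a sketch of the observability the second law calls for (a hypothetical shape, not from the talk; field names are illustrative), each production model call can leave a forensic telemetry entry: enough to reconstruct later what the system saw, what it produced, and whether the contract check passed.

```python
import hashlib
import time
import uuid

def record_model_call(log, model, prompt, output, latency_ms, passed):
    """Append a forensic telemetry entry for one production model
    call. Content hashes let you correlate failures across calls
    without logging raw user content."""
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "latency_ms": latency_ms,
        "validator_passed": passed,
    }
    log.append(entry)
    return entry
```

In practice the log would be a durable sink rather than an in-memory list, but the principle is the same: if a failure can't be reconstructed from telemetry, it can't be explained, which is exactly what the third law demands.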
The third one, the third law, if you
can't explain why it failed, you haven't
owned the system. Again, we've
emphasized accountability with
engineering, but it is more important
now. AI systems are entering regulated
spaces, spaces where they have to be
able to explain what happened. Human
responsibility requires us to own the
explanation, the accountability, and
where the buck stops. If you can't
explain what happened in your system to
a very smart non-engineer, then you
probably don't really understand your
own system, and you probably didn't
really engineer it. And so these three
laws are actually designed to fit
together. They're designed to be three
pieces of the engineering life cycle,
the new engineering life cycle in the
age of AI. Number one is specification.
What we promise when we build a system,
how we write contracts that stick
regardless of probabilistic systems.
That's the invariant piece. Number two
is verification or measurement. How we
prove that we delivered something in
production. And number three is
accountability or explanations. How you
take ownership of outcomes. Ultimately,
the engineers that succeed are going to
be engineers who think before they
build, who validate in production, and
who own the consequences. And you know
what? That's not a new skill. That is
part of why I created this video,
circling all the way back to the
beginning. This is a love letter to
engineering. Because even though
engineering is evolving in the age of
AI, and I hope I've given you a sense of
that here, engineering principles are
remarkably constant. The need to design
systems that work isn't changing. If you
walk away with anything, I want you to
walk away with the recognition that
computing requires engineering.
Engineering isn't going out of style.
And if anything, the increased
complexity of computing, the 100x, the
1000x complexity of computing that
we get in the age of AI is going to
increase the need for skilled engineers.
So there you go. That's why I think
engineers aren't going anywhere. And
that's why I think we need to appreciate
them more.