The Crumbling Knowledge Economy
Key Points
- The pace of human knowledge is accelerating dramatically, moving from a century‑long doubling before 1900 to potentially a year‑long or faster “knowledge hyperinflation” today, driven by AI‑enabled software cycles.
- This rapid expansion makes it practically impossible for anyone to keep up with all new information, leading to widespread uncertainty about which skills or credentials (MBA, AI degree, CS, liberal arts) actually matter.
- Traditional cultural markers of knowledge—college degrees, curricula, and the intrinsic value of learning—are losing relevance, turning education into a ritual focused on grades, networks, and job access rather than genuine understanding.
- As the system feels increasingly “rigged,” many see leveraging tools like ChatGPT to game grades and hiring as the rational response, highlighting a deep crisis of trust in the knowledge economy.
- The underlying driver of these shifts is AI, which is reshaping how knowledge is created, distributed, and monetized, forcing a fundamental reassessment of education and career pathways.
Sections
- Knowledge Hyperinflation and AI Acceleration (00:00:00) - The speaker critiques the broken knowledge economy by tracing Buckminster Fuller’s knowledge‑doubling curve, highlighting how AI has sped it up into a hyperinflation of information that overwhelms anyone trying to keep up.
- Rethinking Hiring in the AI Era (00:03:15) - The speaker uses Monster’s bankruptcy and AI‑generated résumés to argue that traditional résumés and credentials have lost meaning, calling for a fundamental redesign of how we assess job applicants.
Full Transcript
# The Crumbling Knowledge Economy

**Source:** [https://www.youtube.com/watch?v=W3cIo4xcrWo](https://www.youtube.com/watch?v=W3cIo4xcrWo)
**Duration:** 00:08:45

## Full Transcript
We need to talk about the knowledge
economy. It's fundamentally broken and I
want to take it apart and talk about
each of the pieces. We're going to talk
about college. We're going to talk about
job seeking. We're going to talk about
how knowledge accumulates in the economy
and underneath it all AI. So stick with
me. Number one, let's learn about the
knowledge doubling curve. The knowledge
doubling curve is actually something
that Buckminster Fuller came up with in
the 20th century. What he realized is
the pace at which humans are gaining
knowledge is getting faster. And so
until 1900, Buckminster Fuller observed
that it took about a century for human
knowledge to double. Post World War II,
the doubling rate had gotten four times
faster. It was up to 25 years. Sources
in the early 2000s suggested it had
gotten as fast as every 12 or 13 months.
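The arithmetic behind those doubling periods can be sketched directly (my own illustration, not something from the talk): a doubling period of T years means the total stock of knowledge multiplies by 2^(years/T).

```python
# Illustration (not from the talk): growth implied by a given doubling period.
# A doubling period of T years means knowledge multiplies by 2 ** (years / T).

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplier on total knowledge after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

# The periods cited: ~100 years pre-1900, ~25 years post-WWII, ~1 year by the 2000s.
for label, period in [("pre-1900", 100.0), ("post-WWII", 25.0), ("early 2000s", 1.0)]:
    print(f"{label}: knowledge x{growth_factor(10, period):.2f} in a decade")
```

At a one-year doubling period, a single decade multiplies the stock of knowledge by 2^10 = 1024, which is the scale of the "hyperinflation" the speaker is pointing at.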
Guess what, guys? It got faster. When
you have the ability of entire gigantic
pieces of software to be re-released and
re-released every 3 or 4 months because
of AI, you are seeing signs that we are
superlinear on this knowledge
curve. What I call it is
a knowledge hyperinflation economy. It's
a world where knowledge is becoming so
ubiquitous it is almost impossible to
keep up. You can't read it all. You
can't consume it all. And this looks
like the world we live in, doesn't it? I
get so many DMs and emails every day
saying, "Nate, how do you keep up with
all of it?" And the honest truth is, I
can't and you can't and nobody can. We
all just do our best. It's not just
keeping up with AI news, though. It is
also, hey, how can I get the skills I
need for this new economy? Where do I go
to learn? Do I go back and get my MBA?
Is there an MBA in AI that I can get? Do
I go to college? And do I try and like
major in computer science when computer
science curriculums haven't changed? Do
I try and go for liberal arts because
apparently, you know, Andrej Karpathy
says the future programming language is
English. So, I'm going to double down on
my Tolstoy. What's it going to be? This
uncertainty itself is a sign that the
cultural signifier of knowledge is
breaking down. What knowledge used to
mean in human society is no longer true.
And so all of the cultural rituals that
go with knowledge are losing their
value. That is why people feel like they
can question college. That is why
students feel like it's rational to hit
up ChatGPT and just get through college
with as good a grades as possible. It's
a ritual that's lost meaning. It's not
about learning for the sake of learning.
It's about getting the grades, getting
the network, getting into the job. This
is why Roy Lee and Cluely have hit such a
chord because they are speaking a truth
that a lot of people have held in their
hearts and not wanted to say out loud
that this feels like a rigged system and
the only rational thing to do in a
rigged system is to do whatever you can
to get ahead. Unfortunately,
the knowledge economy doesn't just stop
in a college situation. It doesn't just
stop with how we acquire skills. It
bleeds into the job market as well. And
so when we look around us, the job
application system is also broken.
Monster filed for bankruptcy. Do you
remember the Monster ads, the Super Bowl
ads? It was the whole thing. Anyway,
Monster was one of the first internet
companies and they made their bones on
saying that jobs would be easy to find
on the internet. And now, guess what?
Résumés aren't worth a lot. Monster is
not sticking around. What do you do?
What I take away from this is that we
are in a world where not only is the
accumulation of knowledge something that
has become devoid of meaning; so is the
demonstration of knowledge by job
applicants to employers. Applicants have
to demonstrate knowledge, and they can
only do it by résumé, because that's
what we've always done. And if they can
only do it by résumé, we have no way of
knowing if applicants are any good,
because the stochastic parrots can
simulate a résumé perfectly.
So what do we do? And right now the
truth is no one knows. Some of the big
companies are coming in and saying come
in person. Some of them are coming in
and saying do something on a whiteboard.
Write something where it's clearly not
an AI helping you. At least for now
until the AI is in your glasses. The AI
is not in my glasses. The point is that
we need answers for jobs that do not
depend on knowledge. We need answers for jobs
that do not depend on showing that you
have gone to college and know all the
things because those things are devoid
of meaning now. And so we have to
rethink from the ground up what makes a
human job interesting and worth doing
and meaningful. And it doesn't make it
easier to do that if all around us
everyone is shouting about AI taking
away jobs. It is much more productive to
sit there and actually ask yourself,
well, what do we know about what AI is
good at? And what do we know about what
AI is architecturally maybe not so great
at? And if we know what those things
are, can we start to get a sense of
where our skills might lie in the
future? And so I want to suggest to you
a short list of five things that I
think you can take with you and that I
think are unlikely to be disrupted by AI
given where jagged intelligence is
going. Number one is taste. This one gets
talked about a lot. I don't want to
pretend I'm the first person to say it,
but knowing what to build matters.
Knowing how to solve a problem matters.
Choosing the right thing from the
million options AI gives you matters.
Understanding what not to build matters.
Taste matters. Extreme agency is number
two. The ability to operate with minimal
direction. Maximize ownership. If AI is
good at execution, humans must get good
at goal setting. We must get good at
defining priorities, course correcting,
building systems. Agency is going to be
highly valued. Number three is learning
velocity. This one does not get talked
about as much. It's not about knowledge
accumulation. It's about speed of
adaptation. If the half-life of a
technical skill is being compressed
farther and farther, then the value is
going to accrue to those who can learn
faster than knowledge inflates, who can
surf the wave of obsolescence instead of
just drowning in it. That's what
matters. And I want to suggest to you
that actually LLMs are not super great
at learning right now. And the model
makers know it right now. No LLM really
fundamentally learns after it is
released. Now, are they working on that
problem? Yes, they're working on that
problem, but that's a lot to work on,
and it is fair to describe it as one of
the weak spots in the jagged
intelligence of AI right now. Number
four, intent horizon. The capacity to
maintain coherent goals. I've mentioned
this on this channel before. I think
it's a really big one. I don't know why
it's not getting called out more. I
don't care if your AI can go from 3
hours to 7 hours. It's nice. It's
helpful for tactical tasks, but it's not
a game-changer. We need very long-term
thinking, and that requires systems that
are not just instantiated, amnesiac,
when they appear, which is what Andrej
Karpathy described in his Y Combinator
talk last week: they have no previous
memory, they're just instantiated, and
here's the chat. That is a fundamental
problem. Interruptibility is number
five. What do we do when we get
interrupted? With most LLMs, that is
against best practice: you interrupt
the chat, that's a bad idea, don't do
that, try and keep the chat really
consistent. Come on, what
are you doing? Humans can be
interrupted. Humans can switch. Humans
understand that shift. And so what I
want to suggest is that those are
examples of the kinds of skills that the
jagged patterns of AI intelligence are
telling us large language model
architectures aren't intuitively great
at. They're not telling us that capital
isn't being allocated to fix those weak
spots. It is. It's just that they're
weak spots. And the pace of gain for
those weak spots in the intelligence
frontier may not be nearly as fast as
the pace of gain for areas where LLMs
are very very strong like pure
knowledge. Okay, I think you get the
idea. The choice that defines the next
decade is this. We are living in a
hyperinflating knowledge economy. Do we
keep trying desperately to outknow the
machines and accumulate credentials in a
hyperinflationary spiral or do we start
to get into the judgment economy? We
start to think about knowing when the
machines are wrong, knowing when they're
rigid, knowing when they're headed
toward catastrophe, knowing how to have
good judgment in a world that is full of
knowledge. That is where I want to leave
you. And I want to challenge you to tell
me what other skills is AI not showing
very well right now on the jagged
frontier. Cheers.