AI's Time Compression and Intent
Key Points
- The AI revolution is “hyper‑compressing” time for humans, making us feel constantly rushed to keep up with new news, prompts, and agents.
- Unlike humans, whose perception of time is subjective and non‑linear, AI experiences time as a logical, clock‑driven metric that speeds up as compute power grows.
- For AI, the compression isn’t about shortening tasks but about fitting more work into the same unit of time, effectively expanding what can be done per second.
- A major current limitation is AI’s inability to maintain intent and contextual awareness over long periods, a capability humans still outperform despite our own forgetfulness.
- Experts project that by around 2026 AI agents may be able to sustain focus on a task for a week, but the rapid scaling of raw AI intelligence is outpacing the development of long‑term intent retention.
Sections
- AI’s Logical Compression of Time - The speaker explains how AI accelerates work by compressing time through a clock‑driven, rational perception, contrasting it with humans’ subjective, “hyper‑compressed” experience of rapid information flow.
- Beyond Conversation: The Physical Turing Test - The speaker notes that AI can already fool humans in conversational Turing tests, but the far tougher physical Turing test—requiring robots to move and interact in the world like people—is still distant, exposing the gap between sci‑fi visions and present technology.
- Compressing Decades of AI Training - The speaker explains how Nvidia’s virtual simulation environment and massive parallel compute can collapse ten years of AI model training into just a few hours, illustrating how scaling hardware expands the effective “time” available for AI work.
- AI Agent Devin: Scope and Limits - The speaker explains how to effectively manage the Devin engineering agent—assigning well-defined tasks, verifying tools, and reviewing results—while highlighting its strengths as an intern-level coder and its shortcomings for broader architectural decisions or extensive work due to token limits and occasional off-track behavior.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=3fQbxz--nFk](https://www.youtube.com/watch?v=3fQbxz--nFk) **Duration:** 00:12:41
- [00:00:00](https://www.youtube.com/watch?v=3fQbxz--nFk&t=0s) AI's Logical Compression of Time
- [00:03:36](https://www.youtube.com/watch?v=3fQbxz--nFk&t=216s) Beyond Conversation: The Physical Turing Test
- [00:06:41](https://www.youtube.com/watch?v=3fQbxz--nFk&t=401s) Compressing Decades of AI Training
- [00:09:48](https://www.youtube.com/watch?v=3fQbxz--nFk&t=588s) AI Agent Devin: Scope and Limits
I want to talk today about one of the
most subtle aspects but pervasive
aspects of this AI revolution. AI
compresses
time. And we are living through it,
right? We are living through this moment
where we feel like we're always trying
to catch up. DM after DM, email after
email I get is, "Nate, tell me how I can
keep up with this. The news drops on
Tuesday. The news drops on Wednesday.
The news drops on Wednesday night. I
can't keep up with everything. My
prompting has to evolve. I have to pick
up agents now. The list goes on. We feel
like we're living through
hyper-compressed time is what I'm getting
at. But the thing I want to talk about
today is not our experience of time.
It's the AI's experience of time because
I think that's actually something we
need to understand better. We experience
time the way our species has always
experienced time. Going forward, we have
a surprisingly non-rational perception
of time. The older you get, the more you
understand the idea that the years that
you've had as an adult feel shorter than
the years that you had as a kid. So, our
sense of time is wonky and tied to our
species. I could go on and on. You can
read up on it. AI doesn't have any of
that. AI has a logical, clock-driven perception of time. And because of compute advances, AI's ability to do
things in time continues to accelerate.
And so for AI, time is compressing as
well. But it doesn't feel the same way.
It's compressing the work you can do in
a unit of time, not compressing the time
it takes to do work. I'm going to say
that one more time. For humans, it feels
like time is getting short because there
is so much work to do. For AI, it feels
like work is getting compressed in
because there's so much more compute and
time is therefore expanding. And so even
though I'm not here to talk about the
interior perception of time for AI, I'll
leave that to the philosophy students,
it is clear that we have a very
different understanding of what can be
done in a given unit of time and it has
real-world implications for us. As an
example, we are not very far along on
the idea of AI agents maintaining intent
over time. It's very difficult to do
this. The projection right now is that
by 2026, maybe we will get to a point
where an AI agent can spend a week on a
task, which is a big deal. I got to tell
you, people at my work spend months on
tasks. We have to maintain strategic
alignment over, you know, a year's time.
We have to look multiple years into the
future. We need to have a much larger
sense of time. And when we do tasks, we
need to retain important context for
longer stretches of time, too. Now, we
have all been there. The Jira tickets
do drop out of our brains. We do forget
context. We are forgetful. And so, it's
important not to judge the AI too hard
if it also forgets. But the point is
that humans generally speaking are
better at maintaining intent over time
than AI agents right now. And critically
AI intelligence scaling is happening
faster than intent-over-time scaling. So intelligence is going like this,
right? We all talk about it all the
time. It's going vertical. Great. But
the ability to scale intent over time is
moving like this. Not moving very fast.
And the only reason those slow advances
are super meaningful is because compute
advances and intelligence advances
continue to enable the AI to do more
with that time. And I want to bring up a little case study that I ran across from a Sequoia talk given by Jim Fan, who works at NVIDIA. Now, Jim does robotics at NVIDIA, and he proposes something he calls the physical Turing test. So, if you're familiar with the Turing test, the idea is you don't know
if you're talking to a human or a robot.
And I'm vastly simplifying. We basically
have AIs that pass that now. Like, you
can literally run a classical Turing test and it will pass. And we've mostly
not noticed. And that's really funny
because all of the science fiction books
thought that when a robot could pass the Turing test, the whole world would
change. I guess you could say we're
living through an AI revolution and it's
changing. I don't have my flying cars
yet. I can't look out of the window of
my space
castle. That being said, the physical Turing test is a much harder bar, and we
are not anywhere close to passing it.
And I suspect that science fiction has
sort of combined the conversational
Turing test with the physical Turing
test in most writing. And that's part of
the disconnect right now between the
future the writers of the 60s and 70s
and 80s envisioned and what we actually
have because the physical Turing test
requires a robot to be able to
physically navigate a space like a
human. And I give credit to Jim: we didn't really have words for it until he started to put it together. So he talks
about the idea of like you're at a
hackathon, you are cleaning up the next morning, and it's a complete mess. Your
living room is a disaster. There's a
pizza box here, there's a beer can
there, you were on the Ballmer curve, you
have disorganized couch cushions, your
video game was up cuz you were playing
video games. Everything's a mess.
And what he wants to challenge us to do
is imagine a world where you go off to
work. You come back at 5, everything is
put back together. It's neat. The lamp
is standing up. Now, you cannot tell if
it was a robot that did it or a person
that did it because you live in a world
where robots can do all of that
complicated physical work without
issues.
And if you've seen any footage of robots
lately, you know that we are not close.
We are not close to a world where a
robot can go over to a beer can, gently
pick it up, put it in recycling,
navigate the whole house to do so, dodge
the dog, avoid the tennis ball that's on
the bottom of the stairs, uh, and then
come back and set the pillows out and
all of that. And do that for the entire
room and clean it up entirely,
autonomously, without direction.
Now, intelligence is scaling really,
really, really, really fast, but that
kind of ability to be in physical space
isn't. And that's sort of Jim's point.
And this is where it comes back to time
and AI. Jim suggests that part of what's
interesting about the physical Turing
test is we can use virtual environments
like the one Nvidia launched this year
to compress a lot of training work for
AI into a small amount of actual clock
time. And so he talks in his conversation at Sequoia about the fact that, at NVIDIA, they were able to take 10 years' worth of training in ordinary time and compress it down to two hours, 10 years to two hours, in a special simulated environment.
Because in the simulated environment,
you could parallelize, you could run
stuff super fast, and the chips and the
processors kept up, and the LLMs kept
up, and there was no reason to go as
slow as you would go in the real world
with training. And so, they were able to
take a 10-year training task and
compress it to two
hours. That is the kind of thing that
gets my wheels turning when I think
about how AI with compute scaling is
going to enable new kinds of work in
shorter time spans. And so go back to this idea: for us, work is growing but time isn't changing, while for the AI, time may seem to be expanding, because compute and capacity are expanding and it can do so much more inside a given unit of time. Given a new Blackwell chip, an LLM can do more in an hour than it could before with an H100. Right? Just getting really physical.
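The arithmetic behind that kind of compression fits in a back-of-envelope calculation. Only the 10-years-to-roughly-two-hours figure comes from the talk; the parallelism and per-simulation speed numbers below are invented for illustration, not NVIDIA's actual configuration.

```python
# Back-of-envelope sketch of simulated-time compression.
# Only the 10-years-to-~2-hours target comes from the talk; the
# parallelism and speed numbers below are illustrative assumptions.

HOURS_PER_YEAR = 365 * 24  # 8,760

experience_needed_years = 10
experience_needed_hours = experience_needed_years * HOURS_PER_YEAR  # 87,600

parallel_envs = 8_192        # assumed: simulated robots running at once
sim_speed_multiplier = 5.5   # assumed: each sim runs 5.5x real time

# Every wall-clock hour yields this many hours of training experience.
effective_hours_per_wall_hour = parallel_envs * sim_speed_multiplier

wall_clock_hours = experience_needed_hours / effective_hours_per_wall_hour
print(f"{experience_needed_hours:,} hours of experience "
      f"in ~{wall_clock_hours:.1f} wall-clock hours")
```

The exact split between parallelism and per-simulation speed doesn't matter; the compression factor is the product of the two, roughly 44,000x in this sketch.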
So when you think about it that
way, if you have an agent working, I'm
back to software now. We're
leaving robotics behind. We'll do the
robotics conversation another time. If
you have an
agent and you are asking that agent at
work to do a task that normally takes
you three months, and the agent does very high-quality work but has a short ability to maintain intent over time, the intent over time may matter less
because the agent has all the tools it
needs, tremendous compute and throughput
and works very very fast. And so it may
be able to get done in four hours what
would take you three
months. Now I'm not here to say that
means the end of work because I believe
in Polanyi's paradox, which is that there is more to work than we can say. I don't think work can be efficiently tokenized. There's a lot more to work than our
ability to describe a task. I can leave
all of that aside. You can go find that
on my Substack if you want to read about
it. I've written about it pretty
extensively. But I do think that with
the right scopes, with the right autonomy of decision-making around a
particular problem scope, with the right
tools, I can see a world where agents
become incredibly intelligent interns.
It's like you can manage an agent and
you can give it a tough task. You have
to give it the scope. You have to
confirm the tooling is right. You have
to validate the results. But it does an
incredible amount of work in a short
amount of
time. We will see if that's true. Right
now, probably the closest parallel to
that is Devin. Devin's an engineering agent, and really Devin acts like an
engineering intern. If you are a senior
engineer and you know what you're doing
and you could code your way out of a
corner, Devin's great if you want someone who will pick up your P3s, your SE3s and knock those out. Devin's great
if you want someone to tackle a specific
defined task and go after it. If you
want someone to break out and go after
um a few pieces of work in a particular
area you want to code in today and get
you some pull requests that you can
review at the end of the day. If you
want someone who will be your founding
engineer, which some people have tried
to use Devin for, it is a bad idea. Devin is not ready for that level of responsibility; Devin cannot decide or define system architectures. And people overusing it
that way get frustrated. People also get
frustrated to some extent because Devin isn't perfect. Devin will sometimes stray from the point. Devin will sometimes not be able to complete the work because Devin runs out of tokens.
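The management discipline described above (define the task, confirm the tooling, review the results, keep architecture with a human) can be sketched as a pre-flight checklist. The names here are my own illustration; this is not Devin's API or any real integration.

```python
# Hypothetical pre-flight checklist for delegating work to a coding
# agent, based on the advice in the talk: scope the task, confirm the
# tooling, and review the results. The names here are illustrative;
# this is not Devin's API or any real integration.

from dataclasses import dataclass

@dataclass
class AgentTask:
    description: str         # a single, well-defined task
    scoped: bool             # is the problem scope narrow and explicit?
    tooling_confirmed: bool  # does the agent have the tools it needs?
    review_planned: bool     # will a human validate the results?
    architecture_work: bool  # system design should stay with a human

def blockers(task: AgentTask) -> list[str]:
    """Return the reasons NOT to hand this task to an agent yet."""
    reasons = []
    if not task.scoped:
        reasons.append("narrow the scope first")
    if not task.tooling_confirmed:
        reasons.append("confirm the agent's tooling")
    if not task.review_planned:
        reasons.append("plan a human review of the output")
    if task.architecture_work:
        reasons.append("keep architectural decisions with a human")
    return reasons

ticket = AgentTask("fix the P3 bug in the date parser",
                   scoped=True, tooling_confirmed=True,
                   review_planned=True, architecture_work=False)
print(blockers(ticket))  # an empty list means it is safe to delegate
```

The point of the shape: delegation is gated on scope, tooling, and review, not on the agent's raw intelligence.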
Other issues come up that I suspect will
come up with agents. In fact, I see Devin as sort of a first-approximation preview of what it's going
to look like to work with agents in the
future. I think we're going to have running-out-of-tokens issues. I think we're going to have "did you give the agent a clear enough task" issues. I think we're going to have "did you give the agent too much responsibility" issues. I think we're going to have "did you scope what the agent was working on to the degree that fits its intelligence" issues. And we're all going to have to be figuring that out, not just engineers, within the next year or so. But the point is this: if you think
about that time piece again as we circle
back, the more compute and the smarter
these models get, the more they can get
done with that time. And so it may be
that even if that intent over time is
only a week by the end of next year, it
is enough time that real meaningful
project work can get done if we define
that scope correctly. And that's just
kind of weird to me because as a human,
I don't think about it that way. I don't
think about it as I can get all of this
work done next year because I'm going to
get that much smarter. I mean, sure, I'm
going to try and read. I'm going to try
and learn from AI. We all try and get
smarter, but I have no illusions. I
can't upgrade the CPU up here. The
machine can. And so I think one of the
most interesting things we're not
talking about is that intelligence gains
are related to the way we use agent
intent over time. And we should probably
talk about that more.