AI vs Human Thought: Six Comparisons
Key Points
- The video sets up a six‑point comparison of human thinking versus large language models (LLMs), covering learning, processing, memory, reasoning, error handling, and embodiment.
- Human learning relies on neuroplasticity and Hebbian “neurons that fire together wire together,” allowing rapid, few‑shot acquisition and continuous weight updates, whereas LLMs learn via back‑propagation on massive text corpora, requiring millions of examples and resulting in largely static parameters after training.
- Information processing in the brain is massively parallel and distributed across specialized regions (e.g., visual cortex), while LLMs process input as tokenized vectors that pass sequentially through stacked attention layers to compute relevance scores.
- Because humans update their internal networks constantly, they can adapt on the fly, whereas LLMs only change behavior when explicitly retrained, highlighting a key difference in adaptability and learning dynamics.
- The contrast in how each system encodes, updates, and applies knowledge forms the foundation for later discussions on memory, reasoning, error patterns, and the role of embodiment in intelligence.
Sections
- AI vs Human Thought: Learning - The segment contrasts how humans and large language models acquire knowledge—human brains adjust via neuroplasticity and Hebbian learning from minimal examples, while AI models train on massive datasets through artificial neural network optimization.
- Brain vs Token‑Based LLM Processing - The speaker contrasts the brain’s massively parallel, concept‑driven computation and multiple memory systems with large language models’ token‑level, attention‑driven pattern‑completion architecture.
- LLM Reasoning vs Human Cognition - The passage contrasts human system‑1/2 reasoning with how large language models generate superficial token sequences, noting that LLMs mimic logical steps without true understanding and can stumble on simple tasks like counting letters.
- Embodiment vs AI Hallucination - The speaker argues that because humans learn from direct, embodied interactions with the world while LLMs are purely disembodied text processors, AI systems frequently hallucinate or lack common‑sense knowledge that humans acquire through sensory experience.
Full Transcript
Source: https://www.youtube.com/watch?v=-ovM0daP6bw (Duration: 00:11:59)
- 00:00:00 AI vs Human Thought: Learning
- 00:03:03 Brain vs Token‑Based LLM Processing
- 00:06:15 LLM Reasoning vs Human Cognition
- 00:09:21 Embodiment vs AI Hallucination
They write stories, answer questions, crack jokes, and they do it in flawless grammar,
but are artificial intelligence models actually thinking?
How do these AI systems really compare to the human mind?
Do they think the way that we do?
Well, both the human brain and LLMs process information through complex networks: biological neurons for us and artificial neurons for them,
and both can improve performance through some form of learning.
So, let's compare AI thinking and human thinking across six key areas.
We're going to take a look at learning, at processing, at memory.
We're also going to look at reasoning, at error, and also embodiment.
Now, let us get started with the first of those, which is learning.
Humans and LLMs both learn, but the mechanisms are pretty different.
So human learning occurs through a property known as neuroplasticity.
Now what that means is the brain's ability to reorganize its neural networks in response to experience.
So when a person learns a new skill or a new fact,
networks of neurons in relevant brain regions adjust their firing patterns and their synaptic weights based on how frequently and strongly the neurons fire together.
Essentially hebbian theory, neurons that fire together wire together.
And of particular relevance here is that the brain can learn from just a few examples.
Even a single exposure to a new concept can form a lasting memory.
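The Hebbian rule above can be sketched in a few lines: strengthen a connection in proportion to how strongly the two neurons fire together. This is a toy illustration, not from the video; the activity vectors and learning rate are invented.

```python
import numpy as np

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Neurons that fire together wire together: weight change is
    proportional to the product of pre- and post-synaptic activity."""
    return weights + learning_rate * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity (fired, silent, fired)
post = np.array([1.0, 0.0])       # postsynaptic activity
w = np.zeros((2, 3))              # connections start unwired
w = hebbian_update(w, pre, post)
# Only connections where both neurons fired are strengthened.
print(w)
```

Note that a single co-activation already changes the weights, which mirrors the point about one-shot learning: no large dataset is needed for the connection to form.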
Now, LLMs, on the other hand...
They learn through an entirely artificial training process which is called backpropagation.
Now, backpropagation is used during training,
when the model processes millions of text examples and adjusts its internal weights to minimize the difference between its predicted outputs
and the actual text in the training data.
And it requires huge numbers of training examples and many forward and backward passes to refine those predictions.
So while you and I might be able to pick up a new word after hearing it just once or twice,
an LLM may effectively see that word thousands of times in its training corpus before it can use it reliably in context.
And also, once trained, an AI model's parameters are generally pretty static.
The model weights don't change with usage, but humans, in contrast, we are dynamic.
We're constantly learning and updating and adjusting as new information comes in.
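The contrast with the Hebbian picture can be made concrete with a minimal gradient-descent sketch: adjust a weight, one small nudge per example, to shrink the prediction error. The data, learning rate, and single-weight "model" are all invented for illustration; real backpropagation applies the same idea across billions of weights and many layers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # many training examples
y = 3.0 * x                        # the "true" relationship to learn

w = 0.0                            # model weight, starts untrained
lr = 0.01                          # learning rate
for xi, yi in zip(x, y):
    pred = w * xi                  # forward pass: make a prediction
    grad = 2 * (pred - yi) * xi    # gradient of squared error w.r.t. w
    w -= lr * grad                 # backward pass: nudge the weight downhill

print(round(w, 2))                 # converges near 3.0 after ~1000 updates
```

Where the Hebbian toy learned from one exposure, this loop needs many examples and many small updates to land on the right weight, and once the loop ends, the weight is frozen.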
So that's learning.
What about information processing?
Well, processing in the human brain is massively parallel and distributed.
Billions of neurons and trillions of synapses are active concurrently, with different brain regions specialized for different functions, like the visual cortex for sight.
Now, LLMs, they operate on a very different set of principles.
They use sequences of discrete symbols
called tokens.
Now, when an LLM receives some input, like a user prompt, it encodes the text into a series of vector representations.
And these representations then pass through multiple layers where the model calculates attention scores,
essentially figuring out which tokens are relevant to predicting the next one.
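Those attention scores can be sketched with the standard scaled dot-product formulation: each token's query is compared against every token's key, and a softmax turns the scores into weights over the sequence. The tiny random matrices below stand in for learned representations; sizes are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # 4 tokens, 8-dimensional embeddings
Q = rng.normal(size=(seq_len, d))  # queries: what each token is looking for
K = rng.normal(size=(seq_len, d))  # keys: what each token offers
V = rng.normal(size=(seq_len, d))  # values: each token's content

scores = Q @ K.T / np.sqrt(d)      # relevance of every token to every other
weights = softmax(scores)          # each row sums to 1
output = weights @ V               # weighted mix of value vectors
print(weights.shape, output.shape)
```

Each row of `weights` says how much attention one token pays to every token in the sequence, which is what "figuring out which tokens are relevant" amounts to mechanically.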
But humans, on the other hand, we don't process tokens, we process concepts.
When you hear or read a sentence, you're not decoding it word by word, you're grasping chunks of meaning and linking them to prior knowledge and context.
Where LLMs work at the level of tokens, we humans kind of work at the level of ideas.
Now the brain's mode of operation is often described as being content addressable, which basically means a fragment of an idea can trigger related memories or predictions.
Whereas an LLM's operation is a bit more like next-step pattern completion,
which is quite different, because it's really just predicting what comes next based on patterns in its training data.
Alright, so that's processing.
Another big part of thinking, is memory.
Now humans have multiple memory systems.
We have sensory memory.
Now that's used for information received through our senses that only lasts a few seconds.
We have working memory.
That is used as kind of a temporary storage space for holding information.
It's short-term and it's pretty limited in capacity.
And we also have long-term memory as well.
Which has a much larger capacity and allows us to retain information potentially for years.
And the thing with human memory is it is associative.
So that means that memories are linked by meaning and context and emotion.
Every time I smell glass cleaner, I think of this video studio.
Now, LLMs by contrast, they have a much simpler memory architecture.
Their knowledge of the world is essentially everything absorbed during training, and that is encoded within the model's weights.
Now the AI equivalent of working memory, that is the model context window.
That's the sequence of tokens the model is currently considering as input
and this includes things like the user's prompt and prior dialog and maybe an attached document.
And once that context window fills up, the oldest information falls out and is entirely forgotten.
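That fill-up-and-forget behavior can be sketched as a fixed-size token buffer: append new tokens, and once over budget, drop the oldest. The window size and token strings here are invented for illustration; real context windows hold thousands of tokens.

```python
MAX_TOKENS = 8  # illustrative budget; real models allow far more

def add_to_context(context, new_tokens, max_tokens=MAX_TOKENS):
    """Append new tokens, truncating the oldest when over budget."""
    context = context + new_tokens
    return context[-max_tokens:]   # keep only the most recent tokens

context = []
context = add_to_context(context, ["You", "are", "a", "helpful", "assistant"])
context = add_to_context(context, ["What", "is", "Hebbian", "learning", "?"])
print(context)  # the earliest tokens have fallen out of the window
```

Unlike human long-term memory, nothing that falls off the front of this list is consolidated anywhere; it is simply gone.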
Okay, so far we've seen that the methods of learning, information processing, and memory all work quite differently when comparing humans and AI.
But what about the next factor, which is reasoning?
After all, we have entire cognitive models designed to emulate human step-by-step reasoning.
And anyone familiar with Daniel Kahneman's work has probably heard of the idea of system one thinking and system two thinking.
That's where system one represents fast, intuitive judgments, and system two represents slow, deliberative, logical reasoning.
And LLMs have been primarily trained on the outputs of system-two thinking: the well-structured, explicit information found in their training data.
Now reinforcement learning and chain-of-thought prompting can coax a model to produce some intermediate reasoning steps,
but there is an important difference here.
An LLM is not consciously performing reasoning the way that we are.
It's actually generating a plausible sequence of tokens that merely appears to be reasoning.
When it gets the answer right,
it's because the token sequence happens to align with the logical rules, not because the model inherently understands those rules.
And that's why LLMs can fail at tasks that seem quite trivial to humans,
like counting the number of R's in the word strawberry, a problem that tripped up many models for years.
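The strawberry example makes the token-level point concrete: ordinary code counts letters trivially, but a model that sees subword tokens never directly sees individual characters. The token split below is a hypothetical tokenization, shown only to illustrate the mismatch.

```python
word = "strawberry"
print(word.count("r"))          # character-level view: counting is trivial

tokens = ["straw", "berry"]     # a plausible, made-up subword split
# From the token level, individual letters are not directly visible;
# a model must have learned the answer from text, not computed it.
```

When a model does answer correctly, it is because the right token sequence was well represented in training, not because it performed the count.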
And that brings us on quite nicely to the next one, which is error.
Now, one of the most discussed flaws about LLMs is the fact that they are prone to hallucinate.
That means producing confident-sounding statements that are factually incorrect.
Now, in human terms...
If you're hallucinating, you're not confidently stating falsehoods, you are probably seeing visions or hearing voices that aren't really there.
So perhaps a better word for the human equivalent of an AI hallucination is actually a confabulation.
That would be a better way to describe it.
And confabulation is a term that is used in psychology to describe when a person unknowingly creates a false memory or an explanation.
Now, it's not a deliberate lie. The person genuinely believes the information is true,
but it isn't,
so they might earnestly recall details of a childhood event that never actually happened,
or they might offer an explanation for their behavior that isn't really accurate, but feels true to them.
And this is because the brain has a natural tendency to fill in missing details.
It's what Daniel Gilbert, the author of one of my favorite books, Stumbling on Happiness, calls the filling-in trick.
And it happens more often than you might realize.
So the next time you wonder why LLMs hallucinate so often, maybe you should also consider how many times you yourself might confabulate in a single day.
Now, arguably the most fundamental difference between human thinking and AI thinking all comes down to something called embodiment.
What do I mean by that?
Well, simply put, you are an embodied being.
You exist in the real world.
Your thoughts and behaviors are all deeply influenced by your interactions with the physical environment.
Your concept of wetness, for example, is tied to the tactile sensation you feel from water.
But poor AI models, they're not embodied at all.
They are disembodied.
They don't exist in the real world.
They exist as software on servers.
An LLM doesn't taste, it doesn't smell, it doesn't feel.
Its knowledge of the physical world is all secondhand.
It is learned from words written by humans who do have embodied experiences.
And that's one reason that LLMs often lack common sense knowledge.
I know that if I let go of this marker, it will fall to the table below.
Not because I've read about gravity, but because I live with it.
Now an LLM might also know the same fact if it's been stated explicitly in text enough times,
but it could just as easily produce a scenario where someone gets a marker and they throw it up, and it floats.
Perhaps the LLMs read a little bit too much science fiction.
So, LLMs are not anchored in physical reality the way that we are.
So while AI models and human minds can produce superficially similar outputs,
like essays or answers to questions or creative stories, the underlying nature of their cognition is really fundamentally quite different.
Both systems learn and process information by adjusting connections in complex networks and both can generalize patterns and predict upcoming information,
but they do so in very different ways.
Humans bring meaning and genuine comprehension,
AI brings speed and breadth of knowledge. And when these methods of thinking are combined in the right ways, that's where we can achieve the best of both worlds.