Learning Library


AI vs Human Thought: Six Comparisons

Key Points

  • The video sets up a six‑point comparison of human thinking versus large language models (LLMs), covering learning, processing, memory, reasoning, error handling, and embodiment.
  • Human learning relies on neuroplasticity and Hebbian “neurons that fire together wire together,” allowing rapid, few‑shot acquisition and continuous weight updates, whereas LLMs learn via back‑propagation on massive text corpora, requiring millions of examples and resulting in largely static parameters after training.
  • Information processing in the brain is massively parallel and distributed across specialized regions (e.g., visual cortex), while LLMs process input as tokenized vectors that pass sequentially through stacked attention layers to compute relevance scores.
  • Because humans update their internal networks constantly, they can adapt on the fly, whereas LLMs only change behavior when explicitly retrained, highlighting a key difference in adaptability and learning dynamics.
  • The contrast in how each system encodes, updates, and applies knowledge forms the foundation for later discussions on memory, reasoning, error patterns, and the role of embodiment in intelligence.
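The learning contrast in the bullets above can be sketched as two different weight-update rules. This is a toy illustration, not a model of either system: the vectors, targets, and learning rate are invented, and real backpropagation chains the gradient step through many layers rather than applying it to a single weight matrix.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian rule: strengthen connections between co-active neurons."""
    return w + lr * np.outer(post, pre)

def gradient_step(w, x, target, lr=0.1):
    """Gradient-descent step: move weights to reduce prediction error."""
    error = w @ x - target             # difference from the desired output
    return w - lr * np.outer(error, x)

pre = np.array([1.0, 0.0, 1.0])        # pre-synaptic activity
post = np.array([1.0, 0.0])            # post-synaptic activity
w = np.zeros((2, 3))

# Hebbian learning grows only the links between neurons that fired together;
# the gradient step instead chases an external training target.
w_hebb = hebbian_update(w, pre, post)
w_grad = gradient_step(w, pre, np.array([0.5, -0.5]))
```

Note the difference in what drives each update: the Hebbian rule only needs the local activity pattern, while the gradient step needs an error signal computed against a target, which is why it fits a supervised training corpus.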

Full Transcript

**Source:** [https://www.youtube.com/watch?v=-ovM0daP6bw](https://www.youtube.com/watch?v=-ovM0daP6bw)
**Duration:** 00:11:59

## Sections

- [00:00:00](https://www.youtube.com/watch?v=-ovM0daP6bw&t=0s) **AI vs Human Thought: Learning** - The segment contrasts how humans and large language models acquire knowledge—human brains adjust via neuroplasticity and Hebbian learning from minimal examples, while AI models train on massive datasets through artificial neural network optimization.
- [00:03:03](https://www.youtube.com/watch?v=-ovM0daP6bw&t=183s) **Brain vs Token‑Based LLM Processing** - The speaker contrasts the brain’s massively parallel, concept‑driven computation and multiple memory systems with large language models’ token‑level, attention‑driven pattern‑completion architecture.
- [00:06:15](https://www.youtube.com/watch?v=-ovM0daP6bw&t=375s) **LLM Reasoning vs Human Cognition** - The passage contrasts human system‑1/2 reasoning with how large language models generate superficial token sequences, noting that LLMs mimic logical steps without true understanding and can stumble on simple tasks like counting letters.
- [00:09:21](https://www.youtube.com/watch?v=-ovM0daP6bw&t=561s) **Embodiment vs AI Hallucination** - The speaker argues that because humans learn from direct, embodied interactions with the world while LLMs are purely disembodied text processors, AI systems frequently hallucinate or lack common‑sense knowledge that humans acquire through sensory experience.

## Full Transcript
0:00 They write stories, answer questions, crack jokes, and they do it in flawless grammar, 0:05 but are artificial intelligence models actually thinking? 0:10 How do these AI systems really compare to the human mind? 0:15 Do they think the way that we do? 0:17 Well, both the human brain and LLMs process information through complex networks: biological neurons for us and artificial neurons for them, 0:27 and both can improve performance through some form of learning.

0:31 So, let's compare AI thinking and human thinking across six key areas. 0:39 We're going to take a look at learning, at processing, at memory. 0:43 We're also going to look at reasoning, at error, and also embodiment. 0:49 Now, let us get started with the first of those, which is learning.

0:55 Humans and LLMs both learn, but the mechanisms are pretty different. 1:01 So human learning occurs through a property known as neuroplasticity. 1:08 Neuroplasticity. 1:10 Now what that means is the brain's ability to reorganize its neural networks in response to experience. 1:17 So when a person learns a new skill or a new fact, 1:21 networks of neurons in relevant brain regions adjust their firing patterns and their synaptic weights based on how frequently and strongly the neurons fire together. 1:32 Essentially Hebbian theory: neurons that fire together wire together. 1:37 And of particular relevance here is that the brain can learn from just a few examples. 1:44 Even a single exposure to a new concept can form a lasting memory.

1:49 Now, LLMs, on the other hand... 1:52 They learn through an entirely artificial training process which is called backpropagation. 2:01 Now, backpropagation is used during training, 2:05 when an LLM processes millions of text examples and adjusts its internal weights to minimize the difference between its predicted outputs 2:12 and the actual text in the training data.
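The training process just described can be sketched as a loop of forward and backward passes. This is a deliberately tiny stand-in, not a real LLM: a one-matrix "bigram" model on an invented repeating corpus. It does, however, show the shape of the process, and why the same pattern must be seen many times before the prediction becomes reliable.

```python
import numpy as np

# Invented toy corpus: the model must see the "a is followed by b" pattern
# many times before its prediction saturates.
corpus = "abab" * 50
vocab = sorted(set(corpus))
ix = {ch: i for i, ch in enumerate(vocab)}
V = len(vocab)

W = np.zeros((V, V))   # logits for the next token given the current token
lr = 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Many forward and backward passes over many examples:
for epoch in range(20):
    for cur, nxt in zip(corpus, corpus[1:]):
        probs = softmax(W[ix[cur]])   # forward pass: predict the next token
        grad = probs.copy()
        grad[ix[nxt]] -= 1.0          # backward pass: cross-entropy gradient
        W[ix[cur]] -= lr * grad       # weight update to shrink the error

# After training, 'a' is followed by 'b' with high probability:
print(softmax(W[ix["a"]])[ix["b"]])   # close to 1.0
```

The gradient here (predicted probabilities minus the observed one-hot target) is the standard cross-entropy gradient for a softmax output; in a real model, backpropagation propagates this same error signal through billions of weights instead of one small matrix.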
2:15 And it requires huge numbers of training examples and many forward and backward passes to refine those predictions. 2:22 So while you and I might be able to pick up a new word after hearing it just once or twice, 2:27 an LLM may effectively see that word thousands of times in its training corpus before it can use it reliably in context. 2:36 And also, once trained, an AI model's parameters are generally pretty static. 2:43 The model weights don't change with usage, but humans, in contrast, we are dynamic. 2:51 We're constantly learning and updating and adjusting as new information comes in.

2:58 So that's learning. 3:00 What about information processing? 3:04 Well, processing in the human brain is massively parallel and distributed. 3:11 Billions of neurons and trillions of synapses are active concurrently, with different brain regions specialized for different functions, like the visual cortex for sight. 3:21 Now, LLMs, they operate on a very different set of principles. 3:25 They use sequences of discrete symbols 3:29 called tokens. 3:32 Now, when an LLM receives some input, like a user prompt, it encodes the text into a series of vector representations. 3:39 And these representations then pass through multiple layers where the model calculates attention scores, 3:45 essentially figuring out which tokens are relevant to predicting the next one.

3:51 But humans, on the other hand, we don't process tokens, we process concepts. 3:59 When you hear or read a sentence, you're not decoding it word by word, you're grasping chunks of meaning and linking them to prior knowledge and context. 4:09 Where LLMs work at the level of tokens, we humans kind of work at the level of ideas. 4:15 Now the brain's mode of operation is often described as being content addressable, which basically means that a fragment of related content can trigger memories or predictions.
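The tokens-to-vectors-to-attention-scores pipeline can be sketched in a few lines. Everything here is a stand-in: random vectors instead of learned embeddings, and a single self-attention step with no learned query/key/value projections, no multiple heads, and no stacked layers.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # relevance of every token to every other
    weights = softmax(scores)       # each row sums to 1: a relevance distribution
    return weights @ V              # mix the value vectors by relevance

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))    # 4 tokens as 8-dimensional vectors
out = attention(tokens, tokens, tokens)
print(out.shape)                    # one updated vector per input token
```

The key point for the transcript's contrast: each output vector is a weighted blend of the others, where the weights are the attention scores, which is exactly the "which tokens are relevant to predicting the next one" computation described above.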
4:29 An LLM's operation, by contrast, is a bit more like next-step pattern completion, 4:35 which is quite different, because it's just really doing pattern completion based on its training data.

4:43 Alright, so that's processing. 4:45 Another big part of thinking is memory. 4:51 Now humans have multiple memory systems. 4:55 We have sensory memory. 4:58 Now that's used for information received through our senses, and it only lasts a few seconds. 5:04 We have working memory. 5:07 That is used as kind of a temporary storage space for holding information. 5:11 It's short-term and it's pretty limited in capacity. 5:15 And we also have long-term memory as well, 5:20 which has a much larger capacity and allows us to retain information potentially for years. 5:26 And the thing with human memory is it is associative. 5:31 So that means that memories are linked by meaning and context and emotion. 5:37 Every time I smell glass cleaner, I think of this video studio.

5:42 Now, LLMs by contrast, they have a much simpler memory architecture. 5:47 Their knowledge of the world is essentially everything absorbed 5:50 during training, and that is encoded within the model's weights. 5:57 Now the AI equivalent of working memory, that is the model's context window. 6:06 That's the sequence of tokens the model is currently considering as input, 6:10 and this includes things like the user's prompt and prior dialog and maybe an attached document. 6:15 And once that context window fills up, the earliest information falls out 6:20 and is entirely forgotten.

6:22 Okay, so far we've seen that methods of learning and information processing and memory all work quite differently when comparing humans and AI. 6:32 But what about the next factor, which is reasoning? 6:37 After all, we have entire cognitive models designed to emulate human step-by-step reasoning. 6:43 And anyone familiar with Daniel Kahneman's work has probably heard of the idea of system one thinking and system two thinking.
6:54 That's where system one represents fast, intuitive judgments and system two represents slow, deliberative, logical reasoning. 7:05 And LLMs have been primarily trained on the outputs of system-two thinking: well-structured, explicit information found in their training data. 7:16 Now reinforcement learning and chain-of-thought prompting can coax a model to produce some intermediate reasoning steps, 7:24 but there is an important difference here. 7:31 An LLM is not consciously performing reasoning the way that we are. 7:39 It's actually generating a plausible sequence of tokens that merely appears to be reasoning. 7:44 When it gets the answer right, 7:46 it's because the token sequence happens to align with the logical rules, not because the model inherently understands those rules. 7:53 And that's why LLMs can fail at tasks that seem quite trivial to humans, like... 7:59 counting the number of R's in the word strawberry, which is a problem that tripped up many models for years.

8:05 And that brings us on quite nicely to the next one, which is error. 8:10 Now, one of the most discussed flaws of LLMs is the fact that they are prone to hallucinate. 8:21 That means producing confident-sounding statements that are factually incorrect. 8:26 Now, in human terms... 8:29 if you're hallucinating, you're not confidently touting falsehoods, you are probably seeing visions or hearing voices that aren't really there. 8:37 So perhaps a better word for the human equivalent of an AI hallucination is actually a confabulation. 8:46 That would be a better way to describe it. 8:49 And confabulation is a term that is used in psychology to describe when a person unknowingly creates a false memory or an explanation.
8:57 Now it's not a deliberate lie. It's the person genuinely believing the information to be true, 9:04 but it isn't, 9:05 so they might earnestly recall details of a childhood event that never actually happened, 9:10 or they might offer an explanation for their behavior that isn't really accurate, but feels true to them. 9:16 And this is because the brain has a natural tendency to fill in missing details. 9:22 It's what Daniel Gilbert, the author of one of my favorite books, Stumbling on Happiness, calls the filling-in trick. 9:29 And it happens more often than you might realize. 9:32 So the next time you wonder why LLMs hallucinate so often, maybe you should also consider how many times you yourself might confabulate in a single day.

9:42 Now, arguably the most fundamental difference between human thinking and AI thinking all comes down to something called embodiment. 9:53 What do I mean by that? 9:55 Well, simply put, you are an embodied being. 10:00 You exist in the real world. 10:03 Your thoughts and behaviors are all deeply influenced by your interactions with the physical environment. 10:09 Your concept of wetness, for example, is tied to the tactile sensation you feel from water. 10:16 But poor AI models, they're not embodied at all. 10:20 They are disembodied. 10:23 They don't exist in the real world. 10:26 They exist as software 10:28 on servers. 10:30 An LLM, it doesn't taste, it doesn't smell, it doesn't feel. 10:33 Its knowledge of the physical world is all second-hand. 10:35 It is learned from words written by humans who do have embodied experiences. 10:40 And that's one reason that LLMs often lack common-sense knowledge. 10:45 I know that if I let go of this marker, it will fall to the table below, 10:50 not because I've read about gravity, but because I live with it.
10:54 Now an LLM might also know the same fact if it's been stated explicitly in text enough times, 11:01 but it could just as easily produce a scenario where someone takes a marker and throws it up, and it floats. 11:10 Perhaps the LLMs read a little bit too much science fiction. 11:15 So, LLMs are not anchored in physical reality the way that we are.

11:20 So while AI models and human minds can produce superficially similar outputs, 11:24 like essays or answers to questions or creative stories, the underlying nature of their cognition is really fundamentally quite different. 11:34 Both systems learn and process information by adjusting connections in complex networks, and both can generalize patterns and predict upcoming information, 11:42 but they do so in very different ways. 11:46 Humans bring meaning and genuine comprehension, 11:49 AI brings speed and breadth of knowledge, and when these methods of thinking are combined in the right ways, that's where we can achieve the best of both worlds.
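As a closing footnote to the letter-counting example from the reasoning section, the contrast is easy to make concrete. Counting characters is trivial and deterministic for ordinary code; the subword split shown here is an invented illustration (real tokenizers differ), but it shows why the letters are hidden from the model.

```python
# Exact counting at the character level: trivial and deterministic.
word = "strawberry"
r_count = word.count("r")        # 3

# A hypothetical subword split purely for illustration; real tokenizers vary.
# The model receives opaque chunk IDs, not the letters inside the chunks,
# so "how many r's?" must be answered from learned patterns, not by counting.
tokens = ["straw", "berry"]
units_seen_by_model = len(tokens)   # 2 units, not 10 letters
```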