
When Smarter Bots Aren’t Enough

Key Points

  • The rapid advances in AI are driven mainly by ever‑larger pre‑training datasets and improved inference‑time reasoning, introduced with the o1 model in late 2024, but these gains are still largely narrow and domain‑specific.
  • Despite massive data consumption and billions of user interactions, the finite supply of high‑quality data and the learning value of usage tokens are prompting companies like Anthropic to restrict first‑party model access.
  • Even if pre‑training and inference challenges were solved, the current approach still falls short of handling tasks that require long‑term intent, broad contextual awareness, and the ability to track multiple, simultaneously changing variables.
  • CEOs’ promises of fully autonomous agents capable of creating full‑length movies or replacing professional colleagues hinge on achieving generalized, not just incremental, intelligence—a milestone that remains elusive.

Full Transcript

# When Smarter Bots Aren’t Enough

**Source:** [https://www.youtube.com/watch?v=5PasrHSrato](https://www.youtube.com/watch?v=5PasrHSrato)
**Duration:** 00:09:06

## Sections

- [00:00:00](https://www.youtube.com/watch?v=5PasrHSrato&t=0s) **Smarter Bots, Still Narrow Capabilities** - The speaker questions whether continual improvements in large‑scale pre‑training and inference can fulfill ambitious promises of fully autonomous creative and professional agents, arguing that current AI progress remains narrowly focused and constrained by limited high‑quality data.
- [00:04:25](https://www.youtube.com/watch?v=5PasrHSrato&t=265s) **AI Limits in Multi‑Reward Marketing** - The speaker argues that even highly intelligent AI cannot handle marketing tasks that involve evolving, partial‑information environments with multiple, conflicting long‑term rewards, which require human intuition and fuzzy reasoning to balance outcomes.
- [00:08:14](https://www.youtube.com/watch?v=5PasrHSrato&t=494s) **Beyond Pre‑training: Jagged AGI Future** - The speaker urges consideration of missing technical breakthroughs beyond data and inference, argues for a “jagged” scenario where some AGI advances arrive while others lag, and asks what obstacles still block true artificial general intelligence.

## Full Transcript
0:00 What if the bots kept getting smarter, 0:03 the AI kept getting smarter, but it 0:06 didn't matter anymore? That is the 0:08 question that has been keeping me up at 0:10 night. And I want to talk about it. 0:12 There's lots of ways we can talk about 0:13 this. The simplest way is to say this. 0:16 The bots have been getting smarter 0:18 because of two key things. One is very 0:21 large pre-training data sets, and the 0:23 other is smart inferencing, which was 0:26 introduced in late 2024 with the o1 0:29 model. Now lots of people have it.

So 0:32 here's the question. If we just have 0:35 pre-training and we just have inference 0:37 and reasoning, is that enough to make 0:40 all of these big promises that the CEOs 0:42 are making come true? Is it enough 0:46 for us to make full-length movies? Is it 0:48 enough for us to have agents at work 0:51 that are just like our professional 0:52 colleagues and can do all of our work 0:54 for 0:55 us?

And increasingly, as we see the 1:00 successive generations of these bots, we 1:02 see o3 come out, we see Gemini 2.5 Pro 1:05 come out, what I see is that these bots 1:07 are getting smarter, but they're getting 1:09 smarter in what I would call narrow 1:11 ways. They're smarter at specific 1:14 things, but they're not generally smart 1:16 in ways that will enable us to do this 1:19 generalized work if we just keep getting 1:21 better at inference or pre-training 1:23 data, which by the way has its own 1:25 questions because, of course, there's 1:27 not infinite data in the world. We've 1:29 used a lot of it. The remainder may not 1:31 be as high a quality. There are 1:33 questions about it. Now, you can say, 1:35 "ChatGPT has access to a lot of data 1:38 from usage now. They have almost a 1:39 billion users. They can use that to 1:41 refine their models."
That, by the way, is 1:44 why Anthropic has decided to cut model 1:47 access to Windsurf as much as they can, 1:49 because Windsurf was purchased by OpenAI, 1:52 and Anthropic has basically said: those 1:54 tokens could now be used for learning by 1:56 OpenAI. We don't want that. 1:59 You will have to get third-party access to your 2:00 Claude models. We're not providing 2:02 first-party access anymore, because those 2:05 learning tokens are like gold right now.

2:08 Okay, fine. So let's say, just for a 2:11 second, we solve any questions around 2:12 pre-training. We have the data, which may 2:14 be true; we may be able to scale for 2:16 a bit longer anyway. 2:18 And let's also say that we have reasoning figured 2:21 out and we can get better and better at 2:22 inference. Even then, is that really 2:25 enough for doing tasks that require 2:29 months of intent? Is it really enough 2:32 for what I would call wide and changing 2:34 context understanding, where you are 2:36 aware of a very broad work context or 2:38 personal context, and two or three 2:41 elements are changing at once during a 2:42 day or a week, and you can track all of 2:44 that context change and the fuzzy-logic 2:45 implications? For example, you are 2:48 trying to hit a sales target and you are 2:50 aware of the three or four things in 2:52 product and in finance and in customer 2:54 success that are all affecting how your 2:56 deals are coming together, and you are 3:00 able to process all of that and then 3:02 package it up in a way that is useful 3:03 in conversations with 3:05 prospects.

Humans are really good at 3:07 that 3:08 stuff. AI, even really smart AI, is not 3:12 as good as it needs to be. And part of 3:15 why is that when you have widely 3:17 changing contexts like that, you need an 3:19 AI that learns from experiences 3:23 that it has in the field, after 3:25 deployment, not just from experiences 3:27 that it has in 3:29 pre-training.
ChatGPT is making a nod 3:31 at that with memory, but memory is not 3:34 close to being at a point where you 3:36 could say it adaptively learns on the 3:38 fly from these very wide context changes 3:40 and then can track them at high fidelity. It 3:43 just isn't. And I made a 3:46 video a few months ago, one that I think 3:48 was vaguely popular, that basically said 3:49 we have a memory problem. I would go 3:52 broader than that. We have a context 3:54 awareness and adaptability 3:56 problem.

We also have a problem with 3:58 intent over time, where these things have 4:01 to have goals. We also have a major 4:05 problem with how we handle tacit 4:08 knowledge in the workplace, which I've 4:09 talked about extensively in other 4:11 videos. How do you handle knowledge that 4:13 is never spoken, because there are social 4:15 consequences for humans in speaking it? 4:17 The AI never sees it. It's 4:20 invisible. What do you do with that? 4:23 Even if you have infinitely smart AI, 4:25 that won't help.

How do you handle tasks 4:28 that evolve at the edges to be 4:31 successful? A great example of this 4:34 is if you are trying to do marketing: 4:38 you have multiple different rewards 4:40 you're optimizing for. 4:42 It's not just one clear reward like you have with code, 4:44 where it runs or it doesn't. 4:46 The relationship between those different 4:47 rewards in the funnel is unclear and 4:50 varies dramatically by 4:52 business. Once you optimize for one of 4:54 them, you risk de-optimizing for another 4:56 one, and you have to keep your eye on the 4:58 end result for the business, like the 5:00 long-term customer value 5:01 you're driving, which you don't see for 5:04 months, if not years. Some deal cycles 5:06 take 5:08 years. And so marketers have to adapt to 5:12 this extremely changeable, 5:14 partial-information environment, and also to the 5:16 novelty that customers are looking for 5:18 and the new tactics that 5:20 emerge. AI is not good at that.
That's 5:22 an adaptable-context problem. It 5:25 is also a problem of changing the task at the edges 5:27 and tweaking it in ways that enable you to 5:30 account for multiple partial rewards, 5:33 rewards that the human is accounting for with 5:35 some kind of fuzzy logic, 5:37 with what we call intuition. We don't 5:40 really have intuition for 5:42 AI. The AI may feel intuitive sometimes, 5:45 but at the end of the day, it is coming 5:48 to a conclusion based on the 5:52 reinforcement learning it's had, based 5:54 on its previous interactions with you, 5:56 if it has some kind of memory, and based 5:59 on inference. That's what you've got.

6:02 And so I do think that we are 6:07 underestimating the number of technical 6:10 breakthroughs that we would have to have 6:12 to really get to a place where these 6:15 visions come true. And the critical 6:17 thing is, we're not talking about 6:19 it, because we're only talking, for the 6:21 most part, about how amazing 6:23 AI is at inference and at 6:25 pre-training, which is great. It 6:27 does magical things; the rocks 6:30 have begun to think. I'm not 6:31 complaining.

6:33 But even if we had all that, and even if 6:37 we had smarter and smarter AI, if we 6:39 don't solve the problems I'm describing 6:40 (adaptable context, intent over 6:42 time, the ability to make 6:44 optimizations across multiple partial 6:47 rewards), we're going to be in trouble 6:49 from the perspective of wanting all of 6:52 this AI magic to come true. Now, I am 6:55 not sure that most of us actually want 6:57 that future, so I am under no 6:59 illusions, but that is the goal that a 7:01 hundred billion dollars in capital is 7:02 chasing right now. And from a 7:04 risk-management perspective, they all 7:06 think the others might have a 7:07 breakthrough, so they've got to keep 7:09 chasing it, because if someone gets that 7:11 breakthrough, there's going to be lots and 7:13 lots of money on the table.
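The multi-reward funnel problem the speaker describes can be made concrete with a toy sketch. Everything below is a hypothetical illustration, not from the video: the numbers for how broad ad spend trades lead volume against lead quality are invented, as is the single delayed objective (closed deals) that a one-reward optimizer misses.

```python
# Hypothetical model of two conflicting funnel rewards (illustrative numbers only).
# Assumption: spending more on broad ads raises lead volume but dilutes lead quality.

def lead_volume(spend: int) -> float:
    """Leads per month: grows linearly with broad-ad spend."""
    return 100 + 40 * spend

def lead_quality(spend: int) -> float:
    """Lead-to-deal conversion rate: falls as targeting broadens."""
    return max(0.0, 0.30 - 0.05 * spend)

def closed_deals(spend: int) -> float:
    """The delayed, end-result reward: volume times quality."""
    return lead_volume(spend) * lead_quality(spend)

spend_levels = range(7)  # 0..6 units of broad-ad spend

# Optimizing the one visible reward (volume) pushes spend to the maximum...
best_for_volume = max(spend_levels, key=lead_volume)
# ...but the combined long-term objective peaks in the middle.
best_for_deals = max(spend_levels, key=closed_deals)

print(best_for_volume, best_for_deals)  # 6 2
```

The point of the sketch is the speaker's: each partial reward has a clear gradient, yet maximizing any one of them de-optimizes the end result, and in real funnels the trade-off curves are unknown and shifting, which is where human fuzzy judgment currently fills the gap.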
And so 7:14 that's why all of that money has 7:16 coalesced and is chasing that goal. 7:19 But no one is reporting on or talking 7:21 about the other breakthroughs, the 7:22 kinds of breakthroughs we need to 7:25 actually get to this vision of a fully 7:28 functional AI 7:29 colleague. And if we don't talk about 7:31 it, one, we're not putting 7:34 sunlight on AI companies and model 7:37 makers and what they're working on, and 7:38 I think we should be; it's a very 7:39 important initiative, and we should be 7:41 talking about what they do. And two, 7:43 if we can't name the stuff, we can't 7:47 describe what we want or do not want as 7:51 users, as builders, as 7:55 consumers, because we can't understand 7:58 it.

And so I actually would love us to 8:00 be able to have a conversation where we 8:02 are able to say: these are the things 8:04 that are standing in the way of this 8:06 broader vision. This is what I would be 8:08 interested in. This is what I would not 8:09 be interested in. This is where I want 8:11 to go. This is where I don't want to go. 8:13 This is the kind of product I want to 8:14 build. This is the kind I don't want to 8:16 build. Having a specific preference 8:18 makes a big difference.

8:20 And I am begging us to think beyond 8:25 pre-training data and inference and have 8:27 a conversation about the larger gaps, 8:29 because I think there is a real chance 8:31 that we will live in a world where we 8:34 get really smart models at inference and 8:36 pre-training, but those other technical 8:38 breakthroughs are not inevitable, and 8:39 maybe we don't get to them for a while: 8:42 for a decade, for two decades, for three 8:44 decades, or ever. We don't know. They're not 8:47 inevitable. And so I want us to think 8:50 more about a jagged future. What does it 8:52 look like when we have some 8:54 breakthroughs toward artificial general 8:55 intelligence, but they don't all 8:57 materialize, or at least not on the same 8:59 timeline? I don't know. You tell me.
9:02 What is standing in the way of AGI?