GPT-5 Pro: Smarter Yet Experientially Worse

Key Points

  • GPT-5 Pro is the first AI model that is provably smarter yet experientially worse, a paradox that signals a fundamental shift in AI development.
  • Its superior intelligence comes from a compute‑time architecture that runs multiple parallel reasoning chains, letting the model debate internally like a panel of experts before delivering a unified answer.
  • This emphasis on coherent judgment enables GPT-5 Pro to excel on tasks that require strong decision‑making, such as scoring highly on IQ‑style tests.
  • Effective business adoption isn’t as simple as swapping in GPT-5 Pro for any use case; careful selection of scenarios where its judgment‑focused strengths shine is essential.
  • Prospective users—especially individual subscribers facing a $200‑per‑month price tag—need a decision‑making framework to evaluate whether the upgrade delivers enough practical value for their workflow.


**Source:** [https://www.youtube.com/watch?v=7-LFn11dNHA](https://www.youtube.com/watch?v=7-LFn11dNHA)
**Duration:** 00:23:52

## Sections

- [00:00:00](https://www.youtube.com/watch?v=7-LFn11dNHA&t=0s) **GPT-5 Pro: Smarter Yet Worse** - The speaker introduces GPT-5 Pro, explains its paradox of higher intelligence paired with poorer user experience, and offers guidance on assessing its practical value and upgrade worthiness for businesses and individual users.
- [00:04:11](https://www.youtube.com/watch?v=7-LFn11dNHA&t=251s) **GPT-5 Pro Parallel Compute Trade-offs** - It explains that GPT‑5 Pro’s higher price stems from running multiple reasoning threads simultaneously, boosting correctness but incurring heavy compute costs and occasional instability.
- [00:07:22](https://www.youtube.com/watch?v=7-LFn11dNHA&t=442s) **GPT‑5 Trade‑offs: Personality, Context, Data** - The speaker explains how shifting from the emotionally‑driven GPT‑4o to the correctness‑focused GPT‑5 sacrifices personality, strains multi‑thread context continuity, and requires highly structured, multi‑perspective data.
- [00:10:28](https://www.youtube.com/watch?v=7-LFn11dNHA&t=628s) **GPT‑5 Pro as Research Partner** - The speaker highlights GPT‑5 Pro’s capacity to generate advanced scientific insights and conduct comprehensive financial modeling, emphasizing that clean, multi‑layered data is essential for optimal performance.
- [00:14:40](https://www.youtube.com/watch?v=7-LFn11dNHA&t=880s) **Parallel Reasoning Limits for GPT‑5** - The speaker warns that GPT‑5 Pro’s parallel, architecture‑focused reasoning can cause it to lose coherence in sequential tasks like coding and singular‑voice creative writing.
- [00:19:32](https://www.youtube.com/watch?v=7-LFn11dNHA&t=1172s) **Anthropic vs Google Model Strategy** - The speaker questions whether Anthropic should keep optimizing coding‑centric, tool‑using models or pivot to broader reasoning while preserving Claude’s personality, and notes that Google already possesses strong reasoning capabilities but must decide how to productize them into a distinctive chat‑based product.

## Full Transcript
0:00 This is your introduction to GPT-5 Pro. 0:03 Now, I know that not everybody has 0:05 ChatGPT-5 Pro. The reason I'm covering 0:08 it is because it represents a 0:11 different kind of computing. It gives us 0:14 a hint of where AI is scaling next. And 0:17 figuring out how to apply it in your 0:19 business is not nearly as simple as 0:22 taking all the AI use cases and adding 0:24 GPT-5 Pro to them. It takes a lot of 0:26 judgment. What follows are my field 0:29 notes as I've dug into the use cases 0:32 that I'm seeing actually work and the 0:35 rationale for why those use cases work, 0:38 so you can figure out where GPT-5 Pro 0:40 might work in your business. The central 0:43 thesis I want to explore over the course 0:45 of these notes is this: GPT-5 Pro is the 0:48 first AI model that is provably smarter 0:52 and also experientially worse, and 0:56 this paradox reveals something really 0:58 fundamental about the future of AI 1:00 development. So, I'm going to say it again, 1:01 because I think that people are going to 1:03 kind of cough and spit out their coffee. 1:04 This model is smarter, yes, which 1:07 everybody expected, but it's also 1:09 experientially worse. And I'm going to 1:11 get into why and how that works. 1:13 We're going to dive into the details on 1:14 this one because I want you to walk away 1:16 with the tools that you need to figure 1:18 out where GPT-5 Pro fits in your workflow 1:21 and whether it's worth upgrading, right? 1:23 Because some people are asking the 1:25 question, "I'm an individual user. 1:26 Should I pay the really expensive $200 a 1:29 month to get this thing?" And I want you 1:31 to walk away with the tools to make that 1:32 decision. Okay, first let's talk about 1:34 the architecture of GPT-5 Pro, because 1:37 that underlies everything else we're 1:39 going to discuss today. OpenAI has 1:42 reimagined intelligence in terms of 1:44 time.
Now, I've talked about inference 1:47 compute a fair bit, but it is worth 1:49 revisiting, because fundamentally with 1:51 GPT-5 Pro, that is where the smarts come 1:54 from. It is not just model size. It is 1:58 compute time. Specifically, GPT-5 Pro 2:02 doesn't just process your query. It's 2:04 running multiple parallel reasoning 2:06 chains at once. It can explore multiple 2:10 solution paths independently. It 2:12 evaluates them against each other, and 2:14 then it synthesizes the best approach 2:17 out of all those reasoning chains. What 2:19 this enables it to do is to think like a 2:21 panel of experts that's debating 2:23 internally before presenting a unified 2:25 answer. I don't want to pretend to you 2:27 that ChatGPT has a monopoly on this 2:30 general approach to inference-time 2:31 compute. It doesn't. There are other model 2:34 makers out there that are working on 2:35 this too. However, what GPT-5 Pro does 2:38 really, really well is it actually takes 2:42 all of that parallel reasoning and it 2:44 judges really coherently what is the 2:47 correct decision or approach. And this 2:51 emphasis on judging correctly is one of 2:53 the hallmarks of GPT-5 Pro, and it's 2:57 something that you'll see as a 2:58 throughline when we get to the use cases 3:00 that work. I think it is why GPT-5 Pro 3:04 with internet access scored so well on 3:07 the IQ test. Now, I'm not a huge 3:10 believer in IQ tests. I think they're 3:12 interesting directionally. It is 3:14 unquestionably true that if you are 3:16 following the story of LLMs and IQ 3:18 tests, GPT-5 Pro is really good. I think 3:21 it scored a 148. It's a 3:23 phenomenally smart model in that 3:25 specific measured test environment. And 3:27 I think the why is because that test 3:30 environment values correctness too. And 3:33 so GPT-5 Pro is sort of in its element 3:35 there. But let's come back to 3:37 this idea of a panel of experts 3:39 debating. This mirrors how humans 3:42 actually solve hard problems.
And I 3:44 haven't seen this part discussed a ton 3:46 online. When you face a difficult 3:48 decision, you don't really just think 3:50 linearly: if A, then B, right? 3:52 That's not how we actually think. It may 3:54 be how we write, but it's not how we 3:55 think. You are actually considering 3:58 multiple perspectives, multiple facets, 3:59 simultaneously. When you ruminate, when 4:02 you think about an idea, it's almost 4:03 like you're walking through different 4:05 ideas at once and, in the 4:07 back of your head, turning them over and 4:08 looking at different angles of the idea. 4:10 You might be saying, "What are the 4:11 risks? What are the opportunities? How 4:13 does this affect this other concept? 4:14 What would happen if...?" In a sense, GPT-5 Pro 4:18 is mechanizing this parallel 4:19 deliberation that we do in our heads. 4:21 It's trying to simulate it a little bit. 4:24 You're not just paying $200 for access 4:27 to a smarter model. You're paying for 4:30 the compute to run multiple reasoning 4:33 threads at once. And that gives you a 4:35 clue as to why it's reserved for the 4:36 higher tier. It's not cheap to run. 4:39 Every query spawns parallel processes 4:42 that take real compute resources. The 4:45 thing is, you get an advance on 4:47 correctness. And so you can look at 4:49 different tests that show, 4:51 you know, 100% on advanced 4:53 mathematics, or 88.4% on graduate-level 4:56 reasoning, or 22% fewer major errors 4:59 on the benchmarks. Okay, fine. 5:02 I have learned to take the tests with a 5:03 grain of salt. What I'm more interested 5:05 in is the architecture that leads to 5:07 correctness, because that's what actually 5:09 gets us where we need to go. However, 5:11 before we get into use cases, this is 5:13 where I talk about the disappointments, 5:15 or the fact that this is both a smarter 5:17 model, which I think I've covered 5:18 with this concept of inference-time 5:21 compute and the value of correctness.
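The "panel of experts" loop the speaker describes can be sketched in a few lines. This is a toy illustration, not OpenAI's implementation: `propose` and `judge` are hypothetical stand-ins for model calls, and the longest-answer heuristic merely marks where a real correctness judge would go.

```python
from concurrent.futures import ThreadPoolExecutor

def propose(question: str, perspective: str) -> str:
    # Stand-in for one reasoning chain; a real system would call a model here.
    return f"[{perspective}] answer to: {question}"

def judge(candidates: list[str]) -> str:
    # Stand-in for the synthesis step: score each candidate and keep the best.
    # Here we just prefer the longest; a real judge would score correctness.
    return max(candidates, key=len)

def parallel_reason(question: str, perspectives: list[str]) -> str:
    # Run independent reasoning chains concurrently, then synthesize one answer.
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda p: propose(question, p), perspectives))
    return judge(candidates)

answer = parallel_reason("Should we enter this market?",
                         ["risk", "growth", "competition"])
```

The point of the sketch is the shape, not the heuristics: independent chains fan out, then a single judging step converges on one answer, which is also why each extra chain costs real compute.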
5:22 That's one of the things GPT-5 Pro has 5:25 really emphasized. We have a trade-off 5:27 here. This is one of the reasons why 5:28 the experience is somewhat 5:30 disappointing. The parallel processing 5:32 that makes GPT-5 Pro really smart also 5:36 breaks it, depending on how you define 5:38 broken, in some very specific and 5:40 predictable ways. The first one is a 5:42 little bit ironic, and it's worth paying 5:44 attention to if you're in a business 5:45 context. Right now, GPT-5 5:49 Pro is much, much more vulnerable from a 5:53 security perspective than earlier GPT models. And 5:56 that's not just me saying that; it's 5:57 widely reported across the security 6:00 publications that matter. They are using 6:02 adversarial techniques, jailbreaking 6:05 techniques, to test these models. And 6:07 what they're discovering is GPT-5 Pro and 6:10 the GPT-5 family overall don't test well. 6:14 And by the way, if you're wondering what 6:15 is the difference between Pro and GPT-5 6:18 Thinking, very simply, it's about how 6:20 much you're turning up the dial on that 6:22 parallel reasoning. And GPT-5 Pro is 6:24 turned up to 11, like Spinal Tap, right? 6:26 That's just how it works. When 6:28 the model is exploring multiple 6:30 perspectives, adversarial prompts can 6:33 poison a particular thread and influence 6:35 the eventual synthesis. Essentially, you 6:38 have more surface area for the prompts 6:40 to attack. That's the architectural cost 6:43 of parallel reasoning. Now, is somebody 6:45 at OpenAI hard at work fixing that? I 6:47 have no doubt. But at the moment, that 6:50 is part of the challenge with 6:52 GPT-5 Pro. When you expand parallel 6:54 threads, you expand your 6:56 attack surface. You just do. Trade-off number 6:58 two: personality loss. When you 7:01 synthesize multiple reasoning chains, 7:03 you get a synthesis. The model can 7:06 struggle to maintain a consistent voice 7:10 when it's aggregating perspectives.
This 7:12 is why you sometimes get really clean, 7:15 really correct, but what users might 7:17 call robotic responses from GPT-5 Pro. 7:20 It's part of the root cause of the 7:22 frustration 7:24 with the move from 4o, which was an 7:26 emotional model, to GPT-5, which is a 7:29 model that values correctness. 7:32 When you look at multiple 7:34 viewpoints and you pick the exact right 7:36 one, and you're averaging and 7:37 synthesizing, a lot of the personality 7:39 just isn't there anymore. Trade-off 7:41 number three: context degradation. 7:44 Maintaining coherent context across 7:47 parallel threads is much, much harder 7:49 than maintaining a single narrative 7:52 thread, which creates challenges, because 7:55 the parallel paths can start to diverge 7:57 and create memory fragmentation 7:59 issues, etc. This will come back as we 8:01 talk about use cases and where to use 8:03 GPT-5. Before we 8:06 jump on from this: ChatGPT has done a 8:08 lot of work behind the scenes, I think, to 8:10 manage the risk of this, so it's still 8:12 usable for context. We'll get 8:14 into that. The fourth trade-off: data 8:16 structure requirements. GPT-5 Pro is 8:19 hungry for data, but it needs data 8:21 organized for multi-perspective 8:23 analysis. A financial document, for 8:25 example, should not just contain the 8:28 numbers. It should contain multiple 8:30 structured layers where it can account 8:32 from a strategic perspective, a risk 8:34 perspective, an accounting perspective. 8:35 Organizations that are used to holding a 8:38 lot of those strategic layers in the 8:40 CFO's head, or in multiple people's heads, 8:43 really are going to struggle with 8:46 presenting GPT-5 Pro with the kind of 8:48 data it needs to thrive. So, let's get 8:51 into the use cases. We've talked about 8:54 some of the things that GPT-5 Pro does 8:56 well. We've talked about how that very 8:58 power, the parallel reasoning, creates 9:00 vulnerabilities.
Let's start to dive 9:02 into where we have use cases that 9:06 work and where we have use cases that 9:08 don't. And I want to give you a key so 9:11 that you can start to use these for 9:13 yourself. Use GPT-5 Pro in cases where 9:17 parallel reasoning is going to serve you 9:20 really, really well and correctness 9:22 really, really matters. As an example, 9:24 scientific research: when Amgen, and 9:28 I believe this is a real example, 9:29 analyzes polymer structures, GPT-5 Pro 9:33 can evaluate chemical properties, 9:35 structural integrity, 9:37 manufacturing feasibility, and 9:38 regulatory compliance all at once. We 9:41 actually have a lot of 9:42 documentation on the web about the way 9:45 GPT-5 Pro and other reasoning 9:49 models have helped to advance scientific 9:52 research. And you see this thread over 9:54 at Google as well. It's not the o-series 9:55 models; they have their own reasoning 9:57 models, but they are fundamentally going 9:59 after scientific research, because it 10:02 enables you to reason across different 10:05 perspectives on a body of data at once, 10:08 and it enables you to converge on a 10:10 correct solution, and correctness really 10:12 matters. And so in the GPT-5 Pro case, if 10:15 you're analyzing these polymer 10:16 structures, you can bring in multiple 10:19 perspectives in each reasoning thread, 10:20 right? Domain expertise. You can bring 10:22 in the structure of the molecule, etc. 10:26 And eventually the synthesis can produce 10:28 insights that a single reasoning trace 10:31 could not match, and critically, that can 10:33 advance the field, or at least act as a 10:35 very strong thought partner to a PhD-level 10:38 researcher. And that is part of 10:40 the reason why scientific research is so 10:43 emphasized by model makers. 10:45 The model's good at it. Not too 10:48 many of us are scientists, though. So I want to 10:50 give you some other examples of GPT-5 Pro 10:52 use cases that feel a little more 10:54 accessible.
Financial modeling. Every 10:57 business at a certain scale has to 10:58 financially model. GPT-5 Pro is the kind 11:01 of model that can simultaneously parse 11:04 income statements, balance sheets, and 11:06 cash flows, and cross-reference them for 11:09 consistency. It can look at reconciling 11:12 multiple data sources. It can look at 11:14 accounting standards. It can look at 11:15 time periods. If you process the data 11:18 and feed it in a structured manner, it 11:20 actually is going to do a great job of 11:22 this. One of the things that I chuckled 11:24 about when I did my review of ChatGPT-5 11:26 is that I deliberately didn't do this, as 11:28 a way to test the model. And this is my 11:30 chance to make it up to GPT-5 Pro. I know 11:33 I gave it really dirty data on purpose 11:36 as a way of testing its reasoning 11:38 ability. It did okay. I would recommend 11:40 in practice you put the effort into 11:42 giving GPT-5 Pro multiple perspectives at 11:45 different layers in the business and 11:46 make the data as clean as you possibly 11:48 can, because then you're going to get 11:50 more useful information back. I do think 11:52 financial modeling is a nice use case 11:54 for GPT-5 Pro. Legal analysis. Do some 11:57 due diligence on large collections of 12:00 documents. Look at contract terms. Maybe 12:03 you identify legal risk. Look at 12:05 dependencies. These reasoning traces can 12:08 look at things from multiple 12:09 perspectives, and the synthesis can catch 12:12 things that human reviewers might miss. 12:14 This is not about saying the humans 12:16 don't need to review the legal 12:18 documents. It is about asking how a 12:23 tool that is designed for parallel 12:25 reasoning can converge toward correctness 12:27 when a correct answer is available. 12:29 Because in legal analysis, also, a correct 12:32 answer is available. There's a correct 12:33 and optimal legal stance on a particular 12:36 due diligence question. You can name the 12:38 top risks, and you would be wrong if you 12:40 missed one.
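The cross-referencing for consistency described above for financial statements can be made concrete as simple tie-out checks. A minimal sketch, with hypothetical figures; a model would run checks of this flavor, plus many softer ones, across the documents you feed it.

```python
# Hypothetical statement data; figures invented for illustration.
income = {"revenue": 1000, "expenses": 800, "net_income": 200}
balance = {"assets": 5000, "liabilities": 3000, "equity": 2000}
cash_flow = {"net_income": 200, "operating_cash": 250}

def tie_out(income, balance, cash_flow):
    """Return a list of inconsistencies between the three statements."""
    issues = []
    if income["revenue"] - income["expenses"] != income["net_income"]:
        issues.append("income statement does not foot")
    if balance["assets"] != balance["liabilities"] + balance["equity"]:
        issues.append("balance sheet does not balance")
    if income["net_income"] != cash_flow["net_income"]:
        issues.append("net income does not tie to the cash flow statement")
    return issues

issues = tie_out(income, balance, cash_flow)  # empty list when everything ties
```

Feeding the model data that already passes checks like these is part of what "clean, structured data" means in practice.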
Similarly, with financial 12:42 modeling, you can name the overall 12:45 correct financial output statement, and 12:48 you would be incorrect not just if a 12:50 number was wrong, but if you did not 12:53 take account of all of the components of 12:56 the business and the financial model. 12:58 GPT-5 Pro excels at that kind of 13:00 analysis. And so you have opportunities. 13:03 And by the way, the financial modeling 13:04 and the legal analysis are also based on early 13:07 insights from teams. And so science and 13:09 finance and legal, fine. What about 13:11 something that's closer to tech? Mckay 13:14 Wrigley is both a content creator and 13:17 also a coder. One of the things that 13:19 he's called out is that he is excited 13:21 about GPT-5 Pro in the coding space, 13:25 specifically for architectural 13:27 decisions. And that has been one of the 13:29 areas where LLMs have historically 13:31 struggled. Defining how you put 13:33 technical systems together has been 13:36 hard. GPT-5 Pro, with a sizable context 13:40 window, can enable you to look across 13:43 large chunks of your codebase and make 13:46 architectural recommendations about that 13:49 codebase, and it reasons toward 13:51 correctness. It will think through 13:52 coding best practices, run multiple 13:55 reasoning traces, all of those hallmarks 13:57 of parallel reasoning, and where it sings, 13:59 they come through and it thinks 14:00 correctly. If you want to talk about 14:02 marketing, if you want to talk about 14:03 product, and where those things have GPT-5 14:07 Pro use cases, look for areas where you 14:09 have a correct or optimal decision and 14:11 you can feed the model multiple parallel 14:13 perspectives. And so if you are trying 14:15 to enter into the market, and your 14:17 product team and your marketing team are 14:18 there and they're trying to figure out 14:19 how to crack the market with a product: 14:21 great, great opportunity.
Bring in some 14:24 user interviews, bring in a survey of 14:26 the market, bring in a company profile, 14:28 bring in some product opportunities. Lots 14:30 of grounding that helps GPT-5 Pro reason in 14:34 parallel, and you're going to get to a 14:36 correct answer. That is the goal, right? 14:39 You're going to get to something 14:40 that gives you an optimal path through 14:42 all of those variables. Let's look at a 14:44 few cases where parallel reasoning 14:47 probably doesn't help. I'm going to 14:49 suggest to you that GPT-5 Pro requires 14:53 you to think architecturally, to the 14:57 extent that it may not help you with 15:00 thinking sequentially. And that's where 15:02 parallel reasoning can be a challenge, 15:03 because it can produce an overall 15:05 coherent perspective in the ways I've 15:07 described. That's really good. But take, for 15:09 example, coding, which a lot of other 15:10 LLM agents are actually quite good at. 15:12 Coding is a much lower level of 15:14 decisioning than architecture. Coding 15:16 requires very sequential logic. There 15:18 are reports already coming out that GPT-5 15:22 Pro can weirdly lose the plot sometimes 15:24 when it is producing code. And that is 15:26 likely because it is running multiple 15:28 plots, multiple sequential coding 15:30 threads, simultaneously. So be aware of 15:33 that. You may not want to use it for 15:35 coding. Creative writing: you have to 15:37 have a narrative with a particular, 15:40 singular voice. I would not use GPT-5 Pro 15:43 for this. And I don't know of many 15:45 people who are, so this feels like an 15:46 easy one. You're going to get maybe 15:49 some really coherent, thoughtful plot 15:51 feedback from this model, plot 15:53 architecture, where it's going to give 15:55 you its solution to a particular plot 15:57 problem, but it's not going to make the 15:59 bold creative choice. It's not going to 16:01 write in a particular voice. That is not 16:03 really what this model does.
16:05 Conversation, and this is a really 16:07 important LLM use case. A lot of the LLM 16:09 use cases that we see in production are 16:13 conversational use cases. This is not a 16:15 model for conversation. One, it takes a 16:17 long time. And two, human dialogue needs 16:20 consistency and personality. If it feels 16:22 robotic, which GPT-5 Pro is going to feel, 16:25 if it doesn't feel sequential, 16:26 if it jumps around, humans aren't 16:28 going to like it. And I think that is 16:30 part of the reason why 4o is preferred 16:33 by a lot of people, and why ultimately 16:36 OpenAI had to bring it back. So those 16:38 are a few cases. I hope they give you a 16:39 sense of where parallel reasoning works 16:41 well and where parallel reasoning doesn't 16:42 work well. The key is: can you give it 16:44 the data it needs? And that brings me to 16:47 the infrastructure cost of using GPT-5 16:50 Pro. Success with GPT-5 Pro requires a 16:54 fundamental data restructuring that 16:56 organizations tend to underestimate. 16:59 Instead of linear documents that you 17:01 feed it, it would be ideal to feed GPT-5 17:05 Pro more multi-dimensional data 17:08 architectures. So if you're doing 17:09 financial analysis, feed it the core 17:12 data statements. These are facts, 17:13 metrics, calculations. And 17:15 then feed it perspectives. Here's a risk 17:17 lens: what we think could go wrong. 17:19 Here's a growth lens: what are the 17:20 opportunities in the space? Here's a 17:22 competitive lens with our market 17:23 positioning. Then feed it cross-references: 17:27 temporal, how metrics change over time; 17:28 relational, how departments interact. 17:31 Basically, you need to start thinking of 17:32 it as giving this multiple-thread 17:37 reasoning agent as much context as you 17:39 can in a very structured way, because 17:41 each parallel thread will need a 17:43 coherent data path to run.
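The core-data, perspectives, and cross-references packaging described above can be sketched as one structured payload. A minimal sketch: all field names and figures are invented for illustration, not a required schema.

```python
import json

# One way to package financial data for multi-perspective analysis,
# following the structure described above; everything here is hypothetical.
payload = {
    "core_data": {"revenue": [900, 1000], "periods": ["2023", "2024"]},
    "perspectives": {
        "risk": "Customer concentration: top client is 40% of revenue.",
        "growth": "New region launch planned for Q3.",
        "competitive": "Two funded entrants in our segment this year.",
    },
    "cross_references": {
        "temporal": {"revenue_growth_pct": 11.1},
        "relational": "Sales headcount drives the revenue line.",
    },
}

# Serialize so it can be pasted or sent as structured context with a prompt.
prompt_context = json.dumps(payload, indent=2)
```

The design point is that each top-level key gives a parallel reasoning thread its own coherent data path, instead of making every thread mine the same linear document.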
And so you 17:46 want to think about how you are 17:49 orchestrating a symphony of reasoning 17:52 threads that need to maintain some 17:54 degree of coherence. One of the things 17:57 that's interesting is that the Responses API 18:00 is able to maintain some chain-of-thought 18:02 persistence across threads. And 18:05 so if you're giving it multiple whacks 18:06 at the apple, if you're giving it 18:08 multiple attacks at the problem with 18:10 context, this kind of multi-dimensional 18:12 data architecture can let you start to 18:14 feed it perspectives that build over 18:16 time. I think the thing I want to call 18:18 out here is that most organizations 18:20 don't have the actual patience in 18:22 practice to do this. And if you're going 18:24 to use GPT-5 Pro at its best, this 18:27 underlines one of the consistent themes 18:29 with AI, which is that we need to change 18:32 to take advantage of what AI brings to 18:34 the table. Our data needs to change to 18:36 take advantage of what GPT-5 Pro and 18:39 other AIs bring to the table. And GPT-5 18:41 Pro really forces that with a parallel 18:43 reasoning architecture. So what are the 18:44 strategic implications here? I would 18:46 argue that GPT-5 Pro presents the 18:50 industry with some interesting strategic 18:53 questions. For OpenAI: they've proven 18:57 that they can innovate on inference-time 19:00 compute and they can command premium 19:02 pricing for specific use cases, but they 19:05 haven't yet shown they can expand these 19:06 use cases more generally. I've had to 19:08 spend a lot of this video talking about 19:10 where you don't use GPT-5 Pro, and I 19:13 think that's indicative. Claude is not 19:16 actually an inference-time compute 19:18 model. Claude Opus 4.1 is using 19:23 tools. It is interpreting, but it is not 19:26 a traditional inference-time compute 19:28 model the way I've described GPT-5. 19:30 That's really interesting.
Anthropic has 19:32 been happy to train a model that is very 19:35 good at tool use and tool calling, and 19:37 has been getting great results and great 19:39 reviews, especially in the coding arena, 19:42 for that choice. Does Anthropic want to 19:44 keep going down that path? Do they want 19:46 to keep optimizing for coding, because 19:49 they believe coding has so much 19:50 explanatory power long term over 19:53 technical development trajectories? Or 19:55 do they want to start to lean in on a 19:57 thinking and reasoning model? And if 19:58 they do, how does it reinforce their 20:00 core value proposition around coding and 20:02 their core value proposition around 20:03 their personality? Because people love 20:05 Claude's personality. Do they want to 20:07 risk losing that? It's an interesting 20:08 question. Google has to figure out how 20:11 they are going to get to a model with a 20:15 chat surface that is widely used, and 20:19 decide where they want to apply that 20:22 reasoning power that they do have. They 20:25 have reasoning power now that they 20:28 employ to get phenomenal results in 20:30 academic and technical domains. They 20:32 have the awards for science research 20:34 and for protein folding and for math 20:36 olympiads, etc. It's not that they're 20:39 missing the know-how here at all, nor are 20:41 they missing the technical architecture 20:43 to get it done. They have their own 20:45 separate architecture based on TPU 20:46 chips, but they have to figure out where 20:50 to productize that architectural 20:52 innovation so that they have a unique 20:55 product surface that people know to go 20:58 to Google for. And that's something that 21:00 Google has been struggling with for a 21:02 while. Right now, the reason to go to 21:05 Google is either you're already in 21:06 Google Cloud or you really want the 21:09 cheapest tokens per unit of intelligence, and you 21:11 go to Google for that. Is that enough to 21:13 sustain a strategic advantage or 21:17 strategic share of the market over time?
That's a question, and I think it's a 21:21 question GPT-5 Pro puts a fine point on, 21:23 because what OpenAI is basically saying 21:25 is: we have a scaling paradigm here. 21:27 We're going to keep making the model 21:28 smarter, and we're kind of going to dare 21:30 you to beat us on smart reasoning models. 21:32 And Anthropic has their own corner with 21:34 coding and non-reasoning models, and 21:36 Google's sort of in the middle right 21:37 now. We are entering an era of 21:39 architectural specialization. 21:42 I think that 21:43 people need to get past this idea of 21:44 bigger models. The next breakthrough may 21:46 not be a bigger model. It may be how we 21:48 use reasoning architecture for specific 21:50 cognitive tasks. Now that we're in the 21:52 LLM era, we may see more specialization. 21:55 That would not surprise me. So where do 21:57 I want to leave you? Intelligence is not 21:59 the same as utility. GPT-5, however you 22:03 measure it, is a very intelligent model, 22:05 but its intelligence is not what makes 22:07 it a success or a failure. The key is 22:10 understanding that intelligence and 22:11 utility are diverging as we get farther 22:13 into the LLM era. And it's up to you to 22:16 figure out if parallel reasoning makes 22:18 AI smarter for the tasks that you want 22:21 to accomplish. I think we're headed 22:23 toward a future of AI stratification. I 22:25 think we're going to have deep reasoning 22:26 systems for very high-stakes analysis. 22:28 We're going to have conversational 22:30 systems for daily interaction, and we're 22:32 going to have specialized tools for 22:33 specific domains. The dream of one model 22:36 that's better at everything is, I think, dead. I 22:39 don't think it's happening. And I think 22:40 what's ironic is it's killed by the very 22:43 GPT generation that promised the one 22:46 model better at everything.
I think what 22:48 GPT-5 Pro is showing us is that it's 22:51 possible to have a model that is indeed 22:53 better and also, in some ways, worse than 22:55 its predecessors. There will not be one 22:57 model to rule them all. And so the 23:00 question for you isn't whether GPT-5 Pro 23:03 is worth $200 a month. It's whether you 23:06 can define use cases that fit better 23:10 with specialized tools, or with deep 23:12 reasoning systems, or with conversational 23:14 systems. If you are a conversational-model 23:16 person, do not pay the $200 a 23:18 month. If you are a deep reasoning 23:20 person, well, now you have to think 23:21 about the analysis and whether you have 23:23 the data to get ready, and then maybe 23:24 you're ready for GPT-5 Pro. And if you're 23:26 someone who only uses specialized tools, 23:29 maybe you're not even using ChatGPT at 23:31 all. This is the opening move in a new 23:34 AI game, where architectural 23:36 differentiation is going to matter more 23:37 and more. And that is why I've spent so 23:39 much of this video explaining 23:41 architectures and how they work and why 23:43 GPT-5 Pro is different. I hope this has 23:45 been helpful. I hope you have a sense of 23:47 where to use GPT-5 Pro, or whether or not 23:49 to get it at all. Cheers.
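The closing three-way split (deep reasoning vs. conversational vs. specialized tools) can be turned into a rough self-assessment. A sketch only: the scoring scale, the cutoff of 6, and the wording are invented for illustration, not the speaker's numbers.

```python
def upgrade_recommendation(deep_reasoning: int, conversational: int,
                           specialized_tools: int) -> str:
    """Score each bucket 0-10 for how much of your workload it represents.

    Rough sketch of the closing framework above; the thresholds are invented.
    """
    top = max(deep_reasoning, conversational, specialized_tools)
    if top == deep_reasoning and deep_reasoning >= 6:
        return "consider GPT-5 Pro, if your data is structured enough for it"
    if top == conversational:
        return "skip the $200 tier; a conversational model fits your workload"
    return "specialized tools may serve you better than a general model"

# Example: workload dominated by high-stakes analysis.
print(upgrade_recommendation(deep_reasoning=8, conversational=3,
                             specialized_tools=2))
```

Even a toy rubric like this forces the question the video keeps returning to: which bucket your actual workload lives in, before the price tag enters the discussion.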