
Ready for ChatGPT‑5: AI Essentials

Key Points

  • The video aims to give a quick, non‑technical primer on AI now so viewers can stay ahead of the upcoming ChatGPT‑5 release, which promises to overhaul current models.
  • The speaker likens the current “summer of consolidation” to the 2007 iPhone launch, predicting that breakthroughs between now and late 2025 will make 2023‑24 AI tools look obsolete.
  • Expected rollout for ChatGPT‑5 is early Q3 (around July), with OpenAI pausing for a brief break before the launch and focusing on unifying reasoning, knowledge, voice, and search into a single “brain.”
  • Anticipated improvements include enhanced multimodality (speech, images, possibly video), deeper and more reliable reasoning, better answer selection from massive outputs, and increased personalization through memory and integration with email, calendar, and enterprise data.


**Source:** [https://www.youtube.com/watch?v=HfvO5Hcdyt4](https://www.youtube.com/watch?v=HfvO5Hcdyt4)
**Duration:** 00:22:12

## Sections

- [00:00:00](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=0s) **Understanding AI Before GPT-5** - The video offers a plain‑English overview of AI fundamentals, current trends, and practical resources so viewers can quickly catch up before the transformative release of ChatGPT‑5.
- [00:03:17](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=197s) **Cautious Scaling of GPT‑5 Launch** - The speaker explains that deploying the upcoming model will require tens of thousands of GPUs, prompting a careful, staged rollout (from premium tiers to free access) with added alignment‑monitoring tools.
- [00:06:23](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=383s) **Transformer Revolution and Scaling Laws** - The speaker outlines how the 2017 “Attention Is All You Need” paper launched the transformer era, allowing machines to capture long‑range language dependencies, adopt self‑supervised learning, and follow predictable scaling laws that spurred massive AI investment.
- [00:09:46](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=586s) **How Transformers Capture Text Meaning** - The speaker explains that query‑key attention, multiple heads, deep layering, and positional encoding let transformer models mathematically represent textual patterns, enabling high‑fidelity understanding and token prediction across diverse data sources.
- [00:13:07](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=787s) **LLM Inference and Alignment Process** - The passage walks through converting a query into embeddings, processing it with a transformer, repeatedly sampling tokens using strategies like greedy, temperature, or beam search, and then aligning the raw model through RLHF, system prompts, and curated data to produce honest, harmless, and helpful responses.
- [00:16:29](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=989s) **Future GPT‑5 Challenges & Insights** - The speaker discusses rumors of an open‑source release alongside GPT‑5, examines current limitations like transparency, hallucinations, bias, and multi‑step reasoning, and offers a cheat sheet for continued learning about large language models.
- [00:19:42](https://www.youtube.com/watch?v=HfvO5Hcdyt4&t=1182s) **Top AI Influencers to Follow** - The speaker lists eleven prominent AI personalities, including Ilya Sutskever, Claire Vo, Dwarkesh Patel, and Mary Meeker, and recaps three insider takeaways about GPT‑5, the evolution from spam filters to ChatGPT, and the sophisticated pattern learning of LLMs.

## Full Transcript
[0:00] This video does one thing. It helps you understand AI before ChatGPT-5 comes along and changes everything all over again. I get so many DMs that say, "Nate, how do I actually understand AI before it's too late? Nate, I am late on AI. Nate, I don't know how to catch up." This video is for you. It's for anyone who has that feeling. It's also for you if you want to know what's in the box on ChatGPT-5 and what we know so far.

[0:28] Let's start with this moment we're in today. This is the summer of consolidation. I'm comparing it to the 2007 iPhone release. Fundamentally, what is going to happen from here until October, November, December of 2025 is going to make 2023 and 2024 models look completely outdated. AI itself is going through a platform shift right now, and ChatGPT-5 is one of the big releases we are looking forward to this year that is going to underscore that fundamental shift toward a more unified professional enterprise experience. If you want to take advantage of that, if you want to be ready for that, it makes sense to get ready now. It makes sense to catch up now so you don't feel farther behind.

[1:11] So what we're going to cover is everything we know about ChatGPT-5. We're going to talk about the story of AI very briefly, in plain English; you don't have to have a maths degree. We're going to talk about some resources you can use to dig in (yes, they are on YouTube). And we're going to talk about people to follow to keep up with the signal over all the noise that's going to happen this summer, because there's a lot of noise. That's a ton to cover, but we're going to do it fast.

[1:33] First, release timeline. We think July, early Q3; it could be any time. We do know that the OpenAI team is off this coming week through about July 4th. They've been working very hard.
[1:46] It would make sense to give them a break before the pressure of the rollout. Part of the pressure has to do with bringing the model into a single, truly unified "brain," quote unquote: bringing the o-series reasoning models, the general knowledge of GPT-4, voice capabilities, and deep search tools all into one place. As Sam has said, "we hate the model picker as much as you do." Getting that right is really hard. We will see if he gets it right, but that's certainly what they're going for with ChatGPT-5.

[2:15] As far as capabilities, look for four areas of improvement. We don't know the exact specs, but I kind of don't care about the exact specs, because typically you have to actually use the models to see if they're any good. Improvement area number one, multimodality: seamless speech in and out, images, maybe video with this release. Two, reasoning depth: going seamlessly from limited chain of thought to really reliable, in-depth problem solving. Three, reliability: surfacing that one good answer in 10,000 consistently. And that, by the way, takes a lot of inference under the surface to pick the right answer. And fourth, personalization: memory, access to email, calendar, enterprise knowledge, etc. This will take adaptive compute, so heavy GPU use only when needed. And I think it's going to lean pretty heavily into voice, because that would align nicely with the much-rumored Jony Ive device. So, we will see.

[3:10] However, even if it's adaptive compute, it's still going to take a lot of compute cores. It's going to take perhaps tens of thousands of GPUs to properly serve this model. It has to be scaled out. And that is not something that they are going to do without being sure they got it right.
[3:26] Because if you recall, every time we've had an OpenAI launch in the past year or so, we've had a brownout beforehand. We've often had scaling issues during the launch. This is their premier launch for the year 2025. They do not want to mess it up. So they're going to take their time and make sure they get the engineering right. And that's part of why we don't have a date and they haven't announced a date.

[3:47] Okay. So, the takeaways for builders now, based on what we know. Expect a smoother user experience, not just bigger brains. Expect a gradual rollout, because again, they're going to be monitoring those GPUs. I would expect it to follow their usual pattern and go from Pro to Plus to free, but they're going to try and accelerate that as fast as they can, I would bet, because they really want this to be a flagship rollout for everybody. So even if the free tier gets less, a "ChatGPT-5 light" with less intelligence or whatever, it's still going to get to free pretty quick, I think. Expect extra tooling for monitoring alignment. I think that's going to be a bigger factor. I don't know what that will look like; it's just a guess, but I would expect more levers, in the APIs in particular, for monitoring alignment. I will be curious to see what the actual parameters look like, just like everybody else. But mostly I want to see if they are actually able to build a single coherent brain that can infer from our prompts what the model needs to do, whether it's a deep research task or something much lighter.

[4:48] Okay, that is what we know on ChatGPT-5. Part two: helping you get ready for ChatGPT-5. What is AI anyway? Yes, we're going to go there, and it's going to be plain English. We're going to start back in the early 2000s with classical machine learning.
[5:01] Machine learning is fundamentally telling an algorithm what details matter. One example that came up in the 2000s was spam filtering. You would count exclamation marks in emails. You would look for a keyword match with "Viagra." You would manually encode those features, and then you would use logistic regression, decision trees, etc., and try to get the algorithm to help you filter out the spam.

[5:23] In 2012, things started to shift. GPUs became cheaper. We started to get very large labeled data sets like ImageNet, and we had deeper neural networks that actually learned features automatically. We got a computer vision breakthrough because we had more compute. We discovered that edges and textures could be determined without being told in advance. We discovered with word2vec in 2013 that networks could learn word relationships. The famous example is that a network could learn that king minus man plus woman equals queen. For the first time, meaning could emerge from data, not rules. And that unlocked a lot of other interesting discoveries.

[6:06] However, we were still limited. Fundamentally, we were limited by sequential processing. Everything had to be read one token at a time. Training was slow. These models struggled with long sentences and were generally only interesting to academics; they didn't really hit production for most use cases in the enterprise.

[6:26] Then everything changed in 2017, when the transformer revolution happened. It was started by the paper "Attention Is All You Need," which is super famous. I definitely recommend you go check it out. It included the insight that you could use attention weights to show token relationships, and that unlocked massive GPU scaling. For the first time, you could track long-range dependencies across human language.
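A filter like the one described, hand-coded features fed into a logistic scorer, fits in a few lines. To be clear, the feature choices, weights, and bias below are invented for illustration; a real filter of that era would learn the weights from labeled mail:

```python
import math

def features(email: str) -> list[float]:
    # Hand-engineered features: exactly the kind of manual encoding
    # described above, where humans decide what matters.
    return [
        float(email.count("!")),                     # exclamation marks
        1.0 if "viagra" in email.lower() else 0.0,   # keyword match
        1.0 if "free" in email.lower() else 0.0,     # another classic signal
    ]

def spam_probability(email: str, weights=(0.8, 3.0, 1.5), bias=-2.0) -> float:
    # Logistic regression: a weighted sum of features squashed through
    # a sigmoid into a probability. These weights are made up by hand.
    score = bias + sum(w * f for w, f in zip(weights, features(email)))
    return 1 / (1 + math.exp(-score))

print(spam_probability("Meeting moved to 3pm"))       # low probability
print(spam_probability("FREE viagra!!! Act now!!!"))  # high probability
```

The weakness the video goes on to describe is visible here: every feature is a human guess, so the filter only catches what its author thought to encode.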
[6:51] And it turns out that human language has a lot of long-range dependencies. As an example, you know in your heads, if you're still watching this, that I have been talking about the lead-up to ChatGPT-5, even though I haven't mentioned that in a few paragraphs now. Why is that? Because you're human, and you can understand long-range dependencies. Until 2017, machines couldn't do that.

[7:11] Okay, so two big macro trends emerged. One, self-supervised learning. No hand-labeling was needed anymore. You could train a model to fill in the blanks and predict the next token, and you could scale, from millions to billions to trillions of tokens. That led to scaling laws. It turned out that performance improves in a predictable way with scale. And if bigger is reliably and quantifiably better, it makes sense to invest; there is yield there. That unlocked massive investment in AI over the last six or seven years. So that's the brief story.

[7:45] Now we fast-forward past 2017, past 2020, up to 2025. How does AI actually work? By the way, this applies to ChatGPT-5's basic architecture just like any other large language model. It is important to understand how they work. Number one: prediction. Just predicting the next word sounds really trivial, but it's not. Fundamentally, if you have scale and you understand the structure of language, you can encode a vast amount of knowledge. You can build up answers token by token that reflect that structure and that scale, and you can use model weights, which are conditional probabilities, to encode a tremendously dense information set. They can encode long-range relationships. They can encode short-range relationships. They can capture grammatical similarities. They can capture cognates, or meaning similarities.
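The idea that weights are conditional probabilities can be shown with a toy bigram model: count which word follows which, then predict the most likely successor. This is a deliberately tiny sketch with an invented corpus; real LLMs condition on long contexts with billions of learned weights, but the prediction principle is the same:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    # The "weights" here are just conditional probabilities
    # P(next | prev) estimated from counts.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

Scale this idea up from pairs of words to whole contexts, and from counts to learned parameters, and you have the core of next-token prediction.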
[8:36] They can encode even relationships we don't fully understand. One of the most interesting things about LLMs and weights and encoding is that we have learned more about language than we expected, because LLMs are in some ways better at learning natural language than we are ourselves, the people who invented it.

[8:54] So let's talk about these weights. We call them embeddings. Computers need to work with numbers, so we have to turn the words into numbers. Text is broken into tokens, which are really subwords of about four characters. Each token is then encoded as a high-dimensional vector, which means it's a fancy number set that captures meaning in a spatial way. So embeddings will discover that "cat" is close to "kitten," because the vector numbers are going to be somewhat similar, but it will be far away from "democracy," unless a cat runs for president. You never know. All of this is learned during training, and it enables you to conduct mathematical operations on meaning itself, on semantic meaning, which is really cool.

[9:35] Number two (I told you this would be interesting): the transformer engine. Every token computes relevance to all other tokens. That's really key. Query vectors measure similarity against keys, creating a weighted average of values, and different attention heads find different patterns. What all of that adds up to is different perspectives on the pattern-making in text, mathematically. And that adds nonlinear depth. So you can stack different layers of heads, up to 60-plus, and get a very complex capture of dependencies, which is a fancy technical way of saying a complex, high-fidelity picture of human text.
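The query/key/value mechanic described here is scaled dot-product attention, and the core of it fits in a few lines of NumPy. The toy embeddings below are random placeholders; a real transformer has learned projection matrices for Q, K, and V, plus many heads and layers:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query scores its similarity
    # to every key, the scores are softmaxed into weights, and the
    # output is a weighted average of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V

# Three tokens, each a 4-dimensional toy embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)  # self-attention: Q, K, V all come from the same tokens
print(out.shape)          # (3, 4): one context-aware vector per token
```

Stacking this operation across multiple heads and dozens of layers is what produces the "complex capture of dependencies" the speaker describes.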
[10:16] You can understand the meanings inside it, which is why, if you ask an AI to read a text and give you a sense of the literary meanings, it understands it. Transformer architecture is why Opus 4 can understand Hemingway. It's wild, but it's actually math. I don't know that Hemingway would agree or support or encourage it, but it's actually math. It is position-aware, so word order does matter, and we see that when we prompt.

[10:44] Getting to training. This is all just understanding how these models work: you have to train them. The goal is to minimize the error in predicting the next token, but it turns out it is difficult to do that well, and the reason is that words can have different meanings and goals in different contexts. So you have to have a lot of data sources from a really wide range: web pages, books, newspapers, code, dialogue transcripts, high-quality data sets, sometimes low-quality data sets, certainly low-quality data sets when getting started. Now we're getting to real scale: trillions of tokens, thousands of GPUs, weeks and weeks of training on this massive data set. And yes, they do try to make it as high quality as they can now, because they know that affects the model.

[11:30] What you're doing is something called gradient descent, which is basically trying to systematically minimize the model's propensity to err on next-token prediction across billions and trillions of tokens. It takes a long time. It's very complex to rig up, and it gets exponentially harder the bigger the model gets. And guess what? The models get bigger. This is part of why Llama 4 Behemoth has not been released: the training run did not go well. Or so the rumor has it.
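Gradient descent itself can be shown on a one-parameter toy problem. The bowl-shaped loss below is a stand-in for illustration; real training differentiates a next-token cross-entropy loss over billions of parameters, but the update rule is the same "step downhill, repeat" loop:

```python
def loss(w):
    # Toy stand-in for "error on next-token prediction":
    # a bowl with its minimum at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to the weight.
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
for step in range(200):
    w -= lr * grad(w)  # step against the gradient, shrinking the error

print(round(w, 4))  # converges to 3.0, the loss minimum
```

The difficulty the speaker points to is not this loop, which is simple, but running it over trillions of tokens on thousands of GPUs without the training run going sideways.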
[11:54] Zuck, don't come for me. Weights encode language patterns, facts, and reasoning, and they do it better when the training goes well. One of the reasons it is rumored that Sonnet 4 is good at writing and good at code is that Anthropic took time to get the training data right for the Sonnet model, and also for Opus; it's interrelated. There's a focus on training data that comes through for Claude, and that is rumored to be one of the reasons why Claude's personality, quote unquote, or Claude's prose, or Claude's code is supposed to be very good. I certainly find it that way, and I'm not the only one. This is not an advertisement for Claude; I love lots of models.

[12:36] All right, inference. Inference is what happens after training is complete, after launch day, when you get to generating responses. And yes, all of this is still roughly how GPT-5 will work. There will be some wrinkles, as it works across multiple context lengths and token lengths to infer meaning, but fundamentally the same bones will be there, and I'm giving you the bones so you understand them. This is a one-stop shop so you can understand how AI works.

[13:04] For inference, you want to take a query you give and get a good answer back. So you have to tokenize the prompt and turn it into embeddings (we know what embeddings are now). You have to run it through the transformer, which figures out the contextual vectors that go with that prompt. Then you have to score it, and you have to figure out your sampling strategy: from all of the possible futures, all the possible tokens it could predict, which one do you want? You could have a greedy strategy, which just uses the highest-probability token.
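Greedy decoding, and the temperature-controlled sampling the speaker turns to next, can be sketched over a toy three-token vocabulary. The token names and scores here are invented for illustration; real decoders work over vocabularies of tens of thousands of tokens:

```python
import math
import random

# Toy next-token distribution as raw model scores (logits).
logits = {"cat": 2.0, "dog": 1.0, "pizza": 0.1}

def softmax(scores, temperature=1.0):
    # Temperature below 1 sharpens the distribution; above 1 flattens it,
    # making low-probability tokens more likely to be picked.
    exp = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exp.values())
    return {t: v / total for t, v in exp.items()}

def greedy(scores):
    # Greedy decoding: always take the single highest-probability token.
    return max(scores, key=scores.get)

def sample(scores, temperature=1.0, rng=random.Random(0)):
    # Temperature sampling: draw a token at random, weighted by probability.
    probs = softmax(scores, temperature)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

print(greedy(logits))                    # always "cat"
print(sample(logits, temperature=1.5))   # varies run to run
```

Beam search, the third strategy mentioned, extends this by keeping several candidate continuations alive in parallel and scoring whole sequences rather than single tokens.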
[13:35] You have temperature controls to control the randomness. You can even do something like beam search for parallel pathing. Anyway, you figure out your strategy to sample. This is just for one token. You add the token, you do it all over again, and you repeat until you stop. Coherence emerges from doing that a lot and giving lots of feedback, which gets to step five: how do you align these things?

[13:55] Step five is when you take raw models, which mimic everything, including really dark content, and you give them very structured alignment. You give them reinforcement learning, including learning from human feedback, where humans rank answers. You give them system prompts, and you give them curated question-and-answer data to teach them formats. The goal is that they come back with honest, harmless, and helpful responses. This is not an easy area to solve for. Even now, we are figuring out holes that we have in our responses and how to close them. The grandma hack still works: you can still tell most models that your grandma is unwell or has passed away, and the model will do something it's not supposed to do out of sympathy for you and your grandma. Which, by the way, I don't think I'm spreading much there; I think that's a very well-known hack, but it still works, and that's an area of alignment.

[14:42] All right, where are we going after this? What are things we would expect ChatGPT-5 to be able to do? Retrieval-augmented generation is something that has become big. It's fundamentally where a model will call a database to get fresh facts. It's like an open-book exam. This can reduce hallucinations if you construct it well.
[14:59] It can also constrain the model: if you put a RAG on in a way that forces the model to only look at that data, that keeps the model from thinking outside of that space in a way that is unhelpful, because it turns out you need more data than that. So I have seen RAG architectures that are tremendously useful, because the model can go and get the data, come back, and also think more broadly. And I've also seen RAG architectures that kind of feel like a dead end, because you go in and get the answers out of the HR policy manual and it's like, that's all we got, there's not really much to it, and nobody uses it. So RAG is one of those things you have to actually use carefully.

[15:36] The second big one that you want to expect ChatGPT-5 (say that five times fast) to go after is tool use. There will be a lot of tool use in ChatGPT-5: outputting JSON, triggering calculators, databases, agents, extending beyond static text. We already see this with o3; I expect more of it.

[15:54] Mixture of experts is quite controversial. I don't know if they'll talk about it, but fundamentally there's a sense in which models will sometimes call special expert submodels, and a router will choose where to activate them, which can lead to efficient scaling. That might be under the surface. They may not tell us, and frankly they may not tell us if they're using a RAG model. They are using something to keep context windows rolling that they aren't talking a ton about, and they're also using something for memory. One of the interesting things about OpenAI as a team is that they're not super transparent so far about how they do some of these things. That may change, because they're also rumored to be introducing an open-source model in July along with ChatGPT-5. We will see.
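A minimal RAG loop can be sketched as retrieve-then-prompt. The documents and the word-overlap retriever below are toy assumptions (production systems retrieve by embedding similarity, and the built prompt would be sent to a real model API):

```python
documents = [
    "PTO policy: employees accrue 1.5 vacation days per month.",
    "Expense policy: meals over $50 require an itemized receipt.",
    "Office policy: badge access is required for all office visits.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    # Real RAG systems use embedding similarity and a vector index.
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The "open-book exam": fresh facts are pasted into the prompt so
    # the model grounds its answer instead of relying on its weights.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many vacation days do employees accrue?"))
```

The dead-end failure mode described above shows up at exactly this step: if the prompt forbids the model from reasoning beyond `context`, the system is only as useful as the handful of documents it retrieves.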
[16:34] Time will tell. This brings me to the current limitations. Yes, transparency is a question. Hallucinations are definitely a concern. Sam Altman admitted on stage recently that they are figuring out that hallucinations work differently with reasoning models than with non-reasoning models, that this is leading to questions for them, and that they're wrestling with how to align it better. I think that's a very perceptive approach, because to me, hallucination type really changes with the model. If you have a simpler model, it's just going to be a domain-completeness error; it's going to be, well, this is just wrong. If you have a more complex model, the hallucination, quote unquote, may actually be a complete thought that's very coherent and scaffolded out correctly, and the error may be that reality isn't as scaffolded out and complete as you think. So I will be curious how ChatGPT-5 addresses hallucinations, how it addresses bias from training data, and how it addresses multi-step reasoning and working off of memory. There's going to be a lot to learn from.

[17:27] Okay, we have gone through ChatGPT-5 and what to expect, and a little bit about how AI works. Now I want to give you the cheat sheet on how you can keep learning. Number one, I want you to look through the introduction to large language models that Andrej Karpathy gives on his channel. It is an absolutely phenomenal introduction to large language models and AI. The neural network series by 3Blue1Brown, also on YouTube, is also extraordinary. And the Stanford CS course (say that five times fast) is also an incredible 16-lecture course.
[18:00] If you just do those three, you are already going to be farther ahead than 98% of people, really. There are a few others I could get into, but for the sake of time, we'll jump a little bit forward. I now want to give you the 11 people that I think will give you the most signal versus noise that I can find anywhere on the internet for AI.

[18:18] Number one, you probably haven't heard of him: Simon Willison. He co-created the Django web framework. He coined the term "prompt injection." He writes phenomenal blog posts, and he has a ton of them, like over 1,300. He's built LLM command-line tools, and he's absolutely an authoritative resource. Ethan Mollick is number two. He is tremendously influential on AI. He wrote a book on it. He's a Wharton professor, and he has been tremendously clear in describing the impact of AI on both academia and work. I've mentioned Andrej Karpathy, former Tesla AI director and OpenAI co-founder. He has done a phenomenal job of teaching AI, and that is why I recommended some of his courses. He is able to take a complex concept and distill it into something simple and understandable in a way that I just rarely see anywhere else.

[19:09] Okay, let's follow a few others. Four and five, I think you're not going to be surprised by. Number four is Sam Altman, OpenAI CEO. I think enough said. Number five is Dario Amodei, Anthropic CEO. Again, enough said. Demis Hassabis is slightly less well-known unless you're deeper in the space. He did win the Nobel Prize in 2024, and he did so for his AlphaFold work in chemistry. Fundamentally, he is one of the leading minds on AI, and he's especially deep working with Google on the science side of things. Ilya Sutskever is another co-founder of OpenAI.
[19:42] He has now founded Safe Superintelligence, and he is pursuing superintelligence directly. He's not doing product releases; you won't see him at a dev day, anything like that. All he's doing is focusing on superintelligence. Okay. Number nine, Claire Vo. She has done a phenomenal job talking about how people actually use AI. She's built a product called ChatPRD, and she is one of the leading lights on how you apply AI in the workplace. Number ten, Dwarkesh Patel. He's become, I guess, Silicon Valley's favorite podcaster. He interviews really well. He's deeply read, he's deeply thoughtful; mostly you follow his podcast because the people he picks are interesting and he has very long and interesting conversations with them. Mary Meeker, I talked about her recently on this channel. She is phenomenal in the sort of deep-trends, investor-level space. I covered her 340-page AI trends report. She is someone who has been investing in the internet and in tech for decades, and she is renowned for her sharpness. Those are the 11, right? Those are the 11 to follow.

[20:42] Let's wrap this up. What do you now know that most people don't? One, you know that GPT-5 isn't just GPT-4 but bigger. That by itself is a big piece of knowledge. Two, you can tell me and everyone else the quick journey from spam filters to ChatGPT. I just told it to you; you can go back and rewatch it if you need to. Number three, you know that LLMs are sophisticated pattern recognizers. You now have the clear English I just gave you that explains how they work. It's not magic. You know where to go to learn about AI; I gave you some courses. And you know who to follow for signal over noise. I want you to realize that AI isn't about keeping up with every Twitter thread.
[21:22] It's about having a solid foundation. It's about knowing where to look, having the right mental models and the right guides that will set you up for this iPhone moment in 2025. We are replatforming AI. It's not just ChatGPT-5, by the way. I would expect a lot of other significant replatforming moves from Google, from Anthropic, potentially from Grok, from DeepSeek. Model makers are in a race to the finish line. Meta is going to get in there at some point, and they are all trying to get to this moment of establishing a platform. They all know that LLMs by themselves are yesterday's news. They need to get to powerful models that ship compelling enterprise user interfaces and compelling experiences for consumers. That is the story. That is the iPhone moment story for 2025. And I want you to understand what drives it all. Good luck out there.