
ChatGPT‑5 Review & Memory Battle

Key Points

  • The presenter demonstrated how ChatGPT‑5 makes it simple to create tiny, practical apps, highlighting a 14‑day Kyoto itinerary that sparked requests for remixing and prompting tutorials.
  • He noted a recurring pattern after major ChatGPT releases: initial excitement followed by disappointment and a lull, while the broader AI field continues advancing.
  • Recent AI news was summarized, focusing on Claude’s new “memories” feature, which retrieves past conversation snippets rather than maintaining a persistent, editable memory store.
  • Compared to ChatGPT’s more controllable but sometimes opaque memory system, Claude’s approach offers richer retrieval options but can produce inconsistent, probabilistic outputs across similar queries.


**Source:** [https://www.youtube.com/watch?v=v1Ham9sIWgo](https://www.youtube.com/watch?v=v1Ham9sIWgo)
**Duration:** 00:23:43

## Sections

- [00:00:00](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=0s) **ChatGPT‑5 Prompt Walkthrough Demo** - The speaker showcases a ChatGPT‑5‑generated Kyoto itinerary, promises to walk the audience through the prompting process, offers a bonus, and contrasts Claude's new memory features with ChatGPT's own implementation.
- [00:03:09](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=189s) **Scaling Context and Brain Modeling Advances** - The passage highlights Claude's new million‑token Sonnet context window, which dramatically expands processing capacity despite imperfect retrieval, and contrasts it with Meta's brain‑modeling challenge that predicts fMRI responses to video, illustrating that AI progress continues even if it's hard to measure.
- [00:06:33](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=393s) **AI Self‑Critique Infinite Loop** - The speaker explains how Google's Gemini model can get stuck in a repetitive self‑critique bug on difficult tasks, highlighting the unpredictable behavior of large‑scale AI systems as they reach near‑billion‑user adoption.
- [00:09:59](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=599s) **Clear Intent Drives Mini‑App Generation** - The speaker recounts an initial code‑generation failure, then shows how a brief, plain‑language prompt with clear constraints reliably produced a detailed Kyoto mini‑app blueprint, illustrating that precise intent can replace overly technical prompts.
- [00:13:42](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=822s) **Limitations of Code Versioning in Canvas** - The speaker explains that the canvas interface only shows the latest code version, not past revisions, and discusses using ChatGPT's "thinking mode" and iterative prompts to refine a travel itinerary.
- [00:17:11](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=1031s) **Rapid Prototyping & Post‑Launch Tweaks** - The speaker explains how a brief, 10‑15‑minute chat‑driven session generated an app, then discusses the frustrations, iterative bug fixes, and continuous enhancements that illustrate a product's evolving quality.
- [00:20:33](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=1233s) **Request for Bracketed Prompt Templates** - The speaker asks for prompts that enclose key user choice points (e.g., interest areas, city) in brackets to enable easy customization, reflects on improving prompting skills, and previews a comprehensive app design output, including a name, core features, UI layout, and non‑functional requirements, while suggesting further refinement with a more advanced model.

## Full Transcript
So, a few days ago, I reviewed ChatGPT‑5, and one of the things I emphasized is that it's really, really easy to make small, easy-to-use apps. And the one that caught everyone's attention was that I built a 14-day travel itinerary for a trip to Kyoto, Japan. I had people messaging me saying, "Hey, can I remix it for my city?" I had a lot of people saying, "Can you walk me through the prompting process?" We are going to do that today.

But first, you get a little bonus. And the bonus is not about ChatGPT, because one of the things I want to emphasize is that the news keeps happening. Every time there's a major release with ChatGPT, I see the same audience reaction. I see people saying, "It wasn't what I expected. It's a little bit disappointing." And there's this sort of lull afterward. Everyone loses their energy. But AI keeps marching on. And in particular, we've seen a lot of really interesting updates from other labs, not from ChatGPT. So, just to give you a sense of perspective before we dive deep into ChatGPT‑5, I want to give you a few snippets of news from the last day. Specifically, we have five pieces of news.

We're going to start quickly with Claude. Claude launched their memories feature. I have tried it out, and I want to caution you if you're used to ChatGPT: this is not the same memories feature. Number one, ChatGPT enabled it, you could turn it on and it would just work, and you can edit individual memories. I don't know that a lot of people do that, but you can literally see what the system remembers. It comes as little lines with a little delete button in the settings. That is not how memory works in Claude. In Claude, it's retrieval-based. You actually have to steer the memory.
All it does is search through your past conversations based on your current conversation. So, you have to ask it in the current conversation: please remember this or that. In my experience, as I have played with it over the last day or so, this memory feature is not as dependable as ChatGPT's memory feature, but it gives you a richer range of options. The memory feature in ChatGPT is famously somewhat uncontrollable. You don't really know what you're going to get. It will remember certain things and you wonder why, and that's why they give you the ability to edit. With Claude, you can decide exactly what you want it to retrieve from past conversations, but it doesn't retrieve it the same way every time. I've actually tried this. I asked the same query in a fresh chat to the same model two different times, and I got very differently structured answers. Similar, overlapping content; it wasn't completely off base. But keep in mind that this is not surgical retrieval. The model is running this through a probabilistic token architecture, and you're getting different formats at different times. So, Claude launching the memories feature is a big update. It's the first major model maker besides ChatGPT that has some kind of memory. And that has been one of the stickiest features in ChatGPT. I know lots of people who stay with their ChatGPT subscriptions just because it's the only one that has memories. That's starting to change, and I would expect it to change more.

Second, also from Claude: Claude launched a 1 million token context window for Sonnet. That is a 5x increase from the previous 200,000-token limit in the API. It enables you to process codebases of 75,000 lines all at once.
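As a quick sanity check on those numbers (the figures are the ones quoted in the transcript, not independently verified), the jump from 200,000 to 1,000,000 tokens and the 75,000-line claim imply a plausible per-line token budget:

```python
# Figures as quoted in the transcript (assumed, not independently verified).
old_window = 200_000     # previous Sonnet API context limit, in tokens
new_window = 1_000_000   # new Sonnet context window, in tokens
lines_of_code = 75_000   # codebase size claimed to fit in one request

print(new_window // old_window)    # 5  -> the "5x increase"
print(new_window / lines_of_code)  # ~13.3 tokens per line of code
```

Roughly 13 tokens per line is a reasonable average for source code, so the two claims are at least internally consistent.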
You can do extensive document sets while maintaining a degree of coherence. Now, is Sonnet perfect? Does it have perfect retrieval across that larger context window? No. But neither does any other model. The point is that it is easy now to handle extremely large and complex queries in a way that wasn't easy even 3 or 4 months ago. This is another sign that progress just keeps drumming along. I know there was a lot of conversation after ChatGPT‑5 launched that basically amounted to "is progress over?" I would argue that we have a frog-boiling-in-the-pot problem. Progress isn't over. We've just lost sight of the ability to assess it correctly.

Let's jump from Claude to Meta. Meta launched a brain modeling challenge where their brain and AI team was able to encode a 1 billion parameter brain of some sort. I don't know; it's an artificial brain, right? And it basically predicts fMRI brain responses to movies by fusing together video frames, audio, and dialogue. In a sense, what Zuckerberg is trying to do here is build an artificial brain to figure out how to make the video algorithms for his Meta platforms more addictive. That's really what's going on, because if he can model the brain's response to video, he can make the video more directly stimulating to brains and then get more of your attention. I know that sounds dark, but given the direction Meta has gone with the way they've engineered the algorithm, I think it's a fair call-out.

Second to last but not least, we have Merge Labs forming. Merge Labs is related to OpenAI, but it's not OpenAI. This is a new brain-computer interface startup involving Sam Altman. OpenAI is reportedly an investor, and Sam is listed as a co-founder.
It would directly compete with Elon Musk's Neuralink. What this says to me is this: the whole idea of a brain-computer interface is not just going to disappear. It's not just an Elon pet project. We are not anywhere close to production on these things yet, but I would expect us to be talking more about commercial products and the ethical questions they raise in 2027. That's my personal horizon for when I think we're going to start to see something like this come out. And you will see a few early adopters who are like, "Yes, please hook my brain to the AI. I want to be part of the singularity." You'll see a lot of people who are like, "Get that away from me. I don't want to touch it with a 10-foot pole." Let's save that debate for 2027. For now, just notice that there are multiple tech titans getting involved, and this isn't going anywhere.

Last but not least, Google Gemini has a Marvin the Paranoid Android problem. If you've read The Hitchhiker's Guide to the Galaxy, you know that Marvin the Paranoid Android is a depressed little robot that just cannot get over the curse of its own intelligence. That is very much the vibe from Google Gemini. And what's interesting is that it appears to be a self-deprecation loop, where Gemini is programmed to apologize when it can't get something done and then try again. But when the task is sufficiently hard, it seems to get into a dramatic self-critique loop, where it critiques itself over and over and over again for failing to accomplish a difficult task, until it literally refuses to proceed further with tasks.
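Google hasn't published the root cause or the fix, so the following is only a generic illustration: the failure mode described above (critique, retry, critique again, forever) is what a self-correction loop looks like when it has no exit condition, and the standard guard is a retry cap. All function names here are hypothetical:

```python
def attempt_with_retries(task, run, critique, max_retries=3):
    """Run a task with self-critique, but cap the loop instead of retrying forever."""
    notes = []
    for _ in range(max_retries):
        result = run(task, notes)    # produce an attempt, seeing prior critiques
        ok, note = critique(result)  # self-evaluate the attempt
        if ok:
            return result
        notes.append(note)           # feed the critique into the next attempt
    # Bounded exit: surface the failure instead of looping on apologies.
    raise RuntimeError(f"gave up after {max_retries} attempts: {notes[-1]}")
```

The interesting part is the last line: without it, a model that keeps failing its own critique has nowhere to go but another apology.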
And so the leader of Google's AI project, Logan Kilpatrick, has called this an "annoying infinite looping bug," which is one way to put it, and has said that the team is working to fix it. This reminds me: we have now hit close to a billion users with AI. We are seeing examples of AI behavior at scale that just did not show up in anybody's testing. It reminds me how probabilistic these tools are and how much unique flavor there is in each model. I think a lot of the reaction to ChatGPT‑5 is frankly from the sense that we have a new colleague to work with, and we don't know the new colleague yet. Like, hey, who's Frank? Frank is new here, right? We probably should get to know Frank before we trust Frank with our stuff. These models have personality. They have weird quirks. And Google really underlines that with the Gemini depression scale, so to speak. We will see when they get it fixed, but it's reminding me how unpredictable these tools can be, even from very large model makers.

So, those were the five pieces of news. Let's go from there to part two of this video, where I dig into the Kyoto travel app that I demoed back in my ChatGPT‑5 review. This will be an on-screen demo. I'm going to share my screen, walk you through the prompts, show you what I got, and we'll have some fun.

Okay, first things first, I want to show you what I showed the world. This is the app that everybody got to see. It has different emphases that you can click here. You can preset it for ramen. You can preset it for moss (I said that we wanted to see moss temples in Kyoto) or for balanced. You can click around. You can add things if you want to add something. You can choose a different place. Like, I could add "Guini" in the morning here, and it will just add it right there.
"Calm cloers" sounds like a nice way to start the morning. We have some soy broth. Maybe in the afternoon I can hit up a coffee shop. And I can just unclick this and hit up the coffee shop: a weekender's roaster. That sounds pretty great. Just add that into the afternoon. You can see that you can build up some notes. It gives you a sense of what's going on. I have a kid, so it gave me a sense of what would happen with the baby. Is it a perfect app? I want to emphasize that it is not a perfect app, but it's relatively easy to build and to remix. You see that prominent little button? It's easy to edit. You can edit it yourself.

Let's look at the prompts that led to this app. All right, here we are in ChatGPT. This is the actual conversational chain that I used to produce this, and I want to call out how much you can do just in the conversation. We'll go through it, but it's really exciting to me. So, this was my initial prompt: Can you do some research? Build me an interactive mini app I can use to explore various options for visiting Kyoto next year. Then I list three or four interests, and I say how far I'm willing to travel. And I say who this is for. Who's the audience? It's a family app. It's for my wife. And: please do the research you need to develop specific recommendations that could be used to guide a real two-week itinerary. So it goes away and it thinks for three minutes. It comes back with some code, and it comes back with a teaser. The problem was, the code failed partway through. So this is me being really blunt with you: this was the first launch day, OpenAI servers were under a lot of pressure, and this just didn't generate. So I said, try again. So it comes back initially, and I think it's constraining tokens.
It comes back with a visual teaser. It says, look at how great Kyoto is. Here's a mini app blueprint. All the places you could go. These are all real places; it's citing them inline. It gives you hot springs. It gives you interaction ideas. A 5-day snapshot. Now, I could have edited this heavily. I could have said, this is not enough, I need more options, etc. In this case, I really wanted to see how good a job it does at coding. I say: please code it as a mini app. That's it. Keep in mind, the sum total of my substantive interaction with this has been three or four lines here and then a line here. Now, I am sometimes known as the really technical prompter. And one of the things I like to balance that with is to remind people that if your intent is really clear, it doesn't have to be a super technical prompt. If you go back to the top here, this was actually pretty clear intent. It was very clear where I wanted to go, what I was interested in, how far I was willing to travel. I put some constraints in. I defined the audience. I did a lot of the things that a technical prompt would do; I just did it in a plain sentence, and that seemed to work well to evoke a really detailed app recommendation. So, I say, "Yes, please code it." It then works for a minute and a half, and I don't love what it comes up with. Principally, I don't love what it comes up with because it's just incredibly ugly, and it's got sort of dark blue text on black. I can't see anything. It's not interactive. It just looks terrible. This is an example where I am showing you what it looks like to actually code versus what you see in the shiny demos. Is it still worth it? Yes, because I want you to see how quickly I can get to something interesting and usable.
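The point the speaker is making is that the prompt was plain language but still covered the bases a "technical" prompt would: goal, interests, constraints, audience. That structure is easy to make reusable; here is a minimal sketch, with field names of my own invention rather than anything from the video:

```python
def build_prompt(goal, interests, constraints, audience):
    """Assemble a plain-language prompt that still covers the bases a
    technical prompt would: goal, interests, constraints, and audience."""
    return (
        f"{goal} "
        f"I'm interested in {', '.join(interests)}. "
        f"Constraints: {'; '.join(constraints)}. "
        f"This is for {audience}."
    )

prompt = build_prompt(
    goal=("Do some research and build me an interactive mini app "
          "for exploring a two-week trip to Kyoto next year."),
    interests=["ramen", "moss temples", "onsen"],
    constraints=["day trips only, within about an hour of the city"],
    audience="my wife and me, traveling with a one-year-old",
)
```

Swapping the values produces the "remix it for my city" variant that viewers asked for.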
Okay. So, I say: I can't see it. That's it. That's all I tell it. I give it a screenshot and say I can't see it. It says, "I've updated the code," basically telling me it fixed it and I can see things more easily, right? I then come across a bug. And when it says, "I've fixed the syntax issue," that is an indicator that when I tried to run the code, I hit "fix this bug," which is an actual thing you can do in the UI. I can't do it now, because we fixed the bug, but that's how that works. It then says it fixed the bug, and I say, fix another bug, right? This is some of the reality. I am starting to get fed up because there's a third error. I am now annoyed. And so I start to get a little bit annoyed. I say: you know what, you've given me so many errors. This is the third error in a row. The app you build is dark-on-dark font. I cannot see it. I need it beautiful, clear, minimalist, and I need it to freaking work. I can't tell you whether "freaking" is actually a useful prompting word. It was my expression of frustration at that point. And it actually went all in on it. One of the things I noticed here, coming back to the prompt: I did not specify a visual style before, and that was probably on me. That's an example of where a more technical prompt would have challenged me to set a more beautiful style, and I just didn't do it. Anyway, it comes back. It nukes the buggy snippet. It replaces it with a clean, light theme, minimal React, all of this stuff. I then come back, and this is the first time it's actually functional. The map and the information links don't work, and I need a plain-English rationale. So, if you remember when I showed you the real app, it had a plain-English description of the day.
That wasn't there in the original version. Now, you might be wondering, why aren't you showing me these code versions as we go? The answer is very simple: inside the same canvas, the code does not roll back the way it does in Claude. If I click on that code and run it, it shows whatever is the most recent version. You can access the current code through this button; you cannot access the old code. So we're going to stick with it. And then it goes to the end, which I think makes no sense. Let's go back. Map and info links don't work. Give it a Japanese-inspired aesthetic. So then it starts to say, "Okay, let's fix these things." I then say, "Okay, we finally have something. Do the whole 14-day trip." And then it starts to ask for extras, which is what ChatGPT classically does, especially 5. Would you like the rationale to reflect the couple's emotional arc, or should it be more practical and logistical? And I say, "Look, let's be real. I'm traveling with a one-year-old. Factor that in. We'll probably want some extra time." By the way, if you're wondering what my ChatGPT version is, do not look up here; this is the current sort of default. Instead, recognize that whenever it's spending time thinking, this is ChatGPT‑5 thinking mode. I've already shown you a few thinking mode examples. I was using thinking mode because I felt like I was getting better results. I actually tried this with ChatGPT‑5 without thinking, and it just did not give me runnable code, which is not super surprising. It then refactors it. Do you want me to flesh all of this out? I need to have some meaningful controls. At this point, we are really optimizing, right?
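Because the canvas only exposes the latest revision, one workaround (my own habit, not something shown in the video) is to paste each iteration into a local snapshot so you can diff or roll back yourself:

```python
import hashlib
import pathlib
import time

def snapshot(code: str, directory: str = "canvas_versions") -> pathlib.Path:
    """Save one canvas iteration to a timestamped, content-hashed file."""
    path = pathlib.Path(directory)
    path.mkdir(exist_ok=True)
    digest = hashlib.sha256(code.encode("utf-8")).hexdigest()[:8]
    out = path / f"{int(time.time())}_{digest}.txt"
    out.write_text(code, encoding="utf-8")
    return out
```

Each call writes a file named by timestamp plus a short content hash, so you keep a full revision history even though the canvas itself only shows the newest code.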
And at this point, you are probably also curious about what you can actually see, right? Like, what does another version besides the one I've shown you look like? Well, this is the latest version; I'll just show it to you. What's interesting about this one is that it's very Japanese-inflected. It literally brought in Japanese language, which I don't read. I thought that was a nice touch, but perhaps not necessary. It expanded the number of categories a fair bit, which is something I asked it to do in later versions, and it has filled out all of these elements. One of the things you'll notice if you go through my production version is that we have an issue with not enough moss-heavy, ramen-night-heavy, onsen-heavy things to do, and so we need to fill out morning, afternoon, and evening for 14 days. So one of the later things I did was basically say, "You need to get creative and fill out a full 14-day itinerary." And you can see that it did. Now, some of it is a family rest window, but realistically, with a kid, that's actually not a bad idea. And it gives you longer and larger narratives in the new version. And it gives you a lot more options. So, as an example, if I want to go to some of these new ones, I can do a lot more around Kyoto. We can go to the Arashiyama bamboo grove if we want, and I can add that in if we don't already have it. We can go to the railway museum. This is good enough as it stands that I am already thinking about using it for the production planning of a trip.
And I think that underlines one of the things I really tried to call out in my original review: if you go back here, it looks somewhat frustrating, right? You're going back and forth, you're asking it to make edits. There are blanks. Please fix this. I want to actually have a lot more creativity. The way I'll put it is that in this chat experience, it can feel frustrating. And that's something that didn't come through in the ChatGPT‑5 presentation. But there's the reality of getting through to the end of this, getting through little bugs like this that happened post-production. I was fairly frank; I'm not going to say that word on video, but you can read it. And demanding restoration, getting it back. It's encouraging to me that you can restore stuff just by yelling at it. And it's encouraging to me that after this whole conversation (and this is post-launch, right?), if you want to think about how long it took just to get to launch for the app that you saw at the beginning of this video, it was about 15 minutes of conversation. It was very easy. It was very fast. It might have been less; it might have been 10 minutes all told. And it stopped about here, and that was it. All the rest of this is post-production: me continuing to mess with it because it's frankly fun. It writes out the code, these hundreds of lines of code that it's written out here, and it's continued to make it better. It's added in a full two-week planner. It's added in more interests. I can continue to work with it. And I think one of the measures of a good product is that you do continue to work with it.
And so, even though (if you scroll back up to the top) my initial prompt missed some things I would like to have added, missed the aesthetic I wanted, missed the controls I wanted to add, things that a better prompt would have done (there is a reason I recommend using solid prompts), even though I was an honest human being and I was realistic and I was in a rush and I just put this down, I still got to the app that I showed you all in 10 minutes. And then in another, I want to say, 15 minutes of messing around, I got to a much more involved, destination-heavy version, with lots more places to go, like a riverside walk, and better descriptions. I basically got to a V2 in about 15 minutes after the original 10 in the chat. That's 25 minutes over 2 or 3 days, and you're swearing at it. You're like, why isn't this fixed? This bug is annoying. But it has never before been possible to make this kind of app as an individual not looking at the code. And I did not touch or change any piece of code here. I just messed with it until I got what I wanted. I chatted with it and I yelled at it until I got what I wanted. That is how easy it is now to make useful little app artifacts. I think it's a massive game-changer. I think the way ChatGPT‑5 works in the canvas is special, and there's a ton to think about with how this is going to change our work going forward. So, I hope you enjoyed a little description of how I built this thing. Let me know what your questions are.

I can't say that this is the perfect or best way to build this. I think, going back, one of the things I would do is actually say (and I'll actually do this so you can see it): "Looking back over our work so far, write me a fantastic prompt."
And I'll include this prompt in the article. Write me a fantastic prompt that would create this final version of the app. As an extra treat, please include brackets around key user choice points, like interest areas, city, etc., so a user can easily modify this prompt for a different place. So I'm basically asking it to reflect back and figure out how to prompt better next time. I like to do that because it gives me a chance to learn how I can prompt the model better, to learn what I could change and improve. And I will be very curious to see what it comes up with. Whatever it comes up with, I will be sure to let you guys know. I do not want you to have to sit there and watch it just spit stuff out, so I think I'm inclined to let this video go for now. I may append a little bit at the end once something comes through.

Okay, so it spent some time thinking. It came back, and it actually has a very complete prompt here. If you want to get this even better, you can run this through ChatGPT‑5 Pro and it will be even more deliberate with the prompt; I will show you the side-by-side in the article so you can see that. But for now: it's going to give you a name for the app, and places where you can fill stuff in. It's going to give you core features, things that you can mix in. It's filled them in, but you can obviously do more than that. It's going to give you a UI layout. Obviously, you don't have to use it; you can use something else if it's a different destination. It's going to give you some non-functional requirements I certainly didn't ask for originally. And then some aesthetic details that you can change.
This is fantastic to me, because it is showing me how the system thinks about what it builds and what a controllable surface for that build is. It's giving me all the things it thinks are variables. One example of a variable that I think would need some work in an initial prompt: it truly is storing my itinerary somewhere in local storage, but it's going to need to research and develop your itinerary, right? So you would need to include that and say, besides the local storage, you need to research and develop this, or something. But this is how we learn. This is how we go from, at the top, just a short three-line prompt to this gigantic prompt at the end. I did not have to actually paste this prompt in to get this result. And I bet, because LLMs are probabilistic, that if I paste this prompt, it also won't look exactly the same. And that's okay. The point is that this prompt captures a lot of the detail that I iteratively evolved into over the course of this conversation.

So, wrapping up: all told, about 25 minutes in this chat over two days; about 10 minutes to get to the production app that I showed you earlier; about 15 minutes to get to the V2 that I showed you in this video. And you're going to get these prompts as well, so you can look into them and dive into them as follow-ups that will help you personalize this and use it in other places. I don't think it's just for travel. It's really for anything that you have to plan in space and time. You could also modify this for a corporate event really easily. I hope you've enjoyed this breakdown. I think this video's gone on long enough, and I will catch you on the flip side.
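As a closing sketch of the bracketed-prompt idea requested above: keep the choice points as [PLACEHOLDERS] and substitute per trip. The bracket names below are illustrative, not the ones ChatGPT actually generated:

```python
TEMPLATE = (
    "Build an interactive mini app for a [DAYS]-day trip to [CITY]. "
    "Emphasize [INTEREST_AREAS], keep travel within [MAX_TRAVEL], "
    "and design it for [AUDIENCE]."
)

def fill(template: str, **choices: str) -> str:
    """Replace each [KEY] choice point with the user's chosen value."""
    for key, value in choices.items():
        template = template.replace(f"[{key}]", value)
    return template

prompt = fill(
    TEMPLATE,
    DAYS="14",
    CITY="Kyoto",
    INTEREST_AREAS="ramen, moss temples, onsen",
    MAX_TRAVEL="one hour of the city center",
    AUDIENCE="a couple traveling with a one-year-old",
)
```

The same mechanism covers the corporate-event remix mentioned at the end: change the template text, keep the fill step.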