ChatGPT‑5 Review & Memory Battle
Key Points
- The presenter demonstrated how ChatGPT‑5 makes it simple to create tiny, practical apps, highlighting a 14‑day Kyoto itinerary that sparked requests for remixing and prompting tutorials.
- He noted a recurring pattern after major ChatGPT releases: initial excitement followed by disappointment and a lull, while the broader AI field continues advancing.
- Recent AI news was summarized, focusing on Claude’s new “memories” feature, which retrieves past conversation snippets rather than maintaining a persistent, editable memory store.
- Compared to ChatGPT’s more controllable but sometimes opaque memory system, Claude’s approach offers richer retrieval options but can produce inconsistent, probabilistic outputs across similar queries.
Sections
- ChatGPT‑5 Prompt Walkthrough Demo - The speaker showcases a ChatGPT‑5‑generated Kyoto itinerary, promises to walk the audience through the prompting process, offers a bonus, and contrasts new memory features from Claude with ChatGPT’s own implementation.
- Scaling Context and Brain Modeling Advances - The passage highlights Claude’s new million‑token Sonnet context window that dramatically expands processing capacity despite imperfect retrieval, and contrasts it with Meta’s brain‑modeling challenge that predicts fMRI responses to video, illustrating that AI progress continues even if it’s hard to measure.
- AI Self‑Critique Infinite Loop - The speaker explains how Google's Gemini model can get stuck in a repetitive self‑critique bug on difficult tasks, highlighting the unpredictable behavior of large‑scale AI systems as they reach near‑billion‑user adoption.
- Clear Intent Drives Mini-App Generation - The speaker recounts an initial code‑generation failure, then shows how a brief, plain‑language prompt with clear constraints reliably produced a detailed Kyoto mini‑app blueprint, illustrating that precise intent can replace overly technical prompts.
- Limitations of Code Versioning in Canvas - The speaker explains that the canvas interface only shows the latest code version, not past revisions, and discusses using ChatGPT's "thinking mode" and iterative prompts to refine a travel itinerary.
- Rapid Prototyping & Post‑Launch Tweaks - The speaker explains how a brief, 10‑15‑minute chat‑driven session generated an app, then discusses the frustrations, iterative bug fixes, and continuous enhancements that illustrate a product’s evolving quality.
- Request for Bracketed Prompt Templates - The speaker asks for prompts that enclose key user choice points (e.g., interest areas, city) in brackets to enable easy customization, reflects on improving prompting skills, and previews a comprehensive app design output—including a name, core features, UI layout, and non‑functional requirements—while suggesting further refinement with a more advanced model.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=v1Ham9sIWgo](https://www.youtube.com/watch?v=v1Ham9sIWgo) · **Duration:** 00:23:43

Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=0s) ChatGPT‑5 Prompt Walkthrough Demo
- [00:03:09](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=189s) Scaling Context and Brain Modeling Advances
- [00:06:33](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=393s) AI Self‑Critique Infinite Loop
- [00:09:59](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=599s) Clear Intent Drives Mini‑App Generation
- [00:13:42](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=822s) Limitations of Code Versioning in Canvas
- [00:17:11](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=1031s) Rapid Prototyping & Post‑Launch Tweaks
- [00:20:33](https://www.youtube.com/watch?v=v1Ham9sIWgo&t=1233s) Request for Bracketed Prompt Templates
So, a few days ago, I reviewed
ChatGPT-5, and one of the things I emphasized
is it's really, really easy to make
small, easy to use apps. And the one
that caught everyone's attention was
that I built a 14-day travel itinerary
for a trip to Kyoto, Japan. I had people
messaging me saying, "Hey, can I remix
it for my city?" I had a lot of people
saying, "Can you walk me through the
prompting process?" We are going to do
that today. But first, you get a little
bonus. And the bonus is not about
ChatGPT, because one of the things I want to
emphasize is that the news keeps
happening. Every time there's a major
release with ChatGPT, I see the same
audience reaction. I see people saying,
"It wasn't what I expected. It's a
little bit disappointing." And there's
this sort of lull afterward. Everyone
loses their energy. But AI keeps
marching on. And in particular, we've
seen a lot of really interesting updates
from other labs, not from ChatGPT. So,
just to give you a sense of perspective
before we dive deep into ChatGPT-5, I
want to give you a few snippets from the
last day in terms of news. Specifically,
we have five pieces of news. We're going
to start quickly with Claude. Claude
launched their memories feature. I have
tried it out. I want to caution you if
you're used to ChatGPT. This is not the
same memories feature. Number one, ChatGPT
enabled it and you could turn it on
and it would just work and you can edit
individual memories. I don't know that a
lot of people do that, but you can
literally see what the system remembers.
It comes as little lines with a
little delete button in the settings.
That is not how memory works in Claude.
In Claude, it's retrieval-based. You
actually have to steer the memory. All
it does is it searches through your past
conversations based on your current
conversation. So, you have to ask it in
the current conversation. Please
remember this or that. In my experience,
as I have played with it the last day or
so, what I have seen is that this memory
feature is not as dependable as
ChatGPT's memory feature, but it gives you a
richer range of options. So, the memory
feature for ChatGPT is famously
somewhat uncontrollable. You don't
really know what you're going to get. It
will remember certain things and you
wonder why. And that's why they give you
the ability to edit. With this one, with
Claude, you can decide exactly what you
wanted to go retrieve from past
conversations, but it doesn't retrieve
it in the same way every time. I've
actually tried this. I asked the same
query in a fresh chat to the same model
two different times, and I got very
different structured answers. Similar
overlapping content. It wasn't
completely off base. But keep in mind
that this is not surgical retrieval. The
model is running this through a
probabilistic token architecture. and
you're getting different formats at
different times. So, Claude launching the
memories feature, big update. It's the
first major model maker that has some
kind of memory besides ChatGPT. And
that has been one of the stickiest
features in ChatGPT. I know lots of
people who stay with their ChatGPT
subscriptions just because it's the only
one that has memories. That's starting
to change. I would expect it to change
more. Second one, also from Claude.
Claude launched a 1 million token
context window for Sonnet. That is a 5x
increase from the previous 200,000 token
limit in the API. It enables you to
process code bases of 75,000 lines all
at once. You can do extensive document
sets while maintaining a degree of
coherence. Now, is Sonnet perfect? Does
it have perfect retrieval across that
larger context window? No. But neither
does any other model. The point is that
it is easy now to handle extremely
large and complex queries in a way that
it wasn't easy even 3 or 4 months ago.
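As a quick aside, the 5x claim and the 75,000-line figure are easy to sanity-check with back-of-the-envelope arithmetic. A small sketch (the tokens-per-line figure is my own rough assumption, not from the video):

```python
# Back-of-the-envelope numbers for the context-window jump described above.
old_window = 200_000    # previous Sonnet API limit (tokens)
new_window = 1_000_000  # new Sonnet context window (tokens)

print(f"Increase: {new_window // old_window}x")  # 5x

# A 75,000-line codebase at a rough ~10 tokens per line
# (assumption; dense code can run higher):
tokens_per_line = 10
codebase_tokens = 75_000 * tokens_per_line
print(f"~{codebase_tokens:,} tokens, fits in window: {codebase_tokens <= new_window}")
```

Actual token counts vary by tokenizer and content, so treat these as order-of-magnitude estimates only.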
This is another sign that progress just
keeps drumming along. I know that there
was a lot of conversation after
ChatGPT-5 launched that basically amounted to:
is progress over? I would argue with you
that we have a frog boiling in the pot
problem. Progress isn't over. We've just
lost sight of the ability to assess it
correctly. Let's jump from Claude to
Meta. Meta launched a brain modeling
challenge where their brain and AI team
was able to encode a 1 billion parameter
brain of some sort. I don't know. It's
an artificial brain, right? And it
basically predicts fMRI brain responses
to movies by fusing together video
frames, audio, and dialogue. In a sense,
what what Zuckerberg is trying to do
here is he's trying to build an
artificial brain to figure out how to
make his video algorithms for his
platforms on Meta more addictive. That's
really what's going on because if he can
model response in the brain to video, he
can make the video more directly
stimulating to brains and then he can
get more of your attention. I know that
sounds dark, but I think given sort of
the direction that Meta has gone with a
lot of the way they've engineered the
algorithm, it's a fair call out. Second
to last but not least, we have Merge
Labs forming. Merge Labs is related to
OpenAI, but it's not OpenAI. This is a
new brain computer interface startup
involving Sam Altman. OpenAI is
reportedly an investor, and Sam is
listed as a co-founder. It would
directly compete with Elon Musk's
Neuralink. What this says to me is this.
This whole idea of a brain computer
interface is not just going to
disappear. It's not just an Elon pet
project. We are not at the point where
we are anywhere close to production on
these things yet, but I would expect us
to be talking more about commercial
products and the ethical questions they
raise in 2027. That's my sort of
personal horizon for when I think we're
going to start to see something like
this come out. And you will see a few
early adopters that are like, "Yes,
please hook my brain to the AI. I want
to be part of the singularity." You'll
see a lot of people who are like, "Get
that away from me. I don't want to touch
it with a 10-ft pole." Let's save that
debate for 2027 for now. Just notice
that there are multiple tech titans
getting involved and this isn't going
anywhere. Last but not least, Google
Gemini has a Marvin the Paranoid
Android problem. So, if you've read
Hitchhiker's Guide to the Galaxy, you
know that Marvin the Paranoid Android
is a depressed little robot that
just cannot get over the curse of its
own intelligence. That is very much the
vibe from Google Gemini. And what's
interesting is that it appears to be a
self-deprecation loop where Gemini is
programmed to apologize when it can't
get something done and then try again.
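That apologize-and-retry behavior is essentially a loop, and a loop with no cap on attempts can spin forever on a task that never succeeds. A purely illustrative sketch of the pattern with a max-attempts guard (this is not Gemini's actual code):

```python
# Illustrative only: an apologize-and-retry loop with a max-attempts guard.
# Without the cap, a task that never succeeds would loop indefinitely.

def attempt_task(task: str) -> bool:
    """Stand-in for a model attempting a task; always fails here."""
    return False

def run_with_self_critique(task: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        if attempt_task(task):
            return f"done on attempt {attempt}"
        print(f"Attempt {attempt}: I apologize, I couldn't complete the task.")
    # The guard turns an endless self-critique loop into a clean give-up.
    return "giving up after repeated failures"

print(run_with_self_critique("sufficiently hard task"))
```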
But when the task is sufficiently hard,
it seems to get into a dramatic
self-critique loop where it critiques
itself over and over and over again for
failing to accomplish a difficult task,
until it literally refuses to
proceed further with the task. And so the
leader of Google's AI project, uh,
Logan Kilpatrick, has called this an
annoying infinite looping bug, which is
one way to put it, and has said that the
team is working to fix it. So this is
reminding me we have now hit close to a
billion users with AI. We are seeing
examples of AI behavior at scale that
just did not show up on anybody's
testing. It reminds me how probabilistic
these tools are and how much unique
flavor there is in each model. I think a
lot of the reaction to ChatGPT-5 is
frankly from the sense that we have a
new colleague to work with and we don't
know the new colleague yet. Like, hey,
who's Frank? Frank is new here, right?
Like, we probably should get to know
Frank before we trust Frank with our
stuff. These models have personality.
They have weird quirks. And Google
really underlines that with the Gemini
depression scale, so to speak. We will
see when they get it fixed, but it's
reminding me how unpredictable these
tools can be, even by very large model
makers. So, those were the five pieces
of news. Let's go from there to part two
of this video where I dig into the Kyoto
travel app that I demoed back in my
ChatGPT-5 review. This will be an on-screen
demo. I'm going to share my screen, walk
you through the prompts, show you what I
got, and we'll have some fun. Okay,
first things first, I want to show you
what I showed the world. This is the app
that everybody got to see. So, it has
different emphases that you can click
here. So, you can preset it for ramen.
You can preset it for uh moss. I said
that we wanted to see moss temples in
Kyoto or for balanced. You can click
around. You can add things if you want
to add something. You can choose a
different place. Like I could add uh
Guini in the morning here and it will
just add it right there. Calm cloers
sounds like a nice way to start the
morning. We have some soy broth. Maybe
in the afternoon I can hit up a coffee
shop. And I can just uncclick this and
hit up the coffee shop. a weekender's
roaster. That sounds pretty great. Just
add that into the afternoon. You can see
that you can kind of build up some
notes. It gives you a sense of what's
going on. I have a kid, so like it gave
me a sense of what would happen with the
baby. Is it a perfect app? I want to
emphasize that it is not a perfect app,
but it's relatively easy to build and to
remix. You see that prominent little
button? It's easy to edit. You can edit
it yourself. Let's look at the prompts
that led to this app. All right, here we
are in ChatGPT. This is the actual
conversational chain that I used to
produce this, and I want to call out how
much you can do just in the
conversation. We'll go through it, but
it's really exciting to me. So, this was
my initial prompt. Can you do some
research? Build me an interactive mini
app I can use to explore various options
for visiting Kyoto next year. Then I
list three or four interests and I say
how far I'm willing to travel. And I say
this is who this is for. Who's the
audience? It's a family app. It's for my
wife. Uh, and please do the research
you need to develop specific
recommendations that could be used to
guide a real two-week itinerary. Right? Uh,
so it goes away and it thinks for three
minutes. It comes back with some code
and it comes back with a teaser. Right?
Uh, the problem was the code failed
partway through. So this is me being
really blunt with you. This was the
first launch day. OpenAI servers were
under a lot of pressure and this just
didn't generate. So I said try again. So
it comes back initially and I think it's
constraining tokens. It comes back with
a visual teaser. It says look at how
great Kyoto is. Here's a mini app
blueprint. All the places you could go.
These are all real places. It's citing
them in line. Gives you hot springs.
Gives you interaction ideas. 5-day
snapshot. Now I could have edited this
heavily. I could have said this is not
enough. I need more options etc. In this
case I really want to see how good a job
it does at coding. I say please code it
as a mini app. That's it. Like, keep
in mind, the sum total of my
substantive interaction with this has
been three or four lines here and then a
line here. Now, I am sometimes known as
the really technical prompter. And one of
the things I like to balance that with
is to remind people that if your intent
is really clear, it doesn't have to be a
super technical prompt. If you go back
to the top here, this was actually
pretty clear intent. It was very clear
where I wanted to go, what I was
interested in, how far I was willing to
travel. I put some constraints in. I
defined the audience. I did a lot of the
things that a technical prompt would do.
I just did it in a plain sentence, and
that seemed to work well to evoke a
really detailed app recommendation. So,
I say, "Yes, please code it." It then
works for a minute and a half, and I
don't love what it comes up with. And
principally, I don't love what
it comes up with because it's just
incredibly ugly
and it's got sort of a dark blue text on
black. I can't see anything. It's not
interactive. It just looks terrible.
This is an example where I am showing
you what it looks like to actually code
versus what you see in the shiny demos.
Is it still worth it? Yes, because I
want you to see how quickly I can get to
something interesting and usable. Okay.
So, I say I can't see it. That's it.
That's all I tell it. I give it a
screenshot and say I can't see it. Um,
I've updated the code and it's basically
saying I fixed it and you can see things
more easily, right? I then come across a
bug. And so when it says I've fixed the
syntax issue, that is an indicator that
when I tried to run the code, I hit fix
this bug, which is an actual thing you
can do in the UI. I can't do it now
because we fixed the bug, but that's how
that works. It then says it fixed the
bug and I say fix another bug, right?
This is some of the reality, right? I am
starting to get fed up because there's a
third error, right? I am now annoyed.
Um, and so I start to get a little bit
annoyed. I say, you know what, you've
given me so many errors. This is the
third error in a row. The app you built
is dark-on-dark font. I cannot see it. I
need it beautiful, clear, minimalist,
and I need it to freaking work. Uh, I
can't tell you if freaking is actually a
useful prompting word. It was my
expression and my frustration at that
point. Um,
and it actually went all in on it. And I
think one of the things I noticed here,
coming back to the prompt, I did not
specify a visual style before, and that
was probably on me. That's an example of
where a more technical prompt would have
challenged me to set a more beautiful
style, and I just didn't do it. Anyway,
it comes back. It nukes the buggy
snippet. It replaces it with a clean,
light theme, minimal React, all of this
stuff. I then come back, and this is the
first time it's actually functional. The
map and the information links don't
work. Um, and I need a plain English
rationale. So, if you remember when I
showed you the real app, it had a plain
English description of the day. That
wasn't there in the original version.
Now, you might be wondering, well, why
aren't you showing me these code
versions as we go? The answer is very
simple: inside the same canvas, the code
does not roll back the way it does on
Claude. And so if I click on that code
and run it, it shows whatever is the
most recent version. You can access the
current code through this button. You
cannot access the old code. So we're
going to stick with it. And then it goes
to the end, which I think makes no
sense. Let's go back. Uh map and info
links don't work.
Um give it uh like a Japanese-inspired
aesthetic. So then it starts to say,
"Okay, let's fix these things." I then
say, "Okay, we finally have something.
Do the whole 14-day trip." Um, and then
it starts to ask for extras, which is
what chat GPT classically does,
especially five. Would you like the
rationale to reflect the couple's
emotional arc, you know, uh, or should
it be more practical and logistical?
And I say, "Look, let's be real. I'm
traveling with a one-year-old. Factor
that in. We'll probably want some extra
time." Uh, by the way, if you're
wondering what my ChatGPT version is,
do not look up here. This is the current
sort of default. Instead, recognize that
whenever it's spending time thinking,
this is ChatGPT-5 thinking mode. And so,
I've already showed you a few thinking
mode examples. I was using thinking mode
because I felt like I was getting better
results. I actually tried this with
ChatGPT-5 without thinking, and it just did
not give me runnable code, which is not
super surprising. Uh, it then refactors
it. Do you want me to flesh all of this
out? I need to have some meaningful
controls. At this point, we are really
optimizing, right? And at this point,
you are probably also curious for what
you can actually see, right? Like what
does another version besides the one
I've shown you look like. Well, this is
the latest version. I'll just show it to
you. What's interesting about this one
is it's very Japanese-inflected. So,
like it literally brought in Japanese
language, which I don't read. So, I
thought that was a nice touch, but
perhaps not necessary. it expanded the
number of categories a fair bit which is
something I asked it to do in later
versions
um and it has filled out all of these
elements and so one of the things that
you'll notice if you go through my
production version is that we have an
issue with not enough moss-heavy,
ramen-night-heavy, or onsen-heavy things to do,
and so we need to fill out morning,
afternoon, and evening for 14 days, and so one of
the later things I did is I basically
said, "You need to get creative and fill
out a full 14-day itinerary." And you
can see that it did. Uh, now some of it
is a family rest window, but
realistically with a kid, that's
actually not a bad idea. Um,
and it gives you longer and larger
narratives in the new version. And it
gives you a lot more options. So, as an
example, if I want to go to some of
these ones that are new, I can do a lot
more around Kyoto. Like, we can go to
the Arashiyama uh bamboo grove if we want,
right? and I can add that in if we don't
already have that in. We can go to the
railway museum. This is good enough as
it stands that I am already thinking
about using it for production planning
of a trip. And I think that
underlines one of the things that I
really tried to call out in my original
review, which is that, with these things,
yes, if you go back here, it
looks somewhat frustrating,
right? Like you're going back and forth,
you're asking it to make edits. Um, you
know, there are blanks. Please fix this.
Uh, I want to actually have like a lot
more creativity.
Um, and
I think the way I'll put it is that
in this chat experience, it can feel
frustrating. And that's something that
didn't come through in the ChatGPT-5
presentation. But the reality of getting
through to the end of this, getting
through little bugs like this that
happened post-production. I was fairly
frank. I'm not going to say that word on
video, but you can read it. Um, and
demanding restoration, getting it back.
It's encouraging to me that you can
restore stuff just by yelling at it. And
it's encouraging to me that after this
whole conversation, and this is post
launch, right? Like if you want to think
about how long it took just to get to
launch for the the app that you saw at
the beginning of this video, it was
about 15 minutes of conversation. It was
very easy. It was very fast. It might
have been less. It might have been 10
minutes all told. Um and it stopped
about here and that was it. And then all
the rest of this is post-production. Me
continuing to mess with it because it's
frankly fun. It writes out the code.
these hundreds of lines of code that
it's written out here
and it's continued to make it better.
It's added in a full two-week planner. It's
added in more interests. I can continue
to work with it.
And
I think one of the measures of a good
product is that you do continue to work
with it. And so even though like if you
scroll back up to the top, even though
my initial prompt missed some things I
would like to have added. It missed the
aesthetic I wanted to add. It missed, uh,
the controls I wanted to add, things that
a better prompt would have done. There
is a reason I recommend using solid
prompts. Even though I was an honest
human being and I was realistic and I
was in a rush and I just put this down,
I still got to the app that I showed you
all in 10 minutes. And then in another,
I want to say 15 minutes of messing
around, I got to a much more uh involved
destination-heavy, lots more like places
to go like a riverside walk, better
descriptions. I basically got to a V2 in
about 15 minutes after the original 10
in the chat. That's 25 minutes over 2 or
3 days and you're swearing at it. You're
like, why isn't this fixed? This
bug is annoying. But it has never been
possible to make this kind of app for an
individual not looking at the code. And
I did not touch or change any piece of
code here. I just messed with it until I
got what I wanted. And I chatted with it
and I yelled at it until I got what I
wanted. That is how easy it is now to
make useful little app artifacts. I
think it's a massive game-changer. I think
the way ChatGPT-5 works in the canvas is
special
and there's a ton to think about with
how this is going to change our work
going forward. So, I hope you enjoyed a
little description of how I built this
thing. Let me know what your questions
are.
I can't say that this is the perfect or
best way to build this. I think going
back, one of the things I would do is I
would actually say, "Hey, uh, and I'll
actually do this so you can see it.
Looking back
over our work so far, write me a
fantastic prompt." And I'll include this
prompt, uh, in the article. Write me a
fantastic prompt that would create this
final version of the app.
Um, as an extra treat, please uh include
brackets around key user choice points
like interest areas, city, etc. So a
user can easily modify this prompt for a
different place, right? And so I'm
basically asking it to reflect back and
figure out how to prompt better next
time. And I like to do that because it
gives me a chance to
learn myself how I can prompt the model
better, learn what I could change and
improve. And I will be very curious to
see what it comes up with. So whatever it
comes up with, I will be sure to let
you guys know. Uh I do not want you to
have to sit there and watch it just spit
stuff out. Uh so I think I'm inclined to
uh let this video go for now. Uh I may
append a little bit at the end once
something comes through. Okay, so it
spent some time thinking. It came back
uh and it actually has a very complete
prompt here. Uh if you want to get this
even better, you can run this through
ChatGPT-5 Pro and it will be even more
deliberate with the prompt. And I will
actually show you the side-by-side in the
article so you can see that. But for
now, it's going to give you a name for
the app, places you can fill stuff in.
It's going to give you core features,
things that you can mix in. It's filled
them in, but you can obviously do more
than that. Um,
and it's going to give you a UI layout.
Uh, obviously you don't have to use it. You
can use something else if it's a
different destination. Uh, it's going to
give you some non-functional
requirements I certainly didn't ask for
originally. And then some aesthetic
details that you can change. This is
fantastic to me because it is showing me
how the system thinks about what it
builds and what a controllable surface
is for that build. It's giving me all
the things it thinks are variables.
Uh and so one example of a variable that
I think would need some work in an
initial prompt, it truly is storing my
itinerary somewhere in local storage.
It's going to need to research and
develop your itinerary, right? So you
would need to include that and say, for the
local storage, you need to research and
develop this, or something. But this is
how we learn. This is how we go from at
the top, just a short three-line prompt
here to this gigantic prompt at the end.
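The bracketed-choice-point idea can also be mechanized outside the chat: keep the big prompt as a template and substitute your own values for each [BRACKETED] slot. A minimal sketch (the placeholder names here are hypothetical, not the ones ChatGPT generated):

```python
# Hypothetical bracketed prompt template; the placeholder names are
# invented for illustration, not copied from the generated prompt.
TEMPLATE = (
    "Build me an interactive mini app for a [DAYS]-day trip to [CITY]. "
    "Emphasize [INTERESTS]. The audience is [AUDIENCE]. "
    "Use a clean, light, minimalist style."
)

def fill_prompt(template: str, choices: dict[str, str]) -> str:
    # Replace each [KEY] choice point with the user's value.
    for key, value in choices.items():
        template = template.replace(f"[{key}]", value)
    return template

prompt = fill_prompt(TEMPLATE, {
    "DAYS": "14",
    "CITY": "Kyoto",
    "INTERESTS": "moss temples, ramen, onsen",
    "AUDIENCE": "a couple traveling with a one-year-old",
})
print(prompt)
```

Swapping the values remixes the same prompt for a different city or trip, which is exactly the customization the bracketed version enables.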
I did not have to actually paste this
prompt in to get this result. And I bet,
because LLMs are probabilistic, if I
paste this prompt, it also won't look
exactly the same. And that's okay. The
point is that this prompt captures a lot
of the detail that I iteratively evolved
into over the course of this
conversation. So, wrapping up, all told,
about 25 minutes in this chat over two
days, about 10 minutes to get to a
production app that I showed you
earlier, about 15 minutes to get to the
V2 that I showed you in this video, and
you're going to get these prompts as
well that you can look into and dive
into as sort of follow-ups that will
help you to personalize this and use
this other places. I don't think it's
just for travel. It's really for
anything that you have to plan in space
and time. Like, you could also modify
this for a corporate event really
easily. I hope you've enjoyed this
breakdown. Uh, I think this video's gone
on long enough and I will catch you on
the flip side.