Easy Guide to Steering GPT‑5
Key Points
- GPT‑5 behaves like a “speedboat with a big rudder,” needing strong, precise steering to produce useful results, which many typical user prompts fail to provide.
- The author’s solution is a set of “metaprompts” – prompts that improve your own prompts – that can be copied from a Substack article for quick, accessible use.
- A real‑world example (preparing for a meeting) shows GPT‑5 initially hallucinating details and delivering useless templates until the user supplies clarifying questions about meeting type, participants, and desired outcome.
- Even after iterative prompting, GPT‑5 still makes unchecked assumptions (e.g., fabricating industry statistics), highlighting the importance of explicit constraints and verification in prompts.
- The guide aims to let users stay lazy and write naturally while still steering GPT‑5 effectively, making advanced prompting more approachable.
Sections
- [00:00:00](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=0s) Navigating GPT-5 Prompting Challenges - The speaker introduces a beginner‑friendly guide that uses metaprompts and concrete examples to help users effectively steer the notoriously hard‑to‑prompt GPT‑5 model.
- [00:03:54](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=234s) Meta‑Prompt for Structured Meeting Preparation - The speaker describes a two‑step meta‑prompt process that first verbalizes assumptions to create a detailed brief from a vague request, then adopts a specific role and methodology to produce a concrete, actionable plan for preparing a meeting.
- [00:07:23](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=443s) Iterating Metaprompts for GPT‑5 - The speaker explains how a newly refined metaprompt for GPT‑5 reduces hallucination, provides both quick‑start and detailed versions, and lets users harness the model’s speed while maintaining control.
- [00:10:40](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=640s) Guiding GPT-5 with Structured Prompts - The speaker stresses that using clear headers, bullets, and overall prompt architecture directs GPT‑5’s internal router to the desired sub‑model, yielding more precise and expert‑level responses.
- [00:14:52](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=892s) Meta-Prompts and Model Steering - The speaker outlines using meta‑prompts to steer the model, manage tool‑use preferences, and repeatedly reinforce instructions because the model’s contextual memory is effectively an illusion.
- [00:18:38](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=1118s) Structuring Effective AI Prompts - The speaker outlines a four‑step framework—defining a role for expertise routing, establishing a clear objective, detailing a step‑by‑step process, and leveraging meta‑prompts—to ensure the model understands its mission and produces accurate results.
- [00:22:09](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=1329s) When and How to Use Metaprompting - The speaker advises experimenting with metaprompting and the outlined seven principles for precise tasks while noting it’s unnecessary for simple factual queries, exploratory chats, or emotional conversations, and suggests choosing models better suited for those edge cases.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=hvTGYMq3pfg](https://www.youtube.com/watch?v=hvTGYMq3pfg)
**Duration:** 00:25:09
GPT5 puts prompting on hard mode. And I
want to make this the most accessible,
easy to use prompting guide for you when
you're playing with GPT5. Why am I doing
it now, weeks after GPT5 came out? Quite
simply, because it took me some time to
figure out how I wanted to share what I
was learning about the model. This is a
tricky model to prompt. And I compare it
to a speedboat with a really big rudder.
At the end of the day, this model wants
to go fast and it wants to be steered
really, really hard. That big rudder
like it wants to be steered really hard.
But most people's prompts are not in a
place where they can effectively steer
that model. My aim is to not only show
you how to solve that problem, but
enable you to be human, to be a little
bit lazy, to write the way you write and
still get good results from GPT5. It's
taken me some time to figure it out, but
I'm excited to show you. Let's dive in
right at the top with a quick look at a
metaprompt. A metaprompt is a prompt
that makes your prompts better. Now, if
that gives you a headache, don't worry.
I'm going to give you a bunch of these
in the Substack article that you can
use. I'm also going to give you a quick one
now that you can look at, understand how
it works, understand why it works, and
get right on the road to improving your
own experience with GPT5 to steering the
model in ways that are easy for you. So,
with that in mind, an easy get started
prompt with a specific real example.
Let's dive in. Okay, I chose help me
prepare for tomorrow's meeting as an
example of a real-life prompt that I
have seen people type in that I have
typed in sometimes myself. I'm not
always perfect at prompting. I gave it
to GPT-5 in ChatGPT. I did not give it to
GPT-5 Thinking or any of the other
options. Just plain GPT-5. It responded
with all of this. Thought for 12 seconds
and spit back a rapid prep guide. It
doesn't even know what the meeting is
about. It spit out a specific agenda for
specific meetings. It doesn't know the
meeting is 30 minutes. I didn't tell it
that. It's making it up. It spit out a
drop in template. All of this is useless
to me. And then it asks for two
questions, right? What kind of
meeting is it? Who's in the room? And
what outcome do you want? I guess that's
three questions. I answered all three.
And it comes back with a pitch. This
time, you know, you notice it's now
taken two tries. It's coming back with a
pitch. It's coming back with a stakeholder
leverage map, a steering question for
the room, objections and counters, a run of
show, and a next-step script. It's okay.
It's not nearly as clear as it needs to
be. It's making big assumptions about
what these stakeholders want that aren't
clear. It's deciding
that it wants a sense of the context and
the data, and it's just making it
up, right? Industry peers use automation
to see 20 to 40% lift. I didn't tell it
that. It just decided to make that up
and call it a fact, which it isn't a
fact. It assumes that everything
supports this pitch. It doesn't even
know what the pitch is. In other words,
by giving it generic information to
GPT-5, where the power on that speedboat is so
high, you're just inviting it to make
stuff up. You're just inviting it to
fabricate stuff. And this is not a
particularly useful tool. And I think it
exemplifies some of the frustration
people feel because whether or not
you're prompting with just one line or
two or three lines, it is easy to get
this incredibly detailed, incredibly
lengthy response that at the end of the
day isn't super useful. Now, let's go to
a different approach. Let's see if we
can use a metaprompt to improve things.
Okay, here we are. You're looking at a
metaprompt. Transform my request into a
structured brief and then execute it.
First, interpret what I'm actually
asking for. what type of output would
help me, what expertise would be
relevant, what format would be useful,
what level of detail. I'm asking the
model to verbalize assumptions that I
can correct if need be. And that's
really important because it shapes the
rest of the response and whether or not
it's useful. Second part: then
restructure and execute as a specific
role (you should infer appropriate
expertise), a specific objective (please
make my vague request more specific), an
approach (choose the methodology that
fits the objective you've come up with),
and an output. Basically, what we're
saying is take this tiny phrase. I use
the exact same phrase. Help me prepare
for tomorrow's meeting and expand it in
a way that makes the prompt useful and
then run it. So, here's what
the model said. First, it gave me the
structured brief. Now, it assumes I'm
me, right? I talk to ChatGPT all the
time. Based on its memory of you, it
will say something different here. It
will then take the objective and prepare
a concrete actionable prep plan. Right?
And it talks about what it's going to
prepare. And already I think this is
more useful. It wants to give me a
sharp grasp of the context, anticipated
objections, talking points, etc.
It's going to use this approach to
clarify the unknowns to structure
preparation to surface two to three
likely points. Do you see that? It's
already realizing it needs to ask
questions, but it's realizing it needs
to ask questions in the context of this
metaprompt I've given it. And so the
questions are more specific and they're
more useful because they're tied into
the objective that it's inferring. And
then it's giving me the output, a
meeting prep sheet with a context recap,
a message, questions to ask. This is a
more useful output already as a
framework. And so then it gives me the
executed output, and it puts blanks in.
It doesn't make stuff up, which is useful
because I don't want it to make stuff
up. And finally, it asks three questions
that are more useful and more specific.
What kind of meeting is this? Who's in
the room? And what's the decision? So, I
give it the answer. It's a client pitch
for a marketing automation project.
These are the people in the room. I need
to get approval to move to the proposal
phase, and I'd like a comprehensive
template that I can fill in, plus some
draft talking points. It then comes back
with a meeting prep sheet that's really
filled in. Now, does it still make some
stuff up? It does infer a little bit
here. I'm not going to say it's perfect,
but it gives me much, much more
actionable, much closer-to-final draft
meeting preparation notes than I got with the
other answer without the metaprompt. It
gives me points I want to
emphasize. To be honest, it's correct.
Revenue impact is something that you
need to actually deliver on uh if you're
going to do a marketing proposal.
Future-proofing is something that
anyone who's proposing AI systems needs
to be able to answer, etc. Questions to
ask. These are valid questions. These
are questions I've literally heard in
these kinds of meetings. So, it's good
questions, likely counterpoints. Yep,
this sounds too expensive. I've
definitely heard that we're stretched
thin. How will we implement it? I've
heard that. These are plausible. In
other words, the metaprompt and the
ensuing clarity that I provided when it
asked for specific clarity have given
this model the ability to be useful to
me. I would say just this slight change
with the metaprompt has pushed this
meeting prep to I want to say 80% good.
It still needs probably another
iteration, but we now have something
usable. Whereas with the earlier
version, without the metaprompt, I
couldn't make heads or tails of it
because it chose to make up so much
stuff. And that's really the key. This
is a speedboat. You can't slow down. And
so, you have to figure out how to take
advantage of that power. And I'm trying
to give you a metaprompt that enables
you to take the work out of steering so
that you can write the way you write and
still get value back. So, I hope you
enjoyed that dive. We're not done yet. I
want to actually get into some of the
principles that make GPT-5 different to
prompt that I've discovered as I've
started to craft these meta prompts. And
by the way, there are a lot of meta
prompts in the substack. That's just an
example for a quick five minute get
started. I love that. That's right at
the top of the article. But there's a
bunch of others that are for specific
departments and use cases because what I
found is metaprompting is something that
you can exercise at different levels.
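As a concrete example of the quick get-started level, the two-step metaprompt from the earlier demo can be captured as a small reusable template. Here is a sketch in Python; the template wording is my reconstruction of the structure described in the walkthrough, not the author's exact Substack text.

```python
# Sketch: wrap a vague one-line request in the two-step metaprompt
# structure described earlier. The exact wording is illustrative,
# reconstructed from the walkthrough, not the author's published text.

METAPROMPT = """Transform my request into a structured brief, then execute it.

First, interpret what I'm actually asking for:
- What type of output would help me?
- What expertise would be relevant?
- What format would be useful?
- What level of detail?
State these assumptions so I can correct them if needed.

Second, restructure and execute as:
- Role: infer the appropriate expertise.
- Objective: make my vague request more specific.
- Approach: choose the methodology that fits the objective.
- Output: the concrete deliverable.

My request: {request}"""


def build_metaprompt(request: str) -> str:
    """Expand a vague one-liner into the full metaprompt text."""
    return METAPROMPT.format(request=request)


print(build_metaprompt("Help me prepare for tomorrow's meeting"))
```

The point of keeping it as a template is that the lazy, natural-language request stays exactly as you would have typed it; the scaffolding is added for you.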
You can have the quick five minute get
started version and then for people who
want to go in depth, let's say you want
to craft a customer service prompt, you
can actually have a much longer meta
prompt that's much more detailed that
makes you do a little more work and
you're going to have a much more
powerful experience for that particular
objective. I want to cover both the quick
get-started version and also the detailed one. These
are the prompting principles that have
really popped out to me about why GPT5
is different and how we can leverage
that difference for prompting. Number
one, GPT5 is multiple models. We know
that, but the dispatcher and routing
reality popped out to me a lot. I'm
going to talk about that a fair bit when
I talk about sort of the way we leverage
the principles of prompting to prompt
effectively. Number two is the precision
tax. If you give the model contradictory
instructions, it's going to make the
model burn out really hard. You're
basically telling a really powerful
speedboat to go in two directions at
once. That burns tokens, it burns cost,
it burns time. It's painful. And then
the third thing that sort of shapes how
GPT5 responds is agentic versus
conversational. GPT5, I've told you it's
a speedboat. It desperately wants to
complete missions, right? It doesn't
want to have conversations. It wants to
go do something. And so part of my goal
with this metaprompting is to recognize
this reality and to help you get to a
spot where it's actually doing something
useful and not just like burning tokens
going off where you don't want it to go.
And then the expertise paradox is the
last one that I want to call out. This
model works best with expert
instructions. It does not work with the
casual prompting we've talked about. I
hope you've seen that in this example.
It just doesn't work well at all. And
it's marketed to non-experts. And I
think that one of the things that Sam
Altman and others have realized is that
they kind of screwed that up. Like they
needed to be more honest about what this
model takes to prompt well and how hard
it is. I read the GPT5 official
prompting guide, which by the way, it's
notable to me that they felt the need to
release that because it suggests that
they recognize that this is also
difficult to prompt. Let me close with
some prompting principles that you can
apply in other cases because I don't
want you to just leave with like one
prompt here. I want you to leave with a
deeper understanding of what's going on.
And so I'm going to walk through based
on those insights, right? That it's a
router, that it forces you to be
precise, that it's agentic, that it makes
you write expert prompts. What does
that mean from a prompting principle
perspective? You won't find these in the
GPT5 prompting guide. I had to infer
these and dig into these. That's why
it's taken a while to make this video.
This has been on my list for a bit and
I've really had to dig in to make sure I
feel like I understand how to prompt
this model well and can share it with
you effectively. So, you need to
recognize the importance of structure
when you're prompting GPT5. That's the
first thing. Structure will affect the
way the model routes. So if it's a bunch
of models in a trench coat and it's
routing, you want to make sure that you
have your structure put together in a
way that prompts the router early on to
go to the model you want. And so some of
the early tries at this were like, hey,
think hard to trigger the thinking
model. But really, you want to think
about it as what are your headers, what
are your bullets, how do you expect the
model to respond in terms of the
structure of the output. Those are all
things that affect the implicit routing
of the model and which GPT-5 it calls
under the hood. And I will say I don't
want to denigrate the idea of just
writing think hard. That absolutely
works too. But keep in mind that the way
you structure the prompt and the detail
with which you sort of structure the
prompt shapes what the model calls. And
in general, the more specific your
structure is, the more you're able to
clarify to GPT5 what the core problem is
that it's solving. A lot of what
a metaprompt does is start to elucidate,
or lay out for GPT-5, what the core
problem is, and that in turn helps the
model make the correct decision about
where to route it. So that's why the
structure matters. Number two, we talked
about this whole idea of contradictions
burning tokens. You want to make sure
that you explicitly prioritize when there's tension.
If there are multiple goals, if there
are multiple tasks, if you tell
GPT-5 "be comprehensive, but be brief,"
you're basically making it burn the
motor. You want to be really explicit
and say, "My primary goal is X. My
secondary goal is Y. When in doubt,
prioritize one over two."
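That explicit-priority pattern can be templated so a contradictory prompt never ships. A minimal sketch (the phrasing is my own, assumed for illustration):

```python
# Sketch: state goals with an explicit priority order and a
# tie-breaking rule, so contradictory instructions ("comprehensive
# but brief") don't force the model to burn tokens resolving the
# tension on its own.

def prioritized_goals(primary: str, secondary: str) -> str:
    """Render two goals with an explicit conflict-resolution rule."""
    return (
        f"My primary goal is: {primary}\n"
        f"My secondary goal is: {secondary}\n"
        "When the two conflict, prioritize the primary goal."
    )


print(prioritized_goals(
    "Cover every likely stakeholder objection",  # comprehensive...
    "Keep the prep sheet under one page",        # ...but brief
))
```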
Again, metaprompts can help with that so
that you don't have to put in quite as
much work yourself. But without those,
the model is going to take everything
seriously and literally and try and
resolve the contradiction and burn a lot
of tokens doing it and it probably won't
get you where you want. The third
principle is that depth is not equal to
length with these model responses. The
model differentiates what it calls
reasoning and what it calls verbosity or
the length of the response. It is
possible to get a PhD level analysis in
a very tight executive summary format.
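These two dials are exposed separately in the API. Here is a sketch that just builds a Responses API payload without making a network call; treat the exact parameter names and values (`reasoning.effort`, `text.verbosity`) as assumptions to verify against current OpenAI documentation:

```python
# Sketch: depth and length are independent dials. In the Responses
# API they map to separate parameters (verify names/values against
# current OpenAI docs; this only builds the payload, no API call).

def deep_but_terse_request(prompt: str) -> dict:
    """Build a request asking for hard reasoning but a short answer."""
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": "high"},  # think hard...
        "text": {"verbosity": "low"},     # ...but answer briefly
    }


req = deep_but_terse_request("Analyze this contract clause for risk.")
print(req["reasoning"], req["text"])
```

In the chat interface you can approximate the same thing in plain English, as described below.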
That means when you're prompting, you
want to specify how hard the model
should think and how verbose it should
be. Now, I'm aware that you can do this
more directly in the responses API. But
if you're just a chatbot user, you can
also talk to the model directly in plain
English in your prompt and tell it how
hard you want it to reason. Tell it how
long you want the response to be. And
that's a useful guide for the model. It
helps. It shapes what the model does. So
keep in mind you don't have just one
power lever. You have multiple
levers to play with here. You have
a depth lever that can go in-depth or not,
and you have a
length-of-response lever. And so you
can say I want really in-depth thinking
and I want a short response or vice
versa. Fourth principle you have to
define the uncertainty here. As I've
said before, this model is literal. You
need to recognize that because it's kind
of a speedboat. It's going to attempt
any task you give it, even when it
shouldn't attempt that task. It needs
explicit protocols. There are
no built-in fallbacks for when it gets
stuck. It needs you to tell it:
when you're stuck, or when you don't know
what to do, or when there's ambiguity or
uncertainty, here is where
you go, this is the next step. If data
is insufficient, this is what you need
to specify. This is what you need to
ask. There's a lot of examples like that
where we know there's ambiguity, but we
need to specify for the model. This is
what you do. And by the way, there's
seven of these. I've gone through four.
If this is feeling like a lot, that's
okay. That's why I wrote the meta
prompts. I want to make this easier. And
that's why I've taken the time to craft
these prompts because I think that we
need help steering this model. And this
is the first model that really feels
like it's so powerful and so sensitive
to steering that we need something like
metaprompts just to drive it
effectively. Principle number five, tool
use is sort of part of the initial
assessment the model makes. And my
observation is that it's not really easy
to get the model to be balanced in its
tool use. It's either a tool maximalist
or a tool minimalist. It helps if you
have opinions about tool use to tell it.
So say first I want you to search the
web. Then after that, please analyze the
data that you retrieve in this way. Give
it that specific tool use instruction so
you don't leave it to guess. Number six,
context memory can be an illusion with
this model. It will act like it
remembers. It will, but it's rereading
everything each time just like every
other model. And you will need to
periodically reiterate instructions to
remind it to follow the protocols you're
giving it. In other words, this model is
extraordinarily steerable, but it's kind
of built for one or two turn
conversations at core where you have a
very detailed prompt. If you have a
lengthy conversation that's meandering
and eventually gets to the point, the
model may not remember at the level of
detail you need it to across that entire
conversational set because it's cued so
deeply to the last thing you said and
whether it's specific and actionable and
it can go do it. It's that bias for
action that's coming in. Again, one of
the ways you can check and see if the
model remembers is to plant a flag in
your initial prompt. You can say, "If
you have read this instruction and
recall it and remember it, please write
flag at the end of every response." When
the word flag disappears in the
responses, you know that the model has
forgotten the initial instruction. You
can actually see right when it happens.
So there's ways to know, but I think the
larger point here is that this model
expects you to prompt well at the top.
And that's why I've invested in
metaprompting as a useful way to take
our, you know, somewhat messy and
scattered human thinking and actually
get it into shape for a prompt.
Principle number seven, structure beats
intelligence. And so give the model
methodologies. Don't assume that
thinking mode is the only thing you have
to work with here. If you give it
structured thinking and structured
prompting, if you give it methodologies
along with goals, you're going to get
much farther. And so, in a sense, I
think all of the conversation that
happened at GPT-5's launch around can
we push the model into thinking mode or
not was a little bit of a red herring.
Yes, there are ways to do it. I've
talked about some of those like writing
think hard, or the way you talk about
the problem and elucidate it clearly so
it's easy to see. But at the end of
the day, if you give it clear goals and
methodologies and clear structure, you
get so far with this model that I find
in practice I care less about exactly
which model it called because it's more
likely to be calling the correct one in
the first place. It's more likely that
if it calls a model, the model's going
to know what to do. And for both of
those reasons, I find that good
structure on the prompting makes some of
the intelligence questions go away. So,
if you're wondering, how do I make this
all work? How do I put this together?
How do I take these principles and use
them? You want to make sure that you are
calling for the expertise you need. I'm
actually going to go through the
components of a prompt that I would
recommend and this will come out in the
meta prompting as well if you want to
dive deeper. I recommend that you define
the role not because of roleplay, not
because it necessarily is a magic card,
but because you're trying to prompt for
expertise routing. You're trying to push
the model to understand where it needs
to have expertise
in order to set up the rest of the
prompt. And so in the beginning, in 2022,
when we said "define a role," the thought
was that this enables the model to actually
answer correctly, and otherwise it
wouldn't. Now it's more about aiming.
This enables the model to recognize the
world it's in and the expertise that's
called for, and maybe route to a smarter
model if need be. Number two, make sure
you have an objective framework. If
you're writing a prompt from scratch,
you'll have to do this yourself. If
you're using a metaprompt, it will help
you a bit, but you want to be clear
about what the goal is for the model
because again, GPT-5 needs to go on
missions. You have to give it missions
if you're going to do the work with it.
Number three, process methodology. You
want to give it really explicit process
to go through. Metaprompts can help here
too. You want to make sure that the
model understands this step by step is
what we need to do to get to the end
result. Number four, you want to have an
explicit expectation for format. Make
sure that the model knows how to get you
the format that you want. What do you
need? Meeting notes? Do you need an
email? Fine. Just be clear about what you're asking for.
It wants to do the job. Make sure it
knows how to do the job in a way that
you want. Number five, give it those
boundaries and limitations. Constraint
handling really matters with this model
because, again, you're trying to
aim the speedboat. If you're telling it
don't go to the coral reefs, that's
really helpful because it just wants to
go fast. So tell it where not to go. And
that matters a lot because if those
initial prompts are really important to
give the model a mission, you want to
make sure the model understands these
are the anti-missions, right? These are
the anti-goals. These are the things
we're not going to do. Number six, be
clear about those uncertainty pieces.
Right? I talked about how you have to
define areas of tension and ambiguity
and explicitly give the model priorities
like this is number one, this is number
two. If there's a conflict, this is how
you resolve it. Take that seriously.
Take it really seriously because it will
help the model to help you. And finally,
number seven, give it a way to check its
work. The model wants to please you and
go on missions. Give it a way to check
its work. Give it validation criteria
that will help. And those are the seven
components that I have seen work well
with this model. And they all add up to
that core idea we talked about at the
beginning of this video. This model
needs to be steered. The whole idea of
metaprompting is that it's basically giving
you a helper rudder that you
can use to more easily steer. It's like
giving you power steering to steer this
boat. Because if you don't
know better, if you just try and drive
this the way you drove other models,
you're going to have the experience that
so many people had after GPT-5
launched: you're going to give it the
same instructions you gave other models
that worked, and you're going to realize
how much power is there and how much
bias for action is there and how much
demand for precision there is and get
rightfully frustrated because the jump
in prompting expectation is frankly
ridiculous. I'm saying that I think it's
ridiculous but that's the expectation
and that's why I'm building metaprompts
to help because I think that we need
something like power steering for this.
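Those seven components can be assembled mechanically. Here is a sketch of a prompt builder that enforces the structure; the section headers and the filled-in example are illustrative, not a canonical template:

```python
# Sketch: assemble the seven prompt components into one structured
# prompt. Headers and example text are illustrative; the point is
# that each slot is filled deliberately, not left for the model
# to guess.

SECTIONS = [
    ("Role", "role"),                # 1. expertise routing
    ("Objective", "objective"),      # 2. the mission
    ("Process", "process"),          # 3. step-by-step methodology
    ("Format", "format"),            # 4. expected output shape
    ("Constraints", "constraints"),  # 5. boundaries / anti-goals
    ("Uncertainty", "uncertainty"),  # 6. priorities and fallbacks
    ("Validation", "validation"),    # 7. how to check the work
]


def build_prompt(**parts: str) -> str:
    """Join the seven components under markdown headers, refusing
    to build a prompt with any component missing."""
    missing = [key for _, key in SECTIONS if key not in parts]
    if missing:
        raise ValueError(f"Missing components: {missing}")
    return "\n\n".join(f"## {title}\n{parts[key]}" for title, key in SECTIONS)


prompt = build_prompt(
    role="You are a B2B marketing strategist.",
    objective="Prepare me for tomorrow's client pitch.",
    process="1. Recap context. 2. Draft talking points. 3. List objections.",
    format="A one-page prep sheet with bullets.",
    constraints="Do not invent statistics; leave blanks for unknown data.",
    uncertainty="If information is missing, ask me before proceeding.",
    validation="End with a checklist confirming each objection has a response.",
)
print(prompt)
```

Raising an error on a missing slot is the "power steering": it forces the structure before the speedboat ever leaves the dock.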
And so
my suggestion to you is that you take
the metaprompting, play with it, see if
it will help you to get more precision.
Look at the seven principles I've
outlined. See if that can help you to
write better prompts. And make sure that
you recognize that there will be moments
when you don't need to do all of this.
You do not need to use metaprompts for
simple factual queries. You don't need
to use fancy prompts for an exploratory
conversation where the whole goal is to
discover meaning together. You don't
need to use it for personal and
emotional conversations.
So understand that this model is built
for the kinds of missions I've been
describing over most of this video and
that therefore this metaprompting skill
set, this meta prompting toolkit, the
idea of prompting in a more specific
manner is going to help it because
that's the core of what the model wants
to do. But maybe you're kind of on the
edges of what the model does. Because,
to be honest, this model is not
a super emotionally smart model. I feel
like an emotional conversation is on the
edge of what it does. This model isn't
really built for factual queries. It can
do it. If you're in that kind of a
space, don't bother with the fancy
prompting. Just go with the basic
conversation. And frankly, there are
other models that do some of that stuff
better. Claude has better emotional
capabilities than ChatGPT. We're just
going to say it, right? And so, you can
pick the model that works for you for
these other tasks. The era of casual
conversation prompting is just over.
With GPT-5, we need to recognize
that we are in a new world. I would
expect GPT-6 to be even more
agentic, demand even more precision from
you. And maybe they're going to ship
something that helps you expand your
prompts. We'll see. But at the end of
the day, you need to learn systematic
prompting now. And meta prompting is a
way to learn that that doesn't feel as
overwhelming. It helps you to steer.
Please, please, please recognize that
you can prompt GPT-5. It is not
impossible. You can give it the
precision it needs with some help. You
can understand this model. This model is
not impossibly complex. It's very
very useful if you can get it to steer
predictably. And predictability is
driven by prompting. And predictability
beats the wildly unpredictable,
brilliant or dumb responses that you get
from conversational prompting. We need
to get to a point where our prompting is
more precise. And so I hope that this
video has helped you understand some of
what makes GPT-5 tricky to prompt,
the principles that go into GPT-5,
and how those principles of
prompting shape the model and how they
shape the response. And also, I hope the
metaprompt example helped you to see the
importance of using metaprompts when
you're tired, frustrated, don't have the
time to improve your own prompting so
you can get the most out of this model.
That's GPT-5 for you. It is a
tricky, tricky model, but you got this.
You got it.