Easy Guide to Steering GPT‑5

Key Points

  • GPT‑5 behaves like a “speedboat with a big rudder,” needing strong, precise steering to produce useful results, which many typical user prompts fail to provide.
  • The author’s solution is a set of “metaprompts” – prompts that improve your own prompts – that can be copied from a Substack article for quick, accessible use.
  • A real‑world example (preparing for a meeting) shows GPT‑5 initially hallucinating details and delivering useless templates until the user supplies clarifying questions about meeting type, participants, and desired outcome.
  • Even after iterative prompting, GPT‑5 still makes unchecked assumptions (e.g., fabricating industry statistics), highlighting the importance of explicit constraints and verification in prompts.
  • The guide aims to let users stay lazy and write naturally while still steering GPT‑5 effectively, making advanced prompting more approachable.

**Source:** [https://www.youtube.com/watch?v=hvTGYMq3pfg](https://www.youtube.com/watch?v=hvTGYMq3pfg)
**Duration:** 00:25:09

## Sections

- [00:00:00](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=0s) **Navigating GPT-5 Prompting Challenges** - The speaker introduces a beginner-friendly guide that uses metaprompts and concrete examples to help users effectively steer the notoriously hard-to-prompt GPT-5 model.
- [00:03:54](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=234s) **Meta-Prompt for Structured Meeting Preparation** - The speaker describes a two-step metaprompt process that first verbalizes assumptions to create a detailed brief from a vague request, then adopts a specific role and methodology to produce a concrete, actionable plan for preparing a meeting.
- [00:07:23](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=443s) **Iterating Metaprompts for GPT-5** - The speaker explains how a newly refined metaprompt for GPT-5 reduces hallucination, provides both quick-start and detailed versions, and lets users harness the model's speed while maintaining control.
- [00:10:40](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=640s) **Guiding GPT-5 with Structured Prompts** - The speaker stresses that using clear headers, bullets, and overall prompt architecture directs GPT-5's internal router to the desired sub-model, yielding more precise and expert-level responses.
- [00:14:52](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=892s) **Meta-Prompts and Model Steering** - The speaker outlines using metaprompts to steer the model, manage tool-use preferences, and repeatedly reinforce instructions, because the model's contextual memory is effectively an illusion.
- [00:18:38](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=1118s) **Structuring Effective AI Prompts** - The speaker outlines a framework of defining a role for expertise routing, establishing a clear objective, detailing a step-by-step process, and leveraging metaprompts to ensure the model understands its mission and produces accurate results.
- [00:22:09](https://www.youtube.com/watch?v=hvTGYMq3pfg&t=1329s) **When and How to Use Metaprompting** - The speaker advises experimenting with metaprompting and the outlined seven principles for precise tasks, while noting it's unnecessary for simple factual queries, exploratory chats, or emotional conversations, and suggests choosing models better suited for those edge cases.

## Full Transcript
GPT-5 puts prompting on hard mode. And I want to make this the most accessible, easy-to-use prompting guide for you when you're playing with GPT-5. Why am I doing it now, weeks after GPT-5 came out? Quite simply, because it took me some time to figure out how I wanted to share what I was learning about the model. This is a tricky model to prompt, and I compare it to a speedboat with a really big rudder. At the end of the day, this model wants to go fast and it wants to be steered really, really hard. But most people's prompts are not in a place where we can effectively steer that model. My aim is to not only show you how to solve that problem, but enable you to be human, to be a little bit lazy, to write the way you write, and still get good results from GPT-5. It's taken me some time to figure it out, but I'm excited to show you.

Let's dive in right at the top with a quick look at a metaprompt. A metaprompt is a prompt that makes your prompts better. Now, if that gives you a headache, don't worry. I'm going to give you a bunch of these in the Substack article that you can use. I'm also going to give you a quick one now that you can look at, understand how it works, understand why it works, and get right on the road to improving your own experience with GPT-5 and steering the model in ways that are easy for you. So, with that in mind, an easy get-started prompt with a specific real example. Let's dive in.

Okay, I chose "help me prepare for tomorrow's meeting" as an example of a real-life prompt that I have seen people type in, and that I have typed in sometimes myself. I'm not always perfect at prompting. I gave it to ChatGPT-5. I did not give it to GPT-5 Thinking or any of the other options, just to GPT-5.
It responded with all of this. It thought for 12 seconds and spit back a "rapid prep guide." It doesn't even know what the meeting is about, yet it spit out a specific agenda for specific meetings. It doesn't know the meeting is 30 minutes; I didn't tell it that. It's making it up. It spit out a drop-in template. All of this is useless to me. And then it asks two questions: what kind of meeting is it, who's in the room, and what outcome do you want? I guess that's three questions. I answered all three.

And it comes back with a pitch. You notice it's now taken two tries. It's coming back with a pitch, a stakeholder leverage map, a steering question for the room, objections and counters, a run of show, and a next-step script. It's okay, but it's not nearly as clear as it needs to be. It's making big assumptions about what these stakeholders want that aren't clear. It's deciding that it wants a sense of the context and the data, and it's just making it up, right? "Industry peers use automation to see a 20 to 40% lift." I didn't tell it that. It just decided to make that up and call it a fact, which it isn't. It assumes that everything supports this pitch, and it doesn't even know what the pitch is.

In other words, by giving it generic information, with GPT-5 the power on that speedboat is so high, you're just inviting it to make stuff up. You're just inviting it to fabricate stuff. And this is not a particularly useful tool. I think it exemplifies some of the frustration people feel, because whether you're prompting with just one line or two or three lines, it is easy to get this incredibly detailed, incredibly lengthy response that at the end of the day isn't super useful. Now, let's go to a different approach.
Let's see if we can use a metaprompt to improve things. Okay, here we are. You're looking at a metaprompt: transform my request into a structured brief and then execute it. First, interpret what I'm actually asking for: what type of output would help me, what expertise would be relevant, what format would be useful, what level of detail. I'm asking the model to verbalize assumptions that I can correct if need be. And that's really important, because it shapes the rest of the response and whether or not it's useful. Second part: then restructure and execute as a specific role (you should infer appropriate expertise), a specific objective (please make my vague request more specific), an approach (choose the methodology that fits the objective you've come up with), and an output. Basically, what we're saying is: take this tiny phrase (I used the exact same phrase, "help me prepare for tomorrow's meeting"), expand it in a way that makes the prompt useful, and then run it.

So here's what the model said. First, it gave me the structured brief. Now, it assumes I'm me, right? I talk to ChatGPT all the time; based on its memory of you, it will say something different here. It will then take the objective and prepare a concrete, actionable prep plan, and it talks about what it's going to prepare. Already I think this is more useful. It wants to give me a sharp grasp of the context, anticipated objections, talking points, etc. It's going to use this approach to clarify the unknowns, to structure preparation, to surface two to three likely points. Do you see that? It's already realizing it needs to ask questions, but it's realizing it needs to ask questions in the context of this metaprompt I've given it.
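The two-step metaprompt described above can be sketched as a small helper that expands a one-line request before sending it to the model. This is a minimal sketch: the template wording is paraphrased from the video, not the author's exact Substack prompt, and the function name is my own.

```python
# Sketch: wrap a vague one-line request in a metaprompt, paraphrased
# from the video. Template wording is an approximation, not the
# author's exact published prompt.

METAPROMPT_TEMPLATE = """Transform my request into a structured brief, then execute it.

First, interpret what I'm actually asking for:
- What type of output would help me?
- What expertise would be relevant?
- What format would be useful?
- What level of detail?
Verbalize your assumptions so I can correct them if needed.

Second, restructure and execute as:
- A specific role (infer the appropriate expertise)
- A specific objective (make my vague request more specific)
- An approach (choose a methodology that fits the objective)
- An output

My request: {request}"""


def build_metaprompt(request: str) -> str:
    """Expand a one-line request into the full metaprompt text."""
    return METAPROMPT_TEMPLATE.format(request=request)


print(build_metaprompt("Help me prepare for tomorrow's meeting"))
```

You would paste the resulting text into the chat (or pass it as the input of an API call) in place of the bare one-liner; the model then verbalizes its brief before executing.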
And so the questions are more specific, and they're more useful, because they're tied into the objective that it's inferring. And then it's giving me the output: a meeting prep sheet with a context recap, a message, questions to ask. This is a more useful output already as a framework. So then it gives me the executed output, and it puts blanks in. It doesn't make stuff up, which is useful, because I don't want it to make stuff up. And finally, it asks three questions that are more useful and more specific: what kind of meeting is this, who's in the room, and what's the decision?

So I give it the answer: it's a client pitch for a marketing automation project, these are the people in the room, I need to get approval to move to the proposal phase, and I'd like a comprehensive template that I can fill in, plus some draft talking points. It then comes back with a meeting prep sheet that's really filled in. Now, does it still make some stuff up? It does infer a little bit here. I'm not going to say it's perfect, but it gives me much more actionable, much closer draft meeting preparation notes than I got with the other answer, without the metaprompt. It gives me points I want to emphasize, and to be honest, it's correct: revenue impact is something that you need to actually deliver on if you're going to do a marketing proposal; future-proofing is something that anyone who's proposing AI systems needs to be able to answer; etc. Questions to ask: these are valid questions, questions I've literally heard in these kinds of meetings. So it's good questions. Likely counterpoints: yep, "this sounds too expensive," I've definitely heard that; "we're stretched thin, how will we implement it?" I've heard that. These are plausible.
In other words, the metaprompt, and the ensuing clarity that I provided when it asked for specific clarity, have given this model the ability to be useful to me. I would say just this slight change with the metaprompt has pushed this meeting prep to, I want to say, 80% good. It still needs probably another iteration, but we now have something usable. Whereas with the earlier version, without the metaprompt, I couldn't make heads or tails of it, because it chose to make up so much stuff. And that's really the key. This is a speedboat. You can't slow it down. So you have to figure out how to take advantage of that power. And I'm trying to give you a metaprompt that takes the work out of steering, so that you can write the way you write and still get value back.

So, I hope you enjoyed that dive. We're not done yet. I want to actually get into some of the principles that make GPT-5 different to prompt, principles I've discovered as I've started to craft these metaprompts. And by the way, there are a lot of metaprompts in the Substack. That's just an example for a quick five-minute get-started; I love that, and it's right at the top of the article. But there's a bunch of others that are for specific departments and use cases, because what I found is that metaprompting is something you can exercise at different levels. You can have the quick five-minute get-started version, and then for people who want to go in depth, let's say you want to craft a customer service prompt, you can have a much longer metaprompt that's much more detailed, that makes you do a little more work, and you're going to have a much more powerful experience for that particular objective. I want to cover both the get-started-quick version and also the detail.
These are the prompting principles that have really popped out to me about why GPT-5 is different and how we can leverage that difference for prompting. Number one, GPT-5 is multiple models. We know that, but the dispatcher-and-routing reality popped out to me a lot. I'm going to talk about that a fair bit when I talk about the way we leverage the principles of prompting to prompt effectively. Number two is the precision tax. If you give the model contradictory instructions, it's going to make the model burn out really hard. You're basically telling a really powerful speedboat to go in two directions at once. That burns tokens, it burns cost, it burns time. It's painful. The third thing that shapes how GPT-5 responds is agentic versus conversational. GPT-5, I've told you, is a speedboat. It desperately wants to complete missions. It doesn't want to have conversations; it wants to go do something. And so part of my goal with this metaprompting is to recognize this reality and help you get to a spot where it's actually doing something useful, not just burning tokens going off where you don't want it to go. And then the expertise paradox is the last one that I want to call out. This model works best with expert instructions. It does not work with the casual prompting we've talked about; I hope you've seen that in this example. It just doesn't work well at all, and yet it's marketed to non-experts. I think one of the things that Sam Altman and others have realized is that they kind of screwed that up. They needed to be more honest about what this model takes to prompt well and how hard it is.
I read the GPT-5 official prompting guide, and by the way, it's notable to me that they felt the need to release that, because it suggests they recognize this model is difficult to prompt. Let me close with some prompting principles that you can apply in other cases, because I don't want you to just leave with one prompt here. I want you to leave with a deeper understanding of what's going on. So I'm going to walk through, based on those insights: that it's a router, that it forces you to be precise, that it's agentic, that it makes you write expert prompts. What does that mean from a prompting-principle perspective? You won't find these in the GPT-5 prompting guide. I had to infer these and dig into these. That's why it's taken a while to make this video. This has been on my list for a bit, and I've really had to dig in to make sure I feel like I understand how to prompt this model well and can share it with you effectively.

So, you need to recognize the importance of structure when you're prompting GPT-5. That's the first thing. Structure will affect the way the model routes. If it's a bunch of models in a trench coat and it's routing, you want to make sure that your structure is put together in a way that prompts the router early on to go to the model you want. Some of the early tries at this were like, "hey, think hard," to trigger the thinking model. But really, you want to think about it as: what are your headers, what are your bullets, how do you expect the model to respond in terms of the structure of the output? Those are all things that affect the implicit routing of the model and which GPT-5 under the hood it calls. And I will say I don't want to denigrate the idea of just writing "think hard."
That absolutely works too. But keep in mind that the way you structure the prompt, and the detail with which you structure it, shape what the model calls. In general, the more specific your structure is, the more you're able to clarify to GPT-5 what the core problem is that it's solving. That's a lot of what a metaprompt does: it starts to elucidate, or lay out for GPT-5, what the core problem is, and that in turn helps the model make the correct decision about where to route it. So that's why the structure matters.

Number two, we talked about this whole idea of contradictions burning tokens. You want to make sure that you explicitly prioritize tension. If there are multiple goals, if there are multiple tasks, if you tell GPT-5 "be comprehensive, but be brief," you're basically making it burn the motor. You want to be really explicit and say, "My primary goal is X. My secondary goal is Y. When in doubt, prioritize one over two." Again, metaprompts can help with that, so you don't have to put in quite as much work yourself. But without those, the model is going to take everything seriously and literally, try to resolve the contradiction, burn a lot of tokens doing it, and probably not get you where you want.

The third principle is that depth is not equal to length with these model responses. The model differentiates what it calls reasoning from what it calls verbosity, the length of the response. It is possible to get a PhD-level analysis in a very tight executive-summary format. That means when you're prompting, you want to specify how hard the model should think and how verbose it should be. Now, I'm aware that you can do this more directly in the Responses API.
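For API users, the depth and length knobs mentioned here are separate request parameters. A minimal sketch, assuming the `openai` Python SDK and the `reasoning.effort` / `text.verbosity` parameters documented for GPT-5 in the Responses API (check the current API reference, as availability and accepted values may change):

```python
# Sketch: treat "how hard to think" and "how long to answer" as two
# separate knobs. Assumes the GPT-5 Responses API's reasoning.effort
# and text.verbosity parameters; helper name is my own.

def build_request(prompt: str, effort: str = "high", verbosity: str = "low") -> dict:
    """Build Responses API parameters for deep reasoning, short output."""
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},   # e.g. "minimal" | "low" | "medium" | "high"
        "text": {"verbosity": verbosity},  # e.g. "low" | "medium" | "high"
    }

params = build_request("Summarize the tradeoffs of marketing automation.")

# To actually send it (requires an API key):
# from openai import OpenAI
# response = OpenAI().responses.create(**params)

print(params["reasoning"], params["text"])
```

The same intent can be expressed in plain English in a chat prompt ("think very hard, but keep the answer to five bullets"), which is the chatbot-user equivalent of these two parameters.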
But 13:33if you're just a chatbot user, you can 13:35also talk to the model directly in plain 13:38English in your prompt and tell it how 13:40hard you want it to reason. Tell it how 13:42long you want the response to be. And 13:43that's a useful guide for the model. It 13:46helps. It shapes what the model does. So 13:48keep in mind you're not just you don't 13:51have one power level. You have multiple 13:52power levels to play with here. You have 13:54a depth power lever that goes like this 13:56and it can go in-depth or not. And you 13:58have a length of response. And so you 14:00can say I want really in-depth thinking 14:02and I want a short response or vice 14:03versa. Fourth principle you have to 14:06define the uncertainty here. As I've 14:08said before, this model is literal. You 14:11need to recognize that because it's kind 14:13of a speedboat. It's going to attempt 14:15even any task you give it. even when it 14:18shouldn't attempt that task. There are 14:20it needs explicit protocols. There are 14:22not explicit fallbacks for when it gets 14:25stuck. It needs you to tell it when it 14:28when you're stuck or when you don't know 14:30what to do or when there's ambiguity or 14:31when there's uncertainty. Here is where 14:34you go. This is the next step. If data 14:36is insufficient, this is what you need 14:38to specify. This is what you need to 14:40ask. There's a lot of examples like that 14:42where we know there's ambiguity, but we 14:44need to specify for the model. This is 14:46what you do. And by the way, there's 14:49seven of these. I've gone through four. 14:50If this is feeling like a lot, that's 14:52okay. That's why I wrote the meta 14:54prompts. I want to make this easier. And 14:57that's why I've taken the time to craft 14:58these prompts because I think that we 15:01need help steering this model. 
And this is the first model that really feels so powerful and so sensitive to steering that we need something like metaprompts just to drive it effectively.

Principle number five: tool use is part of the initial assessment the model makes. And my observation is that it's not really easy to get the model to be balanced in its tool use. It's either a tool maximalist or a tool minimalist. It helps, if you have opinions about tool use, to tell it. So say, "First, I want you to search the web. Then, after that, please analyze the data that you retrieve in this way." Give it that specific tool-use instruction so you don't leave it to guess.

Number six: context memory can be an illusion with this model. It will act like it remembers. It will, but it's rereading everything each time, just like every other model. And you will need to periodically reiterate instructions to remind it to follow the protocols you're giving it. In other words, this model is extraordinarily steerable, but at its core it's kind of built for one- or two-turn conversations where you have a very detailed prompt. If you have a lengthy conversation that meanders and eventually gets to the point, the model may not remember at the level of detail you need across that entire conversational set, because it's cued so deeply to the last thing you said, and to whether that's specific and actionable and it can go do it. It's that bias for action coming in. One of the ways you can check whether the model remembers is to plant a flag in your initial prompt. You can say, "If you have read this instruction and recall it and remember it, please write 'flag' at the end of every response."
When the word "flag" disappears from the responses, you know that the model has forgotten the initial instruction. You can actually see right when it happens. So there are ways to know, but I think the larger point here is that this model expects you to prompt well at the top. And that's why I've invested in metaprompting as a useful way to take our somewhat messy and scattered human thinking and actually get it into shape for a prompt.

Principle number seven: structure beats intelligence. So give the model methodologies. Don't assume that thinking mode is the only thing you have to work with here. If you give it structured thinking and structured prompting, if you give it methodologies along with goals, you're going to get much farther. In a sense, I think all of the conversation that happened at GPT-5's launch around "can we push the model into thinking mode or not" was a little bit of a red herring. Yes, there are ways to do it; I've talked about some of those, like writing "think hard," or talking about the problem and elucidating it clearly so it's easy to see. But at the end of the day, if you give it clear goals and methodologies and clear structure, you get so far with this model that I find, in practice, I care less about exactly which model it called, because it's more likely to be calling the correct one in the first place. It's more likely that if it calls a model, the model's going to know what to do. And for both of those reasons, I find that good structure in the prompting makes some of the intelligence questions go away. So, if you're wondering: how do I make this all work? How do I put this together? How do I take these principles and use them?
You want to make sure that you are calling for the expertise you need. I'm actually going to go through the components of a prompt that I would recommend, and this will come out in the metaprompting as well if you want to dive deeper. Number one, I recommend that you define the role, not because of roleplay, not because it's necessarily a magic card, but because you're trying to prompt for expertise routing. You're trying to push the model to understand where it needs to have expertise in order to set up the rest of the prompt. In the beginning, back in 2022, when we said "define a role," the thought was that this enables the model to answer correctly when otherwise it wouldn't. Now it's more about aiming: it enables the model to recognize the world it's in and the expertise that's called for, and maybe route to a smarter model if need be. Number two, make sure you have an objective framework. If you're writing a prompt from scratch, you'll have to do this yourself; if you're using a metaprompt, it will help you a bit. But you want to be clear about what the goal is for the model, because again, GPT-5 needs to go on missions. You have to give it missions if you're going to do the work with it. Number three, process methodology. You want to give it a really explicit process to go through. Metaprompts can help here too. You want to make sure that the model understands: this, step by step, is what we need to do to get to the end result. Number four, you want to have an explicit expectation for format. Make sure the model knows how to get you the format you want. What do you need? Meeting notes? An email? Fine, say what you're asking for. It wants to do the job. Make sure it knows how to do the job in the way that you want.
Number five, give it boundaries and limitations. Constraint handling really matters with this model, because, again, you're trying to aim the speedboat. If you're telling it "don't go to the coral reefs," that's really helpful, because it just wants to go fast. So tell it where not to go. That matters a lot, because if those initial prompts are important for giving the model a mission, you also want the model to understand the anti-missions, right? The anti-goals, the things we're not going to do. Number six, be clear about those uncertainty pieces. I talked about how you have to define areas of tension and ambiguity and explicitly give the model priorities: this is number one, this is number two; if there's a conflict, this is how you resolve it. Take that seriously, really seriously, because it will help the model to help you. And finally, number seven, give it a way to check its work. The model wants to please you and go on missions. Give it validation criteria; that will help.

Those are the seven components that I have seen work well with this model. And they all add up to that core idea we talked about at the beginning of this video: this model needs to be steered. The whole idea of metaprompting is that it's basically giving you a helper rudder that you can use to more easily steer. It's like giving you power steering for this boat. Because if you don't know better, if you just try to drive this the way you drove other models, you're going to have the experience that so many people had after GPT-5 launched: you're going to give it the same instructions you gave other models, instructions that worked.
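The seven components can be read as a simple prompt skeleton. A minimal sketch of assembling them, using the video's meeting-prep scenario; the section labels, example text, and helper name are my own, not a template from the Substack:

```python
# Sketch: assemble the seven recommended prompt components
# (role, objective, process, format, constraints, uncertainty
# handling, validation criteria) into one structured prompt.
# Labels and example content are illustrative.

SECTIONS = [
    ("Role", "You are an experienced B2B marketing consultant."),
    ("Objective", "Get approval to move a marketing automation pitch to the proposal phase."),
    ("Process", "1. Recap context. 2. Draft talking points. 3. List likely objections and counters."),
    ("Format", "A one-page prep sheet with headers and bullet points."),
    ("Constraints", "Do not invent statistics; leave blanks for data I have not provided."),
    ("Uncertainty", "If information is missing, ask me before proceeding. Primary goal: approval. Secondary: rapport."),
    ("Validation", "Check that every claim traces back to the context I gave you."),
]


def assemble_prompt(sections: list[tuple[str, str]]) -> str:
    """Join labeled sections into one structured prompt with headers."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)


print(assemble_prompt(SECTIONS))
```

The headers double as the structure the transcript says the router keys on, so the same skeleton serves both purposes: clarity for you and routing signal for the model.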
And you're going to realize how much power is there, how much bias for action is there, and how much demand for precision there is, and get rightfully frustrated, because the jump in prompting expectation is frankly ridiculous. I'm saying that I think it's ridiculous, but that's the expectation, and that's why I'm building metaprompts to help, because I think we need something like power steering for this.

And so my suggestion to you is that you take the metaprompting, play with it, and see if it helps you get more precision. Look at the seven principles I've outlined and see if they can help you write better prompts. And make sure you recognize that there will be moments when you don't need to do all of this. You do not need to use metaprompts for simple factual queries. You don't need to use fancy prompts for an exploratory conversation where the whole goal is to discover meaning together. You don't need to use them for personal and emotional conversations.

So understand that this model is built for the kinds of missions I've been describing over most of this video, and that therefore this metaprompting skill set, this metaprompting toolkit, the idea of prompting in a more specific manner, is going to help, because that's the core of what the model wants to do. But if you're on the edges of what the model does: to be honest, this model is not a super emotionally smart model. I feel like an emotional conversation is on the edge of what it does. This model isn't really built for factual queries; it can do them. If you're in that kind of a space, don't bother with the fancy prompting. Just go with the basic conversation. And frankly, there are other models that do some of that stuff better.
Claude has better emotional capabilities than ChatGPT. We're just going to say it, right? And so you can pick the model that works for you for those other tasks. The era of casual conversational prompting is just over. With GPT-5, we need to recognize that we are in a new world. I would expect GPT-6 to be even more agentic and to demand even more precision from you. And maybe they're going to ship something that helps you expand your prompts. We'll see. But at the end of the day, you need to learn systematic prompting now, and metaprompting is a way to learn it that doesn't feel as overwhelming. It helps you to steer.

Please, please, please recognize that you can prompt GPT-5. It is not impossible. You can give it the precision it needs, with some help. You can understand this model. This model is not impossibly complex. It's very, very useful if you can get it to steer predictably. And predictability is driven by prompting. Predictability beats the wildly unpredictable, brilliant-or-dumb responses that you get from conversational prompting. We need to get to a point where our prompting is more precise. And so I hope that this video has helped you understand some of what makes GPT-5 tricky to prompt, the principles that go into GPT-5, how those principles of prompting shape the model, and how they shape the response. I also hope the metaprompt example helped you see the importance of using metaprompts when you're tired, frustrated, or don't have the time to improve your own prompting, so you can get the most out of this model. That's GPT-5 for you. It is a tricky, tricky model, but you've got this. You got it.