Meta-Prompting: Dual Strategies Revealed
Key Points
- The way prompts are worded and structured dramatically impacts AI behavior, and mastering these details enables tailored, goal‑specific outputs.
- By presenting two versions of the same prompt—a “hard‑mode” framework prompt and a beginner‑friendly, diagnostic‑question flow—the speaker illustrates how subtle tweaks produce different learning systems rather than single responses.
- Logging prompts in tools like Notion allows you to attach an AI assistant (e.g., Comet) to evaluate and compare prompt effectiveness, turning AI into a self‑reviewing coach.
- A robust prompt template typically includes a role, purpose, instructions, references, and desired output, even though the role itself may not boost factual accuracy but sets the contextual tone.
- Prompting should be viewed as a process for driving iterative learning systems, not merely a one‑off request, and using AI to refine AI prompts accelerates mastery of the technology.
Sections
- Meta Prompting: Teaching AI via Examples - The speaker outlines an interactive session where they reveal how they refine prompts across multiple versions to achieve two goals—demonstrating prompt‑engineering techniques and teaching AI concepts—highlighting prompts as systems that drive learning.
- Prompt Blueprint for AI Tutor - The speaker outlines how to define a two‑step goal—first a diagnostic quiz, then progressively harder lessons—in a prompt that transforms the assistant into a personal AI learning tutor.
- Custom Prompt Blueprint Workflow - An overview of how the Prompt Coach guides users to create a tailored AI prompt by selecting strategy, effort level, and agentic mode, and either answering step‑by‑step questions or supplying all inputs up front to generate a ready‑to‑use prompt blueprint.
- Strategic Prompt Design Techniques - The speaker explains how to craft sophisticated, multi‑layered prompts—using example blurbs, placeholders, and explicit role/goal structures—to guide advanced language models without exposing full prompt content.
- Easy‑Mode AI Tutoring Blueprint - The speaker outlines a beginner‑friendly “easy mode” prompt that launches a personal AI tutor using single‑question diagnostics and micro‑lessons to quickly start learning AI without overwhelming the user.
- Active Micro-Learning Blueprint Design - The speaker outlines a revised prompt that generates a learning blueprint using single-question micro‑lessons, active‑learning tactics, pacing controls, and a structured output format (diagnostic, concept, practice, stretch goal).
- Building a Custom AI Tutor - The speaker walks through creating and testing a prompt‑driven learning tutor, toggling an “easy mode” that uses chain‑of‑thought reasoning, and briefly explains back‑propagation in simple terms.
- Managing Prompt Variations Effectively - The speaker explains they'll share full-text prompts on Substack for easy use, demonstrate how different wordings achieve the same goal, and highlight how structure and wording influence prompt outcomes.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=2uC5WllehxY](https://www.youtube.com/watch?v=2uC5WllehxY) · **Duration:** 00:24:15
Timestamps:
- [00:00:00](https://www.youtube.com/watch?v=2uC5WllehxY&t=0s) Meta Prompting: Teaching AI via Examples
- [00:03:20](https://www.youtube.com/watch?v=2uC5WllehxY&t=200s) Prompt Blueprint for AI Tutor
- [00:07:09](https://www.youtube.com/watch?v=2uC5WllehxY&t=429s) Custom Prompt Blueprint Workflow
- [00:10:17](https://www.youtube.com/watch?v=2uC5WllehxY&t=617s) Strategic Prompt Design Techniques
- [00:13:22](https://www.youtube.com/watch?v=2uC5WllehxY&t=802s) Easy‑Mode AI Tutoring Blueprint
- [00:16:30](https://www.youtube.com/watch?v=2uC5WllehxY&t=990s) Active Micro-Learning Blueprint Design
- [00:20:13](https://www.youtube.com/watch?v=2uC5WllehxY&t=1213s) Building a Custom AI Tutor
- [00:23:54](https://www.youtube.com/watch?v=2uC5WllehxY&t=1434s) Managing Prompt Variations Effectively
You know, the details of how we prompt
profoundly influence AI. And most people
know that, but they don't know how to
shape those details so they matter. And
I find that whenever I do prompt
content, people get really excited. But
there's been a gap. There's something
that's been missing. And it's been me
talking you through multiple versions of
the same prompt that I created so that
you can see how I tweak and change the
differences to get slightly different
variants according to my goals. Today I
want to give you two for the price of
one. So it's not just two prompts, it's
also two goals. You're going to learn
about how I tweak prompting and the
structure of a prompt through an
interactive video like this where I
share all of my details on how I
construct the prompt. And you're also
going to learn about AI itself because
you're going to see the prompt. And the
prompt is a prompt to teach you AI. I
know that's very meta, but we're going
to get into it and you're going to see
why it works. And it's going to
introduce you to the idea of prompting
as systems of learning and systems that
drive process. I think one of the
biggest misconceptions of prompting is
that you prompt for just one response.
And you're going to see both of these
prompts are not just for one response.
They're actually to drive systems of
learning. With that in mind, let's get
to it. Okay, here we are. Prompts for
learning AI. And do you know the first
thing about this? I have my Comet
assistant pulled up on the side. I can
chat with this prompt using AI. That's
one of the advantages of logging your
prompts in a place like Notion. You can
actually use an AI assistant to review
your prompts. And I do that here. I
asked it which prompt is more helpful
for beginners. And it analyzes the
prompts. It says this first one lays out
a framework, purpose, instructions,
reference, output, which we can see
right here. That's perfectly correct.
And it calls out that version two
focuses heavily on single questions. So
begin with one diagnostic question,
record my answer, and then ask the next
question. That's also perfectly correct.
And so it concludes that version two is
easier for beginners and creates a nice
little table here. It is great to use AI
to help you with AI. It's one of my
biggest tips. AI is a self-learning
technology. As you are more into it, as
you are more hands-on with AI, you're
going to do better. Okay, let's get into
the prompts today. Version one is sort
of hard mode. It's the one where you
define your own AI goal. The first thing
we do is we take a role and we give it a
purpose. You are my prompt coach. This
is our shared mission. So our mission is
to craft a prompt blueprint that turns
the assistant into a personal AI tutor.
So what this section does is twofold.
First, it adopts a role. And people will
tell you the role doesn't matter because
the role has been shown and tested to
not improve factual accuracy on recall.
That's true. That is not the point of
the role. And people who think it is
misunderstand it. The point of the role
is to help the model get into a semantic
space so that the conversation flows
more smoothly so that the model is able
to understand more easily where we are
trying to go with the conversation. It
has nothing to do with factual recall.
It may have helped with factual recall
in the beginning in 2022, but it
certainly doesn't now. Now we have an
outcome or goal. Our shared mission is
to craft a prompt blueprint. Already you
can see some of the differences here
between prompts. This prompt is focused
very heavily on learning together and
expects a lot from you, the user. The mission text continues:
it turns the assistant into a personal AI
tutor for AI learning that (a) quizzes
methodically and (b) delivers
progressively harder lessons. So this is
the heart of what you want the model to
do. This is what we would call
definition of goal. It's quite a complex
definition of goal. So you basically
have to get the model to understand that
it wants to do two things and it needs
to do them in a particular order. And we
signify that by being clear about the
overall goal, what we want the assistant
to do, the semantic space it occupies,
the stance that it takes, whether it's
interrogative or not. Clearly it's
interrogative here. And then what steps
it takes to reach that goal at a high
level. First it has to quiz methodically
to diagnose my current level. I'm still
using technical language here because
this reinforces that we are in a place
where we care about hard learning and
then deliver progressively harder
lessons. That is doing a lot of work
right there. Progressively is really
laboring to make it clear to the LLM
that we should not start with hard mode
in the beginning. So it says we'll
follow the prompt blueprint framework
from "Your Prompt Is the Product." That is
actually an earlier article of mine that has, I think, made it
into AI land. Now other people are using
it and seeing some success. So we
are trying it. Then we outline the
sections, right? So we reinforce the
parameter. That framework has four
sections that are in this order.
Purpose, instructions, reference,
output. Here is
what's critical. We've laid
out what we expect the model to do in
this first paragraph up here. We explain
where we're going with the prompt as a
whole. This is all preamble. Now, we're
getting to where the prompt actually
begins to have teeth. You can see how
this is a very sort of advanced prompt
because it has a lot of setup to get
the model where it needs to go. And most
prompts I see don't put this much effort
into the setup. And this is part of how
you get more complex prompt results.
Okay. Workflow rules. Now, we're telling
it how do you use this stuff? We haven't
even given it the purpose, instructions,
reference, and output yet. We're telling
it how it uses it. And we're using
markdown throughout. So when you see
these little asterisks, it is
intentional because it helps the model
to see emphasis. It reads it as bold. So
section by section, no skipping ahead.
That's critical because the model might
be tempted. Full question set. Show me
every question I must answer and provide
a concrete example answer for each. Now,
what's interesting here is that this is
potentially
going to make the user work very very
hard because it may display
a bunch of questions at once. And you'll
see how we tweak that for the easy mode
in the next prompt. So, this is one
that's definitely an example of where
we've changed it and made it hard mode
because we've allowed the model to be
complete. Gatekeeping: wait until I
answer all the questions. If an answer
is unclear, ask a follow-up. Again, this
is an example of going to hard mode
because an easy mode would understand
that if you answered one, two, and three
incompletely, you probably don't know
four, five, and six. This one is going
to assume that you know enough to
reasonably answer the questions. We then go
to memory. Carry my confirmed answers
forward. Do not ask for them again. I
don't want it to bug me again. Examples
for reference. When illustrating, draw
inspiration from the sample prompts
below.
Pricing strategy, content calendar,
agentic monitor, pitch deck review.
Finish line. After all four sections are
filled, assemble and display the final
prompt blueprint in this format. And do
you see what we just did? Think about
it. I was waiting for this; it's going to be
a nice surprise. This prompt coach
exists
to help you build a prompt that is
custom to you and your sort of knowledge
level of AI so that you can learn about
AI the way you need to. That's why it's
hard because you have to answer all of
these questions and then it has to
output a prompt in the right structure
that you can then use for a lesson plan.
It goes into the prompt blueprint.
Mode: reflection, action, agentic. Those are
three different options, like light
switches. You have to specify them.
Effort: quick, standard, deep. Again,
you have to specify. You have to specify
your goal. And what's interesting
is that you have two ways to do this.
You can either let it ask you questions
piece by piece and it will eventually
develop that as it asks you questions
or you can skip ahead, answer the
questions it's going to give you, but
also give it something to work with here
at the start. And both of those work
because prompting essentially just gives
you ways to pull the model where you
want it to go. And in this case with the
purpose, you know where you want it to
go. Great. You don't have to make the
model work for that. It can ask other
questions.
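The switches and sections described so far can be pictured as a small piece of code. Here is a minimal Python sketch of that structure; all field names, defaults, and strings are illustrative assumptions, not the speaker's actual prompt text:

```python
# Illustrative sketch of the blueprint's switches and four ordered sections.
# Names and strings are assumptions for illustration, not the actual prompt.

MODES = ("reflection", "action", "agentic")   # the three "light switch" modes
EFFORT_LEVELS = ("quick", "standard", "deep")

def assemble_blueprint(mode, effort, goal, purpose, instructions, reference, output):
    """Assemble the four sections, in order, under a mode/effort/goal header."""
    if mode not in MODES:
        raise ValueError(f"mode must be one of {MODES}")
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"effort must be one of {EFFORT_LEVELS}")
    sections = [
        ("Purpose", purpose),
        ("Instructions", instructions),
        ("Reference", reference),
        ("Output", output),
    ]
    header = f"Mode: {mode} | Effort: {effort} | Goal: {goal}"
    body = "\n\n".join(f"## {name}\n{text}" for name, text in sections)
    return header + "\n\n" + body

# Answering the coach's questions piece by piece, or supplying everything up
# front, is the equivalent of calling this with all arguments filled in.
blueprint = assemble_blueprint(
    mode="action",
    effort="quick",
    goal="Learn AI basics fast",
    purpose="Act as my personal AI tutor.",
    instructions="Quiz methodically, then deliver progressively harder lessons.",
    reference="Draw inspiration from the sample prompts.",
    output="Markdown lessons.",
)
```

The point of the sketch is the ordering guarantee: the four sections always come out as Purpose, Instructions, Reference, Output, matching the framework.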
Instructions, behavioral guidelines,
task description, constraints are really
important. Allowed tools is really
important. Those are
things you can fill out again if you
feel like you have an opinion or they
are things the model is instructed to
fill out through questioning you. So you
don't have to know at the start but you
will know by the end. Reference files,
tables, numbers, external knowledge and
relevant context.
It will fill that in depending on the
context it has for you. Or you can call
out lessons. One of my favorites is to
invoke Andrej Karpathy, who's strongly
parameterized in the model and say I
want you to follow his lesson planning.
It will do that and it's very very easy
as a shortcut. Expected output format:
you can have it write back an essay once it
teaches you. It can be JSON if you're
technical, etc. And then the length
instructions you can frame this as
tokens or words. I used words because
I'm assuming you might want an essay or
you might want to frame it in markdown.
You can also constrain it to tokens. It
will work just fine. Sample prompt
references. Now, isn't this interesting?
I almost gave this away earlier. These
are the four we referenced up here.
Examples for reference: draw from them for
inspiration. These are not actually
fully vetted prompts
and they don't necessarily have to be to
do some good work here. They could be.
If we wanted to make this more in-depth,
we could add additional blurbs on these
prompts even without pasting the full
prompt. If we pasted the full prompt,
there is some risk that we would hijack
the model and get it to run like a
pricing strategy prompt. And that is why
I did not put the full prompt in here.
Instead, I invoked the kind of depth I
want in other examples. Like, this is
what I want from a pricing strategy
perspective if we're running this
through pricing strategy; this is what I
want from a content calendar
perspective. You get the idea. And so
what I'm doing there is I'm challenging
the model in four different examples to
think about how deeply I want it to
think. And then it needs to read that
back with the earlier part of the prompt
and just draw inspiration. It draws
vibes from that so that it knows to go
deep. That is a fairly sophisticated
example implementation because within
the same prompt I call out the example
and then I reference it farther down and
I reference it with a placeholder. And
so if we zoom out and look at this
prompt overall we have a role at the
top. You are my prompt coach getting you
into semantic meaning. We have a shared
mission, a shared goal
and then we have a way we get that goal
done in order. Again, we have
constructed this very carefully so it
will do it in order A and B. This will
tend to be followed better by a thinking
model, a Gemini 2.5 Pro, Claude Opus 4,
an o3, because they can parse the
instruction set. We then give it a sense
of what's in the box. We say refer to
"Your Prompt Is the Product," which to some
extent may be in the model at this point
and it has these four framework things.
So we don't assume it's in the model. We
refer to it if it's helpful and then we
define what is important to us in the
prompt that this is outputting because,
remember the big surprise: this is a
prompt to develop a custom learning
prompt for you. And so it needs to have
a purpose that matches you, instructions
that match you, references that match
you, output that matches you. It needs
to get there by following these workflow
rules. It has to go section by section
and be methodical. It has to ask the
full question set. It has to gatekeep
and expect you to answer all
the questions. It has to use its memory
and not just reask. And it has to refer
to these examples which are basically
placeholders for thinking deeply about
various subjects. And they're picked to
be different subjects than what we have
in the prompt so it doesn't confuse the
prompt. In other words, referencing them
helps the prompt understand the depth,
but the prompt understands that this is
about building a prompt for AI at this
point, so it's not going to get too
distracted. If we wrote out the full
prompt, that might be too much. Finish
line, assemble and display the final
prompt. We mentioned that you now have
options for these sections. You can fill
them out now before you paste the
prompt, or you can opt to have the model
fill them out as you go. And that's how
it works. Let's go to prompt number two.
Let's say you're impatient. You don't
want to do all of this work just to
build a prompt, just to get you an
action-oriented plan, etc., etc. Instead,
all you want to do is get started. Well,
that's what easy mode is for. And we are
again going to walk through it. We start
with the same role. This has the same
purpose. Our shared mission is to run a
personal AI tutoring program that
diagnoses my current level and delivers
progressively harder lessons. Very
similar except we include the line
without overwhelming me because we know
it's a little bit more aimed at
beginners. Again, we invoked "Your Prompt
Is the Product" and we begin to talk
about the constraints before we get to
the blueprint. These are constraints
that are designed to make it easier to
consume. One is single question mode,
one is micro lessons. So, it's not too
much. At the end of this video, I'm
going to show you how each prompt looks
in reality,
at least the first turn or two. The
prompt blueprint then has the similar
purpose, mode, effort, goal,
and that hasn't changed.
And it fills in stuff you would
otherwise have to fill in. So, the
purpose is minimum viable understanding.
The mode is default agentic. So it's
going to be more active with you and you
can override it anytime with this
command. The effort is default standard.
You could change that in this prompt but
it prefills it so it won't ask you. The
goal is learn AI fast via single
question diagnostics toward tougher
lessons. Very simple. Quick start
diagnostic. It's a shorter version of
the workflow. Begin with one question.
Again, we're simplifying. Record my
answer. Respond with short feedback. And
then ask a single question. You cannot
ask more than five. Again, we're looking
to simplify. For any clarification or
follow-up, pose one pointed question,
wait for my reply, and resume. Here is
how micro lessons work. Ask a diagnostic
question. Teach. Give a task or code
snippet to practice. And then an
optional harder challenge. Escalate the
difficulty only when I score more than
80% on the prior practice task. It will
sit there until you learn it. Again,
this is designed for folks that don't
know their level and need help to learn
defaults and overrides.
This is exactly what we just described:
effort standard, and it adds a time horizon
of 12 weeks, which is interesting.
Essentially, with the 12 weeks I am not
saying actually take 12 weeks, and the
model won't. I'm saying 12 weeks because
that triggers a part of semantic space
where the model believes this is a real
course, because the courses the model
studied during pre-training, if it's an
AI course, are complete 12-week courses.
I'm invoking that here. I can send
"batch" to allow up to three questions at
once, or come back to shorten lessons
further. So this gives me controls; this
is why you read your prompts. If a
missing detail blocks progress, ask only
one clarifying question. Retain
confirmed answers. That's the same.
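The escalation rule in the quick-start diagnostic — advance the difficulty only when the prior practice score beats 80%, otherwise stay put — can be sketched as a tiny gating function. The threshold comes from the prompt; the level names are my own illustration:

```python
# Sketch of the micro-lesson escalation rule: difficulty only increases
# when the prior practice score exceeds 80%. The 80% threshold is from
# the prompt; the level scale itself is an assumption for illustration.

LEVELS = ["beginner", "intermediate", "advanced"]
PASS_THRESHOLD = 0.80

def next_level(current, score):
    """Return the difficulty level for the next micro-lesson."""
    i = LEVELS.index(current)
    if score > PASS_THRESHOLD and i < len(LEVELS) - 1:
        return LEVELS[i + 1]
    return current  # "it will sit there until you learn it"

print(next_level("beginner", 0.75))  # beginner (75% does not escalate)
print(next_level("beginner", 0.85))  # intermediate (85% does)
```

Note that a score of exactly 80% does not escalate, matching "more than 80%" in the prompt.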
This does
still get you to a blueprint for
learning, but you know what's
interesting? It doesn't stop you from
learning along the way. Whereas the
earlier prompt was going to more or less
delay a lot of the learning until you
answered all the questions in a row.
It's going to be very overwhelming for
some folks who are earlier on in their
learning journey. This is how you want
to teach. We didn't have any of this
before. So, use active learning tactics,
mini projects, code snippets, thought
experiments, cite authoritative sources,
and markdown. Accept pacing commands.
So, this gives you tips on how to use
pacing commands. I can tell it to skip
if I want. On "checkpoint," please
summarize my progress. That's very
handy. And then I seed references. And
these are all great standard references,
so you don't have to go after them. And
then this is the output format per
lesson. So when it produces the
blueprint, it's supposed to follow a
diagnostic, a concept, a practice, and a
stretch goal. And then begin execution
now. And this is how you begin. Okay,
that is a very different prompt. It
accomplishes the same goal, but you can
see how flipping a few things at the
beginning really changed it. It is
asking only one question at a time. It
is focusing on micro lessons. We are
weaving that through
by emphasizing single question in
multiple places.
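To make the per-lesson output format mentioned above concrete (diagnostic, concept, practice, stretch goal), here is what one lesson might look like if you asked for JSON output. The keys mirror the four parts of the format; the sample content is hypothetical:

```python
import json

# Hypothetical shape of one lesson under the "diagnostic, concept,
# practice, stretch goal" format; keys and content are illustrative.
lesson = {
    "diagnostic": "What does a training data set contain?",
    "concept": "Training data is the set of labeled or structured examples a model learns from.",
    "practice": "Sort five example records into 'training' vs 'evaluation' uses.",
    "stretch_goal": "Explain why a held-out test set helps detect overfitting.",
}

print(json.dumps(lesson, indent=2))
```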
We are also filling in things that are
not filled in in the prompt up above
where we're saying, you know, the mode
is default agentic. Well, if you roll up
above, you still have a mode, which is,
I think, a helpful thing, but it doesn't
say you have to pick. And so, we're just
making it easier and giving you fewer
choices so you can get started. Let's
see how these prompts actually run.
Okay, let's run the first prompt. Here:
"You are my prompt coach." Now, remember, this is
the hard version: quiz methodically and
deliver progressively harder lessons.
We're going to go ahead and run it. My
model over here is o3. So, let's see how
we do. Let's let it think. It gives me a
table of questions. It gives me example
answers it will accept. I can type them
in and answer. So, we're going to keep
this pretty simple. What is your
overarching learning goal? I want to
learn AI basics fast before GPT-5
gets here. Two,
let's go with action. And three, I want
quick. I don't have a lot of time. So,
you can see it's just starting to walk
you through. Now, it wants to ask me all
of these questions at once. Interaction
cadence. So, these are all things that
we'll put into the prompt. Uh, yes, one
question.
Two,
150 words sounds good. Three, I'm
keeping it like I could answer longer,
but I'm keeping it quick. Multiple
choice, please. Four, tone is
conversational.
Five, give me hints. Six, allowed
references.
Yes, go get references. Okay, we're
going to keep it simple. One,
Karpathy as a trusted source plus folks
on that level. Like we're going to be
deliberately vague. Uh, personal
context. This is making it harder for
the model: I don't know a lot about AI.
Existing notes: I don't have notes.
No off topics. Great. So, the model
is just kind of going through, chewing through, and
eventually it's going to find that it
has everything it needs to produce the
blueprint. What kind of format do we
want? Base format is markdown. I'm not
even going to number it. Like, you can
also do this: four options.
Uh, show correct answer with explain.
I'm not even using good grammar. Again,
I'm trying to show you that you can
really mess with this prompt.
Recap after three questions. Great. So,
it thinks it's close. And there's the
prompt blueprint. And now it's writing
the prompt out. Well, with that in mind,
let's go ahead and write the prompt. And
we have our prompt. And we're off to the
races. I'm not going to do the rest of
this, but you can see you built your own
learning tutor system, which is really
cool. Well, you might wonder what does
the easy mode do? Let's see what easy
mode does. We're going to go over to
version two. We're going to copy it.
We're going to paste it in here. Here we
are on easy mode. See, you think it's
easy? How is it this easy? Watch. I
honestly don't know. You see how it's
using chain of thought there to pick an
easier path forward? No worries. Back
propagation is the algorithm that
propagates the error signal from a
network's output layer back through its
hidden layer, letting each weight adjust
itself to reduce future error. It's the
workhorse behind neural network
training. And then it asks a question.
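The backpropagation description above — propagate the error signal backward and let each weight adjust itself to reduce future error — can be shown numerically with a toy one-weight model. This is my own illustration, not code from the video:

```python
# Toy backpropagation: one weight, one training example.
# Model: y_hat = w * x; loss = (y_hat - y)^2.
# The gradient dloss/dw = 2 * (y_hat - y) * x is the "error signal
# sent backwards"; the update step is the weight "adjusting itself".

x, y = 2.0, 10.0        # input and target (so the ideal w is 5.0)
w = 0.5                 # initial guess
lr = 0.05               # learning rate

for step in range(50):
    y_hat = w * x                   # forward pass: guess
    error = y_hat - y               # measure the mistake
    grad = 2 * error * x            # send the blame backwards
    w -= lr * grad                  # tweak the weight

print(round(w, 3))  # close to 5.0 after training
```

The four comments in the loop are exactly the four-step summary the model gives a moment later: it guesses, measures the mistake, sends the blame backwards, and tweaks the weights.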
So I'm going to give it two answers. I'm
going to say one,
a training data set is like the internet
or books
that models read prior to completing
training.
Again, keeping the answer simple. two,
tell me in simpler words how back
propagation
works. And I didn't even spell it
correctly, but the model will figure it
out. And so I'm immediately pushing the
model on the prompt and I'm basically
saying yes, but you have to make room
for me to have questions along the way.
And we'll see how it does. Models tend
to do okay with this usually. So it
likes my answer and then it expands it.
It's labeled or structured examples.
It's flashcards for the lesson, not the
internet, which is a great visual. And
then it answers my other question. This
is how back propagation works. It
guesses. It measures mistakes. It sends
the blame backwards. And it tweaks the
weights. That is super clear. Like it
actually answered my concern. And
then diagnostic question three, what
problem does a validation or test data
set help detect when training a model?
And it actually gives you a reference
that it's following for this, which is
kind of handy because I can now look
this up. I won't, but I can. Okay, you
have now seen how both of these prompts
work. I don't know which one you want to
pick, but the idea is that you have come
away from this particular exercise
understanding how small changes in the
prompt itself actually shape what the
prompt can do. Prompt one effectively
became a full learning system for us.
You can use it to construct any version
of learning. It's like clay, but it's
structured clay. And so you can flip
those switches like to agentic mode or
hard mode or whatever and get what you
want. Whereas easy mode fills in a lot
of those and also imposes some extra
structure like one question at a time
that helps you just get started learning
right now. And then you saw that in hard
mode you could actually build your own
learning system by using the prompt to
build the prompt. It's a very sort of
advanced technique. A lot of people roll
their eyes, but it's actually really
helpful. The prompt becomes the scaffold
that you can use to build what you want
that's custom to you. I don't know where
you're at on your knowledge. I want you
to get the most value possible. So, part
of why I picked this exercise today is
because I wanted you to see both how to
use these prompts for things like
building additional prompts. They're
called meta prompts when you use them to
build additional prompts and also
because I wanted you to get a concrete
action out of this where you could
actually learn AI with these prompts.
So, I'll be putting these down in the
Substack in full text so you can see
them and grab them really easily and
then you're off to the races. You can
use them for whatever you like. I hope
this has been helpful. In particular, I
wanted you to see how I manage
differences with different kinds of
prompts that accomplish the same goal.
And I want you to get a deeper
understanding of how structure and
wording influence prompts.