Flawed Prompt Packs Undermine AI Literacy
Key Points
- The newly released ChatGPT prompt pack offers overly generic, one‑line prompts that lack the necessary context for complex tasks like GDPR compliance, making them ineffective for professional teams.
- Relying on such superficial resources promotes a false sense of mastery, trapping a future generation of knowledge workers in the “messy middle” of AI adoption where they treat AI like ordinary software instead of a skill‑intensive tool.
- To stay competitive, individuals must continually deepen their prompting expertise, as AI advances rapidly and only those who “lean all the way in” will keep up.
- Real‑world examples show the difference that sophisticated prompting can make—e.g., generating a full financial analysis from a screenshot in Excel with Sonnet 4.5—while less capable attempts (like with ChatGPT‑5) fall short.
- Effective AI use requires teaching teams how to craft detailed, context‑rich prompts rather than deploying generic packs, ensuring AI serves as a true intelligence‑augmentation resource.
Sections
- Critique of ChatGPT Prompt Pack - The speaker condemns the newly released ChatGPT prompt pack as overly generic and ineffective, arguing it exemplifies the poor state of AI education and underscores the need for more thoughtful, context‑rich prompting.
- Designing Contextual AI Upskilling - The speaker argues that effective AI education should target specific job‑family pain points and real‑world use cases, rather than assuming generic prompting skills will seamlessly transfer from search‑engine experience.
- AI Adoption Hindered by Human Gaps - The speaker argues that AI tools like Claude and Copilot are powerful when integrated into clear workflows and proper training, but widespread reluctance, insufficient training, and token compliance efforts are the primary barriers to effective implementation.
- Call for Better AI Education - The speaker urges model makers to invest in clear, beginner‑friendly AI education and resources, while personally sharing content and encouraging passionate problem‑focused engagement with AI.
Full Transcript
Source: https://www.youtube.com/watch?v=N8ddmMBJrzo
Duration: 00:12:23
- 00:00:00 Critique of ChatGPT Prompt Pack
- 00:03:42 Designing Contextual AI Upskilling
- 00:07:18 AI Adoption Hindered by Human Gaps
- 00:11:45 Call for Better AI Education
ChatGPT launched an absolutely terrible resource for prompting, and I think it deserves more attention, because we need to talk about how bad AI education is today and how much depends on getting it right. And ChatGPT is a leader in the space. They're seen as an influencer, as the first mover. People will look at something like the ChatGPT prompt pack that just got released and say, "This is something we need to give to all of our teams." They're terrible prompts, guys. They're one- or two-line prompts that are extremely generic.

In fact, I'm going to read you one aimed at their most technical audience: engineers. Say your engineers are asked to respond to GDPR compliance from a technical perspective: how should we advance GDPR compliance? You might think you need a fairly complex prompt for that. It should take account of your data schema. It should look at the countries where you have a footprint. It should look at data processing, where data is stored, and what your existing stack looks like. None of that. None of that comes out in this prompt. "Research best practices for GDPR CCPA compliance" (not even one regulation; it mashes them together) "so we can help kick off our discussions with our legal team." When has engineering ever kicked off discussions with legal? "Context: Our app stores sensitive user data in the EU and US. Output a compliance checklist with citations sorted by regulation. Include links to documentation and regulations."
No. That's what Google is for. That is not what intelligence is for. If you're building intelligence that's too cheap to meter, teach us how to use it. Be useful with it. And this worries me, because one of the looming fears I have for 2026 is that we are going to get a generation of builders, of workers, of knowledge workers trapped in the messy middle of AI adoption. Resources like this encourage that kind of behavior. They encourage the assumption that we only need to pretend this is regular software we have to adopt: I can go get the prompt pack from OpenAI, I can roll it out as a manager to my sales team or my engineering team or my product team, and I'm done and we can move on. It's a one-and-done thing. It isn't. AI is on an exponential curve.
This is a case of getting onto a moving train. You are either going to lean all the way in, learn fast, scale up your skills quickly, and keep leaning in, or you're going to get left behind. And if you learn two or three lines in a prompt and think you've got it, you're in the left-behind contingent.
You're going to be surprised when people come along and say, "I one-shotted an entire financial analysis off a screenshot, and here it is in Excel." That's a real example, by the way: I did that with Sonnet 4.5 last night. Very helpful. I actually tried it with ChatGPT-5 as well, and ChatGPT-5 did not do as good a job, which I thought was really interesting, because it's usually very good at image analysis. But that's an example of the kind of thing I tried. I learned something new about image capabilities that Claude hadn't really published well, and now I know more, and now I'm sharing it. There are hundreds of those examples. Part of why I make this channel is so that it is easier to keep up, easier to understand. Part of why I write the posts I do on Substack is so it's easier to find.

My response, by the way, to the OpenAI choice to release what is effectively a gigantic packet of lousy prompts (and it's not just me saying that; Reddit has also, rightly, been ripping this prompt pack apart as useless for people who are serious about AI) is that I am making a prompt pack that is actually useful. I'm going to put it on Substack. So if you want something organized by job family, I'm putting it together. This is just really bad.

You can't assume that all you need is a basic ability to ask questions of AI. If that were true, then one, ChatGPT-5 would be easier to prompt, which it is not, and two, you would expect people to transfer their existing Google skills to AI seamlessly, which they can't, because it's actually a very different skill set, and people have been asking questions of Google for a very long time. That's not really a new thing. I'm concerned. I'm concerned that our assumptions about what is needed for AI education do not match the pace of development.

If I were designing an upskilling curriculum for teams (and I get asked this, so I'm going to share right here what I would say), I would start by working through use cases with them. Where are the pain points in the team's existing workflow, whether that's engineers, product managers, sales, whatever it is? Where are the pain points where we see lots and lots of manual cycles and not a lot of results? Where you just grind on it?
Great, thank you: that is a candidate for talking about AI. And then we ground the whole day, the whole time we have together, in actually talking through how AI can unlock that for you.
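To make "context-rich" concrete, here is a minimal sketch of the kind of prompt the GDPR example should have been: assembled from the team's real environment (schema, regions, storage, stack) rather than typed as a generic one-liner. Every field name and value below is hypothetical, for illustration only.

```python
from textwrap import dedent

def build_compliance_prompt(regulation, regions, data_stores, stack, schema_notes):
    """Assemble a context-rich compliance prompt from real workflow facts.

    Each argument is pulled from the team's actual environment, which is
    the opposite of a generic one-liner. All example values are made up.
    """
    return dedent(f"""\
        You are assisting an engineering team with {regulation} compliance.

        Context:
        - Countries/regions where we operate: {", ".join(regions)}
        - Where personal data is stored: {", ".join(data_stores)}
        - Existing stack: {", ".join(stack)}
        - Data schema notes: {schema_notes}

        Task: Map each {regulation} obligation to the specific tables,
        services, and regions above. Flag gaps (for example, missing
        deletion paths) and propose concrete engineering changes,
        ordered by risk.
        """)

prompt = build_compliance_prompt(
    regulation="GDPR",
    regions=["EU", "US"],
    data_stores=["Postgres (eu-west-1)", "S3 exports (us-east-1)"],
    stack=["Django", "Redis", "Segment"],
    schema_notes="users.email and users.dob are PII; events table keeps raw IPs",
)
print(prompt)
```

The point is not the helper function; it's that the prompt's substance comes from the workflow audit, so the model reasons about your footprint instead of reciting generic best practices.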
And that makes it tangible. It immediately goes from the silly two- or three-line prompts I've been tearing apart here into something that is useful for your use case. Maybe your use case is that your team struggles to get classic, strong, bulleted technical requirements out of the documents product gives you. Great, that's one we could work on with AI. Maybe your team struggles with getting accurate sales pipeline predictions. Well, thanks to tool use with LLMs, you can start to get that too. Maybe you're struggling with the pace of the interview pipeline as you try to bring people on board. You can get note-taking. You can get standardized forms to review. You can get standardized question sets. There's a lot you can do with AI to lift that burden and still put the human at the center of the interview process, so you can focus on assessing candidates. Those are just off the top of my head.
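The sales-pipeline case above leans on tool use: instead of asking the model to guess numbers, you expose a function it can call against real pipeline data. Below is a minimal, provider-agnostic sketch of that pattern; the function, the tool schema, and the data are all hypothetical, and the schema merely follows the general JSON-schema shape that most LLM function-calling interfaces use.

```python
import json

# Hypothetical pipeline data, as a CRM export might produce it.
PIPELINE = [
    {"deal": "Acme renewal", "stage": "negotiation", "amount": 120_000, "win_prob": 0.7},
    {"deal": "Globex pilot", "stage": "discovery", "amount": 45_000, "win_prob": 0.2},
]

def forecast_pipeline(min_probability: float) -> str:
    """Tool the model can call: expected value of deals above a win-probability cutoff."""
    qualifying = [d for d in PIPELINE if d["win_prob"] >= min_probability]
    expected = sum(d["amount"] * d["win_prob"] for d in qualifying)
    return json.dumps({"deals": [d["deal"] for d in qualifying],
                       "expected_value": round(expected)})

# Illustrative tool description in the JSON-schema style most
# function-calling APIs expect; exact field names vary by provider.
FORECAST_TOOL = {
    "name": "forecast_pipeline",
    "description": "Expected revenue from open deals above a win-probability cutoff.",
    "parameters": {
        "type": "object",
        "properties": {
            "min_probability": {"type": "number", "minimum": 0, "maximum": 1},
        },
        "required": ["min_probability"],
    },
}

# When the model emits a call such as
# {"name": "forecast_pipeline", "arguments": {"min_probability": 0.5}},
# the application runs the function and feeds the JSON result back.
print(forecast_pipeline(0.5))  # {"deals": ["Acme renewal"], "expected_value": 84000}
```

The prediction then rests on your actual numbers, with the model doing the interpretation and write-up rather than the arithmetic.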
Every single department is full of those kinds of opportunities. And the gap is our ability to understand how quickly AI is scaling and how much capability we're leaving on the table. There is meat on the bone here that we are not touching. Most managers have no idea how much AI opportunity there is in their space. When I come in and look at it, 80 or 90% of the AI opportunity is untouched. You're sitting there talking about how Copilot can do this and that, or how ChatGPT can do this and that. Great. I'm glad you're chatting with ChatGPT. I'm glad you're using Copilot for your emails. But have you thought in workflows? Have you thought about the impact your team is delivering and worked back from that into your pain points? No? Well, maybe we should start with that and then get into training.

And so, yes, when I build prompts, when I think about what teams need, I think about how to build prompts that support workflows. Of course they are longer, and they can be longer, because AI can do more. And by the way, if you're listening to this thinking, "My org uses Copilot, Nate. ChatGPT what? Claude what?", well, one, I have news for you: Claude is now in the Office family for Microsoft. It's blessed. That is why Satya Nadella was bragging about having the best Excel model. He just put Claude in a wrapper, right? It's not that he has a magical best Excel model he's been hiding; he put Claude in a wrapper. So Claude is going to be there. But two, it is not the AI model that matters; it is the way you use it. That's a very zen thing to say, but it's true. If you have a good idea of what you want to get done with workflows, you can do a ton with Copilot. I wrote a whole guide for that.
You can do a ton with Copilot to enable your business to actually use AI. It is not just for email; it is a model you can actually employ. Conversations that get sidetracked into "my model's terrible" or "my model isn't as good as the best thinking models out there" miss the point: you can still do a lot with it. We would still be impressed if it were 2022 and that model had just launched. If Copilot had come out in 2022, everyone would have been over the moon. There's a ton you can do with it.

The gap is people. The gap is, one, people not being willing to train. That's part of why Accenture fired 11,000 people: the strong implication was that they were not willing to be trained on AI. I don't know if that's true; it's Accenture's side of the story, but that's what they said. And two, the gap is people thinking a little bit of training is enough. That is why I am concerned about what ChatGPT did, because they basically said: do you want to get started? A little bit is enough.
"You can just get started. Put these two sentences in on GDPR/CCPA and you'll be done. You'll be good." And then they did that 200 times. People on Reddit were saying the intern wrote the prompts with ChatGPT. And I'm like, "No, I think the intern wrote them by themselves, because ChatGPT would write a better prompt."

We owe it to ourselves, and people further along in AI, people at model makers, owe it to the community to produce better resources. And I know we have a gradation of talent and we need on-ramps for everybody to get into AI. Not everybody is going to sit there and listen to Andrej Karpathy talk about LLMs and just go, "Wow, this is amazing. Yes, they're stochastic people spirits." (That's an allusion to the YC 2025 presentation he gave.) No, they're not all going to do that. Everybody needs to get on at their own pace, but we need really clear progression, and we need to help people understand principles that can scale. And so, if you're going to give people simple prompts, maybe that's all right, as long as they understand: one, this is just the start and you need to do better; two, this is how it ties to your workflow and moves things forward; and three, these are the principles that scale with it. If OpenAI had taken the time to say, from their own best practices, "It's really important to establish context for the prompt, having a goal for the prompt is important, and look how we're doing that even in a simple prompt," that would be helpful. That helps you internalize these principles. If you don't do that, you're going to be stuck thinking you understand prompting and AI, and you're going to get left behind in 2026. We don't want that.

We need better prompt education. We need better AI education. We need a better understanding of where AI opportunities lie in our fields of work, so that we retain our curiosity and learn with AI. And we're just not getting that when we get resources like this. So I call them like I see them, right? Every model maker has spots they do well on and spots they don't. In this case, I don't think the new ChatGPT prompt pack moves the ball forward at all. It reads very much like a defensive gesture: they needed people buying ChatGPT for enterprise to have a link they could point to, to say they offer prompt-pack education, so that somebody ticks the box and they get the sale. That is not what education is.

So, I built some prompts, but mostly: make sure you understand why you are learning the AI you're learning, make sure you understand your use cases, and make sure you lean in on growing your AI knowledge over time. This is not a typical software adoption story. This is a new general-purpose technology, and we need to treat it like that if we are going to successfully hang on to the train while it is scaling exponentially. Sonnet 4.5 did 30 hours of continuous work and rebuilt Slack. They built their own version of Slack, and Sonnet just went and did it, wrote 11,000 lines of code, and it worked. That is what the bar is becoming. I'm not saying any of the dramatic things, that this replaces engineers or this and that, because if you work in software engineering you will see the weak spots of AI all over the place. But it's a big, big deal. It is going to change how engineers work. It's going to change how PMs work. It's going to change how product gets built. It's going to change our velocity expectations. And we need AI education that keeps that in mind. When we talk about prompting, we need to prompt with that world in mind.
And that's why I care so much about this: because we deserve better. So this is my plea. If you're a model maker, please invest in AI education for beginners, yes, but with really clear on-ramps and really clear scale-ups. Help us be able to teach this well. In the meantime, I'm doing my best to put content out there everywhere I can think of that is going to be more useful and more aligned to where AI is going. So, if you want the prompts, you know where to get them. In the meantime, have fun, enjoy AI, pick a problem space you care about, and get passionate about it, because I don't think we're going to survive if we're not passionate about it.