Prompt Engineering: Zero vs Few‑Shot

Key Points

  • The way you prompt a Large Language Model (LLM) dramatically affects the relevance and accuracy of its answers.
  • Using a simple “zero‑shot” prompt (just a single question) can cause misinterpretations, especially with ambiguous terms like “bank.”
  • “Few‑shot” prompting—supplying one or more example inputs and outputs—clarifies the intended context, improves answer quality, and can steer the model toward a specific response format (e.g., HTML).
  • While a well‑crafted zero‑shot prompt can sometimes work, few‑shot prompting also helps the model reason through tasks and produce consistent, structured results.
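The zero‑shot vs. few‑shot contrast above can be sketched as plain prompt strings. This is a minimal illustration only: no model is called, and the exact wording is illustrative, not the video's verbatim prompts.

```python
# Minimal sketch of the two prompt styles. No LLM is called here;
# these are just the prompt strings you would send to a model.

# Zero-shot: a single question with no context. An ambiguous word like
# "bank" leaves the model free to answer about river banks instead.
zero_shot_prompt = 'Question: "Explain the different types of banks."'

# Few-shot: one worked example first, which anchors "bank" to finance.
few_shot_prompt = "\n".join([
    'Question: "What is the primary function of a bank?"',
    'Answer: "A bank\'s primary function is to accept deposits, '
    'provide loans, and offer other financial services."',
    'Question: "Explain the different types of banks."',
])

print(few_shot_prompt)
```

The few‑shot version asks the same final question; the preceding example is what disambiguates it.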

Full Transcript

**Source:** [https://www.youtube.com/watch?v=T-w_5T-j-dA](https://www.youtube.com/watch?v=T-w_5T-j-dA)
**Duration:** 00:07:42

Sections

  • [00:00:00](https://www.youtube.com/watch?v=T-w_5T-j-dA&t=0s) **Zero-Shot Prompting Pitfalls** - The speaker shows how a single, context‑less prompt can cause LLMs to misinterpret ambiguous terms (e.g., “bank”), leading to irrelevant answers and highlighting the need for better prompting techniques.
  • [00:03:20](https://www.youtube.com/watch?v=T-w_5T-j-dA&t=200s) **Chain‑of‑Thought Prompting Improves Reasoning** - Demonstrates how adding a cue like “let’s think step by step” to few‑shot or zero‑shot prompts enables large language models to correctly solve arithmetic word problems that previously yielded incorrect answers.
  • [00:06:26](https://www.youtube.com/watch?v=T-w_5T-j-dA&t=386s) **Improving LLM Responses with Chain‑of‑Thought** - The speaker explains how chain‑of‑thought and few‑shot prompting enhance explainable AI by encouraging models to consider multiple perspectives, leading to more accurate, comprehensive, and well‑reasoned answers.

0:00 If you're not getting the responses you want from LLMs,
0:05 or "Large Language Models", like the model that powers ChatGPT,
0:12 I may know what's wrong.
0:14 It might be you.
0:16 No, no, no.
0:16 Hear me out.
0:17 Look, you see, the way that we prompt these large language models is very important.
0:25 So prompting has a significant impact on the quality of the response that the LLM will generate.
0:31 Let's take a look at an example.
0:33 I'm working on a homework assignment for my Econ 101 class, and I need some help.
0:38 So, I issue the following prompt to a large language model.
0:42 Question: "explain the different types of banks."
0:45 The LLM responds with this:
0:48 "Banks along a river can take various forms depending on whether they are natural or artificial."
0:52 Whoa ho, hang on there!
0:55 Here I am trying to understand the difference between a credit union and an investment institution,
1:00 and it's talking to me about river banks!
1:03 This is an example of a particular type of prompt,
1:07 and that is called "zero-shot" prompting.
1:13 You're providing the model with a single question or instruction
1:17 without any additional context, examples, or guidance.
1:21 The model is expected to understand and answer the prompt without that context,
1:26 and to do so, it relies solely on its preexisting knowledge and its ability to generalize from that knowledge
1:32 to generate a relevant and accurate response.
1:34 And as you can see, it can lead to some suboptimal responses.
1:39 Now, "bank" is a homograph.
1:42 It has multiple meanings.
1:44 One method to clear that ambiguity up is to employ a different type of prompting,
1:50 called "few-shot" prompting.
1:55 Now, here is an example of few-shot prompting.
1:59 So, we've got a question: "what is the primary function of a bank?"
2:02 Answer: "A bank's primary function is
2:04 to accept deposits, provide loans and offer other financial services to individuals and businesses."
2:09 Question: "explain the different types of bank."
2:11 With few-shot prompting, the model is provided with one or more examples
2:15 to help guide its understanding of the task at hand.
2:19 By providing an example related to financial institutions,
2:22 the LLM is more likely to understand that you are asking about types of bank in the context of finance...
2:29 ...rather than the stream at the bottom of your garden.
2:32 Now, in this example, we could probably just have used a better zero-shot prompt.
2:37 Like, "explain the different types of banking financial institutions."
2:42 That probably would have worked.
2:44 But few-shot prompting has other advantages too.
2:48 It can help an LLM understand the expected format a response should take.
2:54 Like this: so we've got question: "create a title for my web page", then a title tag with "All of Our Banks".
3:00 Then, question: "create a heading for my article."
3:03 Then we have an H1 tag pair with "Types of Banks" inside those tags,
3:08 and then we say, question: "list the types of banks."
3:10 And here the LLM may derive that we are looking for answers in HTML notation
3:16 and respond accordingly, like this.
3:20 Now there's another way that few-shot prompting can help, and that is to aid reasoning.
3:25 So let's take an example from the paper "Large Language Models are Zero-Shot Reasoners",
3:30 which was written by the University of Tokyo and Google Research.
3:35 And they issued this zero-shot prompt to a large language model.
3:40 Question: "A juggler can juggle 16 balls.
3:43 Half of the balls are golf balls, and half of the golf balls are blue.
3:47 How many blue golf balls are there?"
3:50 The answer is...
3:52 now, can you figure this out?
3:54 It's not too tricky, but it was for the LLM.
3:58 "8."
4:00 Wrong!
4:01 So next in the paper they tried a few-shot prompt.
4:04 So we start off with a sample question and a sample completion.
4:08 So, question: "Roger has five tennis balls.
4:11 He buys two more cans of tennis balls.
4:13 Each can has three tennis balls.
4:15 How many tennis balls does he have now?"
4:17 Answer: "The answer is 11."
4:19 Question: "A juggler can juggle 16 balls..." and so forth.
4:23 Now we've shown the model what a right answer looks like by applying addition and multiplication to a sentence.
4:29 So, did the few-shot prompt get us the right answer?
4:36 Eight, again!
4:36 No!
4:38 But by making a slight change, we can improve the reasoning of the LLM and get the right answer.
4:46 And we can apply that change to either few-shot prompts or to zero-shot prompts.
4:52 So, what is this mysterious addition?
4:57 Well, it's called "chain of thought", or CoT,
5:07 and to invoke it, just add wording such as
5:10 "let's think step by step."
5:13 Effectively, we've asked the LLM to document its thinking.
5:18 We're asking to see its chain of thought.
5:21 And here's what we get.
5:22 So the LLM responds to us with: "There are 16 balls in total.
5:27 Half of the balls are golf balls.
5:28 That means there are eight golf balls.
5:30 Half of the golf balls are blue.
5:31 That means there are four blue golf balls."
5:34 Four blue golf balls!
5:36 That is the right answer!
5:39 Now, this particular test was applied to the InstructGPT model, which is a couple of years old.
5:46 Newer models like GPT-4
5:48 can invoke mathematical reasoning without the "let's think step by step" chain-of-thought prompting.
5:53 But chain-of-thought prompting remains a valuable tool in prompt engineering for a number of reasons.
6:02 Reason number one is that it encourages the model to provide a more detailed
6:07 and, specifically, a more transparent response,
6:13 and an explanation of that response and its reasoning process.
6:17 And this helps users better understand how the model arrived at a particular answer,
6:22 making it easier for them to evaluate the correctness and relevance of the response.
6:27 And that's all an important part of XAI, or "Explainable AI".
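The chain‑of‑thought cue described here amounts to appending one sentence to an existing prompt. A sketch, with no model call; the helper name `add_cot` is my own, not from the video:

```python
# Sketch: turning any prompt into a chain-of-thought prompt by appending
# the cue from the video. `add_cot` is a hypothetical helper name.

COT_CUE = "Let's think step by step."

def add_cot(prompt: str) -> str:
    """Append the chain-of-thought cue to a prompt."""
    return f"{prompt}\nAnswer: {COT_CUE}"

juggler = (
    'Question: "A juggler can juggle 16 balls. '
    "Half of the balls are golf balls, and half of the golf balls are blue. "
    'How many blue golf balls are there?"'
)
cot_prompt = add_cot(juggler)
print(cot_prompt)

# The arithmetic the cue asks the model to walk through:
golf_balls = 16 // 2               # half the balls are golf balls -> 8
blue_golf_balls = golf_balls // 2  # half of those are blue -> 4
```

Printing `cot_prompt` shows the juggler question followed by the "let's think step by step" cue, matching the shape of prompt used in the paper's experiment.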
6:34 And then reason number two for chain-of-thought prompting
6:37 is that it can be used to improve the quality of a model's response
6:42 by encouraging it to consider alternative perspectives
6:49 or different approaches.
6:51 And by asking the model to think through various possibilities, it can generate more well-rounded and comprehensive answers,
6:58 which can be particularly valuable when dealing with open-ended or subjective questions.
7:03 Look, ultimately, few-shot prompting and chain-of-thought prompting
7:07 are powerful techniques that can be employed to improve the quality of responses generated by large language models.
7:14 By providing the model with additional context, examples or guidance,
7:19 users can help the model better understand the task at hand
7:21 and generate more accurate, relevant and well-reasoned responses.
7:27 And, not to mention, keep better track of how many golf balls we're juggling.
7:33 If you have any questions, please drop us a line below.
7:36 And if you want to see more videos like this in the future, please like and subscribe.
7:41 Thanks for watching.
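To recap the format‑steering idea from the middle of the transcript, a few‑shot prompt whose example answers are HTML can nudge the model to answer the final question in HTML too. A sketch; the question and answer wording is illustrative, and no model is called:

```python
# Sketch: few-shot examples whose *answers* are HTML, steering the model
# toward HTML-formatted output for the final question. Illustrative only.

examples = [
    ("Create a title for my web page.", "<title>All of Our Banks</title>"),
    ("Create a heading for my article.", "<h1>Types of Banks</h1>"),
]

lines = []
for question, answer in examples:
    lines.append(f'Question: "{question}"')
    lines.append(f"Answer: {answer}")
lines.append('Question: "List the types of banks."')

format_steering_prompt = "\n".join(lines)
print(format_steering_prompt)
```

From the two HTML‑tagged answers, the model may infer that the list of bank types should also come back wrapped in HTML tags.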