Prompt vs Context Engineering Explained

Key Points

  • Prompt engineering is the craft of designing the exact input text—including instructions, examples, and formatting cues—that steers an LLM’s behavior, whereas context engineering is the broader system‑level practice of assembling all the data, tools, memory, and documents the model sees during inference.
  • The transcript illustrates the difference with “Agent Graeme,” a travel‑booking AI that can misinterpret a vague request (booking a hotel in “Paris” without specifying France), a failure that could be mitigated by richer context such as calendar access or conference lookup tools.
  • Effective prompt engineering often uses role assignment (telling the model who it is, e.g., “senior Python security reviewer”) and few‑shot examples (showing 2–3 input/output pairs) to shape output style, tone, and structure.
  • Context engineering involves supplying the model with auxiliary information like company travel policies, JSON configuration files, or external APIs so the agent can make accurate, policy‑compliant decisions beyond what the prompt alone conveys.

Full Transcript

# Prompt vs Context Engineering Explained

**Source:** [https://www.youtube.com/watch?v=vD0E3EUb8-8](https://www.youtube.com/watch?v=vD0E3EUb8-8)
**Duration:** 00:07:39

## Sections

- [00:00:00](https://www.youtube.com/watch?v=vD0E3EUb8-8&t=0s) **Prompt vs Context Engineering Illustrated** - The speaker contrasts prompt engineering with context engineering by showcasing a travel‑booking AI agent that misbooks a hotel due to an ambiguous location, highlighting how incorporating broader context (documents, tools, memory) can resolve such errors.
- [00:03:10](https://www.youtube.com/watch?v=vD0E3EUb8-8&t=190s) **Prompt Engineering and Agentic AI** - The speaker explains chain‑of‑thought prompting, constraint setting, and context engineering, then outlines how agentic systems use short‑term summarization, long‑term vector databases, and state management to operate effectively.
- [00:06:21](https://www.youtube.com/watch?v=vD0E3EUb8-8&t=381s) **Context and Prompt Engineering Overview** - The passage explains how context engineering shapes tool interfaces and dynamic prompts, combining runtime data, memory, and RAG retrievals with static instructions, to guide LLMs toward correct usage and more effective answers.

## Full Transcript
[0:00] I think by now most of us are familiar with the term prompt engineering. It's the process of crafting the input text used to prompt a large language model, including instructions and examples and formatting cues. It's what steers the LLM's behavior and output. Context engineering, on the other hand, is the broader discipline of programmatically assembling everything the LLM sees during inference. That includes prompts, but also retrieved documents and memory and tools: everything needed to deliver accurate responses.

[0:34] So, to demonstrate the difference, let me introduce you to an agentic AI model that I like to call Graeme. Secret Agent Graeme. Agent Graeme specializes in travel booking. If I send Graeme this prompt, "Book me a hotel in Paris for the DevOps conference next month," well, the agent responds with "Sure thing. The Best Western Paris Inn has great wifi and free parking. It's booked." Cool. The only trouble is that the Best Western is located in Paris, Kentucky, and that DevOps conference is in Paris, France.

[1:09] Now, you could argue that's a failing of prompt engineering: I wasn't specific about the location. But it could also be seen as a failing of context engineering, because if Agent Graeme were just a little smarter, they could have used a tool to check my calendar or look up the conference online to find the right location.

[1:30] So let's try again with a follow-up prompt: "My conference is in Paris, France." €900 a night. Ritz booked. Champagne. Breakfast included. Well, wish me luck getting that one approved through my company expense reimbursement system. But Graeme can't really be blamed for that one, because I didn't provide sufficient context. I should have made my company's travel policy available to the agent.
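The travel-policy fix the speaker describes can be sketched in code: load a machine-readable policy and apply it as a guardrail before the agent books anything. This is a minimal illustration; the JSON structure, field names, and amounts are invented for the example, not taken from the video.

```python
import json

# Hypothetical company travel policy, the kind of JSON file the
# transcript imagines: maximum permissible hotel rates per city.
POLICY_JSON = """
{
  "max_hotel_rate_eur": {"Paris, France": 250, "default": 150},
  "requires_approval_above_eur": 250
}
"""


def max_rate(policy: dict, city: str) -> float:
    """Look up the nightly cap for a city, falling back to the default."""
    rates = policy["max_hotel_rate_eur"]
    return rates.get(city, rates["default"])


def policy_check(policy: dict, city: str, nightly_rate_eur: float) -> str:
    """Return a verdict the agent can add to its context before booking."""
    cap = max_rate(policy, city)
    if nightly_rate_eur <= cap:
        return "within policy"
    return f"over policy cap of €{cap}/night - needs approval"


policy = json.loads(POLICY_JSON)
print(policy_check(policy, "Paris, France", 900))  # the €900 Ritz booking
print(policy_check(policy, "Paris, France", 220))  # a compliant option
```

With this check in the agent's context, the €900 Ritz booking would be flagged before it happens rather than after the expense report bounces.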
[1:58] Perhaps there's a JSON file specifying things like maximum permissible hotel rates for the area. So, prompt engineering is the craft of wording the instruction itself, and context engineering is the system-level discipline of providing the model with what it needs to plausibly accomplish the task.

[2:16] Let's take a look at these two terms a bit closer, starting with the key techniques that make prompt engineering effective. Now, this is part art, part science, but there are several prompt engineering techniques that are now widely adopted.

[2:32] Take, for example, role assignment. This tells the LLM who it should be. "You are a senior Python developer reviewing code for security vulnerabilities" produces vastly different outputs than a more generic code review request. The model adopts the expertise, the vocabulary, and the concerns of the persona we asked for.

[2:58] Another good technique comes down to few-shot examples. This is show, don't just tell. Providing two or three examples of input/output pairs helps the model understand your exact format and style requirements. If you want JSON output with specific field names, show it. Show it in the examples.

[3:23] Now, before we had reasoning models trained with reinforcement learning, a pretty popular prompt engineering technique was called CoT, or chain-of-thought prompting. This forces the model to show its work. Adding "let's think step by step" or "explain your reasoning" prevents the LLM from jumping to conclusions, and it's particularly powerful for complex reasoning tasks.

[3:50] Then another technique is called constraint setting. Here you define boundaries explicitly: "limit your response to only 100 words" or "only use information from the provided context."
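The techniques above (role assignment, few-shot examples, constraint setting) can be combined into one assembled prompt string. A minimal sketch; the example input/output pairs and JSON field names are invented for illustration:

```python
# Role assignment: tell the model who it should be.
ROLE = ("You are a senior Python developer reviewing code "
        "for security vulnerabilities.")

# Few-shot examples: show, don't just tell. Each pair demonstrates
# the exact output format and field names we want.
FEW_SHOT = [
    ('query = "SELECT * FROM users WHERE id=" + user_id',
     '{"issue": "SQL injection", "severity": "high"}'),
    ('subprocess.run(cmd, shell=True)',
     '{"issue": "shell injection risk", "severity": "medium"}'),
]

# Constraint setting: define boundaries explicitly.
CONSTRAINT = "Respond with JSON only, using the fields shown in the examples."


def build_prompt(code_snippet: str) -> str:
    """Assemble role + few-shot examples + constraint + the new input."""
    parts = [ROLE, ""]
    for snippet, verdict in FEW_SHOT:
        parts += [f"Code: {snippet}", f"Review: {verdict}", ""]
    parts += [CONSTRAINT, "", f"Code: {code_snippet}", "Review:"]
    return "\n".join(parts)


print(build_prompt('os.system("rm -rf " + path)'))
```

The point is structural: the persona primes vocabulary and concerns, the pairs pin down the format, and the constraint keeps the answer inside it.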
[4:04] That helps prevent the model from going off on tangents. Context engineering, in turn, is what builds dynamic agentic systems and orchestrates the entire agentic environment. Let's take a look at some of its components.

[4:18] First of all, agentic AI needs memory, and memory management can be thought of in two forms. There's short-term memory, which might involve summarizing long conversations to stay within context windows so that past conversations are not forgotten. And then there's long-term memory, which uses vector databases to retrieve things like user preferences, past trips, and learned patterns.

[4:48] Then there is state management. This asks: where are we in a multi-step process? If an agent is booking a complete trip (the flight, the hotel, the ground transportation, all of it), the agent needs to maintain state across these operations. Did the flight booking succeed? What's the arrival time for scheduling the airport transfer? Stuff like that. State ensures that the agent doesn't lose context mid-task.

[5:18] Another important component is retrieval-augmented generation, or RAG, which connects an agent to dynamic knowledge sources. RAG uses hybrid search, which combines semantic and keyword matching based on context. So, when retrieving your company's travel policy, RAG isn't returning the entire travel policy document; a lot of that document is simply irrelevant to the context. Instead, it's picking out the relevant sections and the relevant exceptions and returning only those contextually relevant parts back to the agent.

[5:58] And agents also need access to tools so they can actually go out and do stuff. LLMs by themselves can't check real databases or call APIs or execute code.
[6:12] It's tools that bridge that gap. A tool might query a SQL database, fetch live pricing data, or deploy infrastructure. Where context engineering comes in is in defining the interfaces that guide the LLM toward correct usage: tool descriptions specify what the tool does, when to use it, and what constraints apply.

[6:35] And prompt engineering? Well, actually, we should include that as well, because it is also part of context engineering. You can take a base written prompt like "analyze security logs for anomalies," and then at runtime inject the prompt with current context, like recent alerts and known false positives. All of those variables in the prompt get populated from the state and the memory and the RAG retrievals, so the final prompt might be 80% dynamic content and 20% static instructions.

[7:12] So, prompt engineering gives you better questions. Context engineering gives you better systems when you combine them properly.

[7:20] Hotel booked. Paris, France. Under budget. Near the venue. Excellent. Thank you, Agent Graeme.

[7:27] Pending approval from your manager, HR, and finance. Estimated approval time: six to eight weeks.

[7:33] Uh, the conference is in two weeks. Have you tried prompt engineering your manager?
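The dynamic prompt described at [6:35], static instructions with variables populated at runtime from state, memory, and retrieval, can be sketched like this. The template fields and alert strings are invented for illustration:

```python
# Static instructions (the ~20%): a fixed template with named slots.
STATIC_TEMPLATE = (
    "Analyze security logs for anomalies.\n"
    "Recent alerts:\n{recent_alerts}\n"
    "Known false positives (ignore these):\n{false_positives}\n"
)


def render_prompt(recent_alerts: list[str],
                  false_positives: list[str]) -> str:
    """Fill the template with runtime context (the ~80%), which in a real
    system would come from state, memory, and RAG retrievals."""
    return STATIC_TEMPLATE.format(
        recent_alerts="\n".join(f"- {a}" for a in recent_alerts),
        false_positives="\n".join(f"- {f}" for f in false_positives),
    )


prompt = render_prompt(
    recent_alerts=["3 failed logins from 10.0.0.7", "port scan on host db-1"],
    false_positives=["nightly backup job touching /etc/shadow"],
)
print(prompt)
```

The instructions never change; only the injected context does, which is exactly the division of labor the speaker describes between the static prompt and the engineered context around it.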