Chunking Errors Cost Major Deals

Key Points

  • Proper chunking of text is essential for effective retrieval‑augmented generation, as AI models rely on a few well‑chosen chunks to formulate accurate answers.
  • A fintech company’s chatbot gave a wrong indemnification answer because a contract clause was split across token‑based chunks, illustrating that poor chunking, not model intelligence, caused the error.
  • Incorrect chunk sizes lead to missed context, increase hallucinations, and inflate costs by forcing the system to retrieve and process unnecessary tokens.
  • The primary challenge for organizations implementing RAG is not choosing the embedding model but designing a chunking strategy that preserves semantic continuity and fits the overall data pipeline.

**Source:** [https://www.youtube.com/watch?v=pMSXPgAUq_k](https://www.youtube.com/watch?v=pMSXPgAUq_k)
**Duration:** 00:21:38

## Sections

- [00:00:00](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=0s) **Chunking Missteps Cost Fintech Deal** - A fintech's AI chatbot misinterpreted a contract because it split the text into improper chunks, leading to a wrong indemnification answer and a near-lost deal.
- [00:03:07](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=187s) **Chunking vs Agentic Search** - The speaker emphasizes that proper chunking dramatically reduces costs and prevents hallucinations, while acknowledging that agentic search can tackle complex, multi-source queries but does not eliminate the need for good chunking.
- [00:06:19](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=379s) **The Critical Role of Chunking** - The speaker explains how proper semantic chunking is essential for effective retrieval-augmented generation and agentic systems, outlining common pitfalls and introducing five principles for creating useful vector-based document fragments.
- [00:10:13](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=613s) **Chunking Financial Tables & Code** - The speaker outlines the intricacies of handling financial tables and source code, stressing that simple row-by-row chunking fails and that building dependency graphs and semantic "neighborhood" chunking is essential for effective retrieval and analysis.
- [00:13:26](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=806s) **AI Demands Structured Data** - The speaker emphasizes that AI forces users to clean and hierarchically organize code, spreadsheets, and financial data, using clear chunking and semantic labeling, because agentic search alone cannot compensate for messy, poorly organized information.
- [00:18:04](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=1084s) **Strategic Chunking for AI Retrieval** - The speaker stresses that treating data chunking as an afterthought harms downstream AI performance and that preserving metadata, document structure, and rearchitecting data are essential for effective retrieval.
- [00:21:25](https://www.youtube.com/watch?v=pMSXPgAUq_k&t=1285s) **Importance of Embeddings and Chunking** - The speaker stresses that while embeddings are crucial, they're not a silver-bullet solution, and proper chunking, often overlooked, remains essential to effective implementation.

## Full Transcript
I want to tell you the story of a fintech company that almost lost a major deal because they handled chunking badly. You might think, what is chunking? Chunking is the foundation of so much efficient context engineering and data work with AI. It sounds boring. I know it sounds boring, but we are going to go through it together. I'm going to lay out the key principles of chunking and embedding, and I'm going to explain why they matter. This is the number one question I get as soon as people understand that they need to put their data into a position where it's ready for the AI. They're like, "Okay, so I need something. Oh, it's probably RAG, a retrieval-augmented generation system. Well, now what?" And this is where chunking appears. Now, what is chunking? If you can't cut your text into appropriately sized chunks, you're going to get into huge trouble. And we're going to get into a lot of specifics on this. Buckle in. Get excited. Grab some coffee.

So this fintech company: their AI chatbot was asked about indemnification for an NDA. The contract said "Party A indemnifies Party B" in one chunk and "except as provided in Section whatever" in the next chunk. It broke in the middle of the sentence because they were using fixed token-count chunking. So the AI retrieved only the first chunk and confidently said Party A fully indemnifies Party B. That's the wrong answer, and it took a lot of billable hours to clean up. Here's the thing: that was not a model-intelligence problem. I've seen that happen over and over again. You get an inaccurate response and people assume it will get fixed when GPT-5 comes out. It won't, because it's a problem of context engineering. You're not chunking your data right, and it matters.
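That failure mode is easy to reproduce. Here is a minimal sketch, with an invented clause and word counts standing in for tokens, of how fixed-size chunking cuts a legal carve-out away from the sentence it qualifies:

```python
# Sketch of the fixed-size chunking failure: cutting every N words (a stand-in
# for tokens) ignores sentence boundaries, so the "except..." carve-out lands
# in a different chunk than the obligation it modifies.
# The clause text and chunk size below are illustrative, not from a real contract.

def fixed_size_chunks(text: str, chunk_size: int) -> list[str]:
    """Naive chunking: cut every `chunk_size` words, ignoring sentence boundaries."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

clause = ("Party A shall fully indemnify Party B against all claims, "
          "except as provided in Section 7.2 of this Agreement.")

chunks = fixed_size_chunks(clause, chunk_size=10)
print(chunks[0])  # ends mid-sentence: the exception is gone from this chunk
print(chunks[1])  # the carve-out sits alone, with no obligation to attach to
```

A retriever that returns only the first chunk hands the model an unqualified obligation, which is exactly the wrong indemnification answer from the story.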
I've consulted with a lot of companies implementing RAG, and this is the top question. It's not "which embedding model?" It's "how should we chunk our data?" That's what matters. It's not "how do we prevent hallucinations?" That is actually also a question of chunking; people just don't know how to ask it. So if chunking is the foundation that everything builds on, how do you think about it in the context of your overall pipeline? First, let's understand, very briefly, how retrieval-augmented generation and AI work together with chunking to get you answers. When someone asks a question of the AI, you are going to get three to five chunks back, and that is what the system is going to depend on to formulate an answer. Those chunks are retrieved by their semantic fit to the query. And so if the true answer got split across multiple chunks and part of it is missing from that three-to-five-chunk set, like I described, you're not going to get the right answer. It doesn't matter how smart the model is.

Chunking also directly impacts your costs. Bad chunking means retrieving more chunks than necessary to get the information you need. Pulling more chunks means pulling more tokens, loading more into the context window, which could overwhelm the system and, ironically, produce less accurate responses because it has so much meaningless context in there. Companies can reduce their bills to the major model makers significantly, like double-digit percentages, by getting chunking right. And again, I'm just going to say this one more time: chunking is one of your first lines of defense against models hallucinating. You think models hallucinate because the model itself is bad.
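The retrieval step described above can be sketched in a few lines. A toy bag-of-words embedding stands in for a real embedding model here; the shape of the pipeline (embed the query, score every stored chunk, hand the top k chunks to the model) is the point:

```python
# Sketch of retrieval by semantic fit. A toy bag-of-words "embedding" and
# cosine similarity stand in for a real embedding model and vector database.
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: word counts as a sparse vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    q = toy_embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, toy_embed(c)), reverse=True)
    return ranked[:k]  # these k chunks are ALL the context the model gets

chunks = [
    "Party A indemnifies Party B against third-party claims.",
    "Payment is due within 30 days of invoice.",
    "The term of this agreement is two years.",
]
top = retrieve("who indemnifies whom?", chunks, k=1)
```

If the true answer was split across chunks, nothing in this loop can repair it: the model only ever sees what lands in `top`.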
What you don't realize is: what else is the model going to do when you give it bad chunks with incomplete information? The AI fills in the gaps. That's where the hallucinations come from. And that's really on you for not chunking well. Now, some of you are going to raise your hands at this point and say, "Nate, I've heard about this thing called agentic search. It uses AI agents, so it must be cool. Why would we chunk at all?" Well, that's a fair question. Agentic search is a different technology. An AI agent can iteratively search, then read, then reason, then search again. And it seems like it could sidestep chunking altogether. For certain use cases, that's true. If you have a complex use case where you're reasoning across multiple types of data at the same time, agentic search can be really, really effective. It's great for exploratory queries. It's great for answering complex challenges like "what is the total impact of our Q3 marketing campaign across all channels?" That kind of query where you're going to have to go look at multiple tables, reason, sum, do some math, and come back with an answer. There's a lot in that one sentence. RAG is more foundational, and chunking is more foundational still. RAG is good at solving the problem of fast, economical retrieval, and chunking is how you get that retrieval to be accurate. Chunking is like eating your vegetables. People don't think of it as a super amazing technology that's sexy, but that doesn't matter. You either have accurate retrieval and low hallucinations at an economical price, or you pay a lot for agentic search that's going to be a lot slower. And those are both step-change differences.
Agentic search can be 10 or more times slower than a good RAG retrieval. And it can be 10 or more times more expensive. Do you really want to 10x your expenses just to use agentic search and sidestep the hard conversations around embeddings and chunking? Most businesses don't, when they actually sit down and pencil out the math. So this is where RAG wins. RAG wins when you need consistent responses, you need them fast, they need to be economical, and the questions relatively cleanly map to specific information you can retrieve; when the cost per query matters at scale; when you need predictable behavior; when queries are often semantic, meaning-related lookups. Those are all cases where a good chunking strategy wins, and we will go through some use cases. There are a lot of them that you can dig into. On the other hand, I do recommend agentic search to some companies. It matters and it makes a difference when you're doing multi-step reasoning with a query. It matters when information is scattered across a whole lot of documents, so retrieving the chunks would be very difficult. It matters when you need to follow references and links. It matters when the path to the answer is really unknown. Agentic search can be really helpful there. So the point is, agentic search is helpful, is useful. But interestingly enough, if you've been watching this far, you'll realize that agentic systems also rely on good chunking, because they are also involved in picking out semantic information. Where I said information is scattered across many documents: that gets easier with chunking. Following references and links: it would sure help if the references and links were in the same semantic unit of meaning as the original context.
Multi-step reasoning is easier when you have clearly labeled chunks that actually work as individual units of content. All of this stuff, this boring chunking stuff, turns out to add value not just to cheap, efficient RAG, but also to agentic search. So let's get into chunking a little bit, and we'll talk about the five principles of effective chunking. When you build a retrieval-augmented generation system, you're not just feeding the whole document into the AI and saying, "God bless, right off you go." You have to break it into pieces, into chunks, that get stored in a vector database. So your AI is taking an open-book exam, right? And someone has to tear that book, page by page, into little chunks. And if you tear it wrong, your AI is reading half a sentence. That's the picture you need to have in your head: you're giving the AI a book, but you're giving it in pieces, and you have to have it retrieve the right piece to get the answer. Bad chunking is responsible for a huge amount of RAG failures in realistic production pipelines, and fixing it can take weeks or months. I've had teams spend months figuring out chunking strategies so that they get all of the meaning in the query, and they end up iterating and iterating and iterating to get there. I would like to make that easy for you. I would like to make you more of an expert on chunking than I was when I got started. So let me lay out the five principles of effective chunking that I've seen work over and over again.

Number one: context coherence. You are doing context engineering when you chunk. Never split meaning: your AI can only work with what's in the chunk that it retrieves.
If you split "the defendant shall pay damages" into one chunk and "unless gross negligence is proven" into another, you've created a hallucination waiting to happen. Respect natural boundaries. For contracts, that would be sections and subsections. For code, it might be functions and classes; I'm going to talk a little bit more about code, it's an interesting case. For conversations, it's usually speaker turns, or it might be time windows. Every data type has semantic boundaries. Take the time to find them and use them.

Principle number two: there are three levers that you can control in chunking, and you should know how to use them. Boundaries, size, and overlap. Boundaries are where you cut: maybe by sentence, by paragraph, by section, whatever makes semantic sense. Size is how big each chunk gets. It's not an arbitrary token count; it should be a complete unit of meaning. And overlap is an insurance policy. It is often the case that your chunks are 10, 15, 20% overlapped, because you don't want breaks in your chunks that create the risks I've described, with the AI hallucinating contracts. Most people only think about size. They'll set it at a thousand tokens or whatever and call it good. You don't want to live in that world, because then you're just ripping out the pages, and in this book we're imagining the AI reading, the AI is going to be really confused, because the book is ripped in weird places, not at the chapter breaks or the section breaks. Okay, so know your levers: boundaries, size, and overlap. Use them all, and use them in a way that respects principle number one, context coherence.

Third principle: data type is going to dictate your strategy. This is where we'll get back into the code piece a little bit.
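The three levers can all be pulled in one small function. A minimal sketch, using paragraphs as the boundary, a word budget as a stand-in for a token budget, and a tail-of-previous-chunk overlap:

```python
# Sketch of the three chunking levers together:
#   boundaries - split on paragraphs (a natural semantic boundary),
#   size       - pack paragraphs up to a word budget (stand-in for tokens),
#   overlap    - seed each new chunk with the tail of the previous one.

def chunk_with_levers(text: str, max_words: int = 100, overlap_words: int = 15) -> list[str]:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    for para in paragraphs:
        words = para.split()
        if current and len(current) + len(words) > max_words:
            chunks.append(" ".join(current))
            # overlap lever: carry the tail of this chunk into the next one
            current = current[-overlap_words:] if overlap_words else []
        current.extend(words)
    if current:
        chunks.append(" ".join(current))
    return chunks

# Three 60-word paragraphs with distinct made-up words, so the overlap is visible.
text = "\n\n".join(" ".join(f"para{i}_word{j}" for j in range(60)) for i in range(3))
chunks = chunk_with_levers(text, max_words=100, overlap_words=15)
# each chunk after the first starts with the last 15 words of the previous chunk
```

A real splitter would count tokens with the embedding model's tokenizer and respect section headings, but the lever structure is the same.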
A legal contract chunks differently than source code, which everyone would agree is true, but so few people think about it that way. You can split on section markers in a legal contract, and those are typically labeled really cleanly. You want to include the full hierarchy of the contract in metadata so that it's easy to read and understand. Financial tables, this can get complex. Tables tend to have orthogonal relationships. Rows relate to columns. Cells reference other cells. Formulas depend on ranges. A simple row-by-row chunk does not work. So in a second I'm going to explain an approach that can help a bit with financial tables, and then we'll get into where to use that versus where to use agentic search. And then let's talk about source code too. To me, that is the biggest elephant in the room. Do you take real code, look at all the dependencies, and try to build a semantically meaningful RAG system with good chunking? How do you do it? In reality, if you have really clean code, which most people don't, and your functions are pure and self-contained, it is possible to have very useful semantic chunking with source code that lets you retrieve bits of code and actually operate against them. Often, though, you need to retrieve information across a really messy dependency tree. Your function might call three other functions. It references class variables that are not local, uses imported modules, whatever it may be. Your code has side effects. Everybody's does. So the best way to think about it is: if you're going to find use and value in chunking code, take the time to build dependency graphs.
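For Python code, a rough, one-hop sketch of such a dependency graph (and of the "function plus what it calls" chunks it enables) can be built with the standard-library `ast` module; a real system would walk the graph transitively and handle methods, imports, and class attributes too:

```python
# Rough sketch: parse a module, record which module-level functions each
# function calls, and emit one chunk per function that also carries the source
# of everything it calls (one hop only; real systems would walk transitively).
import ast

def neighborhood_chunks(source: str) -> dict[str, str]:
    tree = ast.parse(source)
    funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}
    chunks = {}
    for name, node in funcs.items():
        # dependency edges: direct calls from this function to other module functions
        callees = {c.func.id for c in ast.walk(node)
                   if isinstance(c, ast.Call)
                   and isinstance(c.func, ast.Name)
                   and c.func.id in funcs}
        callees.discard(name)  # don't duplicate a recursive function's own body
        pieces = [ast.get_source_segment(source, funcs[d]) for d in sorted(callees)]
        pieces.append(ast.get_source_segment(source, node))
        chunks[name] = "\n\n".join(pieces)
    return chunks

module = '''
def tax(amount):
    return amount * 0.2

def total(amount):
    return amount + tax(amount)
'''
chunks = neighborhood_chunks(module)
# chunks["total"] carries both `total` and the `tax` it depends on
```

The payoff is exactly the point made above: a retrieved chunk for `total` is self-contained, instead of referencing a `tax` the model never sees.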
Take the time to include all called functions in your metadata. Consider something like neighborhood chunking, where you include the function plus everything that it's going to call in one chunk. And if the code is really highly coupled, you might need to chunk an entire class or an entire module of code together. The best strategy, if you really want clean, semantically meaningful code, is sometimes to refactor it. And by the way, AI is good at that. That's a separate conversation, but AI can be quite good at that. Bad code architecture leads to a huge amount of difficulty with chunks. And that, by the way, is why a lot of organizations trying to figure out how to get their code into AI are employing agentic search. Agentic search lets them avoid immediately refactoring their bad code; instead they burn tokens and have an agentic search reason across a large, messy codebase. Is it perfect? No. Is it expensive? Yes. Is it something that may be a way forward for them, because they know chunking is going to be hard here? Also yes.

Now let's dive a little bit more into Excel. I think this deserves its own section, because I see a lot of disasters here. Excel data isn't just rows and columns; you're basically preserving a web of relationships. The marketing dashboard might have a time series that runs horizontally, categories that run vertically, formulas that reference various ranges, etc. You can't chunk it row by row and expect it to work. So here are a few ways people approach this. One, again, you can go back to agentic search. Sometimes that happens. Or, if you really want to get useful semantic meaning, think about the natural semantic chunks.
And so you could take a particular time window, like Q3 2024, and chunk that, including all categories. For formula-heavy sheets, you may want to trace dependencies, build a map, and chunk calculable units together. If cells A1 to A10 feed a summary in B15, they would all be in the same chunk. This also takes work. Again, the AI is norming us, pushing us toward cleaner code and cleaner spreadsheets here, and that is absolutely going to be a trend in the workplace. If you're using pivot tables, if you're using summaries, you want to duplicate the summary in each chunk, or create a very clear hierarchy so that detailed chunks can reference a summary chunk. Again, you want to be clear about what each thing does. And sometimes, if you really want the semantic meaning, you will need to extract and convert the pivot table, or whatever the Excel sheet is, into natural language. I don't see that happen very often. If it gets that bad, most people go back to agentic search, because at least it works with the messy data a little bit. Even so, if you have a lot of complex financial data, I would recommend that you look at the semantic borders and meaning of your financial data, because, as we discussed at the top, agentic search isn't a silver bullet. It still needs good chunking strategies, if you have them, to retrieve stuff effectively. In a sense, one of the things I want you to take away here is that there's no such thing as a free lunch. There is no way to easily and intuitively get away with not chunking well, and agentic search is not a get-out-of-jail-free card on that one.
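The time-window idea above can be sketched concretely: cut the table into one chunk per period rather than per row, and repeat the header in every chunk so each one stands alone. The column names and figures below are purely illustrative:

```python
# Sketch of time-window chunking for tabular data: one chunk per period,
# with the header row duplicated into every chunk so each is self-contained.
# Column names and values are invented for illustration.
import csv
import io
from collections import defaultdict

def chunk_by_period(csv_text: str, period_col: str) -> dict[str, str]:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    header = list(rows[0].keys())
    groups = defaultdict(list)
    for row in rows:
        groups[row[period_col]].append(row)
    chunks = {}
    for period, group in groups.items():
        lines = [",".join(header)]  # header repeated in every chunk
        lines += [",".join(r[h] for h in header) for r in group]
        chunks[period] = "\n".join(lines)
    return chunks

table = """quarter,channel,spend
Q3 2024,search,1200
Q3 2024,social,800
Q4 2024,search,1500
"""
chunks = chunk_by_period(table, "quarter")
# chunks["Q3 2024"] holds all Q3 rows, under their own header row
```

A retriever that pulls the "Q3 2024" chunk gets every channel for that quarter at once, instead of isolated rows with no column context.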
You have to wrestle with the challenge of your own messy data, and AI forces you to confront it if you want the benefits of reasoning with machine intelligence across your data sets. Let's go to the fourth principle: you want to size for Goldilocks outcomes. If your chunks are too small and lack context, your AI at best will say "I don't know" a lot. If they're too big, you're wasting a lot of tokens, your answers will be really unfocused, and you're paying more. So you want to find the sweet spot that reflects both the semantic meaning in the data and the natural-language answer you want. Maybe for legal clauses that ends up being 750 tokens, somewhere between 500 and 1,000, something like that. Maybe for technical documents it's longer, because they're more complex. I know, the irony: technical docs could be more complex than legal docs. So maybe it's closer to a thousand all the time. For coupled code, you can have really large chunks, where entire classes or modules are in there; they can get into the thousands of tokens. Time-series data, if you're pulling a full period with all context, can again be a somewhat sizable piece of context; it could run over a thousand tokens. And the key is not "Nate said it was going to be X tokens, so that's what we use." I've been saying the entire video: don't use an arbitrary token boundary. Go for semantic meaning. You want to build an evaluation set, and you want to test those evaluation questions against various chunking strategies until you find one that works. Evals win. Accuracy is maximized when you try different chunking strategies against a common evaluation set of questions. All right, fifth and final principle: remember overlap.
Remember, overlap gets underused so much. Overlap means the end of chunk A appears at the beginning of chunk B. And this matters because important information can still span boundaries sometimes. No matter how good your semantic splits are, they may not be perfect. And if you have a big enough data set, you may not be able to hand-check every split. So you have to have some overlap as insurance. One of the catches is, if you have orthogonal data like spreadsheets, which way do you overlap? For time series, maybe it's a temporal overlap, where you include a summary from the previous period. For categorical data, is it a category overlap? It's one of the things that gets fraught when you get into the details. And this is why I'm making this video: I want to surface and sunlight these conversations, so that everyone knows we're all having this discussion. It's a big question, and it's easier to have it as an AI community, so that we can actually review it effectively. I have looked at documentation, blogs, and how-to-get-started guides on chunking from a lot of different sources, and for the most part it's people shilling their own solutions. I don't have a solution to sell. I'm just trying to build best practices into the community here, so that we have an easier job building effective data sets that we can retrieve against and reason against with AI. So your biggest leverage point in building any kind of retrieval-augmented generation system, in context engineering, is getting the context right. And that comes down to chunking strategies and embeddings, which is why we're drilling on this so hard.
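The eval-driven advice under principle four reduces to a simple harness: fix a set of question-to-expected-answer pairs, run each candidate chunking strategy through the same retriever, and keep the strategy with the best hit rate. The keyword retriever and the tiny eval set below are stand-ins for your real retriever and eval questions:

```python
# Sketch of comparing chunking strategies against a common eval set.
# `naive_retrieve` is a stand-in for a real embedding-based retriever;
# the eval pair and chunk contents are invented for illustration.

def naive_retrieve(question: str, chunks: list[str], k: int) -> list[str]:
    # rank chunks by shared lowercase words with the question
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def hit_rate(eval_set, chunks, retrieve, k=3) -> float:
    """Fraction of eval questions whose expected text lands in the top-k chunks."""
    hits = 0
    for question, expected in eval_set:
        top = retrieve(question, chunks, k)
        hits += any(expected in chunk for chunk in top)
    return hits / len(eval_set)

def compare_strategies(eval_set, strategies, retrieve, k=3):
    """strategies: name -> chunk list produced by that chunking strategy."""
    scores = {name: hit_rate(eval_set, chunks, retrieve, k)
              for name, chunks in strategies.items()}
    return max(scores, key=scores.get), scores

eval_set = [("who indemnifies whom", "Party A indemnifies")]
strategies = {
    "fixed": ["Party A", "indemnifies Party B"],       # clause split in half
    "semantic": ["Party A indemnifies Party B."],      # clause kept whole
}
best, scores = compare_strategies(eval_set, strategies, naive_retrieve, k=1)
```

With a real eval set of a few dozen questions, this same loop is how "evals win" plays out: the strategy choice becomes a measured decision rather than a guessed token count.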
So if that's you, if you've been struggling with chunking, if you don't know where to get started, I want you to run a little audit on your current strategy. Are you using flat token or character splits? Do you have no overlap between your chunks? Are you using the same strategy for all your data types? Are you not preserving metadata? Are you splitting in ways that ignore document structure? These are all common issues. Is financial data chunked with no thought for relationship preservation? The fixes aren't simple. You are wrestling with the structure of the data itself and how it expresses semantic meaning. I am not here to pretend it's an easy solve, but it is the way forward toward truly transformative AI retrieval. It is the difference between "well, the AI kind of makes sense when it looks at our big pile of messy data" and "wow, that is on point, that is correct, we use it all the time." If you want that latter world, you cannot treat chunking as an afterthought. It's the foundation for AI performance. If you have a large data set, bad chunking poisons everything downstream, whether that's RAG performance or prompt engineering or model upgrades or even agentic search. You need to start with your highest-value and most problematic data type and just run into that pain, run into the spaghetti nature of that data, map it out, and figure out how you're going to deal with the chunking side. Apply those five principles and fix it. Sometimes what you are really wrestling with is the fact that you made data-architecture decisions that are very difficult to undo. And so it's possible that you will need to rearchitect your data sets for AI. And I see companies being willing to do it, because they see the benefits of AI.
They were not willing to do it for the cloud, right? They were not willing to do it for a SaaS company and their SaaS tool. They will do it for AI, and they will do it because they see the benefits. Getting data architecture right matters. If you are in the data-architecture space as a specialist, someone who designs good data architectures, you have a sweet job right now. People need your expertise. Everybody on LinkedIn likes to brag about the step change in value you get when you put your data into AI. Not everybody on LinkedIn is actually telling the truth, but we all know the value really is there; it's just only there when you put the hard work in. And this, by the way, this whole conversation, the fact that I have to make this video, is why it is so difficult to just stamp out solutions for companies. Companies all have different flavors of painful data. All have different messes. Every data set is painful in its own way. And so you need to be able to take these principles and figure out, in your data environment, what chunking strategies make sense. That's why I've leaned on principles so much: because I think that, in the end, they are the only thing that really scales across really complex corporate data sets. I looked at a bunch, and these are the things that keep standing out. You've got to maintain context coherence. You have to be aware of boundaries, size, and overlap: your three levers. Number three, you should recognize that data type dictates your strategy. We talked about Excel. We talked about code. We talked a little bit about legal, and about conversations. You think about the data type. And then make sure that you are actually getting your size right.
So, size for Goldilocks outcomes: that's principle number four. Don't size chunks too big or you're going to get vague, unfocused answers. Don't size them too small or you're going to get hallucinated answers from missing context. And then the fifth principle: remember overlap. Remember overlap. Remember overlap. There you go. Those are the principles. That is why chunking and embeddings matter. I don't want you to walk away from this thinking there is a handy silver-bullet alternative. Chunking matters, and we don't talk about it enough. Cheers.