
RAG vs Fine-Tuning Explained

Key Points

  • Retrieval‑augmented generation (RAG) lets a pre‑trained LLM pull up‑to‑date, domain‑specific documents (e.g., PDFs, spreadsheets) at query time and augment the prompt, reducing hallucinations without any model retraining.
  • Fine‑tuning involves actually re‑training the base LLM on a targeted corpus so the model internalizes specialized knowledge, making it natively proficient in a particular domain.
  • RAG is ideal when you need quick, accurate answers from confidential or proprietary data and want to keep the underlying model unchanged, while fine‑tuning is better for deep, permanent specialization where inference speed and cost matter.
  • Choosing between them depends on factors like the freshness of required information, the volume and sensitivity of your data, development resources, and whether you prefer on‑the‑fly augmentation (RAG) or a permanently tuned model (fine‑tuning).
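The retrieve-augment-generate loop described in the first bullet can be sketched in a few lines. Everything below is illustrative: the tiny corpus, the word-overlap scorer (standing in for a real embedding-based retriever), and the prompt template are this sketch's assumptions, not any particular product's API.

```python
# Minimal RAG sketch: retrieve relevant documents from a small corpus,
# then augment the user's prompt with them before sending it to an LLM.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def augment_prompt(query: str, context_docs: list[str]) -> str:
    """Prepend retrieved context so the model answers from it, not memory."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Spain won Euro 2024, beating England 2-1 in the final.",
    "The company travel policy caps hotel rates at 200 euros per night.",
    "Euro 2024 was hosted by Germany in June and July 2024.",
]

docs = retrieve("Who won Euro 2024?", corpus)
prompt = augment_prompt("Who won Euro 2024?", docs)
print(prompt)  # this augmented prompt is what would be sent to the LLM
```

A production system would replace `score` with vector similarity over embeddings and the `print` with an actual model call, but the shape of the pipeline (retrieve, augment, generate) stays the same.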

Full Transcript

**Source:** [https://www.youtube.com/watch?v=00Q0G84kq3M](https://www.youtube.com/watch?v=00Q0G84kq3M)
**Duration:** 00:08:51

## Sections

- [00:00:00](https://www.youtube.com/watch?v=00Q0G84kq3M&t=0s) **Choosing RAG vs Fine‑Tuning**: An overview of how Retrieval‑Augmented Generation and fine‑tuning each strengthen large language models, their ideal use cases, and guidance for selecting the right approach in enterprise applications.
[0:00] Let's talk about RAG versus fine-tuning. They're both powerful ways to enhance the capabilities of large language models, and today you're going to learn about their strengths, their use cases, and how to choose between them. One of the biggest issues in generative AI right now is, one, enhancing the models, but also, two, dealing with their limitations. For example, I recently asked my favorite LLM a simple question: who won Euro 2024? While this might seem like a simple query, there's a catch: because the model wasn't trained on that specific information, it can't give me an accurate or up-to-date answer. At the same time, these popular models are very generalist, so how do we specialize them for specific use cases and adapt them to enterprise applications? Your data is one of the most valuable things you can work with, and techniques such as RAG or fine-tuning let you supercharge the capabilities your application delivers. In the next few minutes we'll cover both techniques, the differences between them, and where you can start using them. Let's get started.

[1:05] Let's begin with retrieval-augmented generation, a way to increase the capabilities of a model by retrieving external, up-to-date information, augmenting the original prompt given to the model, and then generating a response using that context. This is powerful because, thinking back to the Euro 2024 example, the model didn't have the information in context to provide an answer, and that is one of the big limitations of LLMs. RAG mitigates it: instead of an incorrect or possibly hallucinated answer, we work with what's known as a corpus of information. That could be data, PDFs, documents, or spreadsheets, anything relevant to our organization or the knowledge we need to specialize in. When a query comes in, a retriever pulls the right documents and the context relevant to the question, then passes that knowledge, along with the original prompt, to the large language model. Using its pre-trained knowledge, the model gives us a response grounded in that contextualized information. This means we can get better responses from a model using our proprietary and confidential information without any retraining, which makes RAG a popular way to enhance a model's capabilities without fine-tuning.

[2:32] Fine-tuning, as the name implies, means taking a large foundation model and specializing it in a certain domain or area. We provide the model with labeled, targeted data, and after training we have a specialized model for a specific use case: one that talks in a certain style or with a tone that represents our organization or company. Then, when the model is queried by a user or any other client, we get a response with the tone, output, or domain specialty we want. The important point is that we are baking this context and intuition into the model: it becomes part of the model's weights, rather than being supplemented on top with a technique like RAG.

[3:25] Okay, so we understand how both techniques can enhance a model's accuracy, output, and performance, but let's look at their strengths and weaknesses in some common use cases, because the direction you choose can greatly affect a model's performance, accuracy, compute cost, and much more. Start with retrieval-augmented generation. Because we're working with a corpus of information, RAG is perfect for dynamic data sources such as databases and other repositories where we want to continuously pull information and keep it up to date for the model. And because the retriever passes information into the prompt as context, it helps with hallucinations, and being able to cite the sources of that information matters in systems where we need trust and transparency. But think about the whole system: an efficient retrieval pipeline determines how well we select the data that goes into the limited context window, and maintaining it is something you have to plan for. Also, this system only supplements information on top of the model; we're not enhancing the base model itself, just giving it the relevant context it needs.

[4:48] Fine-tuning is different, because we actually bake that context and intuition into the model, so we have greater influence over how the model behaves in different situations. Is it an insurance adjuster? Can it summarize documents? Whatever we want the model to do, fine-tuning can help. And because that behavior lives in the model's weights, fine-tuning is great for speed, inference cost, and other operational factors. For example, we can use smaller prompt context windows to get the responses we want, and as we specialize these models they can get smaller and smaller for specific use cases, which makes them practical to run in many settings. But we keep the same cutoff problem: beyond the point at which the model was trained, there's no additional information we can give it, the same issue we saw with the Euro 2024 example.

[5:55] Both approaches have strengths and weaknesses, so let's look at some concrete use cases. When choosing between RAG and fine-tuning, consider your AI-enabled application's priorities and requirements. It starts with the data: is it slow-moving or fast-moving? If we need up-to-date external information available in context every time we use the model, that's a great fit for RAG, for example a product documentation chatbot whose answers stay current as the docs change. Next, think about your industry. Fine-tuning shines in industries with nuanced writing styles, terminology, and vocabulary, so a legal document summarizer could be a perfect fine-tuning use case. Now consider sources. Transparency behind our models is increasingly important, and RAG's ability to show the context and where information came from is a real advantage, again a good fit for chatbots in retail, insurance, and other specialties where citing the source inside the prompt matters. On the other hand, we may have historical data in our organization we can use to train a model so it becomes accustomed to the data we work with; that legal summarizer could be trained on past cases and documents so it understands the situations it operates in and produces better, more desirable outputs.

[7:35] But I think the best situation is a combination of both methods. Say we have a financial news reporting service. We could fine-tune it to be native to the finance industry and understand the lingo, and train it on past financial records so it learns how we work in that industry, while also having it retrieve the most up-to-date news and data and present them with confidence, transparency, and trust to the end user who is making a decision and needs to know the source. This is where combining fine-tuning and RAG shines: RAG keeps the information retrieved and up to date, while fine-tuning specializes the model in a certain domain.

[8:26] They're both wonderful techniques with their own strengths, but the choice to use one or a combination of both comes down to your specific use case and data. Thank you so much for watching. If you have any questions about fine-tuning, RAG, or other AI topics, let us know in the comment section below. Don't forget to like the video and subscribe to the channel for more content. Thanks for watching.
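The "labeled and targeted data" the transcript mentions for fine-tuning usually takes the form of input/output pairs. Below is a toy dataset for the legal-summarizer use case from the video; the `prompt`/`completion` field names are one common convention, but real fine-tuning services vary in their exact schema, so treat the shape as an assumption.

```python
# Toy fine-tuning dataset: labeled input/output pairs targeting a legal
# document summarizer. Many fine-tuning services ingest JSONL, one JSON
# object per line; the field names here are illustrative.
import json

examples = [
    {"prompt": "Summarize: The lessee shall remit payment no later than "
               "the fifth day of each calendar month.",
     "completion": "Rent is due by the 5th of each month."},
    {"prompt": "Summarize: Either party may terminate this agreement upon "
               "thirty (30) days' prior written notice.",
     "completion": "Either party can end the contract with 30 days' written notice."},
]

# Serialize to JSONL: one example per line, ready to upload as training data.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

Training on hundreds or thousands of such pairs is what "bakes" the domain's terminology and summary style into the model's weights, as opposed to RAG, which would instead place the source clause into the prompt at query time.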
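The decision factors at the end of the transcript (data freshness, source transparency, domain-specific style) can be condensed into a toy rule of thumb. The flag names and the all-or-nothing logic are this sketch's assumptions, not an established framework; real decisions also weigh cost, data volume, and team resources, as the key points note.

```python
# Toy decision helper condensing the video's criteria: fast-moving data and
# source citation point to RAG; domain style/terminology points to
# fine-tuning; needing both suggests combining the two techniques.

def recommend(fast_moving_data: bool,
              needs_sources: bool,
              domain_style: bool) -> str:
    wants_rag = fast_moving_data or needs_sources
    wants_ft = domain_style
    if wants_rag and wants_ft:
        return "combine RAG + fine-tuning"
    if wants_rag:
        return "RAG"
    if wants_ft:
        return "fine-tuning"
    return "base model may suffice"

# Financial news service from the video: fresh data, cited sources,
# finance-specific language, so the combination wins.
print(recommend(True, True, True))    # combine RAG + fine-tuning
# Legal document summarizer: stable data, heavy domain terminology.
print(recommend(False, False, True))  # fine-tuning
```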