
Ensembling Traditional AI with LLMs

Key Points

  • The speaker introduces an “AI toolbox” concept, emphasizing the need to dynamically select and combine different AI models to maximize value as new techniques emerge.
  • A new ensemble approach is proposed that leverages both traditional AI (machine‑learning/deep‑learning models) and large language models (LLMs) to capitalize on each type’s strengths.
  • Traditional AI excels with structured data, offers fast, low‑latency, energy‑efficient predictions, and is commonly used for tasks such as fraud detection, AML, insurance claim analysis, and medical imaging.
  • LLMs—particularly encoder‑based models—provide higher accuracy at the cost of greater computational power, higher latency, and lower energy efficiency, making them suitable for the same domains when precision outweighs speed.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=UvZAaeBhOBs](https://www.youtube.com/watch?v=UvZAaeBhOBs)
**Duration:** 00:08:06

## Sections

- [00:00:00](https://www.youtube.com/watch?v=UvZAaeBhOBs&t=0s) **Ensemble AI Toolbox Framework** - The speaker outlines a dynamic approach that categorizes traditional AI and large language models, then uses an ensemble strategy to combine their complementary strengths for greater business value.

## Full Transcript
AI is everywhere in business these days, and with rapid innovations continuously adding new models and new techniques to what I'm going to conceptualize here as our AI toolbox, it's important that we also continually adapt how and when we use what out of that toolbox, in a very dynamic way. Today I want to talk through a new approach, using an ensemble of AI models, that will help you get more value out of that growing toolbox.

So let's go through three things. First, let's dive into that toolbox and understand our tools a little better; we'll set ourselves a framework. Second, let's talk about their different attributes, their characteristics, so that we can understand their strengths and therefore when we might use what. And lastly, we'll overlay all of that with some examples, some use cases.

So let's take a look at our first tool. Up until now, a lot of the development and the use cases have been around traditional AI. Traditional AI is built on machine learning and deep learning models. Another tool that we have is large language models, or LLMs. Large language models are largely built on encoder and decoder models, but we'll table that for now.

Historically, a lot of the discussion has been on when to use traditional AI or when to use large language models. What this technique opens up is the "and": it allows you to leverage multiple AI models to take advantage of their different strengths, based on the situation, to get the most out of your data.

So now that we have a framework of our tools, let's dive in a little further and talk about their different attributes. Traditional AI, again taking a very simplistic view: how does traditional AI work? It looks at structured data, and then, following a set of rules, it makes a prediction. And along with that
prediction, it gives you a confidence rating. As for the types of models you would see that on, there are a couple in the financial industry: you'll see fraud analysis done, anti-money laundering, insurance claim analysis, and also medical image analysis.

Those strengths that I mentioned earlier for traditional AI: the models tend to be smaller in size, they tend to have lower latency (so they're faster), and they tend to use less power (so they're more energy efficient).

Let's jump over to our large language models, starting with the encoder models. Let's talk through two spaces here. In the first space, they work similarly to traditional AI: they start with structured data, they follow a set of rules, they make a prediction, and they give a confidence rating. But because those encoder models use a different set of techniques, and they also use larger, more complex models, they tend to draw a little more power (so they're less energy efficient) and have a little higher latency (so they're a little slower). If you look at that list, why would you ever use those models? Because they're more accurate; their accuracy is increased. Where might you see those? As I said, similar cases: fraud, anti-money laundering, insurance analysis, and image analysis.

I mentioned a second type of encoder model, so a separate space there: instead of starting with structured data, they actually convert unstructured data into structured data, and I'll give an example later to bring that home a little more.

Decoder models also start with unstructured data, but they then generate new data. These are chatbots, for example, and that's a big space there.

So you can start to see, as you look at the different characteristics and the different strengths of these models, that depending on the situation you may want to use a
different model type. What this technique allows you to do is live in this hybrid world, where you can have multiple models and then, based on the situation, based on what strengths you want to leverage (maybe accuracy, or sustainability, speed, size), you can switch between those models in a very dynamic way, giving you the most accurate prediction in the least amount of time.

So, examples. I said that now that we have this framework, let's overlay some use cases on there. Let's start in the financial industry and zoom into fraud analysis for a minute, because credit card fraud is unfortunately very relatable to a lot of us, and you can also understand that in that situation you want the highest accuracy but in the least amount of time. How do you get those two different strengths from two different models? Again, that's the power of this approach.

So you go to the store and you swipe your credit card. Because that financial transaction is probably already running through a mainframe, you can use the powerful AI capabilities that mainframes have to extract some data while that transaction is already taking place. You can run it through a traditional model and get a prediction, with a confidence rating, on whether that transaction was fraudulent or not. Most of the time you'll have a high confidence and you move on. Periodically, if you have a lower confidence, you can switch over to that large language model, where you get the accuracy that you need. So you can see that this approach on a mainframe maintains the speed and the sustainability of the smaller models while leveraging the larger models for accuracy when you need it.

I'll close with one short additional example that showcases the reverse of those two. Let's look at insurance claim analysis. Insurance claims are a mix
of structured and unstructured data: structured being your name, your geographic location, a dollar amount; and then unstructured, a bunch of text about that particular incident. So here you'll start over with a large language model: you'll run the claim through the unstructured-to-structured conversion so you get more data to then run through and do that analysis. From there you may jump straight to a large language model to get the accuracy that you need, or, similar to the last example, you first run it through a traditional AI model, get your prediction and your confidence, and then only if needed switch over.

So hopefully you can start to see, with these two examples, the power of these multi-model AI environments and the value that you can get out of this technique.
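The fraud-analysis flow from the talk (run the fast traditional model first, and switch over to the large language model only when confidence is low) can be sketched as a simple confidence-threshold router. Everything below is an illustrative stand-in: the 0.90 threshold, the rule-based "traditional model," and the stubbed "LLM" are assumptions for demonstration, not anything specified in the talk.

```python
# Confidence-threshold ensemble: try the fast traditional model first,
# escalate to the slower, more accurate model only on low confidence.
# Both "models" are illustrative stubs, not real implementations.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for illustration


def traditional_model(transaction: dict) -> tuple[str, float]:
    """Fast, rule-based stand-in: flags large out-of-state transactions."""
    suspicious = transaction["amount"] > 5000 and transaction["out_of_state"]
    if suspicious:
        return "fraud", 0.65      # unusual pattern: low confidence
    return "legitimate", 0.97     # routine pattern: high confidence


def llm_model(transaction: dict) -> tuple[str, float]:
    """Slow, accurate stand-in for an encoder-based LLM classifier."""
    label = "fraud" if transaction["amount"] > 5000 else "legitimate"
    return label, 0.99


def classify(transaction: dict) -> tuple[str, float, str]:
    """Route: traditional model first; escalate on low confidence."""
    label, confidence = traditional_model(transaction)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, confidence, "traditional"
    # Low confidence: switch over to the large language model.
    label, confidence = llm_model(transaction)
    return label, confidence, "llm"


print(classify({"amount": 40, "out_of_state": False}))    # fast path
print(classify({"amount": 9000, "out_of_state": True}))   # escalated
```

The design choice mirrors the mainframe example: most transactions take the cheap, low-latency path, and only the uncertain minority pay the cost of the larger model.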
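The insurance-claim flow, converting unstructured text into structured fields before any downstream model sees it, might look like the following sketch. A regex stands in for the encoder model's unstructured-to-structured step; the field names, patterns, and sample claim text are all assumptions for illustration.

```python
import re


def extract_structured(claim_text: str) -> dict:
    """Stand-in for an encoder model's unstructured-to-structured step:
    pull a dollar amount and a location out of free-form claim text."""
    amount = re.search(r"\$([\d,]+)", claim_text)
    location = re.search(r"in ([A-Z][a-z]+(?: [A-Z][a-z]+)*)", claim_text)
    return {
        "amount": int(amount.group(1).replace(",", "")) if amount else None,
        "location": location.group(1) if location else None,
        "text": claim_text,  # keep the raw text for a possible LLM pass
    }


claim = "Rear-ended in Des Moines; repair estimate is $4,200."
record = extract_structured(claim)
print(record["amount"], record["location"])
```

The resulting record could then feed the same traditional-model-first, LLM-on-low-confidence routing as the fraud example, which is the "reverse" pairing the speaker describes.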