
Generative AI vs Traditional Predictive Analytics

Key Points

  • Traditional AI before generative models relied on a three‑layer stack: a data repository, an analytics platform (e.g., SPSS Modeler or Watson Studio) to build predictive models, and an application layer to act on those predictions.
  • Those predictive models were essentially static “what‑if” tools that required a manual feedback loop to retrain and improve accuracy after each deployment.
  • The feedback loop—learning from both correct and incorrect predictions—was the essential mechanism that turned basic analytics into a learning AI system.
  • Generative AI fundamentally reshapes this architecture by bypassing the separate modeling step and using large, pre‑trained models that ingest raw data directly to generate outputs.
  • As a result, the development cycle, deployment, and continuous improvement processes for AI applications are dramatically simplified and accelerated compared with the legacy predictive‑analytics approach.
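The legacy three-layer stack described above can be sketched as a minimal pipeline. Everything here is illustrative: the field names, the threshold "model", and the actions are stand-ins, not any specific IBM product behavior.

```python
# Sketch of the legacy stack: repository -> analytics platform -> application.

# 1. Data repository: historical customer records (illustrative fields).
repository = [
    {"id": 1, "support_calls": 5, "churned": True},
    {"id": 2, "support_calls": 1, "churned": False},
    {"id": 3, "support_calls": 4, "churned": True},
    {"id": 4, "support_calls": 0, "churned": False},
]

# 2. Analytics platform: build a predictive model from the repository.
#    A trivial call-count threshold stands in for a real trained model.
def build_churn_model(records):
    churner_calls = [r["support_calls"] for r in records if r["churned"]]
    threshold = min(churner_calls)  # here: 4
    return lambda customer: customer["support_calls"] >= threshold

# 3. Application layer: act on each prediction.
def retention_action(customer, model):
    return "send retention offer" if model(customer) else "no action"

model = build_churn_model(repository)
print(retention_action({"id": 5, "support_calls": 6}, model))  # send retention offer
```

Note the one-way flow: once the model is built and deployed, nothing in this sketch updates it, which is exactly the gap the feedback loop fills.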

Source

**Source:** [https://www.youtube.com/watch?v=SNZSm02_fpU](https://www.youtube.com/watch?v=SNZSm02_fpU)
**Duration:** 00:06:08

Sections

  • [00:00:00](https://www.youtube.com/watch?v=SNZSm02_fpU&t=0s) **Legacy AI Workflow Overview**: The speaker contrasts traditional AI pipelines (a data repository, an analytics platform, and an application layer used to build predictive models such as churn detection) with the newer generative AI approach.
  • [00:03:03](https://www.youtube.com/watch?v=SNZSm02_fpU&t=183s) **From General Models to Business Specifics**: How generative AI moves from massive, publicly sourced data to a prompting and tuning layer that adapts broad large language models to the nuanced, organization-specific needs of a business.
  • [00:06:06](https://www.youtube.com/watch?v=SNZSm02_fpU&t=366s) **Closing Remarks**: The speaker thanks the listener and hopes the information was useful.

Full Transcript
0:00 So generative AI is all the rage, but one question I get quite frequently is: how does generative AI differ from the AI we were doing 5, 10, 20, maybe even 30 years ago? To understand that, let's take a look at AI the way it existed before generative AI.

0:20 So typically the way that it worked is you started off with a repository. And a repository is exactly what it sounds like: it's just where you keep all of your information. It can be data in tables, rows and columns; it can be images; it can be documents. It can really be anything. It's just where you, as an organization, keep all of your historical information.

0:50 The second part is what we call an analytics platform. In the IBM world, an example of an analytics platform is SPSS Modeler or Watson Studio. And then the third component is the application layer.

1:22 So let's say you're a telco. You have all your information about your customers in the repository. And let's say you want to know which customers are likely to churn, or cancel their service. So you would take that information in the repository and move it into an analytics platform. Inside the analytics platform you would build your models; in this case, who is and isn't likely to churn or cancel their service. And then once you have those models built, you would put them in some kind of application, and the application is where you try to prevent those people from canceling. So for example, if somebody is likely to cancel, maybe you reach out to them and try to convince them not to, or give them some kind of benefit so that they stick around as a customer.

2:07 But this in itself, I wouldn't call this AI. This is more of a predictive analytics or a predictive model. To make this AI, you have to provide a feedback loop.
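A minimal sketch of such a feedback loop on a toy churn model: compare predictions with what actually happened, fold the observed outcomes back into the history, and retrain. All names and the "training" rule are illustrative; real platforms automate this cycle.

```python
def train(history):
    """'Train' by averaging the support-call counts of known churners."""
    churners = [r["calls"] for r in history if r["churned"]]
    cutoff = sum(churners) / len(churners)
    return lambda r: r["calls"] >= cutoff

# Initial history and first deployed model (cutoff = 6.0).
history = [
    {"calls": 6, "churned": True},
    {"calls": 2, "churned": False},
]
model = train(history)

# Deploy, observe real outcomes, and feed them back into the history.
observed = [
    {"calls": 5, "churned": True},  # the deployed model missed this churner
    {"calls": 4, "churned": True},  # ...and this one
]
history.extend(observed)

# Retrain on the enlarged history: cutoff drops to (6 + 5 + 4) / 3 = 5.0,
# so the mistake is not repeated.
improved = train(history)
print(model({"calls": 5}), improved({"calls": 5}))  # False True
```

The key point is that the loop is structural, not a one-off: every deployment produces new outcomes that become the next round's training data.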
2:26 And a feedback loop allows you to automate the process. So, for example, you're a telco, and you have your information on your customers. You figure out who's going to cancel, and you take action through an application to try to keep them from canceling. But your models here are sometimes right and sometimes wrong. What the feedback loop allows you to do is learn from that experience. So if there are situations where you predicted somebody was going to cancel and they didn't, maybe you can drill in and make your models better so that you don't make that same mistake a second time.

3:00 So think of it like this: fool me once, shame on you; fool me twice, shame on me. That's what you want your AI to do. You want your AI to learn from its previous mistakes, and its previous successes, too. And the feedback loop allows you to do that.

3:15 So this is the way that it has always existed. I've been in this business for over 30 years, and this predates me. But with generative AI, this whole paradigm has changed. The whole fundamental architecture and the way that we do things is different now.

3:31 With generative AI, you start off with data, not from your organization, not from a repository inside the walls of your company, but with data from Earth. Okay, so maybe not Earth, right? But you start with this massive, massive quantity of information. Information about everything. That information is then used by large language models.

4:03 These large language models are very powerful, they're very big, and they're remarkable, to be honest. But a lot of times they don't have the specifics that you need to guide you in your business. So, for example, a large language model might know in general why people cancel a particular service if you're a telco,
but they wouldn't have the nuances and the idiosyncrasies of why your specific customers cancel. That's when you use what's called prompting and tuning.

4:35 The prompting and tuning layer is where you take the large language models, which are very general models, and make them specific to your use case. So going back to our telco who's trying to deal with customer churn: they would have this model that's built not just on customer churn or on your customers, but on massive quantities of information that have everything in it. LLMs are derived from that massive quantity of information, and then you use this prompting and tuning layer to fine-tune those models so that they're specific to your organization.

5:14 And then the final part is you have an application layer, just like you do with traditional AI. And the application, again, is where you take the AI so that it's consumed, so that it's going to fulfill its specific purpose. And also, just like with traditional AI, you have a feedback loop, but the feedback loop typically just goes back to the prompting and tuning part, because the models themselves are typically outside of your organization.

5:44 So there you have it. That's why generative AI is different: the fundamental architecture is different. And primarily, it has to do with the size and the quantity, both of the data coming in and of the models being built. These models and this data are way too big for any organization to hold in their repository. That's why we need a fundamentally different architecture.

6:06 Thanks so much for your time. I hope this was helpful.
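The prompting and tuning layer described in the transcript can be sketched as wrapping a general model with organization-specific context. The company name, the context text, and the model call are all hypothetical stubs; no real LLM API is used here.

```python
# Sketch of the prompting-and-tuning layer: a general-purpose LLM is
# specialized by injecting business-specific grounding into the prompt.

# Hypothetical organization-specific context (the "tuning" knowledge).
TELCO_CONTEXT = (
    "You are a churn analyst for ExampleTelco. "
    "Our customers most often cancel over billing disputes and "
    "dropped-call complaints in rural coverage areas."
)

def build_prompt(org_context, question):
    """Wrap a general question with organization-specific grounding."""
    return f"{org_context}\n\nQuestion: {question}\nAnswer:"

def call_llm(prompt):
    """Stand-in for a real large language model endpoint."""
    return f"[LLM response to {len(prompt)}-char prompt]"

prompt = build_prompt(TELCO_CONTEXT, "Why might customer #4512 churn?")
print(call_llm(prompt))
```

In this picture, feedback from the application layer improves the context and prompt templates rather than the underlying model weights, which matches the transcript's point that the loop goes back to the prompting and tuning layer, not to the externally hosted model.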