# Generative AI vs Traditional Predictive Analytics

**Source:** [https://www.youtube.com/watch?v=SNZSm02_fpU](https://www.youtube.com/watch?v=SNZSm02_fpU)
**Duration:** 00:06:08

## Summary

- Traditional AI before generative models relied on a three‑layer stack: a data repository, an analytics platform (e.g., SPSS Modeler or Watson Studio) to build predictive models, and an application layer to act on those predictions.
- Those predictive models were essentially static “what‑if” tools that required a manual feedback loop to retrain and improve accuracy after each deployment.
- The feedback loop—learning from both correct and incorrect predictions—was the essential mechanism that turned basic analytics into a learning AI system.
- Generative AI fundamentally reshapes this architecture by bypassing the separate modeling step and using large, pre‑trained models that ingest raw data directly to generate outputs.
- As a result, the development cycle, deployment, and continuous improvement processes for AI applications are dramatically simplified and accelerated compared with the legacy predictive‑analytics approach.

## Sections

- [00:00:00](https://www.youtube.com/watch?v=SNZSm02_fpU&t=0s) **Legacy AI Workflow Overview** - The speaker contrasts traditional AI pipelines—using a data repository, analytics platform, and application layer to build predictive models such as churn detection—with the newer generative AI approach.
- [00:03:03](https://www.youtube.com/watch?v=SNZSm02_fpU&t=183s) **From General Models to Business Specifics** - The speaker outlines how generative AI moves from using massive, publicly‑sourced data to requiring a prompting and tuning layer that adapts broad large language models to the nuanced, organization‑specific needs of a business.
- [00:06:06](https://www.youtube.com/watch?v=SNZSm02_fpU&t=366s) **Polite Closing Appreciation** - The speaker thanks the listener for their time and expresses hope that the information provided was useful.
## Full Transcript
So generative AI is all the rage,
but one question I get quite frequently
is how does generative AI differ from AI that we were doing
5, 10, 20, maybe even 30 years ago?
To understand that, let's take a look
at AI the way it existed before generative AI.
Typically, the way that it worked
is you started off with a repository.
And a repository is exactly what it sounds like.
It's just where you keep all of your information
and that can be data in tables, rows and columns.
It can be images, it can be documents.
It can really be anything.
It's just where, as an organization, you keep all of your
historical information.
The second part is what we call an analytics platform.
In the IBM world,
examples of an analytics platform are SPSS Modeler
and Watson Studio.
And then the third component
is the application layer.
So let's say you're a telco.
You have all your information about the customers in the repository.
And let's say you want to know which customers are likely to churn or cancel their service.
So you would take that information in the repository,
move it into an analytics platform.
Inside the analytics platform you would build your models.
In this case, a model of who is and isn't likely to churn or cancel their service.
And then once you have those models built,
you would put them in some kind of application.
And the application is where you try to
prevent those people from canceling.
So for example, if somebody is likely to cancel,
maybe you reach out to them and try to convince them not to
or give them some kind of benefit so that they stick around as a customer.
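The three-layer flow the speaker describes (repository, analytics platform, application) could be sketched like this. Every field name, weight, and threshold here is illustrative, not from the video; it is a minimal sketch, not a real churn model:

```python
# Layer 1: the repository -- historical customer records (hypothetical fields).
repository = [
    {"id": 1, "monthly_spend": 20, "support_calls": 5, "tenure_months": 3},
    {"id": 2, "monthly_spend": 80, "support_calls": 0, "tenure_months": 48},
    {"id": 3, "monthly_spend": 35, "support_calls": 4, "tenure_months": 6},
]

# Layer 2: the analytics platform -- here reduced to a toy scoring rule.
def churn_score(customer):
    """Higher score = more likely to cancel (illustrative weights only)."""
    score = 0.1 * customer["support_calls"]
    score -= 0.005 * customer["tenure_months"]
    if customer["monthly_spend"] < 30:
        score += 0.2
    return score

# Layer 3: the application -- act on the predictions, e.g. pick
# which customers to reach out to with a retention offer.
def retention_targets(customers, threshold=0.3):
    return [c["id"] for c in customers if churn_score(c) > threshold]

print(retention_targets(repository))  # -> [1, 3]
```

In a real pipeline the scoring function would be a trained model from a platform such as SPSS Modeler, but the shape of the flow is the same: data out of the repository, a model over it, and an application acting on the scores.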
But this in itself, I wouldn't call this AI.
This is more of a predictive analytics or a predictive model.
To make this AI, you have to provide a feedback loop.
And a feedback loop allows you to automate the process.
So, for example, you're a telco
and you have your information on your customers,
you figure out who's going to cancel.
You take action through an application to try to keep them from canceling.
But your models here are sometimes right
and sometimes wrong.
What the feedback loop allows you to do is to learn from that experience.
So if there are situations where you predicted somebody was going to cancel and they didn't,
maybe you can drill in and make your models better
so that you don't make that same mistake a second time.
So think of it like this:
Fool me once, shame on you.
Fool me twice, shame on me.
That's what you want your AI to do.
You want your AI to learn from its previous mistakes
and its previous successes, too.
And the feedback loop allows you to do that.
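The feedback loop the speaker describes could be sketched as follows. The comparison of predictions against outcomes is real to the idea; the threshold update is a deliberately naive stand-in for retraining, and all names and numbers are made up:

```python
# Compare what the model predicted against what actually happened.
def evaluate(predictions, outcomes):
    """Count correct and incorrect predictions, keyed by customer id."""
    results = {"hits": 0, "misses": 0}
    for cid, predicted in predictions.items():
        if predicted == outcomes[cid]:
            results["hits"] += 1
        else:
            results["misses"] += 1
    return results

# A stand-in for "making the model better": nudge the decision
# threshold when the model is wrong more often than it is right.
def adjust_threshold(threshold, results, step=0.05):
    if results["misses"] > results["hits"]:
        return threshold - step
    return threshold

predictions = {1: True, 2: False, 3: True}   # model said these would churn
outcomes    = {1: True, 2: True, 3: False}   # what actually happened
results = evaluate(predictions, outcomes)
print(results)                               # {'hits': 1, 'misses': 2}
print(round(adjust_threshold(0.3, results), 2))
```

The point is the loop itself: each deployment produces outcomes, the outcomes grade the model, and the model (here, just its threshold) is updated before the next round.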
So this is the way that it always existed.
I've been in this business for over 30 years, and this predates me.
But with generative AI, this whole paradigm has changed.
The whole fundamental architecture
and the way that we do things is different now.
With generative AI you start off with data,
not from your organization, not from a repository
inside the walls of your company.
But you start off with data from Earth.
Okay, so maybe not Earth, right?
But you start with this massive, massive, massive quantity of information.
Information about everything.
That information then is used by
large language models.
These large language models are
very powerful, very big,
and, to be honest, remarkable.
But a lot of times they don't have the specifics that you need to guide you in your business.
So, for example, a large language model might know in general
why people cancel a particular service if you're a telco,
but it wouldn't have the nuances and the idiosyncrasies
of why your specific customers cancel.
That's when you use what's called prompting and tuning.
The prompting and tuning layer
is where you take the large language models,
which are very general models,
and make them specific to your use case.
So going back to our telco who's trying to deal with customer churn,
they would have this model that's built
not just on churn data from your own customers,
but built on massive quantities of information that have everything in it.
LLMs are derived from that massive quantity of information,
and then you use this prompting and tuning layer to fine-tune
those models so that they're specific to your organization.
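The prompting part of that layer, taking a general model and steering it toward your specific business, can be sketched without any real LLM. The model call is a stub, and the instruction text and context are invented for illustration:

```python
# A minimal sketch of a prompting layer: wrap a general-purpose model
# with organization-specific context before asking the question.
# A real system would send the assembled prompt to an LLM API here.

GENERAL_INSTRUCTIONS = "You are an assistant that explains customer churn."

def build_prompt(question, business_context):
    """Combine general instructions, company specifics, and the question."""
    return "\n\n".join([
        GENERAL_INSTRUCTIONS,
        "Company-specific context:\n" + business_context,
        "Question: " + question,
    ])

context = "Our customers most often cancel after billing disputes."
prompt = build_prompt("Why might a long-tenure customer cancel?", context)
print(prompt)
```

Tuning goes further than this (it adjusts the model itself on your data), but the idea is the same: the general model stays general, and a layer in front of it supplies the nuances of your organization.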
And then the final part is you have an application layer,
just like you do with traditional AI.
And the application, again, is where you take the AI so that it's consumed,
so that it fulfills its specific purpose.
And also, just like with traditional AI, you also have a feedback loop,
but the feedback loop typically goes back just to the prompting and tuning part,
because the large language models themselves typically sit outside of your organization.
So there you have it.
That's why generative AI is different:
the fundamental architecture is different.
And primarily, it has to do with the size and the quantity,
both of the data coming in and of the models being built.
And these models and this data are way too big for any organization to hold in their repository.
That's why we need a fundamentally different architecture.
Thanks so much for your time. I hope this was helpful.