Infrastructure: Anchor or AI Foundation
Key Points
- Evaluate your existing infrastructure (cloud and on‑prem) to determine whether it’s a stagnant “anchor” or a viable foundation for AI workloads.
- Treat on‑prem resources with the same “cattle, not pets” mindset as cloud assets, ensuring they’re managed as scalable, interchangeable services rather than fixed, monolithic servers.
- Modernize applications by shifting from legacy monoliths to API‑driven, containerized services that can fully leverage the underlying hardware capabilities.
- Deploy workloads to the environment that best fits their requirements—use cloud GPUs for conversational bots and on‑prem high‑performance processors (e.g., IBM Telum) for real‑time, high‑throughput AI tasks like fraud detection.
- Align each application with the most appropriate infrastructure to achieve “fit‑for‑purpose” performance, security, and reliability, turning your current platform into an AI‑ready foundation.
Sections
- From Anchor to AI Foundation (00:00:00) - The speaker outlines three principles—assessing the current cloud/on‑prem landscape, modernizing applications, and placing workloads appropriately—to turn existing infrastructure from a static “anchor” into a flexible foundation for AI initiatives.
- Assessing Infrastructure as AI Foundation (00:03:09) - The speaker compares an on‑prem Telum‑based setup to either a restrictive “boat anchor” or a solid foundation for AI and high‑throughput workloads.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=kyJJeik9loU](https://www.youtube.com/watch?v=kyJJeik9loU)
**Duration:** 00:03:44
Do you consider your infrastructure a boat anchor
or are you using it as a foundation for A.I.?
Give me the next few minutes and let's discuss
how your current infrastructure truly can be that foundation for A.I.
Now, in order to do this, we're going to go through three key principles.
Current landscape, understanding what you have.
Modernizing your applications to take advantage
of the infrastructure you have and fit for purpose.
Really making sure we're running the systems where they belong.
So let's look at the current landscape.
Everybody has cloud and on prem today.
What are you using, and what are you running
in those environments as a service in the cloud?
But on prem, are we treating our infrastructure as pets or cattle?
When we think about our infrastructure, we truly need to make sure we're treating our on-prem infrastructure
in the same way we're dealing with cloud, whether private, dedicated, or however you want to look at cloud.
It's just another environment that we can use to run our applications on.
The right applications in the right way.
It provides us the storage and the systems to be able to do the work that we need to do.
So we have our infrastructure sitting there,
but how are applications taking advantage of this?
Are we truly building services, or did we just do IaaS and move
our existing legacy monolithic things over into an environment?
And on prem, are we doing APIs?
Are we exposing our applications?
Are we exposing our systems with our applications, or are they the legacy monoliths
that have grown up over the years that are just, yeah, a tangled spaghetti web mess?
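The modernization step the speaker describes, exposing an existing routine through an API rather than leaving it buried inside a monolith, can be sketched roughly as follows. This is a minimal illustration, not from the video; the routine, field names, and approval rule are all hypothetical stand-ins.

```python
import json

# Hypothetical legacy routine, previously reachable only from inside the monolith.
# The business logic itself is left unchanged by the modernization.
def legacy_credit_check(customer_id: str, amount: float) -> dict:
    approved = amount < 10_000  # placeholder rule for illustration
    return {"customer_id": customer_id, "approved": approved}

# Thin API layer: expose the legacy routine as JSON-in/JSON-out so other
# applications (and AI pipelines) can call it without touching the monolith.
def handle_request(body: str) -> str:
    payload = json.loads(body)
    result = legacy_credit_check(payload["customer_id"], payload["amount"])
    return json.dumps(result)
```

In practice this handler would sit behind an HTTP framework or API gateway; the point is that the service boundary is added around the legacy code, not carved out of it.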
So let's think about our applications and modernizing our applications,
and let's really get to the point of this which is fit for purpose.
When we think about our environments, I'm probably using
GPUs in the cloud to do chat bots.
I can get simple answers to questions near the users,
provide them that capability.
Well, on-prem I have the Telum processor.
I can run A.I.
in line.
When we think about this on-prem infrastructure optimized for high security,
high guaranteed reliability, high transaction throughput.
And I can run an AI transaction in line so I can do like fraud detection in line on
every credit card transaction.
Not just a few that I'd have to send off somewhere else.
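The inline-scoring idea above, running a model in the transaction path for every authorization rather than sampling and calling out to a remote service, might look like this sketch. The rule-based "model" and its thresholds are toy stand-ins; a real deployment would run an optimized model on the local accelerator (e.g. the Telum on-chip AI unit).

```python
# Sketch: score every transaction inline with a locally loaded model.
def load_fraud_model():
    # In practice this would load a compiled model onto on-prem hardware;
    # here a trivial heuristic stands in for a real fraud model.
    def score(txn: dict) -> float:
        risk = 0.0
        if txn["amount"] > 5_000:                     # unusually large amount
            risk += 0.6
        if txn["country"] != txn["home_country"]:     # unusual location
            risk += 0.5
        return min(risk, 1.0)
    return score

model = load_fraud_model()

def authorize(txn: dict, threshold: float = 0.8) -> bool:
    # Inline scoring: runs inside the authorization path for every
    # transaction, within the latency budget of the transaction itself.
    return model(txn) < threshold
```

Because the model lives next to the transaction, there is no network hop, which is what makes scoring all transactions (rather than a sample) feasible.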
So if I think about the application and infrastructure landscape,
I need to use the right infrastructure for the right functions.
On prem using the Telum processor, high transaction throughput, A.I.
in line, that kind of workload, and then optimizing
my applications to run in the appropriate location.
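The fit-for-purpose placement the speaker describes could be captured as a simple routing rule mapping each workload class to its best-matched environment. The workload names and environment labels below are illustrative assumptions, not part of the talk.

```python
# Sketch of a fit-for-purpose placement rule: route each workload class
# to the environment that matches its requirements.
PLACEMENT = {
    "chatbot": "cloud-gpu",              # conversational answers near users
    "fraud-detection": "on-prem-telum",  # inline AI, high transaction throughput
    "batch-analytics": "cloud-cpu",      # elastic, not latency-sensitive
}

def place(workload: str) -> str:
    # Unknown workloads get flagged for a manual fit-for-purpose review.
    return PLACEMENT.get(workload, "review-manually")
```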
So I have my application set up and I have them running on this foundation
that I already have in the environment.
So when you look at your infrastructure, do you consider it a boat anchor?
Or do you consider it that true foundation you already have for A.I.?