Why Container Orchestration Matters

Key Points

  • Container orchestration was introduced to manage multiple inter‑dependent microservices—frontend, backend, and database access—once they’re packaged as containers.
  • Developers typically focus on a single application stack inside containers (app code, OS, dependencies), while operations teams must oversee the entire underlying infrastructure.
  • An orchestration platform like Kubernetes adds a master layer that controls worker nodes (VMs or containers) where the actual compute resources (CPU, RAM) reside.
  • The primary function of orchestration is to automate deployment of services across these worker nodes, even starting with a single node running one instance of each microservice.
  • By abstracting the hardware layer, orchestration enables consistent scaling, management, and monitoring of complex multi‑service applications.

Full Transcript

# Why Container Orchestration Matters

**Source:** [https://www.youtube.com/watch?v=kBF6Bvth0zw](https://www.youtube.com/watch?v=kBF6Bvth0zw)
**Duration:** 00:09:05

## Sections

- [00:00:00](https://www.youtube.com/watch?v=kBF6Bvth0zw&t=0s) **Understanding the Need for Container Orchestration** - Sai Vennam of IBM Cloud explains why orchestration is essential by illustrating how a master node (e.g., Kubernetes) coordinates multiple containerized microservices (frontend, backend, and database) to deliver a cohesive application.
- [00:03:10](https://www.youtube.com/watch?v=kBF6Bvth0zw&t=190s) **Orchestrating Microservice Scaling & Networking** - The speaker outlines how an orchestration platform schedules containers across added worker nodes, scales each microservice to the desired replica count, and automatically handles networking tasks such as creating services, load balancing, and service discovery.
- [00:06:17](https://www.youtube.com/watch?v=kBF6Bvth0zw&t=377s) **Visualizing Service Mesh Interactions** - The speaker describes how an orchestration platform integrates open-source tools like Prometheus and Istio to provide logging, analytics, and a visual map of microservice communication, illustrated with a simple three-service mesh that lets operations monitor request flows and spot unexpected direct accesses.

## Full Transcript
[0:00] Hi everyone, my name is Sai Vennam, and I'm with the IBM Cloud team. Today, we want to talk about container orchestration. I know that in the past we've talked about containerization technology, as well as dived into Kubernetes as an orchestration platform. But let's take a step back and talk about why container orchestration was necessary in the first place.

[0:21] We'll start with an example. Let's say that we've got three different microservices that have already been containerized. We've got the frontend, we'll have the backend, as well as a database access service. These three services will be working together, and are also exposed to end users, so they can access that application.

[0:44] The developer has a very focused look at this layout. So, they're thinking about the end user, the end user accessing that frontend application, that frontend, which relies on the backend, which may, in turn, store things using the database service. The developer is focused entirely on this layer.

[1:06] Underneath it, we've got an orchestration layer. So, we can call that a master, and I'm thinking about Kubernetes right now, where you would have something like a master node that manages the various applications running on your compute resources.

[1:23] But, again, a developer has a very singular, focused look at this layout, and they're really only looking at this stack right here. They're thinking about the specific containers and what's happening within them. Within those containers, there are a few key things. So, there's going to be the application itself, there are also going to be things like the operating system, as well as dependencies. And there are going to be a number of other things that you define, but all of those things are contained within the containers themselves.

[1:55] An operations team has a much larger view of the world.
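The developer's unit of focus that the speaker describes - app code, OS layer, and dependencies baked into one container - can be sketched as a minimal Kubernetes Pod. This is an illustrative sketch, not from the video; the image name and port are hypothetical.

```yaml
# Hypothetical Pod spec: the developer's view is this one container.
# The image bundles the app code, a base OS layer, and dependencies.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: frontend
      image: example/frontend:1.0   # illustrative image name
      ports:
        - containerPort: 8080       # illustrative port
```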
[2:00] They're looking at the entire stack. So, an operations team: there's a number of things that they need to focus on, but we'll use this side to kind of explain how they work with deploying an application that is made up of multiple services.

[2:15] So, first, we'll talk about deploying. Taking a look here, it's very similar to over here, but the key difference is these are no longer containers, but the actual computing resources. These can be things like VMs (virtual machines), or, in the Kubernetes world, we call these "worker nodes". So, each one of these would be an actual computing worker node. It could be something like 4 vCPUs (virtual CPUs) with 8 GB of RAM for each one of these different boxes that we have laid out here.

[2:50] The first thing you would use an orchestration platform to do is something simple - just deploying an application. Let's say that we start with a single node. And, again, here we've got the master. On that single node, we'll deploy three different microservices - one instance each. So, we'll start with the frontend, we'll have the backend, as well as the database access service.

[3:24] Already, let's assume that we've consumed a good bit of the compute resources that are available on that worker node. So, we realize - let's add additional worker nodes to our master and start scheduling out and scaling our application.

[3:40] So, that's the next piece of the puzzle. The next thing an orchestration platform cares about is scaling an application out. So, let's say that we want to scale out the frontend twice. The backend, we'll scale it out three times. And the database access service - let's say we scale this one out three times as well. An orchestration platform will schedule out our different microservices and containers to make sure that we utilize the compute resources in the best possible way.
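In Kubernetes, the deploy-then-scale workflow the speaker walks through is expressed declaratively: "scale the frontend out twice" is just a replica count, and the scheduler places those replicas across whatever worker nodes have capacity. A sketch under hypothetical names (the image and resource numbers are illustrative; the backend and database-access Deployments would look the same with `replicas: 3`):

```yaml
# Hypothetical Deployment for the frontend microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                 # "scale the frontend out twice"
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0   # illustrative image name
          resources:
            requests:
              cpu: "500m"     # lets the scheduler bin-pack worker nodes
              memory: 512Mi
```

Because the replica count is declarative, this same object also covers the self-healing described later: if a pod goes down, the platform starts a replacement to get back to the declared count.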
[4:21] One of the key things that an orchestration platform does is scheduling. Next, we need to talk about networking and how we enable other people to access those services. That's the third thing that we can do with an orchestration platform. So, that includes creating things like services that represent each of our individual containers.

[4:47] The problem is: without having something like an orchestration platform take care of this for you, you would have to create your own load balancers. In addition, you would have to manage your own services and service discovery as well. By that, I basically mean that if these services need to talk to one another, they shouldn't have to find the IP addresses of each different container, resolve those, and check whether they're running. That's something the orchestration platform handles for them.

[5:17] So, with this, we have the ability to expose singular points of access for each of those services. And again, very similarly, an end user might access that frontend application - so the orchestration platform would expose that service to the world, while keeping these services internal - where the frontend can access the backend, and the backend can access that database. Let's say that that's the third thing that an orchestration platform will do for you.

[5:49] The last thing I want to highlight here is insight. Insight is very important when working with an application in production. So, developers are focused on the applications themselves, but let's say that one of these pods accidentally goes down. What the orchestration platform will do is rapidly bring up another one and bring it within the purview of that service. It will do that for you automatically.
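In Kubernetes terms, those "singular points of access" are Service objects: each one gives a microservice a stable name and virtual IP, with built-in load balancing across its pods and DNS-based service discovery (the frontend can reach the backend simply as `backend`). A hedged sketch with hypothetical names and ports:

```yaml
# Hypothetical Service exposed to end users via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer      # reachable from outside the cluster
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
---
# Hypothetical internal-only Service: the default ClusterIP type
# keeps the backend reachable only from inside the cluster, by name.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```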
[6:19] In addition, an orchestration platform has a number of pluggable points where you can use key open-source technologies - things like Prometheus and Istio - to plug directly into the platform and expose capabilities that let you do things like logging and analytics. And there's even a cool one, something that I want to sketch out here - the ability to see the entire service mesh.

[6:41] Many times, you might want to lay out all of the different microservices that you have and see how they communicate with one another. In this example, it's fairly straightforward, but let's go through the exercise anyway. So, we've got our end user, and the end user would likely be accessing the frontend application. And we've got the two other services as well: the database, as well as the backend.

[7:09] In this particular example, I'll admit, we have a very simple service mesh - we've only got three services. But seeing how they communicate with one another can still be very valuable. So, the user accesses the frontend, the frontend accesses the backend, and we expect the backend to access the database. But let's say the operations team finds that, oh, actually, sometimes the frontend is directly accessing the database service. They can see how often, as well.

[7:35] With things like a service mesh, you get insight into things like operations per second. Let's say there are 5 operations per second hitting the frontend, maybe 8 per second that go to the backend, maybe 3 per second that go to the database service, but then 0.5 requests per second going from the frontend to the database service. The operations team has identified, by taking a look at the requests and tracing them through the different services, that here's where the issue is.
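With Istio specifically, "plugging into the platform" can be as simple as labeling a namespace so that every pod deployed there gets a sidecar proxy; it is those proxies that record the per-edge request rates (such as the unexpected frontend-to-database traffic) that a mesh console can then visualize. A sketch, assuming a hypothetical `shop` namespace:

```yaml
# Hypothetical namespace opted into Istio sidecar injection.
# Pods created here get a proxy that reports request metrics
# (e.g. frontend -> database traffic) for the mesh to display.
apiVersion: v1
kind: Namespace
metadata:
  name: shop              # illustrative name
  labels:
    istio-injection: enabled
```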
[8:07] This is a simple example of how you can use something like Istio and Kiali (a service-mesh visualization tool) to gain insight into running services.

[8:19] Orchestration platforms have a number of capabilities that they need to support, and this is why we're seeing the growth of operations roles like SREs (Site Reliability Engineers): there are a lot of things they need to concern themselves with when running an application in production. Developers, by contrast, see a very singular view of the world, where they're focusing on the things within the containers themselves.

[8:46] Thanks for joining me for this quick overview of container orchestration technology. If you liked this video, please be sure to drop a comment below or leave us any feedback, and we'll get back to you. Be sure to subscribe, and stay tuned for more videos in the future. Thank you.