Learning Library

← Back to Library

Kubernetes Managed Service Architecture Overview

Key Points

  • Sai Vennam, an IBM developer advocate, introduces a high‑level reference architecture for managed Kubernetes services and explains how to deploy microservices onto the platform.
  • The architecture centers on the Kubernetes master (primarily the API server) that receives workload definitions, and on each worker node a kubelet that schedules pods and monitors their health.
  • Kubernetes is presented as the solution for scaling cloud‑native, micro‑service applications (e.g., a front‑end and back‑end) by describing resources in YAML manifests sent to the API server.
  • A simple YAML example is shown that defines a pod with a Docker Hub image, assigns a version tag, and adds an “app=frontend” label to enable identification and selection.
  • The deployment process is completed using `kubectl` (the client) to submit the manifest, after which the kubelet on a worker node creates and runs the pod.
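The pod manifest sketched in the video can be written out roughly as follows. This is an illustrative reconstruction, not the exact YAML from the video: the image name `my-registry/frontend:v1`, the pod name, and the port are assumptions; only the `app: frontend` label and the Docker Hub image idea come from the talk.

```yaml
# Minimal pod manifest (illustrative sketch; names and image are assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend                    # label used later to identify/select this pod
spec:
  containers:
    - name: frontend
      image: my-registry/frontend:v1 # a container image already pushed to Docker Hub
```

A manifest like this would be submitted to the API server with `kubectl apply -f pod.yaml`, after which the kubelet on a worker node pulls the image and starts the pod.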

Full Transcript

# Kubernetes Managed Service Architecture Overview

**Source:** [https://www.youtube.com/watch?v=aSrqRSk43lY](https://www.youtube.com/watch?v=aSrqRSk43lY)
**Duration:** 00:10:58

## Sections

- [00:00:00](https://www.youtube.com/watch?v=aSrqRSk43lY&t=0s) **Kubernetes Master and Worker Overview** - The speaker outlines a high‑level reference architecture of managed Kubernetes, highlighting the role of the API server on the master and the kubelet on each worker node in orchestrating container workloads.

## Full Transcript
Hi everyone, my name is Sai Vennam and I'm a developer advocate with IBM. Today I'm back with another video where I'm going to be talking about all things Kubernetes. Kubernetes is an orchestration tool allowing you to run and manage your container-based workloads. Today I want to take a high-level look at a reference architecture of managed Kubernetes services and dive a little bit deeper into how you would do a deployment of your microservices. Let's get started.

We've got two sides of the puzzle sketched out here. On the left side we've got the cloud side, and the very important component there is the Kubernetes master. The Kubernetes master has a lot of important components in it, but the most important piece we want to talk about today is the API server. The Kubernetes API server running on the master is integral to running all of your workloads, and it exposes a set of capabilities allowing us to define exactly how we want to run our workloads. On the right side, the customer-managed side, we've got our worker nodes, which are also Kubernetes-based. There's one major component I want to point out running on every single Kubernetes worker node, and that's the kubelet. The kubelet is essentially responsible for scheduling and making sure our apps are healthy and running within our worker nodes. So you can imagine that the master and the kubelet are going to be working together quite often.

Let's take a step back: why would someone want to start using Kubernetes? Well, maybe they have some microservices that make up a cloud-native application. As we all know, microservices talk to each other over the network. To really simplify this example, let's say we've got a front end and a back end, and those are the two components we want to scale out and deploy to the cluster today. Kubernetes uses YAML to define the resources that are sent to the API server, which end up creating the actual application. So let's get started by sketching out a simple YAML for deploying a pod — a really small logical unit allowing you to run a simple container on a worker node. We'll start with that: we've got a pod, and what we need with it is an image that's associated with it. Let's say it's a container we've already pushed up to Docker Hub, and we'll use my registry for this one. Very simply, let's say the name of the application is just "f" for front end, version one. One more thing we want to add here: labels. Labels are very important — we'll talk about why in a second — but they allow us to define exactly what type of artifact we've got. For the labels we'll just say the app is "f" for front end.

So we've got that created, and now we want to push it through our process to get it into a worker node. What we've got here is kubectl — "kube C-T-L", "kube cuddle", I've heard different ways of pronouncing it — and using that we're going to deploy the simple manifest we've got and land it in one of our worker nodes. We push the manifest through kubectl; it hits the API server running on the Kubernetes master, which in turn is going to talk to one of the kubelets, because we just want to deploy one of these pods and start it up. Let's say it starts up in our first worker node, with the label we've given it: app is front end. One thing to note: it actually gets an IP address as well, so let's say we get an internal IP address that ends in .1. At this point I could SSH into any of the worker nodes and use that IP address to hit the application.

That's great for deploying a simple application, but let's take it a step further. Kubernetes has an abstraction called deployments, allowing us to create something called a desired state. We can define the number of replicas we want for that pod, and if something were to happen to that pod and it dies, the deployment would create a new one for us. So we've got that pod — app is front end — and let's say we want to create three replicas of it. Going back to our manifest, one thing we need to do is tell Kubernetes that we don't want just a pod; we want a template for a pod. So we'll scratch that out and say this is a template for a pod. On top of that there are a few other things we want: the number of replicas — let's say we want three — and a selector, because we want to tell this deployment to manage any application deployed with that name, so the selector has to match the label. Again, this is not entirely valid YAML; I just want to give you an idea of the kind of artifacts Kubernetes is looking for. The last thing we've got here is what kind of artifact this is, and this is going to be a deployment.

So we've scratched out that pod and we've got a new manifest. We push it through kubectl and it hits the API server, and now it's not an ephemeral kind of object — Kubernetes needs to manage the desired state. For as long as we have that deployment and we don't delete it, Kubernetes is going to manage it. So that creates a deployment, and since we've asked for three replicas, it's always going to ensure that we've got three running. As soon as the deployment is created, it realizes something's wrong — we've only got one pod, we need two more — so it's going to schedule that application out wherever it has resources. We've still got a lot of resources — most of these worker nodes are empty — so it decides to put one in each of the different nodes. Now we've got the deployment created, and let's just say we do the same thing for our back end: we create another deployment where the application is back end, and for this one let's just scale it out two times.

All right, everyone's happy. Now we need to start thinking about communication between these services. We talked about how every pod has an IP address, but we also mentioned that some of these pods might die, and maybe you'll have to update them at some point. When a pod goes away and comes back, it actually has a different IP address. So if we want to access one of those pods from the back end, or even from external users, we need an IP address we can rely on. This is a problem that's been around for a while, and service registry and service discovery capabilities were created to solve exactly that — and they come built into Kubernetes. So what we're going to do now is create a service, to get a more stable IP address, so we can access our pods as a singular kind of app rather than as individual pods. To do that, I'm going to take a step back, and we're going to create a service definition around those three pods. We're going to need some more manifest YAML, so we'll go back and create a new section in our file. This time we've got a kind of service, and we're going to need a selector on that, which again is going to match the label we've got. The last thing we need is a type — how do we want to actually expose this? We'll get to that in a second; by default the type is going to be ClusterIP, meaning our services can be accessed from inside the cluster.

Deploying that through kubectl, it hits our master, goes over here, and creates that abstraction we talked about — and let's say it created another one for the back end as well. What we get now is a cluster IP — let's just say it ends in .5 — and then another cluster IP for our other service, which we'll say ends in .6. So now we have an IP we can use to reliably do communication between these services. In addition, the kube-dns service, which is usually running by default, will make it even easier for these services to access each other: they can just use their names. They could hit each other using the names front end and back end, or "f" and "b" for short.

So we've got that, and we've talked about how these services can now talk to each other using these cluster IPs, so communication within the cluster is kind of solved. How about when we want to start exposing our front end to our end users? To do that, we'll need to define a type on this service, and what we want is a load balancer. There are actually other ways to expose a service — like node ports — but essentially, where the cluster IPs are internal to the actual Kubernetes worker nodes, a load balancer lets us create an external IP. So let's say it's external — this might be, say, a 169 address — and now we can expose that directly to end users, so that they can access the front end directly through that service.

We talked about three major components today: pods, which are deployed and managed by deployments, and then services, which facilitate access to the pods created by those deployments. Those three major components work together with the Kubernetes master and all the worker nodes to allow you to really redefine your DevOps workflow for deploying your applications into a managed Kubernetes service. I know we talked about a lot today, but we want to get into more in-depth topics in our future lightboard videos — for example, something like deployments. So feel free to drop a comment below, leave us any feedback, definitely subscribe, and stay tuned for more lightboard videos in the future. Thank you so much for joining me today.
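
The deployment and service described in the transcript can be sketched roughly like this. The field names follow real Kubernetes conventions, but the specific names, the image `my-registry/frontend:v1`, and the port numbers are assumptions for illustration — the video deliberately shows only "not entirely valid YAML":

```yaml
# Deployment: desired state of three frontend replicas (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                        # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: frontend                  # manage any pod carrying this label
  template:                          # the pod template replacing the bare pod manifest
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-registry/frontend:v1
---
# Service: a stable IP in front of the frontend pods
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend                    # routes traffic to pods matching this label
  type: LoadBalancer                 # external IP; the default, ClusterIP, is internal-only
  ports:
    - port: 80
      targetPort: 8080               # assumed container port
```

Applied with `kubectl apply -f frontend.yaml`, the deployment maintains the desired state of three replicas, while the service gives both in-cluster clients (via the cluster IP or the DNS name `frontend`) and external users (via the load balancer's external IP) a stable address, even as individual pod IPs change.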