

Docker vs Kubernetes: Scaling Simplified

Key Points

  • Sai Vennam explains that the common belief that you must pick either Docker or Kubernetes is a misconception: Kubernetes can orchestrate the Docker containers you already use while handling the added complexity of scaling.
  • He illustrates a typical cloud‑native stack (React/Node front‑end, Java for database access, Python/Flask for external APIs) and walks through a pure‑Docker deployment workflow: Ubuntu host → Docker daemon → `docker build`, `docker push`, SSH, and `docker run`/Compose.
  • While Docker makes a single‑instance deployment easy, manually replicating containers and provisioning new hardware quickly becomes fragile as traffic grows, new micro‑services are added, and operational consistency is required.
  • Kubernetes addresses these pain points by providing automated scaling, service discovery, and orchestration, allowing teams to extend the same Docker‑based workloads without the manual scripting and hardware‑management overhead.

Full Transcript

**Source:** [https://www.youtube.com/watch?v=2vMEQ5zs1ko](https://www.youtube.com/watch?v=2vMEQ5zs1ko)
**Duration:** 00:08:03

Sections

  • [00:00:00](https://www.youtube.com/watch?v=2vMEQ5zs1ko&t=0s) **Docker vs Kubernetes Explained** - IBM developer advocate Sai Vennam clarifies the misconception that Docker and Kubernetes are mutually exclusive, showing how Kubernetes can orchestrate existing Docker containers for a multi-service cloud-native app while outlining the basic Docker-only deployment stack.
Hi everyone, my name is Sai Vennam and I'm a developer advocate with IBM. Here at IBM we're always enabling developers to use the latest and greatest technologies when developing their applications, but a question I almost always seem to run into is whether you should use Docker versus Kubernetes. I think there's a small misconception out there that you have to be using one or the other, but the fact is that Kubernetes allows you to use your existing Docker containers and workloads while tackling some of the complexity issues you run into when moving to scale.

To better answer this question, let's start with a simple cloud-native application sketched out up here. Let's say that the front end of this application is something we wrote with React, backed by Node.js. For database access I'm a fan of using Java, so we'll say Java up here. And for accessing external APIs, maybe we use Python, perhaps a Flask application that allows us to serve REST endpoints.

Now, putting on my hat as a Docker ops engineer using a purely Docker approach to deploying an application, let's take this app and move over to a sample server stack that we have sketched out over here. On every server stack you're going to have the basics: we'll have the hardware, we'll have the OS, which is generally going to be Ubuntu when you're working with Docker, and we'll have the Docker daemon installed on top of that OS, which is what allows us to spin up containers. Docker actually provides a number of great tools for working with our containerized applications. Once we take these applications and create neat Docker containers out of them, we'll do `docker build`, `docker push` up to a registry, and then SSH into our stack and spin up our containers with `docker run` commands, or even with Docker Compose.
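As a rough sketch of what that Compose step might look like for this three-service stack (the image names, registry, and ports here are illustrative assumptions, not taken from the video):

```yaml
# Hypothetical docker-compose.yml for the three-service stack described above.
# Image names, registry, and ports are illustrative assumptions.
services:
  frontend:            # React app served by Node.js
    image: registry.example.com/myapp/frontend:1.0
    ports:
      - "80:3000"
  db-access:           # Java service that talks to the database
    image: registry.example.com/myapp/db-access:1.0
  external-api:        # Python/Flask service exposing REST endpoints
    image: registry.example.com/myapp/external-api:1.0
```

With a file like this on the server, one `docker compose up -d` replaces the individual `docker run` commands for all three services.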
Let's take a look at what that would look like. We've got our JS app, we've got our Java app, as well as the Python app. And let's go ahead and scale out these individual pieces as well to take advantage of all the resources we have. We can do this as many times as we want, but let's assume that we scale them out twice for now to make effective use of all the resources we have available.

Using Docker and the tools that Docker makes available, a simple deployment is very easy. But let's imagine that our application starts to get a lot more load: a lot more people are hitting it, and we realize we need to scale out to provide a better user experience. As an ops engineer, my first instinct might be: I've already got scripts to make this stack, so let's simply get new hardware and do that exact same deployment multiple times. This can fall apart for many reasons when you start moving to scale. For example, what if your dev team has to create a new microservice to support a new requirement? Where do we piece that in, especially if you're already making effective use of the hardware? The ops engineer would have to figure that out. In addition, a big advantage of microservice-based applications is being able to scale individual components independently, so that's another thing the ops engineer would have to write scripts for: finding the most effective way to scale things out in response to load, to identify and address user-experience issues when moving to scale. This is where an orchestration tool comes in, something like Kubernetes, which allows you to keep using your existing Dockerized applications but orchestrates them and makes more effective use of your servers and space.
What we have sketched out down here is a number of boxes which represent server stacks, but in Kubernetes land we call them worker nodes. We're going to have Kubernetes installed on every single one of these worker nodes, and the main one is going to be the master node, whereas the other ones are workers. The master node is connected to all the worker nodes and decides where to host our applications (our Docker containers), how to piece them together, and even manages orchestrating them: starting, stopping, updates, that kind of thing.

I'd say there are three major advantages that Kubernetes provides that I want to walk through: deployment, making development easier, and providing monitoring tools. The first, as expected, is deployment. Coming back to our application architecture, let's say we want to deploy that React app eight times, so we'll say we want eight instances, and let's say we expect each of them to consume about 128 megabytes. We can actually specify some other parameters in there as well, policies like when to restart, that kind of thing. When we box that up, what we get is a Kubernetes deployment. A Kubernetes deployment is not a one-time thing; it's something that grows and lives and breathes with the application and our full stack. For example, if the React app happens to crash, Kubernetes will automatically restart it to get back to the state we identified when we first created that deployment. The deployment is always growing and always living with our application, so I think we can effectively say that Kubernetes has made deployment, in addition to scaling, easier. Let's talk about development.
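The deployment just described (eight instances of the React app at about 128 MB each) maps onto a Kubernetes Deployment manifest roughly like the following sketch; the names and image are illustrative assumptions:

```yaml
# Hypothetical Deployment for the React front end: 8 replicas,
# ~128 MB of memory requested per instance. Names and image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 8                    # "we want eight instances"
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: registry.example.com/myapp/frontend:1.0
          resources:
            requests:
              memory: "128Mi"    # expected consumption per instance
```

If one of these instances crashes, the Deployment controller recreates it to return to the declared state, which is the self-healing behavior described above.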
You might be wondering: once we've created the deployments for each of these individual services and scaled all of them out, we have lots of different microservices out there with different endpoints. For example, if our front end needs to hit the database, there might be maybe eight different instances of that Java app that talk to that database, and we have to talk to one of them to get our request fulfilled. What Kubernetes does is deploy load balancers for all of the microservices that we scaled out and, in addition, take advantage of service registry and discovery capabilities to allow our applications to talk to each other, using something called a Kubernetes service. For each of these, Kubernetes will also create a service, which we can simply label service A, B, and C. Obviously you can have more meaningful names for those as well, but very simply, these applications can now speak to each other just by using those service names that are laid out in Kubernetes. So essentially I could say that Kubernetes has made development easier.
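The "service A, B, C" idea can be sketched as a Kubernetes Service manifest like the one below; other Pods then reach the Java instances through the service's DNS name, without knowing which replica answers. All names and ports here are illustrative assumptions:

```yaml
# Hypothetical Service that load-balances across the Java db-access Pods.
# Any Pod in the cluster can reach them at http://service-b:8080.
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: db-access      # matches the labels on the Java Deployment's Pods
  ports:
    - port: 8080        # port the Service exposes
      targetPort: 8080  # port the Java containers listen on
```

Cluster DNS resolves `service-b` to this Service, which is the service-discovery capability described above.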
The last thing I want to touch on is monitoring. Kubernetes has a lot of built-in capabilities that let you see logs and CPU load, all in a neat UI, but sometimes there's more you want to see about your application, and the open-source community has developed a number of amazing tools to give you introspection into your running application. The main one I'm thinking of right now is Istio, and although that's a bit more of an advanced topic, we'll likely hit it in a future whiteboarding session.

So, back to our main topic: using Kubernetes versus Docker. It's definitely not a choice of one or the other; Kubernetes allows you to take advantage of your existing Docker workloads, run them at scale, and tackle real complexities. Kubernetes is great to get started with even if you're making a small app, if you anticipate that one day you'll have to move to scale. If you're already taking advantage of Docker and containers in your applications, moving them onto Kubernetes can really help you tackle some of the operations overhead that almost every application runs into when moving to scale. Thank you for joining me today, I hope you found this useful, and definitely stay tuned for additional whiteboarding sessions in the future.