Introduction: Kubernetes Is Just Another Platform
I first want to start by explaining that Kubernetes is just another platform. When we look at virtualization, we know that the underpinning hardware will consist of compute, storage, and networking. This is the same pattern we see within the public cloud and even SaaS-based workloads like Microsoft 365.
Kubernetes is no different, since we can run Kubernetes clusters in many different forms. This could be on bare metal, on virtual machines (VMs), or via a managed Kubernetes service like Amazon EKS.
Container Orchestration With Kubernetes
Let’s now dive into Kubernetes as a container orchestration engine. Kubernetes is not alone here, since there are other container orchestration engines available, including Docker Swarm and HashiCorp Nomad. However, Kubernetes has become the front-runner when it comes to orchestrating containers.
Comparison of Kubernetes and vSphere in VM Orchestration
If we look at virtualization, and in particular vSphere, we can see that vCenter can be likened to a VM orchestrator. It helps balance our VMs across all our ESXi virtualization hosts, and if one of those hosts were to fail, the VMs that resided on it would be brought back up on another host, minimizing downtime for the VMs and the applications they run.
From VMs to Containers: A Shift in Orchestration
In Kubernetes, this process is very similar, except that here we’re not orchestrating VMs; we’re orchestrating containers and the services associated with them. Instead of moving containers between hosts, we simply spin up new ones to keep the application running in its desired state.
Another thing to note here is that, in the virtualization world, we can choose to run a single ESXi host and run VMs within that. The biggest issue then is the lack of high availability for those VMs if the host were to fail. We can do something similar when it comes to containers. We may choose to run a VM that runs Docker, for example, with our application running in a container. If this VM were to fail, then the application would also fail, which is why we need orchestration engines to provide high availability and resilience against failure.
Kubernetes Building Blocks
In a VM, we might have one or several applications that are all installed on the same VM. Now, within an application, there are likely several services that enable the application to work for an end user or customer.
Benefits of Containerization and Service Scaling
In a containerized environment, our application will typically be split up into multiple container images, which can be likened to the services that make up our application. One of the key benefits of containerization is that we can scale individual services however and whenever we need to (e.g., to deal with demand during busy periods).
Understanding Pods in Kubernetes
When I talk about these services, think of them in simplistic terms as a front-end management web interface with several back-end services like authentication, catalogue and stock. These then store their data in a data service, which is likely where you will find a database similar to what you have seen before, such as MySQL or Postgres (so much choice when it comes to databases). With VMs, all these services would most likely live on the same VM. In a Kubernetes environment, these parts of our application will be packaged as container images and run inside pods.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A pod is a group of one or more containers with shared storage and network resources, plus a specification for how to run the containers. Within our specific application, each of the services mentioned will probably have at least one pod.
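As an illustration of what one of these pods looks like in practice, here is a minimal Pod manifest for the front-end web service from the example above (the names and the image are hypothetical stand-ins, not part of any real application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend          # hypothetical name for the web front end
  labels:
    app: shop             # illustrative labels tying the pod to our example app
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.25   # stand-in image for the real front-end container
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` would give us a single running pod, but on its own it has no resilience; that is where the workload resources below come in.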
We can wrap these pods and further extend the capabilities of our application with workload resources like Deployments, ReplicaSets, StatefulSets and DaemonSets. Each of these provides a desired state in which the application can run.
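To sketch the desired-state idea, here is a Deployment (again using hypothetical names and a stand-in image) that asks Kubernetes to keep three replicas of our front-end pod running at all times:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                 # desired state: three pods, always
  selector:
    matchLabels:
      app: shop
      tier: frontend
  template:                   # the pod template the Deployment stamps out
    metadata:
      labels:
        app: shop
        tier: frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25   # stand-in for the real front-end image
          ports:
            - containerPort: 80
```

If a pod (or the worker node it runs on) fails, the Deployment’s ReplicaSet notices the divergence from the desired state and spins up a replacement, which is exactly the behavior described above. Scaling a busy service is then a one-line change, for example `kubectl scale deployment frontend --replicas=5`.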
Exploring Kubernetes Cluster Architecture
As I said before, we can consider container orchestration to be very similar to VM orchestration at the end of the day since both platforms consist of compute, storage and networking.
Nomenclature Differences: Virtualization vs. Kubernetes
In the Kubernetes world, we have different nomenclature for some of the similar architectures that we see in virtualization.
In virtualization, we use the term “hosts”. This is where we run our VMs. We can have many hosts making up our cluster, scaling our environment accordingly. In the Kubernetes world, we have the concept of the “control plane” and “worker nodes” (there are other roles, but for a basic 101 this is where we start). Pods are then deployed to run on these worker nodes, and we can have as many worker nodes as we want to build up our cluster (again, there will be some limitations, but for now let’s use this concept of infinite scalability).
From a management or scheduling perspective in virtualization, and specifically VMware vSphere, we have vCenter. vCenter watches over our hosts and ensures that the VMs are spread out efficiently across our cluster. Plus, in the event of a failure, it is vCenter that triggers the recovery of those VMs onto a different host. Following best practices, vCenter would not reside on the same cluster it manages.
In Kubernetes, we take another node that doesn’t run any of our pod workloads. Rather, it is there as part of the cluster to make sure that everything is healthy and to schedule our pods onto the most suitable worker node in the cluster. This control plane acts much like vCenter and can be made highly available by running multiple instances of it.
In Conclusion: Kubernetes Is Just Another Platform
We have only touched the surface of the similarities between virtualization and Kubernetes, but the biggest takeaway is that Kubernetes is just another platform, in the same way virtualization was when it first appeared alongside physical servers, and the cloud was when the large hyperscalers came along.