Deconstructing Kubernetes Scheduling Mechanisms
Think of Kubernetes as a super-organized logistics manager. Its job? To ensure every Pod finds the perfect node to live on, based on its requirements, preferences, and constraints.
But Kubernetes isn’t just about making sure the Pod “finds a place.” It uses sophisticated scheduling mechanisms like affinity, anti-affinity, taints, tolerations, and even direct assignments with nodeName.
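To make these mechanisms concrete, here is a minimal sketch of a Pod spec that uses two of them: nodeName for direct assignment and a toleration for a taint. All names (the node, the taint key/value) are illustrative, not from any real cluster.

```yaml
# Hypothetical Pod spec: pin the Pod to a specific node with nodeName
# and tolerate a taint on that node (all names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeName: worker-1          # direct assignment: bypasses the scheduler
  tolerations:
  - key: "dedicated"          # matches a hypothetical taint dedicated=demo:NoSchedule
    operator: "Equal"
    value: "demo"
    effect: "NoSchedule"      # allows the Pod onto a node carrying that taint
  containers:
  - name: app
    image: nginx:1.25
```

Note that nodeName skips scheduling entirely, while a toleration only permits placement on a tainted node; affinity and anti-affinity rules, by contrast, express preferences and constraints that the scheduler weighs.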
Let’s dive into how Kubernetes works its scheduling magic. But before we do, let’s get a brief idea of how the scheduler decides which node an application should be deployed onto.
Deconstructing a Kubernetes Deployment
Think back to the first time you laid eyes on a Kubernetes deployment manifest. Did it make any sense to you apart from the image and container parameters? Wait, it did? Well, that makes one of us!
When I first saw a Kubernetes deployment, I was hit with a flurry of questions. Questions that made me feel like I had opened the Matrix. Now, after some much-needed experience (and a few existential crises), I think I can finally answer some of those burning questions.
Just Enough Kubernetes: Architecture
TL;DR
Kubernetes is a container orchestration tool that lets you combine multiple machines or VMs into a cluster. Once the cluster is created, applications deployed on it are distributed across the nodes, and Kubernetes makes sure they stay up and available according to the configuration you provide.
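The TL;DR above can be sketched as a manifest. This is a hypothetical minimal Deployment: the scheduler spreads its three replicas across the cluster's nodes, and Kubernetes restarts or replaces any replica that goes down to match the declared configuration.

```yaml
# Hypothetical minimal Deployment: 3 replicas that Kubernetes
# distributes across nodes and keeps available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired number of Pods; Kubernetes reconciles toward this
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```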
Architecture
Node: A physical machine or VM where our applications run when deployed.
Cluster: A group of nodes working together. It is best practice to run multiple nodes at the same time so the cluster keeps serving even if any single node stops working.