How Kubernetes Schedules Pods and Maintains Desired State

Introduction

Kubernetes has a sophisticated architecture that keeps your applications running smoothly by maintaining the desired state and scheduling pods efficiently. Let’s walk through how a pod is scheduled and created, and explore how Kubernetes keeps your cluster operating according to the configuration you specify.

Scheduling Pods on a Node

When a new pod needs to be created, the following steps occur (a short code sketch follows the list):

  1. Pod Creation Request: A request to create a new pod is made, often through a kubectl command or an API call to the Kubernetes API server.

  2. API Server as the Middleman: The API server receives the pod creation request and records this desired state in etcd, the cluster's data store.

  3. Scheduler’s Role: The scheduler watches the API server for pods that have not yet been assigned to a node. For each of these, it evaluates the cluster’s nodes, considering factors like available resources, taints, tolerations, and other scheduling policies.

  4. Node Assignment: Based on its evaluation, the scheduler assigns the pod to the most suitable node and records the decision by writing the chosen node name back to the pod object through the API server.

  5. Node and Kubelet Action: The kubelet on the chosen node, which watches the API server for pods bound to it, picks up the assignment and instructs the container runtime to pull the necessary container images and start the containers within the pod.

  6. Pod Startup: Once the containers are running, the kubelet updates the API server with the pod’s status, confirming that the pod is now operational on the node.
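
This flow can be observed directly through the API. Below is a minimal sketch using the official Kubernetes Python client (pip install kubernetes); the pod name demo-pod, the nginx image, and the default namespace are illustrative assumptions, and a reachable cluster with a local kubeconfig is assumed.

```python
# Minimal sketch using the official Python client ("pip install kubernetes").
from kubernetes import client, config, watch

config.load_kube_config()              # credentials from the local kubeconfig
v1 = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-pod"},                      # illustrative name
    "spec": {"containers": [{"name": "web", "image": "nginx:1.27"}]},
}

# Steps 1-2: the creation request goes to the API server, which records it in etcd.
v1.create_namespaced_pod(namespace="default", body=pod)

# Steps 3-6: watch the pod object; the scheduler fills in spec.nodeName,
# then the kubelet reports the phase once the containers are running.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      field_selector="metadata.name=demo-pod",
                      timeout_seconds=120):
    p = event["object"]
    print(f"node={p.spec.node_name}  phase={p.status.phase}")
    if p.status.phase == "Running":
        w.stop()
```

Early events typically show the pod Pending with no node assigned; once the scheduler writes the binding, spec.nodeName is filled in, and the kubelet eventually reports the Running phase.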

Creating a New Pod

When a new pod is required, the process involves multiple components working together seamlessly (a short sketch follows the list):

  1. Desired State Declaration: Users or automation tools declare the desired state of the application, which includes creating new pods through Deployment manifests or ReplicaSets (the modern successor to replication controllers).

  2. API Server Interaction: The desired state is sent to the API server, which stores this information in etcd.

  3. Controller Manager’s Task: The controller manager detects discrepancies between the current state and the desired state in the cluster. If fewer pods are running than specified, it triggers the creation of new pods.

  4. Scheduler’s Intervention: The scheduler then assigns these new pods to appropriate nodes based on resource requirements and other constraints.

  5. Kubelet and Container Runtime: The kubelet on the respective nodes receives instructions to run the containers of the new pods, ensuring they are up and running as expected.
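
As a concrete illustration of the first few steps, here is a minimal sketch that declares a desired state of three replicas through a Deployment, again using the Python client; the name demo-web and the nginx image are illustrative assumptions, and the same cluster/kubeconfig assumptions apply.

```python
# Minimal sketch of declaring desired state with a Deployment (Python client,
# illustrative names; assumes a reachable cluster and local kubeconfig).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-web"},
    "spec": {
        "replicas": 3,                                   # the desired state
        "selector": {"matchLabels": {"app": "demo-web"}},
        "template": {
            "metadata": {"labels": {"app": "demo-web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.27"}]},
        },
    },
}

# The API server stores this desired state in etcd; the Deployment and
# ReplicaSet controllers then create the pods, and the scheduler places them.
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Compare desired vs. observed replicas while the controllers reconcile.
status = apps.read_namespaced_deployment("demo-web", "default").status
print(f"desired=3  ready={status.ready_replicas or 0}")
```

Right after creation, ready_replicas is usually 0; re-reading the status a few seconds later shows it converging to the declared count as the controllers, scheduler, and kubelets do their work.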

Maintaining the Desired State

Kubernetes maintains the desired state of the cluster through continuous reconciliation (a short sketch of observing this follows the list):

  1. Reconciliation Loop: Kubernetes controllers, including the ReplicaSet and Deployment controllers, constantly compare the cluster’s current state against the desired state defined in the manifests.

  2. Detecting and Correcting Deviations: When a deviation is detected—such as a pod failure or a need to scale up or down—these controllers take corrective actions. This could involve restarting failed pods, creating new ones, or terminating excess pods.

  3. Updating the API Server: The controllers report any changes to the API server, which persists them in etcd, so the recorded state stays accurate and the cluster keeps converging toward the desired state.
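
One way to watch the reconciliation loop in action is to delete a pod that belongs to a Deployment and observe the replacement appear. The sketch below assumes the hypothetical demo-web Deployment from the previous example and the same Python client setup.

```python
# Minimal sketch of watching reconciliation: delete one pod of the hypothetical
# "demo-web" Deployment and watch the controller create a replacement.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("default", label_selector="app=demo-web")
existing = {p.metadata.name for p in pods.items}
victim = pods.items[0].metadata.name

# Simulate a failure: the actual state now deviates from the desired state.
v1.delete_namespaced_pod(victim, "default")

# The ReplicaSet controller notices the missing replica and creates a new pod.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      label_selector="app=demo-web", timeout_seconds=120):
    pod = event["object"]
    print(f"{event['type']:8s} {pod.metadata.name}  phase={pod.status.phase}")
    if pod.metadata.name not in existing:
        w.stop()          # a replacement pod has appeared
```

The deleted pod is not restarted; the ReplicaSet controller simply notices that the observed replica count no longer matches the desired count and creates a fresh pod, which the scheduler then places on a node.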

Conclusion

Understanding the pod scheduling process and how Kubernetes maintains the desired state is crucial for effectively managing your applications. Kubernetes’s robust architecture continuously drives your applications toward the desired state, providing a resilient and efficient environment for containerized workloads.