Kubernetes does not have an explicit command to stop pods like you may be familiar with from Docker or VM environments. The reasoning stems from how Kubernetes views containers as ephemeral, disposable entities. The cluster aims to reconcile towards a defined desired state rather than pause and resume long-lived pods.

However, users still often need to stop pods for various reasons like debugging stuck processes, upgrading dependencies, infrastructure changes, scaling down resource usage, and handling unexpected errors. Instead of directly stopping execution, Kubernetes relies on alternative strategies to halt pods.

Kubernetes Pod Design

A Kubernetes pod represents a group of one or more containers that share resources like volumes, a network namespace, and an IP address. Pods act as the atomic unit that Kubernetes schedules onto nodes (via kube-scheduler), health-checks, and replicates (via the controllers in kube-controller-manager).

Rather than handling containers directly, the Kubernetes control plane supervises the state of pods. Its various controllers then work to automatically reconcile current pod state towards the desired state declared in manifests or via CLI.

This abstraction enables features like self-healing, horizontal scaling, simplified networking, and design convenience. You deploy containers in pods while Kubernetes handles supervision via controllers.
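
In practice, the closest thing to "stopping" a workload is declaring a new desired state and letting the controllers converge on it. A minimal sketch, assuming a Deployment named myapp already exists:

# Scale the Deployment to zero replicas; controllers terminate all of its pods
kubectl scale deployment myapp --replicas=0

# Restore the replica count later to "resume" with brand-new pods
kubectl scale deployment myapp --replicas=3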

Deletion and Recreation

The most common way to stop Kubernetes pods is simply deleting them. This triggers the pod termination process, which tries to shut down containers gracefully when possible. Here is the typical shutdown sequence:

  1. Pod is marked for deletion; kubectl shows its status as "Terminating"
  2. Pod is removed from Service endpoints so it stops receiving new traffic
  3. preStop hooks run, if defined
  4. Containers receive the SIGTERM signal to exit gracefully
  5. The terminationGracePeriodSeconds countdown runs from the start of termination
  6. Containers still running when the grace period expires receive SIGKILL
  7. The kubelet finishes cleanup and the pod API object is removed

These preStop hooks give containers a chance to release resources or save state before the shutdown signals arrive.
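
The grace period is likewise configurable in the pod spec. A minimal sketch, with placeholder pod and image names:

apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown-pod
spec:
  # Wait up to 60 seconds (the default is 30) after SIGTERM before SIGKILL
  terminationGracePeriodSeconds: 60
  containers:
  - name: myapp
    image: myimage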

When a controller such as a Deployment manages the pod, Kubernetes notes the missing replica and spins up a replacement to reach the desired count. Persistent volumes and other external state can reattach where applicable, but recreation produces a brand-new pod, restarting any processes.

This approach works well for stateless pods that can restart from scratch. However, it can lose in-memory data or consistency for stateful workloads. The cluster does not pause and later resume the original pod.
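
To see the recreate behavior first-hand, delete one replica of a controller-managed pod and watch the replacement appear (the pod name here is illustrative):

kubectl delete pod myapp-7d4b9c6f5-x2kqp
kubectl get pods -w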

Cascading Deletion

Pods are usually dependents of higher-level owners: a Deployment owns ReplicaSets, which in turn own pods. Cascading deletion controls whether deleting an owner also removes its dependents or leaves them orphaned.

The kubectl --cascade flag selects the behavior; background is the default, and the older boolean form (--cascade=true/false) is deprecated:

# Delete the Deployment and wait for its ReplicaSets and pods to go first
kubectl delete deployment myapp --cascade=foreground

# Delete only the Deployment, leaving its pods running
kubectl delete deployment myapp --cascade=orphan

Choosing the right mode prevents unused dependents from lingering after a workload stops, while still letting you keep pods alive when you intend to.

Controlling Pod Deletion

Higher-level Kubernetes workloads like Deployments, StatefulSets, and Jobs manage underlying pods. They provide options to control pod deletion behavior:

  • RollingUpdate strategy: Progressively replaces pods rather than recreating them in simultaneous batches
  • PodDisruptionBudgets: Limit how many pods can be voluntarily disrupted at once (for example, during a node drain) to preserve availability
  • terminationGracePeriodSeconds: Time allowed for graceful shutdown before a force kill

For example, a StatefulSet covered by a PodDisruptionBudget with maxUnavailable: 1 can lose only one pod at a time to voluntary disruptions, which prevents multiple replicas from stopping simultaneously; a sketch of such a budget follows.
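
A minimal manifest for that budget, assuming the StatefulSet's pods carry the label app: myapp:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  # Allow at most one matching pod to be down from voluntary disruptions
  maxUnavailable: 1
  selector:
    matchLabels:
      app: myapp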

Kubernetes Scheduler

The Kubernetes scheduler manages allocation of pods to nodes. When pods stop via deletion, the scheduler updates its state:

  • Removed pods free up their requested resources like CPU and memory
  • The scheduler factors the freed capacity into future placement decisions (it does not move already-running pods)
  • Pending and new pods can then schedule onto the reclaimed capacity

Carefully engineered pod disruption budgets help avoid resource starvation and maintain cluster stability during pod deletions.
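
One way to observe this is to compare a node's requested resources before and after deleting pods; the node name is a placeholder:

# Show requested vs. allocatable CPU and memory on a node
kubectl describe node my-node | grep -A 8 "Allocated resources"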

Pod Lifecycle Hooks

Containers can use lifecycle hooks to run code at fixed points in their lifecycle: just after startup (postStart) and just before termination (preStop). The preStop hook, which runs before the container receives SIGTERM, provides another option to prepare for stopping.

Hook code can run cleanup tasks like saving state, finishing requests, or exiting cleanly. A separate "stop" command is not necessary since pods handle their own stopping via hooks.

Here is an example preStop hook in a Kubernetes pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:  
  - name: myapp
    image: myimage
    lifecycle:
      preStop:
        exec:  
          command: ["/bin/sh", "-c", "sleep 5"]

This pause buys the container time to finish in-flight work before the SIGTERM signal is sent. Note that the hook runs within the termination grace period, so keep it shorter than terminationGracePeriodSeconds.

Pod Selection

Workloads like Deployments match pods using label selectors and pod templates. In apps/v1 the selector itself is immutable after creation, so the practical version of this technique is to change the labels on a pod so the selector no longer matches it.

For instance, suppose a Deployment selects pods like this:

# Deployment selector
selector:
  matchLabels:
    app: myapp
    version: v1

Overwriting the version label on one of its pods detaches it:

# Relabel the pod so the Deployment's selector no longer matches it
kubectl label pod mypod version=debug --overwrite

The controller orphans the relabeled pod and spawns a replacement to satisfy the desired replica count. If the Service selector also stops matching, the old pod no longer receives traffic, effectively taking it out of service without explicitly deleting it.

The orphaned pod keeps running outside the workload's management, which is useful for live debugging. It is not garbage collected automatically, so delete it manually once you are done with it.
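
To verify which pods remain under management, listing labels helps; the pod name here is illustrative:

# Show labels so you can see which pods still match the selector
kubectl get pods --show-labels

# Remove the quarantined pod once debugging is finished
kubectl delete pod mypod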

Kubectl Deletion Commands

Despite alternatives, directly requesting pod deletion remains a common action. Kubectl supports different options to customize deletion behavior:

# Delete a single pod
kubectl delete pod mypod

# Delete an owner object and its dependents (background cascading is the default)
kubectl delete deployment myapp --cascade=foreground

# Graceful shutdown period in seconds
kubectl delete pod mypod --grace-period=60

# Delete an owner object while orphaning its dependents
kubectl delete deployment myapp --cascade=orphan

# Batch delete pods by labels
kubectl delete pods -l app=myapp

# Watch deletions in real time
kubectl get pods -w

The grace period controls how long Kubernetes waits before force-killing container processes, while the cascade mode controls what happens to an owner's dependents. These parameters allow safer, more controlled pod stopping.

Pod Stopping Patterns

Several common application patterns rely on stopping old pods after starting new versions:

Blue-Green Deployments: Version 2 pods provision while Version 1 handles traffic, followed by a swap

Canary Releases: New pod incrementally takes traffic share from old pod over time

In-Place Upgrades: StatefulSets update pods sequentially while maintaining quorum

These patterns use orchestration to manage pod lifecycles, calling kubectl delete only after readiness checks pass.
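
As an illustration of the blue-green swap, switching a Service's selector moves traffic between versions in one step; the service name and labels here are assumptions:

# Repoint the Service at v2 pods; v1 pods keep running until deleted
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"v2"}}}'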

Conclusion

Kubernetes opts to have controllers recreate pods rather than pause and resume them. Most of the time, simply deleting pods accomplishes the goal of stopping them. Additional options like disruption budgets, lifecycle hooks, and pod relabeling allow more fine-grained control over stopping behavior.

Following Kubernetes principles, rather than directly stopping pods, you update the desired state and let the controllers implement the changes. This keeps Kubernetes' advantages around self-management, scaling, and availability.
