Understanding Kubernetes Taints and Tolerations: A Simple Guide

Kubernetes taints and tolerations are powerful tools used to control which pods run on which nodes in your cluster. If you’re a beginner, this can sound complex, but once you understand the basic concepts, it becomes quite simple.

What Are Taints and Tolerations?

  • Taints: Applied to nodes. They tell the node, “Don’t allow most pods to run here unless they can handle this condition (the taint).”

  • Tolerations: Applied to pods. They tell the pod, “I can handle the condition (the taint) on this node, so I can be scheduled here.”

Taints keep unwanted pods away from a node, while tolerations allow specific pods to bypass those taints and run on the node.

1. Taints: Making Nodes Reject Pods

When you add a taint to a node, you’re basically saying, “I don’t want most pods to run here unless they have permission.” The taint comes with three parts:

  • Key: Identifies the reason for the taint.

  • Value: Provides extra detail about the key (optional).

  • Effect: Defines the behavior for scheduling pods:

    • NoSchedule: Pods without matching tolerations will not be scheduled on this node.

    • PreferNoSchedule: The scheduler tries to avoid placing pods without matching tolerations on this node, but it may still place them there if no better node is available.

    • NoExecute: New pods without a matching toleration are not scheduled on the node, and pods already running on it that don’t tolerate the taint are evicted (removed).
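
For example, after tainting a node, the taint appears in the node object itself. Here is a trimmed sketch of what kubectl get node <your-node> -o yaml would show (the key and value below are just placeholders):

apiVersion: v1
kind: Node
metadata:
  name: your-node
spec:
  taints:
  - key: example-key
    value: example-value
    effect: NoSchedule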

2. Tolerations: Allowing Pods to Bypass Taints

Tolerations allow specific pods to ignore a taint on a node. A toleration has four parts:

  • Key: The same key as in the taint.

  • Value: The same value (if any) as in the taint.

  • Effect: Matches the effect (NoSchedule, PreferNoSchedule, or NoExecute).

  • Operator: Tells Kubernetes whether to look for an exact match (Equal) or just check if the taint exists (Exists).
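
Put together, a minimal toleration in a pod spec looks like this. With the Exists operator you omit the value, so the pod tolerates any taint with that key and effect (the key here is a placeholder):

tolerations:
- key: "example-key"
  operator: "Exists"
  effect: "NoSchedule"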

How Do Taints and Tolerations Work Together?

  • A taint repels pods from a node.

  • A toleration allows pods to ignore that taint and be scheduled anyway.


Example 1: Keep Certain Workloads Away from Specific Nodes

Imagine you have a node (node1) in your cluster that is special—it might have limited resources, or you want it to run only critical workloads. To stop regular pods from being scheduled on this node, you can add a taint.

Step 1: Add a Taint to the Node

You can taint node1 with the following command:

kubectl taint nodes node1 high-priority=true:NoSchedule

To verify that the taint was applied, run:

kubectl describe node node1 | grep Taints
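
If the taint was applied, the output should look similar to this (spacing will vary):

Taints:             high-priority=true:NoSchedule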

This tells Kubernetes: "Don't schedule any pods on node1 unless they can tolerate this high-priority=true:NoSchedule taint."

Step 2: Deploy a Regular Pod Without Toleration

Let’s try deploying a regular pod (one without any tolerations):

apiVersion: v1
kind: Pod
metadata:
  name: normal-pod
spec:
  containers:
  - name: nginx
    image: nginx

Save the manifest as normal-pod.yaml and apply it:

kubectl apply -f normal-pod.yaml

This pod won’t be scheduled on node1 because it doesn’t tolerate the high-priority=true:NoSchedule taint. Kubernetes will either place it on another node or leave it Pending, as shown below.

kubectl get po -o wide
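
If node1 is the only node that could otherwise host the pod, it stays Pending; otherwise it lands on a different node. Illustrative output (your names, ages, and IPs will differ):

NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
normal-pod   0/1     Pending   0          12s   <none>   <none>   <none>           <none>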

Step 3: Deploy a High-Priority Pod With Toleration

Now, let’s deploy a pod that can tolerate the taint:

apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  tolerations:
  - key: "high-priority"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx

This pod is allowed to run on node1 because its toleration matches the taint. Keep in mind that a toleration only permits scheduling on the tainted node; it doesn’t guarantee it. If the pod must run specifically on node1, combine the toleration with a nodeSelector or node affinity.
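
To try it out, apply the manifest and check where the pod ended up (the filename is only a suggestion):

kubectl apply -f high-priority-pod.yaml
kubectl get pod high-priority-pod -o wide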


Example 2: Soft Preferences Using PreferNoSchedule

Sometimes, you might prefer that a node doesn’t run certain pods, but you’re not strict about it. This is where PreferNoSchedule comes in.

Step 1: Add a Soft Taint to the Node

You can taint node2 with a soft taint that expresses a preference:

kubectl taint nodes node2 general-workload=false:PreferNoSchedule

This tells Kubernetes: "It’s better not to schedule general pods on node2, but you can if necessary."

Step 2: Deploy a Regular Pod

Here’s a regular pod without any toleration:

apiVersion: v1
kind: Pod
metadata:
  name: regular-pod
spec:
  containers:
  - name: nginx
    image: nginx

Kubernetes will try to avoid scheduling this pod on node2, but if no other node is suitable, it can still run there. That is the key difference between PreferNoSchedule and NoSchedule: it expresses a preference, not a strict rule.
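
To observe this yourself, apply the manifest and check which node the pod landed on (the filename is assumed):

kubectl apply -f regular-pod.yaml
kubectl get pod regular-pod -o wide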


Example 3: Evicting Pods Using NoExecute

The NoExecute effect is used when you want to remove pods from a node if they don’t tolerate its taint. This is useful when a node becomes unhealthy or is undergoing maintenance.

Step 1: Add a NoExecute Taint to the Node

Let’s say node3 is going into maintenance. You can add a taint that evicts any pods that don’t tolerate it:

kubectl taint nodes node3 maintenance=true:NoExecute

Any pods on node3 that don’t tolerate this taint will be evicted.
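
You can watch the evictions happen as the taint takes effect (the --watch flag streams pod status changes):

kubectl get pods -o wide --watch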

Step 2: Deploy a Regular Pod Without Toleration

If a regular pod like this is running on node3, it will be evicted:

apiVersion: v1
kind: Pod
metadata:
  name: regular-pod
spec:
  containers:
  - name: nginx
    image: nginx

Since the pod doesn’t tolerate the maintenance=true:NoExecute taint, it will be removed from node3.

Step 3: Deploy a Pod That Can Tolerate the Taint

Now, let’s deploy a pod that can tolerate the NoExecute taint:

apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  tolerations:
  - key: "maintenance"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
  containers:
  - name: nginx
    image: nginx

This pod will not be evicted from node3 because its toleration allows it to ignore the taint.
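
Once the maintenance is done, you can remove the taint by repeating the same command with a trailing dash:

kubectl taint nodes node3 maintenance=true:NoExecute-

A related option for NoExecute tolerations is tolerationSeconds, which lets a pod stay on the tainted node for a limited time before being evicted. A minimal sketch:

tolerations:
- key: "maintenance"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 300  # evicted about 5 minutes after the taint is added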


Conclusion: Why Use Taints and Tolerations?

  • Taints protect nodes from running unwanted workloads.

  • Tolerations let specific pods bypass these taints.

  • Use NoSchedule for strict control over scheduling.

  • Use PreferNoSchedule for soft preferences.

  • Use NoExecute to evict pods from nodes or prevent them from being scheduled.

These tools give you fine-grained control over where your workloads run, ensuring that sensitive or critical workloads are separated from general ones.


Feel free to apply these examples in your Kubernetes cluster and experiment with different scenarios. By mastering taints and tolerations, you can manage your cluster more effectively and ensure your workloads are running where they should be!