Atlas: Pods Pending (Scheduling)
Symptom → evidence → resolution.
Symptom
Pods stay in Pending; events mention failed scheduling or insufficient resources.
Tags: Scheduling, Operations, Reliability
What this usually means
The scheduler cannot place the pod on any eligible node given the current constraints. The correct fix is to change the constraint or add capacity—not to re-apply YAML repeatedly.
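The event attached to the pod names the blocking constraint, so pull it directly. A minimal sketch, assuming `<pod>` and `<ns>` are placeholders for your pod and namespace:

```shell
# Show only the scheduler's FailedScheduling events for this pod
kubectl -n <ns> get events \
  --field-selector involvedObject.name=<pod>,reason=FailedScheduling \
  --sort-by=.lastTimestamp
```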
Likely causes
Pending is usually a constraint: capacity, policy, or topology. Quick checks for each follow the list.
- Insufficient CPU/memory/ephemeral storage on eligible nodes.
- Taints without tolerations; node selectors that match no nodes.
- Affinity/anti-affinity or topology spread constraints that are impossible to satisfy.
- Quota limits or limit ranges preventing admission/creation.
- Priority/preemption rules keeping lower-priority pods waiting.
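One-line checks for the causes above. A sketch, assuming `<ns>` is your namespace and the standard `kubectl describe nodes` output layout:

```shell
# Capacity: requests already allocated vs. allocatable, per node
kubectl describe nodes | grep -A 8 "Allocated resources"

# Taints the pod would need to tolerate
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

# Quotas and limit ranges that can block admission
kubectl get resourcequota,limitrange -n <ns>

# Priority classes in play
kubectl get priorityclasses
```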
What to inspect first
The scheduler tells the truth in events. Read them first.
- Look for: `0/… nodes are available`, `Insufficient`, `didn't match node selector`, `taint`, `preemption`.
- Confirm the pod’s requests are honest; requests drive placement.
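To see the requests the scheduler actually uses, print them per container. A sketch, with `<pod>` and `<ns>` as placeholders:

```shell
# Requests drive placement; verify them per container
kubectl -n <ns> get pod <pod> \
  -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}'
```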
General `kubectl` triage:

```shell
kubectl describe pod <pod> -n <ns>
kubectl get nodes -o wide
kubectl get events -n <ns> --sort-by=.lastTimestamp | tail -n 40
```

Resolution guidance
Fix the constraint, not the symptom; two sketches follow the list.
- Add capacity (node pool / autoscaler) or reduce requests if they were padded.
- Correct selectors/affinity/taints; avoid impossible constraints.
- Define priority classes deliberately when scarcity is expected.
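Two illustrative remedies, hedged: `<ns>`, `<deploy>`, the container index, and all values are placeholders, and the right fix depends on which constraint the events named.

```shell
# Lower a padded CPU request (JSON patch; container index 0 assumed)
kubectl -n <ns> patch deployment <deploy> --type=json -p='[
  {"op":"replace","path":"/spec/template/spec/containers/0/resources/requests/cpu","value":"250m"}
]'

# Define an explicit priority class for expected scarcity (name/value illustrative)
kubectl create priorityclass batch-low --value=1000 \
  --description="Preemptible batch work; schedules only when capacity allows"
```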
Canonical link
Canonical URL: /atlas/pods-pending-scheduling