Lab · Intermediate
Lab: Pending Pods (Scheduling Constraints)
Practice reading scheduler testimony. Pending is placement failure—solve the constraint, not the symptom.
Prerequisites
What you should have before you begin.
- A cluster and namespace
- kubectl installed
- Basic understanding of requests/limits
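If the requests/limits prerequisite is shaky, this is the shape they take in a pod spec (names, image, and values here are illustrative, not from the lab):

```yaml
# Illustrative pod spec: the scheduler places the pod based on requests;
# limits cap what the container may consume once it is running.
apiVersion: v1
kind: Pod
metadata:
  name: demo                # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.27     # any image works for the illustration
      resources:
        requests:
          cpu: "250m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```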
Lab text
Follow the sequence. Change one thing at a time.
Goal
You will learn to diagnose Pending pods by reading events and the pod spec, then applying a smallest-safe fix (reduce requests, adjust constraints, or add capacity).
- Read scheduler events.
- Identify the blocking constraint.
- Change one governing input and verify placement.
Scenario
A pod is created and stays Pending. The team keeps re-applying the manifest. Nothing changes.
Your job is to prove why it cannot be scheduled.
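A manifest that reproduces the symptom might look like this (hypothetical values; the point is a CPU request no single node can satisfy):

```yaml
# Hypothetical reproduction: if no node has 64 allocatable CPUs, this pod
# stays Pending no matter how many times the manifest is re-applied.
apiVersion: v1
kind: Pod
metadata:
  name: pending-demo        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.27
      resources:
        requests:
          cpu: "64"
```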
Read the scheduler’s message
The scheduler emits a precise diagnosis. Find it and interpret it.
- Look for: Insufficient cpu/memory, untolerated taints, node selector or affinity mismatches, and preemption notes.
```shell
kubectl describe pod <pod> -n <ns>
kubectl get events -n <ns> --sort-by=.lastTimestamp | tail -n 40
```

Inspect the governing constraints
Placement is governed by requests and constraints.
- Confirm requests are honest.
- Confirm selectors/taints/affinity match real nodes.
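The second check can be made concrete. A sketch of the pod-side fields, assuming (hypothetically) a node labeled `disktype=ssd` that also carries a `dedicated=batch:NoSchedule` taint:

```yaml
# Sketch: these fields only schedule if a real node carries the matching
# label, and the toleration matches the node's actual taint.
spec:
  nodeSelector:
    disktype: ssd               # assumed node label
  tolerations:
    - key: "dedicated"          # assumed node taint key
      operator: "Equal"
      value: "batch"
      effect: "NoSchedule"
```

If no node satisfies all of these at once, the pod cannot be placed; that is the "impossible affinity" failure mode.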
```shell
kubectl get pod <pod> -n <ns> -o yaml | rg -n "resources:|requests:|limits:|nodeSelector:|tolerations:|affinity:|topologySpreadConstraints"
```

```shell
kubectl get nodes -o wide
```

Resolution patterns
Prefer the fix that matches your intent and preserves safety.
- If requests were padded, reduce them and document why.
- If taints/selectors are wrong, correct them; avoid impossible affinity.
- If the cluster is truly out of capacity, add nodes or enable autoscaling with guardrails.
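For the first pattern, the fix is a small, documented edit to the container's resources block. A sketch with placeholder values; substitute your own measurements:

```yaml
# Reduced, documented requests (placeholder numbers): right-size to what
# the workload actually uses, and record the old values for review.
resources:
  requests:
    cpu: "250m"      # was "2" — padded well beyond observed usage
    memory: "384Mi"  # was "4Gi"
```

After applying the change, confirm the pod leaves Pending and is bound to a node before touching anything else.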
Canonical link
Canonical URL: /labs/pending-scheduling-constraints