Atlas: PVC Pending (Storage)
Symptom
A PersistentVolumeClaim stays Pending; pods that require it may be stuck in Pending or fail to start.
Tags: Storage, Operations, Reliability
What this usually means
The cluster cannot satisfy the claim: there is no matching PV, dynamic provisioning is misconfigured, or constraints (access modes/topology) make the request impossible.
Likely causes
Storage failures usually come down to configuration or controller health.
- No default StorageClass and the PVC does not specify one.
- Dynamic provisioner/controller is not installed or unhealthy.
- Access mode mismatch (ReadWriteOnce vs ReadWriteMany) for the chosen backend.
- Topology/zone constraints: volume can’t be provisioned where the pod must run.
- Quota or backend limits prevent provisioning.
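A quick way to rule out an unhealthy provisioner is to look at the CSI controller pods directly. A minimal sketch; the `kube-system` namespace, the `csi` name fragment, and the `<csi-controller>` deployment name are assumptions, since drivers install under varying names and namespaces:

```shell
# Look for CSI controller/driver pods (namespace and naming vary by driver)
kubectl get pods -n kube-system -o wide | grep -i csi

# Tail the provisioner sidecar's logs for errors (deployment name is driver-specific)
kubectl -n kube-system logs deploy/<csi-controller> -c csi-provisioner --tail=50
```

If the controller pods are missing or crash-looping, fix that before touching the PVC itself.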
What to inspect first
PVC events and the StorageClass configuration usually name the blocker directly.
- Read events at the bottom of `describe pvc`.
- Confirm the StorageClass and provisioner.
```shell
kubectl get pvc -n <ns> -o wide
kubectl describe pvc <pvc> -n <ns>
kubectl get storageclass
kubectl get pv -o wide
```
Resolution guidance
Fix the provisioner or the request. Then re-check binding.
- Set a default StorageClass or specify `storageClassName` explicitly in the PVC.
- Restore the CSI controller/driver and check its logs in kube-system (or the driver namespace).
- Align access modes with the backend’s capabilities.
- If topology is the issue, ensure node pools and volume provisioning align on zones.
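When the root cause is a missing default StorageClass, marking an existing class as the default is often the quickest fix. A minimal sketch, assuming a class named `standard` exists in your cluster (substitute your own):

```shell
# Annotate an existing StorageClass as the cluster default
kubectl patch storageclass standard -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Verify: the class should now be listed with "(default)"
kubectl get storageclass
```

Only one StorageClass should carry this annotation; PVCs that omit `storageClassName` will then bind through it.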
Related
Canonical link
Canonical URL: /atlas/pvc-pending