Atlas: Ingress Returns 502/503
Symptom → evidence → resolution.
Symptom
Requests through ingress return 502 Bad Gateway or 503 Service Unavailable; direct pod/service access may or may not work.
What this usually means
A controller accepted the request but could not route it to a healthy upstream. The root cause is usually missing endpoints, failing readiness probes, or controller timeouts shorter than the backend's real latency.
Likely causes
Treat ingress failures as a chain: controller → route → service → endpoints → pods.
- Ingress points at the wrong service name/port.
- Service has no endpoints (selector mismatch, pods not Ready).
- Backend returns errors under load (crash loops, resource starvation).
- Controller timeouts too small for backend latency.
- TLS/SNI mismatch or host routing mismatch (often shows as 404, but can present as upstream errors depending on controller).
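To rule out a selector mismatch quickly, compare the Service's selector against the labels on the pods it is supposed to match. A minimal sketch, using the same `<svc>`/`<ns>` placeholder convention as the commands below:

```shell
# Print the Service selector, then list pod labels in the same namespace.
# Every key/value in the selector must appear on the intended pods,
# and those pods must be Ready, or the endpoints list stays empty.
kubectl get svc <svc> -n <ns> -o jsonpath='{.spec.selector}'
kubectl get pods -n <ns> --show-labels
```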
What to inspect first
Use controller logs to decide whether the request reached the controller and how it was routed.
- If endpoints are empty, stop and fix that first.
- If endpoints exist, test service directly with port-forward.
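Testing the Service directly can be done with a local port-forward, which bypasses the controller entirely. A sketch, assuming the Service listens on port 80 and the local port 8080 is free:

```shell
# Forward local 8080 to the Service port, then probe it without the ingress.
kubectl port-forward -n <ns> svc/<svc> 8080:80
# In another terminal:
curl -i http://localhost:8080/
```

If this succeeds while the ingress still fails, the problem is in the controller or its route, not the backend.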
```shell
kubectl get ingressclass
kubectl describe ingress <ing> -n <ns>
# Controller logs (adjust namespace/name)
kubectl logs -n <ingress-ns> deploy/<controller> --tail=250
```
```shell
kubectl get svc,ep,endpointslices -n <ns>
kubectl get pods -n <ns> -o wide
```
Resolution guidance
Fix the first broken link in the chain, then verify the whole path.
- Align ingress backend service name/port with the real service object.
- Restore endpoints by fixing labels/selectors and readiness.
- Tune timeouts only after confirming backend health; timeouts should match measured latency.
- Verify with direct Service access first, then with ingress.
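The final verification step can be sketched as a request through the controller itself, assuming the route's host is `<host>` and the controller's external IP is `<lb-ip>` (both placeholders):

```shell
# Send a request through the ingress with the Host header the route expects;
# a 200 here confirms the controller -> service -> pod path end to end.
curl -i -H "Host: <host>" http://<lb-ip>/
# For TLS hosts, verify SNI as well by resolving the host to the LB address.
curl -ik --resolve <host>:443:<lb-ip> https://<host>/
```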
Canonical URL: /atlas/ingress-502-503