kubernetes - Why does Ingress fail when LoadBalancer works on GKE?
I can't get Ingress to work on GKE, owing to health check failures. I've tried all the debugging steps I can think of, including:
- verified that I'm not running low on any quotas
- verified that the service is accessible from within the cluster
- verified that the service works behind a k8s/GKE load balancer
- verified that healthz checks are passing in the Stackdriver logs
... and I'd love advice on how to debug or fix this. Details below!
I have set up a service of type LoadBalancer on GKE. It works great via the external IP:
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: es
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: LoadBalancer
  selector:
    app: echoserver
Then I try setting up an Ingress on top of that same service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoserver-ingress
  namespace: es
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "echoserver-global-ip"
spec:
  backend:
    serviceName: echoserver
    servicePort: 80
The Ingress gets created, but it thinks the backend nodes are unhealthy:
$ kubectl --namespace es describe ingress echoserver-ingress | grep backends
backends: {"k8s-be-31102--<snipped>":"unhealthy"}
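To dig into why the load balancer considers that backend unhealthy, you can ask the GCE backend service directly for its health view. A sketch (the backend service name below is illustrative; use the real auto-generated k8s-be-... name from the output above):

```shell
# List the auto-created backend services and find the one matching the NodePort
gcloud compute backend-services list

# Ask the load balancer which instances it considers healthy
# (replace the name with your actual k8s-be-... backend service)
gcloud compute backend-services get-health k8s-be-31102-example --global
```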
Inspecting the state of the Ingress backend in the GKE web console, I see the same thing:

The health check details appear as expected:
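The same health check parameters can also be read from the CLI. A sketch (the resource name is illustrative; depending on cluster version, the auto-created check may live under http-health-checks or health-checks):

```shell
# Find the health check the ingress controller created for this backend
gcloud compute http-health-checks list

# Show its request path, interval, timeout, and thresholds
gcloud compute http-health-checks describe k8s-be-31102-example
```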
... and from within a pod in the cluster I can call the service successfully:

# curl -vvv echoserver 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
# curl -vvv echoserver/healthz 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
And I can address the service via its NodePort:

# curl -vvv 10.0.1.1:31102 2>&1 | grep "< HTTP"
< HTTP/1.0 200 OK
(Which almost goes without saying, because the LoadBalancer service set up in step 1 resulted in a web site that works fine.)
I see healthz checks passing in the Stackdriver logs:
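One thing worth checking (an assumption, not confirmed by the output above): the GCE ingress controller derives its health check from the readinessProbe on the backing pods, and defaults to GET / if no probe is set. A sketch of a readinessProbe on a hypothetical echoserver pod spec that would point the health check at /healthz:

```yaml
# Hypothetical container fragment for the echoserver deployment;
# the GCE ingress controller copies this probe's path into its health check.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 5
```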
As for quotas, I checked and see I'm using only 3 of 30 backend services:

$ gcloud compute project-info describe | grep -A 1 -B 1 BACKEND_SERVICES
- limit: 30.0
  metric: BACKEND_SERVICES
  usage: 3.0
You have configured a timeout value of 1 second. Perhaps increasing it to 5 seconds will solve the issue.
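If that is indeed the cause, the timeout can be raised on the auto-created check directly. A sketch (the resource name is illustrative, and note the GKE ingress controller may later reconcile manual edits back to its own values):

```shell
# Raise the health check timeout from 1s to 5s
gcloud compute http-health-checks update k8s-be-31102-example --timeout 5s
```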