
kubernetes unhealthy ingress backend

I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which works fine when I use the Nginx image. When I try to use my own application image, though, the backend switches to unhealthy.

My application redirects on / (returns a 302), but I added a livenessProbe to the pod definition:

    livenessProbe:
      httpGet:
        path: /ping
        port: 4001
        httpHeaders:
          - name: X-health-check
            value: kubernetes-healthcheck
          - name: X-Forwarded-Proto
            value: https
          - name: Host
            value: foo.bar.com
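
For context, this probe sits inside the container spec of the Deployment. A minimal sketch of where it slots in (the deployment name, image, and labels here are placeholders, and the custom headers are omitted for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: gcr.io/my-project/foo:latest   # placeholder image
          ports:
            - containerPort: 4001
          livenessProbe:
            httpGet:
              path: /ping
              port: 4001
```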

My ingress looks like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  backend:
    serviceName: foo
    servicePort: 80
  rules:
  - host: foo.bar.com
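
Note that this rule names a host but no HTTP paths, so traffic effectively falls through to the default backend (which is why `kubectl describe` shows `* *` below). For comparison, a rule with explicit paths in extensions/v1beta1 would look something like this sketch:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /*             # GCE ingress wildcard path syntax
        backend:
          serviceName: foo
          servicePort: 80
```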

Service configuration is:

kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 80 
      targetPort: 4001

Backend health in `kubectl describe ing` looks like:

backends:       {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}

and the rules on the ingress look like:

Rules:
  Host  Path    Backends
  ----  ----    --------
  * *   foo:80 (10.0.0.7:4001,10.0.1.6:4001)

Any pointers greatly received, I've been trying to work this out for hours with no luck.

Update

I have added the readinessProbe to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like:

    readinessProbe:
      httpGet:
        path: /ping
        port: 4001
        httpHeaders:
          - name: X-health-check
            value: kubernetes-healthcheck
          - name: X-Forwarded-Proto
            value: https
          - name: Host
            value: foo.com

I changed my service to:

kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 4001
      targetPort: 4001

Update2

After I removed the custom headers from the readinessProbe it started working! Many thanks.

You need to add a readinessProbe (just copy your livenessProbe).

It's explained in the GCE L7 Ingress Docs.

Health checks

Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to them from the GCE load balancer:

1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary URL as a readiness probe on the pods backing the Service.

Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port; if you add another one you may run into trouble.
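
Putting the two requirements together: the probe path should return a plain 200 (no redirect), and the probe port should match the Service's targetPort. A sketch based on the question's config:

```yaml
# container spec fragment; port 4001 matches the Service's targetPort
readinessProbe:
  httpGet:
    path: /ping   # must return HTTP 200, not a redirect
    port: 4001
```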

I thought it's worth noting quite an important limitation in the documentation:

Changes to a Pod's readinessProbe do not affect the Ingress after it is created.

After adding my readinessProbe I basically deleted my ingress (`kubectl delete ingress <name>`) and then applied my yaml file again to re-create it, and shortly after everything was working again.

I was having the same issue. I followed Tex's tip but continued to see that message. It turned out I had to wait a few minutes for the ingress to validate the service health. If someone is going through the same thing and has done all the steps like the readinessProbe and livenessProbe, just ensure your ingress is pointing to a service of type NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs on StackDriver to get a better idea of what's going on.

I was also having exactly the same issue after updating my ingress readinessProbe.

I could see the Ingress status labeled "Some backend services are in UNKNOWN state" in yellow. I waited for more than 30 minutes, yet the changes were not reflected.

After more than 24 hours the changes were reflected and the status turned green. I didn't find any official documentation for this, but it seems like a bug in the GCP Ingress resource.

Every one of these answers helped me.

In addition, the HTTP probes need to return a 200 status. Stupidly, mine was returning a 301. So I just added a simple "ping" endpoint and all was well/healthy.

If you don't want to change your pod spec, or rely on the magic of GKE pulling out your readinessProbe, you can also configure a BackendConfig like this to explicitly configure the health check.

This is also helpful if you want to use a script for your readinessProbe, which isn't supported by GKE ingress health checks.
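
For example, an exec-style readinessProbe like the following keeps pod readiness working, but GKE Ingress cannot derive an HTTP health check from it, which is where the BackendConfig comes in (the script path here is a placeholder):

```yaml
readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - /app/healthcheck.sh   # placeholder script; not usable by GKE Ingress health checks
  periodSeconds: 10
```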

Note that the BackendConfig needs to be explicitly referenced in your Service definition.

---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    # This points GKE Ingress to the BackendConfig below
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: ClusterIP
  ports:
    - name: health
      port: 1234
      protocol: TCP
      targetPort: 1234
    - name: http
      ...
  selector:
    ...
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 1234
    type: HTTP
    requestPath: /healthz
