
In Kubernetes, do pods whose readiness check reports “Unhealthy” fail to resolve in DNS from other pods until they become ready?

I've defined a dummy service as a means of registering my pods in DNS, as a cluster IP will not work for my application right now.

apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None  # headless service: DNS returns pod IPs directly instead of a single cluster IP

apiVersion: apps/v1
kind: Deployment
metadata:
  name: company-master-deployment
  labels:
    app: company_application
    role: master
spec:
  selector:
    matchLabels:
      app: company_application
      role: master
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: company_application
        role: master
    spec:
      hostname: master
      subdomain: company  # must match the headless Service name to get a per-pod DNS record
      # (containers, including the readiness probe, were omitted from the original question)
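For context, the readiness probe in question would be attached to one of the omitted containers in that pod spec. A hypothetical sketch, since the question does not show it; the container name, image, and /healthz endpoint are placeholders:

      containers:
        - name: company-app            # hypothetical container name
          image: example/company:1.0   # hypothetical image
          readinessProbe:
            httpGet:
              path: /healthz           # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10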

I'm using the DNS entry for master.company.default.svc.cluster.local to connect to that pod from another pod.
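One way to check what that name resolves to (assuming a throwaway busybox pod is acceptable in the cluster; the pod name dns-test and the busybox:1.36 tag are arbitrary) is:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup master.company.default.svc.cluster.local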

I've noticed a really annoying behavior in Kubernetes under these conditions:

  • I have a pod that is "Unhealthy" as reported by its readiness probe.
  • I have another pod whose application wants to do a DNS lookup of that pod.
  • The DNS lookup fails until the "unhealthy" pod becomes healthy.

Is this the way Kubernetes is supposed to work? Is there any way, other than removing the readiness check, to make sure that the DNS continues to resolve?

Yes. Pods are not added to a Service's endpoints until they pass their readiness probes. You can confirm this by running the following command:

kubectl get endpoints company -n <your_namespace>

You won't see any endpoints while the readinessProbe is failing.
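If you need the DNS record to keep resolving even while the pod is not ready (for example, for peer discovery during startup), the Service spec has a publishNotReadyAddresses field for exactly this. A minimal sketch, applied to the headless Service above:

apiVersion: v1
kind: Service
metadata:
  name: company
spec:
  selector:
    app: company_application
  clusterIP: None
  # publish DNS records for pods even before they pass their readiness probe
  publishNotReadyAddresses: true

The trade-off is that lookups can now return pods that are not yet able to serve traffic, so the client has to tolerate failed connections itself.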
