
Keep a Kubernetes Pod in service when it is not ready

I am working on a project that is migrating a legacy application to the cloud. We are using Kubernetes, OpenShift and Docker for this. The application has one particular type of "back-end pod" (let's call it BEP) whose responsibility is to process incoming transactions. This pod contains several interdependent containers, but only one container that actually does the "real processing" (call it BEC). The legacy application processes several thousand transactions per second and will need to continue to do so in the cloud.

To achieve this scale we were thinking of duplicating the BEC inside the pod instead of replicating the whole BEP (and thereby also replicating all the other containers that come along with it, unnecessarily). We might need X replicas of the BEC, whereas its interdependent containers would not need to scale at all, so running X replicas of the entire BEP would be wasteful.
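To make the idea concrete, here is a minimal sketch of such a pod; all names and images (bep, sidecar, bec-1, bec-2, example/...) are hypothetical and purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: bep
  labels:
    app: bep
spec:
  containers:
    # Hypothetical interdependent helper container; needed only once per pod.
    - name: sidecar
      image: example/sidecar:1.0
    # The BEC duplicated inside the same pod; in practice there could be X of these.
    - name: bec-1
      image: example/bec:1.0
    - name: bec-2
      image: example/bec:1.0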

However, this solution poses a problem: as soon as one BEC is down, Kubernetes flags the entire pod as "Not Ready" (even if there are 100 other BECs up and ready to process), upon which the pod's endpoint is removed from the service and traffic to the entire pod is cut off.
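Readiness is evaluated per container, but the pod's Ready condition is the logical AND of all of them. Extending the containers list from the sketch above, each BEC would carry its own readiness probe; the health endpoint and port here are hypothetical assumptions:

    - name: bec-1
      image: example/bec:1.0
      readinessProbe:
        httpGet:
          path: /healthz   # hypothetical health endpoint
          port: 8080
        periodSeconds: 5
      # If this probe fails, Kubernetes marks the whole pod Not Ready and
      # removes its address from the service endpoints, no matter how many
      # other BEC containers are still healthy.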

I guess this is a classic example of needing to define some sort of "minimum running requirement" for the pod.

I thus have two questions:

  • Is there a way to flag a pod as still functioning even if not all of its containers are in a "ready" state? I.e., can this minimum running requirement be achieved by defining a lower threshold on the number of containers that must be "ready" for the pod to be considered functioning?
  • Is there a way to flag the service that the pod provides so that it still sends traffic even when the pod is not in a ready state? I have seen a property called publishNotReadyAddresses ( https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core#publishNotReadyAddresses ) but I am unsure whether it does what we require.

If the answer to both of these questions is no: do you have any ideas or approaches for this problem that do not involve a major architectural refactoring of this legacy application? We cannot split the interdependent containers from the BEC; unfortunately, they need to run in the same pod.

Thanks in advance for any help/advice!

/Alex

Is there a way to flag a pod as still functioning even if not all of its containers are in a "ready" state? I.e., can this minimum running requirement be achieved by defining a lower threshold on the number of containers that must be "ready" for the pod to be considered functioning?

No, this is not possible. A pod is only marked Ready once every container in it is ready; there is no way to define a lower threshold.

You can, however, use the tolerate-unready-endpoints annotation, which is set on the Service (the name, selector and port below are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: bep-service   # hypothetical name
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: bep
  ports:
    - port: 8080
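This annotation is the alpha predecessor of the publishNotReadyAddresses field mentioned in the question; on clusters where that field is available, you can set it directly on the Service spec instead. A minimal sketch, reusing the same hypothetical Service:

apiVersion: v1
kind: Service
metadata:
  name: bep-service
spec:
  publishNotReadyAddresses: true   # keep pod IPs in the endpoints even when not ready
  selector:
    app: bep
  ports:
    - port: 8080

Be aware that both mechanisms are all-or-nothing as well: they keep the pod's address in the endpoints while it is not ready, so traffic will also continue to flow to pods that are genuinely broken or still starting up.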

But the community is already working on this issue; you can follow #58662 and #49239.
