
kubectl wait for Service on AWS EKS to expose Elastic Load Balancer (ELB) address reported in .status.loadBalancer.ingress field

As the kubernetes.io docs state about a Service of type LoadBalancer:

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer field.

On AWS Elastic Kubernetes Service (EKS), an AWS load balancer is provisioned that load balances network traffic (see the AWS docs & the example project on GitHub provisioning an EKS cluster with Pulumi). Assuming we have a Deployment ready with the selector app=tekton-dashboard (it's the default Tekton dashboard you can deploy as stated in the docs), a Service of type LoadBalancer defined in tekton-dashboard-service.yml could look like this:

apiVersion: v1
kind: Service
metadata:
  name: tekton-dashboard-external-svc-manual
spec:
  selector:
    app: tekton-dashboard
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9097
  type: LoadBalancer

If we create the Service in our cluster with kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines, the AWS ELB gets created automatically.

There's only one problem: the .status.loadBalancer field is populated asynchronously with the ingress[0].hostname field and is therefore not available immediately. We can check this if we run the following commands together:

kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}'

The output will be an empty object (the trailing % below is just zsh marking that the output has no trailing newline):

{}%

So if we want to run this setup in a CI pipeline (e.g. GitHub Actions, see the example project's workflow provision.yml), we need to somehow wait until the .status.loadBalancer field has been populated with the AWS ELB's hostname. How can we achieve this using kubectl wait?

TLDR;

Prior to Kubernetes v1.23 this isn't possible with kubectl wait, but it can be done with a shell until loop together with grep like this:

until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done

or even enhance the command using timeout (brew install coreutils on a Mac) to prevent it from running indefinitely:

timeout 10s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath="{.status.loadBalancer}" | grep "ingress"; do : ; done'
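
In a CI pipeline it can also be useful to fail the build explicitly when the deadline is hit: GNU timeout exits with status 124 in that case. A minimal sketch (same Service and namespace as above; the 30 second deadline and 2 second poll interval are arbitrary choices):

# fail the step if the ELB hostname doesn't show up within 30 seconds (timeout exits with 124)
if ! timeout 30s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath="{.status.loadBalancer}" | grep -q "ingress"; do sleep 2; done'; then
  echo "Timed out waiting for the AWS ELB hostname" >&2
  exit 1
fi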

Problem with kubectl wait & the solution explained in detail

As stated in this SO Q&A and the Kubernetes issues kubectl wait unable to not wait for service ready #80828 & kubectl wait on arbitrary jsonpath #83094, using kubectl wait for this isn't possible in Kubernetes versions before v1.23.

The main reason is that kubectl wait assumes the status field of a Kubernetes resource queried with kubectl get service/xyz --output=yaml contains a conditions list, which a Service doesn't have. Using jsonpath here would be a solution and is possible from Kubernetes v1.23 on (see this merged PR). But until this version is broadly available in managed Kubernetes clusters like EKS, we need another solution. And it should also be available as a "one-liner", just as a kubectl wait would be.
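
For completeness, here's roughly what the jsonpath flavor of kubectl wait looks like once it's available. A caveat: to my knowledge the v1.23 implementation compares the jsonpath result against a concrete value, which we can't know upfront for an ELB hostname, and waiting for the mere existence of a field only arrived in later kubectl releases. So treat this as a sketch, not a drop-in replacement:

# v1.23+: wait until the jsonpath expression equals a given value
kubectl wait service/tekton-dashboard-external-svc-manual -n tekton-pipelines --for=jsonpath='{.spec.type}'=LoadBalancer --timeout=120s
# later kubectl releases also accept a jsonpath without a value ("wait until the field exists")
kubectl wait service/tekton-dashboard-external-svc-manual -n tekton-pipelines --for=jsonpath='{.status.loadBalancer.ingress}' --timeout=120s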

A good starting point could be this superuser answer about "watching" the output of a command until a particular string is observed and then exiting:

until my_cmd | grep "String Im Looking For"; do : ; done

If we use this approach together with kubectl get, we can craft a command that waits until the ingress field gets populated in our Service's status.loadBalancer field:

until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done

This will wait until the ingress field has been populated and then print out the AWS ELB address (e.g. by running kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' thereafter):

$ until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
{"ingress":[{"hostname":"a74b078064c7d4ba1b89bf4e92586af0-18561896.eu-central-1.elb.amazonaws.com"}]}

Now we have a one-liner command that behaves just like a kubectl wait for our Service to become available through the AWS load balancer. We can double-check that this is working with the following commands combined (be sure to delete the Service with kubectl delete service/tekton-dashboard-external-svc-manual -n tekton-pipelines before you execute them, because otherwise the Service including the AWS load balancer already exists):

kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done && \
kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
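
In a real pipeline you'd typically capture the output of that final kubectl get in a variable so that later steps (e.g. a smoke test against the dashboard) can use it. A minimal sketch; the variable name and the echo are just illustrative:

# capture the ELB hostname for later pipeline steps
ELB_HOSTNAME=$(kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Tekton dashboard should become reachable at http://${ELB_HOSTNAME}"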

Here's also a full GitHub Actions pipeline run if you're interested.
