kubectl wait for Service on AWS EKS to expose Elastic Load Balancer (ELB) address reported in .status.loadBalancer.ingress field

As the kubernetes.io docs state about a Service of type LoadBalancer:

On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer field.

On AWS Elastic Kubernetes Service (EKS) an AWS Load Balancer is provisioned that load balances network traffic (see the AWS docs & the example project on GitHub provisioning an EKS cluster with Pulumi). Assuming we have a Deployment ready with the selector app=tekton-dashboard (it's the default Tekton dashboard you can deploy as stated in the docs), a Service of type LoadBalancer defined in tekton-dashboard-service.yml could look like this:

apiVersion: v1
kind: Service
metadata:
  name: tekton-dashboard-external-svc-manual
spec:
  selector:
    app: tekton-dashboard
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9097
  type: LoadBalancer

If we create the Service in our cluster with kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines, the AWS ELB gets created automatically.

There's only one problem: the .status.loadBalancer field is populated with the ingress[0].hostname field asynchronously and is therefore not available immediately. We can check this if we run the following commands together:

kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}'

The output will be an empty field:

{}%

So if we want to run this setup in a CI pipeline (e.g. GitHub Actions, see the example project's workflow provision.yml), we need to somehow wait until the .status.loadBalancer field gets populated with the AWS ELB's hostname. How can we achieve this using kubectl wait?

TLDR;

Prior to Kubernetes v1.23 this isn't possible with kubectl wait, but it can be done using until together with grep like this:

until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done

or even enhance the command using timeout (brew install coreutils on a Mac) to prevent it from running infinitely:

timeout 10s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath="{.status.loadBalancer}" | grep "ingress"; do : ; done'
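
In a CI pipeline you will typically also want the step to fail when that timeout is hit. GNU timeout exits non-zero (status 124) in that case, so a minimal sketch could look like this (the 60-second limit and the error message are just examples, and the short sleep merely avoids polling the API server in a tight loop):

timeout 60s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath="{.status.loadBalancer}" | grep "ingress"; do sleep 2; done' \
  || { echo "Timed out waiting for the AWS ELB hostname"; exit 1; }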

Problem with kubectl wait & the solution explained in detail

As stated in this SO Q&A and the Kubernetes issues kubectl wait unable to not wait for service ready #80828 & kubectl wait on arbitrary jsonpath #83094, using kubectl wait for this isn't possible in current Kubernetes versions.

The main reason is that kubectl wait assumes the status field of a Kubernetes resource queried with kubectl get service/xyz --output=yaml contains a conditions list, which a Service doesn't have. Using jsonpath here would be a solution and will be possible from Kubernetes v1.23 on (see this merged PR). But until this version is broadly available in managed Kubernetes clusters like EKS, we need another solution. And it should also be available as a "one-liner", just as a kubectl wait would be.
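
For completeness, once cluster and kubectl are on v1.23 or newer, the jsonpath-based wait could look roughly like the sketch below. This is only a sketch: the earliest versions require the jsonpath expression to be compared against a concrete value, which doesn't fit an ELB hostname we don't know in advance, so check what your kubectl version supports.

kubectl wait service/tekton-dashboard-external-svc-manual -n tekton-pipelines \
  --for=jsonpath='{.status.loadBalancer.ingress}' --timeout=120s

Until that's an option on EKS, we have to build the wait ourselves.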

A good starting point could be this superuser answer about "watching" the output of a command until a particular string is observed and then exiting:

until my_cmd | grep "String Im Looking For"; do : ; done

If we use this approach together with kubectl get, we can craft a command that waits until the ingress field gets populated in our Service's status.loadBalancer field:

until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done

This will wait until the ingress field gets populated and then print out the AWS ELB address (e.g. by running kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' afterwards):

$ until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
{"ingress":[{"hostname":"a74b078064c7d4ba1b89bf4e92586af0-18561896.eu-central-1.elb.amazonaws.com"}]}

Now we have a one-liner command that behaves just like a kubectl wait for our Service to become available through the AWS load balancer. We can double-check that this is working with the following commands combined (be sure to delete the Service using kubectl delete service/tekton-dashboard-external-svc-manual -n tekton-pipelines before you execute it, because otherwise the Service including the AWS load balancer already exists):

kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines && \
until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done && \
kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}'
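
If subsequent pipeline steps need the address (for example to run a smoke test against the dashboard), the hostname can be captured into a shell variable afterwards; the variable name here is just an example:

ELB_HOSTNAME=$(kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Tekton dashboard should become reachable at http://${ELB_HOSTNAME}"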

Here's also a full GitHub Actions pipeline run if you're interested.
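
As a rough sketch, a corresponding workflow step could look like this (the step name is illustrative and not taken from the example project's provision.yml):

- name: Deploy Tekton dashboard Service and wait for the AWS ELB hostname
  run: |
    kubectl apply -f tekton-dashboard-service.yml -n tekton-pipelines
    until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
    kubectl get service tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}'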

