
EKS Service 504 Gateway Timeout - AWS Application Load Balancer Controller

Context

I have successfully deployed an EKS cluster, in which I have configured and deployed MLflow (v1.27.0) and the AWS Application Load Balancer Controller (chart). Both workloads have been deployed via Terraform and their Helm charts.

To verify that the MLflow workload works as expected, I connected to the cluster via kubectl and ran a port-forward command towards my local workstation. I can successfully access the MLflow dashboard in my browser, and everything works as expected. Below is my Service definition (generated by the Helm chart installation via Terraform), highlighting the port configuration I use for the MLflow service:

mlflow-service.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: mlflow
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-08-08T08:08:17Z"
  labels:
    app.kubernetes.io/instance: mlflow
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mlflow
    app.kubernetes.io/version: 1.27.0
    helm.sh/chart: mlflow-1.27.0
  name: mlflow
  namespace: default
  resourceVersion: "2183"
  uid: eb1b2f0f-289b-453a-90b6-c853a60cd9b0
spec:
  clusterIP: 172.20.28.63
  clusterIPs:
  - 172.20.28.63
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: mlflow
    nodePort: 30360
    port: 5252
    protocol: TCP
    targetPort: 5252
  selector:
    app.kubernetes.io/instance: mlflow
    app.kubernetes.io/name: mlflow
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
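
With the Service defined as above, the port-forward I run is a command along these lines (forwarding the service port to the same local port):

> kubectl port-forward svc/mlflow 5252:5252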

In addition to configuring and deploying the AWS Load Balancer Controller chart, I configured a subdomain in Route53 that points to my load balancer instance via an alias route (this generates an A record entry), and I updated my DNS provider to point to the AWS name servers. I have verified that everything is configured correctly from a DNS perspective. Additionally, I created a certificate in ACM for SSL encryption and added the required records to my Route53 configuration. As confirmation of its validity, the certificate is in the Issued state and I can use it as an annotation on my Ingress resource definition. Below are my Ingress definition and the output of the kubectl describe ingress my-ingress command:

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: ACM_CERT_ARN
    alb.ingress.kubernetes.io/load-balancer-name: app-lb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: load_balancer_security_group_id
    alb.ingress.kubernetes.io/subnets: subnet_3_id, subnet_4_id
    alb.ingress.kubernetes.io/target-type: ip
  creationTimestamp: "2022-08-08T08:09:11Z"
  finalizers:
  - ingress.k8s.aws/resources
  generation: 1
  name: my-ingress
  namespace: default
  resourceVersion: "2395"
  uid: 992799bb-3bd5-48fa-9b79-6c9dedab8f6b
spec:
  ingressClassName: alb
  rules:
  - host: public.my_domain.com
    http:
      paths:
      - backend:
          service:
            name: mlflow
            port:
              number: 5252
        path: /mlflow
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - hostname: app-lb-XXXXXXXXXX.eu-central-1.elb.amazonaws.com

Command Output

> kubectl get ingress my-ingress
NAME          CLASS   HOSTS                    ADDRESS                                                PORTS   AGE
my-ingress    alb     public.my_domain.com     app-lb-XXXXXXXXXX.eu-central-1.elb.amazonaws.com       80      22m

> kubectl describe ingress my-ingress
...
Normal   SuccessfullyReconciled  16s (x2 over 29s)  ingress  Successfully reconciled

Another important thing I want to highlight is that I did not provision the AWS Application Load Balancer via Terraform; it is created automatically via the alb.ingress.kubernetes.io/load-balancer-name annotation on the Ingress resource.

From a networking perspective, I provisioned an AWS VPC with 4 subnets:

  • subnet_1 (private subnet, 10.0.0.0/24 CIDR, AZ eu-central-1a)
  • subnet_2 (private subnet, 10.0.1.0/24 CIDR, AZ eu-central-1b)
  • subnet_3 (public subnet, 10.0.2.0/24 CIDR, AZ eu-central-1a)
  • subnet_4 (public subnet, 10.0.3.0/24 CIDR, AZ eu-central-1b)

The default VPC security group is applied to all subnets in the network. Below are its inbound and outbound rules: (Screenshots: inbound rules of the default VPC security group; outbound rules of the default VPC security group.)

For the AWS Application Load Balancer (internet-facing), I created a dedicated security group with the following rules: (Screenshots: inbound rules of the AWS Application Load Balancer security group; outbound rules of the AWS Application Load Balancer security group.)

Problem

Even though I can verify that my MLflow service runs as expected by port-forwarding it to my local workstation, if I try to navigate to public.my_domain.com/mlflow I do get automatically redirected to HTTPS, but I end up with a 504 Gateway Time-Out error.

If I simply navigate to public.my_domain.com, I get a 404 Response, which I can see in my browser's network tab.

I have also tried navigating to the AWS Load Balancer URL directly, but in that case I get a 404 Response for both app-lb-XXXXXXXXXX.eu-central-1.elb.amazonaws.com and app-lb-XXXXXXXXXX.eu-central-1.elb.amazonaws.com/mlflow.
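
These checks can be reproduced from the command line, for example:

> curl -I https://public.my_domain.com/mlflow                               # 504 Gateway Time-Out
> curl -I https://public.my_domain.com                                      # 404 Response
> curl -I http://app-lb-XXXXXXXXXX.eu-central-1.elb.amazonaws.com/mlflow    # 404 Response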

Thank you very much in advance for any help with troubleshooting this issue. My end goal is to be able to reach my deployment at public.my_domain.com/mlflow.

Am I missing any kind of alb.* annotation needed to successfully expose my service via my custom subdomain or via the AWS Load Balancer URL? I suspect that, if the problem were only related to my subdomain, I should still be able to reach the workload at the AWS ALB URL.

Should I add any rewrite rules to the annotations? For example, on Google Cloud Platform I was using the NGINX Ingress Controller, and I had to add the annotation "nginx.ingress.kubernetes.io/rewrite-target": "/$2" for this particular scenario.

Do I have to update any of my security groups to make these resources reachable? My assumption is that, if that were the case, I would not even get a 404 Response from the URLs.

Finally, I am also open to using a different setup (e.g. AWS Network Load Balancer + NGINX Ingress Controller), but I would really appreciate any reference to useful documentation. I initially tried to set up AWS NLB + NGINX Ingress Controller and could not get to a satisfactory result.

Edit 1

Even though I have not yet found the root cause of this issue, I have taken some additional steps to investigate it and made a few updates to my configuration:

  • To make sure traffic can flow from the AWS Application Load Balancer into the VPC, I added a new inbound rule to the VPC security group allowing "All traffic" from the ALB security group, and a new outbound rule to the ALB security group allowing "All traffic" to the VPC security group.

  • I noticed that when I define my Ingress resource, a target group is automatically created for the mapped service, and the created target group is actually unhealthy. Both my initial and my updated configuration show this behavior. (Screenshot: unhealthy AWS target group created by the Ingress resource definition.)

  • I tried configuring my Service resource definition both as ClusterIP and as NodePort, updating the alb.ingress.kubernetes.io/target-type annotation on the Ingress accordingly (ip and instance, respectively).

  • I added extra annotations to the Service resource definition to specify the health-check behavior, since I realized MLflow exposes a dedicated /health endpoint (see also the note after this list):

     service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /health
     service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "5252"
     service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
  • While port-forwarding the service to my local workstation, I double-checked both the / and the /health endpoints via curl -I, and both returned status code 200.
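
Note: as far as I can tell, the service.beta.kubernetes.io/aws-load-balancer-* annotations are read by the in-tree Service controller (for classic/network load balancers), not by the AWS Load Balancer Controller when it reconciles an ALB Ingress. The ALB-side equivalents would presumably be annotations along these lines (ALB health checks only support HTTP/HTTPS, not TCP):

     alb.ingress.kubernetes.io/healthcheck-path: /health
     alb.ingress.kubernetes.io/healthcheck-port: "5252"
     alb.ingress.kubernetes.io/healthcheck-protocol: HTTP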

Edit 2
The private subnets have a NAT gateway configured with the relevant route table.

(Screenshots: private subnet route table; NAT gateway route configuration for the private subnets.)

The public subnets have an Internet gateway configured with the relevant route table.

(Screenshots: public subnet route table; Internet gateway route configuration for the public subnets.)

> kubectl describe svc mlflow
Name:                     mlflow
Namespace:                default
Labels:                   app.kubernetes.io/instance=mlflow
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=mlflow
                          app.kubernetes.io/version=1.27.0
                          helm.sh/chart=mlflow-1.27.0
Annotations:              meta.helm.sh/release-name: mlflow
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/instance=mlflow,app.kubernetes.io/name=mlflow
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.20.206.164
IPs:                      172.20.206.164
Port:                     mlflow  5252/TCP
TargetPort:               5252/TCP
NodePort:                 mlflow  32290/TCP
Endpoints:                10.0.1.11:5252
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
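
Given the unhealthy target group mentioned in Edit 1, the target health can also be queried directly instead of via the console (target group ARN elided):

> aws elbv2 describe-target-health --target-group-arn <TARGET_GROUP_ARN>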

Edit 3
I have tried updating the path definition in my Ingress resource with the following entries:

  • /mlflow
  • /mlflow*
  • /mlflow/
  • /mlflow/*

Regardless of the Ingress path format and path type (I tried both Prefix and ImplementationSpecific), I always run into the same error; the rule shapes I tried are sketched below.
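
For completeness, the variants boil down to the following (host, backend service, and port exactly as in the Ingress above):

      - path: /mlflow
        pathType: Prefix
      - path: /mlflow/*
        pathType: ImplementationSpecific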

Below are the logs of my AWS Load Balancer Controller pods and my CoreDNS pod.

> kubectl logs aws-load-balancer-controller-65768c7bd4-5zh9t -n kube-system  
{"level":"info","ts":1661337648.830114,"logger":"controllers.ingress","msg":"creating loadBalancer","stackID":"default/ingress","resourceID":"LoadBalancer"}
{"level":"info","ts":1661337649.143063,"logger":"controllers.ingress","msg":"created loadBalancer","stackID":"default/ingress","resourceID":"LoadBalancer","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:loadbalancer/app/app-lb/fb8f025c92f17f99"}
{"level":"info","ts":1661337649.1980903,"logger":"controllers.ingress","msg":"creating listener","stackID":"default/ingress","resourceID":"80"}
{"level":"info","ts":1661337649.262167,"logger":"controllers.ingress","msg":"created listener","stackID":"default/ingress","resourceID":"80","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:listener/app/app-lb/fb8f025c92f17f99/58d7bed72dc5a575"}
{"level":"info","ts":1661337649.2622535,"logger":"controllers.ingress","msg":"creating listener","stackID":"default/ingress","resourceID":"443"}
{"level":"info","ts":1661337649.4499094,"logger":"controllers.ingress","msg":"created listener","stackID":"default/ingress","resourceID":"443","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:listener/app/app-lb/fb8f025c92f17f99/5cb08a47d89149c3"}
{"level":"info","ts":1661337649.5827081,"logger":"controllers.ingress","msg":"creating listener rule","stackID":"default/ingress","resourceID":"443:1"}
{"level":"info","ts":1661337649.6701205,"logger":"controllers.ingress","msg":"created listener rule","stackID":"default/ingress","resourceID":"443:1","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:listener-rule/app/app-lb/fb8f025c92f17f99/5cb08a47d89149c3/c246154c528e7f81"}
{"level":"info","ts":1661337649.670285,"logger":"controllers.ingress","msg":"creating targetGroupBinding","stackID":"default/ingress","resourceID":"default/ingress-mlflow:5252"}
{"level":"info","ts":1661337649.7510114,"logger":"controllers.ingress","msg":"created targetGroupBinding","stackID":"default/ingress","resourceID":"default/ingress-mlflow:5252","targetGroupBinding":{"namespace":"default","name":"k8s-default-mlflo-17cef2d176"}}
{"level":"info","ts":1661337649.8379247,"logger":"controllers.ingress","msg":"successfully deployed model","ingressGroup":"default/ingress"}
{"level":"info","ts":1661337649.8394184,"msg":"registering targets","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:targetgroup/k8s-default-mlflo-17cef2d176/9e67c879c5aee34a","targets":[{"AvailabilityZone":null,"Id":"i-0c7dd86bd788d7173","Port":31564}]}
{"level":"info","ts":1661337649.9140155,"msg":"registered targets","arn":"arn:aws:elasticloadbalancing:eu-central-1:832003983940:targetgroup/k8s-default-mlflo-17cef2d176/9e67c879c5aee34a"}

> kubectl logs aws-load-balancer-controller-65768c7bd4-lj4bs -n kube-system  
{"level":"info","ts":1661337543.0539248,"msg":"version","GitVersion":"v2.4.2","GitCommit":"77370be7f8e13787a3ec0cfa99de1647010f1055","BuildDate":"2022-05-24T22:33:27+0000"}
{"level":"info","ts":1661337543.0877683,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1661337543.0911741,"logger":"setup","msg":"adding health check for controller"}
{"level":"info","ts":1661337543.0913644,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/mutate-v1-pod"}
{"level":"info","ts":1661337543.0941734,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding"}
{"level":"info","ts":1661337543.0943336,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding"}
{"level":"info","ts":1661337543.0944283,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1-ingress"}
{"level":"info","ts":1661337543.0944858,"logger":"setup","msg":"starting podInfo repo"}
I0824 10:39:05.094659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader...
{"level":"info","ts":1661337545.094871,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1661337545.0950024,"logger":"controller-runtime.webhook.webhooks","msg":"starting webhook server"}
{"level":"info","ts":1661337545.0952883,"logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":1661337545.0953963,"logger":"controller-runtime.webhook","msg":"serving webhook server","host":"","port":9443}
{"level":"info","ts":1661337545.0957239,"logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}

> kubectl logs coredns-78666889b9-hwpqc -n kube-system  
.:53
[INFO] plugin/reload: Running configuration MD5 = 47d57903c0f0ba4ee0626a17181e5d94
CoreDNS-1.8.7
linux/amd64, go1.17.7, a9adfd56

After investigating this issue for more than 10 days, I think I may have found what prevents my service from being reachable via the custom domain URL.

When provisioning my stack via Terraform, I was creating two security groups with the following ingress/egress rules:

  • VPC security group

    ingress {
      description = "TLS ingress traffic"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = [aws_vpc.vpc.cidr_block]
    }

    ingress {
      description = "RDS ingress traffic"
      from_port   = 5432
      to_port     = 5432
      protocol    = "tcp"
      cidr_blocks = [aws_vpc.vpc.cidr_block]
    }

    ingress {
      description     = "ALB ingress traffic"
      from_port       = 0
      to_port         = 0
      protocol        = "-1"
      security_groups = [aws_security_group.alb.id]
    }

    egress {
      from_port        = 0
      to_port          = 0
      protocol         = "-1"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  • ALB security group

    # HTTPS Ingress
    ingress {
      description      = "TLS ingress traffic"
      from_port        = 443
      to_port          = 443
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }

    # HTTP Ingress
    ingress {
      description      = "HTTP ingress traffic"
      from_port        = 80
      to_port          = 80
      protocol         = "tcp"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }

    egress {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = [var.vpc_ipam_pool_cidr_block]
    }

However, when provisioning my EKS cluster via Terraform, I only specified the following arguments in the vpc_config block, assuming the VPC security group would be applied to the cluster resources automatically through subnet propagation:

  vpc_config {
    subnet_ids              = var.eks_subnet_ids
    endpoint_public_access  = var.eks_endpoint_public_access
    endpoint_private_access = var.eks_endpoint_private_access
  }
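
For reference, explicitly attaching the two security groups to the cluster would be a sketch along these lines, using the security_group_ids argument of the vpc_config block (variable names illustrative):

  vpc_config {
    subnet_ids              = var.eks_subnet_ids
    security_group_ids      = [var.vpc_security_group_id, var.alb_security_group_id]
    endpoint_public_access  = var.eks_endpoint_public_access
    endpoint_private_access = var.eks_endpoint_private_access
  }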

However, even when I explicitly map the VPC and ALB security groups onto the EKS cluster at provisioning time, Terraform always creates an additional, cluster-managed security group for the EKS cluster, and that group does not carry the appropriate ingress/egress rules. I therefore had to update my Terraform configuration with the following entry:

resource "aws_security_group_rule" "eks_ingress_from_alb" {
    depends_on = [
      aws_eks_cluster.cluster
    ]

    security_group_id        = aws_eks_cluster.cluster.vpc_config.cluster_security_group_id 
    type                     = "ingress"
    protocol                 = "-1"
    from_port                = 0
    to_port                  = 0
    source_security_group_id = var.alb_security_group_id
}

After making this update, the EC2 target group became healthy and I was able to reach the MLflow service via the custom subdomain. However, I can only reach it on the / path, not on sub-paths such as /mlflow (see the note below).
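
Since the ALB, to my knowledge, has no equivalent of NGINX's rewrite-target annotation, my working assumption for the remaining sub-path issue is that the application itself must be made aware of the /mlflow prefix. MLflow's server exposes a --static-prefix flag for this purpose; a hypothetical Deployment snippet (not my current configuration) would be:

      containers:
        - name: mlflow
          args: ["server", "--host", "0.0.0.0", "--port", "5252", "--static-prefix", "/mlflow"]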
