SSL with GRPC on AWS EKS and Istio Ingress gives StatusCode.UNAVAILABLE

I'm running a Kubernetes cluster on the AWS EKS service (K8S version 1.17), with Istio (1.7.1) installed on it as an Operator installation.

The services themselves have been running just fine. I'm also running the Istio Ingress Gateway as the ingress service, published through an AWS NLB with the following annotations on the Istio Ingress Gateway:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "redacted arn"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
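
With an Operator installation, one hedged sketch of where these annotations live is the serviceAnnotations field of the ingress gateway component in the IstioOperator spec (the resource name below is assumed; only the relevant fields are shown):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane   # hypothetical name
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        # These end up on the generated Service of type LoadBalancer
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "redacted arn"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          # ...remaining annotations as above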

This successfully creates the NLB with 4 listeners (as per the Istio ingress definition), with 443 running TLS using the provided certificate.

Behind it, the Gateway is configured as follows:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: service-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: grpc-plain
        protocol: GRPC
      hosts:
        - redacted
    - port:
        number: 443
        name: grpc-tls
        protocol: GRPC
      hosts:
        - redacted
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-vservice
  namespace: app
spec:
  gateways:
  - istio-system/service-gateway
  hosts:
  - redacted
  http:
  - route:
    - destination:
        host: service
        port:
          number: 8000

but while the plain port (80) works just fine through the load balancer, the SSL/TLS port 443 gives the following error from clients in any language (tested with C, C++, and Python):

grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses"
    debug_error_string = "{"created":"@1601027790.018488379","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":4089,"referenced_errors":[{"created":"@1601027790.018476348","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":393,"grpc_status":14}]}"
>
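
The error itself only says that every connection attempt failed ("Failed to pick subchannel"). As a first debugging step, the gRPC C core can be asked for verbose connection and handshake logs through its standard environment variables; a minimal sketch (the tracer list is just a starting point):

import os

# Standard gRPC C-core debug knobs; set them before grpc is imported
# so the native library picks them up.
os.environ["GRPC_VERBOSITY"] = "debug"
os.environ["GRPC_TRACE"] = "api,connectivity_state,tcp"

import grpc  # imported after the env vars are set

# ...then build the channel exactly as in the client snippet below and
# watch stderr for TLS handshake or TCP connection errors.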

As an example, the Python client is initialized as follows:

import grpc
from service_pb2_grpc import ServiceStub

# Default TLS credentials: system root certificates, no client certificate
creds = grpc.ssl_channel_credentials()

with grpc.secure_channel(url, creds) as channel:
    grpc_client = ServiceStub(channel)
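
Since grpc.ssl_channel_credentials() with no arguments relies on the system root certificates and validates the server certificate against the dialed hostname, a hedged variation worth trying is pinning an explicit root CA and/or overriding the expected TLS server name (the file path and endpoint below are placeholders):

import grpc
from service_pb2_grpc import ServiceStub

url = "redacted:443"  # hypothetical endpoint: host and port of the NLB

# Hypothetical CA bundle: the root CA that signed the NLB's certificate.
with open("ca.pem", "rb") as f:
    creds = grpc.ssl_channel_credentials(root_certificates=f.read())

# Hypothetical override, useful when the certificate's SAN does not match
# the address actually being dialed.
options = [("grpc.ssl_target_name_override", "redacted")]

with grpc.secure_channel(url, creds, options=options) as channel:
    grpc_client = ServiceStub(channel)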

What am I doing wrong to get this error with such a simple client?

According to this article about using gRPC on AWS, using gRPC on AWS is a challenging task.

There is another article about how to create a load balancer for gRPC on AWS.

Here's the fact: gRPC does not work properly with AWS load balancers.

It follows up with a workaround using Envoy:

How Can you Load Balance gRPC on AWS using Envoy

So what's the solution here? We decided to use third-party software for load balancing. In this case, we used Envoy as a Layer 7 load balancer on AWS. It's open-source software created by Lyft.
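
For reference, the core of that workaround is that Envoy is an HTTP/2-aware (Layer 7) proxy, so it balances individual gRPC calls rather than long-lived TCP connections. A minimal sketch of such a front-proxy configuration, with all names, addresses, and the backend port assumed for illustration:

static_resources:
  listeners:
  - name: grpc_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: grpc_ingress
          codec_type: AUTO              # accepts HTTP/1.1 and HTTP/2 clients
          route_config:
            name: local_route
            virtual_hosts:
            - name: grpc_backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: grpc_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: grpc_service
    connect_timeout: 5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}          # speak HTTP/2 to the gRPC backends
    load_assignment:
      cluster_name: grpc_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: service.app.svc.cluster.local, port_value: 8000 }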
