Kubernetes nginx ingress path-based routing of HTTPS in AWS
Question: Within Kubernetes, how do I configure the nginx ingress to treat traffic from an elastic load balancer as HTTPS, when it is defined as TCP?
I am working with a Kubernetes cluster in an AWS environment. I want to use an nginx ingress to do path-based routing of the HTTPS traffic; however, I do not want to do SSL termination or re-encryption on the AWS elastic load balancer.
The desired setup is:

client -> elastic load balancer -> nginx ingress -> pod
Requirements:

1. The traffic must be end-to-end encrypted.
2. An AWS ELB must be used (the traffic cannot go directly into Kubernetes from the outside world).
The problem that I have is that to do SSL passthrough on the ELB, I must configure the ELB for TCP traffic. However, when the ELB is defined as TCP, all traffic bypasses nginx.
As far as I can tell, I can set up a TCP passthrough via a ConfigMap, but that is merely another passthrough; it does not allow me to do path-based routing within nginx.
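For reference, the TCP passthrough mentioned above is configured through the ingress controller's `tcp-services` ConfigMap; a minimal sketch (the namespace, service name, and ports are illustrative):

```yaml
# tcp-services ConfigMap for ingress-nginx: forwards port 443 as a raw
# TCP stream to one backend service. This is a plain passthrough --
# nginx never sees the HTTP layer, so no path-based routing is possible.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service>:<service port>"
  "443": "default/my-app:443"
```

This illustrates why the ConfigMap approach cannot satisfy the routing requirement: the mapping is port-to-service, with no visibility into hosts or URL paths.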
I am looking for a way to define the ELB as TCP (for passthrough) while still having the ingress treat the traffic as HTTPS. I can define the ELB as HTTPS, but then there is a second, unnecessary negotiate/break/re-encrypt step in the process that I want to avoid if at all possible.
Answer: To make this clearer, I'll start from the OSI model, which tells us that TCP is a layer-4 protocol and HTTP/HTTPS is a layer-7 protocol. So, frankly speaking, HTTP/HTTPS data is encapsulated inside TCP data before the remaining layers' encapsulations are applied to transfer the packet to another network device.
If you set up a Classic (TCP) LoadBalancer, it stops reading the packet after the TCP part, which is enough to decide (according to the LB configuration) to which IP address and which IP port the packet should be delivered. After that, the LB takes the TCP payload, wraps it with another TCP layer, and sends it to the destination point (which in turn causes all the other OSI layers to be applied).
To make your configuration work as expected, you need to expose the nginx-ingress-controller Pod using a NodePort service. Then the Classic ELB can be configured to deliver traffic to any cluster node, on the port selected for that NodePort service (usually a port between 30000 and 32767). So your LB pool will look like the following:
Let's imagine the cluster nodes have the IP addresses 10.132.10.1...10 and the NodePort port is 30276.
ELB Endpoint 1: 10.132.10.1:30276
ELB Endpoint 2: 10.132.10.2:30276
...
ELB Endpoint 10: 10.132.10.10:30276
Note: In the case of an AWS ELB, I believe node DNS names should be used instead of IP addresses.
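The NodePort exposure described above might look like the following sketch (the names and the fixed nodePort value 30276 follow the example; adjust the selector to match your controller Pods):

```yaml
# NodePort Service exposing the nginx ingress controller on every node.
# The ELB, configured in TCP mode, targets each cluster node on this port,
# so the TLS stream reaches nginx unterminated.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match the controller Pods
  ports:
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30276   # pinned so the ELB pool can reference a stable port
```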
This should result in the following sequence of traffic distribution from a client to a Kubernetes application Pod:

1. The client sends a packet to the ELB's external address (e.g. abc.d:80).
2. The ELB selects a cluster node from its pool, replaces the destination address and port (e.g. lmnk:30xxx), and then sends the packet to the selected destination.
3. The nginx ingress controller receives the packet on the NodePort; according to the nginx.conf settings, the Nginx process modifies the HTTP request and delivers it to the cluster service specified for the configured host and URL path.
4. The service delivers the request to an application Pod at its IP_address:TCP_port.
Note: To terminate SSL on the ingress controller, you have to create SSL certificates that include the ELB IP and the ELB FQDN in the SAN section.
Note: If you want to terminate SSL on the application Pod to have end-to-end SSL encryption, you may want to configure nginx to pass the SSL traffic through.
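With ingress-nginx, that passthrough is enabled per host via an annotation (the controller itself must be started with the --enable-ssl-passthrough flag); a sketch with illustrative host and service names:

```yaml
# Ingress that forwards the TLS stream untouched to the backend Pods,
# which terminate SSL themselves, giving end-to-end encryption.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com   # passthrough is matched by SNI, i.e. per host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 443
```

One caveat worth noting: because passthrough traffic stays encrypted, nginx can only route it by SNI hostname, not by URL path, so path-based routing requires terminating TLS at the ingress instead.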
Bottom line: an ELB configured to deliver TCP traffic to the Kubernetes cluster works perfectly with the nginx-ingress controller, if you configure it in the correct way.
In GKE (Google Kubernetes Engine), if you create a Service with type: LoadBalancer, it creates exactly such a TCP LB, which forwards traffic to a Service NodePort, and then Kubernetes is responsible for delivering it to the Pod. EKS (Elastic Kubernetes Service) from AWS works in a very similar way.
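On AWS, for example, a Service of type LoadBalancer for the ingress controller can be kept in TCP mode with a service annotation, so the provisioned ELB passes TLS through unterminated; a sketch (names are illustrative, and the exact annotations supported depend on your cloud-provider/controller version):

```yaml
# LoadBalancer Service for the ingress controller on AWS. The annotation
# asks for a TCP backend protocol, so the ELB does not terminate TLS and
# nginx receives the encrypted stream directly.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```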