Kubernetes on AWS: Preserving Client IP with nginx-ingress + cert-manager
We have set up Kubernetes with nginx-ingress combined with cert-manager to automatically obtain and use SSL certificates for Ingress domains via Let's Encrypt, following this guide: https://medium.com/@maninder.bindra/auto-provisioning-of-letsencrypt-tls-certificates-for-kubernetes-services-deployed-to-an-aks-52fd437b06b0 . The result is that each Ingress defines its own SSL certificate, which is automatically provisioned by cert-manager.
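For context, a ClusterIssuer like the letsencrypt-prod one referenced by the Ingress annotation below might look roughly like this. This is a sketch only, using the legacy certmanager.k8s.io/v1alpha1 API group that the guide's cert-manager version used; the email address and secret name are placeholders, not values from the question:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account  # ACME account key secret (placeholder name)
    # Enable HTTP-01 challenges (legacy v0.x style)
    http01: {}
```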
This all works well except for one problem: the source IP address of the traffic is lost to applications running in Pods.
There is an annotation that is commonly advised for the nginx-ingress controller Service: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: '*'. This has the effect of preserving source IP addresses. However, applying it breaks SSL:
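For illustration, the annotation sits on the controller's LoadBalancer Service roughly like this. A sketch only: the Service name depends on the Helm release, and the annotation value is the one quoted in the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  # Name is an assumption based on a Helm release called my-nginx
  name: my-nginx-nginx-ingress-controller
  annotations:
    # Annotation value as quoted in the question; changes how the AWS ELB
    # forwards traffic to the backend, which is what broke TLS here
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: '*'
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
```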
An error occurred during a connection to {my.domain.com}. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG
My head is starting to spin. Does anyone know of an approach to this (it seems to me this would be a common requirement)?
Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-http-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: my.host.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-http-service
          servicePort: 80
  tls:
  - hosts:
    - "my.host.com"
    secretName: malcolmqa-tls
As @dom_watson mentioned in the comments, adding the parameter controller.service.externalTrafficPolicy=Local to the Helm install configuration solved the issue: the Local value preserves the client source IP, so the network traffic reaches the target Pod in the Kubernetes cluster with the original address intact. More information can be found in the official Kubernetes guidelines.
helm upgrade my-nginx stable/nginx-ingress --set rbac.create=true --set controller.service.externalTrafficPolicy=Local
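After the upgrade, one way to confirm the policy took effect is to inspect the controller Service directly. The Service name below is an assumption derived from the my-nginx release name; adjust it to whatever `kubectl get svc` shows in your cluster:

```shell
# Print the externalTrafficPolicy of the ingress controller Service;
# it should now report "Local"
kubectl get svc my-nginx-nginx-ingress-controller \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```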