No X-Forwarded-For with Traefik 2 on bare-metal Kubernetes with ClusterIP Service and kube-keepalived-vip

My setup is a bare-metal cluster running Kubernetes 1.17. I'm using Traefik 2(.3.2) as a reverse proxy, and to get failover for my machines I use kube-keepalived-vip [1]:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-keepalived-vip-config
  namespace: kube-system
data:
  172.111.222.33: kube-system/traefik2-ingress-controller

Therefore my Traefik service is of the default type, ClusterIP, and references an external IP provided by the kube-keepalived-vip service:

---
apiVersion: v1
kind: Service
metadata:
  name: traefik2-ingress-controller
  namespace: kube-system
spec:
  selector:
    app: traefik2-ingress-controller
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: webs
      port: 443
  externalIPs:
    - 172.111.222.33

This works as it is. Now I want to restrict some of my applications to be accessible only from a specific subnet inside my network. Since my requests are handled by kube-keepalived-vip and also by kube-proxy, the client IP in my requests is no longer the one of the actual client. But as far as I understand the documentation, kube-proxy sets the real IP in the X-Forwarded-For header. So my middleware looks like this:

internal-ip-whitelist:
  ipWhiteList:
    sourceRange:
      - 10.0.0.0/8 # my subnet
      - 60.120.180.240 # my public ip
    ipStrategy:
      depth: 2 # take the second entry (counting from the right) of the X-Forwarded-For header

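For reference, roughly the same middleware can be expressed as a Traefik Kubernetes custom resource; this is only a sketch, assuming the Kubernetes CRD provider is enabled, and the resource name and namespace are illustrative:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: internal-ip-whitelist
  namespace: kube-system
spec:
  ipWhiteList:
    sourceRange:
      - 10.0.0.0/8        # my subnet
      - 60.120.180.240/32 # my public ip
    ipStrategy:
      depth: 2 # take the second entry (counting from the right) of the X-Forwarded-For header
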
Now each request to the ingresses this middleware is attached to is rejected. I checked the Traefik logs and saw that the requests contain some X-Forwarded-* headers, but there is no X-Forwarded-For :(

Does anyone have experience with this and can point me to my error? Is there perhaps something wrong with my Kubernetes setup? Or is there something missing in my kube-keepalived-vip config?

Thanks in advance!

[1] https://github.com/aledbf/kube-keepalived-vip

For everyone stumbling upon this: I managed to fix my problem in the meantime.

The main problem is kube-proxy. By default all services are routed through it, and, depending on your CNI provider (I use flannel), the information about the calling client is lost there.

Kubernetes provides a way around that by setting .spec.externalTrafficPolicy to Local (https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support). But this is not supported for ClusterIP services.

So I got around that by using MetalLB (https://metallb.universe.tf/), which provides load balancing for bare-metal clusters. After setting it up with the virtual IP that was previously assigned to the keepalived container, I configured the Traefik service with type LoadBalancer and requested the one IP I have in MetalLB; a sketch of the resulting configuration is below.
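
The result looks roughly like this (a sketch only, assuming a MetalLB v0.9-style layer2 configuration via ConfigMap; the address-pool name is illustrative, and externalTrafficPolicy: Local is set following the reasoning above so the client source IP is preserved):

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 172.111.222.33/32
---
apiVersion: v1
kind: Service
metadata:
  name: traefik2-ingress-controller
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerIP: 172.111.222.33 # request the VIP from MetalLB's pool
  externalTrafficPolicy: Local   # keep the client source IP
  selector:
    app: traefik2-ingress-controller
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: webs
      port: 443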
