
NGINX Ingress Controller's Load Balancer is hiding the real client IP

Setup

I'm playing around with K8s and I set up a small, single-node, bare-metal cluster. For this cluster I pulled the NGINX Ingress Controller config from here, which comes from the official getting started guide.

Progress

OK, so applying this set up a bunch of things, including a LoadBalancer in front. I like that.

For my app (a single pod that returns the caller's IP) I created a number of resources to play around with. I now have SSL enabled and an Ingress, which I pointed at my app's Service, which in turn points to the deployed pod. This all works perfectly; I can browse the page over HTTPS. See:

(screenshot: cluster setup)

BUT...

My app is not getting the original client IP. All client requests appear to come from 10.42.0.99. Here's the controller config from kubectl describe:

(screenshot: ingress controller configuration)

Debugging

I tried dozens of solutions proposed online (ConfigMaps, annotations, proxy mode, etc.) and none of them worked. I also debugged in depth: there is no X-Forwarded-For or any similar header in the request that reaches the pod. I previously tested the same app on Apache directly, and also in a Docker setup, and it works there without any issues.

It's also worth mentioning that I looked into the ingress controller's pod, and the same internal IP already shows up there. I don't know how to debug the controller's pod further.
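One way to see which client IP the controller itself records is to tail its access log (this assumes the default names from the getting-started manifest: the ingress-nginx namespace and the ingress-nginx-controller deployment):

```shell
# Print the last access-log lines of the NGINX ingress controller;
# the first field of each request line is the client IP that NGINX saw.
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --tail=20
```

If the IP in those log lines is already an internal cluster address, the source IP is being rewritten before traffic reaches the controller, not inside it.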

Happy to share more information and config if it helps.

UPDATE 2021-12-15

I think I know what the issue is... I didn't mention how I installed the cluster, assuming it was irrelevant. Now I think it's the most important detail.

I set it up using K3s, which ships its own LoadBalancer. Through debugging I can now see that all of my requests in NGINX carry the IP of the load balancer's pod...

I still don't know how to make this Klipper LB preserve the source IP address, though.

UPDATE 2021-12-17

Opened an issue with the Klipper LB.

Make sure your NGINX ingress ConfigMap has real-ip-header: proxy_protocol enabled; try updating the ConfigMap like this:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "false"
  real-ip-header: proxy_protocol
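Note that real-ip-header: proxy_protocol only takes effect if the controller actually receives PROXY protocol traffic. A fuller sketch (my assumption, and it requires that the load balancer in front of the controller really does send the PROXY protocol header) would also enable use-proxy-protocol:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"       # only valid if the upstream LB speaks PROXY protocol
  real-ip-header: proxy_protocol   # take the client IP from the PROXY protocol header
  compute-full-forwarded-for: "true"
```

If the load balancer does not send PROXY protocol, enabling use-proxy-protocol will break all requests, so verify that first.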

If that still doesn't work, you can inject this config as an annotation on your Ingress and test again:

nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Forwarded-For $http_x_forwarded_for";

@milosmns - one approach I have been using is to not install servicelb (--no-deploy=servicelb) and to remove Traefik (--no-deploy=traefik).

Instead, deploy the HAProxy ingress (https://forums.rancher.com/t/how-to-access-rancher-when-k3s-is-installed-with-no-deploy-servicelb/17941/3) and enable PROXY protocol. When you do this, all requests that hit the HAProxy ingress are tagged with PROXY protocol, and no matter how they are routed you can recover the client IP anywhere downstream. You can also have HAProxy inject X-Real-IP headers.

The important thing is that HAProxy should be running on all master nodes. Since there is no servicelb, your HAProxy will always see the correct client IP address.
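The flags above can be sketched as a single K3s install command; note that newer K3s releases use --disable instead of the deprecated --no-deploy spelling:

```shell
# Install K3s without the bundled Klipper LB (servicelb) and without Traefik,
# so an ingress you deploy yourself can see the real client IP.
curl -sfL https://get.k3s.io | sh -s - --disable servicelb --disable traefik
```

On an existing cluster, the same flags can be added to the K3s service configuration and the service restarted, rather than reinstalling.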

Just set externalTrafficPolicy to "Local" if using GCP

Add this to the ingress controller Service:

service:
  externalTrafficPolicy: Local
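If the Service already exists, the same setting can be patched in place (assuming the default names from the getting-started manifest):

```shell
# Switch the ingress controller Service to the Local traffic policy so that
# kube-proxy stops SNATing external traffic and pods see the original client IP.
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

Keep in mind that externalTrafficPolicy: Local only applies to NodePort/LoadBalancer Services, and traffic is then only delivered to nodes that actually run a controller pod.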
