Docker Swarm get real IP (client host) in Nginx

I have a stack with nginx and PHP running on a Docker Swarm cluster.

At one point in my PHP application, I need to read the remote address ($_SERVER['REMOTE_ADDR']), which should contain the real IP of the client host accessing my webapp.
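
For illustration, the relevant read is roughly this (a minimal sketch, not my exact code):

<?php
// Sketch: reading the client address inside the PHP app.
// With the setup below, this currently returns an overlay address
// such as 10.255.0.2 instead of the client's real IP.
$clientIp = $_SERVER['REMOTE_ADDR'];
error_log('client ip: ' . $clientIp);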

But the problem is that the IP reported to nginx by the Docker Swarm cluster is an internal overlay address like 10.255.0.2, while the real IP is the external IP of the client host (something like 192.168.101.151).

How can I solve this?

My docker-compose file:

version: '3'

services:
  php:
    image: php:5.6
    volumes:
      - /var/www/:/var/www/
      - ./data/log/php:/var/log/php5
    networks:
      - backend
    deploy:
      replicas: 1
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - /var/www/:/var/www/
      - ./data/log/nginx:/var/log/nginx
    networks:
      - backend
networks:
  backend:

My default.conf (vhost.conf) file:

server {
    listen          80;
    root            /var/www;
    index           index.html index.htm index.php;

    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log error;

    location / {
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;

        try_files   $uri $uri/ /index.php;
    }

    location = /50x.html {
        root   /var/www;
    }

    # set expiration of assets to MAX for caching
    location ~* \.(js|css|gif|png|jp?g|pdf|xml|oga|ogg|m4a|ogv|mp4|m4v|webm|svg|svgz|eot|ttf|otf|woff|ico|webp|appcache|manifest|htc|crx|oex|xpi|safariextz|vcf)(\?[0-9]+)?$ {
            expires max;
            log_not_found off;
    }

    location ~ \.php$ {
        try_files                   $uri =404;
        fastcgi_index               index.php;
        fastcgi_split_path_info     ^(.+\.php)(/.+)$;
        fastcgi_pass                php:9000;
        include                     fastcgi_params;
        fastcgi_param               SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param               PATH_INFO       $fastcgi_path_info;
        fastcgi_read_timeout        300;
    }
}

My nginx config file:

user  nginx;
worker_processes    3;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    keepalive_timeout   15;
    client_body_buffer_size     100K;
    client_header_buffer_size   1k;
    client_max_body_size        8m;
    large_client_header_buffers 2 1k;

    gzip             on;
    gzip_comp_level  2;
    gzip_min_length  1000;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain application/x-javascript text/xml text/css application/xml;

    log_format  main  '$remote_addr - $remote_user [$time_local]  "$request_filename" "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    include /etc/nginx/conf.d/*.conf;
}

For those who don't want to read the whole GitHub thread (https://github.com/moby/moby/issues/25526), the answer that worked for me was to change the config to this:

version: '3.7'
services:
  nginx:
    ports:
      - mode: host
        protocol: tcp
        published: 80
        target: 80
      - mode: host
        protocol: tcp
        published: 443
        target: 81

This still lets the internal overlay network work, but uses iptables tricks to forward those ports directly to the container, so the service inside the container sees the correct source IP address of the packets.

There is no facility in iptables to balance a published port between multiple containers, so you can only bind a given host port to one container per node (which rules out running multiple replicas of the service on the same node).
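
A common way to live with that limitation (not covered in the snippet above, so treat this as a sketch) is to run the proxy as a global service, so each node gets exactly one container bound to the host port:

# Sketch only: one proxy container per node, each bound directly to that node's port 80.
# Service and network names follow the compose file from the question.
version: '3.7'
services:
  web:
    image: nginx:latest
    ports:
      - mode: host
        protocol: tcp
        published: 80
        target: 80
    networks:
      - backend
    deploy:
      mode: global   # exactly one task per node, so the host port maps to a single container
networks:
  backend: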

You can't get this yet through an overlay network. If you scroll up from the bottom of this long-running GitHub issue, you'll see some options for using bridge networks in Swarm with your proxies to work around this for now.

Changing the port binding mode to host worked for me:

ports: 
  - mode: host 
    protocol: tcp 
    published: 8082 
    target: 80

However, your web front end must then be pinned to a specific host inside the Swarm cluster, i.e.:

deploy: 
  placement: 
    constraints: 
      [node.role == manager]
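
Putting the two pieces together with the stack from the question, the web service could look roughly like this (a sketch, assuming you keep publishing port 80 as in the original compose file):

# Sketch: host-mode publishing plus a placement constraint for the nginx service.
web:
  image: nginx:latest
  ports:
    - mode: host        # bypass the ingress routing mesh so the real client IP is preserved
      protocol: tcp
      published: 80
      target: 80
  volumes:
    - /var/www/:/var/www/
    - ./data/log/nginx:/var/log/nginx
  networks:
    - backend
  deploy:
    placement:
      constraints:
        - node.role == manager   # pin the service to a known node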

The X-Real-IP header will be passed through, and you can use it to get the client IP. See http://dequn.github.io/2019/06/22/docker-web-get-real-client-ip/ for reference.
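
On the PHP side, a request header like X-Real-IP shows up in $_SERVER as HTTP_X_REAL_IP, so reading it (with a fallback to REMOTE_ADDR) could look roughly like this sketch, written for PHP 5.6 as used in the question:

<?php
// Sketch: prefer the X-Real-IP header set by a proxy you control,
// otherwise fall back to the peer address seen by PHP-FPM.
$clientIp = isset($_SERVER['HTTP_X_REAL_IP'])
    ? $_SERVER['HTTP_X_REAL_IP']
    : $_SERVER['REMOTE_ADDR'];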
