
Using Nginx, node-http-proxy to mask IP addresses

First of all, I'd like to apologize for the long post!

I'm almost at the point of figuring everything out! What I want to do is use node-http-proxy to mask a series of dynamic IPs that I get from a MySQL database. I do this by redirecting the subdomains to node-http-proxy and parsing them from there. I was able to do this locally without any problems.
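For context, here is a minimal sketch of the kind of setup described above. It is an illustration only: the table and column names (proxy_targets, subdomain_id, ip_address), the database credentials, and the listening port are assumptions, not taken from the actual code.

// Minimal sketch: proxy requests for N.example.co to an IP looked up in MySQL.
// Table/column names, credentials, and the port are placeholders.
const http = require('http');
const httpProxy = require('http-proxy');
const mysql = require('mysql2/promise');

const proxy = httpProxy.createProxyServer({});
const pool = mysql.createPool({ host: 'db', user: 'user', password: 'secret', database: 'proxies' });

http.createServer(async (req, res) => {
  // e.g. "123.example.co" -> "123"
  const subdomain = (req.headers.host || '').split('.')[0];
  const [rows] = await pool.query(
    'SELECT ip_address FROM proxy_targets WHERE subdomain_id = ?',
    [subdomain]
  );
  if (!rows.length) {
    res.writeHead(404);
    return res.end('Unknown subdomain');
  }
  // node-http-proxy expects a full URL as the target, scheme included.
  const target = `http://${rows[0].ip_address}`;
  proxy.web(req, res, { target, ws: true }, (err) => {
    res.writeHead(502);
    res.end('Proxy error: ' + err.message);
  });
}).listen(5050);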

Remotely, it sits behind an Nginx web server with HTTPS enabled (I have a wildcard certificate issued through Let's Encrypt, and a Comodo SSL certificate for the domain). I managed to configure Nginx so it passes requests on to node-http-proxy without problems. The only problem is that the latter is giving me the following:

  The error is { Error: connect ECONNREFUSED 127.0.0.1:80
      at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
    errno: 'ECONNREFUSED',
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 80 }

This happens whenever I set:

proxy.web(req, res, { target, ws: true });

And I don't know if the problem is the remote address (highly unlikely, since I'm able to connect to it from a secondary device) or if I have misconfigured Nginx (highly likely). There's also the possibility that it's clashing with Nginx, which is listening on port 80, but I don't know why node-http-proxy would connect through port 80 in the first place.

Some additional info: there's a Ruby on Rails app running side by side as well. node-http-proxy, Nginx, and Ruby on Rails each run in their own Docker container. I don't think the problem comes from Docker, since I was able to test all of this locally without any issues.

Here's my current nginx.conf (I have replaced my domain name with example.co for security reasons):

The server block with server_name "~^\d+\.example\.co$"; is where I want requests redirected to node-http-proxy, whereas example.co is where the Ruby on Rails application lives.

# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180  

upstream puma_example_docker_app {
  server app:5000;
}


server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    # Enable once you solve wildcard subdomain issue.
    return 301 https://$host$request_uri;
}

server {

  server_name "~^\d+\.example\.co$";

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  # ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  # ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;




  location / {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://ipmask_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }
}





# SSL configuration was obtained through Mozilla's 
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {

  server_name localhost example.co www.example.co; # puma_example_docker_app;

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  # ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  #ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;

  # resolver 127.0.0.1;
  # https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx

  # The above was generated through Mozilla's SSL Config Generator
  # https://mozilla.github.io/server-side-tls/ssl-config-generator/

  # This is important for Rails to accept the headers, otherwise it won't work:
  # AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
  underscores_in_headers on; 

  client_max_body_size 4G;
  keepalive_timeout 10;

  error_page 500 502 504 /500.html;
  error_page 503 @503;


  root /var/www/example/public;
  try_files $uri/index.html $uri @puma_example_docker_app;

  # This is a new configuration and needs to be tested.
  # Final slashes are critical
  # https://stackoverflow.com/a/47658830/1057052
  location /kibana/ {
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/.htpasswd;
      #rewrite ^/kibanalogs/(.*)$ /$1 break;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      proxy_pass http://kibana:5601/;

  }


  location @puma_example_docker_app {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://puma_example_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }

  location ~ ^/(assets|images|javascripts|stylesheets)/   {    
      try_files $uri @rails;     
      access_log off;    
      gzip_static on; 

      # to serve pre-gzipped version     
      expires max;    
      add_header Cache-Control public;     

      add_header Last-Modified "";    
      add_header ETag "";    
      break;  
   } 

  location = /50x.html {
    root html;
  }

  location = /404.html {
    root html;
  }

  location @503 {
    error_page 405 = /system/maintenance.html;
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html break;
    }
    rewrite ^(.*)$ /503.html break;
  }

  if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  if (-f $document_root/system/maintenance.html) {
    return 503;
  }

  location ~ \.(php|html)$ {
    return 405;
  }
}

Current docker-compose file:

# This is a docker compose file that will pull from the private
# repo and will use all the images. 
# This will be an equivalent for production.

version: '3.2'
services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console. 
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that 
    # tells rails and puma that an instance is running. This was causing issues, 
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash
  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  # 
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana
  # Defining the ELK Stack! 
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it. 
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx


# # Volumes are the recommended storage mechanism of Docker. 
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
    elk:
      driver: bridge
    nginx:
      driver: bridge

Thank you very much!

Waaaaaaitttt. There was no problem with the code!

The problem was that I was trying to pass a bare IP address without prepending http:// to it! With http:// prepended, everything works!

Example:

I was doing:

proxy.web(req, res, { target: '128.29.41.1', ws: true })

When in fact this was the answer:

proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
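This behaviour is easier to understand knowing that node-http-proxy parses a string target as a URL; without a scheme the host part isn't recognized, so the connection effectively falls back to localhost on port 80, which matches the ECONNREFUSED 127.0.0.1:80 error above. As a defensive measure, the dynamic target can be normalized before it is handed to the proxy. The helper below is a sketch under that assumption (normalizeTarget and its optional port argument are illustrative, not part of the original code):

const httpProxy = require('http-proxy');
const proxy = httpProxy.createProxyServer({});

// Hypothetical helper: ensure the target handed to node-http-proxy is a full
// URL, even if the database only stores a bare IP and, optionally, a port.
function normalizeTarget(ip, port) {
  const base = /^https?:\/\//i.test(ip) ? ip : `http://${ip}`;
  return port ? `${base}:${port}` : base;
}

// normalizeTarget('128.29.41.1')       -> 'http://128.29.41.1'
// normalizeTarget('128.29.41.1', 5050) -> 'http://128.29.41.1:5050'

// Inside the request handler:
// proxy.web(req, res, { target: normalizeTarget(ipFromDatabase), ws: true });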
