
Using nginx as a reverse proxy pointing to an AWS ALB

I've successfully set up my nginx public-facing load balancer, with upstreams pointing to an Application Load Balancer which balances my high-availability Elastic Beanstalk environment.

The problem I have is that overnight, it seems, when I visit the URL it hangs. Then, with an nginx reload, it works again...

Some more info on the architecture:

(listeners on the Application Load Balancer)

  • 80 ---> nodejs:4000

  • 8400 ---> php:8400

Public-facing load balancer running on an EC2 small instance:

/etc/nginx/conf.d/my.site.com.conf:

upstream api_container {
    server awseb-AWSEB-PKTUBG0TQ9ME-840688617.eu-west-1.elb.amazonaws.com:8400;
}

upstream app_container {
    server awseb-AWSEB-PKTUBG0TQ9ME-840688617.eu-west-1.elb.amazonaws.com;
}

server {

    listen 80;
    listen [::]:80;
    server_name my.site.com;

    location /.well-known {
       alias /var/www/ssl/.well-known;
    }

    location / {
       rewrite ^ https://my.site.com$request_uri permanent;
    }
}

server {
    listen 443 ssl;
    keepalive_timeout 75s;
    server_name my.site.com;
    add_header Access-Control-Allow-Origin '*';
    add_header Access-Control-Allow-Methods 'POST, GET, OPTIONS, PUT, DELETE';
    add_header Access-Control-Allow-Headers 'Authorization, X-Requested-With, X-Requested-At, enctype, Accept, Content-Type, Content-Disposition, X-Xsrf-Token, X-Csrf-Token';
    add_header Access-Control-Expose-Headers 'Authorization';

    location / {
        include proxy_params;
        proxy_cache STATIC;  # this zone is set in nginx.conf
        proxy_pass http://app_container/;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }

    location /lv1 {
        include proxy_params;
        proxy_pass http://api_container/;
    }

    # ssl certs imported down here
}
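
For reference, the proxy_cache STATIC directive above refers to a cache zone that has to be declared in the http block of nginx.conf. A minimal sketch of that declaration, assuming a typical path and sizing (only the zone name STATIC comes from the config above; the path, sizes, and timings are assumptions):

proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=STATIC:10m
                 max_size=1g inactive=60m use_temp_path=off;

Note that with proxy_cache_use_stale set as above, nginx will serve stale cached responses while the upstream is erroring or timing out, so cached paths can keep working for a while even when the backend is unhealthy.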

/etc/nginx/proxy_params:

proxy_ignore_headers "Cache-Control" "Expires";
proxy_max_temp_file_size 0;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size        1024m;
client_body_buffer_size     4m;
proxy_connect_timeout 300;
proxy_read_timeout 300;
proxy_send_timeout 300;
proxy_intercept_errors off;

Am I missing something fundamental? I'm not an nginx master, but I'm a little stumped... My assumption is that it's something to do with buffering or caching, but I can't put my finger on it.

UPDATE

Turns out it was php-fpm going crazy, using 97-100% CPU on the API instance, so I was getting timeouts and errors from the proxy...

On the API webserver I have nginx serving Laravel through the php-fpm socket, and it's using close to 100% CPU on specific requests. It looks like a code problem...
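
As a side note, one way to confirm what php-fpm is doing under load is its built-in status page. A minimal sketch of enabling it (the status URL and socket path here are assumptions, not taken from my setup):

In the php-fpm pool config (e.g. www.conf):

pm.status_path = /fpm-status

And in the nginx server block on the API instance:

# Expose the php-fpm status page to localhost only
location = /fpm-status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_NAME /fpm-status;
    fastcgi_param SCRIPT_FILENAME /fpm-status;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}

The page reports idle/active processes and slow requests, which makes it easier to tie CPU spikes to specific endpoints.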

Is nginx the right tool for this? Or maybe just Apache and PHP?

Just adding the full answer to this problem. I'm using Laravel as a database API.

Laravel's Eloquent feature is great and easy to write... but apparently resource-thirsty when doing huge queries.

After rewriting the function that was bricking the API from Eloquent features into pure SQL, it works flawlessly again.
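
To illustrate the shape of that change (this is a sketch, not the actual function; the model, relation, and column names below are made up):

use Illuminate\Support\Facades\DB;

// Eloquent version: convenient, but hydrates a full model object per row
// and loads related models too, which gets expensive on huge result sets.
$rows = Order::with('items')->where('status', 'paid')->get();

// Raw SQL through the query builder: one hand-written query returning
// plain stdClass rows, with no model hydration at all.
$rows = DB::select(
    'select o.id, o.total, i.sku
       from orders o
       join order_items i on i.order_id = o.id
      where o.status = ?',
    ['paid']
);

Eloquent's with() avoids the N+1 query problem, but it still builds a model instance for every row, and on large result sets that hydration is typically where the CPU goes.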

Lesson learnt: don't blame third-party tools every time :3
