
Nginx PHP Consistently Slower than Apache 2.2

In load testing comparing Apache 2.2 and Nginx 1.2.6, running fully stock packages on Ubuntu 13.04, I consistently see lower performance for Nginx PHP requests than for Apache PHP requests; I'm looking for guidance to get our Nginx performance above that of Apache under all circumstances, if possible.

The Apache settings are fairly standard, but the Nginx settings were customized considerably; they are listed below the benchmark results.

I used Siege v3.0.2 ( http://www.joedog.org/siege-home/ ) to generate results for a single concurrent user (c1), 10 concurrent users (c10), and 100 concurrent users (c100); the results are as follows (an illustrative Siege invocation is sketched after the results tables):

Apache Results:

      Date & Time,  Trans,  Elap Time,  Data Trans,  Resp Time,  Trans Rate,  Throughput,  Concurrent,    OKAY,   Failed
**** c1 Apache Static ****
2013-08-01 00:54:12,   5982,      59.23,         338,       0.01,      101.00,        5.71,        1.00,    5982,       0
**** c1 Apache PHP ****
2013-08-01 00:55:12,    549,      59.98,          88,       0.11,        9.15,        1.47,        1.00,     549,       0
**** c1 Apache Combined ****
2013-08-01 00:56:12,   1609,      59.98,         139,       0.04,       26.83,        2.32,        1.00,    1609,       0
**** c10 Apache Static ****
2013-08-01 00:57:12,  35983,      59.97,        2039,       0.02,      600.02,       34.00,        9.99,   35983,       0
**** c10 Apache PHP ****
2013-08-01 00:58:12,   3769,      59.98,         610,       0.16,       62.84,       10.17,        9.99,    3769,       0
**** c10 Apache Combined ****
2013-08-01 00:59:12,  10928,      59.98,         947,       0.05,      182.19,       15.79,        9.99,   10928,       0
**** c100 Apache Static ****
2013-08-01 01:00:12,  44581,      59.97,        2523,       0.13,      743.39,       42.07,       98.63,   44581,       0
**** c100 Apache PHP ****
2013-08-01 01:01:12,   4427,      59.98,         721,       1.32,       73.81,       12.02,       97.34,    4427,       1
**** c100 Apache Combined ****
2013-08-01 01:02:12,  12735,      59.98,        1125,       0.47,      212.32,       18.76,       99.68,   12735,       0

Nginx Results:

      Date & Time,  Trans,  Elap Time,  Data Trans,  Resp Time,  Trans Rate,  Throughput,  Concurrent,    OKAY,   Failed
**** c1 Nginx Static ****
2013-08-01 02:36:13,   9040,      59.10,         274,       0.01,      152.96,        4.64,        1.00,    9040,       0
**** c1 Nginx PHP ****
2013-08-01 02:37:13,    581,      59.98,          18,       0.10,        9.69,        0.30,        1.00,     581,       0
**** c1 Nginx Combined ****
2013-08-01 02:38:13,   1786,      59.59,          55,       0.03,       29.97,        0.92,        1.00,    1786,       0
**** c10 Nginx Static ****
2013-08-01 02:39:13,  44557,      59.98,        1353,       0.01,      742.86,       22.56,        9.99,   44557,       0
**** c10 Nginx PHP ****
2013-08-01 02:40:13,   3766,      59.98,         120,       0.16,       62.79,        2.00,        9.98,    3766,       0
**** c10 Nginx Combined ****
2013-08-01 02:41:13,  10962,      59.98,         339,       0.05,      182.76,        5.65,        9.98,   10962,       0
**** c100 Nginx Static ****
2013-08-01 02:42:13,  54463,      59.98,        1642,       0.11,      908.02,       27.38,       99.70,   54463,       0
**** c100 Nginx PHP ****
2013-08-01 02:43:13,   3649,      59.98,         117,       1.62,       60.84,        1.95,       98.70,    3649,       0
**** c100 Nginx Combined ****
2013-08-01 02:44:13,  10802,      59.98,         334,       0.55,      180.09,        5.57,       98.63,   10802,       0
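
For reference, each 60-second run above corresponds to a Siege invocation along these lines. The URL list, mark text, benchmark flag, and log path shown here are placeholders, not the exact options used to produce the numbers above, so treat this as an illustrative sketch only:

siege --concurrent=100 --time=60S --benchmark \
      --file=urls.txt --mark="c100 Nginx PHP" --log=/var/log/siege.log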

The data I'm concerned about is from the c100 "PHP" and "Combined" results. Apache is quite a bit faster, and I'm wondering how that's possible given all the benchmarks online that show the opposite.

Both servers are:

  1. Running on a quad-core Xeon processor
  2. 8GB RAM
  3. Connected to a MongoDB v2.2 database on the same network
  4. PHP-FPM is set to use 100 PHP processes

Apache (settings are very close to stock):

  1. Running on CentOS 5
  2. Apache 2.2
  3. mod_php

Nginx:

  1. Ubuntu 13.04
  2. Nginx 1.2.6
  3. PHP-FPM (FastCGI) with 100 PHP processes

nginx.conf

pid /run/nginx.pid;
user www-data;
worker_processes 4;


events {
    worker_connections 1024;
}


http {
    # APACHE BACKWARDS COMPATIBILITY ENVIRONMENT VARIABLES
    map $request_uri $my_script_url {
        default $request_uri;
        ~^(?<script_filename>.+\.(php))(.*)?$ $script_filename; #/test.php or /test.php?hello=world
        ~^(?<script_filename>.*)(\?.*)$ $script_filename; #/tos?hello=world
        ~^(?<script_filename>.*)(\?.*)?$ $script_filename; #/tos or /tos/hello/world or /tos/hello/world?omg=what
    }


    # BASE SETTINGS
    charset utf-8;
    default_type application/octet-stream;
    include /etc/nginx/mime.types;
    server_tokens off;


    # CLIENT CACHING SETTINGS
    add_header Last-Modified "";
    expires 7d;


    # CONNECTION SETTINGS
    client_body_timeout 15s;
    client_header_timeout 30s;
    client_max_body_size 8m;
    keepalive_requests 10000;
    keepalive_timeout 30s;
    reset_timedout_connection on;
    resolver_timeout 5s;
    send_timeout 15s;
    tcp_nopush on;


    # FASTCGI SETTINGS
    # fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m;


    # FILE CACHING AND PERFORMANCE SETTINGS
    open_file_cache max=10000 inactive=20s;
    open_file_cache_errors off;
    open_file_cache_min_uses 2;
    open_file_cache_valid 30s;
    sendfile on;


    # GZIP SETTINGS
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_types
        text/css
        text/plain
        text/javascript
        application/javascript
        application/json
        application/x-javascript
        application/xml
        application/xml+rss
        application/xhtml+xml
        application/x-font-ttf
        application/x-font-opentype
        application/vnd.ms-fontobject
        image/svg+xml
        image/x-icon
        application/rss+xml
        application/atom+xml;
    gzip_vary on;


    # LOGGING SETTINGS
    access_log /var/log/nginx/access.log combined buffer=16k;
    error_log /var/log/nginx/error.log crit;
    open_log_file_cache max=100 inactive=1m min_uses=1 valid=2m;


    # SSL SETTINGS
    # ssl_ciphers !aNULL:!eNULL:FIPS@STRENGTH;
    # ssl_prefer_server_ciphers on;
    # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 3m;


    # OTHER GLOBAL CONFIGURATION FILES
    include /etc/nginx/conf.d/*.conf;


    # VIRTUAL HOST CONFIGS
    include /etc/nginx/sites-enabled/*;
}

Virtual Host Config

server {
    # BASE SETTINGS
    listen 80;
    root /var/www/tbi/example/htdocs;
    # server_name local.example.com;
    server_name www.example.com;


    # LOG SETTINGS
    access_log /var/log/nginx/www.example.com.access.log combined buffer=64k;
    error_log /var/log/nginx/www.example.com.error.log crit;


    # LOCATIONS
    location / {
        index index.php index.html;
        try_files $uri @extensionless-php;
    }

    location ~ \.(ttf|otf|eot|woff)$ {
        add_header Access-Control-Allow-Origin *;
    }

    # location /nginx_status {
    #   See a brief synopsis of real-time, instantaneous performance
    #   stub_status on;
    # }

    location ~ \.php$ {
        expires off;

        # FASTCGI SETTINGS
        fastcgi_index index.php;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;

        # FASTCGI CACHE SETTINGS
        # fastcgi_cache microcache;
        # fastcgi_cache_bypass $http_pragma;
        # fastcgi_cache_key $scheme$host$request_uri$request_method;
        # fastcgi_cache_methods GET HEAD;
        # fastcgi_cache_use_stale updating error timeout invalid_header http_500;
        # fastcgi_cache_valid any 1m;
        # fastcgi_ignore_headers "Cache-Control" "Expires" "Set-Cookie";

        # TBI ENVIRONMENT VARIABLES
        fastcgi_param TBI_CONFIG /var/www/tbi/configs/;
        fastcgi_param TBI_DOMAIN example.com;
        # fastcgi_param TBI_ENV local;
        fastcgi_param TBI_ENV www;
        fastcgi_param TBI_RELEASETIME 0;

        # APACHE BACKWARDS COMPATIBILITY ENVIRONMENT VARIABLES
        fastcgi_param SCRIPT_URI $scheme://$http_host$my_script_url;
        fastcgi_param SCRIPT_URL $my_script_url;
    }

    location @extensionless-php {
        if (-f $request_filename.php) {
            rewrite ^/(.*)$ /$1.php last;
        }
        rewrite ^/(.*)$ /index.php?$1 last;
    }
}

Any advice related to making Nginx faster would be greatly appreciated. I'd like to avoid kernel and TCP/IP tuning if possible.

PHP performance between Apache and nginx should be fairly similar, as PHP itself is a much bigger bottleneck than the web server in front of it.

In your case the performance looks practically identical at concurrency 1 and concurrency 10; nginx/PHP-FPM only becomes slower at concurrency 100.

Despite what you may assume, running more PHP-FPM processes in parallel doesn't result in faster performance for many concurrent requests. Beyond a certain point PHP gains little from additional parallelism, and adding still more processes can reduce overall throughput because of extra context switching, more random contention for I/O, higher memory use, and so on.

In my testing, the sweet spot in terms of PHP-FPM processes was around 6 to 10 (I use 8). This gave me the highest performance even when testing with hundreds of concurrent connections; adding more PHP-FPM processes beyond that started to slow things down. Your mileage may vary, but 100 is unlikely to be the sweet spot on any server.
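
A minimal PHP-FPM pool sketch along these lines pins the pool near that sweet spot. The pool name is the Ubuntu default and assumed here, the socket path is taken from the question's fastcgi_pass, and the process count is illustrative rather than a measured recommendation:

[www]
; static pool sized near the 6-10 process sweet spot described above
listen = /var/run/php5-fpm.sock
pm = static
pm.max_children = 8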

Note that your number of PHP-FPM processes does not have to be equal to or greater than the number of concurrent connections you support. To put it another way: having 8 PHP-FPM processes does not mean you are limited to 8 concurrent connections. As long as your listen.backlog in PHP-FPM is sufficiently high, nginx will still hold many hundreds of connections open concurrently; PHP-FPM simply works through them 8 at a time internally rather than all at once. Each individual request then spends much less time actually executing in PHP because it contends with fewer other processes, so a test with hundreds of concurrent users will still see all requests served quickly and successfully.
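
The backlog itself is a one-line setting in the same pool file as above; the value below is illustrative and should be sized for the bursts you expect:

; let excess requests queue on the FPM socket instead of being refused
listen.backlog = 1024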

Note that I also found another way to speed up PHP on my nginx setup: increasing the number and size of the fastcgi_buffers. Mine is currently set to fastcgi_buffers 32 16k;.
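
In nginx that is a single directive, valid in the http, server, or location context; placing it next to the question's existing FastCGI settings, as sketched below, is just one option:

location ~ \.php$ {
    # more and larger FastCGI buffers, as suggested above
    fastcgi_buffers 32 16k;
}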

You have enabled on-the-fly gzip compression in nginx, which is an expensive operation, so what do you expect? Even worse, you have configured compression level 5, which makes it slower still.
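
A quick way to take that variable out of the benchmark is an http-block change along these lines (values are illustrative):

http {
    # drop to the cheapest compression level while load testing...
    gzip_comp_level 1;
    # ...or take on-the-fly compression out of the equation entirely:
    # gzip off;
}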
