
Django, Python, mod_wsgi and Apache worker

I just moved from Apache prefork to the worker MPM and started running mod_wsgi in daemon mode. So far, so good. I haven't experienced max load yet, but the server seems more consistent and we're no longer seeing random requests wait two minutes for a mod_wsgi response. Memory footprint has gone from 3.5G to 1G. This is awesome. We're running on a single VPS with 6G of RAM. There's one Django app running on this server, along with an instance of memcache to which we've allocated 1G of RAM. We have a separate MySQL server.

Our application is bulky and can certainly be optimized. We're using NewRelic to troubleshoot some of the slower-running pages now. I've read a lot about optimizing mod_wsgi/Apache but, like everyone else, I'm still left with a few questions.

Our average application page load time is 650-750ms. A lot of our pages are in the 200ms range, but we've got some dogs that take 2-5+ seconds to load. We get around 15-20 requests/second during normal load times and 30-40 requests/second during peak times, which may last 30-60 minutes.
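As a rough back-of-envelope from the numbers above (my own estimate, assuming requests arrive fairly evenly):

concurrent requests ≈ request rate × average response time
                    ≈ 40 req/s × 0.75 s ≈ 30 in flight at peak

So on average only a few dozen requests should be in flight at once, although the 2-5+ second pages will hold threads for much longer.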

Here's my Apache config, running the worker MPM.

StartServers        10
MaxClients         400
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0

I started out with the defaults (StartServers=2 and MaxClients=150), but our site slowed way down under minimal load. I'm guessing it took a long time to spin up servers as requests came in. We're serving 90% of our media from S3. The other 10% is served through Apache, either on our https pages or because someone lazily pointed at our local server. At nominal load, 15 worker processes end up being created, so I'm thinking I should probably just set StartServers=15? With this configuration I'm assuming I have 15 worker processes running (which I can confirm with NewRelic) with 25 threads each (which I don't know how to confirm, but if I'm reading the worker MPM docs right, each child gets ThreadsPerChild=25 threads and the process count tops out at MaxClients/ThreadsPerChild = 400/25 = 16 — see the mod_status idea below).
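To actually confirm the process/thread counts at runtime, I'm thinking of enabling mod_status with the extended scoreboard. Something like this (untested sketch; the Require line assumes Apache 2.4, on 2.2 it would be the Order/Allow directives instead):

# Sketch only: expose the worker scoreboard so processes and threads are
# visible at /server-status (needs mod_status loaded).
ExtendedStatus On

<Location /server-status>
    SetHandler server-status
    # Apache 2.4 syntax; on 2.2 use Order deny,allow / Allow from 127.0.0.1
    Require local
</Location>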

My Apache/mod_wsgi directives look like this:

<VirtualHost *:80>
    # Some stuff
    WSGIDaemonProcess app1 user=http group=http processes=10 threads=20
    WSGIProcessGroup app1
    WSGIApplicationGroup app1
    WSGIScriptAlias / /path/to/django.wsgi
    WSGIImportScript /path/to/django.wsgi process-group=app1 application-group=app1    
    # Some more stuff    
</VirtualHost>

<VirtualHost *:443>
    # Some stuff
    WSGIDaemonProcess app1-ssl user=http group=http processes=2 threads=20
    WSGIProcessGroup app1-ssl
    WSGIApplicationGroup app1-ssl
    WSGIScriptAlias / /path/to/django.wsgi
    WSGIImportScript /path/to/django.wsgi process-group=app1-ssl application-group=app1-ssl
    # Some more stuff
</VirtualHost>

Having a different WSGIDaemonProcess/WSGIProcessGroup for the SSL side of my site, well, that just doesn't feel right at all. I'm 100% sure I've mucked something up here. To the greater point, though, I've allocated 200+40 threads for mod_wsgi to handle requests from Apache, leaving 160 Apache threads to deal with whatever media needs to be served directly (over SSL, or through laziness in not pointing to S3).
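One idea I've been toying with (just a sketch, untested, with the process/thread counts carried over from the :80 group as placeholders) is to define the daemon process group once at server scope and point both vhosts at it, so http and https share the same pool of daemon processes:

# Untested sketch: one daemon group defined outside any VirtualHost, shared
# by both the :80 and :443 vhosts. processes/threads are placeholder values.
WSGIDaemonProcess app1 user=http group=http processes=10 threads=20

<VirtualHost *:80>
    # Some stuff
    WSGIProcessGroup app1
    # %{GLOBAL} is what I've seen recommended for a single app;
    # keeping "app1" as before should also work
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /path/to/django.wsgi
    WSGIImportScript /path/to/django.wsgi process-group=app1 application-group=%{GLOBAL}
    # Some more stuff
</VirtualHost>

<VirtualHost *:443>
    # Some stuff
    WSGIProcessGroup app1
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /path/to/django.wsgi
    # Some more stuff
</VirtualHost>

Is that the right direction, or is there a good reason to keep the SSL traffic in its own daemon group?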

So given our application load above, can anyone suggest ways I can improve the performance of my site? Am I dealing with the SSL/mod_wsgi directives properly? Where's Graham? ;)
