
Problems with Nginx, php-fpm and Memcached

I have a website which has been getting more traffic than usual in the past months. I want this website to serve more users in the same amount of time without changing the hardware. At the moment I use Apache2 with WordPress and Memcached. I wanted to know if I can use Nginx to get more performance out of this site. When I run Nginx on the web server with WordPress and run a test with 10000 users over a period of 60 seconds, I get only 600 successful answers; the other 9400 connections get errors (mostly timeouts).
When I use Memcached in addition to the previous configuration, I get 9969 successful answers, but the maximum number of users per second does not go over 451.
But on my site I sometimes have over 1000 users per second. So can anybody tell me what I am doing wrong?

System:
AWS EC2 cloud server, 2 GHz, 650 MB RAM
Ubuntu 13.10
Nginx 1.4.7
Memcached 1.4.14
php-fpm for PHP 5.5.3

The number you should consider is the Avg error rate. Your WP + Nginx + Memcached configuration does not look too bad, so in my opinion this is a good choice. Maybe you can increase the -m parameter of memcached to match half of your RAM.

BUT: memcached does not guarantee that the data will be available in memory, and you have to be prepared for a cache-miss storm. One interesting approach to avoid a miss storm is to set the expiration time with some random offset, say 10 + [0..10] minutes, which means some items will be stored for 10 minutes and others for up to 20 minutes (the goal is that not all items expire at the same time).
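The jittered expiration above can be sketched in a few lines; this is a minimal Python illustration of the idea (in WordPress you would pass the computed value as the expiration argument of wp_cache_set(); the 600-second base is just the 10-minute example from the text):

```python
import random

BASE_TTL = 600     # 10 minutes, the base lifetime
JITTER_MAX = 600   # up to 10 extra minutes of random offset

def jittered_ttl(base=BASE_TTL, jitter=JITTER_MAX):
    """Return base + [0..jitter] seconds, so cached items
    do not all expire at the same moment."""
    return base + random.randint(0, jitter)

# Each cached item gets a slightly different lifetime
# somewhere between 600 and 1200 seconds:
ttls = [jittered_ttl() for _ in range(5)]
assert all(600 <= t <= 1200 for t in ttls)
```

Because the expirations are spread over the whole window, only a fraction of the cache misses at any given moment, instead of all items turning cold at once.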

Also, no matter how much memory you allocate for memcached, it will use only the amount it needs, i.e. it allocates only the memory actually used. With the -k option however (which is disabled in your config), the entire memory is reserved when memcached is started, so it always allocates the whole amount of memory, whether it needs it or not.
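For the -m suggestion above, the startup flags could look like this (the 320 MB value and the memcache user are assumed for a 650 MB instance, not taken from the original config):

```shell
# Allow the cache to grow to about half of the 650 MB RAM.
# -k (lock all pages in memory) is deliberately left off, so
# memory is allocated on demand rather than reserved at startup.
memcached -d -u memcache -m 320 -p 11211
```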

This number of 451 connections can actually vary; it depends. It is always a good idea to look at the averages when performing benchmarks, i.e. it is better to have a 0% Avg error rate with 451 served clients than a 65% Avg error rate with 8200+ served clients.

However, in order to offload some more resources, you can use additional caching for WordPress; there are plenty of plugins, and I personally wrote one for that purpose.

Regarding the nginx configuration, you can also tune some parameters there:

worker_rlimit_nofile 100000;

worker_connections 4000;

# optimized to serve many clients with each thread, essential for linux
use epoll;

# accept as many connections as possible, may flood worker connections if set too low
multi_accept on;

# cache information about FDs, frequently accessed files
# can boost performance, but you need to test those values
open_file_cache max=200000 inactive=20s; 
open_file_cache_valid 30s; 
open_file_cache_min_uses 2;
open_file_cache_errors on;

# to boost IO on HDD we can disable access logs
access_log off;

# copies data between one FD and other from within the kernel
# faster than read() + write()
sendfile on;

# send headers in one piece, it's better than sending them one by one
tcp_nopush on;

# don't buffer data sent, good for small data bursts in real time
tcp_nodelay on;
# number of requests a client can make over keep-alive -- for testing
keepalive_requests 100000;

# allow the server to close connections on non-responding clients, this will free up memory
reset_timedout_connection on;

# request timed out -- default 60
client_body_timeout 10;

# if client stops responding, free up memory -- default 60
send_timeout 2;

# reduce the data that needs to be sent over network
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
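After changing these directives, it is worth validating the configuration before applying it; these are the standard nginx command-line flags:

```shell
# Check the edited configuration for syntax errors, and only if
# it passes, reload nginx without dropping active connections.
nginx -t && nginx -s reload
```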

The problem we had was not a real problem. We only interpreted the test results wrongly.

The 400-users limit wasn't an actual limit; the server was able to keep the users at a constant level because it was fast enough to answer all requests right away.

The results of the tests are not comparable to my site, which gets 1k users, as it of course has better hardware than an AWS free instance. But I think 400 users per second is a very good result for such a "weak" server.

Question solved, I think, because of my own carelessness in reading test results...

Thanks for your help anyway, bodi0.
