
How much overhead does a caching reverse proxy bring for static content

If I query a static 200 kB HTML file on an nginx server with t parallel threads, it comes back in m ms and I reach a throughput of about r req/sec (averaged over about 2000 requests):

t:10  m:13  r:440
t:20  m:20  r:475
t:50  m:67  r:547
t:80  m:98  r:517
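As a rough sanity check, concurrency, latency and throughput are tied together by Little's law: with t clients each waiting m seconds per request, the theoretical upper bound is r = t / m req/sec. A small sketch comparing that bound against the nginx measurements above (the numbers are taken directly from the post):

```python
# Sanity-check measured throughput against the bound from Little's law:
# r_ideal = t / m, with t = parallel clients and m = mean latency in seconds.
# (t, m_seconds, r_measured) triples are the nginx values from the post.
measurements = [
    (10, 0.013, 440),
    (20, 0.020, 475),
    (50, 0.067, 547),
    (80, 0.098, 517),
]

for t, m, r_measured in measurements:
    r_ideal = t / m  # upper bound implied by concurrency and latency
    print(f"t={t:2d}  measured={r_measured} req/s  ideal={r_ideal:.0f} req/s")
```

The measured throughput sits well below the ideal bound at low concurrency (440 vs. ~769 req/s at t:10), which suggests some per-request time is spent outside the measured server latency (client overhead, connection setup); that gap is worth keeping in mind when comparing a proxy's numbers against nginx's.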

I'm developing a reverse proxy, which adds some time per request. If I run the same tests against it, without modifying or caching the response (RFC 2616 is respected), I get these results (I haven't done much performance tuning so far):

t:10  m:42   r:130
t:20  m:80   r:121
t:50  m:133  r:194
t:80  m:182  r:258

If the proxy serves a cached version of the file, I get these results:

t:10  m:74   r:118
t:20  m:116  r:150
t:50  m:236  r:155
t:80  m:402  r:142
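The per-request overhead the proxy adds can be read off by subtracting the nginx latency at the same concurrency level. A small sketch computing those deltas from the measurements above:

```python
# Mean latency (ms) per concurrency level, taken from the numbers in the post.
nginx   = {10: 13, 20: 20,  50: 67,  80: 98}
proxied = {10: 42, 20: 80,  50: 133, 80: 182}
cached  = {10: 74, 20: 116, 50: 236, 80: 402}

for t in sorted(nginx):
    print(f"t={t:2d}  proxy adds {proxied[t] - nginx[t]:3d} ms, "
          f"cached adds {cached[t] - nginx[t]:3d} ms")
```

Notably, the cached path is slower than the uncached pass-through at every concurrency level (+61 ms vs. +29 ms at t:10, +304 ms vs. +84 ms at t:80), which points at the cache backend rather than the proxying itself, consistent with the update below about the Couchbase client.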

Now my question: are these good values? I couldn't find many values to compare with. I just want to know: is it okay to add 30-50 ms to every request when requesting with 10 parallel clients? Is it okay that the throughput drops this much?

How much time do Squid, Varnish or Apache Traffic Server add? Does somebody have comparable values?

Okay, those values were very bad... now I'm under 20 ms for most of them. The reason was that I used the clouchnode client for Couchbase; now I'm using Couchbase's memcache interface.
