
How do I improve jetty response times?

I'm trying to speed test jetty (to compare it with apache) for serving dynamic content.

I'm testing this using three client threads, each requesting again as soon as a response comes back. These are running on a local box (OSX 10.5.8 MacBook Pro). Apache is pretty much straight out of the box (XAMPP distribution), and I've tested Jetty 7.0.2 and 7.1.6.
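For reference, that kind of test loop can be sketched as below - a minimal, self-contained Java sketch that uses the JDK's built-in `HttpServer` as a stand-in target (the class name, counts, and URL here are illustrative; for a real run you would point the URL at Jetty's `/hello/` instead):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.*;
import java.util.concurrent.*;

public class LatencyTest {
    // Fire requests from `threads` client threads, each re-requesting as soon
    // as a response comes back, and return per-request latencies in ms.
    public static List<Long> run(int threads, int requestsPerThread) throws Exception {
        // Stand-in server on an ephemeral port; replace with Jetty for a real test.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello/", ex -> {
            byte[] body = "Hello".getBytes();
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();

        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                try {
                    for (int i = 0; i < requestsPerThread; i++) {
                        long start = System.nanoTime();
                        HttpURLConnection conn = (HttpURLConnection)
                                new URL("http://127.0.0.1:" + port + "/hello/").openConnection();
                        conn.getInputStream().readAllBytes();   // wait for the full response
                        conn.disconnect();
                        latencies.add((System.nanoTime() - start) / 1_000_000L);
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        server.stop(0);
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        List<Long> l = run(3, 50);
        Collections.sort(l);
        System.out.println("samples=" + l.size()
                + " median=" + l.get(l.size() / 2) + "ms max=" + l.get(l.size() - 1) + "ms");
    }
}
```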

Apache is giving me spiky times: response times up to 2000ms, but an average of 50ms, and if you remove the spikes (about 2% of calls) the average is 10ms per call. (This was to a PHP hello world page.)
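That "average without the spikes" is a trimmed mean, which can be computed like this (a small illustrative sketch; the synthetic sample data just mimics the numbers above - 98 calls at 10ms plus 2% spikes at 2000ms, which works out to an overall mean of about 50ms):

```java
import java.util.Arrays;

public class TrimmedMean {
    // Mean of sorted[from..to).
    static double mean(long[] xs, int from, int to) {
        double sum = 0;
        for (int i = from; i < to; i++) sum += xs[i];
        return sum / (to - from);
    }

    // Returns { plain mean, mean after dropping the slowest trimPercent of samples }.
    public static double[] plainAndTrimmed(long[] samplesMs, double trimPercent) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int keep = (int) Math.round(sorted.length * (1.0 - trimPercent / 100.0));
        return new double[] { mean(sorted, 0, sorted.length), mean(sorted, 0, keep) };
    }

    public static void main(String[] args) {
        // 98 fast calls at 10ms plus 2 spikes at 2000ms.
        long[] samples = new long[100];
        Arrays.fill(samples, 10);
        samples[0] = 2000;
        samples[1] = 2000;
        double[] r = plainAndTrimmed(samples, 2.0);
        System.out.printf("mean=%.1fms trimmed=%.1fms%n", r[0], r[1]);  // mean=49.8ms trimmed=10.0ms
    }
}
```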

Jetty is giving me no spikes, but response times of about 200ms.

This was calling the localhost:8080/hello/ page that is distributed with jetty, starting jetty with java -jar start.jar .

This seems slow to me, and I'm wondering if it's just me doing something wrong.

Any suggestions on how to get better numbers out of Jetty would be appreciated.

Thanks

Well, since I am successfully running a site with some traffic on Jetty, I was pretty surprised by your observation.

So I just tried your test. With the same result.

So I decompiled the Hello servlet which comes with Jetty. And I had to laugh - it really includes the following line:

 Thread.sleep(200L);

You can see for yourself.
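A minimal sketch of why that one line puts a hard floor on response time (the handle() method here is a stand-in for the servlet's service method, not Jetty's actual code):

```java
public class SleepFloor {
    // A handler that sleeps 200ms can never answer in under 200ms, however
    // fast the container is - which matches the ~200ms measured above.
    static void handle() throws InterruptedException {
        Thread.sleep(200L); // the line found in the decompiled Hello servlet
    }

    // Time a single call to the handler, in milliseconds.
    public static long timedCallMs() throws InterruptedException {
        long start = System.nanoTime();
        handle();
        return (System.nanoTime() - start) / 1_000_000L;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("elapsed=" + timedCallMs() + "ms");
    }
}
```

Remove the sleep (or test against your own servlet instead of the bundled demo) and the 200ms floor disappears.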

My own experience with Jetty performance: I ran multi-threaded load tests on my real-world app where I had a throughput of about 1000 requests per second on my dev workstation...

Note also that your speed test is really just a latency test, which is fine so long as you know what you are measuring. But Jetty does trade off latency for throughput, so there are often servers with lower latency, but lower throughput as well.

Realistic traffic for a webserver is not 3 very busy connections - 1 browser will open 6 connections, so that represents half a user. More realistic traffic is many hundreds or thousands of connections, each of them mostly idle.

Have a read of my blogs on this subject: https://webtide.com/truth-in-benchmarking/ and https://webtide.com/lies-damned-lies-and-benchmarks-2/

You should definitely check it with a profiler. Here are instructions on how to set up remote profiling with Jetty:

http://sujitpal.sys-con.com/node/508048/mobile
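The linked article has its own setup, but as an aside, one common way to expose a Jetty JVM to a remote profiler or monitoring tool is via the standard JDK JMX system properties (the port number here is just an example, and disabling auth/SSL is only sensible on a trusted dev box):

```shell
# Start Jetty with remote JMX enabled so a profiler/VisualVM can attach.
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1099 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar start.jar
```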

Speeding up or performance tuning any application or server is really hard to get done, in my experience. You'll need to benchmark several times with different workload models to define what your peak load is. Once you define the peak load for the configuration/environment mixture you need to tune and benchmark, you might have to run 5+ iterations of your benchmark. Check the configuration of both apache/jetty in terms of the number of worker threads processing requests, and get them to match if possible. Here are some recommendations:

  1. Consider the differences of the two environments (GC in jetty: consider tuning your min and max memory thresholds to the same size, then proceed to execute your test).
  2. The load should come from another box. If you don't have a second box/PC/server, take your CPU/core count into account and pin the test to a specific CPU; do the same for jetty/apache.
  3. This is assuming you can't get another machine to be the stress agent. Run several workload models.

Moving on to modeling the test, run the following stages:

  1. One thread for each configuration for 30 minutes.
  2. Start with 1 thread and go up to 5, increasing the count at 10-minute intervals.
  3. Based on the metrics from stage 2, define a number of threads for the test, and run that number of concurrent threads for 1 hour.
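The ramp-up in stage 2 can be sketched as follows - a simplified harness with seconds-scale stages standing in for the 10-minute intervals above (RampTest and its fake workload are illustrative, not a real load tool):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class RampTest {
    // Run the same workload at increasing thread counts (1..maxThreads) and
    // record how many requests completed in each stage.
    public static List<Long> ramp(Runnable workload, int maxThreads, long stageMillis)
            throws Exception {
        List<Long> completedPerStage = new ArrayList<>();
        for (int threads = 1; threads <= maxThreads; threads++) {
            AtomicLong completed = new AtomicLong();
            long deadline = System.currentTimeMillis() + stageMillis;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    while (System.currentTimeMillis() < deadline) {
                        workload.run();          // one "request"
                        completed.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(stageMillis + 1000, TimeUnit.MILLISECONDS);
            completedPerStage.add(completed.get());
        }
        return completedPerStage;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in workload: ~1ms of "work" per request.
        Runnable fakeRequest = () -> {
            try { Thread.sleep(1); } catch (InterruptedException ignored) {}
        };
        System.out.println(ramp(fakeRequest, 5, 200));
    }
}
```

Plotting completed requests per stage against thread count is a quick way to see where throughput stops scaling.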

Correlate the metrics (response times) from your testing app with the server hosting the application resources (use sar, top and other unix commands to track CPU and memory); some other process might be impacting your app. (Memory is mainly relevant for apache; jetty will be constrained by the JVM memory configuration, so it should not change its memory usage once the server is up and running.)

Be aware of the HotSpot compiler.

Methods have to be called several times (1000 times?) before they are compiled into native code.
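A typical way to account for this in a benchmark is an explicit warmup phase before timing anything (a hedged sketch: the 10,000-iteration figure is on the order of HotSpot's classic server-compiler threshold, and work() is just a stand-in workload):

```java
public class Warmup {
    // Stand-in workload to be measured.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i * 31L;
        return acc;
    }

    public static void main(String[] args) {
        // Warmup: run the method enough times for the JIT to compile it to
        // native code before we start measuring.
        for (int i = 0; i < 10_000; i++) work(1_000);

        long start = System.nanoTime();
        long result = work(1_000_000);
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("result=" + result + " took=" + micros + "us");
    }
}
```

Without the warmup loop, the first timed calls would include interpretation and JIT compilation overhead and skew the average upward.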
