Performance difference between Spring MVC non-blocking and blocking

The application we're building is expected to have a high number of concurrent users. We're trying to evaluate Spring MVC for building our API layer.

The following two handlers were written, one blocking and one non-blocking:

@RequestMapping("/nblocking")
public DeferredResult<String> nonBlockingProcessing() {
    DeferredResult<String> deferredResult = new DeferredResult<>();
    // Hand the blocking work off to a separate executor so the
    // servlet container thread is released immediately; the response
    // is written once setResult() is called.
    executionService.execute(new Runnable() {
        @Override
        public void run() {
            deferredResult.setResult(fetcher.load());
        }
    });

    return deferredResult;
}

@RequestMapping("/blocking")
public String blockingProcessing() {
    return fetcher.load();
}

We ran tests via JMeter, hitting each endpoint with 3500 concurrent users.

Results with the blocking call: (JMeter results screenshot omitted)

Results with the non-blocking call: (JMeter results screenshot omitted)

In the above code, the fetcher.load call runs a DB query against MySQL (max connections set to 200) through a connection pool with a maximum size of 50.
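Note that a 50-connection pool caps database concurrency no matter how many request threads wait on it: by Little's law, the throughput ceiling is roughly the pool size divided by the mean query time. A back-of-the-envelope sketch (the 50 ms mean latency is an illustrative assumption, not a figure from the question):

```java
public class PoolCeiling {
    public static void main(String[] args) {
        int poolSize = 50;        // max concurrent DB queries (from the pool config)
        int meanQueryMillis = 50; // assumed mean query latency; tune to your workload

        // Little's law: concurrency = throughput x latency,
        // so max throughput = poolSize / latency.
        long ceilingPerSecond = poolSize * 1000L / meanQueryMillis;
        System.out.println(ceilingPerSecond); // prints 1000 (requests/sec ceiling)
    }
}
```

Beyond roughly that rate, extra request threads only queue for a connection, so raising the pool size (within what MySQL's 200-connection limit allows) may matter more than how the handler is written.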

Overall, throughput and average times are better with the non-blocking calls. What other improvements can we make, or what factors should we consider, to raise throughput further?

1) Your server uses a synchronous request-response model

According to your results, your server is based on a synchronous request-response model rather than an asynchronous or event-driven one.
This is the case for Tomcat, Apache, WebLogic, etc., and for most Java application servers.
In this model, the number of concurrently processed requests is generally limited to a pool of a few dozen worker threads.
You ran 17,000 requests in your test, which means many requests sit pending, waiting to be processed.
So deferring the processing of requests will not improve performance, because the server is already saturated.
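That worker-thread ceiling is configurable. As one hedged example, with Spring Boot's embedded Tomcat the relevant knobs look like the following (property names are from Spring Boot's standard server configuration; the values are illustrative and should be tuned for the hardware):

```properties
# application.properties -- illustrative values, not recommendations
server.tomcat.threads.max=400       # worker threads processing requests
server.tomcat.max-connections=10000 # connections the server will accept
server.tomcat.accept-count=1000     # queue length once all connections are in use
```

Raising these only helps if the CPU and the downstream database can absorb the extra concurrency; otherwise requests just queue further down the stack.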

2) Creating a thread for each new request, plus the extra work of returning the response asynchronously, also has a cost.

Indeed, the JVM has to allocate more objects and perform more work in this case, and the CPU has to do more scheduling work as the number of threads grows.
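One standard way to bound that per-request cost is to reuse a fixed pool of threads instead of spawning a new thread per request. A minimal sketch (class and method names here are illustrative stand-ins, not the question's actual `executionService` or `fetcher`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PooledDispatch {
    // Bounded, reused pool: the 8 threads are created once, not per request,
    // so thread-creation and scheduling overhead stays constant under load.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8);

    // Stand-in for fetcher.load(): a blocking call such as a DB query.
    static String load(int id) {
        return "row-" + id;
    }

    public static void main(String[] args) throws Exception {
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            final int id = i;
            futures.add(POOL.submit(() -> load(id)));
        }
        int completed = 0;
        for (Future<String> f : futures) {
            if (f.get().startsWith("row-")) {
                completed++;
            }
        }
        System.out.println(completed); // 100 tasks served by only 8 threads
        POOL.shutdown();
    }
}
```

Passing such a bounded executor to the `DeferredResult` handler keeps the asynchronous dispatch from multiplying threads as load grows; the pool size then becomes an explicit tuning parameter instead of an unbounded cost.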

Conclusion: asynchronous processing on the server side may improve performance, but not always

When the machine has spare CPU threads, dividing the work among multiple threads makes sense and can improve performance.
But when you issue as many requests as in your case, there is no spare CPU left.
So you will not gain performance; you will merely process multiple clients "in parallel", and performance will actually drop because of the CPU scheduling and object creation explained in the previous point.

This should explain why, in your case, the asynchronous approach on the server side ends up slower than the synchronous one.
