
Performance difference between Spring MVC non-blocking and blocking

The application we're building is expected to have a high number of concurrent users. We're trying to evaluate Spring MVC for building our API layer.

The following two handlers were written, one blocking and one non-blocking:

@RequestMapping("/nblocking")
public DeferredResult<String> nonBlockingProcessing() {
    DeferredResult<String> deferredResult = new DeferredResult<>();
    // Hand the work to a separate executor; the servlet thread returns immediately.
    executionService.execute(() -> deferredResult.setResult(fetcher.load()));
    return deferredResult;
}

@RequestMapping("/blocking")
public String blockingProcessing() {
    return fetcher.load();
}
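To illustrate the two styles outside of Spring, here is a minimal stdlib-only sketch. The 50 ms `load()` below is a stand-in for `fetcher.load()` (an assumption, not the real implementation); the `CompletableFuture` plays the role of the `DeferredResult` being completed from a worker thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DeferredSketch {
    // Stand-in for fetcher.load(): simulates a slow DB call.
    static String load() {
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "row";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Blocking style: the calling (request) thread waits the full 50 ms.
        String blocking = load();

        // Deferred style: the request thread returns immediately; a worker
        // thread completes the result later (like DeferredResult.setResult).
        CompletableFuture<String> deferred =
                CompletableFuture.supplyAsync(DeferredSketch::load, workers);

        System.out.println(blocking);       // prints "row"
        System.out.println(deferred.get()); // prints "row" once the worker finishes
        workers.shutdown();
    }
}
```

Note that in both styles the same amount of work is done; the deferred version only changes *which* thread waits on the database.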

We ran tests via JMeter, hitting each endpoint with 3,500 concurrent users.

Results with blocking call: (JMeter results screenshot)

Results with non-blocking call: (JMeter results screenshot)

In the above code, the fetcher.load call makes a DB query to MySQL (max connections set to 200) through a connection pool with a maximum size of 50.
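That pool size puts a hard ceiling on throughput regardless of how requests are dispatched: with 3,500 concurrent users but only 50 connections, at most 50 queries run at once. A quick back-of-the-envelope sketch via Little's Law (the 20 ms average query time is an illustrative assumption, not a measured value):

```java
public class PoolThroughput {
    public static void main(String[] args) {
        int poolSize = 50;         // max connections available to the app
        double queryMillis = 20.0; // assumed average query time (illustrative)

        // Little's Law: throughput ceiling = concurrency / service time.
        // No threading model can push more queries/sec through the pool than this.
        double maxThroughput = poolSize / (queryMillis / 1000.0);
        System.out.println(maxThroughput); // prints 2500.0 (requests/sec ceiling)
    }
}
```

If measured throughput is near this ceiling, enlarging the connection pool (while staying under MySQL's 200-connection limit) is likely to matter more than the blocking vs. non-blocking choice.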

Overall, throughput and average response times are better with the non-blocking calls. What other improvements can we make, or what factors should we consider, to push throughput even higher?

1) Your server uses a synchronous request-response model

According to your results, your server is based on a synchronous request-response model, not on an asynchronous or event-driven model.
This is the case for Tomcat, Apache, WebLogic, etc., and for most Java application servers.
In this model, the number of requests processed simultaneously is limited by the server's worker thread pool, typically a few hundred threads (Tomcat's default is 200).
You ran 17,000 requests in your test, so many requests were queued, waiting to be processed.
Deferring the processing of a request therefore does not improve performance: the server is already saturated.
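If the servlet container's worker pool is the limit, it can be enlarged. For Spring Boot's embedded Tomcat this is a matter of configuration; the property names below apply to recent Boot versions and the values are illustrative, to be tuned against your own load tests:

```properties
# Illustrative values; tune against your own load tests.
# Worker threads (Tomcat default is 200):
server.tomcat.threads.max=400
# Connections queued when all workers are busy:
server.tomcat.accept-count=1000
```

More worker threads only help while the CPU and the database pool still have headroom, which ties into the next point.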

2) Creating a thread for each new request, and the bookkeeping needed to return the response asynchronously, also have a cost.

Indeed, the JVM has to create more objects and perform more work in this case, and the CPU has to do more scheduling because there are more threads.
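One way to cap that cost is to make `executionService` a bounded, reused pool rather than one that grows per request. A stdlib-only sketch (the pool size of 16 and queue capacity of 1,000 are illustrative assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutor {
    public static void main(String[] args) throws Exception {
        // A bounded pool with a bounded queue caps thread creation and
        // scheduling overhead; sizes here are illustrative.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                16, 16, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1000),
                // When the queue is full, run on the caller's thread:
                // back-pressure instead of unbounded growth.
                new ThreadPoolExecutor.CallerRunsPolicy());

        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get()); // prints 42
        pool.shutdown();
    }
}
```

The rejection policy matters: `CallerRunsPolicy` slows callers down instead of dropping work, which is usually what you want in front of a saturated database.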

Conclusion: asynchronous processing on the server side may improve performance, but not always

When there are idle CPU cores on the machine, dividing the work among multiple threads makes sense and improves performance.
But when you run as many concurrent requests as in your test, there is no idle CPU left.
You will not gain performance; you will merely process multiple clients "in parallel", and you can actually lose performance to the extra CPU scheduling and object creation explained in the previous point.
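A quick sanity check before sizing any thread pool is to compare the core count against the intended concurrency; a one-liner suffices:

```java
public class CpuCheck {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // With 3,500 concurrent requests and far fewer cores, extra threads
        // mostly add context-switching overhead rather than real parallelism.
        System.out.println(cores);
    }
}
```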

This should help you understand why, in your case, the asynchronous server-side approach can end up slower than the synchronous one.
