
Configuring threadpool size in a service

I am writing a service which takes two URLs, urlA and urlB, to fetch two integers a and b. The service returns the sum of a and b.

In its simplest form, the service works like this:

public Integer getSumFromUrls(String urlA, String urlB) {

    Integer a = fetchFromUrl(urlA);
    Integer b = fetchFromUrl(urlB);

    return a + b;
}

Here fetchFromUrl is a synchronous operation, so it blocks the processing thread until the value is available. To make things more efficient, I would rather use an ExecutorService to schedule the two fetches concurrently and return when both results are available. Here is the changed code (ignore the syntactic nuances):

public Integer getSumFromUrls(String urlA, String urlB) {
    Future<Integer> aFuture = Executors.newSingleThreadScheduledExecutor().submit(new Callable<Integer>() {
        public Integer call() {
            return fetchFromUrl(urlA);
        }

    });
    Future<Integer> bFuture = Executors.newSingleThreadScheduledExecutor().submit(new Callable<Integer>() {
        public Integer call() {
            return fetchFromUrl(urlB);
        }
    });

    Integer a = aFuture.get();
    Integer b = bFuture.get();

    return a + b;
}

Here, I have created single-thread executors to execute the two requests concurrently.

Since this code would be running in the context of a web service, I should probably not be creating the single-thread executors locally inside the function, but should instead use an N-sized thread pool shared across requests.

My questions here are:

  1. Is the above understanding (italicised part) correct?
  2. If yes, how should I choose the optimum size of the thread pool? Should it be a function of the thread pool size of my service container, the request throughput, or both?
  3. Is there a better way of optimising this scenario so that service threads are not blocked doing IO most of the time?

Note: The details provided in this question are not the real scenario, but they are representative of the same set of complexities needed to answer the question.

If getSumFromUrls runs every time a new request comes in, it creates new thread pools (and therefore new threads) on every call, and none of them are ever shut down. If 1000 requests hit at the same moment, thousands of thread pools and threads will be created, which will be a real problem for your application: each thread consumes stack memory, and more threads mean more context switches, which have their own cost and can degrade performance.

As a rule of thumb, the number of active threads at any point in time should be close to the number of available CPU cores, but it depends entirely on the workload. If your task is CPU-intensive, the pool size should be about the core count; if your task is IO-intensive (as these URL fetches are), the pool can be considerably larger, because the threads spend most of their time waiting rather than computing.
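A common starting point for IO-bound sizing is the formula from Java Concurrency in Practice: pool size = cores × target utilization × (1 + wait time / compute time). A minimal sketch, where the wait/compute ratio (here 50 ms waiting per 5 ms of CPU work) is an assumed measurement you would obtain by profiling your own fetches:

```java
public class PoolSizer {

    /**
     * Estimate a thread pool size as
     * cores * targetUtilization * (1 + waitTime / computeTime).
     */
    static int poolSize(double targetUtilization, double waitTime, double computeTime) {
        int cores = Runtime.getRuntime().availableProcessors();
        return (int) Math.max(1, cores * targetUtilization * (1 + waitTime / computeTime));
    }

    public static void main(String[] args) {
        // Assumed numbers: 50 ms blocked on IO for every 5 ms of CPU work.
        System.out.println(poolSize(1.0, 50, 5));
    }
}
```

With a wait/compute ratio of 10, even a single-core machine would justify about 11 threads, which is why IO-bound pools end up much larger than the core count.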

Is the above understanding (italicised part) correct?

-> Yes.

If yes, how should I choose the optimum size of the thread pool? Should it be a function of the thread pool size of my service container, the request throughput, or both?

-> As mentioned above, it depends on which type of task you are doing. You should use a common thread pool, shared across all requests, to execute these tasks rather than creating executors per request.
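A minimal sketch of the shared-pool version of the service. The pool size of 16 and the fetchFromUrl body are placeholders, not values from the question:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SumService {

    // One pool shared by all requests; sized generously because the
    // tasks are IO-bound (16 is an illustrative placeholder).
    private static final ExecutorService POOL = Executors.newFixedThreadPool(16);

    // Placeholder for the real HTTP fetch from the question.
    static Integer fetchFromUrl(String url) {
        return url.length(); // stand-in result
    }

    public static Integer getSumFromUrls(String urlA, String urlB)
            throws InterruptedException, ExecutionException {
        // Both fetches run concurrently on the shared pool.
        Future<Integer> aFuture = POOL.submit(() -> fetchFromUrl(urlA));
        Future<Integer> bFuture = POOL.submit(() -> fetchFromUrl(urlB));
        return aFuture.get() + bFuture.get();
    }
}
```

Unlike the per-request executors in the question, the pool here is created once and reused, so the thread count stays bounded no matter how many requests arrive.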

Is there a better way of optimizing this scenario so that service threads are not blocked on doing IO most of the time?

-> You should benchmark different thread pool sizes. Note that while a thread is blocked on IO it does not need the CPU, and the operating system automatically schedules other threads onto it, so blocked threads waste memory rather than CPU. If you want to avoid tying up service threads entirely, use a non-blocking (asynchronous) HTTP client, so that no thread sits idle waiting for a response.
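As one possible non-blocking approach (not from the question), the JDK 11+ HttpClient can issue both fetches asynchronously and combine the results with CompletableFuture, so no service thread blocks on IO. The URLs are assumed to return a plain integer in the response body:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncSumService {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Start one fetch without blocking any thread on the response.
    static CompletableFuture<Integer> fetchAsync(String url) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                     .thenApply(response -> Integer.parseInt(response.body().trim()));
    }

    // Both requests run concurrently; the sum is computed when both complete.
    public static CompletableFuture<Integer> getSumFromUrls(String urlA, String urlB) {
        return fetchAsync(urlA).thenCombine(fetchAsync(urlB), Integer::sum);
    }
}
```

The caller receives a CompletableFuture<Integer> instead of an Integer, so the web framework would need to support asynchronous responses (for example, returning the future directly) to get the full benefit.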
