
What is the use case for unbounded queue in Java Executors?

The Executors factory class in Java uses an unbounded queue for pending tasks. For instance, Executors.newFixedThreadPool uses a new LinkedBlockingQueue, which has no limit on the number of tasks it will accept.

public static ExecutorService newFixedThreadPool(int nThreads) {
  return new ThreadPoolExecutor(nThreads, nThreads,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>());
}

When a new task arrives and no thread is available, it goes to the queue. Tasks can be added to the queue indefinitely, eventually causing an OutOfMemoryError.

What is the actual scenario for using this approach? Why didn't the Java creators use a bounded queue? I can't imagine a scenario when unbounded is better than bounded, but I may be missing something. Can someone provide a decent explanation? Best!

This is the default approach and the user can choose to change to a bounded queue.

Now maybe your question is why is this the default?

It is actually harder to deal with bounded queues: what would you do if the queue is full? Drop the task and not accept it? Throw an exception and fail the entire process? Isn't that what would happen in the case of an OOM anyway? All of these are decisions that need to be taken by a user who is accepting lots of long-running tasks, which is not the default Java user.
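
For illustration, a minimal sketch of how those choices map onto the JDK's built-in RejectedExecutionHandler implementations (the pool sizes and queue capacity here are arbitrary):

// "Drop the task and don't accept it" -> DiscardPolicy silently discards the rejected task
ExecutorService dropSilently = new ThreadPoolExecutor(4, 4,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(100),
        new ThreadPoolExecutor.DiscardPolicy());

// "Throw an exception" -> AbortPolicy (the default) throws RejectedExecutionException
ExecutorService failFast = new ThreadPoolExecutor(4, 4,
        0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<>(100),
        new ThreadPoolExecutor.AbortPolicy());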

A use case for an unbounded queue could simply be when you only expect a small number of concurrent requests but don't know exactly how many, or when you implement back pressure at a different stage of your application, such as throttling your API requests.
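
As a sketch of the "back pressure in a different stage" idea, one common pattern is to guard submissions with a Semaphore so that only a fixed number of tasks are in flight at once; submitWithBackPressure and the permit count of 100 are assumptions for illustration:

static void submitWithBackPressure(ExecutorService pool, Semaphore permits, Runnable work)
        throws InterruptedException {
    permits.acquire();                     // blocks the producer while all permits are taken
    try {
        pool.execute(() -> {
            try {
                work.run();
            } finally {
                permits.release();         // free a slot once the task finishes
            }
        });
    } catch (RejectedExecutionException e) {
        permits.release();                 // don't leak a permit if the pool refused the task
        throw e;
    }
}

// e.g. new Semaphore(100) caps the backlog at 100 tasks even with an unbounded queue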

You can reject tasks by using ArrayBlockingQueue (bounded blocking queue)

final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(100);
executorService = new ThreadPoolExecutor(n, n,
                                         0L, TimeUnit.MILLISECONDS,
                                         queue);

The code above is equivalent to Executors.newFixedThreadPool(n); however, instead of the default unbounded LinkedBlockingQueue, we use an ArrayBlockingQueue with a fixed capacity of 100. This means that if 100 tasks are already queued (and n are being executed), a new task will be rejected with a RejectedExecutionException.
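
A short usage sketch of what that rejection looks like to the caller (handleRequest() is just a placeholder for the real work):

try {
    executorService.execute(() -> handleRequest());
} catch (RejectedExecutionException e) {
    // 100 tasks are already queued and all n threads are busy
    System.err.println("Pool saturated, request rejected: " + e.getMessage());
}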

Tasks can be added to the queue indefinitely causing OutOfMemoryError

No. The queue is not really unbounded: for an "unbounded" LinkedBlockingQueue, the capacity is Integer.MAX_VALUE (2147483647). When there is not enough space, the RejectedExecutionHandler will handle newly arriving tasks. The default handler is AbortPolicy, which rejects new tasks directly by throwing a RejectedExecutionException.

I can't imagine a scenario when unbounded is better than bounded

Users might not care about the queue size, or they simply may not want to limit the number of cached tasks.

If you do care about it, you can create a ThreadPoolExecutor using its full constructor.
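
For example (all parameter values below are arbitrary), the full constructor lets you choose both the queue and the saturation policy in one place; CallerRunsPolicy gives a simple form of back pressure by running the task on the submitting thread when the pool is saturated:

ExecutorService pool = new ThreadPoolExecutor(
        4,                                           // core pool size
        8,                                           // maximum pool size
        60L, TimeUnit.SECONDS,                       // keep-alive for threads above the core size
        new ArrayBlockingQueue<>(1000),              // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy());  // run on the caller when saturated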

Since you're asking about the "use case", it's very simple: any time you have a lot of individual tasks that you want finished eventually. Say you want to download hundreds of thousands of files? Create a download task for each, submit them to an ExecutorService, and wait for termination. The tasks will finish eventually since you don't add any more, so there's no reason for a limit.
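
A minimal sketch of that pattern, assuming a hypothetical download(url) helper and a list of URLs:

static void downloadAll(List<String> urls) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(16);   // worker count is arbitrary
    for (String url : urls) {
        pool.submit(() -> download(url));                      // download(url) is a placeholder
    }
    pool.shutdown();                                           // no new tasks will be added
    pool.awaitTermination(7, TimeUnit.DAYS);                   // wait for the backlog to drain
}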
