ThreadPoolTaskExecutor with small queue capacity blocking calling thread?

Sorry for the formatting issues.
I've been trying to understand this for around 4 hours now. Basically, I have a method that calls a private method that uses a ThreadPoolTaskExecutor:
```java
public ThreadPoolTaskExecutor someTaskExecutor() {
    final ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(0);
    taskExecutor.setMaxPoolSize(3);
    taskExecutor.setKeepAliveSeconds(60);
    taskExecutor.setQueueCapacity(3);
    taskExecutor.afterPropertiesSet();
    return taskExecutor;
}
```
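Under the covers, a Spring config like the one above builds a plain `java.util.concurrent.ThreadPoolExecutor`. A rough, self-contained sketch of the equivalent (the class name `EquivalentExecutor` is just for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EquivalentExecutor {
    // Roughly what the Spring config creates under the hood:
    // 0 core threads, up to 3 threads kept alive for 60s while idle,
    // and a bounded work queue with capacity 3.
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                0, 3,
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(3));
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = build();
        System.out.println(executor.getMaximumPoolSize());           // 3
        System.out.println(executor.getQueue().remainingCapacity()); // 3
        executor.shutdown();
    }
}
```

The bounded `LinkedBlockingQueue` is the important part: once it and the worker threads are both full, any further submission has to be handled by the executor's rejection policy.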
This is how I use it. Don't mind the "DTO" and "Obj"; they are just placeholders:
```java
@Override
public void processMessage(final DTO dto) throws Exception {
    log.info("MainThread BEFORE [{}] : [{}] - [{}]", dto.getTaskId(), Thread.currentThread().getName(), Thread.currentThread().getId());
    final Obj obj = save(dto);
    log.info("MainThread SAVED [{}] : [{}] - [{}]", dto.getTaskId(), Thread.currentThread().getName(), Thread.currentThread().getId());
    generateSomething(obj);
    log.info("MainThread AFTER [{}] : [{}] - [{}]", dto.getTaskId(), Thread.currentThread().getName(), Thread.currentThread().getId());
}

private void generateSomething(final Obj obj) {
    someTaskExecutor.execute(() -> {
        log.info("thread START [{}] : [{}] - [{}]", obj.getTaskId(), Thread.currentThread().getName(), Thread.currentThread().getId());
        // some API call that takes 3 seconds
        log.info("thread DONE [{}] : [{}] - [{}]", obj.getTaskId(), Thread.currentThread().getName(), Thread.currentThread().getId());
    });
}
```
With the current settings of my ThreadPoolTaskExecutor, when I make 10 concurrent calls to the main method (producing 10 main threads), only 6 threads successfully reach the "MainThread AFTER" log, which comes after the call to the private method that uses the executor.

I am at a loss currently. My understanding was that the main/caller threads should not be affected by the new threads spawned by the ThreadPoolTaskExecutor, so why are they directly affected by the queue capacity? If I set the queue capacity to 5 instead of 3, 8 main threads go through properly. If I leave it unset, all main threads go through properly.
Thanks for the help, and again, sorry for the formatting.
You are setting up a ThreadPoolTaskExecutor (under the covers it is a ThreadPoolExecutor) that has a maximum of 3 threads and a queue limit of 3 as well. If you submit 10 jobs to this executor and the jobs take a while to complete, then 3 tasks will start running, 3 will go into the queue, and when you try to submit the 7th task, the caller will block. That's how it works. If you increase the number of queued tasks, or increase the number of threads that can work on those tasks, then the caller will block later.
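To make the saturation point concrete, here is a minimal sketch using a plain ThreadPoolExecutor rather than the Spring wrapper. Two assumptions for the sake of a deterministic demo: the core pool size is 3 (so all workers exist up front), and the rejection handler is `CallerRunsPolicy`, one policy under which the submitting thread ends up doing the work itself once the pool and queue are full. The first 6 tasks occupy the 3 workers and the 3 queue slots; the 7th lands on the caller:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class SaturationDemo {
    public static void main(String[] args) throws InterruptedException {
        // 3 workers + a queue of 3; CallerRunsPolicy runs rejected tasks
        // on the thread that called execute().
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                3, 3, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(3),
                new ThreadPoolExecutor.CallerRunsPolicy());

        CountDownLatch gate = new CountDownLatch(1);
        Runnable slow = () -> {
            try {
                gate.await(); // stand-in for the slow 3-second API call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        // Tasks 1-3 occupy the workers; tasks 4-6 fill the queue.
        for (int i = 0; i < 6; i++) {
            executor.execute(slow);
        }

        // Task 7: pool and queue are both full, so the caller runs it itself.
        AtomicBoolean ranOnCaller = new AtomicBoolean();
        Thread caller = Thread.currentThread();
        executor.execute(() -> ranOnCaller.set(Thread.currentThread() == caller));
        System.out.println("task 7 ran on the caller: " + ranOnCaller.get());

        gate.countDown();
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

This prints `task 7 ran on the caller: true`: the 7th `execute()` call does not return until the task itself has finished, which from the submitter's point of view looks exactly like blocking.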
Why is blocking important? Let's say you have an application that needs to do 1,000,000 jobs. You can't spawn 1,000,000 threads, so you have to queue up jobs. If the jobs are large in memory, you might run out of heap space holding that many – or think about scaling to 100 million. By blocking, the system keeps its thread and memory usage low while still keeping throughput high. You want to queue up enough jobs that the threads can easily take them off the queue and start working, but not so many that they consume memory resources the application needs.
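For contrast, this is what the "queue capacity left unset" case looks like. A sketch with an unbounded `LinkedBlockingQueue` (which is effectively what you get when no capacity is configured): `execute()` always succeeds immediately and the submitter is never held up, but every waiting task sits on the heap until a worker gets to it.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded queue: offer() always succeeds, so execute() never
        // rejects a task and never involves the caller.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 3, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>()); // capacity Integer.MAX_VALUE

        CountDownLatch gate = new CountDownLatch(1);
        for (int i = 0; i < 10; i++) {
            executor.execute(() -> {
                try {
                    gate.await(); // stand-in for slow work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // All 10 submissions returned immediately: 1 task is running on the
        // single core worker and 9 are parked in the queue, each holding
        // on to heap memory until a worker picks it up.
        System.out.println("queued: " + executor.getQueue().size());

        gate.countDown();
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Note the trade-off this illustrates: with an unbounded queue the pool never grows past the core size (the queue never refuses a task, so no extra workers are spawned), and nothing pushes back on the producer – which is exactly the memory risk described above.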