What do batch_size and pre_dispatch in joblib exactly mean?
From the documentation here https://pythonhosted.org/joblib/parallel.html#parallel-reference-documentation it is not clear to me what exactly batch_size and pre_dispatch mean.
Let's consider the case where we use the 'multiprocessing' backend with 2 jobs (2 processes) and have 10 tasks to compute.
As I understand it:

batch_size controls the number of tasks that are pickled at one time, so if you set batch_size=5, joblib will pickle and send 5 tasks at once to each process, and after they arrive there the process will solve them sequentially, one after another. With batch_size=1, joblib will pickle and send one task at a time, and only once that process has completed the previous task.
To show what I mean:
def solve_one_task(task):
    # Solves one task at a time
    ....
    return result

def solve_list(list_of_tasks):
    # Solves a batch of tasks sequentially
    return [solve_one_task(task) for task in list_of_tasks]
So this code:
Parallel(n_jobs=2, backend='multiprocessing', batch_size=5)(
    delayed(solve_one_task)(task) for task in tasks)
is equivalent to this code (performance-wise):
slices = [(0, 5), (5, 10)]
Parallel(n_jobs=2, backend='multiprocessing', batch_size=1)(
    delayed(solve_list)(tasks[slice[0]:slice[1]]) for slice in slices)
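If you want to check that equivalence yourself, here is a minimal timing sketch. It assumes a dummy sleep-based workload and the standalone joblib package; the half-second sleep and the concrete bodies of solve_one_task and solve_list are illustrative stand-ins, not the real workload:

from time import sleep, time
from joblib import Parallel, delayed

def solve_one_task(task):
    # Stand-in workload: pretend each task takes half a second
    sleep(0.5)
    return task

def solve_list(list_of_tasks):
    # Solves a batch of tasks sequentially
    return [solve_one_task(task) for task in list_of_tasks]

if __name__ == '__main__':
    tasks = list(range(10))
    slices = [(0, 5), (5, 10)]

    start = time()
    Parallel(n_jobs=2, backend='multiprocessing', batch_size=5)(
        delayed(solve_one_task)(task) for task in tasks)
    print("batch_size=5 version: %.2fs" % (time() - start))

    start = time()
    Parallel(n_jobs=2, backend='multiprocessing', batch_size=1)(
        delayed(solve_list)(tasks[s[0]:s[1]]) for s in slices)
    print("manual slicing version: %.2fs" % (time() - start))

Both runs should take roughly the same wall-clock time (about 2.5 s each: 5 tasks of 0.5 s per worker), give or take process start-up overhead.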
Am I right? And what does pre_dispatch mean then?
As it turns out, I was right, and the two sections of code are pretty similar in the performance sense, so batch_size works as I expected in the question. pre_dispatch (as the documentation states) controls the number of instantiated tasks in the task queue.
from sklearn.externals.joblib import Parallel, delayed
from time import sleep, time

def solve_one_task(task):
    # Solves one task at a time
    print("%d. Task #%d is being solved" % (time(), task))
    sleep(5)
    return task

def task_gen(max_task):
    current_task = 0
    while current_task < max_task:
        print("%d. Task #%d was dispatched" % (time(), current_task))
        yield current_task
        current_task += 1

Parallel(n_jobs=2, backend='multiprocessing', batch_size=1, pre_dispatch=3)(
    delayed(solve_one_task)(task) for task in task_gen(10))
outputs:
1450105367. Task #0 was dispatched
1450105367. Task #1 was dispatched
1450105367. Task #2 was dispatched
1450105367. Task #0 is being solved
1450105367. Task #1 is being solved
1450105372. Task #2 is being solved
1450105372. Task #3 was dispatched
1450105372. Task #4 was dispatched
1450105372. Task #3 is being solved
1450105377. Task #4 is being solved
1450105377. Task #5 was dispatched
1450105377. Task #5 is being solved
1450105377. Task #6 was dispatched
1450105382. Task #7 was dispatched
1450105382. Task #6 is being solved
1450105382. Task #7 is being solved
1450105382. Task #8 was dispatched
1450105387. Task #9 was dispatched
1450105387. Task #8 is being solved
1450105387. Task #9 is being solved
Out[1]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
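For contrast, here is a small variation on the snippet above (reusing the same solve_one_task and task_gen definitions): pre_dispatch='all' is a documented value that makes joblib consume the whole generator up front, so all ten "was dispatched" lines should appear almost immediately, long before the workers finish task #0.

# Assumes solve_one_task and task_gen from the snippet above.
# With pre_dispatch='all' the entire generator is consumed right away.
Parallel(n_jobs=2, backend='multiprocessing', batch_size=1, pre_dispatch='all')(
    delayed(solve_one_task)(task) for task in task_gen(10))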