
Using Dask LocalCluster() within a modular python codebase

I am trying to use Dask Distributed's LocalCluster to run code in parallel using all the cores of a single machine.

Consider a sample python data pipeline, with the folder structure below.

sample_dask_program
├── main.py
├── parallel_process_1.py
├── parallel_process_2.py
├── process_1.py
├── process_2.py
└── process_3.py

main.py is the entry point, which executes the pipeline sequentially.

E.g.:

def run_pipeline():
    stage_one_run_util()
    stage_two_run_util()

    ...

    stage_six_run_util()


if __name__ == '__main__':

    ...

    run_pipeline()

parallel_process_1.py and parallel_process_2.py are modules which create a Client() and use futures to achieve parallelism.

with Client() as client:
    # list to store futures after they are submitted
    futures = []

    for item in items:
        future = client.submit(
            ...
        )
        futures.append(future)

    results = client.gather(futures)
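
For reference, a minimal self-contained version of this pattern is sketched below; square is a hypothetical stand-in for the real per-item computation, not part of the original pipeline.

from dask.distributed import Client

def square(x):
    # hypothetical stand-in for the real per-item computation
    return x * x

if __name__ == '__main__':
    items = range(10)

    # Client() with no arguments starts a LocalCluster on this machine
    with Client() as client:
        # submit one task per item and keep the futures
        futures = [client.submit(square, item) for item in items]

        # block until all tasks finish and fetch their results
        results = client.gather(futures)

    print(results)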

process_1.py, process_2.py and process_3.py are modules which do simple computations that need not be run in parallel using all the CPU cores.

Traceback:

  File "/sm/src/calculation/parallel.py", line 140, in convert_qty_to_float
    results = client.gather(futures)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/client.py", line 1894, in gather
    asynchronous=asynchronous,
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/client.py", line 778, in sync
    self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/utils.py", line 348, in sync
    raise exc.with_traceback(tb)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/utils.py", line 332, in f
    result[0] = yield future
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
    value = future.result()
concurrent.futures._base.CancelledError

This is the error thrown by the workers:

distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:33901 -> tcp://127.0.0.1:38821
Traceback (most recent call last):
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 248, in write
    future = stream.write(frame)
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/iostream.py", line 546, in write
    self._check_closed()
  File "/home/iouser/.local/lib/python3.7/site-packages/tornado/iostream.py", line 1035, in _check_closed
    raise StreamClosedError(real_error=self.error)
tornado.iostream.StreamClosedError: Stream is closed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/worker.py", line 1248, in get_data
    compressed = await comm.write(msg, serializers=serializers)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 255, in write
    convert_stream_closed_error(self, e)
  File "/home/iouser/.local/lib/python3.7/site-packages/distributed/comm/tcp.py", line 121, in convert_stream_closed_error
    raise CommClosedError("in %s: %s: %s" % (obj, exc.__class__.__name__, exc))
distributed.comm.core.CommClosedError: in <closed TCP>: BrokenPipeError: [Errno 32] Broken pipe

I am not able to reproduce this error locally or find a minimal reproducible example, as the error occurs abruptly.

Is this the right way to use Dask LocalCluster in a modular python program?

EDIT

I have observed that these errors come up when the LocalCluster is created with a relatively high number of threads and processes. I am doing computations which use NumPy and Pandas, and this is not a good practice as described here.
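
One commonly suggested mitigation for that NumPy/Pandas oversubscription problem (an assumption here, not something verified against this pipeline) is to pin the underlying BLAS/OpenMP thread pools to one thread, so each Dask worker thread does not spawn its own pool of numerical threads:

import os

# These must be set before numpy/pandas are first imported; worker
# processes spawned by LocalCluster inherit them from the parent.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np
import pandas as pd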

At times, when the LocalCluster is created using 4 workers and 16 processes, no error gets thrown. When the LocalCluster is created using 8 workers and 40 processes, the error I described above gets thrown.

As far as I understand, dask randomly selects this combination (is this an issue with dask?), as I tested on the same AWS Batch instance (with 8 cores (16 vCPUs)).

The issue does not pop up when I force the cluster to be created with only threads.

E.g.:

cluster = LocalCluster(processes=False)
with Client(cluster) as client:
    client.submit(...)
    ...

But, creating the LocalCluster using only threads slows down the execution by 2-3 times.

So, is the solution to this problem to find the right number of processes/threads suitable for the program?
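
If so, one way to take that choice out of Dask's hands is to pin the worker/thread mix explicitly when creating the cluster. The numbers below are illustrative only, and compute is a hypothetical placeholder:

from dask.distributed import Client, LocalCluster

def compute(item):
    # hypothetical stand-in for the real per-item work
    return item

if __name__ == '__main__':
    items = range(100)

    # 4 single-threaded worker processes; tune n_workers and
    # threads_per_worker to the machine and workload
    cluster = LocalCluster(n_workers=4, threads_per_worker=1)
    with Client(cluster) as client:
        futures = [client.submit(compute, item) for item in items]
        results = client.gather(futures)
    cluster.close()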

Answer:

It is more common to create a Dask Client once, and then run many workloads on it.

with Client() as client:
    stage_one(client)
    stage_two(client)
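
Applied to the folder structure above, this could mean creating the client once in main.py and passing it down into the parallel modules; the run() entry points below are hypothetical, not part of the original code:

# main.py (sketch)
from dask.distributed import Client

import parallel_process_1
import parallel_process_2

def run_pipeline(client):
    parallel_process_1.run(client)  # hypothetical entry point
    parallel_process_2.run(client)  # hypothetical entry point

if __name__ == '__main__':
    # one client (and one LocalCluster) shared by every stage
    with Client() as client:
        run_pipeline(client)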

That being said, what you're doing should be fine. If you're able to reproduce the error with a minimal example, that would be useful (but no expectations).
