
Why does my Python Dataflow job get stuck at the Write phase?

I wrote a Python Dataflow job that managed to process 300 files; unfortunately, when I tried to run it on 400 files, it got stuck at the Write phase forever.

The logs aren't really helpful, but I think the issue comes from the write logic of the code. Originally, I only wanted one output file, so I wrote:

    | 'Write' >> beam.io.WriteToText(
        known_args.output,
        file_name_suffix=".json",
        num_shards=1,            # force a single output file
        shard_name_template=""
    ))

Then I removed num_shards=1 and shard_name_template="", and I was able to process more files, but the job still got stuck (a sketch of the adjusted step is below).
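For reference, here is a minimal, self-contained sketch of that adjustment; the input records and output path below are placeholders, not the original code. Without num_shards and shard_name_template, the runner is free to choose the number of output shards:

    import apache_beam as beam

    # Hypothetical minimal pipeline: the same Write step, but with the
    # sharding options removed so the runner picks the shard count itself.
    with beam.Pipeline() as p:
        (
            p
            | 'Create' >> beam.Create(['{"id": 1}', '{"id": 2}'])  # stand-in records
            | 'Write' >> beam.io.WriteToText(
                '/tmp/output',            # placeholder for known_args.output
                file_name_suffix=".json",
            )
        )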

Additional information

  • The files to process are small, less than 1 MB each
  • Also, after removing the num_shards and shard_name_template fields, I noticed that the data landed in a temporary folder under the output destination path, but the job never completed
  • I got the following DEADLINE_EXCEEDED exception; I tried to address it by increasing --num_workers to 6 and --disk_size_gb to 30 (see the sketch after the traceback below), but it didn't work.
    Error message from worker: Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 638, in do_work
        work_executor.execute()
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 179, in execute
        op.start()
      File "dataflow_worker/shuffle_operations.py", line 63, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 64, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 79, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 80, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 82, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 441, in __iter__
        for entry in entries_iterator:
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 282, in __next__
        return next(self.iterator)
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 240, in __iter__
        chunk, next_position = self.reader.Read(start_position, end_position)
      File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 133, in shuffle_client.PyShuffleReader.Read
    OSError: Shuffle read failed: b'DEADLINE_EXCEEDED: (g)RPC timed out when extract-fields-three-mont-10090801-dlaj-harness-fj4v talking to extract-fields-three-mont-10090801-dlaj-harness-1f7r:12346. Server unresponsive (ping error: Deadline Exceeded, {"created":"@1602260204.931126454","description":"Deadline Exceeded","file":"third_party/grpc/src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}). Typically one can self manage this issue, please read: https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout'
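For clarity, a sketch of how those two options can be passed when launching the job from Python; the project, region, and bucket names below are hypothetical placeholders:

    from apache_beam.options.pipeline_options import PipelineOptions

    # The tuning described above: more workers and larger worker disks.
    # All resource names here are hypothetical.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
        num_workers=6,       # increased worker count
        disk_size_gb=30,     # larger persistent disk per worker
    )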

Can you recommend an approach for troubleshooting this kind of problem?

After first trying to throw more resources at the job, I managed to solve the problem by enabling the Dataflow Shuffle service (see Google's Dataflow Shuffle documentation).

Just add --experiments=shuffle_mode=service to your PipelineOptions.
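For example, a minimal sketch of enabling the service from Python; the resource names are hypothetical, and the same flag can also be passed on the command line as --experiments=shuffle_mode=service:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Enable the Dataflow Shuffle service so the shuffle runs in Google's
    # backend instead of on the worker VMs. Resource names are hypothetical.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
        experiments=["shuffle_mode=service"],  # Dataflow Shuffle service
    )

    with beam.Pipeline(options=options) as p:
        ...  # pipeline steps as before

Per the Dataflow documentation, with the Shuffle service enabled the shuffle data no longer lives on worker disks, which is why it can relieve shuffle-read DEADLINE_EXCEEDED timeouts like the one above.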
