
PyTorch: How to parallelize over multiple GPUs using multiprocessing.pool

I have the following code which I am trying to parallelize over multiple GPUs in PyTorch:

import numpy as np
import torch
from torch.multiprocessing import Pool

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X).cuda()

def X_power_func(j):
    X_power = X**j
    return X_power

if __name__ == '__main__':
  with Pool(processes = 2) as p:   # Parallelizing over 2 GPUs
    results = p.map(X_power_func, range(4))

results

But when I run the code, I get this error:

---------------------------------------------------------------------------
RemoteTraceback                           Traceback (most recent call last)
RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
  File "<ipython-input-35-6529ab6dac60>", line 11, in X_power_func
    X_power = X**j
RuntimeError: CUDA error: initialization error
"""

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
<ipython-input-35-6529ab6dac60> in <module>()
     14 if __name__ == '__main__':
     15   with Pool(processes = 1) as p:
---> 16     results = p.map(X_power_func, range(8))
     17 
     18 results

1 frames
/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
    642             return self._value
    643         else:
--> 644             raise self._value
    645 
    646     def _set(self, i, obj):

RuntimeError: CUDA error: initialization error

Where have I gone wrong? Any help would really be appreciated.

I think the usual approach is to call model.share_memory() once before multiprocessing, assuming you have a model which subclasses nn.Module. For tensors, it should be X.share_memory_(). Unfortunately, I had trouble getting that to work with your code: it hangs (without errors) if X.share_memory_() is called before calling pool.map; I'm not sure if the reason is that X is a global variable which is not passed as one of the arguments to map.
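For illustration, this is roughly what that usual pattern looks like with a small nn.Module (a generic sketch of my own, not your code; it assumes the default fork start method on Linux and stays on the CPU):

import torch
import torch.nn as nn
from torch.multiprocessing import Pool

model = nn.Linear(4, 2)
model.share_memory()                     # share the parameters before the pool forks

def run(i):
    with torch.no_grad():
        return model(torch.ones(4) * i)  # each worker reads the shared parameters

if __name__ == '__main__':
    with Pool(processes = 2) as p:
        outputs = p.map(run, range(4))
        print(outputs)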

What does work is this:

X = torch.DoubleTensor(X)    # no .cuda() here, so the parent never initializes CUDA

def X_power_func(j):
    X_power = X.cuda()**j    # move X to the GPU only inside the worker process
    return X_power
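For context, here is a sketch of how that change slots into your full script (my assumptions: the default fork start method on Linux, and CUDA being touched for the first time inside each worker):

import numpy as np
import torch
from torch.multiprocessing import Pool

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])
X = torch.DoubleTensor(X)    # keep X on the CPU in the parent process

def X_power_func(j):
    return X.cuda()**j       # CUDA is first initialized inside the worker

if __name__ == '__main__':
    with Pool(processes = 2) as p:
        results = p.map(X_power_func, range(4))
        print(results)       # use the CUDA results while the workers are still alive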

Btw: https://github.com/pytorch/pytorch/issues/15734 mentions that "CUDA API must not be initialized before you fork" (this is likely the issue you were seeing).

Also, per https://github.com/pytorch/pytorch/issues/17680, if you use spawn in a Jupyter notebook, "the spawn method will run everything in your notebook top-level" (likely the issue I was seeing when my code was hanging in a notebook). In short, I couldn't get either fork or spawn to work, except with the sequence above (which doesn't use CUDA until it's in the forked process).
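For completeness, this is how the spawn start method would be selected in a plain script rather than a notebook (an assumption to experiment with, not something I verified; with spawn, each worker re-imports the module instead of inheriting the parent's CUDA state):

import torch
import torch.multiprocessing as mp

def worker(j):
    # build the tensor and touch CUDA only inside the worker, then return a CPU copy
    return (torch.arange(4, dtype=torch.double).cuda()**j).cpu()

if __name__ == '__main__':
    mp.set_start_method('spawn')    # must be set before any pool or process is created
    with mp.Pool(processes = 2) as p:
        print(p.map(worker, range(4)))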
