PyTorch CPU and GPU run in parallel
Is there a way to do something on the CPU (compute the mean and variance of the current mini-batch loss) while the GPU is doing back-propagation?
Something like this:
for input, label in dataloader:
    output = model(input)
    losses = some_loss_function(output, label)  # per-sample losses, size = (batch_size,)
    loss = losses.sum() / batch_size
    # =========== do on CPU ============
    mean = loss.item()
    var = losses.pow(2).sum().item() / batch_size - mean ** 2
    # ============ BP ================
    loss.backward()
    # gradient update
Will backward() on the GPU wait for the CPU computation to finish? Is there a way to run backward() and the CPU computation in parallel?
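For reference, a minimal self-contained version of the loop above (the model, loss function, and data here are stand-ins for the question's `model` and `some_loss_function`; it runs on either CPU or GPU):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy data and model standing in for the question's real ones.
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=16)
model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for input, label in dataloader:
    input, label = input.to(device), label.to(device)
    output = model(input)
    losses = (output - label).pow(2).squeeze(1)  # per-sample loss, size = (batch_size,)
    batch_size = losses.numel()
    loss = losses.sum() / batch_size
    # ---- CPU side: .item() copies from GPU to CPU, a synchronization point ----
    mean = loss.item()
    var = losses.pow(2).sum().item() / batch_size - mean ** 2
    # ---- back-propagation and gradient update ----
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"last batch: mean={mean:.4f}, var={var:.4f}")
```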
It doesn't do both computations at the same time, and there is no exposed mechanism to do them in parallel. (There is an advanced mechanism using CUDA streams that allows this in PyTorch, but it is too error-prone for most users.)
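That said, here is a sketch of a workaround that relies on general CUDA semantics rather than any streams API (an assumption about asynchronous kernel launch, not something stated in this thread): copy the per-sample losses to the CPU first, then call `backward()`, which on a CUDA device enqueues kernels and returns to Python almost immediately, and finally compute the statistics on the CPU copy while the GPU works through back-propagation. The model and loss below are stand-ins for the question's `model` and `some_loss_function`:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(10, 1).to(device)      # stand-in for the real model
inputs = torch.randn(32, 10, device=device)
labels = torch.randn(32, 1, device=device)

outputs = model(inputs)
losses = (outputs - labels).pow(2).squeeze(1)  # per-sample loss, size = (batch_size,)
loss = losses.mean()

# 1) Pull the per-sample losses to the CPU. This synchronizes only on the
#    forward pass, which has already finished at this point.
losses_cpu = losses.detach().cpu()

# 2) Launch back-propagation. On CUDA this enqueues kernels asynchronously,
#    so control returns to Python while the GPU is still working.
loss.backward()

# 3) CPU-side statistics run on the local copy, overlapping with backward().
mean = losses_cpu.mean().item()
var = losses_cpu.pow(2).mean().item() - mean ** 2  # biased variance, as in the question

print(f"mean={mean:.4f} var={var:.4f}")
```

On a CPU-only machine everything runs serially, but the numbers are the same; the overlap only materializes on a CUDA device, and any later `.item()` or `torch.cuda.synchronize()` will still wait for `backward()` to finish.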