How to release temporarily consumed GPU memory after each forward?
I have a class like this:
class Stem(nn.Module):
    def __init__(self):
        super(Stem, self).__init__()
        self.out_1 = BasicConv2D(3, 32, kernelSize=3, stride=2)
        self.out_2 = BasicConv2D(32, 32, kernelSize=3, stride=1)
        self.out_3 = BasicConv2D(32, 64, kernelSize=3, stride=1, padding=1)

    def forward(self, x):
        x = self.out_1(x)
        x = self.out_2(x)
        x = self.out_3(x)
        return x
The Stem attributes out_1, out_2, and out_3 are instances of the following class:
class BasicConv2D(nn.Module):
    def __init__(self, inChannels, outChannels, kernelSize, stride, padding=0):
        super(BasicConv2D, self).__init__()
        self.conv = nn.Conv2d(inChannels, outChannels,
                              kernel_size=kernelSize,
                              stride=stride,
                              padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(outChannels,
                                 eps=0.001,
                                 momentum=0.1,
                                 affine=True)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        y = self.relu(x)
        return y
During training, nvidia-smi shows that each line inside Stem.forward() consumes some x MB of GPU memory, but after Stem.forward() returns, that memory is not released, so training quickly crashes by running out of GPU memory.

So the question is: how can I release this temporarily consumed GPU memory?
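For context, a minimal sketch of the usual remedies (using a simplified stand-in for Stem, with hypothetical layer sizes and input shape): during training, the intermediate activations are deliberately kept for backward(), so they are only freed once backward() runs or the graph is discarded; for inference, wrapping the forward pass in torch.no_grad() prevents them from being stored at all, and torch.cuda.empty_cache() returns cached-but-unused blocks to the driver.

```python
import torch
import torch.nn as nn

# Simplified stand-in for the Stem module (hypothetical sizes).
stem = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, bias=False),
    nn.BatchNorm2d(32),
    nn.ReLU(),
)

x = torch.randn(1, 3, 64, 64)

# Training: activations from each layer are saved for the backward pass;
# they are freed only after backward() runs (or when y goes out of scope).
y = stem(x)
loss = y.mean()
loss.backward()          # frees the saved activations

# Inference/validation: build no graph, so no activations are cached.
with torch.no_grad():
    y_eval = stem(x)

# Optionally return cached (unused) blocks to the CUDA driver.
# Note this does not shrink memory a live graph still needs.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```

Note that nvidia-smi reports PyTorch's caching-allocator reservation, not live tensors, so memory can look "unreleased" even when it is reusable; an actual out-of-memory crash during training usually means the batch size or activation footprint is too large, not that freeing has failed.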