
Model taking long time to train

I have added an LSTM layer after a convolution in the VGG-16 model using PyTorch. Over time, the model learns just fine. However, after adding just one LSTM layer, which consists of 32 LSTM cells, training and evaluation take about 10x longer.

I added the LSTM layer to the VGG framework as follows:

import torch.nn as nn

def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    count = 0
    for v in cfg:
        count += 1
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v  # update in both branches, not only the non-BN one
        if count == 5:
            rlstm = RLSTM(v)  # custom RowLSTM layer (defined elsewhere)
            rlstm = rlstm.cuda()
            layers += [rlstm]
    return nn.Sequential(*layers)

RLSTM is my custom class, which implements RowLSTM from Google's Pixel RNN paper.
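The RLSTM class above is not shown, so as an illustration only, here is a minimal sketch of the general idea of a row-wise LSTM: treating each row of a (N, C, H, W) feature map as one timestep of a sequence. The class name, sizes, and reshaping scheme are assumptions for the example, not the actual RLSTM implementation from the question or the exact PixelRNN formulation.

```python
import torch
import torch.nn as nn

class RowLSTMSketch(nn.Module):
    """Hypothetical sketch: run an LSTM over the rows of a feature map.

    Input shape (N, C, H, W); each row is flattened to C * W features
    and fed as one timestep, so the LSTM takes H sequential steps.
    """
    def __init__(self, channels, width, hidden=32):
        super().__init__()
        # one timestep = one row, flattened to channels * width features
        self.lstm = nn.LSTM(channels * width, hidden, batch_first=True)

    def forward(self, x):
        n, c, h, w = x.shape
        seq = x.permute(0, 2, 1, 3).reshape(n, h, c * w)  # (N, H, C*W)
        out, _ = self.lstm(seq)                           # (N, H, hidden)
        return out

x = torch.randn(2, 64, 14, 14)        # e.g. a mid-network feature map
y = RowLSTMSketch(64, 14)(x)          # y has shape (2, 14, 32)
```

Note that the H rows are processed one after another inside `nn.LSTM`, which already hints at why such a layer is slow relative to the convolutions around it.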

Is this a common issue? Do LSTM layers just take a long time to train in general?

Yes. Since LSTMs (and many other RNNs) rely on sequential feeding of information, you lose a large portion of the parallelization speed-ups you generally get with CNNs. There are other types of RNNs you can explore that leverage more parallelizable algorithms, but the verdict on their predictive performance compared to LSTM/GRU is still out.
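The difference is easy to observe directly: a convolution processes the whole feature map in one parallel kernel launch, while an LSTM must iterate one timestep at a time. A rough CPU timing sketch (illustrative sizes, not a rigorous benchmark; absolute numbers will vary by hardware):

```python
import time
import torch
import torch.nn as nn

x_conv = torch.randn(8, 64, 56, 56)   # batch of feature maps for the conv
x_seq = torch.randn(8, 56, 64)        # batch of length-56 sequences for the LSTM

conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)
lstm = nn.LSTM(64, 64, batch_first=True)

def bench(fn, reps=10):
    """Time `reps` forward passes without gradient tracking."""
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(reps):
            fn()
    return time.perf_counter() - start

t_conv = bench(lambda: conv(x_conv))
t_lstm = bench(lambda: lstm(x_seq))
print(f"conv: {t_conv:.3f}s  lstm: {t_lstm:.3f}s")
```

The gap grows with sequence length, since each additional timestep adds a serial dependency that cannot be parallelized away, whereas a larger feature map mostly just adds parallel work.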
