
PyTorch: Simple feedforward neural network not running without retain_graph=True

Below is my code for training a feedforward neural network (FFNN).

The labels are numbers between 0 and 50. The FFNN comprises a single hidden layer with 50 neurons and an output layer with 51 neurons. Furthermore, I have used negative log likelihood loss.
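For reference, a minimal sketch of the architecture described above (the input dimension, the ReLU activation, and the SGD optimizer are assumptions, since the question does not show the model definition):

import torch
import torch.nn as nn

class FFNN(nn.Module):
    def __init__(self, input_dim):  # input_dim is an assumption; the question does not give it
        super().__init__()
        self.hidden = nn.Linear(input_dim, 50)  # single hidden layer, 50 neurons
        self.output = nn.Linear(50, 51)         # output layer, 51 neurons (labels 0-50)

    def forward(self, x):
        x = torch.relu(self.hidden(x))                   # ReLU is an assumed activation
        return torch.log_softmax(self.output(x), dim=1)  # NLLLoss expects log-probabilities

model = FFNN(input_dim=300)                         # 300 is a placeholder input size
loss_function = nn.NLLLoss()                        # negative log likelihood loss
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # optimizer and lr are assumptions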

I am very new to PyTorch, so I used a couple of websites for guidance. The strange thing is that none of them required retain_graph to be set to True (they don't pass any arguments when calling backward()). Furthermore, it runs very slowly and the accuracy seems to fluctuate around a fixed value instead of improving.

Assuming that the input's format is correct, can someone please explain to me why the network is performing so badly and why the network requires retain_graph to be set to True?

Thank you very much!

n_epochs = 2
batch_size = 100
for epoch in range(n_epochs):
    # reshuffle the training set at the start of each epoch
    permutation = torch.randperm(training_set.size()[0])
    for i in range(0, training_set.size()[0], batch_size):
        opt.zero_grad()
        indices = permutation[i:i + batch_size]
        batch_features = training_set[indices]
        batch_labels = torch.LongTensor([label for label, sent in train[indices]])
        batch_outputs = model(batch_features)
        loss = loss_function(batch_outputs, batch_labels)
        loss.backward(retain_graph=True)
        opt.step()

You are missing the .zero_grad() operation. Add that to the loop and your code will work fine without retain_graph=True.

loss.backward()  # no retain_graph needed once gradients are cleared each iteration
opt.step()
opt.zero_grad()  # clear accumulated gradients before the next batch
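For completeness, a sketch of the question's loop with that fix applied (variable names follow the question's code; the assumption is that each batch builds a fresh computation graph, so backward() can safely free it):

n_epochs = 2
batch_size = 100
for epoch in range(n_epochs):
    permutation = torch.randperm(training_set.size()[0])
    for i in range(0, training_set.size()[0], batch_size):
        indices = permutation[i:i + batch_size]
        batch_features = training_set[indices]
        batch_labels = torch.LongTensor([label for label, sent in train[indices]])
        batch_outputs = model(batch_features)
        loss = loss_function(batch_outputs, batch_labels)
        loss.backward()  # no retain_graph: the per-batch graph is freed after this call
        opt.step()
        opt.zero_grad()  # reset gradients before the next batch

Note that retain_graph=True is only needed when backward() is called more than once through the same graph, for example when two losses share an intermediate result; a standard one-backward-per-batch loop like this one does not need it.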


