
Inplace operation error in control problem

I am new to pytorch and I ran into a problem with some code for training a neural network to solve a control problem. I use the following code to solve a toy version of my problem:

# SOME IMPORTS
import torch
import torch.autograd as autograd  
from torch import Tensor               
import torch.nn as nn                   
import torch.optim as optim       

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


# PARAMETERS OF THE PROBLEM
layers = [4, 32, 32, 4]  # Layers of the NN
steps = 10000  # Simulation steps
train_step = 1  # I train the NN for 1 epoch every train_step steps
lr = 1e-3  # Learning rate

After this I define a very simple network:

# DEFINITION OF THE NETWORK (A SIMPLE FEED FORWARD)

class FCN(nn.Module):
    
    def __init__(self,layers):
        super(FCN, self).__init__() #call __init__ from parent class 
         
        self.linears = []
        for i in range(len(layers)-2):
            self.linears.append(
                nn.Linear(layers[i], layers[i+1])
            )
            self.linears.append(
                nn.ReLU()
            )
            
        self.linears.append(
            nn.Linear(layers[-2], layers[-1])
        )

        self.linear_stack = nn.Sequential(*self.linears)

    # forward pass
    def forward(self, x):
        
        out = self.linear_stack(x)

        return out
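
A quick sanity check of the network above (a minimal sketch; the zero input is just a placeholder):

net = FCN([4, 32, 32, 4])
x = torch.zeros(4)          # dummy 4-dimensional state
print(net(x).shape)         # torch.Size([4])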

然后我使用定義的類來創建我的模型:

model = FCN(layers)
model.to(device)
params = list(model.parameters())
optimizer = torch.optim.Adam(model.parameters(),lr=lr,amsgrad=False)

然后我定義損失函數和模擬函數,即更新問題狀態的函數。

def simulate(state_old, model):

    state_new = model(state_old)

    return state_new

def lossNN(state_old,state_new, model):

    error = torch.sum( (state_old-state_new)**2 )

    return error
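
Calling these once in isolation looks like this (a minimal sketch with a placeholder state; note that the loss still carries the autograd graph through state_new):

s0 = torch.zeros(4, device=device)
s1 = simulate(s0, model)        # one forward pass through the network
print(lossNN(s0, s1, model))    # scalar tensor with a grad_fn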

最后我訓練我的模型:

torch.autograd.set_detect_anomaly(True)

state_old = torch.Tensor([0.01, 0.01, 0.5, 0.1]).to(device)

for i in range(steps):
    state_new = simulate(state_old, model)
    
    if i%train_step == 0:
        optimizer.zero_grad()
        loss = lossNN(state_old, state_new, model)
        loss.backward(retain_graph=True)
        optimizer.step()
        
    state_old = state_new
    
    
    if (i%1000)==0:
        print(loss)
        print(state_new)

然后我收到以下錯誤。 在這里你可以找到回溯:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 4]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

You need to use detach to drop the gradient graph created in the previous state.

state_old = state_new            # before: state_old stays attached to the old graph
state_old = state_new.detach()   # after: state_old is cut off from the graph
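
For context, this version-counter error typically means that a tensor saved for backward was modified in place after the graph was built. Here optimizer.step() updates the weights in place, and the next iteration's backward() still reaches the previous graph through state_old. A minimal sketch of the same failure mode (w just stands in for a weight):

import torch

w = torch.ones(3, requires_grad=True)
loss = (w ** 2).sum()    # backward needs the saved value of w
with torch.no_grad():
    w.add_(1.0)          # in-place update, similar to what optimizer.step() does
loss.backward()          # raises the same kind of in-place RuntimeError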

然后您的訓練代碼更改為:

torch.autograd.set_detect_anomaly(True)

state_old = torch.Tensor([0.01, 0.01, 0.5, 0.1]).to(device)

for i in range(steps):
    state_new = simulate(state_old, model)

    if i%train_step == 0:
        optimizer.zero_grad()
        loss = lossNN(state_old, state_new, model)
        loss.backward(retain_graph=True)
        optimizer.step()

    state_old = state_new.detach()
    
    if (i%1000)==0:
        print(loss)
        print(state_new)
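
Once state_old is detached, every iteration builds a fresh graph, so retain_graph=True in loss.backward() should no longer be necessary. For illustration, detach() returns a tensor that shares the same values but is cut off from the autograd history (a minimal sketch with placeholder values):

a = torch.ones(4, requires_grad=True)
b = a * 2
c = b.detach()                                 # same data, no autograd history
print(b.requires_grad, b.grad_fn is not None)  # True True
print(c.requires_grad, c.grad_fn is not None)  # False False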
