
How to make R2 score in nn.LSTM pytorch

I tried to make a loss function with R² in nn.LSTM but I couldn't find any documentation about it. I already use the RMSE and MAE losses from PyTorch.

My data is a time series and I'm doing time-series forecasting.

This is the code where I use the RMSE loss function during training:

model = LSTM_model(input_size=1, output_size=1, hidden_size=512, num_layers=2, dropout=0).to(device)
criterion = nn.MSELoss(reduction="sum")
optimizer = optim.Adam(model.parameters(), lr=0.001)
callback = Callback(model, early_stop_patience=10, outdir="model/lstm", plot_every=20)


from tqdm.auto import tqdm

def loop_fn(mode, dataset, dataloader, model, criterion, optimizer, device):
    if mode == "train":
        model.train()
    elif mode == "test":
        model.eval()
    cost = 0
    for feature, target in tqdm(dataloader, desc=mode.title()):
        feature, target = feature.to(device), target.to(device)
        output, hidden = model(feature, None)
        loss = torch.sqrt(criterion(output, target))  # RMSE from the summed MSE

        if mode == "train":
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        cost += loss.item() * feature.shape[0]
    cost = cost / len(dataset)
    return cost

And this is the code that starts training:

while True:
    train_cost = loop_fn("train", train_set, trainloader, model, criterion, optimizer, device)
    with torch.no_grad():
        test_cost = loop_fn("test", test_set, testloader, model, criterion, optimizer, device)

    callback.log(train_cost, test_cost)
    callback.save_checkpoint()
    callback.cost_runtime_plotting()

    if callback.early_stopping(model, monitor="test_cost"):
        callback.plot_cost()
        break

Can anyone help me with the R² loss function? Thank you in advance.

Here is an implementation:

"""
From https://en.wikipedia.org/wiki/Coefficient_of_determination
"""
def r2_loss(output, target):
    target_mean = torch.mean(target)
    ss_tot = torch.sum((target - target_mean) ** 2)
    ss_res = torch.sum((target - output) ** 2)
    r2 = 1 - ss_res / ss_tot
    return r2

You can use it as below:

loss = r2_loss(output, target)
loss.backward()
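One thing worth noting about this usage (my observation, not part of the answer): `r2_loss` returns the R² score itself, which is *higher* for a better fit (1.0 is perfect), so minimizing it directly with gradient descent would drive the model toward a worse fit. A common workaround is to minimize `1 - R²` instead. A self-contained sketch, with the same `r2_loss` repeated for completeness and arbitrary example tensors:

```python
import torch

def r2_loss(output, target):
    target_mean = torch.mean(target)
    ss_tot = torch.sum((target - target_mean) ** 2)
    ss_res = torch.sum((target - output) ** 2)
    return 1 - ss_res / ss_tot

# Example values are arbitrary, chosen only to illustrate the sign convention.
output = torch.tensor([2.5, 0.0, 2.0, 8.0], requires_grad=True)
target = torch.tensor([3.0, -0.5, 2.0, 7.0])

r2 = r2_loss(output, target)  # score: higher is better, 1.0 = perfect fit
loss = 1 - r2                 # loss: lower is better, suitable for minimization
loss.backward()               # gradients now point toward increasing R^2
```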

The following library function already implements the comments I made on Melike's solution:

from torchmetrics.functional import r2_score
loss = r2_score(output, target)
loss.backward()
