
What is the PyTorch equivalent of a TensorFlow linear regression?

I am learning PyTorch and want to do a basic linear regression on data created like this:

from sklearn.datasets import make_regression
import matplotlib.pyplot as plt

x, y = make_regression(n_samples=100, n_features=1, noise=15, random_state=42)
y = y.reshape(-1, 1)
print(x.shape, y.shape)

plt.scatter(x, y)

I know this can be solved with TensorFlow using this code:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=1, activation='linear', input_shape=(x.shape[1], )))

model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.05), loss='mse')

hist = model.fit(x, y, epochs=15, verbose=0)

But I need to know what the PyTorch equivalent would look like. What I tried to do was this:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Model Class
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(1,1)
        
    def forward(self, x):
        x = self.linear(x)
        return x
    
    def predict(self, x):
        return self.forward(x)
    
model = Net()

loss_fn = F.mse_loss
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Function to train the model
def fit(num_epochs, model, loss_fn, opt, train_dl):
    # Repeat for given number of epochs
    for epoch in range(num_epochs):
        
        # Train with batches of data
        for xb, yb in train_dl:
            
            # 1. Generate predictions
            pred = model(xb)
            
            # 2. Calculate Loss
            loss = loss_fn(pred, yb)
            
            # 3. Compute gradients
            loss.backward()
            
            # 4. Update parameters using gradients
            opt.step()
            
            # 5. Reset the gradients to zero
            opt.zero_grad()
            
        # Print the progress
        if (epoch+1) % 10 == 0:
            print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))

# Training
fit(200, model, loss_fn, opt, data_loader)

But the model doesn't learn anything, and I don't know what else I can do.

The input/output dimensions are (1/1).

Dataset

First, you should define a torch.utils.data.Dataset:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]

It converts the numpy data to PyTorch tensors inside __init__ and casts the data to float (numpy defaults to double, while PyTorch's default is float, in order to use less memory).
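
A quick sketch of the dtype point (my own illustration, not part of the original answer): numpy arrays default to float64, and torch.from_numpy preserves that dtype, so the explicit cast to float is needed:

import numpy as np
import torch

a = np.zeros(3)          # numpy defaults to float64 ("double")
t = torch.from_numpy(a)  # from_numpy keeps the dtype: torch.float64
print(t.dtype)           # torch.float64
print(t.float().dtype)   # torch.float32, PyTorch's default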

Apart from that, it simply returns a tuple of the features and their respective regression targets.

Fit

Almost there, but you have to flatten the output of the model (as described below). torch.nn.Linear will return tensors of shape (batch, 1), while your targets are of shape (batch,); flatten() removes the unnecessary 1 dimension.

# 2. Calculate Loss
loss = criterion(pred.flatten(), yb)
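
To see why the flattening matters, here is a minimal sketch (my illustration, assuming a batch size of 32):

import torch

pred = torch.randn(32, 1)  # what nn.Linear(1, 1) returns for a batch of 32
target = torch.randn(32)   # regression targets of shape (batch,)

# Without flattening, broadcasting expands (32, 1) vs (32,) into (32, 32),
# so the loss is averaged over the wrong pairs (PyTorch emits a UserWarning).
print(pred.flatten().shape)                                  # torch.Size([32])
print(torch.nn.functional.mse_loss(pred.flatten(), target))  # correct scalar loss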

Model

This is all you really need:

model = torch.nn.Linear(1, 1)

Any layer can be called directly; there is no need for forward and inheritance for simple models.
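
For example (a minimal illustration):

import torch

model = torch.nn.Linear(1, 1)
x = torch.randn(8, 1)
print(model(x).shape)  # torch.Size([8, 1]); the layer is callable by itself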

Calling

The rest is almost fine; you just have to create a torch.utils.data.DataLoader and pass an instance of our dataset. What DataLoader does is call the dataset's __getitem__ multiple times and create a batch of the specified size (there are some other funny things going on, but that's the idea):

dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)

Also notice that I used torch.nn.MSELoss(); since we are passing objects, it looks better than a function in this case.
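
The two forms compute the same thing; a quick sketch of the equivalence (my illustration):

import torch
import torch.nn.functional as F

pred, target = torch.randn(4), torch.randn(4)
criterion = torch.nn.MSELoss()  # module (object) form
print(torch.allclose(criterion(pred, target), F.mse_loss(pred, target)))  # True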

Full code

To make it easier:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]


# Function to train the model
def fit(num_epochs, model, criterion, optimizer, train_dl):
    # Repeat for given number of epochs
    for epoch in range(num_epochs):

        # Train with batches of data
        for xb, yb in train_dl:

            # 1. Generate predictions
            pred = model(xb)

            # 2. Calculate Loss
            loss = criterion(pred.flatten(), yb)

            # 3. Compute gradients
            loss.backward()

            # 4. Update parameters using gradients
            optimizer.step()

            # 5. Reset the gradients to zero
            optimizer.zero_grad()

        # Print the progress
        if (epoch + 1) % 10 == 0:
            print(
                "Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, num_epochs, loss.item())
            )


dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)

You should get a loss of around 0.053; vary noise or other parameters for a harder or easier regression task.
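
If you want to sanity-check the fit, you can inspect the learned parameters after training and compare them with a closed-form least-squares fit (a small sketch, not part of the original answer; it reuses the model and dataset variables from above):

import numpy as np

# The single weight and bias of the trained linear model:
print(model.weight.item(), model.bias.item())

# Closed-form least-squares fit on the same data for comparison:
slope, intercept = np.polyfit(dataset.x.numpy().ravel(), dataset.y.numpy(), deg=1)
print(slope, intercept)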
