
What is the PyTorch equivalent of a TensorFlow linear regression?

I am learning PyTorch and want to do a basic linear regression on data created like this:

from sklearn.datasets import make_regression
import matplotlib.pyplot as plt

x, y = make_regression(n_samples=100, n_features=1, noise=15, random_state=42)
y = y.reshape(-1, 1)
print(x.shape, y.shape)

plt.scatter(x, y)

I know that with TensorFlow this code solves it:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=1, activation='linear', input_shape=(x.shape[1], )))

model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.05), loss='mse')

hist = model.fit(x, y, epochs=15, verbose=0)

But I need to know what the PyTorch equivalent would look like. What I tried to do is this:

import torch
import torch.nn as nn
import torch.nn.functional as F


# Model class
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        x = self.linear(x)
        return x

    def predict(self, x):
        return self.forward(x)


model = Net()

loss_fn = F.mse_loss
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Training function
def fit(num_epochs, model, loss_fn, opt, train_dl):

    # Repeat for given number of epochs
    for epoch in range(num_epochs):

        # Train with batches of data
        for xb, yb in train_dl:

            # 1. Generate predictions
            pred = model(xb)

            # 2. Calculate loss
            loss = loss_fn(pred, yb)

            # 3. Compute gradients
            loss.backward()

            # 4. Update parameters using gradients
            opt.step()

            # 5. Reset the gradients to zero
            opt.zero_grad()

        # Print the progress
        if (epoch + 1) % 10 == 0:
            print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Training
fit(200, model, loss_fn, opt, data_loader)

But the model learns nothing, and I don't know what else to try.

The input/output dimensions are (1/1).

Dataset

First, you should define a torch.utils.data.Dataset:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]

It converts the numpy data to PyTorch tensors inside __init__ and casts the data to float (numpy defaults to double, while PyTorch's default is float, which uses less memory).

Other than that, it simply returns a tuple of features and their respective regression targets.
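
For instance (a minimal sketch, not part of the original answer), you can verify the dtype conversion like this:

import numpy as np
import torch

arr = np.zeros(3)                            # numpy defaults to float64 (double)
print(torch.from_numpy(arr).dtype)           # torch.float64
print(torch.from_numpy(arr).float().dtype)   # torch.float32, PyTorch's default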

Fit

You were almost there, but you have to flatten the model's output (as described below). torch.nn.Linear returns a tensor of shape (batch, 1), while your targets have shape (batch,). flatten() removes the unnecessary 1 dimension:

# 2. Calculate Loss
loss = criterion(pred.flatten(), yb)
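
To see why this matters (a minimal sketch with a made-up batch size of 4), compare the shapes:

import torch

pred = torch.nn.Linear(1, 1)(torch.randn(4, 1))  # hypothetical batch of 4 samples
print(pred.shape)            # torch.Size([4, 1])
print(pred.flatten().shape)  # torch.Size([4]), now matches targets of shape (batch,)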

Model

This is all you really need:

model = torch.nn.Linear(1, 1)

Any layer can be called directly; no forward method or inheritance is needed for simple models.
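
As a quick illustration (a minimal sketch with a made-up batch size, not from the original answer), calling the layer directly behaves just like the Net module above:

import torch

linear = torch.nn.Linear(1, 1)
x = torch.randn(8, 1)   # hypothetical batch of 8 one-feature samples
out = linear(x)         # layers are callable; no custom forward needed
print(out.shape)        # torch.Size([8, 1])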

Calling

The rest is almost fine; you just have to create a torch.utils.data.DataLoader and pass an instance of our dataset. What a DataLoader does is call the dataset's __getitem__ repeatedly and assemble a batch of the specified size (it does some other fun things as well, but that's the idea):

dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)

Also notice that I used torch.nn.MSELoss(); since we are passing objects around, it looks better than a function in this case.
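
Both forms compute the same value, so the choice is mostly stylistic (a minimal sketch with random tensors, not from the original answer):

import torch
import torch.nn.functional as F

pred, target = torch.randn(4), torch.randn(4)
criterion = torch.nn.MSELoss()
print(criterion(pred, target))   # module (object) form
print(F.mse_loss(pred, target))  # functional form, same result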

Full code

To make it all easier, here is the whole thing:

import torch
from sklearn.datasets import make_regression


class RegressionDataset(torch.utils.data.Dataset):
    def __init__(self):
        data = make_regression(n_samples=100, n_features=1, noise=0.1, random_state=42)
        self.x = torch.from_numpy(data[0]).float()
        self.y = torch.from_numpy(data[1]).float()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, index):
        return self.x[index], self.y[index]


# Training function
def fit(num_epochs, model, criterion, optimizer, train_dl):
    # Repeat for given number of epochs
    for epoch in range(num_epochs):

        # Train with batches of data
        for xb, yb in train_dl:

            # 1. Generate predictions
            pred = model(xb)

            # 2. Calculate Loss
            loss = criterion(pred.flatten(), yb)

            # 3. Compute gradients
            loss.backward()

            # 4. Update parameters using gradients
            optimizer.step()

            # 5. Reset the gradients to zero
            optimizer.zero_grad()

        # Print the progress
        if (epoch + 1) % 10 == 0:
            print(
                "Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, num_epochs, loss.item())
            )


dataset = RegressionDataset()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

fit(5000, model, criterion, optimizer, dataloader)

You should get a loss of around 0.053; change noise or other parameters to make the regression task harder or easier.
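
As an optional sanity check (my own addition, not part of the original answer), you can inspect the slope and intercept the model learned; for a 1-in/1-out torch.nn.Linear, both are single-element tensors:

# Hypothetical check; exact values will vary with noise and training length
print(model.weight.item(), model.bias.item())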
