
PyTorch - How to set Activation Rules of neurons to increase efficiency of Neural Network?

I am trying to build a backpropagation neural network with PyTorch. I can run it successfully and test its accuracy, but it is not very efficient. I am now supposed to increase its efficiency by setting different activation rules for the neurons, so that neurons that do not contribute to the final output are excluded (pruned) from the computation, improving both running time and accuracy.

My code is as follows (excerpt):

# imports used by this excerpt (DataFrameDataset is a custom Dataset defined
# elsewhere in the full script, as is the pandas DataFrame `data`)
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

# Hyper Parameters
input_size = 20
hidden_size = 50
num_classes = 130
num_epochs = 500
batch_size = 5
learning_rate = 0.1

# normalise each input column to zero mean and unit variance
# (column labels are integers; the last column is the target)
for column in data:
    if column != data.shape[1] - 1:
        data[column] = data.loc[:, [column]].apply(lambda x: (x - x.mean()) / x.std())

# randomly split data into training set (80%) and testing set (20%)
msk = np.random.rand(len(data)) < 0.8
train_data = data[msk]
test_data = data[~msk]

# define train dataset and a data loader
train_dataset = DataFrameDataset(df=train_data)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

# Neural Network
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.sigmoid(out)
        out = self.fc2(out)
        return out

net = Net(input_size, hidden_size, num_classes)

# loss function, optimiser and loss history -- the excerpt omits these
# definitions, so plausible ones are shown here (the original may differ)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)
all_losses = []

# train the model by batch
for epoch in range(num_epochs):
    for step, (batch_x, batch_y) in enumerate(train_loader):
        # convert torch tensor to Variable (a no-op wrapper since PyTorch 0.4)
        X = Variable(batch_x)
        Y = Variable(batch_y.long())

        # Forward + Backward + Optimize
        optimizer.zero_grad()  # zero the gradient buffer
        outputs = net(X)
        loss = criterion(outputs, Y)
        all_losses.append(loss.item())  # loss.item() replaces the pre-0.4 loss.data[0]
        loss.backward()
        optimizer.step()

        if epoch % 50 == 0:
            _, predicted = torch.max(outputs, 1)
            # calculate and print per-batch accuracy
            total = predicted.size(0)
            correct = predicted.data.numpy() == Y.data.numpy()

            print('Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Accuracy: %.2f %%'
                  % (epoch + 1, num_epochs, step + 1,
                     len(train_data) // batch_size + 1,
                     loss.item(), 100 * sum(correct) / total))

Could someone tell me how to do this in PyTorch, as I am new to PyTorch?

I am not sure this question belongs on stackoverflow, but I will give you a hint anyway. You are currently using a sigmoid activation function, whose gradient vanishes when the input value is too large or too small. A commonly used alternative is the ReLU activation function (short for rectified linear unit).
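To see the vanishing gradient concretely, here is a minimal sketch (not part of the original answer): the derivative sigmoid'(x) = sigmoid(x)(1 - sigmoid(x)) peaks at 0.25 at x = 0 and decays towards zero as |x| grows.

import torch

# gradient of sigmoid at a few points: it is at most 0.25 (at x = 0)
# and practically zero once |x| is large, so the learning signal dies out
for x0 in [0.0, 5.0, 10.0]:
    x = torch.tensor(x0, requires_grad=True)
    torch.sigmoid(x).backward()
    print('x = %5.1f: d(sigmoid)/dx = %.6f' % (x0, x.grad.item()))
# expected output (approximately): 0.250000, 0.006648, 0.000045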

ReLU(x) is the identity on the positive domain and zero on the negative domain; in Python it looks like this:

def ReLU(x):
    # identity for positive inputs, zero for everything else
    if x > 0:
        return x
    else:
        return 0

It should be available out of the box in PyTorch.
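For example, here is a sketch (based on the Net class from the question, not part of the original answer) that swaps nn.Sigmoid for nn.ReLU in the hidden layer:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()  # replaces nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)   # the functional form torch.relu(out) also works
        out = self.fc2(out)
        return out

Note that with ReLU, any hidden unit whose pre-activation is negative outputs exactly zero and therefore contributes nothing to the next layer for that sample, which is the closest built-in behaviour to the "pruning" the question asks about.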
