
Multi Layer Perceptron Deep Learning in Python using Pytorch

I am getting an error when executing the train function of my MLP code.

This is the error:

mat1 and mat2 shapes cannot be multiplied (128x10 and 48x10)

My code for the train function is this:

class net(nn.Module):
    def __init__(self, input_dim2, hidden_dim2, output_dim2):
        super(net, self).__init__()
        self.input_dim2 = input_dim2
        self.fc1 = nn.Linear(input_dim2, hidden_dim2)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim2, hidden_dim2)
        self.fc3 = nn.Linear(hidden_dim2, output_dim2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.fc3(x)
        x = F.softmax(self.fc3(x))

        return x



model = net(input_dim2, hidden_dim2, output_dim2) #create the network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr = learning_rate2)


def train(num_epochs2):
    for i in range(num_epochs2):
        tmp_loss = []
        for (x, y) in train_loader:
            print(y.shape)
            print(x.shape)
            outputs = model(x)  # forward pass
            print(outputs.shape)
            loss = criterion(outputs, y)  # loss computation
            tmp_loss.append(loss.item())  # recording the loss
            optimizer.zero_grad()  # clear the accumulated gradients
            loss.backward()  # auto-differentiation - accumulation of gradients
            optimizer.step()  # a gradient step

        print("Loss at {}th epoch: {}".format(i, np.mean(tmp_loss)))

I don't know where I'm going wrong. My code seems okay to me.

From the limited error message, my guess is that the problem is in the following lines:

x = self.fc3(x) 
x = F.softmax(self.fc3(x))

Try replacing them with:

x = self.fc3(x) 
x = F.softmax(x)
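
This would also explain the reported shapes: the first self.fc3(x) already reduces the features to output_dim2, so feeding that result into fc3 a second time fails against a weight expecting hidden_dim2 inputs (the 48x10 matrix in the message). Below is a minimal sketch of the corrected forward method, keeping the layer names from the question; note that nn.CrossEntropyLoss applies log-softmax internally, so returning the raw logits is usually enough, and if you do keep F.softmax you should pass dim=1 explicitly.

def forward(self, x):
    x = self.fc1(x)
    x = self.relu(x)
    x = self.fc2(x)
    x = self.relu(x)
    x = self.fc3(x)              # apply fc3 only once
    return F.softmax(x, dim=1)   # or just `return x`, since CrossEntropyLoss works on raw logits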

A good question should include the error backtrace and a complete toy example that reproduces the error!

A relu activation seems to be missing in the __init__ function here, or there is an extra relu activation in the forward function. Look at the code below and try to figure out what is extra or missing.

def __init__(self, input_dim2, hidden_dim2, output_dim2):
    super(net, self).__init__()
    self.input_dim2 = input_dim2
    self.fc1 = nn.Linear(input_dim2, hidden_dim2)
    self.relu = nn.ReLU()
    self.fc2 = nn.Linear(hidden_dim2, hidden_dim2)
    self.fc3 = nn.Linear(hidden_dim2, output_dim2)

def forward(self, x):
    x = self.fc1(x)
    x = self.relu(x)
    x = self.fc2(x)
    x = self.relu(x)
    x = self.fc3(x)
    x = F.softmax(self.fc3(x))

    return x
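
One quick way to spot what is extra or missing is to push a dummy batch through the network and see where the shapes stop lining up. Here is a small sketch using the net class from the question; the sizes (hidden_dim2=48, output_dim2=10, batch of 128) are assumptions read off the error message, and input_dim2=20 is made up.

import torch

check_model = net(input_dim2=20, hidden_dim2=48, output_dim2=10)  # assumed sizes
dummy_x = torch.randn(128, 20)   # fake batch: 128 samples, 20 features

out = check_model(dummy_x)       # with the duplicated self.fc3 call this line raises the
print(out.shape)                 # same shape-mismatch error (128x10 and 48x10); after the
                                 # duplicate call is removed it prints torch.Size([128, 10])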
