
I am studying artificial neural networks. Where is the hidden layer?

import torch
from torch import nn

class MLP(nn.Module):
  def __init__(self):
    super().__init__()

    self.in_dim = 28 * 28
    self.out_dim = 10

    # five fully connected layers
    self.fc1 = nn.Linear(self.in_dim, 512)
    self.fc2 = nn.Linear(512, 256)
    self.fc3 = nn.Linear(256, 128)
    self.fc4 = nn.Linear(128, 64)
    self.fc5 = nn.Linear(64, self.out_dim)

    self.relu = nn.ReLU()

  def forward(self, x):
    a1 = self.relu(self.fc1(x.view(-1, self.in_dim)))
    a2 = self.relu(self.fc2(a1))
    a3 = self.relu(self.fc3(a2))
    a4 = self.relu(self.fc4(a3))
    logit = self.fc5(a4)

    return logit
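To check where the layers sit, I rebuilt the same architecture with nn.Sequential (my assumption: identical layer sizes to the class above, with nn.Flatten standing in for the x.view call) and ran a dummy batch through it:

```python
import torch
from torch import nn

# Same shapes as the MLP above, rebuilt with nn.Sequential so each
# piece is easy to point at. The comments reflect my current
# understanding of which parts are hidden layers.
model = nn.Sequential(
    nn.Flatten(),                          # reshapes (N, 28, 28) -> (N, 784)
    nn.Linear(28 * 28, 512), nn.ReLU(),    # hidden layer 1?
    nn.Linear(512, 256), nn.ReLU(),        # hidden layer 2?
    nn.Linear(256, 128), nn.ReLU(),        # hidden layer 3?
    nn.Linear(128, 64), nn.ReLU(),         # hidden layer 4?
    nn.Linear(64, 10),                     # output layer: raw logits
)

x = torch.randn(4, 28, 28)   # dummy batch of 4 "images"
logits = model(x)
print(logits.shape)          # torch.Size([4, 10])
```

The shapes come out as expected either way, so my question is only about the naming, not the code.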

It's really basic, but the explanation I heard left me confused, so I'm asking. Looking at the code above, are a1, a2, a3, and a4 the hidden layers?

x is the input value. My understanding was that a1 is the result of multiplying x by the fc weights, and a2 is the result of applying the activation function to a1.
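To check that understanding, I tried one linear-plus-activation step in isolation (my assumption: tiny sizes 4 -> 3 instead of 784 -> 512, just so the tensors stay printable):

```python
import torch
from torch import nn

torch.manual_seed(0)

# One fully connected step by itself.
fc = nn.Linear(4, 3)
relu = nn.ReLU()

x = torch.randn(1, 4)
z = fc(x)        # linear part only: z = x @ W.T + b
a = relu(z)      # activation applied on top: a = max(z, 0)

# Note: in the forward() above, each a_i already includes the ReLU,
# i.e. a1 = relu(fc1(x)), not a1 = fc1(x).
print(z)
print(a)
```

So it seems each a_i in the code is the post-activation value, which is part of what confuses me about the naming.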

Considering that hidden layers sit between the input layer and the output layer, I would have said the hidden layers here are a2 and a3.
