I am studying artificial neural networks. Where are the hidden layers?
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.in_dim = 28 * 28
        self.out_dim = 10
        self.fc1 = nn.Linear(self.in_dim, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 128)
        self.fc4 = nn.Linear(128, 64)
        self.fc5 = nn.Linear(64, self.out_dim)
        self.relu = nn.ReLU()

    def forward(self, x):
        a1 = self.relu(self.fc1(x.view(-1, self.in_dim)))
        a2 = self.relu(self.fc2(a1))
        a3 = self.relu(self.fc3(a2))
        a4 = self.relu(self.fc4(a3))
        logit = self.fc5(a4)
        return logit
This is really basic, but the explanation I heard left me confused, so I'm asking. Looking at the code above, if I'm identifying the hidden layers, are a1, a2, a3, and a4 correct?

x is the input value; as I understand it, a1 is the result of multiplying x by the fc1 weights (and applying the activation function), and a2 is the result of applying fc2 and the activation function to a1. Taking into consideration that a hidden layer sits between the input layer and the output layer, I would have to say that the hidden layers in this case are a2 and a3.
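Not an authoritative answer, but to make the question concrete, here is a minimal sketch that runs the same five linear layers on a dummy batch and prints each intermediate activation's shape. Under the usual convention that every layer strictly between the input and the output counts as hidden, all four of fc1–fc4 (producing a1–a4) would be hidden layers, not just two of them. The batch size of 2 and the random input are illustrative assumptions, not part of the original code.

```python
import torch
from torch import nn

torch.manual_seed(0)

# The same layers as in the question, rebuilt standalone so that each
# intermediate activation can be inspected directly.
fc1, fc2 = nn.Linear(28 * 28, 512), nn.Linear(512, 256)
fc3, fc4 = nn.Linear(256, 128), nn.Linear(128, 64)
fc5 = nn.Linear(64, 10)
relu = nn.ReLU()

x = torch.randn(2, 28 * 28)  # input layer: a dummy batch of 2 flattened images

a1 = relu(fc1(x))    # hidden layer 1 -> shape (2, 512)
a2 = relu(fc2(a1))   # hidden layer 2 -> shape (2, 256)
a3 = relu(fc3(a2))   # hidden layer 3 -> shape (2, 128)
a4 = relu(fc4(a3))   # hidden layer 4 -> shape (2, 64)
logit = fc5(a4)      # output layer   -> shape (2, 10), one logit per class

for name, t in [("a1", a1), ("a2", a2), ("a3", a3), ("a4", a4), ("logit", logit)]:
    print(name, tuple(t.shape))
```

Each `aN` is the activation of one hidden layer (linear transform followed by ReLU); `logit` comes from the output layer, which is conventionally not called hidden even though `fc5` is also a `nn.Linear`.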