
MLP Classifier neuron weights

I have the following MLP neural network:

from sklearn.neural_network import MLPClassifier

MLP = MLPClassifier(activation='tanh', alpha=1e-05, hidden_layer_sizes=(2, 3),
                    learning_rate='constant', max_iter=5000)
MLP.fit(X_train, y_train)

print(MLP.coefs_)
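
X_train and y_train are not shown in the question; a minimal sketch of hypothetical stand-in data, assuming 5-dimensional features and 3 target classes so that the coefficient shapes match the output printed below, could be:

from sklearn.datasets import make_classification

# Hypothetical training data: 5 input features and 3 classes, chosen only
# so the fitted coefficient shapes match the coefs_ output shown below.
X_train, y_train = make_classification(n_samples=200, n_features=5,
                                       n_informative=3, n_classes=3,
                                       random_state=0)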

As I understand it, this neural network has only 2 hidden layers, with 2 neurons in the first hidden layer and 3 neurons in the second. However, printing the coefficients as above gives the following:

[array([[-0.15020109,  0.29242019],
       [ 0.38515555,  0.06000627],
       [-0.04371792,  0.35203079],
       [ 0.28167529,  0.05948562],
       [-0.46051132, -0.28546222]]), array([[-0.29658042, -1.2229539 ,  0.4949065 ],
       [-0.95435436,  0.3854664 ,  0.6349616 ]]), array([[-0.54332547,  0.27007792,  0.68899707],
       [-0.00191208,  0.89295531, -0.22855791],
       [-0.58939234,  0.39217616,  1.10214481]])]

My question is how to map the above output to each neuron in the hidden layers. At first glance, it seems that the weights for the first neuron in the first hidden layer are: [-0.29658042, -1.2229539, 0.4949065]. How can the weights for a single neuron be an array of 3 elements?

You need to take the input layer and the output layer into account as well. It looks like you are feeding in 5-dimensional features and producing 3-dimensional outputs. Your network has 2 hidden layers of sizes 2 and 3, so coefs_ should contain arrays of shapes (5, 2), (2, 3) and (3, 3): your inputs go from 5-dim to 2-dim, then from 2-dim to 3-dim, then from 3-dim to 3-dim at the output. Remember that there is a weight attached to each connection from one layer to the next. So if you have 5 neurons (input layer) connected to 2 neurons (first hidden layer), you need 5*2 = 10 weights to describe the 10 connections between these two layers, which is exactly what a shape (5, 2) array holds.
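
One way to verify this mapping is to inspect the shapes of the coefficient arrays directly; the sketch below assumes the fitted MLP object from the question:

# coefs_[i] has shape (units in layer i, units in layer i+1):
# one weight per connection between consecutive layers.
for i, W in enumerate(MLP.coefs_):
    print(f"layer {i} -> layer {i + 1}: weight matrix shape {W.shape}")
# Expected shapes: (5, 2), (2, 3), (3, 3)

# The incoming weights of the first neuron in the first hidden layer are
# the first column of coefs_[0]: a 5-element vector (one weight per input
# feature), not the 3-element row quoted in the question.
print(MLP.coefs_[0][:, 0])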
