
TensorFlow Convolutional Neural Network tutorial

I'm going through the 'Deep MNIST for Experts' TF tutorial ( https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html ) and I'm stuck on this part:

Densely Connected Layer

Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
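The step the tutorial describes can be sketched in plain NumPy to make the shapes concrete (random toy weights here; the variable names are illustrative, not the tutorial's):

```python
import numpy as np

batch = 4
pool_out = np.random.rand(batch, 7, 7, 64)   # output of the second pooling layer
flat = pool_out.reshape(batch, 7 * 7 * 64)   # reshape into a batch of 3136-vectors
W = np.random.randn(7 * 7 * 64, 1024) * 0.1  # weight matrix: 3136 -> 1024
b = np.zeros(1024)                           # bias
h = np.maximum(flat @ W + b, 0)              # ReLU
print(h.shape)                               # (4, 1024)
```

Each of the 1024 output neurons sees all 3136 inputs, which is what "fully connected" means here.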

Why the number 1024? Where did that come from?

My understanding of the fully connected layer is that it has to somehow get back to the original image size (and then we start plugging things into our softmax equation). In this case, the original image size is Height x Width x Channels = 28*28*1 = 784... not 1024.

What am I missing here?

1024 is just an arbitrary number of hidden units. At this point, the input to the network has been reduced to 64 feature planes, each of size 7x7 pixels. They do not try to "get back to the original image size"; they simply want a layer that can extract global features, so they make it densely connected to every single neuron of the last pooling layer (which represents your input space), whereas the previous operations (convolutions and poolings) extracted local features.

Thus, in order to work with this in MLP fashion, you need 7*7*64 = 3136 neurons. They add another layer of 1024 on top, so if you draw your network, it would be something along the lines of

 INPUT - CONV - POOL - .... - CONV - POOL -  HIDDEN  - OUTPUT

 28x28                        7*7*64=3136    1024       10

The number is thus quite arbitrary; they simply found empirically that it works, but you could use any number of units here, or any number of layers.
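To see that the hidden width really is a free choice, here is a sketch (in NumPy, with made-up weights) of the dense head from the diagram above, run with several different hidden sizes. Any width yields valid 10-way logits; only the 3136 input size and the 10 output classes are fixed by the problem:

```python
import numpy as np

def dense_head(flat, hidden):
    """Fully connected head: 3136 -> hidden -> 10. `hidden` is a free choice."""
    W1 = np.random.randn(flat.shape[1], hidden) * 0.1
    W2 = np.random.randn(hidden, 10) * 0.1
    h = np.maximum(flat @ W1, 0)   # hidden layer with ReLU
    return h @ W2                  # 10 logits, one per digit class

flat = np.random.rand(2, 7 * 7 * 64)   # flattened pooling output: batch of 2
for hidden in (512, 1024, 2048):       # 1024 is just one possible choice
    print(hidden, dense_head(flat, hidden).shape)   # always (2, 10)
```

The choice of 1024 trades capacity against parameter count: the first weight matrix alone has 3136 x 1024 ≈ 3.2M parameters, so doubling the width roughly doubles that cost.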
