
Visualize output of each layer in theano Convolutional MLP

I am reading the Convolutional Neural Networks tutorial. I want to visualize the output of each layer after the model is trained. For example, in the function "evaluate_lenet5" I want to pass an instance (an image) to the network and see the output of each layer, as well as the class the trained neural network assigns to the input. I thought it might be as easy as taking a dot product of the image with each layer's weight vector, but it did not work at all.
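A plain dot product cannot reproduce a layer's output because each LeNetConvPoolLayer applies a valid convolution, 2x2 max-pooling, and a tanh nonlinearity (the bias term is omitted here for brevity). A minimal NumPy sketch of that forward pass, with a made-up input and filter, illustrates why the shapes come out as the tutorial's comments say:

```python
import numpy as np

def conv_pool_forward(image, filt, pool=2):
    """Forward pass of one conv-pool layer: valid convolution,
    non-overlapping max-pooling, then tanh (as in LeNetConvPoolLayer;
    bias omitted).  Correlation is used instead of theano's flipped
    convolution for brevity."""
    ih, iw = image.shape
    fh, fw = filt.shape
    oh, ow = ih - fh + 1, iw - fw + 1        # "valid" output size
    conv = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            conv[r, c] = np.sum(image[r:r + fh, c:c + fw] * filt)
    # non-overlapping 2x2 max-pooling
    pooled = conv[:oh - oh % pool, :ow - ow % pool]
    pooled = pooled.reshape(oh // pool, pool, ow // pool, pool).max(axis=(1, 3))
    return np.tanh(pooled)

rng = np.random.RandomState(0)
image = rng.rand(28, 28)         # a fake MNIST-sized input
filt = rng.rand(5, 5) - 0.5      # one made-up 5x5 filter
out = conv_pool_forward(image, filt)
print(out.shape)                 # (12, 12): (28 - 5 + 1) / 2 = 12
```

This is only a sketch of one channel of one layer; the real layer stacks nkerns[0] such filters and adds a learned bias before the tanh.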

I have the layer objects as:

# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))

# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
    rng,
    input=layer0_input,
    image_shape=(batch_size, 1, 28, 28),
    filter_shape=(nkerns[0], 1, 5, 5),
    poolsize=(2, 2)
)

# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
    rng,
    input=layer0.output,
    image_shape=(batch_size, nkerns[0], 12, 12),
    filter_shape=(nkerns[1], nkerns[0], 5, 5),
    poolsize=(2, 2)
)

# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)

# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
    rng,
    input=layer2_input,
    n_in=nkerns[1] * 4 * 4,
    n_out=500,
    activation=T.tanh
)

# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)

So, can you suggest a way to visualize, step by step, how an image is processed after the neural network is trained?

This isn't so hard. If you are using the same LeNetConvPoolLayer class definition from the theano deep-learning tutorial, then you just need to compile a function with x as the input and [LayerObject].output as the output, where LayerObject can be any layer object (layer0, layer1, etc.) whose output you want to visualize:

vis_layer1 = function([x], [layer1.output])

Pass one (or many) samples, exactly as you fed the input tensor while training, and you will get the output of the particular layer for which your function was compiled.
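The shape you should expect back from such a compiled function follows directly from the arithmetic in the model's comments; a small helper (a hypothetical name, not from the tutorial) reproduces it:

```python
def conv_pool_shape(size, filter_size=5, pool=2):
    """Spatial size after a valid 5x5 convolution and 2x2 max-pooling."""
    return (size - filter_size + 1) // pool

s0 = conv_pool_shape(28)   # layer0: 28 -> 24 -> 12
s1 = conv_pool_shape(s0)   # layer1: 12 -> 8  -> 4
print(s0, s1)              # 12 4
```

So for a batch of inputs, the compiled function for layer1 should return an array of shape (batch_size, nkerns[1], 4, 4).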

Note: this way you will get the outputs in exactly the same shape the model used in its computation. However, you can reshape them as you want by reshaping the output variable, e.g. layer1.output.flatten(n).
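For reference, flatten(2) keeps the first dimension and collapses the rest, which is equivalent to a NumPy reshape; a sketch with the tutorial's illustrative sizes (batch_size=500, nkerns[1]=50):

```python
import numpy as np

batch_size, nkerns1 = 500, 50                  # illustrative values
out4d = np.zeros((batch_size, nkerns1, 4, 4))  # shape of layer1.output
# theano's flatten(2) keeps dim 0 and collapses the remaining dims
out2d = out4d.reshape(batch_size, -1)
print(out2d.shape)                             # (500, 800)
```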
