How to output the second layer of a network?
My model is trained on digit images (MNIST dataset). I am trying to print the output of the second layer of my network - an array of 128 numbers.
After reading a lot of examples - for instance this, this, or this - I did not manage to do this on my own network. None of the solutions worked for my own algorithm.
Link to Colab: https://colab.research.google.com/drive/1MLbpWJmq8JZB4_zKongaHP2o3M1FpvAv?fbclid=IwAR20xRz2i6sFS-Nm6Xwfk5hztdXOuxY4tZaDRXxAx3b986HToa9-IaTgASU
I received a lot of different error messages. I tried to handle each of them, but couldn't figure it out on my own.

What am I missing? How do I output the second layer? If my shape is (28, 28) - what should be the type and value of input_shape?
Failed trials & errors, for example:

(1)
for layer in model.layers:
    get_2nd_layer_output = K.function([model.layers[0].input], [model.layers[2].output])
    layer_output = get_2nd_layer_output(layer)[0]
    print('\nlayer output: get_2nd_layer_output=, layer=', layer, '\nlayer output: get_2nd_layer_output=', get_2nd_layer_output)
TypeError: inputs should be a list or tuple.
(2)
input_shape=(28, 28)
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs ) # evaluation function
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 0.])
print('layer_outs',layer_outs)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_1/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_1/bias) [[{{node dense_1/BiasAdd/ReadVariableOp}}]]
Looks like you are mixing old keras (before tensorflow 2.0: import keras) and new keras (from tensorflow import keras).
Try not to use old keras alongside tensorflow>=2.0 (and do not refer to the old documentation, as in your first link), as it is easily confused with the new one (although nothing is strictly illogical):
from tensorflow import keras
from keras.models import Model
print(Model.__module__) #outputs 'keras.engine.training'
from tensorflow.keras.models import Model
print(Model.__module__) #outputs 'tensorflow.python.keras.engine.training'
Behaviour will be highly unstable if you mix those two libraries.
Once this is done, using an answer from what you tried, with m being your model and my_input_shape being the shape of your model's input, i.e. the shape of one picture (here (28, 28), or (1, 28, 28) if you have batches):
import numpy as np
from tensorflow import keras as K

my_input_data = np.random.rand(*my_input_shape)
new_temp_model = K.Model(m.input, m.layers[3].output)  # replace 3 with the index of the desired layer
output_of_3rd_layer = new_temp_model.predict(my_input_data)  # this is what you want
If you have one image img, you can directly write new_temp_model.predict(img).
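Putting the pieces together, here is a minimal end-to-end sketch. The layer sizes mirror the MNIST model from the question; the model is untrained here (fine for checking shapes), and the explicit keras.Input is an addition to keep the example self-contained:

```python
import numpy as np
from tensorflow import keras

# A model like the one in the question: Flatten -> Dense(128) -> Dense(10)
m = keras.Sequential([
    keras.Input(shape=(28, 28)),                   # one 28x28 picture per sample
    keras.layers.Flatten(),                        # layer index 0
    keras.layers.Dense(128, activation='relu'),    # layer index 1 - the "second layer"
    keras.layers.Dense(10, activation='softmax'),  # layer index 2
])

# A new model that reuses m's weights but stops at the 128-unit layer
second_layer_model = keras.Model(m.input, m.layers[1].output)

# One fake "image" with a leading batch dimension of 1
img = np.random.rand(1, 28, 28)
output_of_2nd_layer = second_layer_model.predict(img)
print(output_of_2nd_layer.shape)  # (1, 128)
```

The second-layer output is the 128-number array you were after, one row per input image.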
(Assuming TF2)
I think the most straightforward approach would be to name your layers and then call them with standard input, so your model might look like:
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28), name='flatten'),
keras.layers.Dense(128, activation='relu', name='hidden'),
keras.layers.Dense(10, activation='softmax')
])
Then just create an input and run:
my_input = tf.random.normal((1, 28, 28)) # Should be like the standard input to your network
output_of_flatten = model.get_layer('flatten')(my_input)
output_of_hidden = model.get_layer('hidden')(output_of_flatten)
output_of_hidden is what you are looking for.
If you are looking for a more general solution, assuming your model is sequential, you can use the index keyword of get_layer like this:
my_input = tf.random.normal((1, 28, 28))  # should be like the standard input to your network
desired_index = 1  # 1 == second layer
for i in range(desired_index + 1):  # feed the input through layers 0..desired_index
    my_input = model.get_layer(index=i)(my_input)
At the end of this loop, my_input should be what you are looking for.
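For example, with a model like the one above and desired_index = 1, the loop yields the 128-element hidden activation. This is a sketch; the model here is an untrained stand-in for yours:

```python
import tensorflow as tf
from tensorflow import keras

# Stand-in for the Sequential MNIST model from the question
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])

my_input = tf.random.normal((1, 28, 28))  # like the standard input to your network
desired_index = 1  # 1 == second layer
for i in range(desired_index + 1):  # feed the input through layers 0..desired_index
    my_input = model.get_layer(index=i)(my_input)

print(my_input.shape)  # (1, 128)
```

Because each layer is called eagerly, no extra model or K.function is needed; the loop simply stops after the layer you care about.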