How does the Keras functional API model know about the layers when we only pass the inputs and outputs of the network?
I am new to Keras and am looking into the functional API model structure.
1 - As mentioned here in the docs, keras.Model takes only the input and the output arguments, and the layers are listed before the Model. Can someone please tell me how keras.Model knows about the layer structure and the multiple layers between input and output, when all we are passing is just the input and output tensors?
2 - Also, what is the output of layers.output or layers.input? Is the output not a simple tensor? I see the output below when I print layers.output, using the syntax from this example, for some other layer. It looks like layers.output and layers.input contain the layer info as well, like dense_5/Relu:0. Can someone please clarify what the components of the output below stand for?
print([layer.output for layer in model.layers])
output:
[<tf.Tensor 'input_6:0' shape=(None, 3) dtype=float32>,
<tf.Tensor 'dense_5/Relu:0' shape=(None, 4) dtype=float32>,
<tf.Tensor 'dense_6/Softmax:0' shape=(None, 5) dtype=float32>]
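For context on the names shown above: in TF1-style graph mode, a tensor's name is the name of the op that produced it plus an output index, in the form "<layer scope>/<op name>:<output index>". The split logic below is just an illustration of that naming convention, not a TensorFlow API:

```python
# Break a TF1-style tensor name like 'dense_5/Relu:0' into its parts.
# Convention (graph mode): '<layer scope>/<op name>:<output index>'.
name = "dense_5/Relu:0"

op_name, output_index = name.rsplit(":", 1)  # 'dense_5/Relu' and '0'
scope, op = op_name.rsplit("/", 1)           # 'dense_5' and 'Relu'

print(scope)         # the layer that produced the tensor -> 'dense_5'
print(op)            # the op inside that layer (its ReLU activation) -> 'Relu'
print(output_index)  # which output of that op -> '0'
```

So dense_5/Relu:0 reads as: output 0 of the Relu op belonging to the layer named dense_5.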
Like in your example:
inputs = keras.Input(shape=(784,))           # input layer
dense = layers.Dense(64, activation="relu")  # describe a dense layer
x = dense(inputs)                            # x is the result of calling the dense layer on inputs
x = layers.Dense(64, activation="relu")(x)   # "update" x with the next layer, which takes the previous dense output as its input
outputs = layers.Dense(10)(x)                # set your output
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")  # incorporate all layers in a model
So basically Keras already knows what is inside the model.
To answer your first question about how the model knows about the layers that were called on the intermediate tensors, I think it's helpful to take a look at help(keras.Input):
Input() is used to instantiate a Keras tensor. A Keras tensor is a symbolic tensor-like object, which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.
So basically, Keras is using Python to do some magic under the hood.
Each time you call a Keras layer on a Keras tensor, it outputs a new Keras tensor that has been mathematically transformed according to the layer's functionality, but it also attaches information about that layer to the new tensor (as Python attributes on the object).
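This bookkeeping can be sketched in plain Python. The following is NOT Keras's actual implementation, just a minimal mock showing how a Model can recover every intermediate layer when given only the input and output tensors: each symbolic tensor remembers the layer that created it and the tensors that layer was called on, so the Model can walk backwards from the outputs.

```python
class SymbolicTensor:
    """Stands in for a Keras tensor: a placeholder that remembers
    which layer produced it and which tensors fed that layer."""
    def __init__(self, creator_layer=None, parents=()):
        self.creator_layer = creator_layer  # the layer that produced this tensor
        self.parents = parents              # the tensors that layer was called on

class Layer:
    """A stand-in layer; a real layer would also define the math."""
    def __init__(self, name):
        self.name = name

    def __call__(self, inputs):
        # Calling a layer on a symbolic tensor does no math here; it just
        # returns a new symbolic tensor that records the connectivity.
        return SymbolicTensor(creator_layer=self, parents=(inputs,))

def collect_layers(output):
    """Walk backwards from the output tensor to recover the chain of
    layers, the way Model(inputs=..., outputs=...) traces its graph."""
    layers = []
    tensor = output
    while tensor.creator_layer is not None:
        layers.append(tensor.creator_layer)
        tensor = tensor.parents[0]  # single-input chain, for simplicity
    return list(reversed(layers))

# Mirror the functional-API example above (names are illustrative):
inputs = SymbolicTensor()            # like keras.Input(shape=(784,))
x = Layer("dense_1")(inputs)
x = Layer("dense_2")(x)
outputs = Layer("dense_3")(x)

print([layer.name for layer in collect_layers(outputs)])
# -> ['dense_1', 'dense_2', 'dense_3']
```

The real mechanism is more general (it handles branching and multiple inputs/outputs by storing nodes of a graph rather than a single parent), but the idea is the same: the connectivity lives on the tensors themselves, so inputs and outputs are enough to reconstruct everything in between.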