
How does the Keras functional API model know about the layers when we are only passing the inputs and outputs of the network?

I am new to Keras and am looking into the functional API model structure.

1. As mentioned here in the docs, keras.Model takes only the input and output arguments, and the layers are defined before the Model. Can someone please tell me how keras.Model knows about the layer structure and the multiple layers between input and output, when all we are passing is just the input and output tensors?

2. Also, what is the output of layer.output or layer.input? Is the output not a simple tensor? I see the output below when I print layer.output, using the syntax from this example, for some other layers. It looks like layer.output and layer.input contain the layer info as well, like dense_5/Relu:0. Can someone please clarify what the components of the output below stand for?

print([layer.output for layer in model.layers])

output:

 [<tf.Tensor 'input_6:0' shape=(None, 3) dtype=float32>,
  <tf.Tensor 'dense_5/Relu:0' shape=(None, 4) dtype=float32>,
  <tf.Tensor 'dense_6/Softmax:0' shape=(None, 5) dtype=float32>]
  1. You should describe the model first, before compiling/fitting/evaluating it. You build a sequence: the first layer is the input, then a number of intermediate layers, and finally the output layer.

Like in your example:

inputs = keras.Input(shape=(784,))          # input layer
dense = layers.Dense(64, activation="relu") # describe a dense layer
x = dense(inputs)                           # set x as a result of dense layer with inputs
x = layers.Dense(64, activation="relu")(x)  # "update" x with the next layer, which takes the previous dense layer's output as input
outputs = layers.Dense(10)(x)               # set your output
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model") # incorporate all layers into a model

So basically, Keras already knows what is inside the model.
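One way to see this (a minimal sketch, assuming TensorFlow 2.x / tf.keras) is to build the model above and inspect model.layers: Keras has recovered every layer between inputs and outputs, including the implicit InputLayer, purely from the two tensors passed to keras.Model:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")

# keras.Model walked the graph backwards from outputs to inputs and
# recovered all four layers: the implicit InputLayer plus three Dense.
print([layer.name for layer in model.layers])
```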

  2. You were looking at obtaining the outputs of intermediate layers. This is a common technique in transfer learning (where you use a pre-trained model as a feature extractor) and in some architectures, such as skip connections. In those cases you will get multiple outputs. A network can also have multiple outputs at the end of the model, depending on its purpose. In your example they are just for demonstration purposes and do not have any particular meaning. Take a look at a more meaningful feature extraction example.
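That feature-extraction use case can be sketched as follows (assuming tf.keras; the layer index 1 is an arbitrary choice for illustration): wrapping any intermediate tensor in a new keras.Model lets you read out the activations at that point, sharing the original model's weights.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
outputs = layers.Dense(10)(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# A second model that stops at the first Dense layer acts as a
# feature extractor: same weights, intermediate activations as output.
feature_extractor = keras.Model(inputs=model.input,
                                outputs=model.layers[1].output)

features = feature_extractor(tf.ones((2, 784)))
print(features.shape)  # one 64-dimensional feature vector per sample
```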

To answer your first question about how the model knows about the layers that were called on the intermediate tensors, it is helpful to take a look at help(keras.Input):

Input() is used to instantiate a Keras tensor.

A Keras tensor is a symbolic tensor-like object, which we augment with certain attributes that allow us to build a Keras model just by knowing the inputs and outputs of the model.

So basically, Keras is using Python to do some magic under the hood.

Each time you call a Keras layer on a Keras tensor, it outputs a new Keras tensor that has been mathematically transformed according to the layer's functionality, but the layer also attaches information about itself to this tensor (as Python attributes of the object). When you later pass the inputs and outputs to keras.Model, Keras follows these attributes backwards from the outputs to recover every layer in between.
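A minimal sketch of that bookkeeping (assuming tf.keras; _keras_history is a private attribute whose exact fields vary across Keras versions, so treat this as illustration only, not a supported API):

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)

# Calling the layer returned a Keras tensor that privately records
# which layer produced it; keras.Model follows these records
# backwards from the outputs to reconstruct the whole graph.
print(hasattr(x, "_keras_history"))
```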

