I have a Keras Model (not a Layer) named model A. Model A contains Keras layers of the following types: Dense, Conv2D, AveragePooling2D, BatchNormalization, Add, GlobalAveragePooling2D.
The output of model.summary() is as follows:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
layer_type_1 (Layer_type_1) multiple 3776
_________________________________________________________________
... ... ...
_________________________________________________________________
dense (Dense) multiple 1024
=================================================================
Total params: 4,787,808
Trainable params: 4,782,496
Non-trainable params: 5,312
_________________________________________________________________
I have another keras model (model B) which contains model A.
summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 35
_________________________________________________________________
model_A (ModelA) multiple 4787232
=================================================================
Total params: 4,787,267
Trainable params: 4,781,955
Non-trainable params: 5,312
_________________________________________________________________
I wonder how the total number of parameters in model B can be less than in model A. Since model B contains model A, it should be greater.
I found that this is because the input shape (to model A) changed from (W, D, 6) to (W, D, 5), due to the first Conv2D layer in model B.
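The arithmetic checks out with the standard Conv2D parameter formula, `(kernel_h * kernel_w * in_channels + 1) * filters`. A minimal sketch (the 1×1 kernel for model B's conv and the 3×3/64-filter shape for model A's first conv are assumptions; only the 6→5 channel change and the summary totals come from the post):

```python
def conv2d_params(kernel_hw, in_channels, filters, use_bias=True):
    """Parameter count of a Conv2D layer: one kernel_h * kernel_w * in_channels
    weight tensor per filter, plus an optional bias per filter."""
    kh, kw = kernel_hw
    return (kh * kw * in_channels + (1 if use_bias else 0)) * filters

# Model B's first Conv2D reports 35 params, consistent with a 1x1 kernel
# mapping the 6-channel input to 5 filters (assumed shapes):
print(conv2d_params((1, 1), in_channels=6, filters=5))  # -> 35

# Inside model B, model A's first conv now sees 5 channels instead of 6,
# so it loses kernel_h * kernel_w * filters weights. An assumed 3x3 kernel
# with 64 filters would account for the observed drop of
# 4,787,808 - 4,787,232 = 576 parameters:
print(conv2d_params((3, 3), 6, 64) - conv2d_params((3, 3), 5, 64))  # -> 576
```

In other words, model A's parameter count is not fixed: it depends on the input shape it is built against, so embedding it behind a channel-reducing conv shrinks its first layer.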