
How many hidden layers does a CNN have?

I am using a CNN for a classification problem. The code for the model architecture is as follows:

model.add(Conv1D(256, 5, padding='same', input_shape=(40, 1)))
model.add(Activation('relu'))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(MaxPooling1D(pool_size=8))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Conv1D(128, 5, padding='same'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(8))
model.add(Activation('softmax'))
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)


How many hidden layers does this model have? And which layers are the input and output layers?

The first layer is the input layer and the last layer is the output layer. Everything in between is a hidden layer.

model.add(Conv1D(256, 5, padding='same', input_shape=(40, 1)))  # input layer
model.add(Activation('relu'))                                   # hidden layer
model.add(Conv1D(128, 5, padding='same'))                       # hidden layer
model.add(Activation('relu'))                                   # hidden layer
model.add(Dropout(0.1))                                         # hidden layer
model.add(MaxPooling1D(pool_size=8))                            # hidden layer
model.add(Conv1D(128, 5, padding='same'))                       # hidden layer
model.add(Activation('relu'))                                   # hidden layer
model.add(Conv1D(128, 5, padding='same'))                       # hidden layer
model.add(Activation('relu'))                                   # hidden layer
model.add(Flatten())                                            # hidden layer
model.add(Dense(8))                                             # hidden layer
model.add(Activation('softmax'))                                # output layer
opt = keras.optimizers.rmsprop(lr=0.00001, decay=1e-6)
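Under that convention (first `model.add` call is the input layer, last is the output layer), the number of hidden layers can be counted directly. A minimal sketch, using plain strings to stand in for the 13 layers added above:

```python
# The 13 model.add() calls, in order (strings stand in for the real layers)
layers = [
    "Conv1D(256)", "Activation(relu)", "Conv1D(128)", "Activation(relu)",
    "Dropout(0.1)", "MaxPooling1D(8)", "Conv1D(128)", "Activation(relu)",
    "Conv1D(128)", "Activation(relu)", "Flatten", "Dense(8)",
    "Activation(softmax)",
]
hidden = layers[1:-1]  # everything between the first and the last call
print(len(hidden))     # -> 11
```

So by this counting, the model has 11 hidden layers.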

The input layer is the first layer (the one where input_shape is specified). Each call to model.add creates a new layer. You can print your model's layer structure with model.summary(), as shown below.

Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_20 (Conv1D)           (None, 40, 256)           1536      
_________________________________________________________________
activation_23 (Activation)   (None, 40, 256)           0         
_________________________________________________________________
conv1d_21 (Conv1D)           (None, 40, 128)           163968    
_________________________________________________________________
activation_24 (Activation)   (None, 40, 128)           0         
_________________________________________________________________
dropout_6 (Dropout)          (None, 40, 128)           0         
_________________________________________________________________
max_pooling1d_4 (MaxPooling1 (None, 5, 128)            0         
_________________________________________________________________
conv1d_22 (Conv1D)           (None, 5, 128)            82048     
_________________________________________________________________
activation_25 (Activation)   (None, 5, 128)            0         
_________________________________________________________________
conv1d_23 (Conv1D)           (None, 5, 128)            82048     
_________________________________________________________________
activation_26 (Activation)   (None, 5, 128)            0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 640)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 5128      
_________________________________________________________________
activation_27 (Activation)   (None, 8)                 0         
=================================================================
Total params: 334,728
Trainable params: 334,728
Non-trainable params: 0  
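The output shapes in this summary can be verified by hand: Conv1D with padding='same' preserves the time dimension (40), MaxPooling1D with pool_size=8 (and the default stride equal to the pool size) reduces it to floor(40 / 8) = 5, and Flatten turns (5, 128) into 5 × 128 = 640 features. A quick arithmetic check:

```python
steps = 40                 # input_shape=(40, 1): 40 time steps
# Conv1D with padding='same' leaves the time dimension unchanged
pooled = steps // 8        # MaxPooling1D(pool_size=8) -> 40 // 8 = 5
flattened = pooled * 128   # Flatten over shape (5, 128) -> 640
print(pooled, flattened)   # -> 5 640
```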

This can be a bit confusing, because your actual output layer is the layer with 8 nodes and the softmax activation function. I prefer to build models as follows:

import tensorflow as tf
from tensorflow.keras.layers import Conv1D, Dense, Dropout, Flatten, MaxPooling1D

inputs = tf.keras.Input(shape=(40, 1))
x = Conv1D(256, 5, padding='same', activation='relu')(inputs)
x = Dropout(0.1)(x)
x = MaxPooling1D(pool_size=8)(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Conv1D(128, 5, padding='same', activation='relu')(x)
x = Flatten()(x)
outputs = Dense(8, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

It is essentially the same model (the dropout and pooling sit at a slightly different point in the stack, but the parameter count is identical), and I think it is much clearer which layer is the actual output. See the result of model.summary() below.

Model: "model_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_9 (InputLayer)         [(None, 40, 1)]           0         
_________________________________________________________________
conv1d_44 (Conv1D)           (None, 40, 256)           1536      
_________________________________________________________________
dropout_15 (Dropout)         (None, 40, 256)           0         
_________________________________________________________________
max_pooling1d_12 (MaxPooling (None, 5, 256)            0         
_________________________________________________________________
conv1d_45 (Conv1D)           (None, 5, 128)            163968    
_________________________________________________________________
conv1d_46 (Conv1D)           (None, 5, 128)            82048     
_________________________________________________________________
conv1d_47 (Conv1D)           (None, 5, 128)            82048     
_________________________________________________________________
flatten_11 (Flatten)         (None, 640)               0         
_________________________________________________________________
dense_12 (Dense)             (None, 8)                 5128      
=================================================================
Total params: 334,728
Trainable params: 334,728
Non-trainable params: 0
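The Param # column (and the 334,728 total, identical in both summaries) can also be checked by hand: a Conv1D layer has kernel_size × in_channels × filters weights plus one bias per filter, and a Dense layer has in_features × units weights plus one bias per unit. A sketch of the arithmetic:

```python
def conv1d_params(kernel_size, in_channels, filters):
    # weight tensor of shape (kernel_size, in_channels, filters) plus a bias per filter
    return kernel_size * in_channels * filters + filters

def dense_params(in_features, units):
    # weight matrix of shape (in_features, units) plus a bias per unit
    return in_features * units + units

total = (conv1d_params(5, 1, 256)      # 1,536
         + conv1d_params(5, 256, 128)  # 163,968
         + conv1d_params(5, 128, 128)  # 82,048
         + conv1d_params(5, 128, 128)  # 82,048
         + dense_params(640, 8))       # 5,128
print(total)  # -> 334728
```

Activation, Dropout, MaxPooling1D, Flatten, and InputLayer contribute no parameters, which is why they all show 0 in the Param # column.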
