How to avoid ZeroPadding in a convolutional neural network
I have a convolutional neural network that performs better on my dataset than other networks. The problem is the ZeroPadding2D layer I have to insert to account for the down-/up-sampling; it creates artifacts (runs of zeros) in the output. So, how can I avoid ZeroPadding2D without changing the network structure (number of layers)? I need to keep the structure as-is, but I can change 1) the filters, 2) the kernels, 3) the first dimension of the data (e.g. 96), or 4) any other option. Below is my CNN:
from tensorflow.keras.layers import (Input, GaussianNoise, Conv2D,
                                     AveragePooling2D, Dropout,
                                     UpSampling2D, ZeroPadding2D)
from tensorflow.keras.models import Model
input_img = Input(shape=(96, 44, 1), name='full')
x = GaussianNoise(.1)(input_img)
x = Conv2D(64, (5, 5), activation='relu', padding='same')(x)
x = AveragePooling2D((2, 2), padding='same')(x)
x = Dropout(0.1)(x)
x = Conv2D(128, (5, 5), activation='relu', padding='same')(x)
x = AveragePooling2D((2, 2), padding='same')(x)
x = Dropout(0.2)(x)
x = Conv2D(512, (5, 5), activation='relu', padding='same')(x)
encoded = AveragePooling2D((2, 2), padding='same')(x)
x = Dropout(0.2)(x)
# at this point the representation is (12, 6, 512)
x = Conv2D(512, (5, 5), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Dropout(0.2)(x)
x = Conv2D(128, (5, 5), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Dropout(0.12)(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
x = Dropout(0.12)(x)
x = ZeroPadding2D(((4, 0), (0, 0)))(x)
decoded = Conv2D(1, (5, 5), activation='tanh', padding='same',
name='out')(x)
autoencoder = Model(input_img, decoded)
I think if you replace your UpSampling + ZeroPadding part with Conv2DTranspose, it may help solve your problem.
Take a look here:
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2DTranspose
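To make the suggestion concrete, here is one possible sketch of the same network with the decoder's UpSampling2D + ZeroPadding2D replaced by strided Conv2DTranspose layers, which learn their upsampling weights instead of inserting literal zeros. The encoder is unchanged. Since the width 44 is not divisible by 8, a final Cropping2D trims the excess columns; that cropping step is my addition, not part of the original answer, but it discards pixels rather than injecting zeros, so it should not create zero artifacts.

```python
from tensorflow.keras.layers import (Input, GaussianNoise, Conv2D,
                                     AveragePooling2D, Dropout,
                                     Conv2DTranspose, Cropping2D)
from tensorflow.keras.models import Model

input_img = Input(shape=(96, 44, 1), name='full')
x = GaussianNoise(.1)(input_img)
x = Conv2D(64, (5, 5), activation='relu', padding='same')(x)
x = AveragePooling2D((2, 2), padding='same')(x)
x = Dropout(0.1)(x)
x = Conv2D(128, (5, 5), activation='relu', padding='same')(x)
x = AveragePooling2D((2, 2), padding='same')(x)
x = Dropout(0.2)(x)
x = Conv2D(512, (5, 5), activation='relu', padding='same')(x)
encoded = AveragePooling2D((2, 2), padding='same')(x)  # (12, 6, 512)

# Decoder: each Conv2DTranspose with strides=2 doubles the spatial
# dimensions with learned weights, so no UpSampling2D + ZeroPadding2D
# pair is needed.
x = Conv2DTranspose(512, (5, 5), strides=2, activation='relu',
                    padding='same')(encoded)           # (24, 12, 512)
x = Dropout(0.2)(x)
x = Conv2DTranspose(128, (5, 5), strides=2, activation='relu',
                    padding='same')(x)                 # (48, 24, 128)
x = Dropout(0.12)(x)
x = Conv2DTranspose(64, (5, 5), strides=2, activation='relu',
                    padding='same')(x)                 # (96, 48, 64)
x = Dropout(0.12)(x)
# 44 is not divisible by 8, so trim the width 48 -> 44; cropping
# removes columns instead of inserting zero rows/columns.
x = Cropping2D(cropping=((0, 0), (2, 2)))(x)           # (96, 44, 64)
decoded = Conv2D(1, (5, 5), activation='tanh', padding='same',
                 name='out')(x)

autoencoder = Model(input_img, decoded)
```

The output shape matches the input (96, 44, 1), so the reconstruction target lines up without any zero padding in the decoder path.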