
Convolutional Neural Network Layers

I want to detect certain patterns using a CNN, but the last two layers of my CNN raise an error when I try to run them. I have commented out those layers in the code below. (Every Conv2D layer is repeated before the MaxPooling layer.)

    import tensorflow as tf

    inputs = tf.keras.layers.Input(shape=(256, 256, 27), name='input_layer')
    lambda_layer = tf.keras.layers.Lambda(lambda value: value / 255)(inputs)
    xp = tf.keras.layers.Conv2D(64, 3, padding='same', activation=tf.nn.relu)(lambda_layer)
    xp = tf.keras.layers.MaxPooling2D()(xp)  # by default uses a (2, 2) pool size
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(94, 3, padding='same', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(128, 3, padding='same', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(156, 3, padding='valid', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(256, 3, padding='same', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(394, 3, padding='same', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Conv2D(458, 3, padding='same', activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.MaxPooling2D()(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    # The following commented-out layers are the ones that raise the error:
    # xp = tf.keras.layers.Conv2D(516, 3, padding='same', activation=tf.nn.relu)(xp)
    # xp = tf.keras.layers.Conv2D(516, 3, padding='same', activation=tf.nn.relu)(xp)
    # xp = tf.keras.layers.MaxPooling2D()(xp)
    # xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Dropout(0.25)(xp)
    xp = tf.keras.layers.Flatten()(xp)
    xp = tf.keras.layers.Dense(1024, activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.BatchNormalization()(xp)
    xp = tf.keras.layers.Dropout(0.25)(xp)
    xp = tf.keras.layers.Dense(512, activation=tf.nn.relu)(xp)
    xp = tf.keras.layers.Dropout(0.25)(xp)

I get the following error:


Call arguments received:
  • inputs=tf.Tensor(shape=(None, 2, 2, 394), dtype=float32)

This suggests I may have convolved the image down too much, which is why it errors out. What can I do to solve this?

Yes, the problem is that your input shape is not large enough for this much pooling. The convolutional layers are fine, since you are using padding="same", which means each layer's output has the same spatial shape as its input. However, every time you apply MaxPooling2D with a (2, 2) pool size, the height and width of the input get divided by 2. That is why, after a few blocks, there is nothing left to pool.
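To make the shrinkage concrete, here is a rough trace of the height/width through the blocks in the question (my own sketch, not part of the original answer; it assumes one Conv2D per block, whereas the post notes each Conv2D is actually repeated): padding='same' convolutions leave the size unchanged, the single padding='valid' 3x3 convolution trims 2 pixels, and each MaxPooling2D floor-divides by 2.

    # Rough trace of the spatial size through the blocks above (assumption:
    # one Conv2D per block; the question notes each Conv2D is actually repeated).
    size = 256
    paddings = ['same', 'same', 'same', 'valid', 'same', 'same', 'same']
    for i, padding in enumerate(paddings, start=1):
        if padding == 'valid':
            size -= 2   # a 3x3 conv with padding='valid' loses 2 pixels per dimension
        size //= 2      # MaxPooling2D(pool_size=(2, 2)) halves height and width
        print(f'after block {i}: {size} x {size}')
    # In this simplified trace the feature map is only 1-2 pixels wide after
    # block 7, so an eighth conv/pool block has essentially nothing to work with.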

After a certain point you simply run out of spatial data to feed into the next layers, because pooling has compressed the feature map too much.


Possible solutions:

  1. Change your input shape.

  2. Decrease the number of MaxPooling2D layers.

  3. Use the strides and padding parameters to get more control over the output shape after pooling, as shown in the sketch below.
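For option 3, a minimal sketch (my own illustration, not from the original answer) of how the strides and padding arguments of MaxPooling2D change the output shape:

    import tensorflow as tf

    x = tf.random.normal((1, 8, 8, 16))  # a dummy feature map

    # Default pooling: pool_size=(2, 2), strides equal to pool_size -> halves H and W.
    print(tf.keras.layers.MaxPooling2D()(x).shape)  # (1, 4, 4, 16)

    # With strides=1 and padding='same' the spatial size is preserved, so
    # stacking several of these no longer shrinks the feature map toward 1x1.
    print(tf.keras.layers.MaxPooling2D(pool_size=2, strides=1, padding='same')(x).shape)  # (1, 8, 8, 16)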
