
Keras - Negative dimension size caused by subtracting 5 from 4 for 'conv2d_5/convolution' (op: 'Conv2D') with input shapes: [?,4,80,64], [5,5,64,64]

I have a model similar to the one below, but after modifying the architecture I keep getting the following error:

Negative dimension size caused by subtracting 5 from 4 for 'conv2d_5/convolution' (op: 'Conv2D') with input shapes: [?,4,80,64], [5,5,64,64].

I am still new to machine learning, so I couldn't make much sense of the parameters. Any help?

# Imports needed for the model as posted. Note: Merge is the old Keras 1.x
# layer; in Keras 2 you would use keras.layers.concatenate with the
# functional API instead.
from keras.models import Sequential
from keras.layers import (Conv2D, Cropping2D, Dense, Dropout,
                          Flatten, Lambda, Merge)

model_img = Sequential(name="img")
# Cropping
model_img.add(Cropping2D(cropping=((124,126),(0,0)), input_shape=(376,1344,3)))
# Normalization
model_img.add(Lambda(lambda x: (2*x / 255.0) - 1.0))
model_img.add(Conv2D(16, (7, 7), activation="relu", strides=(2, 2)))
model_img.add(Conv2D(32, (7, 7), activation="relu", strides=(2, 2)))
model_img.add(Conv2D(32, (5, 5), activation="relu", strides=(2, 2)))
model_img.add(Conv2D(64, (5, 5), activation="relu", strides=(2, 2)))
model_img.add(Conv2D(64, (5, 5), activation="relu", strides=(2, 2)))  # <- error raised here
model_img.add(Conv2D(128, (3, 3), activation="relu"))
model_img.add(Conv2D(128, (3, 3), activation="relu"))
model_img.add(Flatten())
model_img.add(Dense(100))
model_img.add(Dense(50))
model_img.add(Dense(10))

model_lidar = Sequential(name="lidar")
model_lidar.add(Dense(32, input_shape=(360,)))
model_lidar.add(Dropout(0.1))
model_lidar.add(Dense(10))

model_imu = Sequential(name='imu')
model_imu.add(Dense(32, input_shape=(10,)))
model_imu.add(Dropout(0.1))
model_imu.add(Dense(10))

merged = Merge([model_img, model_lidar, model_imu], mode="concat")
model = Sequential()
model.add(merged)
model.add(Dense(16))
model.add(Dropout(0.2))
model.add(Dense(1))

Answer: I couldn't complete the training because of issues with the sensor, but the model works fine now thanks to the two answers below.

Here are the output shapes of each layer in your model:

(?, 376, 1344, 3) - Input
(?, 126, 1344, 3) - Cropping2D
(?, 126, 1344, 3) - Lambda
(?, 60, 669, 16)  - Conv2D 1
(?, 27, 332, 32)  - Conv2D 2
(?, 12, 164, 32)  - Conv2D 3
(?, 4, 80, 64)    - Conv2D 4

By the time the input has passed through the fourth Conv2D layer, the spatial shape of the output is already (4, 80). You cannot apply another Conv2D layer with a (5, 5) kernel, because the first spatial dimension of the output (4) is smaller than the kernel size (5).

Your stack of convolutional layers reduces the image size quite fast. Once the size along one dimension is only 4, you can no longer apply a 5x5 convolution.

Without padding, the output size of a convolutional layer along each spatial dimension is floor((input_size - kernel_size) / stride) + 1. Subtracting 7 (or 5) a few times is not that significant on its own, but halving the size with stride 2 at every layer brings the dimension down to 4 quite fast.
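This arithmetic can be checked with a few lines of plain Python (a sketch with a hypothetical helper, no Keras required), tracing the height dimension through the layers above:

```python
import math

def conv_output_size(n, kernel, stride):
    """Output size of an unpadded ('valid') convolution along one
    dimension: floor((n - kernel) / stride) + 1."""
    return math.floor((n - kernel) / stride) + 1

h = 126                            # height after Cropping2D
h = conv_output_size(h, 7, 2)      # Conv2D 1 -> 60
h = conv_output_size(h, 7, 2)      # Conv2D 2 -> 27
h = conv_output_size(h, 5, 2)      # Conv2D 3 -> 12
h = conv_output_size(h, 5, 2)      # Conv2D 4 -> 4
print(h)  # 4, already smaller than the next layer's 5x5 kernel
```

The printed value matches the (4, 80) shape reported in the error message.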

The solution is either to drop the strides (after the first few layers) or to add padding. Padding prevents the loss of size caused by the kernel, but not the loss caused by the strides.
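To see why padding counters the kernel but not the strides: with padding="same", the output size along a dimension is ceil(input_size / stride), independent of the kernel size. A quick sketch (the helper name is made up for illustration):

```python
import math

def conv_same_output_size(n, stride):
    """Output size of a convolution with padding='same':
    ceil(n / stride) -- the kernel size drops out entirely."""
    return math.ceil(n / stride)

h = 4  # height entering the failing 5x5 layer
print(conv_same_output_size(h, 1))  # 4: size preserved, even under a 5x5 kernel
print(conv_same_output_size(h, 2))  # 2: stride 2 still halves it, padding or not
```

So adding padding="same" to the later Conv2D layers (and removing their strides) would let the 5x5 kernels be applied to a height-4 feature map.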


