
'Input incompatible error' while using gray scale images in VGG16 model

I am working on an assignment to detect the facial emotion in a set of gray scale images, and I am trying to use the VGG16 model for this. I converted the input array of gray scale images to RGB, but when I pass the RGB image array to my model, I get an incompatibility error.

input_array (gray scale) holds images of shape 48×48×1.

Converting gray scale to RGB:


import numpy as np

input_RGB = np.ndarray(shape=(input_array.shape[0], input_array.shape[1], input_array.shape[2], 3), dtype=np.uint8)

input_RGB[:, :, :, 0] = input_array[:, :, :, 0]
input_RGB[:, :, :, 1] = input_array[:, :, :, 0]
input_RGB[:, :, :, 2] = input_array[:, :, :, 0]
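The per-channel copy above can also be done in a single call with np.repeat. A minimal sketch, assuming a hypothetical batch of 10 images with the 48×48×1 shape from the question:

```python
import numpy as np

# Hypothetical batch of 10 gray-scale 48x48 images, shape (N, 48, 48, 1)
input_array = np.zeros((10, 48, 48, 1), dtype=np.uint8)

# Repeat the single gray channel three times along the last axis
input_RGB = np.repeat(input_array, 3, axis=-1)

print(input_RGB.shape)  # (10, 48, 48, 3)
```

This avoids allocating an uninitialized array and assigning each channel by hand.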

Model definition:

from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dropout, Dense
base_model = VGG16(weights='imagenet', include_top=False, input_shape = (48, 48, 3))
model2 = Sequential([base_model])
model2.add(Flatten())
model2.add(Dropout(0.25))
model2.add(Dense(64, activation='relu'))
model2.add(Dropout(0.25))
model2.add(Dense(7, activation='softmax'))
model2.compile(optimizer = 'adam', loss='categorical_crossentropy', metrics=['accuracy'])

Model fitting:

history = model.fit(input_RGB, output_array, batch_size=64, epochs=20,
                    validation_split=0.25, callbacks=[VGG_saved])

Error message:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
    outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:531 train_step  **
    y_pred = self(x, training=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:886 __call__
    self.name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:216 assert_input_compatibility
    ' but received input with shape ' + str(shape))

ValueError: Input 0 of layer sequential_5 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape [None, 48, 48, 3]

Another thing I observed: when I tried the gray scale image arrays directly as input to model.fit, it did not throw the error, but the validation accuracy was quite low.

Please help.

Try changing from the Sequential API to the functional API:

from tensorflow import keras

inputs = keras.Input(shape=(48, 48, 3))
x = base_model(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.25)(x)
x = keras.layers.Dense(64)(x)
x = keras.layers.Dropout(0.25)(x)
outputs = keras.layers.Dense(7)(x)
model = keras.Model(inputs, outputs)

It might be easier to track down what's going wrong this way.
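One detail worth noting about the functional sketch above: the final Dense(7) has no softmax, so its outputs are logits, and the compile step should tell the loss that. A hedged sketch of the pairing (a small stand-in model is used here instead of the VGG16 base to keep it self-contained; the shapes are illustrative):

```python
from tensorflow import keras

# Stand-in for the functional model above; only the loss/activation
# pairing matters here.
inputs = keras.Input(shape=(48, 48, 3))
x = keras.layers.GlobalAveragePooling2D()(inputs)
outputs = keras.layers.Dense(7)(x)  # no softmax -> outputs are logits
model = keras.Model(inputs, outputs)

# Because Dense(7) has no activation, pass from_logits=True so the loss
# applies the softmax internally. (Alternatively, keep
# activation='softmax' and the plain 'categorical_crossentropy' string.)
model.compile(
    optimizer='adam',
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
print(model.output_shape)  # (None, 7)
```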

Also, instead of Flatten + AveragePooling, you should just use GlobalAveragePooling, as it makes variable-size input possible.
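The reason GlobalAveragePooling tolerates variable input sizes is that it simply averages over the spatial axes, so the output width depends only on the channel count. A numpy sketch of the idea (shapes are illustrative; 512 is VGG16's final channel count):

```python
import numpy as np

def global_average_pool(x):
    """Average an (N, H, W, C) tensor over its spatial axes H and W."""
    return x.mean(axis=(1, 2))

# Two different spatial sizes, same channel count...
a = np.ones((2, 48, 48, 512))
b = np.ones((2, 96, 96, 512))

# ...both pool to (N, C), so the Dense layers that follow never see H or W
print(global_average_pool(a).shape)  # (2, 512)
print(global_average_pool(b).shape)  # (2, 512)
```

Flatten, by contrast, produces H·W·C features, so any Dense layer after it is locked to one input size.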

More examples: https://keras.io/guides/transfer_learning/
