Unable to convert to RGB from Grayscale for transfer learning with FER2013 dataset

I have a similar problem to the one in this post: How to convert RGB images to grayscale, expand dimensions of that grayscale image to use in InceptionV3?

Essentially I am trying to use transfer learning (with Inception) to train on the FER2013 dataset and build a model for predicting emotions in pictures. Unfortunately the images are grayscale, while the Inception model expects RGB inputs.

I tried using the proposed solution, however it returns an error, and I do not have enough reputation to comment on the original solution.

This was the original solution:

def to_grayscale_then_rgb(image):
    # Collapse the input to a single grayscale channel, then duplicate
    # that channel back out to 3 so the result matches Inception's RGB input
    image = tf.image.rgb_to_grayscale(image)
    image = tf.image.grayscale_to_rgb(image)
    return image
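For what it's worth, running the function on a dummy tensor (shapes made up here, just a sanity check; this assumes flow_from_directory's default color_mode='rgb') does round-trip the channel count:

import tensorflow as tf

# Dummy 48x48 image with 3 channels, as flow_from_directory would load it
dummy = tf.zeros((48, 48, 3))
print(to_grayscale_then_rgb(dummy).shape)  # (48, 48, 3) - still 3 channels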

I inserted that into my data generator. I also tried just using tf.image.grayscale_to_rgb directly, but that returned an error as well.

train_rgb_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
                                                                    preprocessing_function=to_grayscale_then_rgb,
                                                                    #preprocessing_function=tf.image.grayscale_to_rgb,
                                                                    vertical_flip=True)

train_dataflow_rgb = train_rgb_datagen.flow_from_directory(train_root,
                                                           target_size=(48, 48),
                                                           seed=seed_num)

test_rgb_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
                                                                   preprocessing_function=to_grayscale_then_rgb,
                                                                   #preprocessing_function=tf.image.grayscale_to_rgb,
                                                                   vertical_flip=True)

test_dataflow_rgb = test_rgb_datagen.flow_from_directory(test_root,
                                                         target_size=(48, 48),
                                                         shuffle=False,
                                                         seed=seed_num)

When I try to train the model, I get the following error:

epochs = 50
steps_per_epoch = 1000

tl_Incept_history = tl_Incept_model.fit(train_dataflow_rgb,
                                        epochs=epochs,
                                        validation_data=test_dataflow_rgb,
                                        #steps_per_epoch=steps_per_epoch,
                                        callbacks=[early_callback, myCallback])

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10932/801602138.py in <module>
      2 steps_per_epoch = 1000
      3 
----> 4 tl_Incept_history = tl_Incept_model.fit(train_dataflow_rgb, 
      5                                           epochs = epochs,
      6                                           validation_data=(test_dataflow_rgb),

~\Venv\testpy39\lib\site-packages\keras\utils\traceback_utils.py in error_handler(*args, **kwargs)
     65     except Exception as e:  # pylint: disable=broad-except
     66       filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67       raise e.with_traceback(filtered_tb) from None
     68     finally:
     69       del filtered_tb

~\Venv\testpy39\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     56   try:
     57     ctx.ensure_initialized()
---> 58     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     59                                         inputs, attrs, num_outputs)
     60   except core._NotOkStatusException as e:

InvalidArgumentError:  input depth must be evenly divisible by filter depth: 1 vs 3

The preprocessing code is fine; you just seem to have a dimension mismatch. The model wants (image_size[0], image_size[1], num_channels), where num_channels = 3 for RGB (one channel each for R, G, B) and num_channels = 1 for grayscale.
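Concretely, the two layouts look like this (dummy arrays just to illustrate the shapes):

import numpy as np

rgb_image  = np.zeros((48, 48, 3))  # num_channels = 3: one channel each for R, G, B
gray_image = np.zeros((48, 48, 1))  # num_channels = 1: a single intensity channel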

You have two instances of target_size = (48,48) - would it work if you changed them to target_size = (48,48,3)?

If not, to debug further, try your to_grayscale_then_rgb(image) function separately on an image and see what dimensions the returned image comes out as. If it comes out 2D (e.g. (image_size[0], image_size[1])), you could explore reshaping the image within the function like so: XXX = tf.convert_to_tensor(XXX[:,:,:3]), as seen in https://stackoverflow.com/a/60212961/7420967, although grayscale_to_rgb should output a final dimension of 3... A minimal check along those lines is sketched below.
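Something like this (the file path is made up; point it at any image from your FER2013 train folder):

import tensorflow as tf

# Load one sample image the same way flow_from_directory would
# (load_img defaults to color_mode='rgb', so you get 3 channels)
img = tf.keras.preprocessing.image.load_img('train/angry/0001.png', target_size=(48, 48))
arr = tf.keras.preprocessing.image.img_to_array(img)
print(arr.shape)                 # expect (48, 48, 3)

out = to_grayscale_then_rgb(arr)
print(out.shape)                 # expect a final dimension of 3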
