
InvalidArgumentError: input depth must be evenly divisible by filter depth: 4 vs 3

I'm a beginner. I tried image classification with TensorFlow and got the following error. I found a similar issue on the web, but I couldn't understand it. What does the error mean, and how should I fix it? Please give me some advice. I use 100 PNG files (15 px × 15 px) like the sample image. TensorFlow 2.0.0 / Python 3.8.1 / Jupyter Notebook.

Sample image

    num_epochs = 30
    steps_per_epoch = round(num_train)//BATCH_SIZE
    val_steps = 20
    history = model.fit(train_data.repeat(),
                epochs=num_epochs,
                steps_per_epoch = steps_per_epoch,
                validation_data=val_data.repeat(), 
                validation_steps=val_steps)

InvalidArgumentError: input depth must be evenly divisible by filter depth: 4 vs 3 [[node sequential_2/mobilenetv2_1.00_96/Conv1/Conv2D (defined at C:\Users\XXXXX\Anaconda3\lib\site-packages\tensorflow_core\python\framework\ops.py:1751) ]] [Op:__inference_distributed_function_42611] Function call stack: distributed_function

If your model looks like this:

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

Change the value of input_shape (at the first convolutional layer) from (150, 150, 3) to (150, 150, 4).

Replace only the last term (which is 3 here) in the tuple with 4. That should make it work.

The error is due to a mismatch in the dimensions of the input: the model's first convolution expects an input depth of 3, but the input it is given has a depth of 4.

I found the answer! In my case, the following line fixed it.

    XXX = tf.convert_to_tensor(XXX[:, :, :3])

I hope it helps you too. Thank you.
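The slicing trick above works because a PNG with transparency loads as an RGBA array; keeping only the first three channels drops the alpha plane and leaves the RGB data the model expects. A minimal sketch with NumPy (the random array stands in for a loaded 15×15 PNG):

```python
import numpy as np

# Simulate a 15x15 RGBA image as it would be loaded from a PNG with transparency.
rgba = np.random.randint(0, 256, size=(15, 15, 4), dtype=np.uint8)

# Keep only the first three channels (R, G, B); the alpha plane is discarded.
rgb = rgba[:, :, :3]

print(rgba.shape)  # (15, 15, 4)
print(rgb.shape)   # (15, 15, 3)
```

The same `[:, :, :3]` slice works on a TensorFlow tensor, which is why wrapping it in `tf.convert_to_tensor` as above resolves the "4 vs 3" mismatch.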

I ran into this error because I was using images that had been converted to grayscale as my data. If you are doing this, you can either convert from grayscale back to a color format, or re-prepare your data without converting to grayscale, which is what I did.

Per the solution I found: "Perhaps you are trying to feed a grayscale image into a CNN that expects a color image. Find the shape of the input, e.g. print(model.input.shape) in Keras; you get e.g. (None, 224, 224, 3), and your input blob must have a corresponding shape. So for a grayscale image you have to convert it into a (formal) color image, where all three channels are the same. However, do not forget that you also need to know further aspects of the input blob (mean, range, deviation, …): with just the right shape it will calculate something, but without accounting for these aspects the result will not be good."
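The conversion described in that quote (a grayscale image turned into a "formal" color image with three identical channels) can be sketched in NumPy like this; the 224×224 shape matches the example in the quote and the random array stands in for real image data:

```python
import numpy as np

# A grayscale image loaded with a single channel.
gray = np.random.randint(0, 256, size=(224, 224, 1), dtype=np.uint8)

# Replicate the single channel three times along the last axis to get the
# (224, 224, 3) shape the model expects; all three channels are identical.
rgb_like = np.repeat(gray, 3, axis=-1)

print(rgb_like.shape)  # (224, 224, 3)
```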

I think you read a 4-channel image (e.g. RGBA). You should convert the input image to 'RGB' before feeding it to the model.
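One way to do that conversion is with Pillow's `Image.convert`, which drops the alpha channel when converting from RGBA to RGB. A small sketch (the in-memory image stands in for a PNG opened with `Image.open`):

```python
from PIL import Image

# Create a small RGBA image in memory; this stands in for a PNG with
# transparency that would normally be loaded via Image.open('file.png').
rgba_img = Image.new('RGBA', (15, 15), (255, 0, 0, 128))
print(rgba_img.mode)  # RGBA

# Convert to plain RGB: the fourth (alpha) channel is discarded.
rgb_img = rgba_img.convert('RGB')
print(rgb_img.mode)  # RGB
```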

If you are getting a "1 vs 3" error instead, it is because your input images are in grayscale mode.

I was using this code:

    train_generator = train_datagen.flow_from_directory(
        f'{dataset}/train',
        target_size=(150, 150),
        batch_size=32,
        color_mode='grayscale',
        class_mode='categorical')

To solve the error, I changed the color_mode to 'rgb':

    train_generator = train_datagen.flow_from_directory(
        f'{dataset}/train',
        target_size=(150, 150),
        batch_size=32,
        color_mode='rgb',
        class_mode='categorical')
