I have a problem training a model
I'm learning deep learning in Python using Keras and TensorFlow. I'm using EfficientNetB0 with pretrained ImageNet weights. I split the data into training and testing sets and one-hot encoded the labels. I have 17 folders of images, i.e. 17 classes.
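For context, the split and one-hot encoding were done roughly like this (a minimal sketch; train_test_split, to_categorical, and the variable names X and labels are assumptions about my setup, not the exact code):

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical

# X: images with shape (num_samples, image_size, image_size, 3)
# labels: one integer class index per image, derived from the 17 folder names
X_train, X_test, y_train_int, y_test_int = train_test_split(X, labels, test_size=0.2, random_state=42)

# to_categorical infers the number of columns as max(labels) + 1 unless num_classes is given
y_train = to_categorical(y_train_int)
y_test = to_categorical(y_test_int)
print(y_train.shape)  # the second dimension must match the final Dense layer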
# EfficientNetB0 backbone pretrained on ImageNet, without its classification head
effnet = EfficientNetB0(weights='imagenet', include_top=False, input_shape=(image_size, image_size, 3))
model = effnet.output
model = tf.keras.layers.GlobalAveragePooling2D()(model)  # pool feature maps to one vector per image
model = tf.keras.layers.Dropout(rate=0.5)(model)  # regularization
model = tf.keras.layers.Dense(17, activation='softmax')(model)  # one output unit per class
model = tf.keras.models.Model(inputs=effnet.input, outputs=model)
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
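Before calling fit, a quick sanity check (a suggestion, not something I originally did) is to compare the model's output width with the label width:

print(model.output_shape)  # (None, 17) for the model above
print(y_train.shape)       # its second dimension must also be 17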
Everything ran smoothly until I tried to train the model:
history = model.fit(X_train, y_train, validation_split=0.1, epochs=10, verbose=1, batch_size=32, callbacks=[tensorboard, checkpoint, reduce_lr])
ValueError Traceback (most recent call last)
Input In [22], in <cell line: 1>()
----> 1 history = model.fit(X_train,y_train,validation_split=0.1, epochs =20, verbose=1,batch_size=32, callbacks=[tensorboard,checkpoint,reduce_lr])
File C:\Ken\Conda\lib\site-packages\keras\utils\traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
File ~\AppData\Local\Temp\__autograph_generated_filexkmxbaog.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1051, in train_function *
return step_function(self, iterator)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1040, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 1030, in run_step **
outputs = model.train_step(data)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 890, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "C:\Ken\Conda\lib\site-packages\keras\engine\training.py", line 948, in compute_loss
return self.compiled_loss(
File "C:\Ken\Conda\lib\site-packages\keras\engine\compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 139, in __call__
losses = call_fn(y_true, y_pred)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 243, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "C:\Ken\Conda\lib\site-packages\keras\losses.py", line 1787, in categorical_crossentropy
return backend.categorical_crossentropy(
File "C:\Ken\Conda\lib\site-packages\keras\backend.py", line 5119, in categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
ValueError: Shapes (None, 18) and (None, 17) are incompatible
I fixed the error

I just changed the Dense layer to 18 units and deleted the Dropout layer, and training now runs. I'm not entirely sure which change mattered, but the error compares shapes (None, 18) and (None, 17), so my one-hot encoded labels must have 18 columns and the final Dense layer has to match that. The Dropout layer doesn't change the output shape at all; it only helps the model avoid overfitting, so removing it was probably unnecessary.
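A likely explanation (hedged, since I'd have to check the raw label values to confirm) is that the integer labels run from 1 to 17 instead of 0 to 16. to_categorical infers the number of columns as max(label) + 1, so labels 1..17 produce 18 columns:

import numpy as np
from tensorflow.keras.utils import to_categorical

labels = np.arange(1, 18)    # 17 classes labelled 1..17 instead of 0..16
onehot = to_categorical(labels)
print(onehot.shape)          # (17, 18) -> 18 columns, hence the (None, 18) target shape

Shifting the labels to start at 0 before one-hot encoding would let the Dense layer stay at 17 units, and the Dropout layer could be kept either way.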