
Accuracy doesn't improve in training model character recognition

I am building a training model for my character recognition system. During every epoch I get the same accuracy, and it doesn't improve. I currently have 4000 training images and 77 validation images. My model is as follows:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

inputs = Input(shape=(32, 32, 3))
x = Conv2D(filters=64, kernel_size=5, activation='relu')(inputs)
x = MaxPooling2D()(x)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(1, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'])

data_gen_train = ImageDataGenerator(rescale=1/255)
data_gen_test = ImageDataGenerator(rescale=1/255)
data_gen_valid = ImageDataGenerator(rescale=1/255)

train_generator = data_gen_train.flow_from_directory(
    directory=r"./drive/My Drive/train_dataset",
    target_size=(32, 32),
    batch_size=10,
    class_mode="binary")

valid_generator = data_gen_valid.flow_from_directory(
    directory=r"./drive/My Drive/validation_dataset",
    target_size=(32, 32),
    batch_size=2,
    class_mode="binary")

test_generator = data_gen_test.flow_from_directory(
    directory=r"./drive/My Drive/test_dataset",
    target_size=(32, 32),
    batch_size=6,
    class_mode="binary")

model.fit(
    train_generator,
    epochs=10,
    steps_per_epoch=400,
    validation_steps=37,
    validation_data=valid_generator)

The result is as follows:

Found 4000 images belonging to 2 classes.
Found 77 images belonging to 2 classes.
Found 6 images belonging to 2 classes.
Epoch 1/10
400/400 [==============================] - 14s 35ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 2/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 3/10
400/400 [==============================] - 13s 34ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 4/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 5/10
400/400 [==============================] - 18s 46ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5541
Epoch 6/10
400/400 [==============================] - 13s 34ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 7/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5676
Epoch 8/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5946
Epoch 9/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
Epoch 10/10
400/400 [==============================] - 13s 33ms/step - loss: 0.0000e+00 - accuracy: 0.5000 - val_loss: 0.0000e+00 - val_accuracy: 0.5811
<tensorflow.python.keras.callbacks.History at 0x7fa3a5f4a8d0>

If you are trying to recognize characters of 2 classes, you should:

  • use class_mode="binary" in the flow_from_directory function
  • use binary_crossentropy as the loss
  • make sure your last layer has 1 neuron with a sigmoid activation function (see the sketch after this list)
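
Putting those three points together, a minimal sketch of the binary setup could look like the code below. It reuses the architecture and directory path from the question; only the output activation, the loss, and the generator's class_mode are the actual fix. This also explains the log above: softmax over a single neuron always outputs 1.0, so categorical crossentropy collapses to zero and accuracy stays at chance level.

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

inputs = Input(shape=(32, 32, 3))
x = Conv2D(filters=64, kernel_size=5, activation='relu')(inputs)
x = MaxPooling2D()(x)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
# one neuron + sigmoid for a 2-class problem
outputs = Dense(1, activation='sigmoid')(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # matches the 0/1 labels from class_mode="binary"
              metrics=['accuracy'])

train_generator = ImageDataGenerator(rescale=1/255).flow_from_directory(
    directory=r"./drive/My Drive/train_dataset",
    target_size=(32, 32),
    batch_size=10,
    class_mode="binary")  # labels come back as 0/1 scalars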

In case there are more than 2 classes:

  • do not use class_mode="binary" in the flow_from_directory function
  • use categorical_crossentropy as the loss
  • make sure your last layer has n neurons with softmax activation, where n is the number of classes (see the sketch after this list)
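
A corresponding sketch for the multi-class case is below. NUM_CLASSES is a hypothetical placeholder, not something from the question; set it to however many distinct characters your dataset actually has.

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 36  # hypothetical value, e.g. 26 letters + 10 digits; use your own count

inputs = Input(shape=(32, 32, 3))
x = Conv2D(filters=64, kernel_size=5, activation='relu')(inputs)
x = MaxPooling2D()(x)
x = Conv2D(filters=32, kernel_size=3, activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
outputs = Dense(NUM_CLASSES, activation='softmax')(x)  # n neurons, one per class

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # expects one-hot labels
              metrics=['accuracy'])

train_generator = ImageDataGenerator(rescale=1/255).flow_from_directory(
    directory=r"./drive/My Drive/train_dataset",
    target_size=(32, 32),
    batch_size=10,
    class_mode="categorical")  # the default; yields one-hot labels for categorical_crossentropy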
