
Validation accuracy stuck, training accuracy low

I want to create a machine learning model with TensorFlow that detects flowers. I went out into nature and took pictures of 4 different species (~600 images per class; one class has 700).

I load these images with a TensorFlow ImageDataGenerator:

train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.15,
                                   brightness_range=[0.7, 1.4],
                                   fill_mode='nearest',
                                   vertical_flip=True,
                                   horizontal_flip=True,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   validation_split=0.2)

train_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    class_mode='categorical',
    subset='training',
    seed=1,
    shuffle=False,
    #save_to_dir=r'G:\test'
    )

validation_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0], imageShape[1]),
    batch_size=batchSize,
    shuffle=False,
    seed=1,
    class_mode='categorical',
    subset='validation')

Then I create a simple model that looks like this:

model = tf.keras.Sequential([
    keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Dropout(0.5),

    keras.layers.Conv2D(256, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),

    keras.layers.Conv2D(512, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),

    keras.layers.Flatten(),

    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])

opt = tf.keras.optimizers.SGD(learning_rate=0.001, decay=1e-5)
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])

Then I start the training process (on CPU):

history = model.fit(
    train_generator,
    steps_per_epoch=train_generator.samples // batchSize,
    validation_data=validation_generator,
    validation_steps=validation_generator.samples // batchSize,
    epochs=200,
    callbacks=[checkpoint, early, tensorboard],
    workers=-1)

The validation accuracy should improve, but it starts at 0.3375 and stays at that level for the entire training run. The validation loss (1.3737) only decreases by about 0.001. The training accuracy starts at 0.15 but does increase.

Why is my validation accuracy stuck? Am I using the right loss? Did I build my model wrong? Is my train generator one-hot encoding the labels?
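
As a quick sanity check of the label encoding, here is a minimal sketch using the train_generator defined above (the class names in the output depend on your folder names):

# With class_mode='categorical', flow_from_directory yields one-hot encoded label batches.
x_batch, y_batch = next(train_generator)

print(train_generator.class_indices)  # mapping: folder name -> class index
print(y_batch.shape)                  # (batchSize, 4): one row per image
print(y_batch[0])                     # e.g. [0., 0., 1., 0.] -> a one-hot vector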

Thanks

I solved the problem by using RMSprop() without any parameters.

So I changed from:

opt = tf.keras.optimizers.SGD(learning_rate=0.001,decay=1e-5)
model.compile(loss='categorical_crossentropy',optimizer= opt, metrics=['accuracy'])

to:

opt = tf.keras.optimizers.RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
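
For reference, RMSprop() with no arguments just falls back on the Keras defaults, so the line above is roughly equivalent to spelling two of them out:

# Keras defaults for RMSprop: learning_rate=0.001, rho=0.9 (plus zero momentum)
opt = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)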

This is a similar example, except that it is binary while your problem has 4 classes. You may want to change the loss to categorical cross-entropy, change class_mode from 'binary' to 'categorical' in the train and test generators, and change the final Dense layer's activation to softmax (see the sketch after the code). I am still able to use model.fit_generator().

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

image_dataGen = ImageDataGenerator(rotation_range=20,
                                   width_shift_range=0.2, height_shift_range=0.2, shear_range=0.1,
                                   zoom_range=0.1, fill_mode='nearest', horizontal_flip=True,
                                   vertical_flip=True, rescale=1/255)

train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='binary')

test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='binary',
                                                shuffle=False)

model = Sequential()

model.add(Conv2D(filters = 32, kernel_size = (3,3),input_shape = image_shape,activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))

model.add(Conv2D(filters = 48, kernel_size = (3,3),input_shape = image_shape,activation = 'relu'))
model.add(MaxPool2D(pool_size = (2,2)))

model.add(Flatten())
model.add(Dense(units = 128,activation = 'relu'))
model.add(Dropout(0.5))

model.add(Dense(units = 1, activation = 'sigmoid'))

model.compile(loss = 'binary_crossentropy',metrics = ['accuracy'], optimizer = 'adam')

results = model.fit_generator(train_images, epochs = 10, callbacks = [early_stop],
                             validation_data = test_images)
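
A minimal sketch of those three changes, reusing image_dataGen, train_path, test_path and image_shape from the snippet above (the categorical variant of the same pipeline, not a drop-in replacement):

# Same pipeline adapted to 4 classes: categorical labels, softmax output, categorical cross-entropy.
train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='categorical')

test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='categorical',
                                                shuffle=False)

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3), input_shape=image_shape, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=48, kernel_size=(3,3), activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(units=128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=4, activation='softmax'))    # 4 output units, one per class

model.compile(loss='categorical_crossentropy',     # categorical instead of binary
              metrics=['accuracy'], optimizer='adam')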

Maybe your learning rate is too high.

Try a learning rate of 0.000001, and if that does not work, try another optimizer such as Adam.
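
For example, a sketch against the model from the question (the exact learning rate is only a starting point to tune, not a fixed recommendation):

from tensorflow.keras.optimizers import SGD, Adam

# Option 1: keep SGD, but with a much smaller learning rate
opt = SGD(learning_rate=0.000001)

# Option 2: switch to Adam, which adapts the step size per parameter
# opt = Adam(learning_rate=0.001)

model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])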

Use model.fit_generator() instead of model.fit(). The points below could also be helpful.

- In order to use .flow_from_directory, you must organize the images into sub-directories. This is an absolute requirement, otherwise the method won't work.
- Each directory should contain images of only one class, so one folder per class of images.
- Check that the paths for the training data and the test data are correct; they cannot point to the same location.
- I have used the ImageDataGenerator class for a classification problem. You can also try changing the optimizer to 'Adam'.

Structure needed:

image data folder
    Class 1
        0.jpg
        1.jpg
        ...
    Class 2
        0.jpg
        1.jpg
        ...
    ...
    Class n
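
A minimal sketch with hypothetical paths flower_data/train and flower_data/test, each containing one sub-folder per class:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Expected layout (hypothetical paths):
#   flower_data/train/class_1/*.jpg ... flower_data/train/class_n/*.jpg
#   flower_data/test/class_1/*.jpg  ... flower_data/test/class_n/*.jpg
datagen = ImageDataGenerator(rescale=1./255)

train_gen = datagen.flow_from_directory('flower_data/train',
                                        target_size=(128, 128),
                                        class_mode='categorical')

test_gen = datagen.flow_from_directory('flower_data/test',
                                       target_size=(128, 128),
                                       class_mode='categorical',
                                       shuffle=False)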
