
Validation accuracy fluctuates heavily when using ResNet

  • I am new to computer vision and I am trying to understand my results. I am classifying two classes and getting 53% accuracy. When I plot the training and validation loss and accuracy, the loss curves look fine, but the validation accuracy is very erratic.
  • I suspect a few things here. I am extracting frames from videos (20 fps), so the fluctuating validation accuracy might be caused by some near-identical frames from training also appearing in the test and validation sets. Could that be the cause? If not, what might be the reason, and what could be done to improve this?
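One way to rule out frame leakage is to split by video rather than by frame, so all frames from one clip land on the same side of the split. A minimal NumPy sketch under the assumption that a `video_ids` array (one id per frame) is available from the extraction step:

```python
import numpy as np

def split_by_video(video_ids, test_frac=0.2, seed=42):
    """Return boolean masks (train, test) such that every frame of a
    given video ends up on the same side of the split."""
    rng = np.random.default_rng(seed)
    unique_ids = rng.permutation(np.unique(video_ids))
    n_test = max(1, int(len(unique_ids) * test_frac))
    test_ids = set(unique_ids[:n_test])
    test_mask = np.array([v in test_ids for v in video_ids])
    return ~test_mask, test_mask

# toy example: 10 frames from 4 videos
video_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
train_mask, test_mask = split_by_video(video_ids)
# no video id appears on both sides
assert not set(video_ids[train_mask]) & set(video_ids[test_mask])
```

With a split like this, a validation frame can never be a near-duplicate of a training frame from the same clip, so any remaining fluctuation has another cause.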

Code:

print(trainX.shape)
print(testX.shape)
print(trainY.shape)
print(testY.shape)

(778, 224, 224, 3)
(195, 224, 224, 3)
(778, 1)
(195, 1)


trainAug = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")
# initialize the validation/testing data augmentation object (which
# we'll be adding mean subtraction to)
valAug = ImageDataGenerator()
# define the ImageNet mean subtraction (in RGB order) and set the
# mean subtraction value for each of the data augmentation objects
mean = np.array([123.68, 116.779, 103.939], dtype="float32")
trainAug.mean = mean
valAug.mean = mean

baseModel = ResNet50(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))


# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)

model = Model(inputs=baseModel.input, outputs=headModel)

for layer in baseModel.layers:
    layer.trainable = False


epoch = 50

print("[INFO] compiling model...")
opt = SGD(lr=1e-4, momentum=0.9, decay=1e-4 / epoch)
model.compile(loss=BinaryFocalLoss(gamma=2), optimizer=opt,
    metrics=["accuracy"])

print("[INFO] training head...")
H = model.fit(
    x=trainAug.flow(trainX, trainY, batch_size=32),
    steps_per_epoch=len(trainX) // 32,
    validation_data=valAug.flow(testX, testY),
    validation_steps=len(testX) // 32,
    epochs= epoch)


Epoch 1/50
24/24 [==============================] - 12s 399ms/step - loss: 0.2420 - accuracy: 0.4947 - val_loss: 0.1751 - val_accuracy: 0.0052
Epoch 2/50
24/24 [==============================] - 9s 353ms/step - loss: 0.2415 - accuracy: 0.4823 - val_loss: 0.1736 - val_accuracy: 0.6615
Epoch 3/50
24/24 [==============================] - 8s 350ms/step - loss: 0.2325 - accuracy: 0.5026 - val_loss: 0.1736 - val_accuracy: 0.5156
Epoch 4/50
24/24 [==============================] - 9s 354ms/step - loss: 0.2224 - accuracy: 0.5070 - val_loss: 0.1746 - val_accuracy: 0.0104
Epoch 5/50
24/24 [==============================] - 8s 362ms/step - loss: 0.2205 - accuracy: 0.5093 - val_loss: 0.1740 - val_accuracy: 0.1042
Epoch 6/50
24/24 [==============================] - 8s 351ms/step - loss: 0.2064 - accuracy: 0.4877 - val_loss: 0.1738 - val_accuracy: 0.8333
Epoch 7/50
24/24 [==============================] - 8s 348ms/step - loss: 0.2082 - accuracy: 0.5233 - val_loss: 0.1735 - val_accuracy: 0.6875
Epoch 8/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1998 - accuracy: 0.5085 - val_loss: 0.1738 - val_accuracy: 0.9115
Epoch 9/50
24/24 [==============================] - 9s 365ms/step - loss: 0.1972 - accuracy: 0.5100 - val_loss: 0.1739 - val_accuracy: 0.9271
Epoch 10/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1967 - accuracy: 0.4972 - val_loss: 0.1737 - val_accuracy: 0.8802
Epoch 11/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1937 - accuracy: 0.5123 - val_loss: 0.1737 - val_accuracy: 0.1667
Epoch 12/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1909 - accuracy: 0.4901 - val_loss: 0.1739 - val_accuracy: 0.0990
Epoch 13/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1907 - accuracy: 0.4881 - val_loss: 0.1736 - val_accuracy: 0.7760
Epoch 14/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1900 - accuracy: 0.5214 - val_loss: 0.1735 - val_accuracy: 0.2760
Epoch 15/50
24/24 [==============================] - 8s 360ms/step - loss: 0.1878 - accuracy: 0.5185 - val_loss: 0.1735 - val_accuracy: 0.6094
Epoch 16/50
24/24 [==============================] - 9s 356ms/step - loss: 0.1862 - accuracy: 0.5154 - val_loss: 0.1735 - val_accuracy: 0.4375
Epoch 17/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1854 - accuracy: 0.5097 - val_loss: 0.1737 - val_accuracy: 0.0833
Epoch 18/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1841 - accuracy: 0.4989 - val_loss: 0.1734 - val_accuracy: 0.3750
Epoch 19/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1854 - accuracy: 0.5127 - val_loss: 0.1735 - val_accuracy: 0.4479
Epoch 20/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1832 - accuracy: 0.5080 - val_loss: 0.1735 - val_accuracy: 0.6354
Epoch 21/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1829 - accuracy: 0.5197 - val_loss: 0.1734 - val_accuracy: 0.5521
Epoch 22/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1817 - accuracy: 0.4861 - val_loss: 0.1735 - val_accuracy: 0.6667
Epoch 23/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1819 - accuracy: 0.5491 - val_loss: 0.1734 - val_accuracy: 0.6198
Epoch 24/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1812 - accuracy: 0.5278 - val_loss: 0.1734 - val_accuracy: 0.4062
Epoch 25/50
24/24 [==============================] - 9s 360ms/step - loss: 0.1807 - accuracy: 0.5166 - val_loss: 0.1735 - val_accuracy: 0.3125
Epoch 26/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1801 - accuracy: 0.4690 - val_loss: 0.1734 - val_accuracy: 0.5365
Epoch 27/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1799 - accuracy: 0.5023 - val_loss: 0.1734 - val_accuracy: 0.3229
Epoch 28/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1799 - accuracy: 0.4632 - val_loss: 0.1734 - val_accuracy: 0.7500
Epoch 29/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1786 - accuracy: 0.5421 - val_loss: 0.1734 - val_accuracy: 0.6302
Epoch 30/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1783 - accuracy: 0.5218 - val_loss: 0.1734 - val_accuracy: 0.4010
Epoch 31/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1788 - accuracy: 0.4844 - val_loss: 0.1734 - val_accuracy: 0.5677
Epoch 32/50
24/24 [==============================] - 8s 345ms/step - loss: 0.1781 - accuracy: 0.5369 - val_loss: 0.1734 - val_accuracy: 0.3125
Epoch 33/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1781 - accuracy: 0.5177 - val_loss: 0.1734 - val_accuracy: 0.5469
Epoch 34/50
24/24 [==============================] - 9s 355ms/step - loss: 0.1776 - accuracy: 0.5039 - val_loss: 0.1735 - val_accuracy: 0.1302
Epoch 35/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1776 - accuracy: 0.4564 - val_loss: 0.1734 - val_accuracy: 0.6354
Epoch 36/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1779 - accuracy: 0.5218 - val_loss: 0.1735 - val_accuracy: 0.4167
Epoch 37/50
24/24 [==============================] - 8s 345ms/step - loss: 0.1771 - accuracy: 0.5147 - val_loss: 0.1734 - val_accuracy: 0.4948
Epoch 38/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1772 - accuracy: 0.5111 - val_loss: 0.1734 - val_accuracy: 0.6719
Epoch 39/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1771 - accuracy: 0.5616 - val_loss: 0.1734 - val_accuracy: 0.3281
Epoch 40/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1768 - accuracy: 0.5061 - val_loss: 0.1734 - val_accuracy: 0.6198
Epoch 41/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1769 - accuracy: 0.5070 - val_loss: 0.1734 - val_accuracy: 0.2917
Epoch 42/50
24/24 [==============================] - 8s 346ms/step - loss: 0.1767 - accuracy: 0.4723 - val_loss: 0.1734 - val_accuracy: 0.5990
Epoch 43/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1762 - accuracy: 0.5208 - val_loss: 0.1734 - val_accuracy: 0.3490
Epoch 44/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1763 - accuracy: 0.5047 - val_loss: 0.1734 - val_accuracy: 0.6562
Epoch 45/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1762 - accuracy: 0.5760 - val_loss: 0.1735 - val_accuracy: 0.1510
Epoch 46/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1763 - accuracy: 0.4679 - val_loss: 0.1734 - val_accuracy: 0.4479
Epoch 47/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1762 - accuracy: 0.5210 - val_loss: 0.1734 - val_accuracy: 0.5052
Epoch 48/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1757 - accuracy: 0.4938 - val_loss: 0.1734 - val_accuracy: 0.3854
Epoch 49/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1759 - accuracy: 0.4631 - val_loss: 0.1734 - val_accuracy: 0.5990
Epoch 50/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1757 - accuracy: 0.5254 - val_loss: 0.1734 - val_accuracy: 0.4792

Update: I tried the code from one of the answers below:

def scaler(x):
    return x / 127.5 - 1

trainAug = ImageDataGenerator(
    preprocessing_function=scaler,
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")
valAug = ImageDataGenerator(preprocessing_function=scaler)
baseModel = ResNet50(weights="imagenet", include_top=False, pooling='max', input_shape=(224, 224, 3))
headModel = baseModel.output
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.2)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)
for layer in baseModel.layers:
    layer.trainable = False

Last epochs of the run:

46/46 [==============================] - 15s 316ms/step - loss: 1.0479 - accuracy: 0.4908 - val_loss: 0.8225 - val_accuracy: 1.0000
Epoch 6/10
46/46 [==============================] - 14s 314ms/step - loss: 1.1222 - accuracy: 0.5143 - val_loss: 0.7951 - val_accuracy: 0.0000e+00
Epoch 7/10
46/46 [==============================] - 14s 311ms/step - loss: 1.1610 - accuracy: 0.5014 - val_loss: 0.8112 - val_accuracy: 1.0000
Epoch 8/10
46/46 [==============================] - 15s 315ms/step - loss: 1.1724 - accuracy: 0.5254 - val_loss: 1.0836 - val_accuracy: 0.0000e+00
Epoch 9/10
46/46 [==============================] - 14s 308ms/step - loss: 1.3111 - accuracy: 0.4942 - val_loss: 0.8058 - val_accuracy: 0.0000e+00
Epoch 10/10
46/46 [==============================] - 14s 310ms/step - loss: 1.2431 - accuracy: 0.4968 - val_loss: 0.9726 - val_accuracy: 0.0000e+00


Looking at the training and validation loss, your model is not learning at all, so the accuracy variation is not the underlying problem. Try simplifying things: remove all the mean-subtraction code. ResNet was trained on images whose pixels were scaled between -1 and +1, so add this preprocessing to the generators:

def scaler(x):
    return x / 127.5 - 1

trainAug = ImageDataGenerator(preprocessing_function=scaler,
    # ... keep your other augmentation arguments here ...
    )
valAug = ImageDataGenerator(preprocessing_function=scaler)
baseModel = ResNet50(weights="imagenet", include_top=False, pooling='max', input_shape=(224, 224, 3))
headModel = baseModel.output
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.2)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)
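A quick sanity check that the `scaler` above really maps 8-bit pixel values into the [-1, 1] range:

```python
import numpy as np

def scaler(x):
    return x / 127.5 - 1

# endpoints and midpoint of the uint8 pixel range
x = np.array([0, 127.5, 255], dtype="float32")
y = scaler(x)
assert y.min() == -1.0 and y.max() == 1.0  # 0 -> -1, 255 -> +1
```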

How are your labels encoded? If they are integers, then your loss should be sparse_categorical_crossentropy. If they are one-hot encoded, your loss should be categorical_crossentropy.
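The two encodings compute the same loss value; only the target format differs. A toy NumPy sketch (the softmax outputs here are made-up numbers for illustration):

```python
import numpy as np

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])          # softmax outputs for 2 samples, 2 classes

int_labels = np.array([0, 1])           # integer-encoded targets
one_hot = np.eye(2)[int_labels]         # one-hot targets: [[1,0],[0,1]]

# sparse_categorical_crossentropy: index probs with the integer labels
sparse_ce = -np.log(probs[np.arange(len(int_labels)), int_labels])
# categorical_crossentropy: sum over classes against one-hot targets
cat_ce = -np.sum(one_hot * np.log(probs), axis=1)

assert np.allclose(sparse_ce, cat_ce)   # identical losses, different encodings
```

Mixing them up (e.g. integer targets with categorical_crossentropy) is a common source of nonsense metrics, so it is worth checking the shape of `trainY` against the loss you pass to `model.compile`.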
