
Validation accuracy is highly fluctuating using RESNET

  • I am new to computer vision and I am trying to understand my results. I am classifying two classes and getting 53% accuracy. When I plot the training and validation loss and accuracy, the loss curves look fine, but the validation accuracy is very erratic.
  • One thing I suspect: I am basically extracting frames from video (20 fps). Could the validation accuracy be fluctuating because some frames that appear in training are nearly identical to frames in the test and validation sets? Is that the cause? If not, please tell me what the reason might be and what can be done to improve this. (A sketch of a clip-level split follows this list.)
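
If frames from the same clip end up in both the training and the validation set, the validation score mostly reflects memorised near-duplicate frames rather than generalisation. A minimal sketch of a clip-level split, assuming a hypothetical video_ids array recording which clip each frame came from (using scikit-learn's GroupShuffleSplit):

from sklearn.model_selection import GroupShuffleSplit

# X: (n_frames, 224, 224, 3) frames, y: (n_frames,) labels,
# video_ids: (n_frames,) hypothetical clip ID for each frame
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=video_ids))

trainX, testX = X[train_idx], X[test_idx]
trainY, testY = y[train_idx], y[test_idx]

# sanity check: no clip contributes frames to both splits
assert not set(video_ids[train_idx]) & set(video_ids[test_idx])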

Code:

print(trainX.shape)
print(testX.shape)
print(trainY.shape)
print(testY.shape)

(778, 224, 224, 3)
(195, 224, 224, 3)
(778, 1)
(195, 1)


trainAug = ImageDataGenerator(
    rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")
# initialize the validation/testing data augmentation object (which
# we'll be adding mean subtraction to)
valAug = ImageDataGenerator()
# define the ImageNet mean subtraction (in RGB order) and set the
# the mean subtraction value for each of the data augmentation
# objects
mean = np.array([123.68, 116.779, 103.939], dtype="float32")
trainAug.mean = mean
valAug.mean = mean

baseModel = ResNet50(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))


# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)

model = Model(inputs=baseModel.input, outputs=headModel)

for layer in baseModel.layers:
    layer.trainable = False


epoch = 50

print("[INFO] compiling model...")
opt = SGD(lr=1e-4, momentum=0.9, decay=1e-4 / epoch)
model.compile(loss=BinaryFocalLoss(gamma=2), optimizer=opt,
    metrics=["accuracy"])

print("[INFO] training head...")
H = model.fit(
    x=trainAug.flow(trainX, trainY, batch_size=32),
    steps_per_epoch=len(trainX) // 32,
    validation_data=valAug.flow(testX, testY),
    validation_steps=len(testX) // 32,
    epochs= epoch)


Epoch 1/50
24/24 [==============================] - 12s 399ms/step - loss: 0.2420 - accuracy: 0.4947 - val_loss: 0.1751 - val_accuracy: 0.0052
Epoch 2/50
24/24 [==============================] - 9s 353ms/step - loss: 0.2415 - accuracy: 0.4823 - val_loss: 0.1736 - val_accuracy: 0.6615
Epoch 3/50
24/24 [==============================] - 8s 350ms/step - loss: 0.2325 - accuracy: 0.5026 - val_loss: 0.1736 - val_accuracy: 0.5156
Epoch 4/50
24/24 [==============================] - 9s 354ms/step - loss: 0.2224 - accuracy: 0.5070 - val_loss: 0.1746 - val_accuracy: 0.0104
Epoch 5/50
24/24 [==============================] - 8s 362ms/step - loss: 0.2205 - accuracy: 0.5093 - val_loss: 0.1740 - val_accuracy: 0.1042
Epoch 6/50
24/24 [==============================] - 8s 351ms/step - loss: 0.2064 - accuracy: 0.4877 - val_loss: 0.1738 - val_accuracy: 0.8333
Epoch 7/50
24/24 [==============================] - 8s 348ms/step - loss: 0.2082 - accuracy: 0.5233 - val_loss: 0.1735 - val_accuracy: 0.6875
Epoch 8/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1998 - accuracy: 0.5085 - val_loss: 0.1738 - val_accuracy: 0.9115
Epoch 9/50
24/24 [==============================] - 9s 365ms/step - loss: 0.1972 - accuracy: 0.5100 - val_loss: 0.1739 - val_accuracy: 0.9271
Epoch 10/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1967 - accuracy: 0.4972 - val_loss: 0.1737 - val_accuracy: 0.8802
Epoch 11/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1937 - accuracy: 0.5123 - val_loss: 0.1737 - val_accuracy: 0.1667
Epoch 12/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1909 - accuracy: 0.4901 - val_loss: 0.1739 - val_accuracy: 0.0990
Epoch 13/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1907 - accuracy: 0.4881 - val_loss: 0.1736 - val_accuracy: 0.7760
Epoch 14/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1900 - accuracy: 0.5214 - val_loss: 0.1735 - val_accuracy: 0.2760
Epoch 15/50
24/24 [==============================] - 8s 360ms/step - loss: 0.1878 - accuracy: 0.5185 - val_loss: 0.1735 - val_accuracy: 0.6094
Epoch 16/50
24/24 [==============================] - 9s 356ms/step - loss: 0.1862 - accuracy: 0.5154 - val_loss: 0.1735 - val_accuracy: 0.4375
Epoch 17/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1854 - accuracy: 0.5097 - val_loss: 0.1737 - val_accuracy: 0.0833
Epoch 18/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1841 - accuracy: 0.4989 - val_loss: 0.1734 - val_accuracy: 0.3750
Epoch 19/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1854 - accuracy: 0.5127 - val_loss: 0.1735 - val_accuracy: 0.4479
Epoch 20/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1832 - accuracy: 0.5080 - val_loss: 0.1735 - val_accuracy: 0.6354
Epoch 21/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1829 - accuracy: 0.5197 - val_loss: 0.1734 - val_accuracy: 0.5521
Epoch 22/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1817 - accuracy: 0.4861 - val_loss: 0.1735 - val_accuracy: 0.6667
Epoch 23/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1819 - accuracy: 0.5491 - val_loss: 0.1734 - val_accuracy: 0.6198
Epoch 24/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1812 - accuracy: 0.5278 - val_loss: 0.1734 - val_accuracy: 0.4062
Epoch 25/50
24/24 [==============================] - 9s 360ms/step - loss: 0.1807 - accuracy: 0.5166 - val_loss: 0.1735 - val_accuracy: 0.3125
Epoch 26/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1801 - accuracy: 0.4690 - val_loss: 0.1734 - val_accuracy: 0.5365
Epoch 27/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1799 - accuracy: 0.5023 - val_loss: 0.1734 - val_accuracy: 0.3229
Epoch 28/50
24/24 [==============================] - 8s 348ms/step - loss: 0.1799 - accuracy: 0.4632 - val_loss: 0.1734 - val_accuracy: 0.7500
Epoch 29/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1786 - accuracy: 0.5421 - val_loss: 0.1734 - val_accuracy: 0.6302
Epoch 30/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1783 - accuracy: 0.5218 - val_loss: 0.1734 - val_accuracy: 0.4010
Epoch 31/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1788 - accuracy: 0.4844 - val_loss: 0.1734 - val_accuracy: 0.5677
Epoch 32/50
24/24 [==============================] - 8s 345ms/step - loss: 0.1781 - accuracy: 0.5369 - val_loss: 0.1734 - val_accuracy: 0.3125
Epoch 33/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1781 - accuracy: 0.5177 - val_loss: 0.1734 - val_accuracy: 0.5469
Epoch 34/50
24/24 [==============================] - 9s 355ms/step - loss: 0.1776 - accuracy: 0.5039 - val_loss: 0.1735 - val_accuracy: 0.1302
Epoch 35/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1776 - accuracy: 0.4564 - val_loss: 0.1734 - val_accuracy: 0.6354
Epoch 36/50
24/24 [==============================] - 9s 352ms/step - loss: 0.1779 - accuracy: 0.5218 - val_loss: 0.1735 - val_accuracy: 0.4167
Epoch 37/50
24/24 [==============================] - 8s 345ms/step - loss: 0.1771 - accuracy: 0.5147 - val_loss: 0.1734 - val_accuracy: 0.4948
Epoch 38/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1772 - accuracy: 0.5111 - val_loss: 0.1734 - val_accuracy: 0.6719
Epoch 39/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1771 - accuracy: 0.5616 - val_loss: 0.1734 - val_accuracy: 0.3281
Epoch 40/50
24/24 [==============================] - 8s 352ms/step - loss: 0.1768 - accuracy: 0.5061 - val_loss: 0.1734 - val_accuracy: 0.6198
Epoch 41/50
24/24 [==============================] - 8s 351ms/step - loss: 0.1769 - accuracy: 0.5070 - val_loss: 0.1734 - val_accuracy: 0.2917
Epoch 42/50
24/24 [==============================] - 8s 346ms/step - loss: 0.1767 - accuracy: 0.4723 - val_loss: 0.1734 - val_accuracy: 0.5990
Epoch 43/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1762 - accuracy: 0.5208 - val_loss: 0.1734 - val_accuracy: 0.3490
Epoch 44/50
24/24 [==============================] - 8s 350ms/step - loss: 0.1763 - accuracy: 0.5047 - val_loss: 0.1734 - val_accuracy: 0.6562
Epoch 45/50
24/24 [==============================] - 8s 349ms/step - loss: 0.1762 - accuracy: 0.5760 - val_loss: 0.1735 - val_accuracy: 0.1510
Epoch 46/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1763 - accuracy: 0.4679 - val_loss: 0.1734 - val_accuracy: 0.4479
Epoch 47/50
24/24 [==============================] - 9s 354ms/step - loss: 0.1762 - accuracy: 0.5210 - val_loss: 0.1734 - val_accuracy: 0.5052
Epoch 48/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1757 - accuracy: 0.4938 - val_loss: 0.1734 - val_accuracy: 0.3854
Epoch 49/50
24/24 [==============================] - 9s 353ms/step - loss: 0.1759 - accuracy: 0.4631 - val_loss: 0.1734 - val_accuracy: 0.5990
Epoch 50/50
24/24 [==============================] - 8s 347ms/step - loss: 0.1757 - accuracy: 0.5254 - val_loss: 0.1734 - val_accuracy: 0.4792

Update: tried the code from one of the suggestions

def scaler(x):
        y=x/127.5-1
        return y
trainAug= ImageDataGenerator(preprocessing_function=scaler, rotation_range=30,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")
valAug= ImageDataGenerator(preprocessing_function=scaler)
baseModel = ResNet50(weights="imagenet", include_top=False, pooling='max', input_shape=(224, 224, 3))
headModel = baseModel.output
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.2)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)
for layer in baseModel.layers:
    layer.trainable = False

Epochs:

46/46 [==============================] - 15s 316ms/step - loss: 1.0479 - accuracy: 0.4908 - val_loss: 0.8225 - val_accuracy: 1.0000
Epoch 6/10
46/46 [==============================] - 14s 314ms/step - loss: 1.1222 - accuracy: 0.5143 - val_loss: 0.7951 - val_accuracy: 0.0000e+00
Epoch 7/10
46/46 [==============================] - 14s 311ms/step - loss: 1.1610 - accuracy: 0.5014 - val_loss: 0.8112 - val_accuracy: 1.0000
Epoch 8/10
46/46 [==============================] - 15s 315ms/step - loss: 1.1724 - accuracy: 0.5254 - val_loss: 1.0836 - val_accuracy: 0.0000e+00
Epoch 9/10
46/46 [==============================] - 14s 308ms/step - loss: 1.3111 - accuracy: 0.4942 - val_loss: 0.8058 - val_accuracy: 0.0000e+00
Epoch 10/10
46/46 [==============================] - 14s 310ms/step - loss: 1.2431 - accuracy: 0.4968 - val_loss: 0.9726 - val_accuracy: 0.0000e+00


Well, looking at the training and validation loss, your model is not training at all, so the fluctuating accuracy is not the real problem. Try simplifying things. Remove all the mean-subtraction stuff. ResNet was trained on images with pixels scaled between -1 and +1, so add that scaling to the generators:

def scaler(x):
    # scale pixels from [0, 255] to the [-1, +1] range ResNet50 was trained on
    return x / 127.5 - 1

trainAug = ImageDataGenerator(preprocessing_function=scaler)  # plus your other augmentation arguments
valAug = ImageDataGenerator(preprocessing_function=scaler)
baseModel = ResNet50(weights="imagenet", include_top=False, pooling='max', input_shape=(224, 224, 3))
headModel = baseModel.output
headModel = Dense(512, activation="relu")(headModel)
headModel = Dropout(0.2)(headModel)
headModel = Dense(len(lb.classes_), activation="softmax")(headModel)

How are your labels encoded? If they are integers, your loss should be sparse_categorical_crossentropy. If they are one-hot encoded, your loss should be categorical_crossentropy.
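
For reference, a minimal sketch of the two consistent label/loss pairings, reusing trainY, testY, lb and opt from the question's code:

from tensorflow.keras.utils import to_categorical

# Option 1: integer labels of shape (n,) or (n, 1) with values 0..num_classes-1
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])

# Option 2: one-hot labels of shape (n, num_classes)
trainY_oh = to_categorical(trainY, num_classes=len(lb.classes_))
testY_oh = to_categorical(testY, num_classes=len(lb.classes_))
model.compile(loss="categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])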
