
CNN with Keras, high acc during training but low during testing for same data set

I am building a CNN with Keras in Google Colab. The dataset contains 3 classes with the same number of images in each class. The images are organized in my Google Drive as

Images:
-- class 1
-- class 2
-- class 3

The code to read the data and create the CNN is here:

from math import ceil

import keras
from keras.models import Sequential
from keras.layers import (Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Dropout, Flatten, Dense)
from keras.preprocessing.image import ImageDataGenerator

batch_size = 30

data = ImageDataGenerator(rescale=1. / 255, 
                          validation_split=0.2)

training_data = data.flow_from_directory('/content/drive/My Drive/Data/Images', 
                                         target_size=(200, 200), shuffle=True, batch_size = batch_size, 
                                         class_mode='categorical', subset='training')

test_data = data.flow_from_directory('/content/drive/My Drive/Data/Images', 
                                     target_size=(200, 200), batch_size = batch_size, shuffle=False,
                                     class_mode='categorical', subset='validation')

numBatchTest = ceil(len(test_data.filenames) / (1.0 * batch_size)) # 1.0 to avoid integer division
numBatchTrain = ceil(len(training_data.filenames) / (1.0 * batch_size)) # 1.0 to avoid integer division
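# Note (added for clarity): len(training_data) and len(test_data) give the
# same batch counts, since a Keras DirectoryIterator defines its length as
# ceil(samples / batch_size).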

numClasses = 3

Classifier=Sequential()
Classifier.add(Conv2D(32, kernel_size=(5, 5), input_shape=(200, 200, 3)))
Classifier.add(BatchNormalization())
Classifier.add(Activation('relu'))
Classifier.add(MaxPooling2D(pool_size=(2,2)))
Classifier.add(Dropout(0.2))
               
Classifier.add(Conv2D(64, kernel_size=(3, 3)))
Classifier.add(BatchNormalization())
Classifier.add(Activation('relu'))
Classifier.add(MaxPooling2D(pool_size=(2,2)))
Classifier.add(Dropout(0.2))

Classifier.add(Flatten())

Classifier.add(Dense(64, activation='relu'))
Classifier.add(Dense(32, activation='relu'))
Classifier.add(Dense(16, activation='relu'))
Classifier.add(Dense(8, activation='relu'))
Classifier.add(Dense(numClasses, activation='softmax'))
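
As a quick sanity check (an editorial addition, not from the original post), printing the layer summary confirms the shapes feeding the dense stack:

Classifier.summary()  # prints each layer's output shape and parameter count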

I train the network and use the test data as validation:

MyEpochs = 150
Classifier.compile(loss=keras.losses.categorical_crossentropy,
                   optimizer=keras.optimizers.SGD(learning_rate=0.01),
                   metrics=['accuracy'])

# Note: batch_size and shuffle have no effect here, since training_data is a
# generator and controls batching and shuffling itself.
Classifier.fit(training_data,
               batch_size=30,
               epochs=MyEpochs,
               validation_data=test_data,
               shuffle=1)

The training output shows both training accuracy and validation accuracy above 90%:

Epoch 135/150
4/4 [==============================] - 0s 123ms/step - loss: 0.0759 - accuracy: 0.9750 - val_loss: 0.1891 - val_accuracy: 0.9667
Epoch 136/150
4/4 [==============================] - 0s 124ms/step - loss: 0.1153 - accuracy: 0.9583 - val_loss: 0.2348 - val_accuracy: 0.9333
Epoch 137/150
4/4 [==============================] - 1s 134ms/step - loss: 0.1059 - accuracy: 0.9417 - val_loss: 0.1893 - val_accuracy: 0.9667
Epoch 138/150
4/4 [==============================] - 0s 122ms/step - loss: 0.0689 - accuracy: 0.9833 - val_loss: 0.1991 - val_accuracy: 0.9667
Epoch 139/150
4/4 [==============================] - 1s 131ms/step - loss: 0.0716 - accuracy: 0.9750 - val_loss: 0.2175 - val_accuracy: 0.9333
Epoch 140/150
4/4 [==============================] - 0s 125ms/step - loss: 0.1118 - accuracy: 0.9417 - val_loss: 0.2466 - val_accuracy: 0.9333
Epoch 141/150
4/4 [==============================] - 1s 126ms/step - loss: 0.1046 - accuracy: 0.9417 - val_loss: 0.2351 - val_accuracy: 0.9333
Epoch 142/150
4/4 [==============================] - 0s 120ms/step - loss: 0.0988 - accuracy: 0.9417 - val_loss: 0.1994 - val_accuracy: 0.9333
Epoch 143/150
4/4 [==============================] - 0s 124ms/step - loss: 0.0803 - accuracy: 0.9500 - val_loss: 0.1910 - val_accuracy: 0.9667
Epoch 144/150
4/4 [==============================] - 0s 124ms/step - loss: 0.0786 - accuracy: 0.9750 - val_loss: 0.1908 - val_accuracy: 0.9667
Epoch 145/150
4/4 [==============================] - 0s 124ms/step - loss: 0.0947 - accuracy: 0.9500 - val_loss: 0.4854 - val_accuracy: 0.8667
Epoch 146/150
4/4 [==============================] - 1s 128ms/step - loss: 0.2091 - accuracy: 0.9000 - val_loss: 0.1858 - val_accuracy: 0.9333
Epoch 147/150
4/4 [==============================] - 0s 124ms/step - loss: 0.0838 - accuracy: 0.9417 - val_loss: 0.1779 - val_accuracy: 0.9667
Epoch 148/150
4/4 [==============================] - 1s 128ms/step - loss: 0.0771 - accuracy: 0.9667 - val_loss: 0.1897 - val_accuracy: 0.9667
Epoch 149/150
4/4 [==============================] - 0s 120ms/step - loss: 0.0869 - accuracy: 0.9667 - val_loss: 0.1982 - val_accuracy: 0.9667
Epoch 150/150
4/4 [==============================] - 0s 119ms/step - loss: 0.0809 - accuracy: 0.9500 - val_loss: 0.2615 - val_accuracy: 0.9333

To test the model, I predict on the training data again:

import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report)

training_data.reset()
test_data.reset()

predicted_scores = Classifier.predict(training_data, verbose=1)
predicted_labels = predicted_scores.argmax(axis=1) 

train_labels = []
training_data.reset()

for i in range(0, numBatchTrain):
    train_labels = np.append(train_labels, training_data[i][1].argmax(axis=1))
print(train_labels)
print(predicted_labels)

acc_score = accuracy_score(train_labels, predicted_labels)
CFM = confusion_matrix(train_labels, predicted_labels)

print("\n", "Accuracy: " + format(acc_score, '.3f'))
print("\n", "CFM: \n", CFM)
print("\n", "Classification report: \n", classification_report(train_labels, predicted_labels))

I had some trouble getting the labels for training_data and test_data; when I just used training_data.labels, they did not seem to be in the same order as the images, which is why I loop over the batches to append the labels. With training_data.labels alone, the results were equally bad. The output of this code is:

4/4 [==============================] - 0s 71ms/step
[0. 2. 2. 0. 0. 1. 2. 2. 2. 1. 0. 0. 0. 1. 2. 0. 2. 0. 0. 1. 1. 1. 0. 0.
 0. 2. 2. 0. 1. 2. 0. 2. 1. 1. 2. 2. 0. 1. 0. 2. 0. 1. 1. 0. 2. 2. 0. 2.
 2. 2. 1. 2. 1. 0. 2. 2. 1. 2. 1. 0. 1. 2. 0. 1. 1. 1. 1. 2. 0. 0. 1. 1.
 1. 1. 1. 1. 2. 0. 0. 2. 2. 0. 1. 1. 1. 0. 2. 1. 2. 1. 2. 1. 1. 2. 0. 2.
 2. 0. 0. 2. 1. 0. 2. 0. 0. 1. 1. 2. 0. 0. 1. 1. 0. 0. 1. 2. 0. 2. 0. 2.]
[2 2 2 0 1 1 0 1 1 0 0 2 0 2 0 0 1 2 2 2 2 0 0 2 1 0 2 2 1 1 0 2 1 1 0 0 1
 0 1 0 2 2 2 1 1 1 0 2 0 1 0 0 2 0 0 0 2 0 1 2 2 1 0 2 2 0 1 0 2 2 0 2 0 0
 1 1 2 2 2 0 2 2 1 0 2 1 2 1 0 1 2 2 0 2 0 2 0 0 1 1 1 1 2 2 0 0 1 1 1 2 0
 0 1 0 1 0 2 0 0 0]

 Accuracy: 0.333

 CFM: 
 [[14 10 16]
 [13 14 13]
 [18 10 12]]

 Classification report: 
               precision    recall  f1-score   support

         0.0       0.31      0.35      0.33        40
         1.0       0.41      0.35      0.38        40
         2.0       0.29      0.30      0.30        40

    accuracy                           0.33       120
   macro avg       0.34      0.33      0.33       120
weighted avg       0.34      0.33      0.33       120
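
As a side note (an editorial addition, not from the original post): flow_from_directory assigns integer labels by sorting the subfolder names alphabetically, so if a label mix-up is suspected, the mapping can be checked directly:

print(training_data.class_indices)  # e.g. {'class 1': 0, 'class 2': 1, 'class 3': 2}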

During training, the accuracy on both the training and validation data is very high, but when testing with the same data that was used for training, the accuracy is only 33.3%.

I think the problem here is that the class labels get mixed up somewhere, but I am at a loss as to how to fix it. The dataset itself is simple; building the same CNN in Matlab, I get 100% accuracy on both training and test data, but I cannot get it to work in Python.

Does anyone have a suggestion for how to get this working in Python?

The results you are getting are inconsistent because your training image generator has shuffling enabled, which means the order of the images changes every time the generator is reset. That is why, when you predict with the generator in a single pass, then reset it and step through each batch individually, the orders will not match exactly. Shuffling is recommended while training on data from a generator, so that the network does not simply memorize the order in which the data arrives.

However, since you are now using the generator for evaluation, you can disable shuffling to keep the comparison consistent. So if you want reproducible results, set the shuffle flag to False. You can do this by creating another image generator and iterating over it:

training_data_noshuffle = data.flow_from_directory('/content/drive/My Drive/Data/Images', 
                                         target_size=(200, 200), shuffle=False, batch_size = batch_size, 
                                         class_mode='categorical', subset='training')
training_data_noshuffle.reset()

predicted_scores = Classifier.predict(training_data_noshuffle, verbose=1)
predicted_labels = predicted_scores.argmax(axis=1) 

train_labels = []
training_data_noshuffle.reset()

for i in range(0, numBatchTrain):
    train_labels = np.append(train_labels, training_data_noshuffle[i][1].argmax(axis=1))

After doing this, you should see that the labels you get from predict and from the loop are now consistent in their ordering.
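
As a further sketch (an editorial addition, not part of the original answer): with shuffle=False, the iterator's .classes attribute already lists the integer labels in file order, which is the same order predict consumes, so the batch loop can be skipped entirely. Assuming the sklearn imports shown earlier:

training_data_noshuffle.reset()
predicted_labels = Classifier.predict(training_data_noshuffle, verbose=1).argmax(axis=1)
true_labels = training_data_noshuffle.classes  # integer labels in file order

print("Accuracy:", accuracy_score(true_labels, predicted_labels))
print(confusion_matrix(true_labels, predicted_labels))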
