
Keras CNN always predicts same class

Edit: it seems I was not even running the model for enough epochs, so I will try that and come back with my results.

I am trying to create a CNN that classifies 3D brain images. However, when I run the program, the CNN always predicts the same class, and I am not sure what else I can do to prevent this. I have searched this problem and tried many seemingly plausible solutions, but none of them have worked.

Things I have tried so far:

For context, I am classifying between two groups. The data consists of 200 3D brain images in total (about 100 per class). To increase the training set size, I use a custom data augmentation I found on GitHub.

Looking at the learning curves, the accuracy and loss are completely random: some runs decrease, some increase, and some just fluctuate within a range.

Any help would be appreciated!

import os
import csv
import tensorflow as tf  # 2.0
import nibabel as nib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.models import Model
from keras.layers import Conv3D, MaxPooling3D, Dense, Dropout, Activation, Flatten 
from keras.layers import Input, concatenate
from keras import optimizers
from keras.utils import to_categorical
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from augmentedvolumetricimagegenerator.generator import customImageDataGenerator
from keras.callbacks import EarlyStopping


# Administrative items
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Where the file is located
path = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline'
folder = os.listdir(path)

target_size = (96, 96, 96)


# creating x - converting images to array
def read_image(path, folder):
    mri = []
    for i in range(len(folder)):
        files = os.listdir(path + '\\' + folder[i])
        for j in range(len(files)):
            image = np.array(nib.load(path + '\\' + folder[i] + '\\' + files[j]).get_fdata())
            image = np.resize(image, target_size)
            image = np.expand_dims(image, axis=3)
            image /= 255.
            mri.append(image)
    return mri

# creating y - one hot encoder
def create_y():
    excel_file = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline_label.xlsx'
    excel_read = pd.read_excel(excel_file)
    excel_array = np.array(excel_read['Label'])
    label = LabelEncoder().fit_transform(excel_array)
    label = label.reshape(len(label), 1)
    onehot = OneHotEncoder(sparse=False).fit_transform(label)
    return onehot

# Splitting image train/test
x = np.asarray(read_image(path, folder))
y = np.asarray(create_y())
x_split, x_test, y_split, y_test = train_test_split(x, y, test_size=.2, stratify=y)
x_train, x_val, y_train, y_val = train_test_split(x_split, y_split, test_size=.25, stratify=y_split)
print(x_train.shape, x_val.shape, x_test.shape, y_train.shape, y_val.shape, y_test.shape)


batch_size = 10
num_classes = len(folder)

inputs = Input((96, 96, 96, 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)

conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)

conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)

flat1 = Flatten()(drop3)
dense1 = Dense(128, activation='relu')(flat1)
drop5 = Dropout(0.5)(dense1)
dense2 = Dense(num_classes, activation='sigmoid')(drop5)

model = Model(inputs=[inputs], outputs=[dense2])

opt = optimizers.Adagrad(lr=1e-5)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])


train_datagen = customImageDataGenerator(
                                         horizontal_flip=True
                                        )

val_datagen = customImageDataGenerator()

training_set = train_datagen.flow(x_train, y_train, batch_size=batch_size)

validation_set = val_datagen.flow(x_val, y_val, batch_size=batch_size)


callbacks = EarlyStopping(monitor='val_loss', patience=3)

history = model.fit_generator(training_set,
                    steps_per_epoch = 10,
                    epochs = 20,
                    validation_steps = 5,
                    callbacks = [callbacks],
                    validation_data = validation_set)

score = model.evaluate(x_test, y_test, batch_size=batch_size)
print(score)


y_pred = model.predict(x_test, batch_size=batch_size)
y_test = np.argmax(y_test, axis=1)
y_pred = np.argmax(y_pred, axis=1)
confusion = confusion_matrix(y_test, y_pred)
map = sns.heatmap(confusion, annot=True)
print(map)


acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(1)
plt.plot(acc)
plt.plot(val_acc)
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Accuracy')

plt.figure(2)
plt.plot(loss)
plt.plot(val_loss)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.title('Loss')

You can find the output here: https://i.stack.imgur.com/FF13P.jpg

It is hard to help without the dataset itself. That said, there are a couple of things I would test:

  • I have found the ReLU activation to be a poor fit for Dense layers, and it can lead to single-class predictions. Try replacing the relu in the Dense(128) layer with something else (sigmoid, tanh), as in the sketch after this list.
  • Dropout is usually not well suited to images; you may want to look at DropBlock.
  • The initial learning rate is very low; I would start with something between 1e-3 and 1e-4.
  • A silly thing that happens to me all the time: did you check that your images and labels are paired correctly, so that every image has the right label?
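A minimal sketch of the first and third points, reusing the layer and variable names from the question's code; only the Dense activation and the learning rate change, and tanh is just one of the suggested alternatives:

# Sketch: swap the activation on the Dense(128) layer and raise the initial
# learning rate, keeping the rest of the question's model unchanged.
dense1 = Dense(128, activation='tanh')(flat1)    # was activation='relu'
drop5 = Dropout(0.5)(dense1)
dense2 = Dense(num_classes, activation='sigmoid')(drop5)

model = Model(inputs=[inputs], outputs=[dense2])

opt = optimizers.Adagrad(lr=1e-3)                # was lr=1e-5
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])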

Again, I am not sure this will fix everything, but I hope it helps!

It could be any number of things, but the odd behaviour may be caused by the data itself.

Just from looking at the code, it seems you have not normalized the test data before calling model.predict and model.evaluate in the same way you did for the training and validation data.

I once had a similar problem and this turned out to be the cause. As a quick test, you can simply rescale the test data and see whether that helps, as sketched below.
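A minimal sketch of that quick test, assuming x_test still holds raw intensities on a 0-255 scale (in the posted code the division by 255 already happens inside read_image before the split, so this only applies if the test images are loaded separately):

# Rescale the test data the same way as the training/validation data
# before evaluating or predicting.
x_test_scaled = x_test.astype('float32') / 255.

score = model.evaluate(x_test_scaled, y_test, batch_size=batch_size)
y_pred = model.predict(x_test_scaled, batch_size=batch_size)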
