
Keras CNN predicts same class even after augmentation

I am trying to create a CNN that classifies 3D brain images. However, when I run it, the CNN always predicts the same class, and I am not sure what else I can try to prevent this. I have searched this problem and tried many seemingly plausible solutions, but none of them have worked.

So far I have tried:

  • Lowering the learning rate
  • Normalizing the data to [0, 1]
  • Changing the optimizer
  • Changing the activation of the last layer (softmax, sigmoid); I only use categorical_crossentropy
  • Adding/removing dropout layers
  • Switching to a simpler CNN model (did not help)
  • Balancing the dataset
  • Adding augmented data with a custom 3D imagedatagenerator()
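On the last-layer point: `categorical_crossentropy` expects the output layer to produce a probability distribution over the classes, which `softmax` gives and independent `sigmoid` units generally do not. A quick standalone numpy check (illustrative, not from the original post):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(logits)             # valid input to categorical_crossentropy
sig = 1 / (1 + np.exp(-logits))     # per-class sigmoids

print(probs.sum())  # sums to 1.0
print(sig.sum())    # does not sum to 1 in general
```

So with a multi-class softmax head, keep `categorical_crossentropy`; a sigmoid last layer would instead pair with `binary_crossentropy` in a multi-label setting.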

Note that I am using 20 3D brain images in total (5 per class), and I cannot increase the sample size because there simply are not enough images. I recently tried data augmentation, but it does not seem to help.
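One thing worth double-checking about the "[0, 1]" normalization: `rescale=1./255` assumes 8-bit intensities, whereas NIfTI volumes loaded with `get_fdata()` are floating point with arbitrary intensity ranges. A per-volume min-max scaling is one alternative; this is a hedged sketch (the `minmax_scale` helper is hypothetical, not part of nibabel or the original code):

```python
import numpy as np

def minmax_scale(volume, eps=1e-8):
    """Scale a 3D volume to [0, 1] using its own min/max (hypothetical helper)."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + eps)

# toy stand-in for MRI intensities, which are not bounded to [0, 255]
vol = np.random.uniform(-300.0, 1200.0, size=(8, 8, 8))
scaled = minmax_scale(vol)
print(scaled.min(), scaled.max())
```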

Any help would be appreciated!

import os
import csv
import tensorflow as tf  # 2.0
import nibabel as nib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from keras.models import Model
from keras.layers import Conv3D, MaxPooling3D, Dense, Dropout, Activation, Flatten 
from keras.layers import Input, concatenate
from keras import optimizers
from keras.utils import to_categorical
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
from augmentedvolumetricimagegenerator.generator import customImageDataGenerator
from keras.callbacks import EarlyStopping


# Administrative items
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Where the file is located
path = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline2'
folder = os.listdir(path)

target_size = (96, 96, 96)


# creating x - converting images to array
def read_image(path, folder):
    mri = []
    for i in range(len(folder)):
        files = os.listdir(path + '\\' + folder[i])
        for j in range(len(files)):
            image = np.array(nib.load(path + '\\' + folder[i] + '\\' + files[j]).get_fdata())
            image = np.resize(image, target_size)
            image = np.expand_dims(image, axis=3)
            mri.append(image)
    return mri

# creating y - one hot encoder
def create_y():
    excel_file = r'C:\Users\jesse\OneDrive\Desktop\Research\PD\decline_label.xlsx'
    excel_read = pd.read_excel(excel_file)
    excel_array = np.array(excel_read['Label'])
    label = LabelEncoder().fit_transform(excel_array)
    label = label.reshape(len(label), 1)
    onehot = OneHotEncoder(sparse=False).fit_transform(label)
    return onehot

# Splitting image train/test
x = np.asarray(read_image(path, folder))
y = np.asarray(create_y())
test_size = .2
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)


batch_size = 4
num_classes = 4

inputs = Input((96, 96, 96, 1))
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(inputs)
conv1 = Conv3D(32, [3, 3, 3], padding='same', activation='relu')(conv1)
pool1 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv1)
drop1 = Dropout(0.5)(pool1)

conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(drop1)
conv2 = Conv3D(64, [3, 3, 3], padding='same', activation='relu')(conv2)
pool2 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv2)
drop2 = Dropout(0.5)(pool2)

conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(drop2)
conv3 = Conv3D(128, [3, 3, 3], padding='same', activation='relu')(conv3)
pool3 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv3)
drop3 = Dropout(0.5)(pool3)

conv4 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(drop3)
conv4 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(conv4)
pool4 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv4)
drop4 = Dropout(0.5)(pool4)

conv5 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(drop4)
conv5 = Conv3D(256, [3, 3, 3], padding='same', activation='relu')(conv5)
pool5 = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(conv5)
drop5 = Dropout(0.5)(pool5)

flat1 = Flatten()(drop5)
dense1 = Dense(128, activation='relu')(flat1)
dense2 = Dense(64, activation='relu')(dense1)
dense3 = Dense(32, activation='relu')(dense2)
drop6 = Dropout(0.5)(dense3)
dense4 = Dense(num_classes, activation='softmax')(drop6)

model = Model(inputs=[inputs], outputs=[dense4])

opt = optimizers.Adam(lr=1e-8, beta_1=1e-3, beta_2=1e-4, decay=2e-5)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])


train_datagen = customImageDataGenerator(rescale=1./255,
                                         #width_shift_range=0.2,
                                         #height_shift_range=0.2,
                                         #rotation_range=15,
                                         #shear_range=0.2,
                                         #zoom_range=0.2,
                                         #brightness_range=[0.2, 1.0],
                                         data_format='channels_last',
                                         horizontal_flip=True)

test_datagen = customImageDataGenerator(rescale=1./255)


training_set = train_datagen.flow(x_train, y_train, batch_size=batch_size)

testing_set = test_datagen.flow(x_test, y_test, batch_size=batch_size)


callbacks = EarlyStopping(monitor='val_loss')

model.fit_generator(training_set,
                    steps_per_epoch = 20,
                    epochs = 30,
                    validation_steps = 5,
                    callbacks = [callbacks],
                    validation_data = testing_set)

#score = model.evaluate(x_test, y_test, batch_size=batch_size)
#print(score)


y_pred = model.predict(x_test, batch_size=batch_size)
y_test = np.argmax(y_test, axis=1)
y_pred = np.argmax(y_pred, axis=1)
confusion = confusion_matrix(y_test, y_pred)
map = sns.heatmap(confusion, annot=True)
print(map)

不確定到底發生了什么。 但我有幾點意見建議。

First, look at the learning curves to see whether the model is actually fitting anything.
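In Keras, `fit`/`fit_generator` returns a `History` object whose `history` dict holds per-epoch metrics. A minimal sketch of plotting and reading the curves (the `history` dict below is a toy stand-in, not real results from the post):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# toy stand-in for model.fit_generator(...).history
history = {
    'loss':     [1.39, 1.20, 0.95, 0.70, 0.45],
    'val_loss': [1.39, 1.35, 1.36, 1.40, 1.45],
}

plt.plot(history['loss'], label='train')
plt.plot(history['val_loss'], label='val')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.savefig('curves.png')

# training loss falling while validation loss rises suggests
# memorization rather than learning
gap = history['val_loss'][-1] - history['loss'][-1]
print(gap)
```

If both losses stay flat near the initial value, the model is not fitting at all; if they diverge like the toy data above, it is overfitting.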

Second, you use 0.2 of the dataset as a test set, for a dataset of 20 images spread over 5 classes. If all the images at the end happen to have the same label, you will be testing on only that label. So that could be a problem, unless the images are not sorted.
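A stratified split avoids that failure mode. A minimal sketch with toy stand-in data (the shapes here are illustrative, not the original volumes):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-in: 20 samples, 4 classes, 5 each (mirrors the post's setup)
X = np.arange(20).reshape(20, 1)
y = np.repeat(np.arange(4), 5)

# stratify=y keeps the class ratio identical in train and test,
# so the test split cannot end up holding a single label
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

print(np.bincount(y_te))  # one sample of each class in the 4-image test set
```

Since the post one-hot encodes `y` before splitting, the integer labels (e.g. `y.argmax(axis=1)`) can be passed as the `stratify` argument instead.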

Third, for so little data it looks like you may have far too many dense parameters. The usual approach is to start small and increase the number of parameters, watching the learning curves for hints along the way.
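Some back-of-the-envelope arithmetic makes this concrete: in the posted architecture, five `MaxPooling3D(2)` layers with `'same'` padding shrink the 96³ input to 3³, so the `Flatten` feeds 3·3·3·256 = 6912 features into `Dense(128)`, which alone holds almost 900k weights, trained on only 16 volumes. Pure arithmetic, no keras needed:

```python
# parameter count of the first Dense layer in the posted architecture
side = 96
for _ in range(5):        # five MaxPooling3D(pool_size=2, padding='same') layers
    side = -(-side // 2)  # ceil division: 96 -> 48 -> 24 -> 12 -> 6 -> 3
flat = side ** 3 * 256    # Flatten over the final 256-filter block
dense1_params = flat * 128 + 128  # weights + biases of Dense(128)
print(flat, dense1_params)
```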

Finally, machine learning is unfortunately not magic: you cannot expect good results with so little data.

Alexis


Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to repost, please credit this site or the original source. For any questions, contact: yoyou2525@163.com.
