
How to fix a constant validation accuracy in machine learning?

I am trying to do image classification on DICOM images with balanced classes, using the pre-trained InceptionV3 model.

import os
import cv2
import numpy as np
import keras
import pydicom
from PIL.Image import fromarray
from sklearn.model_selection import train_test_split

def convertDCM(PathDCM):
    # read every DICOM file under PathDCM and return its pixels as 299x299 arrays
    data = []
    for dirName, subdir, files in os.walk(PathDCM):
        for filename in sorted(files):
            ds = pydicom.dcmread(PathDCM + '/' + filename)
            im = fromarray(ds.pixel_array)
            im = keras.preprocessing.image.img_to_array(im)
            im = cv2.resize(im, (299, 299))
            data.append(im)
    return data

PathDCM = '/home/Desktop/FULL_BALANCED_COLOURED/'

data = convertDCM(PathDCM)

#scale the raw pixel intensities to the range [0,1]
data = np.array(data, dtype="float")/255.0
labels = np.array(labels,dtype ="int")


#splitting data into training and testing
#test_size is percentage to split into test/train data
(trainX, testX, trainY, testY) = train_test_split(
                            data,labels, 
                            test_size=0.2, 
                            random_state=42) 

img_width, img_height = 299, 299 #InceptionV3 size

train_samples =  300
validation_samples = 50
epochs = 25
batch_size = 15

base_model = keras.applications.InceptionV3(
        weights ='imagenet',
        include_top=False, 
        input_shape = (img_width,img_height,3))

model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:], data_format=None))
model_top.add(keras.layers.Dense(300,activation='relu'))
model_top.add(keras.layers.Dropout(0.5))
model_top.add(keras.layers.Dense(1, activation = 'sigmoid'))
model = keras.models.Model(inputs = base_model.input, outputs = model_top(base_model.output))

#Compiling model 
model.compile(optimizer=keras.optimizers.Adam(lr=0.0001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

#Image Processing and Augmentation 
train_datagen = keras.preprocessing.image.ImageDataGenerator(
          rescale = 1./255,  
          zoom_range = 0.1,
          width_shift_range = 0.2, 
          height_shift_range = 0.2,
          horizontal_flip = True,
          fill_mode ='nearest') 

val_datagen = keras.preprocessing.image.ImageDataGenerator()


train_generator = train_datagen.flow(
        trainX, 
        trainY,
        batch_size=batch_size,
        shuffle=True)


validation_generator = train_datagen.flow(
                testX,
                testY,
                batch_size=batch_size,
                shuffle=True)

When I train the model, I keep getting a constant validation accuracy of 0.3889 while the validation loss fluctuates.

#Training the model
history = model.fit_generator(
    train_generator, 
    steps_per_epoch = train_samples//batch_size,
    epochs = epochs, 
    validation_data = validation_generator, 
    validation_steps = validation_samples//batch_size)

Epoch 1/25
20/20 [==============================] - 195s 49s/step - loss: 0.7677 - acc: 0.4020 - val_loss: 0.7784 - val_acc: 0.3889
Epoch 2/25
20/20 [==============================] - 187s 47s/step - loss: 0.7016 - acc: 0.4848 - val_loss: 0.7531 - val_acc: 0.3889
Epoch 3/25
20/20 [==============================] - 191s 48s/step - loss: 0.6566 - acc: 0.6304 - val_loss: 0.7492 - val_acc: 0.3889
Epoch 4/25
20/20 [==============================] - 175s 44s/step - loss: 0.6533 - acc: 0.5529 - val_loss: 0.7575 - val_acc: 0.3889


predictions= model.predict(testX)
print(predictions)

The model's predictions also only return an array with a single value per image:

[[0.457804  ]
 [0.45051473]
 [0.48343503]
 [0.49180537]...

Why is the model predicting only one of the two classes? Is this related to the constant validation accuracy, or possibly to overfitting?

If you have two classes, every image belongs to one or the other, so the probability of one class is enough to determine everything, because the probabilities for each image must sum to 1. So if one class has probability p, the other class has probability 1 - p.
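For instance, the single sigmoid output already gives you both class probabilities, and a hard label can be recovered by thresholding it. This is a minimal sketch reusing model and testX from the question's code; the 0.5 cut-off is just the conventional choice, not something specified in the question:

# predictions has shape (n_samples, 1): one probability per image, P(class 1)
predictions = model.predict(testX)

prob_class1 = predictions[:, 0]    # probability of class 1
prob_class0 = 1.0 - prob_class1    # probability of class 0, since the two sum to 1

# hard class labels, assuming the usual 0.5 decision threshold
predicted_labels = (prob_class1 > 0.5).astype(int)
print(predicted_labels[:10])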

If you want to classify images that do not belong to either of those two classes, then you should create a third class.

Also, this line:

model_top.add(keras.layers.Dense(1, activation = 'sigmoid'))

means that the output is a vector of shape (nb_sample, 1), and it has the same shape as your training labels.
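As a quick sanity check, here is a minimal sketch, assuming trainY and testY hold integer 0/1 labels as produced by the question's code: compare the label shapes with the model's output shape before training, and reshape the labels into a column vector if they disagree.

# the final sigmoid layer produces one value per sample
print(model.output_shape)   # (None, 1)
print(trainY.shape)         # e.g. (n_samples,) or (n_samples, 1)

# if needed, turn the labels into a column vector so they match the output shape
trainY = trainY.reshape(-1, 1)
testY = testY.reshape(-1, 1)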

