
Keras CNN model with a wrong ROC curve and low accuracy

I am learning to write a CNN with Keras on Kaggle, using one of the datasets found there.

The link to my notebook is:

https://www.kaggle.com/vj6978/brain-tumor-vimal?scriptVersionId=16814133

The code, the dataset, and the ROC curve are all available at the link. The ROC curve itself looks as though the model is simply guessing rather than making learned predictions.

The test accuracy also seems to peak at only around 60% to 70%, which is quite low. Any help would be appreciated.

Thanks, Vimal James

I believe your last activation should be sigmoid instead of softmax.

UPDATE:

I just forked your kernel on Kaggle and modified it as follows to get better results:

from keras.models import Sequential
from keras.layers import Conv2D, Activation, AveragePooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(128, (3,3), input_shape = data_set.shape[1:]))
model.add(Activation("relu"))
model.add(AveragePooling2D(pool_size = (2,2)))

model.add(Conv2D(128, (3,3)))
model.add(Activation("relu"))
model.add(AveragePooling2D(pool_size = (2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(1))
model.add(Activation("sigmoid")) # Last activation should be sigmoid for binary classification

model.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ['accuracy'])
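For completeness, here is a minimal sketch of how this model might be trained and evaluated to produce a log like the one below; the arrays data_set and labels and the train/test split are assumptions standing in for the notebook's own preprocessing:

from sklearn.model_selection import train_test_split

# Hypothetical split; `data_set` and `labels` stand in for the notebook's arrays.
X_train, X_test, y_train, y_test = train_test_split(
    data_set, labels, test_size=0.1, random_state=42)

# Hold out a small validation set during training, as in the log below.
history = model.fit(X_train, y_train,
                    batch_size=32,
                    epochs=15,
                    validation_split=0.1)

test_loss, test_acc = model.evaluate(X_test, y_test)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)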

I got the following results:

Train on 204 samples, validate on 23 samples
Epoch 1/15
204/204 [==============================] - 2s 11ms/step - loss: 2.8873 - acc: 0.6373 - val_loss: 0.8000 - val_acc: 0.8261
Epoch 2/15
204/204 [==============================] - 1s 3ms/step - loss: 0.7292 - acc: 0.7206 - val_loss: 0.6363 - val_acc: 0.7391
Epoch 3/15
204/204 [==============================] - 1s 3ms/step - loss: 0.4731 - acc: 0.8088 - val_loss: 0.5417 - val_acc: 0.8261
Epoch 4/15
204/204 [==============================] - 1s 3ms/step - loss: 0.3605 - acc: 0.8775 - val_loss: 0.6820 - val_acc: 0.8696
Epoch 5/15
204/204 [==============================] - 1s 3ms/step - loss: 0.2986 - acc: 0.8529 - val_loss: 0.8356 - val_acc: 0.8696
Epoch 6/15
204/204 [==============================] - 1s 3ms/step - loss: 0.2151 - acc: 0.9020 - val_loss: 0.7592 - val_acc: 0.8696
Epoch 7/15
204/204 [==============================] - 1s 3ms/step - loss: 0.1305 - acc: 0.9657 - val_loss: 1.2486 - val_acc: 0.8696
Epoch 8/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0565 - acc: 0.9853 - val_loss: 1.2668 - val_acc: 0.8696
Epoch 9/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0426 - acc: 0.9853 - val_loss: 1.4674 - val_acc: 0.8696
Epoch 10/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0141 - acc: 1.0000 - val_loss: 1.7379 - val_acc: 0.8696
Epoch 11/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0063 - acc: 1.0000 - val_loss: 1.7232 - val_acc: 0.8696
Epoch 12/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0023 - acc: 1.0000 - val_loss: 1.8291 - val_acc: 0.8696
Epoch 13/15
204/204 [==============================] - 1s 3ms/step - loss: 0.0014 - acc: 1.0000 - val_loss: 1.9164 - val_acc: 0.8696
Epoch 14/15
204/204 [==============================] - 1s 3ms/step - loss: 8.6263e-04 - acc: 1.0000 - val_loss: 1.8946 - val_acc: 0.8696
Epoch 15/15
204/204 [==============================] - 1s 3ms/step - loss: 6.8785e-04 - acc: 1.0000 - val_loss: 1.9596 - val_acc: 0.8696
Test loss: 3.079359292984009
Test accuracy: 0.807692289352417

You are using a softmax activation with a single neuron, which makes no sense: because of the normalization in softmax, a single output unit will always produce a constant output of 1.0. For binary classification you must use a sigmoid activation with a single output neuron.
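To illustrate why a single-neuron softmax cannot learn anything useful, here is a small NumPy sketch (not from the original kernel): softmax normalizes across the output units, so with only one unit every logit is mapped to exactly 1.0, while sigmoid maps the same logit to a usable probability.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # numerically stable softmax
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logit = np.array([-2.3])        # a single output neuron, arbitrary value
print(softmax(logit))           # [1.] -- always 1.0 when there is only one unit
print(sigmoid(logit))           # [0.0911...] -- an actual class probability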
