
Keras CNN accuracy and loss are constant

I am building a Keras CNN model using transfer learning with ResNet50. For some reason my accuracy and loss are exactly the same every epoch. Strangely, I see the same behavior with similar code that uses VGG19 instead, which leads me to believe the problem is not in the actual model code but somewhere in the preprocessing. I have tried adjusting the learning rate, changing the optimizer, changing the image resolution, freezing layers, etc., and the scores do not change. I went into the image directories to check whether my two different classes are mixed together, and they are not. What is the problem? I just want to say thanks in advance.

PS: I am training on about 2,000 images and have two classes.

import numpy as np
import pandas as pd

import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.models import Sequential, Model, load_model
from keras.layers import Conv2D, GlobalAveragePooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import applications
from keras import optimizers

img_height, img_width, img_channel = 400, 400, 3 # change channel to 1 instead of 3 since it is black and white

base_model = applications.ResNet50(weights='imagenet', include_top=False, input_shape=(img_height, img_width, img_channel))

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(512, activation='relu',name='fc-1')(x)
#x = Dropout(0.5)(x)
x = Dense(256, activation='relu',name='fc-2')(x)
#x = Dropout(0.5)(x)
# and a logistic layer -- let's say we have 2 classes
predictions = Dense(1, activation='softmax', name='output_layer')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])

model.summary()

from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint

batch_size = 6

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(
        rescale=1./255,
        rotation_range=20,
        width_shift_range=0.1,
        height_shift_range=0.1,
        shear_range=0.1,
        zoom_range=0.1,
        horizontal_flip=True,
        vertical_flip=True)


test_datagen = ImageDataGenerator(rescale=1./255)

# possibly resize the image
train_generator = train_datagen.flow_from_directory(
        "../Train/",
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=True
)

validation_generator = test_datagen.flow_from_directory(
        "../Test/",
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=True)

epochs = 10

history = model.fit_generator(
        train_generator,
        steps_per_epoch=2046 // batch_size,
        epochs=epochs,
        validation_data=validation_generator,
        validation_steps=512 // batch_size,
        callbacks=[ModelCheckpoint('snapshots/ResNet50-transferlearning.model', monitor='val_acc', save_best_only=True)])

Here is the output keras gives:

Epoch 1/10
341/341 [==============================] - 59s 172ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 2/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 3/10
341/341 [==============================] - 56s 165ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 4/10
341/341 [==============================] - 57s 168ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588
Epoch 5/10
341/341 [==============================] - 57s 167ms/step - loss: 7.0517 - acc: 0.5577 - val_loss: 7.0334 - val_acc: 0.5588

The last layer should use a 'sigmoid' activation rather than softmax, since this is binary classification. A softmax over a single unit always outputs 1.0, so the prediction never changes, which is why the accuracy and loss are identical every epoch.

predictions = Dense(1, activation='softmax', name='output_layer')(x)
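
A minimal sketch of the sigmoid fix, keeping everything else from the question unchanged:

# a single sigmoid unit outputs a probability for the positive class,
# which matches binary_crossentropy and class_mode='binary'
predictions = Dense(1, activation='sigmoid', name='output_layer')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])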

Alternatively, the number of units in the Dense layer indicates how many classes you are classifying, so for binary classification with softmax you need 2 where you have written 1.

So change the line to:

 predictions = Dense(2, activation='softmax', name='output_layer')(x)
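
Note that with the two-unit softmax variant, binary_crossentropy with integer 0/1 labels no longer matches the output shape. A sketch of the matching changes, assuming the rest of the pipeline stays as in the question:

# two softmax units need one-hot targets, so switch the loss ...
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.SGD(lr=0.1),
              metrics=['accuracy'])

# ... and have the generators emit one-hot labels instead of 0/1 scalars
train_generator = train_datagen.flow_from_directory(
        "../Train/",
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=True)

The validation generator would need the same class_mode change.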

Just as a note: always try to keep a variable for the number of classes, e.g.

predictions = Dense(num_classes, activation='softmax', name='output_layer')(x)

and then define num_classes at the beginning of your code, for flexibility and readability.
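
For example, a hypothetical layout:

num_classes = 2  # defined once near the top of the script

# ... model definition as above ...
predictions = Dense(num_classes, activation='softmax', name='output_layer')(x)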

You can see the documentation for the Dense layer here: https://faroit.github.io/keras-docs/2.0.0/layers/core/
