
CNN model for binary classification always returning 1

I created a CNN model for binary classification, using a balanced dataset of 300 images. I know it's a small dataset, but I used data augmentation. After fitting the model I got 86% val_accuracy on the validation set, but when I printed the predicted probability for each picture, most pictures from the first class got probability 1 (and all of them got probabilities > 0.5), and all images from the second class got probability 1 as well.

This is my model:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

model = keras.Sequential([
    layers.InputLayer(input_shape=[128, 128, 3]),

    # Preprocessing / augmentation layers
    preprocessing.Rescaling(scale=1/255),
    preprocessing.RandomContrast(factor=0.10),
    preprocessing.RandomFlip(mode='horizontal'),
    preprocessing.RandomRotation(factor=0.10),

    # Block 1
    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    # Block 2
    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    # Block 3
    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
    layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    # Classifier head
    layers.BatchNormalization(renorm=True),
    layers.Flatten(),
    layers.Dense(8, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

[Plot: predicted probabilities for the first class]

Edit:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
)
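For reference, this is roughly how the per-image probabilities mentioned above are obtained; a minimal sketch, assuming ds_valid is a tf.data.Dataset that yields (image_batch, label_batch) pairs:

# Print the sigmoid output for every image in the validation set.
for images, labels in ds_valid:
    probs = model.predict(images).ravel()   # sigmoid outputs in [0, 1]
    for p, y in zip(probs, labels.numpy()):
        print(f"true label: {int(y)}  predicted P(class 1): {p:.3f}")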

Thank you.

A pre-trained model like VGG16 already does most of the work, so there is no need to over-complicate the model. Try the following code:

from tensorflow import keras

# Load VGG16 pre-trained on ImageNet, without its fully connected top layers.
base_model = keras.applications.VGG16(
    weights='imagenet',
    input_shape=(128, 128, 3),
    include_top=False)
base_model.trainable = True  # set to False to freeze the convolutional base

inputs = keras.Input(shape=(128, 128, 3))
x = base_model(inputs, training=False)          # keep the base in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)    # collapse the spatial dimensions
outputs = keras.layers.Dense(1, activation='sigmoid')(x)  # sigmoid to match binary_crossentropy
model = keras.Model(inputs, outputs)

Set base_model.trainable to False if you want the model to train fast, and to True for more accurate results (fine-tuning). Notice that I used a GlobalAveragePooling2D layer instead of Flatten, to reduce the number of parameters by averaging each feature map down to a single value.
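As a usage example, here is one way this model could be compiled and trained; a minimal sketch, assuming ds_train and ds_valid are the same datasets used in the question (the learning rate is my own choice, not from the original answer):

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # small LR since the base is trainable
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=20,
)

# With base_model.trainable = False you could instead train only the new head first,
# then unfreeze the base and fine-tune with an even smaller learning rate.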

