
CNN model for binary classification always returning 1

I created a CNN model for binary classification. I used a balanced dataset of 300 images. I know it's a small dataset, but I used data augmentation. After fitting the model I got 86% val_accuracy on the validation set, but when I printed the predicted probability for each picture, most images from the first class got a probability of 1 (and all of them were above 0.5), and every image from the second class got a probability of 1 as well.

This is my model:

model = keras.Sequential([
    layers.InputLayer(input_shape=[128, 128, 3]),

    preprocessing.Rescaling(scale=1/255),
    preprocessing.RandomContrast(factor=0.10),
    preprocessing.RandomFlip(mode='horizontal'),
    preprocessing.RandomRotation(factor=0.10),

    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    layers.BatchNormalization(renorm=True),
    layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
    layers.Conv2D(filters=256, kernel_size=3, activation='relu', padding='same'),
    layers.MaxPool2D(),

    layers.BatchNormalization(renorm=True),
    layers.Flatten(),
    layers.Dense(8, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

[plot]

[predicted probabilities for the first class]

Edit:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='binary_crossentropy',
    metrics=['binary_accuracy'],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
)
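
For reference, a minimal sketch of how the per-image probabilities might be printed (this step is not shown in the original post; it assumes ds_valid is the batched validation dataset used above):

# collect predicted probabilities batch by batch from the validation set
for images, labels in ds_valid:
    probs = model.predict(images, verbose=0).ravel()  # sigmoid outputs in [0, 1]
    for p, y in zip(probs, labels.numpy()):
        print(f"true label: {int(y)}  predicted probability of class 1: {p:.4f}")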

Thank you.

A pre-trained model like VGG16 handles this task well, so there is no need to make the model very complicated. Try the following code:

base_model = keras.applications.VGG16(
    weights='imagenet',  
    input_shape=(128, 128, 3),
    include_top=False)
base_model.trainable = True 
inputs = keras.Input(shape=(128, 128, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

Set base_model.trainable to False if you want the model to train fast, and to True for more accurate results. Notice that I used a GlobalAveragePooling2D layer instead of Flatten, to reduce the number of parameters and to collapse the spatial feature maps.
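
A sketch of how this model could be compiled and trained, assuming the same ds_train and ds_valid as in the question. Since the final Dense(1) layer above has no activation, it outputs a logit, so the loss should be built with from_logits=True (or a sigmoid activation added to that layer):

model.compile(
    optimizer=keras.optimizers.Adam(),
    # the head outputs a raw logit, so let the loss apply the sigmoid internally
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    # threshold 0 on logits corresponds to 0.5 on probabilities
    metrics=[keras.metrics.BinaryAccuracy(threshold=0.0)],
)

history = model.fit(
    ds_train,
    validation_data=ds_valid,
    epochs=50,
)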
