
Transfer Learning model gives 0 accuracy regardless of architecture

I am trying to develop a model using Keras and transfer learning. The dataset I am using can be found here: https://github.com/faezetta/VMMRdb .

I have taken the 10 car-brand classes with the most samples and trained two models built upon the VGG16 architecture using transfer learning, as can be seen in the code below.

# Imports for the code below (utils is my own helper module):
import tensorflow as tf
from tensorflow.keras import layers, losses, metrics, optimizers
from tensorflow.keras import applications as apps
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tqdm.keras import TqdmCallback
import utils

samples_counts = utils.read_dictionary(utils.TOP10_BRANDS_COUNTS_NAME)

train_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='training',
    interpolation='bilinear'
)

validation_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='validation',
    interpolation='bilinear'
)

test_dataset = image_dataset_from_directory(
    directory=utils.TEST_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    interpolation='bilinear'
)

image_shape = (utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3)
base_model = apps.VGG16(include_top=False, weights='imagenet', input_shape=image_shape)
base_model.trainable = False

preprocess_input = apps.vgg16.preprocess_input
flatten_layer = layers.Flatten(name='flatten')
specialisation_layer = layers.Dense(1024, activation='relu', name='specialisation_layer')
avg_pooling_layer = layers.GlobalAveragePooling2D(name='pooling_layer')
dropout_layer = layers.Dropout(0.2, name='dropout_layer')
classification_layer = layers.Dense(10, activation='softmax', name='classification_layer')

inputs = tf.keras.Input(shape=(utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3))
x = preprocess_input(inputs)
x = base_model(x, training=False)

# First model
# x = flatten_layer(x)
# x = specialisation_layer(x)

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

model.summary()

steps_per_epoch = len(train_dataset)
validation_steps = len(validation_dataset)
base_learning_rate = 0.0001
optimizer = optimizers.Adam(learning_rate=base_learning_rate)
loss_function = losses.CategoricalCrossentropy()
train_metrics = [metrics.Accuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()]

model.compile(optimizer=optimizer,
              loss=loss_function,
              metrics=train_metrics)

initial_results = model.evaluate(validation_dataset,
                                 steps=validation_steps,
                                 return_dict=True)

training_history = model.fit(train_dataset, epochs=10, verbose=0,
                             validation_data=validation_dataset,
                             callbacks=[TqdmCallback(verbose=2)],
                             steps_per_epoch=steps_per_epoch,
                             validation_steps=validation_steps)

history = training_history.history
final_results = model.evaluate(test_dataset,
                              return_dict=True,
                              callbacks=[TqdmCallback(verbose=2)])

I keep getting 0 accuracy and bad metrics in general. I have tried the solutions mentioned in "Transfer learning bad accuracy" and "MNIST and transfer learning with VGG16 in Keras - low validation accuracy", without success.

The summary and the results for the first model are:

Model: "functional_1"
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
flatten (Flatten)            (None, 512)               0
specialisation_layer (Dense) (None, 1024)              525312
classification_layer (Dense) (None, 10)                10250

Total params: 15,250,250
Trainable params: 535,562
Non-trainable params: 14,714,688
Initial results: loss = 9.01, accuracy = 0.0, auc = 0.53, precision = 0.13, recall = 0.12
Final results: loss = 2.5, accuracy = 0.0, auc = 0.71, precision = 0.31, recall = 0.14

[Plot: loss and accuracy when training and testing the first model]

The summary and the results for the second model are:

Model: "functional_1"
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
pooling_layer (GlobalAverage (None, 512)               0
dropout_layer (Dropout)      (None, 512)               0
classification_layer (Dense) (None, 10)                5130

Total params: 14,719,818
Trainable params: 5,130
Non-trainable params: 14,714,688
Initial results: loss = 21.6, accuracy = 0, auc = 0.48, precision = 0.07, recall = 0.07
Final results: loss = 2.02, accuracy = 0, auc = 0.72, precision = 0.44, recall = 0.009

[Plot: loss and accuracy when training and testing the second model]

In the code below

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

you need to add a Flatten layer after the avg_pooling_layer. Alternatively, change the avg_pooling_layer to a GlobalMaxPooling2D layer, which is what I think is best. So your second model would be:

x = tf.keras.layers.GlobalMaxPooling2D()(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

Also, in VGG16 you can set the parameter pooling='avg'; then the output is a one-dimensional tensor, so you don't need to flatten it and you don't need to add global average pooling. In your test_dataset and validation_dataset, set shuffle=False and seed=None. Your values for steps_per_epoch and validation_steps are incorrect; they are typically set to number_of_samples // batch_size. You can leave these values as None in model.fit and it will determine them internally. Also set verbose=1 so you can see the results of training for each epoch. Leave callbacks=None; I don't even know what TqdmCallback(verbose=2) is, as it is not listed in any documentation I could find.
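Putting those suggestions together, a minimal sketch of the revised second model could look like this (it reuses the 56x56 input size, the ten classes, and the train/validation datasets from the question; the metrics=['accuracy'] string alias lets Keras pick the accuracy variant that matches the one-hot labels):

import tensorflow as tf

# VGG16 with pooling='avg' already returns a flat (None, 512) tensor,
# so no Flatten or GlobalAveragePooling2D layer is needed afterwards.
base_model = tf.keras.applications.VGG16(include_top=False,
                                         weights='imagenet',
                                         input_shape=(56, 56, 3),
                                         pooling='avg')
base_model.trainable = False

inputs = tf.keras.Input(shape=(56, 56, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Leave steps_per_epoch and validation_steps as None so Keras infers
# them from the datasets, and keep verbose=1 to see per-epoch results.
history = model.fit(train_dataset,
                    validation_data=validation_dataset,
                    epochs=10,
                    verbose=1)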
