
Transfer Learning model gives 0 accuracy regardless of architecture

I am trying to develop a model using Keras and transfer learning. The dataset I use can be found here: https://github.com/faezetta/VMMRdb

I selected the 10 car brands with the most samples and trained two models based on the VGG16 architecture using transfer learning, as shown in the code below.

# Imports inferred from the identifiers used below (not shown in the original post)
import tensorflow as tf
from tensorflow.keras import layers, losses, metrics, optimizers
from tensorflow.keras import applications as apps
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tqdm.keras import TqdmCallback

import utils  # the asker's own helper module

samples_counts = utils.read_dictionary(utils.TOP10_BRANDS_COUNTS_NAME)

train_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='training',
    interpolation='bilinear'
)

validation_dataset = image_dataset_from_directory(
    directory=utils.TRAIN_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    validation_split=0.2,
    subset='validation',
    interpolation='bilinear'
)

test_dataset = image_dataset_from_directory(
    directory=utils.TEST_SET_LOCATION,
    labels='inferred',
    label_mode='categorical',
    class_names=list(samples_counts.keys()),
    color_mode='rgb',
    batch_size=32,
    image_size=(56, 56),
    shuffle=True,
    seed=utils.RANDOM_STATE,
    interpolation='bilinear'
)

image_shape = (utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3)
base_model = apps.VGG16(include_top=False, weights='imagenet', input_shape=image_shape)
base_model.trainable = False

preprocess_input = apps.vgg16.preprocess_input
flatten_layer = layers.Flatten(name='flatten')
specialisation_layer = layers.Dense(1024, activation='relu', name='specialisation_layer')
avg_pooling_layer = layers.GlobalAveragePooling2D(name='pooling_layer')
dropout_layer = layers.Dropout(0.2, name='dropout_layer')
classification_layer = layers.Dense(10, activation='softmax', name='classification_layer')

inputs = tf.keras.Input(shape=(utils.RESIZE_HEIGHT, utils.RESIZE_WIDTH, 3))
x = preprocess_input(inputs)
x = base_model(x, training=False)

# First model
# x = flatten_layer(x)
# x = specialisation_layer(x)

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

model.summary()

steps_per_epoch = len(train_dataset)
validation_steps = len(validation_dataset)
base_learning_rate = 0.0001
optimizer = optimizers.Adam(learning_rate=base_learning_rate)
loss_function = losses.CategoricalCrossentropy()
train_metrics = [metrics.Accuracy(), metrics.AUC(), metrics.Precision(), metrics.Recall()]

model.compile(optimizer=optimizer,
              loss=loss_function,
              metrics=train_metrics)

initial_results = model.evaluate(validation_dataset,
                                 steps=validation_steps,
                                 return_dict=True)

training_history = model.fit(train_dataset, epochs=10, verbose=0,
                             validation_data=validation_dataset,
                             callbacks=[TqdmCallback(verbose=2)],
                             steps_per_epoch=steps_per_epoch,
                             validation_steps=validation_steps)

history = training_history.history
final_results = model.evaluate(test_dataset,
                              return_dict=True,
                              callbacks=[TqdmCallback(verbose=2)])

In general, I keep getting 0 accuracy and poor metrics. I have already tried the solutions mentioned in "Bad accuracy with transfer learning on MNIST" and "Transfer learning with VGG16 in Keras - low validation accuracy", but without success.

The summary and results of the first model are:

Model: "functional_1"
Layer (type)                 Output Shape              Param #
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
flatten (Flatten)            (None, 512)               0
specialisation_layer (Dense) (None, 1024)              525312
classification_layer (Dense) (None, 10)                10250

Total params: 15,250,250
Trainable params: 535,562
Non-trainable params: 14,714,688
Initial results: loss = 9.01, accuracy = 0.0, auc = 0.53, precision = 0.13, recall = 0.12
Final results: loss = 2.5, accuracy = 0.0, auc = 0.71, precision = 0.31, recall = 0.14

(Plot: training and test loss and accuracy for the first model)

The summary and results of the second model are:

Model: "functional_1"
Layer (type)                 Output Shape              Param #
input_2 (InputLayer)         [(None, 56, 56, 3)]       0
tf_op_layer_strided_slice (T [(None, 56, 56, 3)]       0
tf_op_layer_BiasAdd (TensorF [(None, 56, 56, 3)]       0
vgg16 (Functional)           (None, 1, 1, 512)         14714688
pooling_layer (GlobalAverage (None, 512)               0
dropout_layer (Dropout)      (None, 512)               0
classification_layer (Dense) (None, 10)                5130

Total params: 14,719,818
Trainable params: 5,130
Non-trainable params: 14,714,688
Initial Results: loss = 21.6, accuracy = 0, auc = 0.48, precision = 0.07, recall = 0.07
Final Results: loss = 2.02, accuracy = 0, auc = 0.72, precision = 0.44, recall = 0.009

(Plot: training and test loss and accuracy for the second model)

In the code below

# Second model
x = avg_pooling_layer(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

you need to add a Flatten layer after avg_pooling_layer, or change avg_pooling_layer to a GlobalMaxPooling2D layer, which I think is best. So your second model would be:

x = tf.keras.layers.GlobalMaxPooling2D()(x)
x = dropout_layer(x)
outputs = classification_layer(x)
model = tf.keras.Model(inputs, outputs)

Also, in VGG16 you can set the parameter pooling='avg'; the output is then a one-dimensional tensor, so you do not need to flatten it or add global average pooling. In your test_dataset and validation_dataset, set shuffle=False and seed=None. Your values for steps_per_epoch and validation_steps are incorrect: they are normally set to number_of_samples // batch_size. You can leave both as None in model.fit and they will be determined internally. Also set verbose=1 so you can see the training results for each epoch. Leave callbacks=None; I don't even know what TqdmCallback(verbose=2) is, as it is not listed in any documentation I could find.
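The pooling='avg' suggestion above can be sketched as follows. The 56x56 input size, 10 classes, dropout rate, and learning rate are taken from the question; everything else is a minimal illustration of the suggested setup, not the asker's exact code:

```python
import tensorflow as tf

NUM_CLASSES = 10            # top-10 car brands, as in the question
IMAGE_SHAPE = (56, 56, 3)   # the question's input size

# pooling='avg' makes the VGG16 base emit a flat (None, 512) tensor,
# so no Flatten or GlobalAveragePooling2D layer is needed afterwards.
base_model = tf.keras.applications.VGG16(
    include_top=False, weights='imagenet',
    input_shape=IMAGE_SHAPE, pooling='avg')
base_model.trainable = False  # freeze the convolutional base

inputs = tf.keras.Input(shape=IMAGE_SHAPE)
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base_model(x, training=False)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

# The string 'accuracy' lets Keras pick the metric matching the loss
# (CategoricalAccuracy here, for one-hot labels).
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Per the answer, fitting would then leave steps_per_epoch and
# validation_steps as None (Keras infers them) and use verbose=1:
# model.fit(train_dataset, epochs=10, verbose=1,
#           validation_data=validation_dataset)
```

With pooling='avg' the Dropout and Dense head attach directly to the 512-dimensional pooled features, so the second model's extra pooling/flatten layer falls away entirely.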
