Why does my retrained model have poor accuracy?
I am trying to retrain the last layer of a pretrained model on the same dataset (the MNIST handwritten digit dataset), but the retrained model's accuracy is much worse than the initial model's. My initial model reaches about 98% accuracy, while the retrained model's accuracy varies between 40% and 80% depending on the run. I get similar results when I don't bother training the first two layers at all.

The code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

epochs1 = 150
epochs2 = 300
batch_size = 11000
learning_rate1 = 1e-3
learning_rate2 = 1e-4

# Base model
def base_model(input, reuse=False):
    with tf.variable_scope('base_model', reuse=reuse):
        layer1 = tf.contrib.layers.fully_connected(input, 300)
        features = tf.contrib.layers.fully_connected(layer1, 300)
        return features

mnist = input_data.read_data_sets('./mnist/', one_hot=True)

image = tf.placeholder(tf.float32, [None, 784])
label = tf.placeholder(tf.float32, [None, 10])

features1 = base_model(image, reuse=False)
features2 = base_model(image, reuse=True)

# Logits1 trained with the base model
with tf.variable_scope('logits1', reuse=False):
    logits1 = tf.contrib.layers.fully_connected(features1, 10, tf.nn.relu)

# Logits2 trained while the base model is frozen
with tf.variable_scope('logits2', reuse=False):
    logits2 = tf.contrib.layers.fully_connected(features2, 10, tf.nn.relu)

# Var lists
var_list_partial1 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='logits1')
var_list_partial2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='base_model')
var_list1 = var_list_partial1 + var_list_partial2
var_list2 = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='logits2')

# Sanity check
print("var_list1:", var_list1)
print("var_list2:", var_list2)

# Cross-entropy losses
loss1 = tf.nn.softmax_cross_entropy_with_logits(logits=logits1, labels=label)
loss2 = tf.nn.softmax_cross_entropy_with_logits(logits=logits2, labels=label)

# Train the final logits layer
train1 = tf.train.AdamOptimizer(learning_rate1).minimize(loss1, var_list=var_list1)
train2 = tf.train.AdamOptimizer(learning_rate2).minimize(loss2, var_list=var_list2)

# Accuracy operations
correct_prediction1 = tf.equal(tf.argmax(logits1, 1), tf.argmax(label, 1))
correct_prediction2 = tf.equal(tf.argmax(logits2, 1), tf.argmax(label, 1))
accuracy1 = tf.reduce_mean(tf.cast(correct_prediction1, "float"))
accuracy2 = tf.reduce_mean(tf.cast(correct_prediction2, "float"))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    batches = int(len(mnist.train.images) / batch_size)

    # Train base model and logits1
    for epoch in range(epochs1):
        for batch in range(batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train1, feed_dict={image: batch_xs, label: batch_ys})

    # Train logits2 keeping the base model frozen
    for epoch in range(epochs2):
        for batch in range(batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train2, feed_dict={image: batch_xs, label: batch_ys})

    # Print the accuracy of both models after training
    accuracy = sess.run(accuracy1, feed_dict={image: mnist.test.images, label: mnist.test.labels})
    print("Initial Model Accuracy After training final model:", accuracy)
    accuracy = sess.run(accuracy2, feed_dict={image: mnist.test.images, label: mnist.test.labels})
    print("Final Model Accuracy After Training:", accuracy)
Thanks in advance!
Try removing the nonlinearity from 'logits1' and 'logits2'.
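To see why a ReLU on the logits layer is harmful, here is a small numpy sketch (mine, not part of the original answer): a ReLU clips every negative score to zero before the softmax, so whenever all class scores are negative the model outputs a uniform distribution and can no longer distinguish the classes.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Raw logits where the correct class (index 2) has the highest score,
# but all scores happen to be negative.
logits = np.array([-3.0, -1.5, -0.2, -4.0])

# Without ReLU, softmax still ranks class 2 highest.
p_linear = softmax(logits)

# With ReLU, every negative logit is clipped to 0: the scores become
# indistinguishable and softmax outputs a uniform distribution.
p_relu = softmax(np.maximum(logits, 0.0))

print(p_linear.argmax())          # 2
print(np.allclose(p_relu, 0.25))  # True
```

The clipped units also receive zero gradient, which is consistent with the unstable 40-80% accuracy reported in the question.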
I changed your code to:
# Logits1 trained with the base model
with tf.variable_scope('logits1', reuse=False):
    # logits1 = tf.contrib.layers.fully_connected(features1, 10, tf.nn.relu)
    logits1 = tf.contrib.layers.fully_connected(features1, 10, None)

# Logits2 trained while the base model is frozen
with tf.variable_scope('logits2', reuse=False):
    # logits2 = tf.contrib.layers.fully_connected(features2, 10, tf.nn.relu)
    logits2 = tf.contrib.layers.fully_connected(features2, 10, None)
The results changed to:
Initial Model Accuracy After training final model: 0.9805
Final Model Accuracy After Training: 0.9658
P.S. Also, 300 + 300 neurons is overkill for an MNIST classifier, but I assume your point wasn't really to classify MNIST :)
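As a side note (not part of the original answer), the freeze-the-base / retrain-the-head pattern that the question's `var_list2` implements can be sketched without TensorFlow, on a toy synthetic task of my own invention: only the head's weights receive gradient updates, and the logits stay linear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class problem: the label depends linearly on the first two features.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# "Pretrained" base: fixed ReLU features relu(x_i) and relu(-x_i),
# which together can represent any linear function of x.
I = np.eye(4)
W_base = np.concatenate([I, -I], axis=1)   # shape (4, 8), frozen
W_head = np.zeros(8)                       # new linear head, trained from scratch

def forward(X):
    h = np.maximum(X @ W_base, 0.0)        # frozen feature extractor
    z = h @ W_head                         # linear logits: no ReLU here
    return h, 1.0 / (1.0 + np.exp(-z))

W_base_before = W_base.copy()
for _ in range(200):                       # gradient descent on the head only
    h, p = forward(X)
    grad_head = h.T @ (p - y) / len(y)     # logistic-loss gradient w.r.t. W_head
    W_head -= 0.5 * grad_head              # W_base is never updated

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()
print(np.array_equal(W_base, W_base_before))  # True: the base stayed frozen
```

This mirrors `minimize(loss2, var_list=var_list2)` in the question: restricting the update to the head's variables is what keeps the base frozen, and with a purely linear head the retrained model can match the base model's performance.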