
Issue with Tensorflow save and restore model

I am trying to use the transfer learning approach. Here is a snapshot of the code where my model is learning over the training data:

i = 0  # global step counter, used for the logging below
max_accuracy = 0.0
saver = tf.train.Saver()
for epoch in range(epocs):
    shuffledRange = np.random.permutation(n_train)
    y_one_hot_train = encode_one_hot(len(classes), Y_input)
    y_one_hot_validation = encode_one_hot(len(classes), Y_validation)
    shuffledX = X_input[shuffledRange, :]
    shuffledY = y_one_hot_train[shuffledRange]
    for Xi, Yi in iterate_mini_batches(shuffledX, shuffledY, mini_batch_size):
        sess.run(train_step,
                 feed_dict={bottleneck_tensor: Xi,
                            ground_truth_tensor: Yi})
        # Every so often, print out how well the graph is training.
        is_last_step = (i + 1 == FLAGS.how_many_training_steps)
        if (i % FLAGS.eval_step_interval) == 0 or is_last_step:
            train_accuracy, cross_entropy_value = sess.run(
                [evaluation_step, cross_entropy],
                feed_dict={bottleneck_tensor: Xi,
                           ground_truth_tensor: Yi})
            validation_accuracy = sess.run(
                evaluation_step,
                feed_dict={bottleneck_tensor: X_validation,
                           ground_truth_tensor: y_one_hot_validation})
            print('%s: Step %d: Train accuracy = %.1f%%, Cross entropy = %f, Validation accuracy = %.1f%%' %
                  (datetime.now(), i, train_accuracy * 100, cross_entropy_value, validation_accuracy * 100))
            result_tensor = sess.graph.get_tensor_by_name(ensure_name_has_port(FLAGS.final_tensor_name))
            probs = sess.run(result_tensor, feed_dict={'pool_3/_reshape:0': Xi[0].reshape(1, 2048)})
            # Checkpoint the model whenever validation accuracy improves.
            if validation_accuracy > max_accuracy:
                saver.save(sess, 'models/superheroes_model')
                max_accuracy = validation_accuracy
                print(probs)
        i += 1

Here is where my code loads the model:

def load_model():
    sess = tf.Session()
    # First, let's load the meta graph and restore the weights.
    saver = tf.train.import_meta_graph('models/superheroes_model.meta')
    saver.restore(sess, tf.train.latest_checkpoint('models/'))
    sess.run(tf.global_variables_initializer())
    result_tensor = sess.graph.get_tensor_by_name(ensure_name_has_port(FLAGS.final_tensor_name))
    X_feature = features[0].reshape(1, 2048)
    probs = sess.run(result_tensor,
                     feed_dict={'pool_3/_reshape:0': X_feature})
    print(probs)
    return sess

So now, for the same data point, I am getting totally different results during training and testing. It's not even close: during testing, the probabilities are all near 25% (I have 4 classes), but during training the highest class probability is 90%.
Is there any issue with how I am saving or restoring the model?

Be careful -- you are calling

sess.run(tf.global_variables_initializer())

after calling

saver.restore(sess,tf.train.latest_checkpoint('models/'))

I've done something similar before, and I think that resets all your trained weights/biases/etc. in the restored model.
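A quick way to convince yourself (a sketch, reading back an arbitrary restored variable taken from the graph's global-variable collection):

some_var = tf.global_variables()[0]
before = sess.run(some_var)          # trained value from the checkpoint
sess.run(tf.global_variables_initializer())
after = sess.run(some_var)           # freshly re-initialized value
# `before` and `after` will differ: the initializer has overwritten
# the restored weights with their initial values.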

If you must call the initializer, do it prior to restoring the model; and if you need to initialize something specific that is not in the restored model, initialize it individually.
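For instance, a sketch of that ordering, using the checkpoint paths from the question (the new_head variable below is purely hypothetical, standing in for anything you add that is not in the checkpoint):

import tensorflow as tf

sess = tf.Session()
saver = tf.train.import_meta_graph('models/superheroes_model.meta')

# If you really need the global initializer, run it BEFORE restore,
# so the checkpointed values overwrite the random initial ones.
sess.run(tf.global_variables_initializer())
saver.restore(sess, tf.train.latest_checkpoint('models/'))

# To initialize only something specific that is not in the checkpoint,
# run that variable's own initializer instead of the global one.
new_head = tf.Variable(tf.zeros([2048, 4]), name='new_head')  # hypothetical
sess.run(new_head.initializer)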

Remove sess.run(tf.global_variables_initializer()) from your load_model function. If you leave it in, all of your trained parameters will be replaced by their initial values, which is what gives each of your four classes a probability of about 1/4.
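For reference, a minimal sketch of load_model with that line removed (ensure_name_has_port, FLAGS, and features are assumed to come from the question's surrounding code):

def load_model():
    sess = tf.Session()
    # Load the graph and restore the trained weights -- restore already
    # assigns the checkpointed values, so no initializer call is needed.
    saver = tf.train.import_meta_graph('models/superheroes_model.meta')
    saver.restore(sess, tf.train.latest_checkpoint('models/'))
    result_tensor = sess.graph.get_tensor_by_name(ensure_name_has_port(FLAGS.final_tensor_name))
    X_feature = features[0].reshape(1, 2048)
    probs = sess.run(result_tensor,
                     feed_dict={'pool_3/_reshape:0': X_feature})
    print(probs)
    return sess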
