
Tensorflow - Restoring a model

I have the following code, where I'm trying to restore the model at some point, but the program seems to hang (possibly an infinite loop, I'm not sure): it keeps running without producing any output.

import tensorflow as tf
import cifar_tools  # the question's own helper module for loading the image data

data, labels = cifar_tools.read_data('C:\\Users\\abc\\Desktop\\Testing')

x = tf.placeholder(tf.float32, [None, 150 * 150])
y = tf.placeholder(tf.float32, [None, 2])

w1 = tf.Variable(tf.random_normal([5, 5, 1, 64]))
b1 = tf.Variable(tf.random_normal([64]))

w2 = tf.Variable(tf.random_normal([5, 5, 64, 64]))
b2 = tf.Variable(tf.random_normal([64]))

w3 = tf.Variable(tf.random_normal([38*38*64, 1024]))
b3 = tf.Variable(tf.random_normal([1024]))

w_out = tf.Variable(tf.random_normal([1024, 2]))
b_out = tf.Variable(tf.random_normal([2]))

def conv_layer(x,w,b):
    conv = tf.nn.conv2d(x,w,strides=[1,1,1,1], padding = 'SAME')
    conv_with_b = tf.nn.bias_add(conv,b)
    conv_out = tf.nn.relu(conv_with_b)
    return conv_out

def maxpool_layer(conv,k=2):
    return tf.nn.max_pool(conv, ksize=[1,k,k,1], strides=[1,k,k,1], padding='SAME')

def model():
    x_reshaped = tf.reshape(x, shape=[-1, 150, 150, 1])

    conv_out1 = conv_layer(x_reshaped, w1, b1)
    maxpool_out1 = maxpool_layer(conv_out1)
    norm1 = tf.nn.lrn(maxpool_out1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
    conv_out2 = conv_layer(norm1, w2, b2)
    norm2 = tf.nn.lrn(conv_out2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75)
    maxpool_out2 = maxpool_layer(norm2)

    maxpool_reshaped = tf.reshape(maxpool_out2, [-1, w3.get_shape().as_list()[0]])
    local = tf.add(tf.matmul(maxpool_reshaped, w3), b3)
    local_out = tf.nn.relu(local)

    out = tf.add(tf.matmul(local_out, w_out), b_out)
    return out

model_op = model()

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=model_op, labels=y))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

correct_pred = tf.equal(tf.argmax(model_op, 1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    onehot_labels = tf.one_hot(labels, 2, on_value=1.,off_value=0.,axis=-1)
    onehot_vals = sess.run(onehot_labels)
    batch_size = len(data)
    # Restore model
    saver = tf.train.import_meta_graph('mymodel.meta')
    saver.restore(sess, tf.train.latest_checkpoint('./'))
    all_vars = tf.get_collection('vars')
    for v in all_vars:
        v_ = sess.run(v)
        print(v_)

    for j in range(0, 5):
        print('EPOCH', j)
        for i in range(0, len(data), batch_size):
            batch_data = data[i:i+batch_size, :]
            batch_onehot_vals = onehot_vals[i:i+batch_size, :]
            _, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: batch_data, y: batch_onehot_vals})
            print(i, accuracy_val)

        print('DONE WITH EPOCH')

What could be the issue? Am I restoring the model correctly here?

Thanks.

It seems I had to give the whole path to the model, as follows:

saver = tf.train.import_meta_graph('C:\\Users\\abc\\Desktop\\Testing\\mymodel.meta')
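
A minimal sketch of the same restore step, assuming the checkpoint files (mymodel.meta, the matching .index/.data files, and the checkpoint file) all live in that same Testing folder; the model_dir variable is just illustrative:

import os
import tensorflow as tf

# Directory that holds mymodel.meta and the checkpoint files (from the question)
model_dir = 'C:\\Users\\abc\\Desktop\\Testing'

with tf.Session() as sess:
    # Rebuild the saved graph from the .meta file, using its full path
    saver = tf.train.import_meta_graph(os.path.join(model_dir, 'mymodel.meta'))
    # Point latest_checkpoint at the same directory rather than './',
    # so the variable values are read from the right place
    saver.restore(sess, tf.train.latest_checkpoint(model_dir))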

It's the same mistake I made when saving the model, as shown here :-)
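
Likewise on the saving side, a minimal sketch that uses the same full path (the 'mymodel' prefix matches the question; everything else here is illustrative):

import tensorflow as tf

# Assume the graph (placeholders, variables, ops) has been built as in the question;
# one variable is created here only so the sketch runs on its own
w = tf.Variable(tf.random_normal([5, 5, 1, 64]), name='w1')

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would run here ...
    # Save under the full path; this writes mymodel.meta, mymodel.index,
    # mymodel.data-* and a 'checkpoint' file into the Testing folder
    saver.save(sess, 'C:\\Users\\abc\\Desktop\\Testing\\mymodel')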
