Tensorboard not giving all the variables output for Visualization Python

Below are the variables I initialize for TensorFlow training, validation, and testing.

index_in_epoch = 0
perm_array  = np.arange(x_train.shape[0])
np.random.shuffle(perm_array)

# function to get the next batch
def get_next_batch(batch_size):
    global index_in_epoch, x_train, perm_array   
    start = index_in_epoch
    index_in_epoch += batch_size

    if index_in_epoch > x_train.shape[0]:
        np.random.shuffle(perm_array) # shuffle permutation array
        start = 0 # start next epoch
        index_in_epoch = batch_size

    end = index_in_epoch
    return x_train[perm_array[start:end]], y_train[perm_array[start:end]]

# parameters
n_steps = seq_len-1 
n_inputs = x_train.shape[2]  # 4

n_neurons = 200
n_outputs = y_train.shape[1]  # 4
n_layers = 2
learning_rate = 0.001

batch_size = 50
n_epochs = 100  # 200
train_set_size = x_train.shape[0]
test_set_size = x_test.shape[0]

tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_outputs])

# use LSTM Cell with peephole connections
layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons,
                                  activation=tf.nn.leaky_relu, use_peepholes=True)
          for layer in range(n_layers)]

multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons]) 
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:,n_steps-1,:] # keep only last output of sequence

loss = tf.reduce_mean(tf.square(outputs - y)) # loss function = mean squared error 
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) 
training_op = optimizer.minimize(loss)

This is how I train and validate the model, and how I collect the values to display via TensorBoard:

saver = tf.train.Saver()
with tf.Session() as sess: 
    sess.run(tf.global_variables_initializer())

    for iteration in range(int(n_epochs*train_set_size/batch_size)):
        x_batch, y_batch = get_next_batch(batch_size) # fetch the next training batch 
        writer = tf.summary.FileWriter("outputLogs", sess.graph)
        sess.run(training_op, feed_dict={X: x_batch, y: y_batch}) 
        writer.close()
        if iteration % int(5*train_set_size/batch_size) == 0:
            mse_train = loss.eval(feed_dict={X: x_train, y: y_train}) 
            mse_valid = loss.eval(feed_dict={X: x_valid, y: y_valid}) 
            print('%.2f epochs: MSE train/valid = %.10f/%.10f'%(
                iteration*batch_size/train_set_size, mse_train, mse_valid))
            save_path = saver.save(sess, "models\\model"+str(iteration)+".ckpt")

But after running the command tensorboard --logdir outputLogs, I only get the graph and none of the other value plots, such as the loss, the error, or any other variable I would like to display while training. See the image below:
(Image: TensorBoard output showing only the graph)

Please help me visualize all the variables or inputs so that I can see them on TensorBoard and make the training results useful to me.

You have to tell TensorFlow which values you want to track; at the moment you are only adding the graph to the writer. For example, you could do the following to track your loss:

loss = ... (your def)
tf.summary.scalar('MyLoss', loss)

# ... maybe add some other variables (you can also create histograms, images, etc. via tf.summary.histogram(...))

summ = tf.summary.merge_all()
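
The comment about histograms is easy to act on: any additional summary op defined before tf.summary.merge_all() is picked up automatically. As a small sketch (the tag name 'last_rnn_output' is only an illustrative choice; any tensor in your graph works):

# hypothetical extra summary: distribution of the tensor that feeds the loss,
# shown in TensorBoard's Histograms/Distributions tabs; it must be defined
# before tf.summary.merge_all() so that `summ` includes it
tf.summary.histogram('last_rnn_output', outputs)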

In your session you create the writer just as you already do, but you then have to evaluate the summary op and add the result to the writer. You should also create the writer outside the training loop, because you do not want a new writer on every iteration. The iteration number is passed as an argument to the add_summary method.

saver = tf.train.Saver()
with tf.Session() as sess: 
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("outputLogs", sess.graph)

    for iteration in range(int(n_epochs*train_set_size/batch_size)):

        x_batch, y_batch = get_next_batch(batch_size) # fetch the next training batch 

        [_, s] = sess.run([training_op, summ], feed_dict={X: x_batch, y: y_batch}) 

        writer.add_summary(s, iteration)

        if iteration % int(5*train_set_size/batch_size) == 0:
            mse_train = loss.eval(feed_dict={X: x_train, y: y_train}) 
            mse_valid = loss.eval(feed_dict={X: x_valid, y: y_valid}) 
            print('%.2f epochs: MSE train/valid = %.10f/%.10f'%(
                iteration*batch_size/train_set_size, mse_train, mse_valid))
            save_path = saver.save(sess, "models\\model"+str(iteration)+".ckpt")

Your training code should then look like the block above.
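
If you also want the validation MSE that is currently only printed to show up in TensorBoard, one possible extension (a sketch, not part of the answer above; the tag 'MSE_valid' and its placement inside the periodic if block are assumptions) is to build a summary protobuf from the already-computed value and hand it to the same writer:

# sketch: log the evaluated validation MSE as an extra scalar point
valid_summ = tf.Summary(value=[tf.Summary.Value(tag='MSE_valid',
                                                simple_value=float(mse_valid))])
writer.add_summary(valid_summ, iteration)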
