
Attempting to use uninitialized value rnn/output_projection_wrapper/bias

I am getting this error:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value rnn/output_projection_wrapper/bias
         [[Node: rnn/output_projection_wrapper/bias/read = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](rnn/output_projection_wrapper/bias)]]

Here is my code:

import tensorflow as tf
import numpy as np

n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
    output_size=n_outputs)


outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)



learning_rate = 0.001

loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()

saver = tf.train.Saver()


n_iterations = 1500
batch_size = 50

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)

    saver.save(sess, "./my_time_series_model") # not shown in the book

with tf.Session() as sess:
    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})

How can I fix this?

The problem occurs in the second session, because you never initialized the variables with that session. It is therefore better to define only one session per graph (re-running the initializer in a new session would overwrite the trained variables anyway):

sess_config = tf.ConfigProto(allow_soft_placement=True,
                                    log_device_placement=True)
sess = tf.Session(config=sess_config)
sess.run(init)
# use this session for all computations 
for iteration in range(n_iterations):
    X_batch, y_batch = next_batch(batch_size, n_steps)
    sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
    if iteration % 100 == 0:
        mse = loss.eval(session=sess, feed_dict={X: X_batch, y: y_batch})
        print(iteration, "\tMSE:", mse)

saver.save(sess, "./my_time_series_model") # not shown in the book

X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
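
Alternatively, if you prefer to keep a separate session for prediction, restore the saved checkpoint into it instead of relying on variables that were initialized elsewhere. A minimal sketch, assuming the same graph as above, the checkpoint path used in the save call, and the book's time_series helper and t_instance array:

with tf.Session() as sess:
    # Restore the trained weights into this session rather than re-initializing them
    saver.restore(sess, "./my_time_series_model")
    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})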
