
Tensorflow: feeding placeholder in loop within loop fails

I am trying to repeatedly train a neural network with hidden layers of different sizes to determine how many neurons it should have. The network I wrote works fine when run a single time. The code is:

import tensorflow as tf
import nn

def train(layers, data, folder = 'run1'):
    input_layer_size, hidden_layer_size, num_labels = layers;
    X, y, X_val, y_val = data;

    X_placeholder = tf.placeholder(tf.float32, shape=(None, input_layer_size), name='X')
    y_placeholder = tf.placeholder(tf.uint8, shape=(None, num_labels), name='y')
    Theta1 = tf.Variable(nn.randInitializeWeights(input_layer_size, hidden_layer_size), name='Theta1')
    bias1 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, 1), name='bias1')
    Theta2 = tf.Variable(nn.randInitializeWeights(hidden_layer_size, num_labels), name='Theta2')
    bias2 = tf.Variable(nn.randInitializeWeights(num_labels, 1), name='bias2')
    cost = nn.cost(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)
    optimize = tf.train.GradientDescentOptimizer(0.6).minimize(cost)

    accuracy, precision, recall, f1 = nn.evaluate(X_placeholder, y_placeholder, Theta1, bias1, Theta2, bias2)

    cost_summary = tf.summary.scalar('cost', cost);
    accuracy_summary = tf.summary.scalar('accuracy', accuracy);
    precision_summary = tf.summary.scalar('precision', precision);
    recall_summary = tf.summary.scalar('recall', recall);
    f1_summary = tf.summary.scalar('f1', f1);
    summaries = tf.summary.merge_all();

    sess = tf.Session();
    saver = tf.train.Saver()
    init = tf.global_variables_initializer()
    sess.run(init)

    writer = tf.summary.FileWriter('./tmp/logs/' + folder, sess.graph)

    NUM_STEPS = 20;

    for step in range(NUM_STEPS):
        sess.run(optimize, feed_dict={X_placeholder: X, y_placeholder: y});
        if (step > 0) and ((step + 1) % 10 == 0):
            summary = sess.run(summaries, feed_dict={X_placeholder: X_val, y_placeholder: y_val});
            # writer.add_summary(summary, step);
            print('Step', step + 1, 'of', NUM_STEPS);

    save_path = saver.save(sess, './tmp/model_' + folder + '.ckpt')
    print("Model saved in file: %s" % save_path)
    sess.close();

However, when I put this call inside a loop, I only get through the first iteration. On the second iteration it appears to fail at my first attempt to run:

summary = sess.run(summaries, feed_dict={X_placeholder: X_val, y_placeholder: y_val});

I get the error: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'X' with dtype float

I logged X and X_val right before feeding them, and they look exactly as they did before every earlier run. If I comment out that second sess.run, everything works fine, but I really need my summaries...

My outer loop looks like this:

import train
import loadData

input_layer_size  = 5513;
num_labels = 128;

data = loadData.load(input_layer_size, num_labels);

for hidden_layer_size in range(50, 500, 50):
    train.train([input_layer_size, hidden_layer_size, num_labels], data, 'run' + str(hidden_layer_size))

Because you call the train function inside a loop, every call creates a new copy of the placeholders in the same default graph. The first call works because there is only one copy; by the second call there are duplicate placeholders, and ops from the earlier graph (picked up by tf.summary.merge_all()) reference a placeholder 'X' that is never fed. The solution is to separate the code that builds the model from the code that runs the training.
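
A minimal sketch of one way to apply that advice (not the only fix): build each model inside its own tf.Graph() so placeholders and summaries from earlier calls cannot leak into tf.summary.merge_all() on the next call. The toy one-layer model below stands in for your nn-based network purely to make the structure runnable; the with tf.Graph().as_default(): wrapper (or, alternatively, tf.reset_default_graph() at the top of train) is the part that matters.

import numpy as np
import tensorflow as tf

def train(hidden_layer_size, data, folder='run1'):
    X, y = data
    # A fresh graph per call: placeholders and summaries created here cannot
    # collide with the ones created on previous iterations of the outer loop.
    with tf.Graph().as_default():
        X_ph = tf.placeholder(tf.float32, shape=(None, X.shape[1]), name='X')
        y_ph = tf.placeholder(tf.float32, shape=(None, y.shape[1]), name='y')
        # Stand-in for the nn-based model and nn.cost from the question.
        hidden = tf.layers.dense(X_ph, hidden_layer_size, activation=tf.nn.sigmoid)
        logits = tf.layers.dense(hidden, y.shape[1])
        cost = tf.losses.sigmoid_cross_entropy(y_ph, logits)
        optimize = tf.train.GradientDescentOptimizer(0.6).minimize(cost)
        tf.summary.scalar('cost', cost)
        summaries = tf.summary.merge_all()  # only sees this graph's summaries

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            writer = tf.summary.FileWriter('./tmp/logs/' + folder, sess.graph)
            for step in range(20):
                sess.run(optimize, feed_dict={X_ph: X, y_ph: y})
                if (step + 1) % 10 == 0:
                    summary = sess.run(summaries, feed_dict={X_ph: X, y_ph: y})
                    writer.add_summary(summary, step)

# The outer loop then works unchanged:
data = (np.random.rand(100, 20).astype(np.float32),
        np.random.randint(0, 2, (100, 3)).astype(np.float32))
for hidden_layer_size in range(50, 500, 50):
    train(hidden_layer_size, data, 'run' + str(hidden_layer_size))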
