
How can I store temporary variables in Tensorflow

I was wondering whether TF is able to store data temporarily during the training phase. Here is an example:

import tensorflow as tf
import numpy as np


def loss_function(values, a, b):
    N = values.shape[0]
    i = tf.constant(0)
    values_array = tf.get_variable(
        "values", values.shape, initializer=tf.constant_initializer(values), dtype=tf.float32) # The  temporary data solution in this example
    result = tf.constant(0, dtype=tf.float32)

    def body1(i):

        op2 = tf.assign(values_array[i, 0],
                        234.0) # Here is where it should be updated. The value being assigned is actually calculated from variable a and b.

        with tf.control_dependencies([op2]):
            return i + 1

    def condition1(i): return tf.less(i, N)
    i = tf.while_loop(condition1, body1, [i])

    op1 = tf.assign(values_array[0, 0],
                    9999.0) # Here is where it should be updated

    result = result + tf.reduce_mean(values_array) # The final cost is calculated based on the entire values_array
    with tf.control_dependencies([op1]):
        return result

# The parameters we want to calculate in the end
a = tf.Variable(tf.random_uniform([1], 0, 700), name='a')
b = tf.Variable(tf.random_uniform([1], -700, 700), name='b')

values = np.ones([2, 4], dtype=np.float32)

# cost function
cost_function = loss_function(values, a, b)

# training algorithm
optimizer = tf.train.MomentumOptimizer(
    0.1, momentum=0.9).minimize(cost_function)

# initializing the variables
init = tf.global_variables_initializer()

# starting the session
sess = tf.Session()
sess.run(init)

_, training_cost = sess.run([optimizer, cost_function])

print(tf.get_collection(
    tf.GraphKeys.GLOBAL_VARIABLES, scope="values")[0].eval(session=sess))

Currently, what I get from the console is:

[[ 0.98750001  0.98750001  0.98750001  0.98750001]
 [ 0.98750001  0.98750001  0.98750001  0.98750001]]

What I would like to get from this example (if the temporary data could be printed out) is:

[[ 9999.0  1.0  1.0  1.0]
 [  234.0  1.0  1.0  1.0]]

Overall, what I want is for the cost function to compute a temporary 2D array from the input numpy 2D array and the parameters a and b, and then compute the final cost from that temporary 2D array. But I suspect that using a TF variable as temporary storage may not be the right approach...

Any help?

Thanks!

Your while loop never runs because i is never used afterwards. Use tf.control_dependencies to make it run.
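As a minimal sketch of that fix (reusing the names from the question's code): consume the loop's output inside a control_dependencies context, so the graph is forced to run the loop before the ops that follow.

i = tf.while_loop(condition1, body1, [i])

# The loop output i is now consumed: every op created inside this context
# gets a control dependency on it, so the while loop must run first.
with tf.control_dependencies([i]):
    op1 = tf.assign(values_array[0, 0], 9999.0)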

Also, you are adding the mean of values_array when it looks like you just want to add the array itself. Get rid of the reduce_mean to get the output you want.

op1 = tf.assign(values_array[0, 0], 9999.0) never gets executed, because there is no op inside the control_dependencies context that follows it. Move the op into the context to make sure the assignment is included in the graph.
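For clarity, a minimal before/after sketch of just that scoping issue, again using the question's names inside loss_function (the complete rewritten loss_function follows below):

# Broken: result is built before the context, and `return result` creates
# no new op inside it, so op1 never becomes a dependency of anything.
op1 = tf.assign(values_array[0, 0], 9999.0)
result = result + tf.reduce_mean(values_array)
with tf.control_dependencies([op1]):
    return result

# Fixed: create the downstream op inside the context, so it has to wait
# for the assignment to finish.
op1 = tf.assign(values_array[0, 0], 9999.0)
with tf.control_dependencies([op1]):
    result = result + values_array
    return tf.identity(result)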

def loss_function(values, a, b):
    N = values.shape[0]
    i = tf.constant(0)
    values_array = tf.get_variable(
        "values", values.shape, initializer=tf.constant_initializer(values), dtype=tf.float32, trainable=False)

    temp_values_array = tf.get_variable(
        "temp_values", values.shape, dtype=tf.float32)

    # copy previous values for calculations & gradients
    temp_values_array = tf.assign(temp_values_array, values_array)

    result = tf.constant(0, dtype=tf.float32)

    def body1(i):

        op2 = tf.assign(temp_values_array[i, 0],
                        234.0) # Here is where it should be updated. The value being assigned is actually calculated from variable a and b.

        with tf.control_dependencies([op2]):
            return [i+1]

    def condition1(i): return tf.less(i, N)

    i = tf.while_loop(condition1, body1, [i])

    with tf.control_dependencies([i]):
        op1 = tf.assign(temp_values_array[0, 0],
                    9999.0) # Here is where it should be updated

        with tf.control_dependencies([op1]):
            result = result + temp_values_array # The final cost is calculated based on the entire values_array

            # save the calculations for later
            op3 = tf.assign(values_array, temp_values_array)
            with tf.control_dependencies([op3]):
                return tf.identity(result)

Also, you are fetching optimizer in the same run, so the non-assigned elements of your output will be smaller than you expect. Your results will be closer if you do this instead:

training_cost = sess.run([cost_function])
_ = sess.run([optimizer])

This ensures that you do not optimize before getting the result of cost_function.
