How can I store temporary variables in TensorFlow?

I am wondering whether TF has the ability to store temporary data during the training phase. Below is an example:

import tensorflow as tf
import numpy as np


def loss_function(values, a, b):
    N = values.shape[0]
    i = tf.constant(0)
    values_array = tf.get_variable(
        "values", values.shape, initializer=tf.constant_initializer(values), dtype=tf.float32) # The  temporary data solution in this example
    result = tf.constant(0, dtype=tf.float32)

    def body1(i):

        op2 = tf.assign(values_array[i, 0],
                        234.0) # Here is where it should be updated. The value being assigned is actually calculated from variable a and b.

        with tf.control_dependencies([op2]):
            return i + 1

    def condition1(i): return tf.less(i, N)
    i = tf.while_loop(condition1, body1, [i])

    op1 = tf.assign(values_array[0, 0],
                    9999.0) # Here is where it should be updated

    result = result + tf.reduce_mean(values_array) # The final cost is calculated based on the entire values_array
    with tf.control_dependencies([op1]):
        return result

# The parameters we want to calculate in the end
a = tf.Variable(tf.random_uniform([1], 0, 700), name='a')
b = tf.Variable(tf.random_uniform([1], -700, 700), name='b')

values = np.ones([2, 4], dtype=np.float32)

# cost function
cost_function = loss_function(values, a, b)

# training algorithm
optimizer = tf.train.MomentumOptimizer(
    0.1, momentum=0.9).minimize(cost_function)

# initializing the variables
init = tf.global_variables_initializer()

# starting the session
sess = tf.Session()
sess.run(init)

_, training_cost = sess.run([optimizer, cost_function])

print tf.get_collection(
    tf.GraphKeys.GLOBAL_VARIABLES, scope="values")[0].eval(session=sess)

Currently, what I get from the console is:

[[ 0.98750001  0.98750001  0.98750001  0.98750001]
 [ 0.98750001  0.98750001  0.98750001  0.98750001]]

What I expect to get from this example (if the temporary data could be printed out) is:

[[ 9999.0  1.0  1.0  1.0]
 [ 234.0  1.0  1.0  1.0]]

Overall, what I want is for the cost function to calculate a temporary 2D array based on the input NumPy 2D array and the parameters a and b. The final cost is then calculated from that temporary 2D array. But I think using a TF variable as the temporary storage is probably not the right approach...

Any help?

Thanks!

Your while loop never runs because its output i is never used again, so the loop gets pruned from the graph. Use tf.control_dependencies to make it run.
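As a minimal sketch of the pruning problem (reusing the names from the question's code):

i = tf.while_loop(condition1, body1, [i])

# nothing below depends on i, so TensorFlow prunes the loop and body1 never executes
result = tf.reduce_mean(values_array)

# ops created inside this block get a control dependency on i,
# so the loop must finish before result can be evaluated
with tf.control_dependencies([i]):
    result = tf.reduce_mean(values_array)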

Also, you are adding the mean of values_array, when you seem to just want to add the array as-is. Get rid of reduce_mean to get your desired output.

op1 = tf.assign(values_array[0, 0], 9999.0) was never executed because no op was created inside the following control_dependencies context. Move the op into the context, and create a dependent op inside it, to ensure that the assignment actually runs as part of the graph.
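The subtle point is that tf.control_dependencies only applies to ops created inside the with block, so returning a tensor that already exists does not pick up the dependency. A minimal sketch of the difference (inside loss_function):

# result was created before this block, so nothing here depends on op1
# and the assignment is dropped from the executed graph
with tf.control_dependencies([op1]):
    return result

# tf.identity(result) is created inside the block, so it carries a control
# dependency on op1 and forces the assignment to run
with tf.control_dependencies([op1]):
    return tf.identity(result)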

def loss_function(values, a, b):
    N = values.shape[0]
    i = tf.constant(0)
    values_array = tf.get_variable(
        "values", values.shape, initializer=tf.constant_initializer(values), dtype=tf.float32, trainable=False)

    temp_values_array = tf.get_variable(
        "temp_values", values.shape, dtype=tf.float32)

    # copy previous values for calculations & gradients
    temp_values_array = tf.assign(temp_values_array, values_array)

    result = tf.constant(0, dtype=tf.float32)

    def body1(i):

        op2 = tf.assign(temp_values_array[i, 0],
                        234.0) # Here is where it should be updated. The value being assigned is actually calculated from variable a and b.

        with tf.control_dependencies([op2]):
            return [i+1]

    def condition1(i): return tf.less(i, N)

    i = tf.while_loop(condition1, body1, [i])

    with tf.control_dependencies([i]):
        op1 = tf.assign(temp_values_array[0, 0],
                    9999.0) # Here is where it should be updated

        with tf.control_dependencies([op1]):
            result = result + temp_values_array # The final cost is calculated from the entire temp_values_array

            # save the calculations for later
            op3 = tf.assign(values_array, temp_values_array)
            with tf.control_dependencies([op3]):
                return tf.identity(result)

Also, you are fetching optimizer in the same sess.run call, so the non-assigned elements of your output are going to be smaller than you expect. Your results would be closer to the expected ones if you did:

training_cost = sess.run([cost_function])
_ = sess.run([optimizer])

This will ensure that you don't run the optimizer step before getting the results of cost_function.
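Putting the pieces together, the tail of the script might look like this (just a sketch that assumes the rest of the question's script is unchanged; the "values" scope name comes from the tf.get_variable call above):

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# evaluate the cost first so the assignments run before any optimizer update
training_cost = sess.run([cost_function])
_ = sess.run([optimizer])

# inspect the stored array afterwards
stored = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="values")[0]
print(sess.run(stored))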
