
Initializing variables multiple times in TensorFlow leaks memory

This is my example code:

import numpy as np
import tensorflow as tf

N = 3000
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

sess = tf.Session()

for _ in range(100):
    sess.run(tf.global_variables_initializer())

Running this code allocates more than 10 GB of memory on my machine. I want to re-train my model multiple times without having to reset the whole default graph every time. What am I missing?

Thanks!

I found the problem. For anybody else running into this in the future: a new initialization operation is created on every iteration of the loop, so the graph keeps growing and the memory is never released. The solution is to create the initialization op once and reuse it. This fixes the memory 'leak' for me:

import numpy as np
import tensorflow as tf

N = 3000
tf.reset_default_graph()
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

# Build the init op once, outside the loop.
varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="scope")
init = tf.variables_initializer(varlist)  # or tf.global_variables_initializer()

for _ in range(100):
    with tf.Session() as sess:  # context manager closes each session
        sess.run(init)  # reuse the same init op; no new ops are added
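To see where the growth comes from, you can count the ops in the default graph: each call to tf.global_variables_initializer() builds a fresh initializer op, so the count climbs on every pass. A minimal diagnostic sketch, assuming TF 1.x (the small 10x10 variable is just for illustration):

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
A = tf.Variable(np.random.randn(10, 10), dtype=tf.float32, name='A')

with tf.Session() as sess:
    for i in range(3):
        sess.run(tf.global_variables_initializer())
        # The op count grows every iteration because each call adds
        # a brand-new initializer op to the default graph.
        print(i, len(tf.get_default_graph().get_operations()))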

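As an extra safeguard, a TF 1.x graph can be frozen after construction with Graph.finalize(): the graph becomes read-only, and any code that tries to add ops inside the training loop raises a RuntimeError instead of silently growing the graph. A sketch, again assuming TF 1.x:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
A = tf.Variable(np.random.randn(10, 10), dtype=tf.float32, name='A')
init = tf.variables_initializer([A])

tf.get_default_graph().finalize()  # graph is now read-only

with tf.Session() as sess:
    for _ in range(100):
        sess.run(init)  # fine: reuses the existing op
        # sess.run(tf.global_variables_initializer())  # would raise RuntimeError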