
Error while running TensorFlow a second time

I am trying to run the following TensorFlow code, and it works fine the first time. If I try running it again, it keeps throwing an error saying:

ValueError: Variable layer1/weights1 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
        self._traceback = _extract_stack()
      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "C:\Users\owner\Anaconda3\envs\DeepLearning_NoGPU\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
        op_def=op_def)

If I restart the console and then run it, once again it runs just fine.

Given below is my implementation of the neural network.

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
import tensorflow as tf

learning_rate = 0.001
training_epochs = 100

n_input = 9
n_output = 1

n_layer1_node = 100
n_layer2_node = 100

X_train = np.random.rand(100, 9)
y_train = np.random.rand(100, 1)

with tf.variable_scope('input'):
    X = tf.placeholder(tf.float32, shape=(None, n_input))

with tf.variable_scope('output'):
    y = tf.placeholder(tf.float32, shape=(None, 1))

# Layer 1
with tf.variable_scope('layer1'):
    weight_matrix1 = {'weights': tf.get_variable(name='weights1', 
                                                shape=[n_input, n_layer1_node], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases1',
                                shape=[n_layer1_node],
                                initializer=tf.zeros_initializer())}
    layer1_output = tf.nn.relu(tf.add(tf.matmul(X, weight_matrix1['weights']), weight_matrix1['biases']))

# Layer 2
with tf.variable_scope('layer2'):
    weight_matrix2 = {'weights': tf.get_variable(name='weights2', 
                                                shape=[n_layer1_node, n_layer2_node], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases2',
                                shape=[n_layer2_node],
                                initializer=tf.zeros_initializer())}
    layer2_output = tf.nn.relu(tf.add(tf.matmul(layer1_output, weight_matrix2['weights']), weight_matrix2['biases']))

# Output layer
with tf.variable_scope('layer3'):
    weight_matrix3 = {'weights': tf.get_variable(name='weights3', 
                                                shape=[n_layer2_node, n_output], 
                                                initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases3',
                                shape=[n_output],
                                initializer=tf.zeros_initializer())}
    prediction = tf.nn.relu(tf.add(tf.matmul(layer2_output, weight_matrix3['weights']), weight_matrix3['biases']))

cost = tf.reduce_mean(tf.squared_difference(prediction, y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as session:

    session.run(tf.global_variables_initializer())


    for epoch in range(training_epochs):

        session.run(optimizer, feed_dict={X: X_train, y: y_train})
        train_cost = session.run(cost, feed_dict={X: X_train, y: y_train})

        print(epoch, " epoch(s) done")

    print("training complete")

As the error suggests, I tried adding reuse=True as a parameter to tf.variable_scope(), but that is not working either.
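For reference, this is roughly what that attempt looked like (a sketch of the first layer only, not the full code):

with tf.variable_scope('layer1', reuse=True):
    weights1 = tf.get_variable(name='weights1',
                               shape=[n_input, n_layer1_node],
                               initializer=tf.contrib.layers.xavier_initializer())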

I am running this inside a conda environment, with Python 3.5 and CUDA 8 (but that shouldn't matter, because this is not configured to run on the GPU), on Windows 10.

This is a matter of how TF works. One needs to understand that TF has a "hidden" state: the graph being built. Most tf functions create ops in this graph (every tf.Variable call, every arithmetic operation, and so on). The actual "execution", on the other hand, happens in a tf.Session(). Consequently, your code will usually look like this:

build_graph()

with tf.Session() as sess:
  process_something()
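For example, a minimal concrete version of this pattern (the names here are hypothetical; any graph-building code behaves the same way) could look like:

def build_graph():
    # graph construction only: this creates ops and variables in the default graph
    x = tf.placeholder(tf.float32, shape=(None, 1))
    w = tf.get_variable('w', shape=[1, 1], initializer=tf.zeros_initializer())
    return x, tf.matmul(x, w)

x, out = build_graph()  # build the graph once

with tf.Session() as sess:  # execution happens only here
    sess.run(tf.global_variables_initializer())
    print(sess.run(out, feed_dict={x: [[1.0]]}))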

Since all actual variables, results, etc. live in the session only, if you want to "run it twice" you would do

build_graph()

with tf.Session() as sess:
  process_something()

with tf.Session() as sess:
  process_something()
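Each session starts from scratch here: variables have to be (re)initialized per session, and nothing carries over from one session to the next. A small sketch demonstrating this:

v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())
inc = tf.assign_add(v, 1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(inc))  # 1.0

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(inc))  # 1.0 again - the second session never sees the first
                          # session's value, because that state lives in the session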

Notice that in the pattern above I am building the graph only once. The graph is an abstract representation of how things are wired together; it does not hold any state of the computations. When you try to do

build_graph()

with tf.Session() as sess:
  process_something()

build_graph()

with tf.Session() as sess:
  process_something()

you might get errors during the second build_graph(), due to trying to create variables with the same names (which is what happens in your case), the graph being finalised, etc. If you really need to run things this way, you simply have to reset the graph in between:

build_graph()

with tf.Session() as sess:
  process_something()

tf.reset_default_graph()

build_graph()

with tf.Session() as sess:
  process_something()

will work fine.
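Applied to the code from the question, it is enough to call tf.reset_default_graph() before the graph-building part of the script (or, in a console, before re-running it), so that every run starts from an empty graph. A minimal sketch of the idea:

import tensorflow as tf

tf.reset_default_graph()  # wipe the default graph before (re)building it

with tf.variable_scope('layer1'):
    # no collision anymore: any previously created 'layer1/weights1' is gone
    weights1 = tf.get_variable('weights1', shape=[9, 100],
                               initializer=tf.contrib.layers.xavier_initializer())

with tf.Session() as session:
    session.run(tf.global_variables_initializer())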
