
Error initializing a tf.Variable when trying to define the NN as a class

I'm trying to define a simple tensorflow graph using a python class as follows:

import numpy as np
import tensorflow as tf

class NNclass:

    def __init__(self, state_d, action_d, state):
        self.s_dim = state_d
        self.a_dim = action_d
        self.state = state
        self.prediction

    @property
    def prediction(self):
        a = tf.constant(5, dtype=tf.float32)
        w1 = tf.Variable(np.random.normal(0, 1))
        return tf.add(a, w1)

state = tf.placeholder(tf.float64, shape=[None, 1])
NN_instance = NNclass(1, 2, state)

ses = tf.Session()
ses.run(tf.global_variables_initializer())

nn_input = np.array([[0.5], [0.7]])
print(ses.run(NN_instance.prediction,  feed_dict={state: nn_input}))

When I run this code I get the following error:

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Variable_1

The way I see it, I create an instance of NNclass and the TF graph gets built because __init__ accesses the prediction property. But I don't understand why running this yields the above error. Any help please, thanks.

tf.global_variables_initializer() should be called after all variables are created. In your example, the prediction property creates the w1 variable, and because it is a property, a new variable is built on every access; the one you access inside ses.run() is created after the initializer has already run, so it is still uninitialized.
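For instance, a minimal sketch of the ordering fix (keeping your class exactly as posted, assuming TF 1.x): capture the prediction tensor once, then run the initializer, and reuse that same tensor in ses.run() so no new variable is created afterwards.

state = tf.placeholder(tf.float64, shape=[None, 1])
NN_instance = NNclass(1, 2, state)
pred = NN_instance.prediction  # w1 (and the add op) is created here

ses = tf.Session()
ses.run(tf.global_variables_initializer())  # covers every variable built so far

nn_input = np.array([[0.5], [0.7]])
print(ses.run(pred, feed_dict={state: nn_input}))  # reuse the same tensor; nothing new is built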

Alternatively, you may create the variables inside the __init__ function, like below:

class NNclass:
    def __init__(self, state_d, action_d, state):
        self.s_dim = state_d
        self.a_dim = action_d
        self.state = state
        self.a = tf.constant(5, dtype=tf.float32)
        self.w1 = tf.Variable(np.random.normal(0, 1))

    @property
    def prediction(self):
        return tf.add(self.a, self.w1)
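With that change, a usage sketch like the one below (same placeholder, imports and session setup as in the question) works, because every access of prediction reuses the same self.w1 instead of creating a new variable:

state = tf.placeholder(tf.float64, shape=[None, 1])
NN_instance = NNclass(1, 2, state)

ses = tf.Session()
ses.run(tf.global_variables_initializer())  # self.w1 already exists at this point

nn_input = np.array([[0.5], [0.7]])
print(ses.run(NN_instance.prediction, feed_dict={state: nn_input}))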

It's not best practice to pass the result of a function (here, a property that builds new ops on every access) to sess.run() as you are doing, and this is what is causing the confusion.

A better practice for configuring your network would be to create a build_graph() function where all the tensorflow operations are defined. Then return the tensors you will need to compute (better yet, store them in a dictionary or save them as properties of the object).

Example:

def build_graph():
  a = tf.constant(5, dtype=tf.float32)
  w1 = tf.Variable(np.random.normal(0, 1))
  a_plus_w = tf.add(a, w1)
  state = tf.placeholder(tf.float64, shape=[None, 1])
  return a_plus_w, state

a_plus_w, state = build_graph()
sess = tf.Session()
sess.run(tf.global_variables_initializer())

nn_input = np.array([[0.5], [0.7]])
print(sess.run(a_plus_w,  feed_dict={state: nn_input}))

The key error you're making is that you aren't separating the two phases of development in tensorflow. First there is a "build graph" phase, where you define all the math operations you want to perform; second there is an "execution" phase, where you use sess.run to ask tensorflow to perform computations for you. When you call sess.run you need to pass tensorflow a tensor (a tf object that has already been defined in the graph) that you want computed. You shouldn't be passing tensorflow a function to execute.
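To see this concretely, here is a small sketch (using the original NNclass from the question, assuming TF 1.x) showing that merely accessing the prediction property adds new nodes, including a fresh uninitialized variable, to the graph:

g = tf.Graph()
with g.as_default():
    nn = NNclass(1, 2, tf.placeholder(tf.float64, shape=[None, 1]))
    before = len(g.get_operations())
    _ = nn.prediction                # builds a new constant, Variable and add op
    after = len(g.get_operations())
    print(after - before)            # > 0: the graph grew from a plain attribute access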
