
TensorFlow: do I need reuse=True when doing inference?

Do I need to set reuse=True in TensorFlow once training is finished and I run inference? I have a network like this:

def __build_net(self, placeholder, reuse=False):
    with tf.variable_scope('siamse', reuse=reuse):
        layer = tf.layers.dense(placeholder, 3000, activation=tf.nn.leaky_relu)
        layer = tf.layers.batch_normalization(layer)
        embedding = tf.layers.dense(layer, 300, activation=tf.nn.leaky_relu)
        print('Siamse Net has built', flush=True)
    return embedding

And I create two networks that share the same parameters:

self.embedding1 = self.__build_net(self.centers_placeholder)
self.embedding2 = self.__build_net(self.neighbors_placeholder, reuse=True)

I use this network to generate embeddings for some data.

My question is: do I need to set reuse to True when doing inference (generating the embeddings), like this:

with tf.Session() as sess:
    self.saver.restore(sess, self.store_path+self.model_type+'_model_'+str(self.model_num)+'_'+str(self.center_size)+'_'+str(self.neighbor_size)+'.ckpt')
    embedding = self.__build_net(self.centers_placeholder, reuse=True)
    embeddings = sess.run(embedding, feed_dict={self.centers_placeholder: data})

Or like this:

with tf.Session() as sess:
    self.saver.restore(sess, self.store_path+self.model_type+'_model_'+str(self.model_num)+'_'+str(self.center_size)+'_'+str(self.neighbor_size)+'.ckpt')
    embedding = self.__build_net(self.centers_placeholder, reuse=False)
    embeddings = sess.run(embedding, feed_dict={self.centers_placeholder: data})

Also, when setting the variable scope, do I need to give a name to each layer?

Thanks!

No. reuse is not about training versus inference; it controls whether tf.get_variable returns a previously defined variable instead of creating a new one.

Say you've created a variable called 'foo/v':

with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])
    print(v.name)   # foo/v:0

Running the following in the same graph gives:

with tf.variable_scope("foo"):
    v1 = tf.get_variable("v", [1])   # error: a variable named 'foo/v' already exists
    print(v1.name)
with tf.variable_scope("foo", reuse=False):
    v1 = tf.get_variable("v", [1])   # error: a variable named 'foo/v' already exists
    print(v1.name)
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])
    print(v1.name)   # foo/v:0
with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
    v1 = tf.get_variable("v", [1])
    print(v1.name)   # foo/v:0

But if you run the following from the very beginning (i.e. in a fresh graph):

with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])   # error: 'foo/v' does not exist, so it cannot be reused
    print(v1.name)
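With reuse=True the second lookup returns the very same variable object, which you can check directly. A minimal sketch, using tf.compat.v1 so it also runs under TensorFlow 2.x; names follow the example above:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

with tf.variable_scope("foo"):
    v = tf.get_variable("v", [1])

with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable("v", [1])

# Both names point at one underlying variable:
print(v.name, v1.name)  # foo/v:0 foo/v:0
print(v is v1)          # True
```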

Thus I prefer setting reuse=tf.AUTO_REUSE all the time.
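Applied to the question's network, that removes the need to thread a reuse flag through at all. A sketch under assumptions (hypothetical placeholder shapes, a free function instead of the class method, tf.compat.v1 API):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def build_net(placeholder):
    # AUTO_REUSE creates the variables on the first call and reuses
    # them on every later call, so no reuse flag is needed.
    with tf.variable_scope('siamse', reuse=tf.AUTO_REUSE):
        layer = tf.layers.dense(placeholder, 3000, activation=tf.nn.leaky_relu)
        layer = tf.layers.batch_normalization(layer)
        embedding = tf.layers.dense(layer, 300, activation=tf.nn.leaky_relu)
    return embedding

centers = tf.placeholder(tf.float32, [None, 10])    # shapes are made up
neighbors = tf.placeholder(tf.float32, [None, 10])
emb1 = build_net(centers)
emb2 = build_net(neighbors)  # second call reuses the same weights
```

Calling build_net any number of times adds no new variables after the first call, which is exactly the Siamese weight sharing the question wants.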

For a detailed explanation, please read How Does Variable Scope Work? from the TensorFlow official guide.

By the way, tf.layers.batch_normalization has a training argument that needs to be set to False during inference; see the explanation in its documentation.
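A minimal sketch of that training switch (tf.compat.v1 API; the placeholder shape is made up):

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 4])
# Defaults to False, i.e. inference: batch norm then uses the stored
# moving averages instead of the current batch statistics.
is_training = tf.placeholder_with_default(False, shape=())

y = tf.layers.batch_normalization(x, training=is_training)

# During training, the moving-average updates must be run alongside the
# train op; batch norm registers them in the UPDATE_OPS collection:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
```

At train time you would feed is_training=True and run update_ops together with the optimizer step; at inference time the default False gives deterministic outputs.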
