

How to put tf.layers variables in tf.name_scope/tf.variable_scope?

I have a problem with Tensorflow:

The following code produces a correct(ish) graph for a convolutional block:

def conv_layer(self, inputs, filter_size = 3, num_filters = 256, name = None):
    scope_name = name
    if name is None:
        scope_name = "conv_layer"

    with tf.name_scope(scope_name):
        conv = tf.contrib.layers.conv2d(inputs, num_filters, filter_size, activation_fn = None)
        batch_norm = tf.contrib.layers.batch_norm(conv)
        act = tf.nn.leaky_relu(batch_norm)

        return act

The problem is that the tf.layers API creates some ugly variables that do not actually stay within the name_scope. Here is the Tensorboard view so you can see what I mean.

[Tensorboard graph screenshot]

Is there any way to get those variables to go into the scope? This is a big problem when it comes to visualizing the graph, because I plan for this network to be much larger. (As you can see on the right, this is already a big problem; I have to remove those nodes from the main graph manually every time I boot up Tensorboard.)

You can try using tf.variable_scope instead. tf.name_scope is ignored by variables created via tf.get_variable(), which is what the tf.layers functions usually use. This is in contrast to variables created via tf.Variable.

See this question for an (albeit somewhat outdated) explanation of the differences.
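The difference is easy to verify directly. A minimal sketch (written against the TF 1.x-style API via tf.compat.v1, since tf.contrib is TF1-only) that prints the resulting variable names:

```python
# Demonstrates that tf.get_variable() ignores tf.name_scope
# but respects tf.variable_scope, while tf.Variable respects both.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.name_scope("ns"):
    v1 = tf.get_variable("v1", shape=[1])  # name_scope is ignored
    v2 = tf.Variable([0.0], name="v2")     # name_scope is applied

with tf.variable_scope("vs"):
    v3 = tf.get_variable("v3", shape=[1])  # variable_scope is applied

print(v1.name)  # v1:0   -- escaped the scope, like the layer variables above
print(v2.name)  # ns/v2:0
print(v3.name)  # vs/v3:0
```

Since tf.layers/tf.contrib.layers create their weights through tf.get_variable(), they behave like v1 here, which is exactly why they escape a plain name_scope.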

Solution moved from the question to an answer:

Changing each instance of name_scope to variable_scope resolved the problem. However, I had to assign each variable_scope a unique ID and set reuse = False.

def conv_layer(self, inputs, filter_size = 3, num_filters = 256, name = None):
    scope_name = name
    if name is None:
        # self.conv_id is a per-instance counter used to keep scope names unique
        scope_name = "conv_layer_" + str(self.conv_id)
        self.conv_id += 1

    with tf.variable_scope(scope_name, reuse = False):
        conv = tf.contrib.layers.conv2d(inputs, num_filters, filter_size, activation_fn = None)
        batch_norm = tf.contrib.layers.batch_norm(conv)
        act = tf.nn.leaky_relu(batch_norm)

        return act

As you can see, the variables are now nicely hidden away in the correct blocks.
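As an aside, the manual conv_id counter may not be necessary: tf.variable_scope accepts a default_name argument that is automatically uniquified when the explicit name is None. A minimal sketch of that alternative (hypothetical conv_block helper, TF 1.x-style API):

```python
# tf.variable_scope(name, default_name=...) auto-uniquifies the scope
# ("conv_layer", "conv_layer_1", ...) when name is None.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def conv_block(inputs, name=None):
    with tf.variable_scope(name, default_name="conv_layer") as scope:
        w = tf.get_variable("w", shape=[3, 3, 1, 1])
        return scope.name

x = tf.placeholder(tf.float32, [None, 8, 8, 1])
first = conv_block(x)
second = conv_block(x)
print(first, second)  # conv_layer conv_layer_1
```

This keeps each block's variables in their own scope without tracking a counter on the class.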

