
Using weights initializer with tf.nn.conv2d

With tf.layers.conv2d, setting the initializer is easy: it can be done through its kernel_initializer argument. But how do I do it with tf.nn.conv2d? I use the code below. Is this equivalent to setting the kernel_initializer argument of tf.layers.conv2d? Although the program runs without errors, I don't know how to verify that it does what I expect.

with tf.name_scope('conv1_2') as scope:
    kernel = tf.get_variable(initializer=tf.contrib.layers.xavier_initializer(),
                             shape=[3, 3, 32, 32], name='weights')
    conv = tf.nn.conv2d(conv1_1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[32], dtype=tf.float32),
                         trainable=True, name='biases')
    out = tf.nn.bias_add(conv, biases)
    self.conv1_2 = tf.nn.relu(out, name=scope)
    self.parameters += [kernel, biases]
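One way to sanity-check the drawn values: tf.contrib.layers.xavier_initializer defaults to the Glorot uniform scheme, which samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). A minimal numpy sketch of what that bound works out to for the 3x3x32x32 kernel above (this only computes the bound, it is not TF code):

```python
import numpy as np

# Glorot/Xavier uniform draws from U(-limit, limit), where
# limit = sqrt(6 / (fan_in + fan_out)); for a conv kernel of
# shape [h, w, in_ch, out_ch], fan_in = h*w*in_ch and fan_out = h*w*out_ch.
h, w, in_ch, out_ch = 3, 3, 32, 32
fan_in = h * w * in_ch    # 288
fan_out = h * w * out_ch  # 288
limit = np.sqrt(6.0 / (fan_in + fan_out))
print(round(limit, 4))    # ~0.1021
```

If the initializer was applied, every entry of the evaluated kernel should lie within [-limit, limit].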

The operations below are the same (see here).

As for the kernel and its initialization, I took a glance at the source code and it all looks the same: layers.conv2d calls tf.get_variable at the end of the day.

But I wanted to see it empirically, so here is a test code that declares a conv2d using each method (tf.layers.conv2d and tf.nn.conv2d), evaluates the initialized kernels and compares them.

I arbitrarily set the things that should not interfere with the comparison, such as the input tensor and the strides.

import tensorflow as tf
import numpy as np


# the way you described in your question
def _nn(input_tensor, initializer, filters, size):
    kernel = tf.get_variable(
        initializer=initializer, 
        shape=[size, size, 32, filters],
        name='kernel')

    conv = tf.nn.conv2d(
        input=input_tensor,
        filter=kernel,
        strides=[1, 1, 1, 1],
        padding='SAME')

    return kernel

# the other way
def _layer(input_tensor, initializer, filters, size):
    tf.layers.conv2d(
        inputs=input_tensor,
        filters=filters,
        kernel_size=size,
        kernel_initializer=initializer)

    # 'conv2d/kernel:0' is the name of the generated kernel
    return tf.get_default_graph().get_tensor_by_name('conv2d/kernel:0')

def _get_kernel(method):
    # an isolated context for each conv2d
    graph = tf.Graph()
    sess = tf.Session(graph=graph)

    with graph.as_default(), sess.as_default():
        # important so that same randomness doesnt play a role
        tf.set_random_seed(42)

        # arbitrary input tensor with compatible shape
        input_tensor = tf.constant(1.0, shape=[1, 64, 64, 32])

        initializer = tf.contrib.layers.xavier_initializer()

        kernel = method(
            input_tensor=input_tensor,
            initializer=initializer,
            filters=32,
            size=3)

        sess.run(tf.global_variables_initializer())
        return sess.run(kernel)

if __name__ == '__main__':
    kernel_nn = _get_kernel(_nn)
    kernel_layer = _get_kernel(_layer)

    print('kernels are ', end='')
    # compares shape and values
    if np.array_equal(kernel_layer, kernel_nn):
        print('exactly the same')
    else:
        print('not the same!')

And the output is... kernels are exactly the same
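The exact match is expected: both graphs set the same random seed, so the initializer draws identical values no matter which API requests them. The same effect can be sketched in plain numpy (glorot_uniform here is a hypothetical re-implementation for illustration, not part of TF):

```python
import numpy as np

def glorot_uniform(shape, seed):
    # hypothetical numpy version of Glorot/Xavier uniform for a
    # conv kernel of shape [h, w, in_ch, out_ch]
    receptive = shape[0] * shape[1]
    fan_in, fan_out = receptive * shape[2], receptive * shape[3]
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.RandomState(seed).uniform(-limit, limit, size=shape)

# same seed -> bit-identical draws, regardless of who asks for them
a = glorot_uniform((3, 3, 32, 32), seed=42)
b = glorot_uniform((3, 3, 32, 32), seed=42)
print(np.array_equal(a, b))  # True
```

This is why np.array_equal (an exact comparison) succeeds in the test above instead of needing a tolerance-based check.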

Docs, btw: tf.nn.conv2d and tf.layers.conv2d
