
tf.nn.conv2d_transpose output_shape dynamic batch_size

The documentation of tf.nn.conv2d_transpose says:

tf.nn.conv2d_transpose(
    value,
    filter,
    output_shape,
    strides,
    padding='SAME',
    data_format='NHWC',
    name=None
)

The output_shape argument requires a 1D tensor specifying the shape of the tensor output by this op. Here, since my conv-net part has been built entirely on placeholders with a dynamic batch dimension, I can't seem to devise a workaround for the static batch_size requirement of output_shape for this op.

There are many discussions around the web about this, but I couldn't find any solid solution. Most of them are hacky ones relying on a global_batch_size variable. I wish to know the best possible solution to this problem, since this trained model is going to be shipped as a deployed service.
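A minimal sketch of the situation (the placeholder and filter names here are illustrative, not from the original model):

import tensorflow as tf

# Input with a dynamic (unknown) batch dimension.
x = tf.placeholder(tf.float32, shape=[None, 8, 8, 6])
kernel = tf.get_variable('kernel', shape=[2, 2, 3, 6], dtype=tf.float32)

# This fails: the batch entry of output_shape cannot be None, and
# hard-coding a number would break variable-sized batches at serving time.
# deconv = tf.nn.conv2d_transpose(
#     x, kernel, output_shape=[None, 16, 16, 3],
#     strides=[1, 2, 2, 1], padding='SAME')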

You can use the dynamic shape of a reference tensor, instead of the static one.

Usually, when you use the conv2d_transpose operation, you're "upsampling" a layer in order to obtain a certain shape of another tensor in your network.

If, for instance, you want to replicate the shape of the input_tensor tensor, you can do something like:

import tensorflow as tf

input_tensor = tf.placeholder(dtype=tf.float32, shape=[None, 16, 16, 3])
# static shape
print(input_tensor.shape)

conv_filter = tf.get_variable(
    'conv_filter', shape=[2, 2, 3, 6], dtype=tf.float32)
conv1 = tf.nn.conv2d(
    input_tensor, conv_filter, strides=[1, 2, 2, 1], padding='SAME')
# static shape
print(conv1.shape)

# filter layout for conv2d_transpose: [height, width, output_channels, in_channels]
deconv_filter = tf.get_variable(
    'deconv_filter', shape=[2, 2, 3, 6], dtype=tf.float32)

deconv = tf.nn.conv2d_transpose(
    conv1,
    filter=deconv_filter,
    # use tf.shape to get the dynamic shape of the tensor,
    # known only at RUNTIME
    output_shape=tf.shape(input_tensor),
    strides=[1, 2, 2, 1],
    padding='SAME')
print(deconv.shape)

The program outputs:

(?, 16, 16, 3)
(?, 8, 8, 6)
(?, ?, ?, ?)

As you can see, the last shape is completely unknown at compile time, because I'm setting the output shape of conv2d_transpose with the result of the tf.shape operation, which returns a 1-D tensor that is only evaluated at runtime, so its values can change from run to run.
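If downstream layers need the static shape information back, you can reattach the dimensions you do know with set_shape (a small sketch, assuming the spatial and channel dimensions are fixed as in the example above):

# Reattach the statically known dimensions; the batch stays dynamic.
deconv.set_shape([None, 16, 16, 3])
print(deconv.shape)  # (?, 16, 16, 3)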

You can use the following code to calculate the output_shape parameter for tf.nn.conv2d_transpose based on the input to this layer (input) and the number of outputs from this layer (num_outputs). Of course, you also have the filter size, padding, stride, and data_format as parameters.

def calculate_output_shape(input, filter_size_h, filter_size_w, 
    stride_h, stride_w, num_outputs, padding='SAME', data_format='NHWC'):

    #calculation of the output_shape:
    if data_format == "NHWC":
        input_channel_size = input.get_shape().as_list()[3]
        input_size_h = input.get_shape().as_list()[1]
        input_size_w = input.get_shape().as_list()[2]
        stride_shape = [1, stride_h, stride_w, 1]
        if padding == 'VALID':
            output_size_h = (input_size_h - 1)*stride_h + filter_size_h
            output_size_w = (input_size_w - 1)*stride_w + filter_size_w
        elif padding == 'SAME':
            output_size_h = (input_size_h - 1)*stride_h + 1
            output_size_w = (input_size_w - 1)*stride_w + 1
        else:
            raise ValueError("unknown padding")

        output_shape = tf.stack([tf.shape(input)[0], 
                            output_size_h, output_size_w, 
                            num_outputs])
    elif data_format == "NCHW":
        input_channel_size = input.get_shape().as_list()[1]
        input_size_h = input.get_shape().as_list()[2]
        input_size_w = input.get_shape().as_list()[3]
        stride_shape = [1, 1, stride_h, stride_w]
        if padding == 'VALID':
            output_size_h = (input_size_h - 1)*stride_h + filter_size_h
            output_size_w = (input_size_w - 1)*stride_w + filter_size_w
        elif padding == 'SAME':
            output_size_h = (input_size_h - 1)*stride_h + 1
            output_size_w = (input_size_w - 1)*stride_w + 1
        else:
            raise ValueError("unknown padding")

        # in NCHW the channel dimension comes right after the batch
        output_shape = tf.stack([tf.shape(input)[0], num_outputs,
                                 output_size_h, output_size_w])
    else:
        raise ValueError("unknown data_format")

    return output_shape
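A hypothetical usage sketch (the tensor and filter names are illustrative):

import tensorflow as tf

input_tensor = tf.placeholder(tf.float32, shape=[None, 8, 8, 6])
# filter layout: [height, width, output_channels, in_channels]
deconv_filter = tf.get_variable('f', shape=[3, 3, 3, 6], dtype=tf.float32)

# With VALID padding, stride 2 and a 3x3 filter this yields
# output_shape = [batch, 17, 17, 3], where batch is dynamic.
output_shape = calculate_output_shape(
    input_tensor, filter_size_h=3, filter_size_w=3,
    stride_h=2, stride_w=2, num_outputs=3, padding='VALID')

deconv = tf.nn.conv2d_transpose(
    input_tensor, deconv_filter, output_shape=output_shape,
    strides=[1, 2, 2, 1], padding='VALID')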

You can use a value of -1 to substitute for the exact value of batch_size. Consider the example below, where I convert a variable-batch-sized input tensor of shape (16, 16, 3) to (32, 32, 6).

import tensorflow as tf

input_tensor = tf.placeholder(dtype=tf.float32, shape=[None, 16, 16, 3])
print(input_tensor.shape)

# filter layout for conv2d_transpose: [height, width, output_channels, in_channels]
my_filter = tf.get_variable('filter', shape=[2, 2, 6, 3], dtype=tf.float32)
conv = tf.nn.conv2d_transpose(input_tensor,
                              filter=my_filter,
                              output_shape=[-1, 32, 32, 6],
                              strides=[1, 2, 2, 1],
                              padding='SAME')
print(conv.shape)

This will output:

(?, 16, 16, 3)
(?, 32, 32, 6)

When you need train_batch_size, just use tf.shape(X_batch)[0].
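A small sketch of that idea (X_batch and the target dimensions here are illustrative):

import tensorflow as tf

X_batch = tf.placeholder(tf.float32, shape=[None, 16, 16, 6])
deconv_filter = tf.get_variable('w', shape=[2, 2, 3, 6], dtype=tf.float32)

# Dynamic batch size, static spatial and channel dimensions.
output_shape = tf.stack([tf.shape(X_batch)[0], 32, 32, 3])

deconv = tf.nn.conv2d_transpose(X_batch, deconv_filter,
                                output_shape=output_shape,
                                strides=[1, 2, 2, 1], padding='SAME')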
