
conv1d and conv2d in tensorflow

For conv2d, assume an input 2D matrix with shape (W, H) and a conv kernel of size (Wk, H), which means the height of the kernel is the same as the height of the input matrix. In this case, can we think that conv1d with kernel size Wk carries out the same computation as conv2d?

For example:

tf.layers.conv2d(
    inputs=...,
    filters=out_dim,
    kernel_size=(Wk, H)  # kernel variable has shape [Wk, H, 1, out_dim]
)

equal to:

tf.layers.conv1d(inputs=..., filters=out_dim, kernel_size=Wk)

They're not the same; the conv2d kernel has many more weights and is going to train differently because of that. Also, depending on what padding is set to, the output of the conv2d operation may not be 1D either.
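To make the shape point concrete, here is a minimal sketch against the TF 1.x API used in the question; the sizes W=10, H=8, Wk=3 and out_dim=4 are made-up values for illustration:

import tensorflow as tf  # TF 1.x API, as in the question

W, H, Wk, out_dim = 10, 8, 3, 4  # hypothetical sizes

x2d = tf.placeholder(tf.float32, [None, W, H, 1])  # NHWC, single channel
x1d = tf.placeholder(tf.float32, [None, W, H])     # width W, H input channels

y2d = tf.layers.conv2d(x2d, filters=out_dim, kernel_size=(Wk, H))
y1d = tf.layers.conv1d(x1d, filters=out_dim, kernel_size=Wk)

print(y2d.shape)  # (?, 8, 1, 4) -- 4D; 'valid' padding collapses height to 1
print(y1d.shape)  # (?, 8, 4)    -- 3D; with padding='same' the conv2d output
                  # would keep height 8 and clearly not be 1D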

tf.nn.conv1d just calls tf.nn.conv2d

This is the description of tf.nn.conv1d:

Internally, this op reshapes the input tensors and invokes tf.nn.conv2d. For example, if data_format does not start with "NC", a tensor of shape [batch, in_width, in_channels] is reshaped to [batch, 1, in_width, in_channels], and the filter is reshaped to [1, filter_width, in_channels, out_channels]. The result is then reshaped back to [batch, out_width, out_channels] (where out_width is a function of the stride and padding as in conv2d) and returned to the caller.
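As a sanity check, the reshaping described above can be reproduced by hand; the following is a minimal sketch against the TF 1.x session API, with all sizes picked arbitrarily:

import numpy as np
import tensorflow as tf  # TF 1.x API

batch, in_width, in_channels, out_channels, filter_width = 2, 10, 3, 5, 4
x = np.random.rand(batch, in_width, in_channels).astype(np.float32)
f = np.random.rand(filter_width, in_channels, out_channels).astype(np.float32)

# Direct 1D convolution.
y1 = tf.nn.conv1d(x, f, stride=1, padding='SAME')

# The same computation spelled out via conv2d, as the docs describe.
x4 = tf.reshape(x, [batch, 1, in_width, in_channels])
f4 = tf.reshape(f, [1, filter_width, in_channels, out_channels])
y2 = tf.nn.conv2d(x4, f4, strides=[1, 1, 1, 1], padding='SAME')
y2 = tf.reshape(y2, [batch, in_width, out_channels])

with tf.Session() as sess:
    print(np.allclose(*sess.run([y1, y2])))  # True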


 