The Keras layers (functions) corresponding to tf.nn.conv2d_transpose
In Keras, what are the layers (functions) corresponding to tf.nn.conv2d_transpose in TensorFlow?
I once saw a comment saying that we can "just use combinations of UpSampling2D and Convolution2D as appropriate". Is that right?
In the following two examples, they both use this kind of combination.
1) In Building Autoencoders in Keras, the author builds the decoder as follows.
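In the first example, the decoder is a stack of Convolution2D and UpSampling2D layers. A minimal NumPy sketch of that building block - 2x nearest-neighbour upsampling followed by a 'same'-padded convolution - using an illustrative averaging kernel and sizes, not the blog post's actual layers or learned weights:

```python
import numpy as np

def upsample2d(x, size=2):
    """Nearest-neighbour upsampling, like Keras UpSampling2D."""
    return x.repeat(size, axis=0).repeat(size, axis=1)

def conv2d_same(x, kernel):
    """Naive 'same'-padded single-channel 2D convolution (cross-correlation)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 feature map
kernel = np.full((3, 3), 1.0 / 9.0)           # illustrative 3x3 averaging kernel
up = upsample2d(x)                            # spatial size doubles: 4x4 -> 8x8
out = conv2d_same(up, kernel)                 # convolution keeps the 8x8 size
print(up.shape, out.shape)                    # (8, 8) (8, 8)
```

The upsample doubles the spatial resolution and the convolution then mixes the duplicated values, which is the decoder pattern the question is asking about.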
2) In a u-net implementation, the author builds the deconvolution as follows:
# Keras 1 API, as used in the quoted u-net implementation
from keras.layers import Convolution2D, UpSampling2D, merge

up6 = merge([UpSampling2D(size=(2, 2))(conv5), conv4], mode='concat', concat_axis=1)
conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(up6)
conv6 = Convolution2D(256, 3, 3, activation='relu', border_mode='same')(conv6)
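In NumPy terms, the merge line above performs a 2x nearest-neighbour upsample of conv5 and concatenates the result with conv4 along the channel axis (axis 1, i.e. channels-first ordering). A rough sketch with random arrays standing in for the feature maps; the channel counts and spatial sizes are illustrative guesses, not the u-net's exact shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
conv5 = rng.standard_normal((1, 512, 16, 16))   # (batch, channels, h, w)
conv4 = rng.standard_normal((1, 256, 32, 32))

# UpSampling2D(size=(2, 2)): repeat each pixel twice along both spatial axes
up5 = conv5.repeat(2, axis=2).repeat(2, axis=3)  # -> (1, 512, 32, 32)

# merge(..., mode='concat', concat_axis=1): stack along the channel axis
up6 = np.concatenate([up5, conv4], axis=1)       # -> (1, 768, 32, 32)
print(up6.shape)
```

After the upsample, conv5's feature map matches conv4's spatial size, so the two can be concatenated channel-wise, exactly as in the skip connections of a u-net.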
The corresponding layers in Keras are the Deconvolution2D layers.
It's worth mentioning that you should be really careful with them, because they can sometimes behave in unexpected ways. I strongly advise you to read this Stack Overflow question (and its answer) before you start using this layer.
UPDATE:
Deconvolution is a relatively recently added layer - maybe that is why people advise you to use Convolution2D * UpSampling2D. Actually - from a mathematical point of view - every deconvolution may be composed of Convolution2D and UpSampling2D - so maybe this is the reason why it was mentioned in the texts you provided.

UPDATE 2:
Ok. I think I found an easy explanation of why Deconvolution2D can be presented as a composition of Convolution2D and UpSampling2D. We use the definition that Deconvolution2D is the gradient of some convolution layer. Let's consider the three most common cases:
1. Convolution2D without any pooling - the simplest case. Since convolution is a linear operation, its gradient is a linear map of the same kind - so again a Convolution2D.
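The claim in case 1 can be checked numerically: writing a 'same'-padded convolution as a banded matrix W, backpropagation applies W.T, which is itself a convolution - with the flipped kernel. A small NumPy sketch, in 1D for readability (the 2D case is analogous):

```python
import numpy as np

# A 'same'-padded 1D convolution y = W @ x, written as its banded matrix W.
kernel = np.array([1.0, 2.0, 3.0])
n = 5
W = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(kernel):
        j = i + k - 1          # kernel centred on position i
        if 0 <= j < n:
            W[i, j] = w

# Backprop through y = W @ x maps an upstream gradient g to W.T @ g.
g = np.arange(1.0, n + 1.0)
grad_x = W.T @ g

# W.T is again a banded convolution matrix whose kernel is simply flipped:
flipped = kernel[::-1]                     # [3, 2, 1]
Wf = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(flipped):
        j = i + k - 1
        if 0 <= j < n:
            Wf[i, j] = w
print(np.allclose(grad_x, Wf @ g))         # True: the gradient is a convolution
```

So the gradient of a (stride-1, 'same') convolution is itself a convolution, which is exactly what case 1 asserts.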
2. Convolution2D with AveragePooling - this is trickier. Here: (AveragePooling2D * Convolution2D)' = AveragePooling2D' * Convolution2D'. But the gradient of AveragePooling2D is UpSampling2D * constant - so the proposition is true in this case as well.
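This can also be checked numerically: the transposed Jacobian of average pooling is exactly nearest-neighbour upsampling times a constant. A small 1D NumPy sketch:

```python
import numpy as np

n = 8
# 1D average pooling with window 2 as a matrix: P[i, 2i] = P[i, 2i+1] = 1/2
P = np.zeros((n // 2, n))
for i in range(n // 2):
    P[i, 2 * i] = P[i, 2 * i + 1] = 0.5

g = np.arange(1.0, n // 2 + 1.0)            # upstream gradient of the pooled output

# Backward pass: P.T @ g ...
grad_from_matrix = P.T @ g

# ... equals nearest-neighbour upsampling of g times the constant 1/2
grad_from_upsample = g.repeat(2) * 0.5
print(np.allclose(grad_from_matrix, grad_from_upsample))   # True
```

So AveragePooling2D' really is "UpSampling2D times a constant" (here the constant is 1 over the window size), confirming case 2.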
3. MaxPooling2D - the trickiest case. Here we still have (MaxPooling2D * Convolution2D)' = MaxPooling2D' * Convolution2D', but MaxPooling2D' != UpSampling2D. In this case, however, one can easily find a simple Convolution2D which makes MaxPooling2D' = Convolution2D * UpSampling2D (intuitively, the gradient of MaxPooling2D is a zero matrix with a single 1 per row, at the argmax of each window; since Convolution2D can express a matrix operation, it can also represent the injection from an identity matrix to a MaxPooling2D gradient). So: (MaxPooling2D * Convolution2D)' = UpSampling2D * Convolution2D * Convolution2D = UpSampling2D * Convolution2D' (the composition of two convolutions is itself a convolution).
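A small 1D NumPy sketch of this case: the Jacobian of max pooling is a 0/1 selection matrix with a single 1 per row, so the backward pass amounts to an upsample followed by a mask - a selection that a suitably chosen linear (convolution-like) map can express:

```python
import numpy as np

x = np.array([3.0, 1.0, 0.0, 4.0, 2.0, 2.0])   # length-6 input, pool window 2

# 1D max pooling with window 2, and its Jacobian: a 0/1 selection matrix M
# with exactly one 1 per row, at the argmax of each window.
pooled = x.reshape(-1, 2).max(axis=1)
M = np.zeros((len(pooled), len(x)))
for i in range(len(pooled)):
    M[i, 2 * i + np.argmax(x[2 * i:2 * i + 2])] = 1.0

g = np.array([1.0, 1.0, 1.0])   # upstream gradient

# Backward pass: M.T @ g routes each gradient entry to its argmax position --
# equivalently, upsample g and zero out the non-max positions.
grad_x = M.T @ g
mask = (M.sum(axis=0) > 0).astype(float)
print(np.allclose(grad_x, g.repeat(2) * mask))   # True
```

The mask depends on the input (which argmax won), so this is not a fixed UpSampling2D - but it is still an upsample composed with a linear selection, which is the point of case 3.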
The final remark is that all parts of the proof have shown that Deconvolution2D is a composition of UpSampling2D and Convolution2D, rather than the opposite. One can easily prove that every function of the form of a composition of Convolution2D and UpSampling2D may also be presented as a composition of UpSampling2D and Convolution2D. So basically - the proof is done :)