Keras: Difference between AveragePooling1D layer and GlobalAveragePooling1D layer
I'm a bit confused when it comes to the average pooling layers of Keras. The documentation states the following:

AveragePooling1D: Average pooling for temporal data.

Arguments

- pool_size: Integer, size of the average pooling windows.
- strides: Integer, or None. Factor by which to downscale. E.g. 2 will halve the input. If None, it will default to pool_size.
- padding: One of "valid" or "same" (case-insensitive).
- data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps).
Input shape

- If data_format='channels_last': 3D tensor with shape (batch_size, steps, features)
- If data_format='channels_first': 3D tensor with shape (batch_size, features, steps)

Output shape

- If data_format='channels_last': 3D tensor with shape (batch_size, downsampled_steps, features)
- If data_format='channels_first': 3D tensor with shape (batch_size, features, downsampled_steps)
and

GlobalAveragePooling1D: Global average pooling operation for temporal data.

Arguments

- data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, steps, features) while channels_first corresponds to inputs with shape (batch, features, steps).
Input shape

- If data_format='channels_last': 3D tensor with shape (batch_size, steps, features)
- If data_format='channels_first': 3D tensor with shape (batch_size, features, steps)

Output shape

2D tensor with shape (batch_size, features)
I (think that I) do get the concept of average pooling, but I don't really understand why the GlobalAveragePooling1D layer simply drops the steps dimension. Thank you very much for your answers.
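For reference, a minimal sketch to inspect the two output shapes (assuming TensorFlow 2.x with its bundled Keras; the batch, step, and feature sizes are arbitrary):

```python
import numpy as np
import tensorflow as tf  # assumption: TensorFlow 2.x with bundled Keras

# Toy input: batch of 2 sequences, 4 time steps, 3 features (channels_last).
x = np.random.rand(2, 4, 3).astype("float32")

# Windowed average over pairs of steps: the output stays 3D.
avg = tf.keras.layers.AveragePooling1D(pool_size=2)(x)
# Average over ALL steps: the steps axis disappears, the output is 2D.
glob = tf.keras.layers.GlobalAveragePooling1D()(x)

print(avg.shape)   # (2, 2, 3)
print(glob.shape)  # (2, 3)
```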
GlobalAveragePooling1D is the same as AveragePooling1D with pool_size=steps. So, for each feature dimension, it takes the average over all time steps. The output of AveragePooling1D would thus have shape (batch_size, 1, features) (if data_format='channels_last'). GlobalAveragePooling1D just also flattens out the second (or third, if data_format='channels_first') dimension; that is how you get an output shape equal to (batch_size, features).
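The equivalence described above can be checked directly (a sketch assuming TensorFlow 2.x with its bundled Keras; the input sizes are arbitrary):

```python
import numpy as np
import tensorflow as tf  # assumption: TensorFlow 2.x with bundled Keras

# Toy input: batch of 2 sequences, 5 time steps, 3 features (channels_last).
x = np.random.rand(2, 5, 3).astype("float32")
steps = x.shape[1]

# AveragePooling1D over the full window keeps a length-1 steps axis: (2, 1, 3).
full_window = tf.keras.layers.AveragePooling1D(pool_size=steps)(x)
# GlobalAveragePooling1D yields the same numbers with that axis flattened: (2, 3).
global_avg = tf.keras.layers.GlobalAveragePooling1D()(x)

# Same values, differing only by the squeezed-out steps dimension.
print(np.allclose(full_window[:, 0, :], global_avg, atol=1e-6))  # True
# Both are just a plain mean over the time axis.
print(np.allclose(global_avg, x.mean(axis=1), atol=1e-6))        # True
```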