Can I use a final pooling layer to find the best common features after concatenating a deep feature vector and a handcrafted feature vector?
I have two feature vectors: a deep feature vector extracted by a CNN, and a handcrafted feature vector extracted with the uniform local binary pattern (LBP). I want to find the best common features after concatenating these two vectors, and I would like to use a final pooling layer for this. Is that possible?
After you have concatenated the two feature vectors, a final pooling layer would help reduce their dimensionality. Could you clarify what exactly you aim to do, and which pooling layer you want to use?
I'm not sure I understand exactly what you mean by "final pooling layer". But in my opinion, adding only a pooling layer after the concatenation layer and before the output layer (e.g., Dense + softmax) may not help much in this case: pooling layers have no learnable parameters, and they simply operate over each activation map independently to reduce its size.
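To illustrate the point about learnable parameters, here is a quick check (a sketch of my own, using PyTorch as an assumed framework) showing that a pooling layer contributes nothing trainable, while a dense layer does:

```python
import torch.nn as nn

# A pooling layer has no weights: it only downsamples each input independently.
pool = nn.MaxPool1d(kernel_size=2)

# A dense (fully connected) layer has learnable weights and biases.
dense = nn.Linear(in_features=100, out_features=10)

print(sum(p.numel() for p in pool.parameters()))   # 0
print(sum(p.numel() for p in dense.parameters()))  # 1010 (100*10 weights + 10 biases)
```

Since the pooling layer has zero trainable parameters, it cannot learn which of the concatenated features are "common" or useful; it can only shrink the tensor.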
One simple feature-fusion approach I would suggest is to apply another subnet (a set of layers such as convolution, pooling, and dense) to the concatenated tensor. That way, the model can keep learning to enhance the good features.
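A minimal sketch of that idea, again assuming PyTorch (the layer sizes, including the 59-bin uniform-LBP histogram for 8 neighbors, are illustrative assumptions, not part of the original answer):

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuses a CNN deep-feature vector with a handcrafted LBP vector
    via a small learnable dense subnet applied after concatenation."""

    def __init__(self, deep_dim=512, lbp_dim=59, fused_dim=128, n_classes=10):
        super().__init__()
        # Learnable fusion subnet over the concatenated vector: unlike a
        # bare pooling layer, its weights can learn to enhance good features.
        self.fusion = nn.Sequential(
            nn.Linear(deep_dim + lbp_dim, 256),
            nn.ReLU(),
            nn.Linear(256, fused_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, deep_feats, lbp_feats):
        x = torch.cat([deep_feats, lbp_feats], dim=1)  # (batch, deep_dim + lbp_dim)
        return self.classifier(self.fusion(x))

model = FusionNet()
deep = torch.randn(4, 512)  # e.g. CNN penultimate-layer features
lbp = torch.randn(4, 59)    # e.g. uniform-LBP histogram (59 bins, P=8)
out = model(deep, lbp)
print(out.shape)  # torch.Size([4, 10])
```

Convolutional layers could replace the dense ones if the concatenated features retain spatial structure; for flat feature vectors like these, dense layers are the natural choice.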