
HOG features on different scales

Suppose we calculate HOG features of image patches of different sizes, ranging from 64 * 64 to 128 * 128. If we want to run k-means on these descriptors, should we normalize the patches that belong to different scales? I know HOG features are normalized, but does scale matter?

Normally, HOG representations are normalized. However, you must be careful with the block size: you need the same number of blocks whatever the size of the image, otherwise you obtain descriptors of different lengths and k-means cannot be performed. This means that with larger images you will have larger blocks. The resulting histograms contain information from more gradients, so they are not invariant at this stage; it is the block-wise histogram normalization that gives the final descriptor its scale invariance.
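As a concrete illustration (my own sketch, not part of the original answer), here is one way to keep the number of blocks fixed across patch sizes with scikit-image's hog: the cell size is scaled with the patch side, so a 64 * 64 patch uses 8 * 8-pixel cells and a 128 * 128 patch uses 16 * 16-pixel cells, and every patch yields a descriptor of the same length that can be fed to k-means. The 8-cells-per-side grid and the 2 * 2 block layout are arbitrary choices for the example.

    # Sketch: same number of HOG blocks for patches of different sizes
    # (assumes square grayscale patches and scikit-image; parameter values
    # below are illustrative, not prescribed by the answer).
    import numpy as np
    from skimage.feature import hog

    CELLS_PER_SIDE = 8        # fixed cell grid, independent of patch size
    CELLS_PER_BLOCK = (2, 2)  # 2x2 cells per block, L2-Hys normalized

    def hog_fixed_blocks(patch):
        side = patch.shape[0]                 # square patch assumed
        cell = side // CELLS_PER_SIDE         # larger patch -> larger cells
        return hog(patch,
                   orientations=9,
                   pixels_per_cell=(cell, cell),
                   cells_per_block=CELLS_PER_BLOCK,
                   block_norm='L2-Hys',
                   feature_vector=True)

    # Patches from 64*64 to 128*128 now give equal-length descriptors,
    # so they can be stacked into one matrix and clustered with k-means.
    patches = [np.random.rand(64, 64), np.random.rand(96, 96), np.random.rand(128, 128)]
    X = np.vstack([hog_fixed_blocks(p) for p in patches])
    print(X.shape)  # (3, 1764) with these settings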

If you are not sure whether the histogram normalization is working as expected, you can extract the descriptor for an image and for a resized version of it and compare the two. Good luck!
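For example (again a hedged sketch of my own, not the author's code), you can compute the descriptor for a sample image and for a downscaled copy using the same block layout and check how close the two vectors are; a cosine similarity near 1 suggests the normalization is doing its job.

    # Sketch: compare HOG descriptors of an image and a resized copy
    # (scikit-image assumed; the 8-cells-per-side layout mirrors the
    # previous example and is only an illustrative choice).
    import numpy as np
    from skimage import data, transform
    from skimage.feature import hog

    def descriptor(im, cells_per_side=8):
        cell = im.shape[0] // cells_per_side
        return hog(im, orientations=9,
                   pixels_per_cell=(cell, cell),
                   cells_per_block=(2, 2),
                   block_norm='L2-Hys')

    img = data.camera() / 255.0                            # 512x512 test image
    small = transform.resize(img, (256, 256), anti_aliasing=True)

    d1, d2 = descriptor(img), descriptor(small)
    cos = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    print(f"cosine similarity between the two scales: {cos:.3f}")  # ~1 if scale-invariant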

