
Implementing HoG feature detection for a sliding window classifier

I've written a classifier and now want to apply it to detect objects in images. My classifier already has models of the HoG features of some objects I'm interested in. I've got the sliding window working, which also slides across the image at different scales, and I can compute the HoG features for each window. My question is: what's the next step?

Is it really as simple as matching the model's HoG features against the features from the window? I understand that with integral images, there's a threshold value for each class (such as face or not face), and if the computed value of the window-generated image is close enough to the class's values and doesn't cross the threshold, then we say we've got a match.

But how does it work with HoG features?

Yes, it is as simple as that. Once you have your HOG model and your windows, you only need to apply the window features to the model and then select the best result (using a threshold or not, depending on your application).
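As a minimal sketch of that matching step, assuming a linear SVM (so the per-window score is just a dot product plus a bias; the variable names here are illustrative and not taken from the code below):

% Illustrative only: score one window's HOG descriptor against a linear
% SVM model given by a weight vector w and a bias b (assumed names).
score = dot(w, hogVector) + b;    % hogVector = HOG descriptor of the window

if score > scoreThreshold         % threshold chosen on a validation set
    % treat this window as a detection candidate
end

If you only want the single best window, you can skip the threshold and simply keep the window with the maximum score, which is what the code below does.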

Here is some sample code that performs these steps. The key part is the following:

function detect(im, model, wSize)
%{
This function takes three parameters:
 1.  im      --> test image
 2.  model   --> trained model
 3.  wSize   --> size of the window, e.g. [24, 32]
and draws a rectangle on the best estimated window.
%}

topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol, bottomRightRow, ~] = size(im);

fcount = 1;

% These loops scan the entire image and extract a HOG feature
% vector for each sliding-window position.
for y = topLeftCol:bottomRightCol-wSize(2)
    for x = topLeftRow:bottomRightRow-wSize(1)
        p1 = [x, y];
        p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
        po = [p1; p2];
        img = imcut(po, im);                   % crop the current window
        featureVector{fcount} = HOG(double(img));
        boxPoint{fcount} = [x, y];             % remember where it came from
        fcount = fcount + 1;
    end
end

% Dummy labels: only needed because the classification call expects a label vector.
label = ones(length(featureVector), 1);
P = cell2mat(featureVector);
% Each row of P' corresponds to one window. Classify every window in one call;
% adapt this line to the interface of the SVM package the model was trained with.
[~, predictions] = svmclassify(P', label, model);

% Keep the window with the highest prediction score and draw it.
[~, indx] = max(predictions);
bBox = cell2mat(boxPoint(indx));
rectangle('Position', [bBox(1), bBox(2), wSize(1), wSize(2)], ...
          'LineWidth', 1, 'EdgeColor', 'r');
end

For each window, the code extracts the HOG descriptor and stores it in featureVector. Then, using svmclassify, the code detects the windows where the object is present.
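For completeness, a hypothetical call site could look like the snippet below. The image file name and the 24x32 window size are assumptions, and model is whatever SVM model you trained on HOG descriptors of that window size; note that rectangle draws on the current axes, so the image should be displayed first:

im = imread('test.jpg');          % hypothetical test image
figure;
imshow(im);                       % rectangle() inside detect() draws on these axes
detect(im, model, [24, 32]);      % draws a red box around the best-scoring window

To cover the different scales mentioned in the question, you can run the same detection on resized copies of the image (for example with imresize) and map the winning box back to the original image coordinates.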
