
HOG Feature Implementation with SVM in MATLAB

I would like to do classification based on HOG features using an SVM.

I understand that a HOG feature vector is the combination of all the histograms from every cell (i.e. it becomes one aggregate descriptor).

I extract HOG features using the MATLAB code on this page for the Dalal-Triggs variant.

For example, I have a 384 x 512 grayscale image, and I extract HOG features with 9 orientations and a cell size of 8. By doing this, I get a 48 x 64 x 36 feature array.
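The arithmetic behind that shape can be sketched as below (Python for illustration; the "x 36" depth assumes the common Dalal-Triggs layout where each cell's 9-bin histogram is normalized in 4 overlapping 2x2 blocks, giving 9 * 4 = 36 values per cell):

```python
# Accounting for the 48 x 64 x 36 HOG shape of a 384 x 512 image
# with 8 x 8 cells (a sketch; the depth of 36 assumes the
# Dalal-Triggs layout: 9 orientations * 4 block normalizations).
rows, cols, cell = 384, 512, 8
orientations, normalizations = 9, 4

cells_down = rows // cell                 # one cell per 8 rows
cells_across = cols // cell               # one cell per 8 columns
depth = orientations * normalizations    # values stored per cell

print(cells_down, cells_across, depth)            # 48 64 36
print(cells_down * cells_across * depth)          # 110592 values per image
```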

How can I make this a histogram and use it with an SVM classifier?

For example, I'll have 7 classes of images and I want to train on them (700 images in total) and then classify new data based on the model generated in the training phase.

I read that for multiclass problems we can train the SVM one-vs-all, which means I have to train 7 classifiers for my 7 classes.

So for the 1st classifier, I'll label the 1st class +1 and the rest 0. For the 2nd classifier, I'll label the 2nd class +1 and the rest 0. And so on.

For example, I have color classes: red, green, blue, yellow, white, black and pink.

So for the 1st classifier, I make it binary: red vs. not red.

For the 2nd classifier, I label green vs. not green. Is that how it works?
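That labelling scheme can be sketched as follows (Python for illustration; the class names are just the ones from the example):

```python
# One-vs-all labelling sketch: for the k-th binary classifier, the k-th
# class gets label 1 and every other class gets label 0.
def one_vs_all_labels(true_labels, positive_class):
    return [1 if y == positive_class else 0 for y in true_labels]

# Toy per-image labels for illustration.
y = ["red", "green", "red", "pink"]

print(one_vs_all_labels(y, "red"))    # labels for the "red vs. not red" model
print(one_vs_all_labels(y, "green"))  # labels for the "green vs. not green" model
```

Each call produces the label vector for one of the 7 binary training runs.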

The syntax to train SVM is:

          SVMStruct = svmtrain(Training,Group)

But in this case, I'll have 7 SVMStruct models.

The syntax to classify / test is:

          Group = svmclassify(SVMStruct,Sample)

How do I handle 7 SVMStruct variables here?

Is that right? Or is there another concept or syntax I need to know?

And for training, I'll have 48 x 64 x 36 features per image. How can I train an SVM on these, given that everything I've read uses a 1xN feature vector?
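The usual answer is to reshape the whole array into one row per image (in MATLAB, `features(:)'` does this). A toy sketch of that flattening, with small dimensions for brevity:

```python
# Flattening sketch: an SVM expects one row vector per image, so a
# 48 x 64 x 36 HOG array becomes a single 1 x 110592 row. Shown here
# with toy dimensions (2 x 3 x 4) instead of the real ones.
def flatten(hog):
    # hog is a nested list indexed as [row][col][depth]
    return [v for row in hog for cell in row for v in cell]

toy = [[[r * 100 + c * 10 + d for d in range(4)] for c in range(3)]
       for r in range(2)]

flat = flatten(toy)
print(len(flat))  # 24 values -> one training row for the SVM
```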

Please help me...

HOG with an SVM is one of the most successful approaches for object detection. To apply this method, yes, you need two different training datasets before they are fed to the SVM classifier. For instance, if you want to detect an apple, you need two training sets: positive images that contain an apple and negative images that do not. Then you extract HOG descriptors from both training sets separately and label them separately (e.g. 1 for positive, 0 for negative). Afterwards, combine the feature vectors from the positive and negative sets and feed them to the SVM classifier.
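That pipeline can be sketched as below (Python for illustration; `hog_of` is a hypothetical stand-in for a real HOG extractor):

```python
# Sketch of the positive/negative pipeline: extract a descriptor per
# image, label positives 1 and negatives 0, and stack everything into
# one training matrix X with a matching label vector y.
def hog_of(image):
    # Placeholder descriptor (an assumption); a real extractor would
    # compute the HOG feature vector of the image here.
    return [float(v) for v in image]

positives = [[1, 2], [3, 4]]   # toy "images" containing the object
negatives = [[5, 6]]           # toy "images" without it

X = [hog_of(im) for im in positives + negatives]
y = [1] * len(positives) + [0] * len(negatives)

print(len(X), y)  # 3 rows, labels [1, 1, 0]
```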

You can use SVM Light or LibSVM; both are easy and beginner-friendly.

The Computer Vision System Toolbox for MATLAB includes the extractHOGFeatures function, and the Statistics Toolbox includes SVM support. Here's an example of how to classify images using HOG and SVM.

1. How can I make this a histogram and use it with an SVM classifier?

One distinction I want to make is that you already have a 'histogram' of oriented gradient features. You now need to give these features as input to the SVM. It would be odd to assign labels to each of these features individually, because the same HOG feature might turn up in another image labelled differently.

In practice, what is done is to build another histogram, called a bag of words, from these HOG features and give that to the SVM as input. The intuition is that if two features are very similar, you would want one representation for both. This reduces the variance in the input data. We then build this new histogram for each image.

A bag of words is created in the following way:

  1. Cluster all the HOG features into 'words'. Say you have 1000 of these words.

  2. Go through all HOG features and assign each feature to the word it is closest to (by Euclidean distance) among all words in the bag.

  3. Count how many features are assigned to each word in the bag. This is the 1xN (N = number of words in the bag) feature histogram that is given as input to the SVM after labeling.
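The three steps above can be sketched as follows (Python for illustration; the word vectors here are toy values, whereas in practice they come from clustering, e.g. k-means, over all training features):

```python
# Bag-of-words sketch: assign each HOG feature to its nearest "word"
# (Euclidean distance) and count the assignments per word.
def nearest_word(feature, words):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(words)), key=lambda i: dist2(feature, words[i]))

def bow_histogram(features, words):
    hist = [0] * len(words)
    for f in features:
        hist[nearest_word(f, words)] += 1
    return hist  # the 1xN vector fed to the SVM

words = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]               # toy "dictionary"
feats = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.8]]   # toy HOG features

print(bow_histogram(feats, words))  # [1, 2, 1]
```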

2. How to carry out multi-class classification using an SVM?

If you retrain the SVM you will get another model. There are two ways to do multiclass SVM using svmtrain:

1) One vs One SVM

Train a model for each pair of label classes with input in the following way:

Example: for model 1 the input will be

bag of words features for Image 1, RED
bag of words features for Image 2, GREEN

and for model 2 the input will be

bag of words features for Image 3, YELLOW
bag of words features for Image 2, GREEN

The above is done for each pair of label classes, giving N(N-1)/2 models. You can then count the votes for each class across the N(N-1)/2 models to decide which label to assign.
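The voting step can be sketched as below (Python for illustration; each pairwise model here is a hypothetical stand-in that returns one of its two class names):

```python
from itertools import combinations
from collections import Counter

# One-vs-one voting sketch: each of the N(N-1)/2 pairwise models votes
# for one of its two classes; the class with the most votes wins.
def predict_one_vs_one(models, classes, x):
    votes = Counter(models[pair](x) for pair in combinations(classes, 2))
    return votes.most_common(1)[0][0]

classes = ["RED", "GREEN", "YELLOW"]
models = {  # toy stand-ins for trained binary SVMs (an assumption)
    ("RED", "GREEN"): lambda x: "RED",
    ("RED", "YELLOW"): lambda x: "RED",
    ("GREEN", "YELLOW"): lambda x: "GREEN",
}

print(predict_one_vs_one(models, classes, None))  # RED (2 votes out of 3)
```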

2) One vs All SVM

Train for each label class with input in the following way:

Example: for model 1 the input will be

bag of words features for Image 1, RED
bag of words features for Image 2, NOT RED

and for model 2 the input will be

bag of words features for Image 2, GREEN
bag of words features for Image 1, NOT GREEN

The above is done for each label class, giving N models. You can then compare the outputs of the N models for each class to find which label to assign.
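A common way to combine the N one-vs-all models is to pick the class whose model is most confident. A sketch (Python for illustration; each model here is a hypothetical stand-in returning a decision score, higher meaning more likely positive):

```python
# One-vs-all prediction sketch: run all N binary models on the sample
# and pick the class with the highest decision score.
def predict_one_vs_all(models, x):
    return max(models, key=lambda cls: models[cls](x))

models = {  # toy stand-ins for trained binary SVMs (an assumption)
    "RED": lambda x: 0.2,
    "GREEN": lambda x: 0.9,
    "BLUE": lambda x: -0.4,
}

print(predict_one_vs_all(models, None))  # GREEN
```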

Read more on category-level classification here: http://www.di.ens.fr/willow/events/cvml2013/materials/slides/tuesday/Tue_bof_summer_school_paris_2013.pdf
