
OpenCV 2.4.10 Face detection works with video but fails to detect in a static image

I'm using OpenCV's Cascade Classifier in order to detect faces. I followed the webcam tutorial, and I was able to use detectMultiScale to find and track my face while it was streaming video from my laptop's webcam.

But when I take a photo of myself with my laptop's webcam, load that image into OpenCV, and apply detectMultiScale to it, the Cascade Classifier can't detect any faces in that static image!

That static image would definitely have been detected if it were one frame from my webcam stream, but when I take that one individual image alone, nothing is detected.

Here's the code I use (just picked out the relevant lines):

Code in Common:

String face_cascade_name = "/path/to/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;

Mat imagePreprocessing(Mat frame) {
    Mat processed_frame;
    cvtColor( frame, processed_frame, COLOR_BGR2GRAY );
    equalizeHist( processed_frame, processed_frame );
    return processed_frame;
}

For Web-cam streaming face detection:

int detectThroughWebCam() {
    VideoCapture capture;
    Mat frame;
    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; };


    //-- 2. Read the video stream
    capture.open( -1 );
    if ( ! capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }

    while ( capture.read(frame) )
    {
         if(frame.empty()) {
             printf(" --(!) No captured frame -- Break!");
             break;
         }
         //-- 3. Apply the classifier to the frame
         Mat processed_frame = imagePreprocessing( frame );
         vector<Rect> faces;
         face_cascade.detectMultiScale( processed_frame, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
         if (faces.size() > 0) cout << "SUCCESS" << endl;
         int c = waitKey(10);
         if( (char)c == 27 ) { break; } // escape
    }
    return 0;
}

For my static image face detection:

void staticFaceDetection() {
    Mat image = imread("path/to/jpg/image");
    Mat processed_frame = imagePreprocessing(image);
    std::vector<Rect> faces;
    //-- Detect faces
    face_cascade.detectMultiScale( processed_frame, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
    if (faces.size() > 0) cout << "SUCCESS" << endl;
}

In my eyes, both of these processes are identical (the only difference being where I'm acquiring the original image), but the video stream version regularly detects faces, while the static method never seems to be able to find a face.

Am I missing something here?

There are a few possible reasons for that.

  1. You save the image at a low resolution. Try saving it at the original resolution.

  2. Lossy compression. Do you save the image as a .jpg file? Maybe your compression is too strong. Try saving it as a BMP file (it preserves the original quality).

  3. Format of the image. I don't know what your imagePreprocessing() method does, but you might introduce the following problem. The camera captures video in a specific format (most cameras use YUV), and typically face detection is performed on the first plane, Y. When you save the image and read it from disk as RGB, you must not run the face detection on the first plane: that would now be the 'B' plane, and the blue channel carries very little information about the face. Make sure that you correctly convert the image to gray-scale before you run the face detection.

  4. Range of the image. This is a common mistake. Make sure that the dynamic range of the image is correct. Sometimes by mistake you might multiply all the values by 255, effectively turning the entire image white.

  5. Maybe face detection on images works fine but you somehow clear the faces vector after detection. Another mistake might be reading a different image file; for example, you save images to directory 'A' but accidentally read from directory 'B'.

If none of the above helps, try the following debugging procedure. For a video frame 'i', keep it in memory, save it to disk, and read it back from the file into memory. Now the most important part: compare the two images. If they differ, that is the reason for the different face detection results; if not, further investigation is needed. I am fairly sure the images will not be identical, and that is the problem. You can see where the images differ by taking the difference between pixel values and displaying the diff image, or you can compare them with the memcmp() function, which compares two memory blocks. Good luck.

Solved it!

Really stupid mistake. I didn't call face_cascade.load() to load the Haar cascade in the static image version, but I did in the webcam version.

It's all working now.
