
OpenCV and C++ Fisherface algorithm always predicts wrong, even with exactly the same image as input, and reconstructs a black image

So, as the title says, while using OpenCV with C++ behind a Windows Forms interface, I'm having a problem with the input I feed to the program. Basically, I save the processed face images in a folder and can read them back without any problem; I've checked a hundred times whether anything goes wrong when I read the images back, and nothing does. So I started testing the images the program outputs, and I realized that the reconstructed image taken from the FaceRecognizer is completely black. I tried changing the declarations, but nothing changed. The interesting thing is that this algorithm works in another project I created as a test, which has no interface, only the console, so everything runs in the main function; in that test project, when I reconstruct the image from the FaceRecognizer it returns a normal face image and recognizes the correct person. But in the new project with the interface, using exactly the same image as input, it recognizes everything wrong and reconstructs a black image. I think something is wrong with the FaceRecognizer model, but I have no idea what it is! Can anyone help me?

Here is the code that reads the images back from the folder:

            if (traiModel)
            {
                // read all the image folders and give each one a numeric id
                // read the images in each folder and add them to a vector of images
                // use the folder name as the label info for all of its images
                Preprocessed_Faces.clear();
                faceLabels.clear();
                Preprocessed_Faces_Names.clear();

                std::string folder = "TrainingFolder\\";
                std::vector<cv::string> foldernames;
                foldernames = get_all_files_names_within_folder(folder);
                std::vector<int> labels;
                for (int f = 0; f < foldernames.size(); f++)
                {
                    std::string thisfoldername = folder + foldernames[f];
                    std::vector<cv::string> filenames;
                    cv::glob(thisfoldername, filenames);
                    Preprocessed_Faces_Names.push_back(foldernames[f]);
                    labels.push_back(f + 1);
                    for (int fn = 0; fn < filenames.size(); fn++)
                    {
                        Preprocessed_Faces.push_back(cv::imread(filenames[fn]));
                        //std::cout << filenames[fn] << std::endl;
                        faceLabels.push_back(f + 1);
                    }
                }

                cv::imwrite("Traintest.PNG", Preprocessed_Faces[0]);

                std::map<int, std::string> map1;
                for (int i = 0; i < Preprocessed_Faces_Names.size(); i++)
                {
                    map1.insert(std::pair<int, std::string>(labels[i], Preprocessed_Faces_Names[i]));
                    std::cout << Preprocessed_Faces_Names[i] << std::endl;
                }
                model->setLabelsInfo(map1);
                model->train(Preprocessed_Faces, faceLabels);
                traiModel = false;
            }
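
A quick way to make problems with the training data visible is to inspect the images right after loading them, before calling train(). The check below is not part of the original program, just a sketch: it prints the channel count, depth and size of each loaded face, all of which a FaceRecognizer expects to be consistent across the training set:

    // Hypothetical sanity check (not in the original code): every training face
    // should be single-channel 8-bit and have the same size as the first one.
    for (size_t i = 0; i < Preprocessed_Faces.size(); i++)
    {
        const cv::Mat &face = Preprocessed_Faces[i];
        std::cout << "face " << i
                  << " channels=" << face.channels()
                  << " depth=" << face.depth()      // CV_8U is 0
                  << " size=" << face.cols << "x" << face.rows << std::endl;
        if (face.channels() != 1 || face.size() != Preprocessed_Faces[0].size())
            std::cerr << "face " << i << " does not match the expected format" << std::endl;
    }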

Here is the code used to identify the face; it first tries to reconstruct the face and then does the recognition:

        if (identif)
        {
            // identify the current face looking on the database
            // Prediction Validation
            // Get some required data from the FaceRecognizer model.
            cv::Mat eigenvectors = model->get<cv::Mat>("eigenvectors");
            cv::Mat averageFaceRow = model->get<cv::Mat>("mean");

            // Project the input image onto the eigenspace.
            cv::Mat projection = cv::subspaceProject(eigenvectors, averageFaceRow, filtered.reshape(1, 1));

            // Generate the reconstructed face back from the eigenspace.
            cv::Mat reconstructionRow = cv::subspaceReconstruct(eigenvectors, averageFaceRow, projection);

            // Make it a rectangular shaped image instead of a single row.
            cv::Mat reconstructionMat = reconstructionRow.reshape(1, filtered.rows);

            // Convert the floating-point pixels to regular 8-bit uchar.
            cv::Mat reconstructedFace = cv::Mat(reconstructionMat.size(), CV_8U);
            reconstructionMat.convertTo(reconstructedFace, CV_8U, 1, 0);

            cv::imwrite("Teste.PNG", filtered);
            cv::imwrite("Teste2.PNG", reconstructedFace);

            int identity = model->predict(filtered);
            double similarity = getSimilarity(filtered, reconstructedFace);
            if (similarity > .7f)
            {
                //identity = -1; // -1 means that the face is not registered in the trainer
            }
            std::cout << "This is: " << identity << " and: " << model->getLabelInfo(identity) << std::endl;
            identif = false;
        }
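
getSimilarity() is not shown in the question. A common way to implement it (for example, the approach used in the face-recognition chapter of the Mastering OpenCV book) is the L2 distance between the two images normalised by the number of pixels, so values near 0 mean "very similar" and larger values mean "more different"; a minimal sketch under that assumption:

    // Hypothetical helper (not shown in the question): compare two images of
    // the same size and type by their per-pixel L2 error.
    double getSimilarity(const cv::Mat &A, const cv::Mat &B)
    {
        if (A.rows != B.rows || A.cols != B.cols)
            return 100000000.0;  // different sizes: treat as completely different
        double errorL2 = cv::norm(A, B, CV_L2);
        return errorL2 / (double)(A.rows * A.cols);
    }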

Here is the code that runs the full loop used to recognize faces, along with the declaration of the FaceRecognizer used by the two snippets above:

void RecognitionAlgo::Running()
{
    // Create the cascade classifier object used for the face detection
    cv::CascadeClassifier face_cascade;
    // Use the haarcascade frontalface_alt.xml library
    if (!face_cascade.load("haarcascade_frontalface_alt.xml"))
    {
        Log("Error at face Cascade Load!")
    }


    // Setup image files used in the capture process
    cv::Mat captureFrame;
    cv::Mat grayscaleFrame;
    cv::Mat shrinkFrame;


    // Create a vector to store the face found
    std::vector<cv::Rect> faces;
    std::vector<cv::Mat> Preprocessed_Faces;
    std::vector<cv::string> Preprocessed_Faces_Names;
    cv::string myname;
    std::vector<int> faceLabels;

    // Init the face Recognizer
    cv::initModule_contrib();

    std::string facerecAlgorithm = "FaceRecognizer.Fisherfaces";
    cv::Ptr<cv::FaceRecognizer> model;
    // Use OpenCV's new FaceRecognizer in the "contrib" module;
    model = cv::Algorithm::create<cv::FaceRecognizer>(facerecAlgorithm);
    if (model.empty())
    {
        std::cerr << "ERROR: FaceRecognizer" << std::endl;
    }
    try
    {
        //model->load("TrainedModel.xml");
    }
    catch (const std::exception&)
    {

    }

    // Create a loop to capture and find faces
    while (true)
    {
        // Capture a new image frame
        if (swCamera)
        {
            captureDevice >> captureFrame;
        }
        else
        {
            captureFrame = cv::imread("face3.PNG");
        }

        // Shrink the captured image, Convert to gray scale and equalize
        cv::cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        cv::equalizeHist(grayscaleFrame, grayscaleFrame);
        shrinkFrame = ShrinkingImage(grayscaleFrame);

        // Find faces on the shrink image (because it's faster) and store them in the vector array
        face_cascade.detectMultiScale(shrinkFrame, faces, 1.1, 4, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));

        if (faces.size() > 0)
            faces = EnlargeResults(captureFrame, faces);

        // Draw a rectangle for all found faces in the vector array on original image
        for (int i = 0; i < faces.size(); i++)
        {
            cv::Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
            cv::Point pt2(faces[i].x, faces[i].y);

            cv::Mat theFace = grayscaleFrame(faces[i]);

            // try to preprocess the face by locating the eyes; if eye detection fails, it returns the face unchanged
            cv::Mat filtered = TreatmentForFace(theFace);

            // Collecting faces and learning from them.

            if (starTraining && TrainName != "")
            {
                if (colFace)
                {
                    Preprocessed_Faces.push_back(filtered);
                    if (myname == "")
                    {
                        myname = TrainName;
                    }
                    colFace = false;
                }
            }
            else
            {
                if (!starTraining && Preprocessed_Faces.size() > 0)
                {
                    // create the person folder
                    std::string command = "mkdir ";
                    std::string foldercom = "TrainingFolder\\" + myname;
                    command += foldercom;
                    system(command.c_str());

                    // create a string to access the recent created folder
                    std::string foldername = foldercom.substr(0, foldercom.size() - (myname.size() + 1));
                    foldername.append("/");
                    foldername.append(myname);
                    foldername.append("/");
                    foldername.append(myname);
                    // save the collected faces in the folder
                    for (int i = 0; i < Preprocessed_Faces.size(); i++)
                    {
                        std::ostringstream oss;
                        oss << i;
                        cv::imwrite(foldername + oss.str() + ".PNG", Preprocessed_Faces[i]);
                    }
                    myname = "";
                    Preprocessed_Faces.clear();
                }
            }

            if (traiModel)
            {
                // read all the image folders and give each one a numeric id
                // read the images in each folder and add them to a vector of images
                // use the folder name as the label info for all of its images
                Preprocessed_Faces.clear();
                faceLabels.clear();
                Preprocessed_Faces_Names.clear();

                std::string folder = "TrainingFolder\\";
                std::vector<cv::string> foldernames;
                foldernames = get_all_files_names_within_folder(folder);
                std::vector<int> labels;
                for (int f = 0; f < foldernames.size(); f++)
                {
                    std::string thisfoldername = folder + foldernames[f];
                    std::vector<cv::string> filenames;
                    cv::glob(thisfoldername, filenames);
                    Preprocessed_Faces_Names.push_back(foldernames[f]);
                    labels.push_back(f + 1);
                    for (int fn = 0; fn < filenames.size(); fn++)
                    {
                        Preprocessed_Faces.push_back(cv::imread(filenames[fn]));
                        //std::cout << filenames[fn] << std::endl;
                        faceLabels.push_back(f + 1);
                    }
                }

                cv::imwrite("Traintest.PNG", Preprocessed_Faces[0]);

                std::map<int, std::string> map1;
                for (int i = 0; i < Preprocessed_Faces_Names.size(); i++)
                {
                    map1.insert(std::pair<int, std::string>(labels[i], Preprocessed_Faces_Names[i]));
                    std::cout << Preprocessed_Faces_Names[i] << std::endl;
                }
                model->setLabelsInfo(map1);
                model->train(Preprocessed_Faces, faceLabels);
                traiModel = false;
            }

            if (identif)
            {
                // identify the current face looking on the database
                // Prediction Validation
                // Get some required data from the FaceRecognizer model.
                cv::Mat eigenvectors = model->get<cv::Mat>("eigenvectors");
                cv::Mat averageFaceRow = model->get<cv::Mat>("mean");

                // Project the input image onto the eigenspace.
                cv::Mat projection = cv::subspaceProject(eigenvectors, averageFaceRow, filtered.reshape(1, 1));

                // Generate the reconstructed face back from the eigenspace.
                cv::Mat reconstructionRow = cv::subspaceReconstruct(eigenvectors, averageFaceRow, projection);

                // Make it a rectangular shaped image instead of a single row.
                cv::Mat reconstructionMat = reconstructionRow.reshape(1, filtered.rows);

                // Convert the floating-point pixels to regular 8-bit uchar.
                cv::Mat reconstructedFace = cv::Mat(reconstructionMat.size(), CV_8U);
                reconstructionMat.convertTo(reconstructedFace, CV_8U, 1, 0);

                cv::imwrite("Teste.PNG", filtered);
                cv::imwrite("Teste2.PNG", reconstructedFace);

                int identity = model->predict(filtered);
                double similarity = getSimilarity(filtered, reconstructedFace);
                if (similarity > .7f)
                {
                    //identity = -1; // -1 means that the face is not registered in the trainer
                }
                std::cout << "This is: " << identity << " and: " << model->getLabelInfo(identity) << std::endl;
                identif = false;
            }
        }

        // Print the output
        cv::resize(captureFrame, captureFrame, cv::Size(800, 600));
        cv::imshow("outputCapture", captureFrame);
        // pause for 33ms
        cv::waitKey(33);
    }
}

The OpenCV FaceRecognizer is implemented to work on grayscale images. The problem is that you are reading the images from disk with cv::imread():

(cv::imread(filenames[fn]));

Even though you may have saved grayscale images, imread() loads a 3-channel image by default. Specify CV_LOAD_IMAGE_GRAYSCALE, as shown below:

(cv::imread(filenames[fn],CV_LOAD_IMAGE_GRAYSCALE));
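
For reference, here is a minimal sketch of the loading loop with that flag applied (same variable names as in the question); the only change is the imread() flag, so every training face comes back as a single-channel 8-bit image, matching the grayscale probe image passed to predict():

    for (int fn = 0; fn < filenames.size(); fn++)
    {
        // Force grayscale loading so the training data matches the
        // single-channel images used at prediction time.
        Preprocessed_Faces.push_back(cv::imread(filenames[fn], CV_LOAD_IMAGE_GRAYSCALE));
        faceLabels.push_back(f + 1);
    }

Alternatively, if the images on disk may already be 3-channel, converting after loading with cv::cvtColor(img, img, CV_BGR2GRAY) achieves the same result.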
