OpenCVSharp - Compare images - Sift/Surf/Hash/Orb

I know this subject has been widely discussed on the internet, but I couldn't find an answer to my question...

My ultimate goal is to extract a part of an image and compare it with a "template". I've already extracted and cropped that part of the image, so that's not the problem.

The issue I'm having is comparing it to its template. The template was originally an SVG, so I converted it to JPG and removed the alpha channel.

Template: [template image]

The actual image is taken from a webcam, so it's in colour, but I can easily apply a threshold to leave only black/white.

Webcam image: [webcam image]

Webcam image with threshold: [thresholded webcam image]

So I tried 3 methods.

1. SIFT algorithm. 
 

    private void MatchBySift(Mat src1, Mat src2)
    {
        var gray1 = new Mat();
        var gray2 = new Mat();

        Cv2.CvtColor(src1, gray1, ColorConversionCodes.BGR2GRAY);
        Cv2.CvtColor(src2, gray2, ColorConversionCodes.BGR2GRAY);

        var sift = SIFT.Create();

        // Detect the keypoints and generate their descriptors using SIFT
        KeyPoint[] keypoints1, keypoints2;
        var descriptors1 = new Mat();   //<float>
        var descriptors2 = new Mat();   //<float>
        sift.DetectAndCompute(gray1, null, out keypoints1, descriptors1);
        sift.DetectAndCompute(gray2, null, out keypoints2, descriptors2);

        // Match descriptor vectors
        var bfMatcher = new BFMatcher(NormTypes.L2, false);
        var flannMatcher = new FlannBasedMatcher();
        DMatch[] bfMatches;
        DMatch[] flannMatches;
        try
        {
            bfMatches = bfMatcher.Match(descriptors1, descriptors2);
            flannMatches = flannMatcher.Match(descriptors1, descriptors2);
        }
        catch
        {
            bfMatches = new DMatch[0];
            flannMatches = new DMatch[0];
        }

        // Draw matches
        var bfView = new Mat();
        Cv2.DrawMatches(gray1, keypoints1, gray2, keypoints2, bfMatches, bfView);
        var flannView = new Mat();
        Cv2.DrawMatches(gray1, keypoints1, gray2, keypoints2, flannMatches, flannView);
        Console.WriteLine("BF Matches: " + bfMatches.Length);
        Console.WriteLine("Flann Matches: " + flannMatches.Length);

        using (new Window("SIFT matching (by BFMatcher)", bfView))
        using (new Window("SIFT matching (by FlannBasedMatcher)", flannView))
        {
            Cv2.WaitKey();
        }
    }

2. SURF Algorithm

    private void MatchBySurf(Mat src1, Mat src2)
    {
        var gray1 = new Mat();
        var gray2 = new Mat();

        Cv2.CvtColor(src1, gray1, ColorConversionCodes.BGR2GRAY);
        Cv2.CvtColor(src2, gray2, ColorConversionCodes.BGR2GRAY);

        var surf = SURF.Create(200, 4, 2, true);

        // Detect the keypoints and generate their descriptors using SURF
        KeyPoint[] keypoints1, keypoints2;
        var descriptors1 = new Mat();   //<float>
        var descriptors2 = new Mat();   //<float>
        surf.DetectAndCompute(gray1, null, out keypoints1, descriptors1);
        surf.DetectAndCompute(gray2, null, out keypoints2, descriptors2);

        // Match descriptor vectors
        var bfMatcher = new BFMatcher(NormTypes.L2, false);
        var flannMatcher = new FlannBasedMatcher();
        DMatch[] bfMatches;
        DMatch[] flannMatches;
        try
        {
            bfMatches = bfMatcher.Match(descriptors1, descriptors2);
            flannMatches = flannMatcher.Match(descriptors1, descriptors2);
        }
        catch
        {
            bfMatches = new DMatch[0];
            flannMatches = new DMatch[0];
        }

        // Draw matches
        var bfView = new Mat();
        Cv2.DrawMatches(gray1, keypoints1, gray2, keypoints2, bfMatches, bfView);
        var flannView = new Mat();
        Cv2.DrawMatches(gray1, keypoints1, gray2, keypoints2, flannMatches, flannView);

        Console.WriteLine("BF Matches: " + bfMatches.Length);
        Console.WriteLine("Flann Matches: " + flannMatches.Length);
        /*
        using (new Window("SURF matching (by BFMatcher)", bfView))
        using (new Window("SURF matching (by FlannBasedMatcher)", flannView))
        {
            Cv2.WaitKey();
        }
        */
    }

3. Hash comparison

    private void MatchByHash(Mat src1, Mat src2)
    {
        Bitmap bit1 = BitmapConverter.ToBitmap(src1);
        Bitmap bit2 = BitmapConverter.ToBitmap(src2);
        List<bool> hash1 = GetHash(bit1);
        List<bool> hash2 = GetHash(bit2);

        int equalElements = hash1.Zip(hash2, (i, j) => i == j).Count(eq => eq);

        Console.WriteLine("Match by Hash: " + equalElements);
    }

    public List<bool> GetHash(Bitmap bmpSource)
    {
        List<bool> lResult = new List<bool>();
        // create new image with 16x16 pixels
        Bitmap bmpMin = new Bitmap(bmpSource, new System.Drawing.Size(16, 16));
        for (int j = 0; j < bmpMin.Height; j++)
        {
            for (int i = 0; i < bmpMin.Width; i++)
            {
                // reduce colors to true / false
                lResult.Add(bmpMin.GetPixel(i, j).GetBrightness() < 0.5f);
            }
        }
        return lResult;
    }
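
As a side note, GetHash produces 16 x 16 = 256 bits, so the raw count of equal elements can be turned into a score between 0 and 1 that is easier to compare across templates; a trivial helper (the name is mine):

    // Hypothetical helper: normalized hash similarity (1.0 = identical 256-bit hashes).
    private double HashSimilarity(List<bool> hash1, List<bool> hash2)
    {
        int equal = hash1.Zip(hash2, (a, b) => a == b).Count(eq => eq);
        return (double)equal / hash1.Count;
    }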

Why I am not happy with the results

  1. SIFT and SURF

When comparing the above images I get a certain similarity based on the number of matches. The problem is that when I use a different template image I get the same numbers every time... I think it's because there is no colour, but I need clarification here (see the ratio-test sketch after this list).

  2. Hash

When used on another 'template' I got a higher result, which basically eliminates this approach unless it can be improved somehow...
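
On the SIFT/SURF point above: BFMatcher.Match() returns one best match per query descriptor, so the number of matches is essentially the number of keypoints in the first image regardless of which template is passed in, which would explain the constant numbers (the lack of colour is probably not the main issue). A common way to get a more discriminative count is Lowe's ratio test on KnnMatch results. A minimal sketch, reusing the descriptors1/descriptors2 Mats from MatchBySift/MatchBySurf above (the helper name is mine, and it relies on System.Linq like the hash code):

    // Hypothetical helper: count "good" matches using Lowe's ratio test.
    private int CountGoodMatches(Mat descriptors1, Mat descriptors2, float ratio = 0.75f)
    {
        var matcher = new BFMatcher(NormTypes.L2, false);

        // Take the two nearest neighbours for each descriptor and keep a match
        // only if the best one is clearly better than the second best.
        DMatch[][] knn = matcher.KnnMatch(descriptors1, descriptors2, 2);
        return knn.Count(m => m.Length == 2 && m[0].Distance < ratio * m[1].Distance);
    }

Comparing this count (or its ratio to the total number of keypoints) across templates should be far more meaningful than the raw Match() length.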

I have also heard of the ORB algorithm, but I haven't tried to implement it yet. Would that work better in my case?
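
In case it helps while deciding, an ORB attempt would look almost identical to the SIFT/SURF code; the main differences are that ORB produces binary descriptors and therefore needs a Hamming-norm matcher. A minimal sketch under those assumptions (the method name and parameter values are mine):

    private void MatchByOrb(Mat src1, Mat src2)
    {
        var gray1 = new Mat();
        var gray2 = new Mat();
        Cv2.CvtColor(src1, gray1, ColorConversionCodes.BGR2GRAY);
        Cv2.CvtColor(src2, gray2, ColorConversionCodes.BGR2GRAY);

        // ORB with up to 1000 keypoints; the descriptors are binary.
        var orb = ORB.Create(1000);
        KeyPoint[] keypoints1, keypoints2;
        var descriptors1 = new Mat();
        var descriptors2 = new Mat();
        orb.DetectAndCompute(gray1, null, out keypoints1, descriptors1);
        orb.DetectAndCompute(gray2, null, out keypoints2, descriptors2);

        // Hamming norm for binary descriptors; crossCheck = true keeps only mutual best matches.
        var matcher = new BFMatcher(NormTypes.Hamming, true);
        DMatch[] matches = matcher.Match(descriptors1, descriptors2);

        Console.WriteLine("ORB Matches: " + matches.Length);
    }

Keep in mind, though, that a plain low-texture logo gives every keypoint detector (SIFT, SURF or ORB) few distinctive points to work with, so the counts may stay small whichever detector is used.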

If there is any other approach I could try, please point me in the right direction!

Thank You

The main thing is to register the logo and the template. As your images are quasi-binary and contain only the logo, you can rely on the computation of moments. You will compute them for the binarized image and the binarized template.

The zeroth order moment of the black pixels is the area. The square root of the ratio of the areas gives you the scaling factor.

The first order X and Y moments give you the position of a reference point (the centroid), which will be the same in both.

Then the centered second order moments (X², Y², XY) will give you the two main directions (by computing the eigenvectors of the inertia tensor), and from this, the rotation angle between the logo and the template.

https://www.ae.msstate.edu/vlsm/shape/area_moments_of_inertia/papmi.htm

Now you can rescale, rotate and translate either the logo or the template to superimpose them. You will obtain a comparison score by simply counting the pixels of matching colour.
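
A minimal sketch of this registration-by-moments idea in OpenCvSharp, assuming both inputs are single-channel masks in which the logo pixels are non-zero (e.g. produced with Cv2.Threshold using ThresholdTypes.BinaryInv); the method name, variable names and the final scoring are mine:

    private double MatchByMoments(Mat logoMask, Mat templateMask)
    {
        // Moments of the non-zero (logo) pixels; 'true' treats the input as binary.
        Moments mLogo = Cv2.Moments(logoMask, true);
        Moments mTmpl = Cv2.Moments(templateMask, true);

        // Zeroth order moment = area; square root of the area ratio = scale factor.
        double scale = Math.Sqrt(mTmpl.M00 / mLogo.M00);

        // First order moments give the centroids (the common reference point).
        var cLogo = new Point2f((float)(mLogo.M10 / mLogo.M00), (float)(mLogo.M01 / mLogo.M00));
        var cTmpl = new Point2f((float)(mTmpl.M10 / mTmpl.M00), (float)(mTmpl.M01 / mTmpl.M00));

        // Centered second order moments give the main axis:
        // theta = 0.5 * atan2(2 * mu11, mu20 - mu02) (defined only up to 180 degrees).
        double thetaLogo = 0.5 * Math.Atan2(2 * mLogo.Mu11, mLogo.Mu20 - mLogo.Mu02);
        double thetaTmpl = 0.5 * Math.Atan2(2 * mTmpl.Mu11, mTmpl.Mu20 - mTmpl.Mu02);
        double angleDeg = (thetaLogo - thetaTmpl) * 180.0 / Math.PI;

        // Rotate/scale about the logo centroid, then translate centroid onto centroid.
        Mat t = Cv2.GetRotationMatrix2D(cLogo, angleDeg, scale);
        t.Set<double>(0, 2, t.At<double>(0, 2) + cTmpl.X - cLogo.X);
        t.Set<double>(1, 2, t.At<double>(1, 2) + cTmpl.Y - cLogo.Y);

        var warped = new Mat();
        Cv2.WarpAffine(logoMask, warped, t, templateMask.Size(), InterpolationFlags.Nearest);

        // Score: fraction of pixels whose colour matches after superimposing.
        var diff = new Mat();
        Cv2.Absdiff(warped, templateMask, diff);
        int mismatched = Cv2.CountNonZero(diff);
        return 1.0 - (double)mismatched / (templateMask.Rows * templateMask.Cols);
    }

Because the principal axis is only defined up to 180 degrees (and becomes unstable for near-symmetric logos), it may be worth evaluating the score for both angleDeg and angleDeg + 180 and keeping the better one.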
