
Detect black pixel in image iOS

As of now I am searching every pixel one by one, checking its color to see whether it's black; if it isn't, I move on to the next pixel. This is taking forever, because I can only check roughly 100 pixels per second (speeding up my NSTimer freezes the app, since it can't check fast enough). So is there any way I can have the code return only the pixels that are black and ignore everything else, so that I only have to check those pixels rather than every pixel? I am trying to detect the black pixel furthest to the left in my image.

Here is my current code.

- (void)viewDidLoad {
    [super viewDidLoad];
    timer = [NSTimer scheduledTimerWithTimeInterval: 0.01
                                             target: self
                                           selector:@selector(onTick:)
                                           userInfo: nil repeats:YES];
    y1 = 0;
    x1 = 0;
    initialImage = 0;
    height1 = 0;
    width1 = 0;
}

-(void)onTick:(NSTimer *)timer {
    if (initialImage != 1) {
        /*
        IMAGE INITIALLY GETS SET HERE... "image2.image = [blah blah blah];" took this out for non disclosure reasons
        */
        initialImage = 1;
    }
    //image2 is the image I'm checking the pixels of.
    width1 = (int)image2.size.width;
    height1 = (int)image2.size.height;
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image2.CGImage));
    const UInt32 *pixels = (const UInt32*)CFDataGetBytePtr(imageData);
    if ( (pixels[(x1+(y1*width1))]) == 0x000000) { //0x000000 is black right?
        NSLog(@"black!");
        NSLog(@"x = %i", x1);
        NSLog(@"y = %i", y1);
    }else {
        NSLog(@"val: %u", (unsigned int)pixels[(x1+(y1*width1))]);
        NSLog(@"x = %i", x1);
        NSLog(@"y = %i", y1);
        x1 ++;
        if (x1 >= width1) {
            y1 ++;
            x1 = 0;
        }
    }
    if (y1 > height1) {
        /*
        MY UPDATE IMAGE CODE GOES HERE (IMAGE CHANGES EVERY TIME ALL PIXELS HAVE BEEN CHECKED)
        */
        y1 = 0;
        x1 = 0;
    }
}

Also, what if a pixel is really close to black but not perfectly black? Can I add a margin of error somewhere so that it will still detect pixels that are, say, 95% black? Thanks!

Why are you using a timer at all? Why not just have a double for loop in your function that loops over all possible x and y coordinates in the image? Surely that would be waaaay faster than checking at most 100 pixels per second. You would want the x (width) coordinate in the outer loop and the y (height) coordinate in the inner loop, so that you are effectively scanning one column of pixels at a time from left to right, since you are trying to find the leftmost black pixel.

Also, are you sure that each pixel in your image has a 4-byte (UInt32) representation? A standard bitmap would have 3 bytes per pixel. To check whether a pixel is close to black, examine each byte of the pixel separately and make sure they are all less than some threshold.
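To make the byte-wise test concrete, here is a minimal sketch in plain C (the function name and the alpha-first byte order are assumptions for illustration; adjust the offsets to whatever your actual pixel format is):

```c
#include <stdbool.h>
#include <stdint.h>

/* A pixel counts as "close to black" when every color channel is below
   the threshold. Assumes 4 bytes per pixel with alpha first (A,R,G,B);
   the alpha byte is ignored. */
static bool is_near_black(const uint8_t *pixel, uint8_t threshold) {
    return pixel[1] < threshold &&   /* red   */
           pixel[2] < threshold &&   /* green */
           pixel[3] < threshold;     /* blue  */
}
```

With a threshold of about 13 (roughly 5% of 255), a pixel passes only when all three channels are within about 95% of pure black, which matches the margin of error asked about in the question.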

EDIT: OK, since you are using UIGetScreenImage, I'm going to assume that it is 4 bytes per pixel.

const UInt8 *pixels = CFDataGetBytePtr(imageData);
UInt8 blackThreshold = 10; // or some value close to 0
int bytesPerPixel = 4;
for(int x = 0; x < width1; x++) {
  for(int y = 0; y < height1; y++) {
    int pixelStartIndex = (x + (y * width1)) * bytesPerPixel;
    UInt8 alphaVal = pixels[pixelStartIndex]; // can probably ignore this value
    UInt8 redVal = pixels[pixelStartIndex + 1];
    UInt8 greenVal = pixels[pixelStartIndex + 2];
    UInt8 blueVal = pixels[pixelStartIndex + 3];
    if(redVal < blackThreshold && blueVal < blackThreshold && greenVal < blackThreshold) {
      //This pixel is close to black...do something with it
    }
  }
}

If it turns out that bytesPerPixel is 3, change that value accordingly, remove alphaVal from the loop, and subtract 1 from the indices of the red, green, and blue values.
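Since only the leftmost match matters, the column-major scan above can also return as soon as it finds one. A sketch in plain C (the function name and the flat, tightly packed alpha-first buffer are assumptions for illustration):

```c
#include <stdint.h>

/* Scan columns left to right; return the x of the first column that
   contains a near-black pixel, or -1 if there is none. Assumes a
   tightly packed buffer, 4 bytes per pixel, alpha first (A,R,G,B). */
static int leftmost_black_column(const uint8_t *pixels,
                                 int width, int height,
                                 uint8_t threshold) {
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            const uint8_t *p = pixels + (x + y * width) * 4;
            if (p[1] < threshold &&    /* red   */
                p[2] < threshold &&    /* green */
                p[3] < threshold) {    /* blue  */
                return x;  /* first hit is the leftmost: stop scanning */
            }
        }
    }
    return -1;  /* no near-black pixel anywhere */
}
```

The early return matters: once any column contains a near-black pixel, no column further right can be more "leftmost", so the rest of the image never has to be touched.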

Also, my current understanding is that UIGetScreenImage is considered a private function, and Apple may or may not reject your app for using it.

I'm not an expert on pixel-level image processing, but my first thought is: why are you using a timer to do this? That incurs a lot of overhead and makes the code less clear to read. (I think it also makes it thread-unsafe.) The overhead comes not just from the timer itself but from redoing all the data setup on every tick.

How about using a loop instead to iterate over the pixels?

Also, you are leaking imageData, since you create it with a "Copy" method and never release it (under the Core Foundation ownership rules, anything obtained from a "Copy" function must be balanced with CFRelease). Currently you do this once per timer fire, and imageData is probably quite large for anything but the tiniest images, so you are probably leaking a lot of memory.

There is no way you should be doing this with a timer (or no reason I can think of, anyway!).

How big are your images? It should be viable to process the entire image in a single loop reasonably quickly.
