
How can I select all black pixels that are contiguous with an edge of the image in PIL?

I have a set of images of petri dishes which unfortunately are not of the highest quality (example below; the axes aren't part of the images).

[image: dish1]

I'm trying to select the background and calculate its area in pixels with the following:

from PIL import Image
import numpy as np

image = Image.open(path)
black_image = 1 * (np.asarray(image.convert('L')) < 12)   # 1 where luma < 12
black_region = black_image.sum()

This yields the below:

[image: thresholded result]

If I am more stringent with my selection of black pixels, I miss pixels in other images, and if I am looser I end up selecting too much of the petri dish itself. Is there a way I can select only the pixels that have a luma value less than 12 AND are contiguous with an edge? I'm open to OpenCV solutions too.

If you take the very top line/row of your image and the very bottom line/row and threshold them, you will get this diagram, where I have placed the top row at the top and the bottom row at the bottom, just outside the limits of the original image - there is no need for you to do that, I am just illustrating the technique.

[diagram: thresholded top and bottom rows]

Now look where the lines change from black to white and then white to black (circled in red at the top). Unfortunately, your images have annotations and axes which I had to trim off, so your numbers will not be exactly the same. On the top line/row, my image changes from black to white at column 319 and back to black at column 648. If I add those together and divide by 2, the image centre on the x-axis is at around column 483.

Looking at the bottom line/row, the transitions (circled in red) are at columns 234 and 736, which add up to 970 and average to 485, so we know the circle centre is on vertical image column 483-485, or say 484.

You should now be able to work out the image centre and radius, and mask the image to accurately calculate the background.
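
A rough sketch of that geometry in Python (the file name and the helper function are placeholders, and it assumes the annotations have been cropped away so that only the dish crosses the top and bottom rows):

import numpy as np
from PIL import Image

# Sketch: fit the dish circle from the top/bottom row transitions,
# then count everything outside it as background.
img = np.asarray(Image.open('petri.png').convert('L'))   # placeholder file name
h, w = img.shape

black = img < 12                          # same luma threshold as in the question

def row_transitions(row):
    # First and last non-black column, i.e. the black->white and white->black transitions
    white = np.flatnonzero(~row)
    return white[0], white[-1]

t0, t1 = row_transitions(black[0, :])     # top row
b0, b1 = row_transitions(black[-1, :])    # bottom row

cx = (t0 + t1 + b0 + b1) / 4.0            # average of the two chord midpoints

# Each row cuts a chord across the circle; solving (x - cx)^2 + (y - cy)^2 = r^2
# with the two known chord half-widths gives the vertical centre and the radius.
wt = (t1 - t0) / 2.0                      # half chord width at y = 0
wb = (b1 - b0) / 2.0                      # half chord width at y = h - 1
cy = ((h - 1) ** 2 - (wt ** 2 - wb ** 2)) / (2.0 * (h - 1))
r = np.sqrt(wt ** 2 + cy ** 2)

yy, xx = np.mgrid[0:h, 0:w]
background = (xx - cx) ** 2 + (yy - cy) ** 2 > r ** 2
print('centre:', (cx, cy), 'radius:', r)
print('background area in pixels:', int(background.sum()))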

Hopefully, I'm not oversimplifying the problem, but from my point of view, using OpenCV with simple thresholding, morphological operations, and findContours should do the job.

Please see the following code:

import cv2
import numpy as np

# Input
input = cv2.imread('images/x0ziO.png', cv2.IMREAD_COLOR)

# Input to grayscale
gray = cv2.cvtColor(input, cv2.COLOR_BGR2GRAY)

# Binary threshold
_, gray = cv2.threshold(gray, 20, 255, cv2.THRESH_BINARY)

# Morphological improvements of the mask
gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11)))

# Find contours
cnts, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Filter large size contours; at the end, there should only be one left
largeCnts = []
for cnt in cnts:
    if (cv2.contourArea(cnt) > 10000):
        largeCnts.append(cnt)

# Draw (filled) contour(s)
gray = np.uint8(np.zeros(gray.shape))
gray = cv2.drawContours(gray, largeCnts, -1, 255, cv2.FILLED)

# Calculate background pixel area
bgArea = input.shape[0] * input.shape[1] - cv2.countNonZero(gray)

# Put result on input image
input = cv2.putText(input, 'Background area: ' + str(bgArea), (20, 30), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1.0, (255, 255, 255))

cv2.imwrite('images/output.png', input)

The intermediate "mask" image looks like this:

[image: mask]

And the final output looks like this:

[image: output]

Try the experimental floodfill() method: https://pillow.readthedocs.io/en/5.1.x/reference/ImageDraw.html?highlight=floodfill#PIL.ImageDraw.PIL.ImageDraw.floodfill

If all your images are like the example, just pick two or four corners of your image to fill with, say, hot pink, and count that.
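
For instance, a minimal sketch of that idea, combining the flood fill with the luma threshold from the question (the file name is a placeholder, and it assumes at least one corner lies in the dark background):

from PIL import Image, ImageDraw
import numpy as np

# Sketch: binarise with the luma threshold, flood-fill from the corners,
# then count only the background that is actually connected to an edge.
im = Image.open('petri.png').convert('L')                 # placeholder file name
mask = Image.fromarray((np.asarray(im) < 12).astype(np.uint8) * 255)

w, h = mask.size
for corner in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    if mask.getpixel(corner) == 255:                      # only fill from corners that are background
        ImageDraw.floodfill(mask, corner, 128)            # mark edge-connected background with 128

background_area = int((np.asarray(mask) == 128).sum())
print('background area in pixels:', background_area)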

See also Image Segmentation with Watershed Algorithm, which is much like flood fill but without relying on a single unique color.

Since you are open to OpenCV approaches, you could use a SimpleBlobDetector.

Obviously the result I got is also not perfect, since there are a lot of hyperparameters to set. The hyperparameters make it pretty flexible, so it is a decent place to start from.

This is what the Detector does (see details here):

  1. Thresholding: Convert the source image to several binary images by thresholding it with thresholds starting at minThreshold. These thresholds are incremented by thresholdStep until maxThreshold. So the first threshold is minThreshold, the second is minThreshold + thresholdStep, the third is minThreshold + 2 x thresholdStep, and so on.
  2. Grouping: In each binary image, connected white pixels are grouped together. Let's call these binary blobs.
  3. Merging: The centers of the binary blobs in the binary images are computed, and blobs located closer than minDistBetweenBlobs are merged.

  4. Center & Radius Calculation: The centers and radii of the new merged blobs are computed and returned.

Find the code below the image.

[output image]

# Standard imports
import cv2
import numpy as np

# Read image
im = cv2.imread("petri.png", cv2.IMREAD_COLOR)

# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()

# Change thresholds
params.minThreshold = 0
params.maxThreshold = 255

# Set edge gradient
params.thresholdStep = 5

# Filter by Area.
params.filterByArea = True
params.minArea = 10

# Set up the detector with the parameters above.
detector = cv2.SimpleBlobDetector_create(params)

# Detect blobs.
keypoints = detector.detect(im)

# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
