
Blurring specific points of an image and storing the pixel value

I have a large number of images (around 50) and I would like to check the pixel intensity at different locations in each one and store the values. Each point of interest is represented by a 3x3 grid of 9 pixels, at 22 specific coordinates of an image. What I want to do is calculate the mean value inside each 9-pixel grid at all 22 positions; that mean becomes the value for that specific black point. For instance:

enter image description here

Every black dot is a grid of 9 pixels; I would like to compute their mean values and store them in order. In practice I have the following list, which contains the actual pixel coordinates of every grid for all 22 points:

grid_coords = [
[60, 25], 
[61, 25], 
[59, 25], 
[60, 24], 
[59, 24], 
[61, 24], 
[60, 26], 
[61, 26], 
[59, 26], 
[110, 25], 
[111, 25], 
[109, 25], 
[110, 24], 
[109, 24], 
[111, 24], 
[110, 26], 
[111, 26], 
[109, 26], 
[175, 25], 
[176, 25], 
[174, 25], 
[175, 26],
[174, 26], 
[176, 26], 
[175, 24], 
[174, 24], 
[176, 24], 
[65, 40], 
[66, 40], 
[64, 40], 
[65, 39], 
[66, 39], 
[64, 39], 
[65, 41], 
[66, 41], 
[64, 41], 
[110, 50], 
[111, 50], 
[109, 50], 
[110, 49], 
[109, 49], 
[111, 49], 
[110, 51], 
[109, 51], 
[111, 51], 
[170, 40], 
[171, 40], 
[169, 40], 
[170, 39], 
[171, 39], 
[169, 39], 
[170, 41], 
[171, 41], 
[169, 41], 
[43, 55], 
[44, 55], 
[45, 55], 
[43, 56], 
[44, 56], 
[45, 56], 
[43, 54], 
[44, 54], 
[45, 54], 
[180, 55], 
[181, 55], 
[179, 55], 
[180, 56], 
[181, 56], 
[179, 56], 
[181, 54], 
[180, 54], 
[179, 54], 
[30, 85], 
[31, 85], 
[29, 85], 
[30, 86], 
[31, 86], 
[29, 86], 
[30, 84], 
[31, 84], 
[29, 84], 
[65, 75], 
[66, 75], 
[64, 75], 
[65, 74], 
[66, 74], 
[64, 74], 
[65, 76], 
[66, 76], 
[64, 76], 
[100, 105], 
[101, 105], 
[99, 105], 
[100, 104], 
[101, 104], 
[99, 104], 
[100, 106], 
[99, 106], 
[101, 106], 
[125, 105], 
[126, 105], 
[124, 105], 
[125, 104], 
[126, 104], 
[124, 104], 
[125, 106], 
[124, 106], 
[126, 106], 
[160, 75], 
[161, 75], 
[159, 75], 
[160, 74], 
[161, 74], 
[159, 74], 
[160, 76], 
[161, 76], 
[159, 76], 
[190, 85], 
[191, 85], 
[189, 85], 
[190, 86], 
[191, 86], 
[189, 86], 
[190, 84], 
[191, 84], 
[189, 84], 
[30, 142], 
[31, 142], 
[29, 142], 
[30, 143], 
[31, 143], 
[29, 143], 
[30, 141], 
[31, 141],
[29, 141], 
[75, 142], 
[76, 142], 
[74, 142], 
[75, 143], 
[76, 143], 
[74, 143], 
[75, 141], 
[76, 141], 
[74, 141], 
[145, 142], 
[146, 142], 
[144, 142], 
[145, 143], 
[146, 143], 
[144, 143], 
[145, 141], 
[146, 141], 
[144, 141], 
[180, 142], 
[181, 142], 
[179, 142], 
[180, 143], 
[181, 143], 
[179, 143], 
[180, 141], 
[181, 141], 
[179, 141], 
[85, 176], 
[84, 176], 
[86, 176], 
[85, 177], 
[84, 177], 
[86, 177], 
[85, 175], 
[84, 175], 
[86, 175], 
[125, 176], 
[126, 176], 
[124, 176], 
[125, 177], 
[126, 177], 
[124, 177], 
[125, 175], 
[126, 175], 
[124, 175], 
[70, 190], 
[71, 190], 
[69, 190], 
[70, 189], 
[71, 189], 
[69, 189], 
[70, 191], 
[71, 191], 
[69, 191], 
[153, 190], 
[154, 190], 
[152, 190], 
[153, 189], 
[154, 189], 
[152, 189], 
[154, 191], 
[153, 191], 
[152, 191]]
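To make the structure explicit: the flat list holds 22 x 9 = 198 coordinate pairs, and each consecutive block of 9 rows is one centre pixel plus its 8 neighbours. A minimal check with the first block:

```python
import numpy as np

# First point's 3x3 grid, taken verbatim from the list above:
grid_coords = [[60, 25], [61, 25], [59, 25], [60, 24], [59, 24],
               [61, 24], [60, 26], [61, 26], [59, 26]]
point = np.array(grid_coords)    # shape (9, 2)
centre = point.mean(axis=0)      # mean coordinate of the grid: [60., 25.]
```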

What I am currently doing: for every image inside a folder, I grab the intensity value of a single pixel at each of the 22 coordinates, without calculating any mean, like this:

from os import listdir
from os.path import join
import numpy as np
from PIL import Image as PImage

def loadImages(path):
    """Load every image in the folder at `path` as a numpy array."""
    imagesList = listdir(path)
    loadedImages = []
    for image in imagesList:
        img = PImage.open(join(path, image))  # join handles a missing trailing slash
        arr = np.array(img)
        loadedImages.append(arr)
    return loadedImages


face_image_vector = []
img_folder = loadImages(path)
for img in img_folder:
    img_feats = []
    for coords in coords_array:
        img_feats.append(img[coords[0], coords[1]])
    face_image_vector.append(img_feats)

This extracts single pixels for a given array of (x, y) coordinates, which is almost what I am trying to do.

What I am trying to achieve: calculate the mean pixel value of every point composed of 3x3 pixels, for all 22 coordinates in the array above, and store the results in a vector. So if I have 10 images I would get a 10x22 feature matrix; with 50 images, a 50x22 feature matrix.

Currently my code only stores the value of each of the 22 single pixels, without averaging over their neighbouring pixels, which is the part I am missing.
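A sketch of the computation I am after (a hypothetical helper, demonstrated on a synthetic image rather than my real files; the real grid_coords list has 198 rows):

```python
import numpy as np

def grid_mean_features(img, grid_coords):
    """One mean intensity per 3x3 grid; grid_coords has 9 rows per point."""
    grid = np.asarray(grid_coords).reshape(-1, 9, 2)  # (n_points, 9, 2)
    # Fancy-index all 9 pixels of every point at once, then average axis 1.
    return img[grid[..., 0], grid[..., 1]].mean(axis=1)

# Demo: synthetic 200x200 image with one bright 3x3 block around (60, 25)
img = np.zeros((200, 200))
img[59:62, 24:27] = 9.0
one_grid = [[x, y] for x in (59, 60, 61) for y in (24, 25, 26)]
feats = grid_mean_features(img, one_grid)  # array([9.])
```

Stacking one such row per image would give the desired (n_images, 22) feature matrix.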

numpy arrays support indexing like img[[x1, x2, x3], [y1, y2, y3]] to get a subset of elements, and you can then call np.mean on the result. I'd recommend using a function to generate the 9 points around a given centre and only storing the centres; it cuts down on hard-coded values.

centers = [(30, 85), (30, 142), (44, 55), (60, 25), (65, 40), (65, 75), 
           (70, 190), (75, 142), (85, 176), (100, 105), (110, 25), (110, 50), 
           (125, 105), (125, 176), (145, 142), (153, 190), (160, 75), (170, 40), 
           (175, 25), (180, 55), (180, 142), (190, 85)]

def around(x, y):
    "returns x and y arrays of 3x3 grid around given coordinate."
    xs = []
    ys = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            xs.append(x+dx)
            ys.append(y+dy)
    return (xs,ys)

face_image_vector = []
for img in loadImages(path):
    img_feats = []
    for (x, y) in centers:
        img_feats.append(np.mean(img[around(x, y)]))
    face_image_vector.append(img_feats)
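A quick self-contained check of the fancy indexing at work (redefining around from above, on a small image whose values make the result easy to verify):

```python
import numpy as np

def around(x, y):
    "returns x and y arrays of 3x3 grid around given coordinate."
    xs, ys = [], []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            xs.append(x + dx)
            ys.append(y + dy)
    return (xs, ys)

img = np.arange(100).reshape(10, 10)  # img[r, c] == 10*r + c
block = img[around(4, 5)]             # the 9 values of the 3x3 block at (4, 5)
# block.mean() == 45.0 == img[4, 5], since the block is symmetric about its centre
```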

Using your current approach, what you would have to do is cast your list of coordinates into a numpy array so that each element of the numpy array corresponds to a (9, 2) grid of coordinates.


grid_np = np.array(grid_coords)      # grid_coords is the list of grids you gave
grid_np = grid_np.reshape(22, 9, 2)  # total points, pixels per point, coords per pixel
grid_means = np.zeros((22, 2))       # empty array to store one centroid per point

def centroid(pixel_grid):
    length = pixel_grid.shape[0]
    sum_x = np.sum(pixel_grid[:, 0])
    sum_y = np.sum(pixel_grid[:, 1])
    return np.array([sum_x / length, sum_y / length])

for i in range(len(grid_means)):
    grid_means[i] = centroid(grid_np[i])

This will give you the centre of each of the 22 grids (the mean of its coordinates, not of its pixel intensities):

grid_means =
array([[ 60.,  25.],
       [110.,  25.],
       [175.,  25.],
       [ 65.,  40.],
       [110.,  50.],
       [170.,  40.],
       [ 44.,  55.],
       [180.,  55.],
       [ 30.,  85.],
       [ 65.,  75.],
       [100., 105.],
       [125., 105.],
       [160.,  75.],
       [190.,  85.],
       [ 30., 142.],
       [ 75., 142.],
       [145., 142.],
       [180., 142.],
       [ 85., 176.],
       [125., 176.],
       [ 70., 190.],
       [153., 190.]])
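Note that these are mean coordinates. To get the mean intensity per point with the same reshaped array, sample the image at all 9 pixels of each grid and average over the pixel axis (a sketch with a stand-in constant image and a single grid):

```python
import numpy as np

img = np.full((200, 200), 7.0)  # stand-in for a real image
grid_np = np.array([[[60 + dx, 25 + dy] for dx in (-1, 0, 1)
                                        for dy in (-1, 0, 1)]])  # (1, 9, 2)
intensities = img[grid_np[..., 0], grid_np[..., 1]].mean(axis=1)
# one mean per point; every pixel here is 7.0, so intensities == [7.0]
```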

##################################################################################

However, if I were to tackle this problem, I would do it using OpenCV. We can get the coordinates of the keypoints through a combination of thresholding and blob detection.

import cv2
import numpy as np

# Mask the black points to red, or any unique colour that will help in thresholding.
im = cv2.imread("CDsjQ.png")
orig_im = im.copy()
indices = np.where(im == 0)
im[indices[0], indices[1], :] = [0, 0, 255]  # OpenCV follows (B, G, R) colour indexing
img_hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)  # change to the HSV colour space

enter image description here

Now we want to threshold the image so we can isolate these red points in the image.

# Based on: https://stackoverflow.com/questions/45677452/using-masks-to-apply-different-thresholds-to-different-parts-of-an-image
# This function only masks off red; if you choose another colour you have to build the mask again.
def isolate_red(main_img, sens):
    lower_red = np.array([0, 50, 50])
    upper_red = np.array([10, 255, 255])
    mask1 = cv2.inRange(main_img, lower_red, upper_red)
    lower_red = np.array([170, 50, 50])
    upper_red = np.array([180, 255, 255])
    mask2 = cv2.inRange(main_img, lower_red, upper_red)
    mask = mask1 + mask2
    output_img = main_img.copy()
    output_img[np.where(mask == 0)] = 0
    thresh = cv2.bitwise_not(output_img)
    return thresh

thresh = isolate_red(img_hsv, 20)

enter image description here

Now after thresholding we can use a blob detector to find the 'blobs'.

im = cv2.blur(thresh, (5,5)) #blur to remove noise

params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 100
params.maxThreshold = 400

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(im) #keypoints is an object
for key in keypoints:
    print(key.pt[0], key.pt[1]) #access the coordinates from each obj
im_with_keypoints = cv2.drawKeypoints(orig_im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("Keypoints.png", im_with_keypoints)

Positions in pixel values:
682.5365600585938 473.8377380371094
700.7807006835938 501.54119873046875
558.1123657226562 383.11236572265625
703.1123657226562 371.11236572265625
568.6876831054688 354.10595703125
693.1123657226562 347.11236572265625
615.1123657226562 296.11236572265625
647.1123657226562 295.11236572265625
706.1123657226562 293.11236572265625
541.1123657226562 293.11236572265625
729.1123657226562 223.11236572265625
523.1123657226562 214.11236572265625
576.1123657226562 208.11236572265625
679.6876831054688 200.10594177246094
542.1123657226562 179.11236572265625
626.1123657226562 178.11236572265625
685.1123657226562 177.11236572265625
623.1123657226562 138.11236572265625
551.1123657226562 136.11236572265625
678.1123657226562 131.11236572265625
585.875 481.3636169433594

Yes, there are some noisy points, but if you play around with the parameters a little you should be able to get rid of them. Or, if you can define a region where you are looking for points, you can do outlier removal.

enter image description here
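The region-based outlier removal could look like the following hypothetical filter. cv2 keypoints expose their centre via the .pt attribute, mimicked here by a stand-in class so the sketch runs without OpenCV:

```python
def filter_keypoints(keypoints, x_range, y_range):
    """Keep only keypoints whose centres fall inside the given bounding box."""
    (x0, x1), (y0, y1) = x_range, y_range
    return [k for k in keypoints
            if x0 <= k.pt[0] <= x1 and y0 <= k.pt[1] <= y1]

class FakeKeypoint:  # stand-in for cv2.KeyPoint in this sketch
    def __init__(self, x, y):
        self.pt = (x, y)

pts = [FakeKeypoint(600.5, 300.2), FakeKeypoint(50.0, 900.0)]
kept = filter_keypoints(pts, x_range=(500, 750), y_range=(100, 520))
# only the first point lies inside the box
```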
