
Machine learning image feature extraction

I have a question about feature extraction from a grayscale image for machine learning.

I converted a color image to grayscale like this:

from PIL import Image
img = Image.open('source.png').convert('LA')
img.save('greyscalesource.png')

from matplotlib.pyplot import imread
import matplotlib.pyplot as plt

image2 = imread('greyscalesource.png')
print("The type of this input is {}".format(type(image2)))
print("Shape: {}".format(image2.shape))
plt.imshow(image2)

The output is: [the grayscale image]

I actually need to extract features from this gray image, because the next step is to train a model on those features to predict a colorized version of the image.

We can't use any deep learning library.

There are methods such as SIFT, ORB, FAST..., but I really don't know how to extract features for my goal.

import cv2

# ORB expects a NumPy array, so read the image with OpenCV
img = cv2.imread('greyscalesource.png', cv2.IMREAD_GRAYSCALE)

# ORB
orb = cv2.ORB_create()
# keypoints and descriptors
kpO, desO = orb.detectAndCompute(img, None)
img7 = cv2.drawKeypoints(img, kpO, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('_ORB.jpg', img7)

The only output of the above code is just True.

Is there any solution or idea for what I should do?

The descriptor desO in your line:

kpO, desO = orb.detectAndCompute(img, None)

is the feature you need to feed into your ML algorithm: an (n_keypoints, 32) array with one binary descriptor per detected keypoint. The True you saw printed is just the return value of cv2.imwrite, indicating that the file was written successfully.

Below is an example of dense-SIFT-based matching on a stereo image pair using kNN:

Input image: [stereo image pair]

Read the input image and split the stereo pair:

import cv2
import matplotlib.pyplot as plt
import numpy as np

def split_input_image(im):
    im1 = im[:,:int(im.shape[1]/2)]
    im2 = im[:,int(im.shape[1]/2):im.shape[1]]
    # Convert to grayscale
    g_im1 = cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
    g_im2 = cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY)
    return im1, im2, g_im1, g_im2

im = cv2.imread('../input_data/Stereo_Pair.jpg')
im1, im2, g_im1, g_im2 = split_input_image(im)

Write a function for dense SIFT:

def dense_sift(gray_im):
    # SIFT_create lives in the main module in OpenCV >= 4.4
    # (older versions: cv2.xfeatures2d.SIFT_create())
    sift = cv2.SIFT_create()
    step_size = 5
    # Place keypoints on a dense grid instead of detecting them
    kp = [cv2.KeyPoint(x, y, step_size)
          for y in range(0, gray_im.shape[0], step_size)
          for x in range(0, gray_im.shape[1], step_size)]
    k, feat = sift.compute(gray_im, kp)  # keypoints and descriptors
    return feat, kp

Create an empty template image with the same dimensions to visualize the SIFT matches:

visualize_sift_matches = np.zeros([im1.shape[0],im1.shape[1]])

Get features and key-points for the gray-scale images (note the return order: features first, then key-points):

f1, kp1 = dense_sift(g_im1)
f2, kp2 = dense_sift(g_im2)

Get matches between the two feature sets using brute-force kNN (the two nearest neighbours per descriptor):

bf = cv2.BFMatcher()
matches = bf.knnMatch(f1,f2,k=2)

Keep the matches that pass Lowe's ratio test, i.e. whose best match is clearly better than the second best:

common_matches = []
for m, n in matches:
    if m.distance < 0.8 * n.distance:  # Lowe's ratio test
        common_matches.append([m])

Juxtapose the two images and connect the matched key-points:

visualize_sift_matches = cv2.drawMatchesKnn(im1, kp1, im2, kp2, common_matches,
                                            visualize_sift_matches, flags=2)

Visualize

plt.imshow(visualize_sift_matches)
plt.show()

[Output: matched keypoints drawn across the stereo pair]
