
Get main colors of pixels in an image

I'm trying to get all the colors of my image. I'm doing it with:

unique_rgbs = np.unique(resized_img.reshape(-1, resized_img.shape[2]), axis=0)

My expectation was to get 6 colors, but I get around 2000, because the borders between the colors are not solid. I then changed the colors of my image so that it only contains red [255, 0, 0], green [0, 255, 0], blue [0, 0, 255], and yellow [255, 255, 0], and tried to get rid of all other colors with the following code:

img[img[...,0] > 128] = 255
img[img[...,0] <= 128] = 0
 
img[img[...,1] > 128] = 255
img[img[...,1] <= 128] = 0
 
img[img[...,2] > 128] = 255
img[img[...,2] <= 128] = 0

but it is not working. The resulting image is only black and white, and np.unique reports 26 colors in the image.

original image

zoomed in detail
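
The thresholding fails because a 2-D boolean mask such as img[..., 0] > 128 selects whole pixels, so the assignment overwrites all three channels at once; that is why only black and white survive. A per-channel version has to index the channel on both sides of the assignment. A minimal sketch, assuming img is an RGB uint8 NumPy array:

import numpy as np

# Threshold each channel in place; img[..., c] is a view into img,
# so assigning through it modifies the original array.
for c in range(3):
    channel = img[..., c]
    channel[channel > 128] = 255
    channel[channel <= 128] = 0

After this, np.unique on the reshaped array should report at most the 8 corner colors of the RGB cube.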

Clustering seems to work in this case:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = plt.imread("xGmSz.jpg").reshape(-1, 3)

# cluster pixels
N = 6
km = KMeans(n_clusters=N, init="k-means++")
km.fit(X)
# get cluster centers
colors = km.cluster_centers_.astype(int)
plt.imshow(colors.reshape(1, N, 3))
plt.show()

It gives a strip of the six dominant colors.
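
To recolor the whole image with these dominant colors, each pixel can be mapped to the center of its assigned cluster via km.labels_. A minimal sketch, assuming the km, colors, and file from the snippet above:

# km.labels_ holds one cluster index per pixel, in the same order as X,
# so indexing the centers with it rebuilds a quantized pixel array.
img = plt.imread("xGmSz.jpg")
quantized = colors[km.labels_].reshape(img.shape).astype(np.uint8)

plt.imshow(quantized)
plt.show()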
