
KMeans with huge number of clusters

I have a relatively big graph with around 6000 vertices, and I have to use KMeans to find 5467 clusters. I have to use a different metric, which is why I pass the distance_matrix as input. The problem is that, since n_clusters is so large, it doesn't converge. I was advised to make custom adaptations in order to make it work, but I'm not really sure what that means, which is why I am posting this question here. Any help is welcome. Thank you! Here is my code:

from sklearn.cluster import KMeans
from networkx import floyd_warshall_numpy  # all-pairs shortest-path distances

# Dense matrix of shortest-path distances between every pair of vertices in G
distance_matrix = floyd_warshall_numpy(G)

cluster = KMeans(n_clusters=5467)
cluster.fit(distance_matrix)

graph_labels = cluster.labels_

I would not advise using such a high number of clusters with KMeans. Instead, try agglomerative clustering with Euclidean distance. This allows you to pick a cutoff on the dendrogram at which the points are grouped into the expected number of clusters.

[Image: dendrogram showing how different cutoff heights produce different numbers of clusters]

Cutting it off at 5 would give you 4 clusters, while cutting it off at 2 would give you more.
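For illustration, here is a minimal sketch of that idea using scipy's hierarchical clustering (not part of the original answer; it reuses the same toy data as the dummy code below): cutting the merge tree at a higher height merges more points and therefore yields fewer flat clusters.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two groups of points around x=1 and x=4
X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])

# Build the agglomerative merge tree (dendrogram) with average linkage
Z = linkage(X, method='average')

# Cut the tree at two different heights
labels_high_cut = fcluster(Z, t=5.0, criterion='distance')
labels_low_cut = fcluster(Z, t=2.0, criterion='distance')

print(len(np.unique(labels_high_cut)), "clusters when cutting at 5")
print(len(np.unique(labels_low_cut)), "clusters when cutting at 2")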

Dummy code -

from sklearn.cluster import AgglomerativeClustering
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
clustering = AgglomerativeClustering().fit(X)
clustering.labels_
# array([1, 1, 1, 0, 0, 0])

You can also use a precomputed distance matrix with agglomerative clustering: pass metric='precomputed' (affinity='precomputed' on older scikit-learn versions) together with a linkage other than 'ward'.

Check the scikit-learn documentation for AgglomerativeClustering for details.
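As a rough sketch of that precomputed-matrix route applied to a graph like the asker's (the placeholder graph, its size, and n_clusters=50 below are illustrative choices, not from the original; it also assumes the graph is connected so all shortest-path distances are finite):

import numpy as np
import networkx as nx
from sklearn.cluster import AgglomerativeClustering

# Placeholder connected graph standing in for the asker's G (~6000 vertices)
G = nx.connected_watts_strogatz_graph(100, 4, 0.3, seed=0)

# All-pairs shortest-path distances as a dense array
distance_matrix = np.asarray(nx.floyd_warshall_numpy(G))

# metric='precomputed' treats the input as a distance matrix; 'ward' linkage
# is not allowed here, so use 'average', 'complete', or 'single'.
# On older scikit-learn versions the parameter is affinity='precomputed'.
clustering = AgglomerativeClustering(
    n_clusters=50,          # or n_clusters=None with a distance_threshold
    metric='precomputed',
    linkage='average',
)
graph_labels = clustering.fit_predict(distance_matrix)
print(graph_labels[:10])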

