I am clustering sentences using Affinity Propagation. As an intermediate step I compute a similarity matrix. This works for a small dataset but throws a memory error on a large one, since the dense similarity matrix grows quadratically with the number of sentences. I have a dataset containing sentences.
Sample dataset:
'open contacts',
'open music player',
'play song',
'call john',
'open camera',
'video download',
...
My code:
import nltk, string
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AffinityPropagation
import pandas as pd

# Map every punctuation character to None for str.translate()
punctuation_map = dict((ord(char), None) for char in string.punctuation)
stemmer = nltk.stem.snowball.SpanishStemmer()

def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

def normalize(text):
    # Lowercase, strip punctuation, tokenize, then stem each token
    return stem_tokens(nltk.word_tokenize(text.lower().translate(punctuation_map)))

vectorizer = TfidfVectorizer(tokenizer=normalize)

def get_clusters(sentences):
    tf_idf_matrix = vectorizer.fit_transform(sentences)
    # .A converts the sparse n x n product into a dense NumPy array
    similarity_matrix = (tf_idf_matrix * tf_idf_matrix.T).A
    affinity_propagation = AffinityPropagation(affinity="precomputed", damping=0.5)
    affinity_propagation.fit(similarity_matrix)
    labels = affinity_propagation.labels_
    cluster_centers = affinity_propagation.cluster_centers_indices_
    tagged_sentences = zip(sentences, labels)
    clusters = {}
    for sentence, cluster_id in tagged_sentences:
        # Group each sentence under its cluster's exemplar sentence
        clusters.setdefault(sentences[cluster_centers[cluster_id]], []).append(sentence)
    return clusters

# Read sentences from the first column of the CSV file
filename = "/home/ubuntu/VA_data/first_50K.csv"
df = pd.read_csv(filename, header=None)
sentences = df.iloc[:, 0].values.tolist()
clusters = get_clusters(sentences)
Can anybody suggest an efficient way to compute the similarity matrix? My dataset contains 1 million sentences.
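For reference, the dense blow-up in the code above comes from the .A call, which materialises the sparse n x n product as a dense NumPy array. A minimal sketch of a sparse alternative using scikit-learn's cosine_similarity (this only keeps the similarity computation sparse; AffinityPropagation itself still expects a dense precomputed affinity matrix, so it does not by itself make the clustering step scale):

from sklearn.metrics.pairwise import cosine_similarity

# dense_output=False keeps the result as a scipy sparse matrix instead of
# the full dense n x n array that .A produces. Equivalent to the row-wise
# dot products above, since TfidfVectorizer L2-normalises rows by default.
similarity_matrix = cosine_similarity(tf_idf_matrix, dense_output=False)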
One possible approach is to store the data in Spark, which also provides scalable matrix multiplication.
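A minimal sketch of that idea with PySpark's distributed matrix types (the session setup and the row-by-row conversion are illustrative assumptions, not part of the original code):

from pyspark.sql import SparkSession
from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix

spark = SparkSession.builder.appName("similarity").getOrCreate()
sc = spark.sparkContext

# Distribute the TF-IDF rows across the cluster; tf_idf_matrix is the
# scipy sparse matrix returned by vectorizer.fit_transform(sentences).
rows = sc.parallelize(
    [IndexedRow(i, tf_idf_matrix.getrow(i).toarray().ravel())
     for i in range(tf_idf_matrix.shape[0])]
)
mat = IndexedRowMatrix(rows).toBlockMatrix()

# Distributed matrix multiplication: A * A.T yields pairwise dot products,
# which are cosine similarities because TfidfVectorizer L2-normalises rows.
similarities = mat.multiply(mat.transpose())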