
Fixed-size topic vectors in gensim LDA topic modelling for finding similar texts

I use gensim LDA topic modelling to find topics for each document and to check the similarity between documents by comparing the resulting topic vectors. Each document is assigned a different number of matching topics, so comparing the vectors by cosine similarity is incorrect, because vectors of the same length are required.

This is the related code:

lda_model_bow = models.LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=3, passes=1, random_state=47)

# --------------- Calculating and viewing the topics ----------------------------
vec_bows = [dictionary.doc2bow(filtered_text.split()) for filtered_text in filtered_texts]

vec_lda_topics = [lda_model_bow[vec_bow] for vec_bow in vec_bows]

for doc_id, vec_lda_topic in enumerate(vec_lda_topics):
    print('document ', doc_id, 'topics: ', vec_lda_topic)

The output vectors are:

document  0 topics:  [(1, 0.25697246), (2, 0.08026043), (3, 0.65391296)]
document  1 topics:  [(2, 0.93666667)]
document  2 topics:  [(2, 0.07910537), (3, 0.20132676)]
.....

As you can see, each vector has a different length, so it is not possible to compute cosine similarity between them.

I would like the output to be:

document  0 topics:  [(1, 0.25697246), (2, 0.08026043), (3, 0.65391296)]
document  1 topics:  [(1, 0.0), (2, 0.93666667), (3, 0.0)]
document  2 topics:  [(1, 0.0), (2, 0.07910537), (3, 0.20132676)]
.....

Any ideas how to do this? Thanks.

I have used gensim for topic modeling before and have not faced this issue. Ideally, if you pass num_topics=3, it returns the top 3 topics with the highest probability for each document. You should then be able to generate the cosine similarity matrix by doing something like this:

from gensim import models, similarities

lda_model_bow = models.LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=3, passes=1, random_state=47)
vec_lda_topics = lda_model_bow[bow_corpus]
# Build a cosine-similarity index over the per-document topic vectors.
sim_matrix = similarities.MatrixSimilarity(vec_lda_topics)
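Querying that index might look roughly like the following (a minimal sketch assuming the names from the snippet above; indexing MatrixSimilarity with a topic vector returns its cosine similarity against every indexed document):

# Sketch only: sim_matrix, lda_model_bow and bow_corpus come from the snippet above.
for doc_id, topic_vec in enumerate(lda_model_bow[bow_corpus]):
    sims = sim_matrix[topic_vec]  # cosine similarities of this document to all documents
    print('document', doc_id, 'similarities:', list(enumerate(sims)))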

But if, for some reason, you are getting an unequal number of topics, you can assume a zero probability for the missing topics and include them in your vectors when you calculate similarity; a sketch of this is shown below.
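For instance, gensim's matutils.sparse2full helper can pad each sparse (topic_id, probability) list into a dense vector of length num_topics (a minimal sketch assuming vec_lda_topics and lda_model_bow from the question's code):

import numpy as np
from gensim import matutils

num_topics = lda_model_bow.num_topics
# Pad every sparse topic vector with 0.0 for the missing topics.
dense_vecs = [matutils.sparse2full(vec, num_topics) for vec in vec_lda_topics]

# Cosine similarity between any two padded vectors, e.g. documents 0 and 1.
sim_0_1 = np.dot(dense_vecs[0], dense_vecs[1]) / (np.linalg.norm(dense_vecs[0]) * np.linalg.norm(dense_vecs[1]))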

PS: If you could provide a sample of your input documents, it would be easier to reproduce your output and look into it.

So, as panktijk said in the comments and as described in this topic, the solution is to change minimum_probability from the default value of 0.01 to 0.0.
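A minimal sketch of that fix, assuming the same variables as in the question (with minimum_probability=0.0 the model reports every topic for every document, so all vectors have length num_topics):

lda_model_bow = models.LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=3,
                                passes=1, random_state=47, minimum_probability=0.0)

# Or, without retraining, request full-length topic vectors per document:
# lda_model_bow.get_document_topics(vec_bow, minimum_probability=0.0)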
