
After applying gensim LDA topic modeling, how to get documents with highest probability for each topic and save them in a csv file?

I used gensim LDA topic modeling to get relevant topics from a corpus. Now I want to get the top 20 documents that represent each topic: the documents with the highest probability for that topic. I want to save them in a CSV file with this format: 4 columns, for topic ID, topic terms, the probability of each term in the topic, and the top 20 documents for each topic.

I have already tried get_document_topics, which I think is the best way to accomplish this task:

all_topics = lda_model.get_document_topics(corpus, minimum_probability=0.0, per_word_topics=False)

But I am not sure how to get the top 20 documents that best represent each topic and add them to the CSV file.

    import csv
    import os

    import gensim
    from gensim import corpora
    from pprint import pprint

    # remove_stopwords() and processed_docs come from earlier preprocessing (not shown)
    data_words_nostops = remove_stopwords(processed_docs)
    # Create Dictionary
    id2word = corpora.Dictionary(data_words_nostops)
    # Create Corpus
    texts = data_words_nostops
    # Term Document Frequency
    corpus = [id2word.doc2bow(text) for text in texts]
    # Build LDA model
    lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                                id2word=id2word,
                                                num_topics=20,
                                                random_state=100,
                                                update_every=1,
                                                chunksize=100,
                                                passes=10,
                                                alpha='auto',
                                                per_word_topics=True)

    pprint(lda_model.print_topics())
    #save csv
    fn = "topic_terms5.csv"
    if (os.path.isfile(fn)):
        m = "a"
    else:
        m = "w"

    num_topics=20
    # save topic, term, prob data in the file
    with open(fn, m, encoding="utf8", newline='') as csvfile:
        fieldnames = ["topic_id", "term", "prob"]
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        if (m == "w"):
            writer.writeheader()

        for topic_id in range(num_topics):
            term_probs = lda_model.show_topic(topic_id, topn=6)
            for term, prob in term_probs:
                row = {}
                row['topic_id'] = topic_id
                row['prob'] = prob
                row['term'] = term
                writer.writerow(row)

Expected result: a CSV file with the following format: 4 columns, for topic ID, topic terms, the probability of each term, and the top 20 documents for each topic.

First, every document has a topic vector, a list of tuples that looks like this:

[(0, 3.0161273e-05), (1, 3.0161273e-05), (2, 3.0161273e-05), (3, 3.0161273e-05),
 (4, 3.0161273e-05), (5, 0.06556476), (6, 0.14744747), (7, 3.0161273e-05),
 (8, 3.0161273e-05), (9, 3.0161273e-05), (10, 3.0161273e-05), (11, 0.011416071),
 (12, 3.0161273e-05), (13, 3.0161273e-05), (14, 3.0161273e-05), (15, 0.057074558),
 (16, 3.0161273e-05), (17, 3.0161273e-05), (18, 3.0161273e-05), (19, 3.0161273e-05),
 (20, 0.7178939), (21, 3.0161273e-05), (22, 3.0161273e-05), (23, 3.0161273e-05),
 (24, 3.0161273e-05)]

where, for example, in (0, 3.0161273e-05), 0 is the topic ID and 3.0161273e-05 is its probability.
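For reference, a minimal sketch of how such a vector can be obtained for a single document (assuming corpus and lda_model are built as in the question; the index 0 is just an example):

# Full topic distribution for the first document; minimum_probability=0.0
# keeps every topic in the output, including near-zero ones.
doc_topics = lda_model.get_document_topics(corpus[0], minimum_probability=0.0)
print(doc_topics)  # e.g. [(0, 3.0161273e-05), (1, 3.0161273e-05), ...]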

You need to rearrange this data structure into a form that lets you compare probabilities across documents.

You can do something like this:

# Create a dictionary with topic ID as the key, and as the value a list of
# tuples (docID, probability of this particular topic for the doc)

topic_dict = {i: [] for i in range(20)}  # Assuming you have 20 topics.

# Loop over all the documents to group the probability of each topic

num_docs = len(corpus)
for docID in range(num_docs):
    # Use get_document_topics with minimum_probability=0.0 to get the full
    # topic distribution; indexing lda_model[...] directly would also return
    # per-word topics here, because the model was trained with per_word_topics=True
    topic_vector = lda_model.get_document_topics(corpus[docID], minimum_probability=0.0)
    for topicID, prob in topic_vector:
        topic_dict[topicID].append((docID, prob))

# Then, you can sort each topic's list to find its top 20 documents:

for topicID, probs in topic_dict.items():
    doc_probs = sorted(probs, key=lambda x: x[1], reverse=True)
    docs_top_20 = [dp[0] for dp in doc_probs[:20]]

This gives you the top 20 documents for each topic. You can collect them into a list (which would be a list of lists) or into a dictionary so that you can output them.
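For completeness, here is a minimal sketch of the final CSV step, combining topic_dict from above with the show_topic() loop from the question. The file name topic_terms_docs.csv and the semicolon-joined cells are illustrative choices for packing lists into the four requested columns, not part of the original answer:

import csv

# One row per topic: ID, its top terms, their probabilities, and the
# IDs of the 20 documents where the topic has the highest probability.
with open("topic_terms_docs.csv", "w", encoding="utf8", newline='') as csvfile:
    fieldnames = ["topic_id", "terms", "probs", "top_20_docs"]
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()

    for topicID, probs in topic_dict.items():
        doc_probs = sorted(probs, key=lambda x: x[1], reverse=True)
        docs_top_20 = [dp[0] for dp in doc_probs[:20]]
        term_probs = lda_model.show_topic(topicID, topn=6)  # as in the question
        writer.writerow({
            "topic_id": topicID,
            "terms": "; ".join(term for term, _ in term_probs),
            "probs": "; ".join(str(prob) for _, prob in term_probs),
            "top_20_docs": "; ".join(str(docID) for docID in docs_top_20),
        })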
