
Gensim - TF-IDF, how to perform a proper gensim TF-IDF?

I am trying to do some NLP (more precisely, a TF-IDF analysis) on part of my bachelor thesis.

I exported a small part of it into a document called 'thesis.txt', and I seem to run into a problem when fitting the cleaned text data to a gensim Dictionary.

All the words are tokenized and stored in a bag of words, and I don't know what I am doing wrong.

Here is the error I get:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-317-73828cccaebe> in <module>
     17 
     18 #Create dictionary
---> 19 dictionary = Dictionary(tokens_no_stop)
     20 
     21 #Create bag of words

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in __init__(self, documents, prune_at)
     89 
     90         if documents is not None:
---> 91             self.add_documents(documents, prune_at=prune_at)
     92 
     93     def __getitem__(self, tokenid):

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in add_documents(self, documents, prune_at)
    210 
    211             # update Dictionary with the document
--> 212             self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    213 
    214         logger.info(

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in doc2bow(self, document, allow_update, return_missing)
    250         """
    251         if isinstance(document, string_types):
--> 252             raise TypeError("doc2bow expects an array of unicode tokens on input, not a single string")
    253 
    254         # Construct (word, frequency) mapping.

TypeError: doc2bow expects an array of unicode tokens on input, not a single string

Thanks in advance for your help :) (find my code below)

from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

f = open('/Users/romeoleon/Desktop/Python & R/NLP/TRIAL_THESIS/thesis.txt','r')
text = f.read()

#Tokenize text
Tokens = word_tokenize(text)

#Lower case everything
Tokens = [t.lower() for t in Tokens]

#Keep only letters
tokens_alpha = [t for t in Tokens if t.isalpha()]

#Remove stopwords
tokens_no_stop = [t for t in tokens_alpha if t not in stopwords.words('french')]

#Create Lemmatizer
lem = WordNetLemmatizer()
lemmatized = [lem.lemmatize(t) for t in tokens_no_stop]


#Create dictionary
dictionary = Dictionary(tokens_no_stop)

#Create bag of words
bow = [dictionary.doc2bow(line) for line in tokens_no_stop]

#Model TF-IDF
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

Your tokens_no_stop is a list of strings, but Dictionary needs a list of lists of strings (more precisely, an iterable of iterables of strings).
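A minimal sketch of the fix, keeping the question's file and variable names. Splitting thesis.txt into one document per non-empty line is an assumption made here for illustration; with only a single document, every IDF (and hence every TF-IDF weight) comes out as zero.

from nltk import word_tokenize
from nltk.corpus import stopwords
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

with open('thesis.txt', 'r') as f:
    text = f.read()

stop = set(stopwords.words('french'))

#Treat each non-empty line as one document: Dictionary expects
#an iterable of token lists, not one flat list of tokens
docs = [[t.lower() for t in word_tokenize(line)
         if t.isalpha() and t.lower() not in stop]
        for line in text.splitlines() if line.strip()]

#Create dictionary and bag of words (one token list per document)
dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(doc) for doc in docs]

#Model TF-IDF
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

#Inspect the highest-weighted terms of the first document
top = sorted(tfidf[bow[0]], key=lambda x: -x[1])[:5]
print([(dictionary[term_id], round(weight, 3)) for term_id, weight in top])

Note that the lemmatized list in the question is built but never used; if lemmatization is wanted, lemmatize each token list and pass those lists to Dictionary instead. Simply wrapping everything as one document (Dictionary([tokens_no_stop])) also silences the TypeError, but TfidfModel would then see a single document and assign zero weight to every term.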

