
Gensim - TF-IDF, how to perform a proper Gensim TF-IDF?


I am trying to perform some NLP (more precisely, a TF-IDF project) on part of my bachelor's thesis.

I exported a small part of it to a document called "thesis.txt", and I seem to run into a problem when fitting the cleaned text data to a gensim Dictionary.

All the words are tokenized and stored in a bag of words, and I can't see what I am doing wrong.

Here is the error I get:

    ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-317-73828cccaebe> in <module>
     17 
     18 #Create dictionary
---> 19 dictionary = Dictionary(tokens_no_stop)
     20 
     21 #Create bag of words

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in __init__(self, documents, prune_at)
     89 
     90         if documents is not None:
---> 91             self.add_documents(documents, prune_at=prune_at)
     92 
     93     def __getitem__(self, tokenid):

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in add_documents(self, documents, prune_at)
    210 
    211             # update Dictionary with the document
--> 212             self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    213 
    214         logger.info(

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in doc2bow(self, document, allow_update, return_missing)
    250         """
    251         if isinstance(document, string_types):
--> 252             raise TypeError("doc2bow expects an array of unicode tokens on input, not a single string")
    253 
    254         # Construct (word, frequency) mapping.

TypeError: doc2bow expects an array of unicode tokens on input, not a single string

Thanks in advance for your help :) (my code is below)

from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

f = open('/Users/romeoleon/Desktop/Python & R/NLP/TRIAL_THESIS/thesis.txt','r')
text = f.read()

#Tokenize text
Tokens = word_tokenize(text)

#Lower case everything
Tokens = [t.lower() for t in Tokens]

#Keep only letters
tokens_alpha = [t for t in Tokens if t.isalpha()]

#Remove stopwords
tokens_no_stop = [t for t in tokens_alpha if t not in stopwords.words('french')]

#Create Lemmatizer
lem = WordNetLemmatizer()
lemmatized = [lem.lemmatize(t) for t in tokens_no_stop]


#Create dictionary
dictionary = Dictionary(tokens_no_stop)

#Create bag of words
bow = [dictionary.doc2bow(line) for line in tokens_no_stop]

#Model TF-IDF
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

Your tokens_no_stop is a list of strings, but Dictionary expects a list of lists of strings (more precisely, an iterable of iterables of strings): each inner list holds the tokens of one document. See the sketch below.
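Here is a minimal sketch of the fix, under two assumptions that are not in the original question: the thesis is split into one document per blank-line-separated paragraph (TF-IDF needs several documents to weight against; with a single document every term's IDF is log(1/1) = 0, so all weights vanish), and word_tokenize is told language='french' to match the French stopwords used above:

from nltk import word_tokenize
from nltk.corpus import stopwords
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

with open('thesis.txt', 'r') as f:  # adjust to your own path
    text = f.read()

stop = set(stopwords.words('french'))

#Build one token list per paragraph: Dictionary iterates over its argument
#and calls doc2bow on each element, so each element must itself be a token list
documents = []
for paragraph in text.split('\n\n'):  # assumption: paragraphs separated by blank lines
    tokens = [t.lower() for t in word_tokenize(paragraph, language='french')]
    tokens = [t for t in tokens if t.isalpha() and t not in stop]
    if tokens:
        documents.append(tokens)

#Dictionary now receives an iterable of token lists: no TypeError
dictionary = Dictionary(documents)
bow = [dictionary.doc2bow(doc) for doc in documents]

tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

If you really only have one document, wrapping the token list once, as in Dictionary([tokens_no_stop]), also removes the TypeError, but the resulting TF-IDF weights are all zero, because every term occurs in the corpus's only document. Note also that your code builds lemmatized and then never uses it; you probably meant to pass the lemmatized tokens on to the dictionary step.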
