
Gensim TF-IDF: how to perform a proper gensim TF-IDF?

I am trying to perform some NLP (more precisely a TF-IDF project) on a part of my bachelor thesis.

I exported a small part of it into a single document called 'thesis.txt', and I'm encountering an issue when fitting the cleaned textual data to a gensim Dictionary.

All the words are tokenized and stored in a bag of words, and I can't figure out what I am doing wrong.

Here's the error I got:

    ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-317-73828cccaebe> in <module>
     17 
     18 #Create dictionary
---> 19 dictionary = Dictionary(tokens_no_stop)
     20 
     21 #Create bag of words

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in __init__(self, documents, prune_at)
     89 
     90         if documents is not None:
---> 91             self.add_documents(documents, prune_at=prune_at)
     92 
     93     def __getitem__(self, tokenid):

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in add_documents(self, documents, prune_at)
    210 
    211             # update Dictionary with the document
--> 212             self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    213 
    214         logger.info(

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in doc2bow(self, document, allow_update, return_missing)
    250         """
    251         if isinstance(document, string_types):
--> 252             raise TypeError("doc2bow expects an array of unicode tokens on input, not a single string")
    253 
    254         # Construct (word, frequency) mapping.

TypeError: doc2bow expects an array of unicode tokens on input, not a single string

Thanks in advance for your help :) (my code is below)

from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

f = open('/Users/romeoleon/Desktop/Python & R/NLP/TRIAL_THESIS/thesis.txt','r')
text = f.read()

#Tokenize text
Tokens = word_tokenize(text)

#Lower case everything
Tokens = [t.lower() for t in Tokens]

#Keep only letters
tokens_alpha = [t for t in Tokens if t.isalpha()]

#Remove stopwords
tokens_no_stop = [t for t in tokens_alpha if t not in stopwords.words('french')]

#Create Lemmatizer
lem = WordNetLemmatizer()
lemmatized = [lem.lemmatize(t) for t in tokens_no_stop]


#Create dictionary
dictionary = Dictionary(tokens_no_stop)

#Create bag of words
bow = [dictionary.doc2bow(line) for line in tokens_no_stop]

#Model TFID
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

Your tokens_no_stop is a list of strings, but Dictionary takes a list of lists of strings (more accurately, an iterable of iterables of strings).
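
A minimal sketch of the fix, assuming your whole thesis.txt excerpt is treated as a single document (and that you meant to feed the lemmatized tokens, which your code builds but never uses), could look like this:

from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

#Dictionary expects one token list per document, so wrap the single
#document's tokens in an outer list: [['word1', 'word2', ...]]
documents = [lemmatized]

dictionary = Dictionary(documents)

#doc2bow is applied per document (per token list), not per token
bow = [dictionary.doc2bow(doc) for doc in documents]

tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

If you want TF-IDF weights per sentence rather than for the text as a whole, you could first split the text with nltk.sent_tokenize and tokenize each sentence separately, so that documents contains one token list per sentence.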
