
Gensim: TypeError: doc2bow expects an array of unicode tokens on input, not a single string

I am getting started with some Python tasks and I ran into a problem using gensim. I am trying to load files from my disk and process them (split them and lowercase() them).

My code is as follows:

dictionary_arr=[]
for file_path in glob.glob(os.path.join(path, '*.txt')):
    with open (file_path, "r") as myfile:
        text=myfile.read()
        for words in text.lower().split():
            dictionary_arr.append(words)
dictionary = corpora.Dictionary(dictionary_arr)

The list (dictionary_arr) contains a list of all the words from all the files; I then use gensim corpora.Dictionary to process the list. However, I am facing an error.

TypeError: doc2bow expects an array of unicode tokens on input, not a single string

I don't understand what is wrong. A little guidance would be appreciated.

In dictionary.py, the initialize function is:

def __init__(self, documents=None):
    self.token2id = {} # token -> tokenId
    self.id2token = {} # reverse mapping for token2id; only formed on request, to save memory
    self.dfs = {} # document frequencies: tokenId -> in how many documents this token appeared

    self.num_docs = 0 # number of documents processed
    self.num_pos = 0 # total number of corpus positions
    self.num_nnz = 0 # total number of non-zeroes in the BOW matrix

    if documents is not None:
        self.add_documents(documents)

The function add_documents builds a dictionary from a collection of documents. Each document is a list of tokens:

def add_documents(self, documents):

    for docno, document in enumerate(documents):
        if docno % 10000 == 0:
            logger.info("adding document #%i to %s" % (docno, self))
        _ = self.doc2bow(document, allow_update=True) # ignore the result, here we only care about updating token ids
    logger.info("built %s from %i documents (total %i corpus positions)" %
                 (self, self.num_docs, self.num_pos))

So, if you initialize Dictionary this way, you must pass a collection of documents rather than a single document. For example,

dic = corpora.Dictionary([a.split()])

works fine.

The input to Dictionary needs to be tokenized strings:

dataset = ['driving car ',
           'drive car carefully',
           'student and university']

# be sure to split sentence before feed into Dictionary
dataset = [d.split() for d in dataset]

vocab = Dictionary(dataset)

Hi everyone, I ran into the same problem. This is what worked for me:

    #Tokenize the sentence into words
    tokens = [word for word in sentence.split()]

    #Create dictionary
    dictionary = corpora.Dictionary([tokens])
    print(dictionary)


Disclaimer: The technical posts on this site are licensed under CC BY-SA 4.0. If you need to repost, please credit this site's URL or the original source. For any questions, contact: yoyou2525@163.com.

 
粵ICP備18138465號  © 2020-2024 STACKOOM.COM