Pandas NLTK tokenizing "unhashable type: 'list'"
Following this example: Twitter data mining with Python and Gephi: Case synthetic biology
CSV to: df['Country', 'Responses']

'Country'    'Responses'
Italy        "Loren ipsum..."
Italy        "Loren ipsum..."
France       "Loren ipsum..."
Germany      "Loren ipsum..."
I can complete steps 1 and 2, but I get an error at step 3:
TypeError: unhashable type: 'list'
I believe this is because I'm working in a DataFrame and made this (possibly mistaken) modification:

Original example:
#divide to words
tokenizer = RegexpTokenizer(r'\w+')
words = tokenizer.tokenize(tweets)
My code:
#divide to words
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
My complete code:
df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
words = df['tokenized_sents']
#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
Out: ['the',
',',
'.',
'of',
'and',
...]
#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
TypeError: unhashable type: 'list'
There are many questions about unhashable lists, but none of them seem to be quite the same as this one. Any suggestions? Thanks.

Traceback:
TypeError Traceback (most recent call last)
<ipython-input-164-a0d17b850b10> in <module>()
1 #keep only most common words
----> 2 fdist = FreqDist(words)
3 mostcommon = fdist.most_common(100)
4 mclist = []
5 for i in range(len(mostcommon)):
/home/*******/anaconda3/envs/*******/lib/python3.5/site-packages/nltk/probability.py in __init__(self, samples)
104 :type samples: Sequence
105 """
--> 106 Counter.__init__(self, samples)
107
108 def N(self):
/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in __init__(*args, **kwds)
521 raise TypeError('expected at most 1 arguments, got %d' % len(args))
522 super(Counter, self).__init__()
--> 523 self.update(*args, **kwds)
524
525 def __missing__(self, key):
/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in update(*args, **kwds)
608 super(Counter, self).update(iterable) # fast path when counter is empty
609 else:
--> 610 _count_elements(self, iterable)
611 if kwds:
612 self.update(kwds)
TypeError: unhashable type: 'list'
The FreqDist function accepts an iterable of hashables (it is intended for strings, but it will probably work with anything). The error you are getting is because you passed in an iterable of lists. As you suspected, this is due to the change you made:
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
If I read the Pandas apply function documentation correctly, that line applies the nltk.word_tokenize function to a Series. word_tokenize returns a list of words.
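Applying the tokenizer across the column therefore produces a Series whose cells are lists. FreqDist subclasses collections.Counter (you can see Counter.__init__ in the traceback above), so the failure reproduces with the standard library alone; in this hypothetical sketch, str.split stands in for nltk.word_tokenize, which likewise returns one list per row:

```python
from collections import Counter

import pandas as pd

# Hypothetical data mirroring the question's 'Responses' column
df = pd.DataFrame({"Responses": ["Loren ipsum dolor", "Loren ipsum"]})
tokens = df["Responses"].str.split()  # a Series whose cells are lists

# Counter (FreqDist's base class) tries to use each element it iterates
# over as a dict key; each element here is a *list*, which is unhashable
try:
    Counter(tokens)
except TypeError as err:
    print(err)  # unhashable type: 'list'
```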
As a solution, simply add the lists together before trying to apply FreqDist, like so:
allWords = []
for wordList in words:
    allWords += wordList
FreqDist(allWords)
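The same flattening can be done in one pass with itertools.chain.from_iterable, a common idiom for joining a list of token lists (the data below is hypothetical):

```python
from itertools import chain

# Hypothetical token lists, one per row, as word_tokenize would produce
words = [["Loren", "ipsum"], ["Loren", "ipsum", "dolor"]]

# Flatten lazily, then materialize; equivalent to the += loop above
allWords = list(chain.from_iterable(words))
print(allWords)  # ['Loren', 'ipsum', 'Loren', 'ipsum', 'dolor']
```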
Here is a more complete revision that does what you want. Note that if you only need to identify the second-most-common set of 100 words, mclist will hold that set the second time through.
df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
lists = df['tokenized_sents']
words = []
for wordList in lists:
    words += wordList
#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]
Out: ['the',
',',
'.',
'of',
'and',
...]
#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
# mclist contains second-most common set of 100 words
words = [w for w in words if w in mclist]
# this will keep ALL occurrences of the words in mclist
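One small efficiency note on the filtering steps: mclist is a list, so every "w not in mclist" test is a linear scan over up to 100 entries. Converting it to a set makes each lookup constant-time. A minimal sketch with hypothetical data:

```python
from collections import Counter

# Hypothetical flattened word list
words = ["the", "Loren", "ipsum", "the", "dolor"]

# Most common word(s) to drop (standing in for the Brown-corpus top 100)
mcset = {word for word, count in Counter(words).most_common(1)}

# Set membership is O(1), so this comprehension stays fast on long texts
words = [w for w in words if w not in mcset]
print(words)  # ['Loren', 'ipsum', 'dolor']
```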