
Count of most 'two words combination' popular Hebrew words in a pandas Dataframe with nltk

I have a csv data file with a 'notes' column that contains satisfaction responses in Hebrew.

I want to find the most popular words and the most popular two-word combinations, count how many times they appear, and plot them in a bar chart.

My code so far:

import pandas as pd

PYTHONIOENCODING = "UTF-8"
df = pd.read_csv('keep.csv', encoding='utf-8', usecols=['notes'])
words = df.notes.str.split(expand=True).stack().value_counts()

This produces a list of words with a counter, but it takes all the Hebrew stop words into account and does not produce frequencies for two-word combinations. I also tried this code, which is not what I am looking for:

import nltk

top_N = 30
txt = df.notes.str.lower().str.replace(r'\|', ' ', regex=True).str.cat(sep=' ')
words = nltk.tokenize.word_tokenize(txt)
word_dist = nltk.FreqDist(words)
rslt = pd.DataFrame(word_dist.most_common(top_N),
                    columns=['Word', 'Frequency'])
print(rslt)
print('=' * 60)

How can I do this with nltk?

In addition to what jezrael posted, I would like to introduce another way to achieve this. Since you are trying to get the frequency of single words as well as two-word combinations, you can also take advantage of the everygrams function.

Given a dataframe:

import pandas as pd

df = pd.DataFrame()
df['notes'] = ['this is sentence one', 'is sentence two this one', 'sentence one was good']

Use everygrams(word_tokenize(x), 1, 2) to get the one-word and two-word forms. To get combinations of one, two and three words, change the 2 to 3, and so on. So in your case it should be:

from nltk import everygrams, word_tokenize

x = df['notes'].apply(lambda x: [' '.join(ng) for ng in everygrams(word_tokenize(x), 1, 2)]).to_frame()

At this point you should see:

                                               notes
0  [this, is, sentence, one, this is, is sentence...
1  [is, sentence, two, this, one, is sentence, se...
2  [sentence, one, was, good, sentence one, one w...
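As mentioned, raising the upper bound of everygrams brings in longer combinations too. A minimal sketch with max_len=3, using plain whitespace splitting instead of word_tokenize so it does not depend on the punkt tokenizer data:

```python
from nltk import everygrams

tokens = 'this is sentence one'.split()  # simple whitespace tokenization

# min_len=1, max_len=3 yields every 1-, 2- and 3-word combination
grams = [' '.join(ng) for ng in everygrams(tokens, min_len=1, max_len=3)]
# 4 unigrams + 3 bigrams + 2 trigrams = 9 n-grams in total
```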

You can now get the counts by flattening the list and using value_counts:

import numpy as np

flattenList = pd.Series(np.concatenate(x.notes))
freqDf = flattenList.value_counts().sort_index().rename_axis('notes').reset_index(name = 'frequency')
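To keep only the most frequent entries, as top_N = 30 does in the question, sort by count (which value_counts already does) and take the head instead of sort_index. A small self-contained sketch with a toy flattened list standing in for flattenList:

```python
import pandas as pd

# toy flattened n-gram list standing in for flattenList above
flattenList = pd.Series(['is', 'is', 'sentence', 'sentence', 'sentence', 'one'])

top_n = 2
freqDf = (flattenList.value_counts()   # already sorted by count, descending
          .head(top_n)                 # keep only the top_n entries
          .rename_axis('notes')
          .reset_index(name='frequency'))
```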

Final output:

           notes  frequency
0           good          1
1             is          2
2    is sentence          2
3            one          3
4        one was          1
5       sentence          3
6   sentence one          2
7   sentence two          1
8           this          2
9        this is          1
10      this one          1
11           two          1
12      two this          1
13           was          1
14      was good          1

Plotting the graph is now easy:

import matplotlib.pyplot as plt 

plt.figure()
flattenList.value_counts().plot(kind = 'bar', title = 'Count of 1-word and 2-word frequencies')
plt.xlabel('Words')
plt.ylabel('Count')
plt.show()
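The question also asks to ignore Hebrew stop words, which neither snippet here handles. A minimal sketch that filters tokens against a user-supplied stop-word set before building the n-grams; the English stop words and sentences are placeholders, so substitute a real Hebrew stop-word list:

```python
from collections import Counter
from nltk import everygrams

stop_words = {'this', 'is', 'was'}  # placeholder set; replace with Hebrew stop words

notes = ['this is sentence one', 'sentence one was good']
counts = Counter()
for note in notes:
    # drop stop words before forming 1- and 2-word combinations,
    # so no n-gram contains a stop word
    tokens = [t for t in note.lower().split() if t not in stop_words]
    counts.update(' '.join(ng) for ng in everygrams(tokens, 1, 2))
```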

Output:

(bar chart of the 1-word and 2-word frequency counts)

Use nltk.util.bigrams.

Solution that counts bigrams from all values together:

import nltk
import pandas as pd

df = pd.DataFrame({'notes':['aa bb cc','cc cc aa aa']})

top_N = 3
txt = df.notes.str.lower().str.replace(r'\|', ' ', regex=True).str.cat(sep=' ')
words = nltk.tokenize.word_tokenize(txt)

bigrm = list(nltk.bigrams(words))
print (bigrm)
[('aa', 'bb'), ('bb', 'cc'), ('cc', 'cc'), ('cc', 'cc'), ('cc', 'aa'), ('aa', 'aa')]

word_dist = nltk.FreqDist([' '.join(x) for x in bigrm])
rslt = pd.DataFrame(word_dist.most_common(top_N),
                columns=['Word', 'Frequency'])
print(rslt)
    Word  Frequency
0  cc cc          2
1  aa bb          1
2  bb cc          1

Solution that computes bigrams separately for each row's value:

df = pd.DataFrame({'notes':['aa bb cc','cc cc aa aa']})

top_N = 3
f = lambda x: list(nltk.bigrams(nltk.tokenize.word_tokenize(x)))
b = df.notes.str.lower().str.replace(r'\|', ' ', regex=True).apply(f)
print (b)

word_dist = nltk.FreqDist([' '.join(y) for x in b for y in x])
rslt = pd.DataFrame(word_dist.most_common(top_N),
                    columns=['Word', 'Frequency'])
print(rslt)
    Word  Frequency
0  aa bb          1
1  bb cc          1
2  cc cc          1
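An alternative to the nested comprehension for flattening the per-row bigram lists is Series.explode; a small sketch using the same toy bigrams:

```python
import pandas as pd

# per-row bigram lists, as produced by the apply above
b = pd.Series([[('aa', 'bb'), ('bb', 'cc')],
               [('cc', 'cc'), ('cc', 'aa'), ('aa', 'aa')]])

rslt = (b.explode()            # one bigram tuple per row
         .map(' '.join)        # turn ('aa', 'bb') into 'aa bb'
         .value_counts()
         .rename_axis('Word')
         .reset_index(name='Frequency'))
```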

If you need to count bigrams together with the individual words:

top_N = 3
f = lambda x: list(nltk.everygrams(nltk.tokenize.word_tokenize(x), 1, 2))
b = df.notes.str.lower().str.replace(r'\|', ' ', regex=True).apply(f)
print (b)

word_dist = nltk.FreqDist([' '.join(y) for x in b for y in x])
rslt = pd.DataFrame(word_dist.most_common(top_N),
                    columns=['Word', 'Frequency'])

Finally, plot with DataFrame.plot.bar:

rslt.plot.bar(x='Word', y='Frequency')
