
Python Pandas NLTK: Show Frequency of Common Phrases (ngrams) From Text Field in Dataframe Using BigramCollocationFinder

I have the following sample tokenized data frame:

No   category  problem_definition_stopwords
175  2521      ['coffee', 'maker', 'brewing', 'properly', '2', '420', '420', '420']
211  1438      ['galley', 'work', 'table', 'stuck']
912  2698      ['cloth', 'stuck']
572  2521      ['stuck', 'coffee']

I ran the code below successfully to extract ngram phrases.

import nltk.collocations
from nltk.collocations import BigramCollocationFinder

bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_documents(df['problem_definition_stopwords'])

# keep only bigrams that appear at least once
finder.apply_freq_filter(1)

# return the 10 bigrams with the highest PMI
finder.nbest(bigram_measures.pmi, 10)

The top 10 bigrams by PMI are shown below:

[('brewing', 'properly'), ('galley', 'work'), ('maker', 'brewing'), ('properly', '2'), ('work', 'table'), ('coffee', 'maker'), ('2', '420'), ('cloth', 'stuck'), ('table', 'stuck'), ('420', '420')]

I want the above result to appear in a data frame containing frequency counts showing how often those bigrams occurred.

Sample desired output:

ngram                    frequency
'brewing', 'properly'    1
'galley', 'work'         1
'maker', 'brewing'       1
'properly', '2'          1
...                      ...

How do I do the above in Python?

This should do it...

First, set up your dataset (or a similar one):

import pandas as pd
import nltk.collocations
from nltk.collocations import BigramCollocationFinder
from nltk import ngrams
from collections import Counter

s = pd.Series(
    [
        ['coffee', 'maker', 'brewing', 'properly', '2', '420', '420', '420'],
        ['galley', 'work', 'table', 'stuck'],
        ['cloth', 'stuck'],
        ['stuck', 'coffee']
    ]
)

finder = BigramCollocationFinder.from_documents(s.values)
bigram_measures = nltk.collocations.BigramAssocMeasures()

# keep only bigrams that appear at least once
finder.apply_freq_filter(1)

# return the 10 n-grams with the highest PMI
result = finder.nbest(bigram_measures.pmi, 10)

Use nltk.ngrams to recreate the ngrams list:

ngram_list = [pair for row in s for pair in ngrams(row, 2)]
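
For the sample series above, ngram_list should hold one tuple per adjacent pair of tokens; the head of the list (illustrative output) looks like:

ngram_list[:3]
# [('coffee', 'maker'), ('maker', 'brewing'), ('brewing', 'properly')]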

Use collections.Counter to count the number of times each ngram appears across the entire corpus:

counts = Counter(ngram_list).most_common()
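
counts is a list of (bigram, frequency) tuples sorted by frequency, highest first; since only ('420', '420') occurs more than once in this corpus, the head should look like:

counts[:2]
# [(('420', '420'), 2), (('coffee', 'maker'), 1)]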

Build a DataFrame that looks like what you want:

pd.DataFrame.from_records(counts, columns=['gram', 'count'])
                   gram  count
0            (420, 420)      2
1       (coffee, maker)      1
2      (maker, brewing)      1
3   (brewing, properly)      1
4         (properly, 2)      1
5              (2, 420)      1
6        (galley, work)      1
7         (work, table)      1
8        (table, stuck)      1
9        (cloth, stuck)      1
10      (stuck, coffee)      1

You can then filter to look at only those ngrams produced by your finder.nbest call:

df = pd.DataFrame.from_records(counts, columns=['gram', 'count'])
df[df['gram'].isin(result)]
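
For this corpus that drops only the ('stuck', 'coffee') row, since it is not among the 10 bigrams returned by nbest.

As an aside, the finder already tracks these frequencies internally in finder.ngram_fd (an nltk FreqDist, which is a Counter subclass), so a minimal sketch like the one below could build the same table without the separate ngrams/Counter pass; freq_df is just an illustrative name:

# finder.ngram_fd maps each bigram to its frequency across the corpus
freq_df = pd.DataFrame(list(finder.ngram_fd.items()), columns=['gram', 'count'])
freq_df = freq_df.sort_values('count', ascending=False).reset_index(drop=True)

# keep only the bigrams returned by the nbest call
freq_df[freq_df['gram'].isin(result)]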
