
python nltk -- stemming list of sentences/phrases

I have a bunch of sentences in a list and I want to use the nltk library to stem them. I am able to stem one sentence at a time, but I am having trouble stemming sentences from a list and joining them back together. Is there a step I am missing? I'm quite new to the nltk library. Thanks!

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer

ps = PorterStemmer()

# Success: one sentence at a time
data = 'the gamers playing games'
words = word_tokenize(data)
for w in words:
    print(ps.stem(w))


# Fails: 

data_list = ['the gamers playing games',
            'higher scores',
            'sports']
words = word_tokenize(data_list)
for w in words:
    print(ps.stem(w))

# Error: TypeError: expected string or bytes-like object
# result should be: 
['the gamer play game',
 'higher score',
 'sport']

You're passing a list to word_tokenize, which only accepts a string.

The solution is to wrap your logic in another for-loop:

data_list = ['the gamers playing games', 'higher scores', 'sports']
for words in data_list:
    words = word_tokenize(words)  # tokenize each sentence string
    for w in words:
        print(ps.stem(w))

the
gamer
play
game
higher
score
sport

To stem the sentences and rebuild them into a list, I'd go for:

ps = PorterStemmer()
data_list_s = []
for words in data_list:
    words = word_tokenize(words)
    words_s = ''
    for w in words:
        w_s = ps.stem(w)
        words_s += w_s + ' '
    data_list_s.append(words_s.strip())  # strip the trailing space

This will put the stemmed results of each element from data_list into a new list called data_list_s.

import nltk
from nltk.tokenize import sent_tokenize
from nltk.stem import PorterStemmer

sentence = """At eight o'clock on Thursday morning, Arthur didn't feel very good. So i take him to hospital."""

sentence = sentence.lower()

word_tokens = nltk.word_tokenize(sentence)
sent_tokens = sent_tokenize(sentence)

stemmer = PorterStemmer()
stemmed_word = []
stemmed_sent = []
for token in word_tokens:
    stemmed_word.append(stemmer.stem(token))
    
for sent_token in sent_tokens:
    # note: stem() treats each full sentence as a single word,
    # so only the end of the sentence may change
    stemmed_sent.append(stemmer.stem(sent_token))
    
print(stemmed_word)
print(stemmed_sent)
