
Python NLTK keyword extraction from a sentence

"First thing we do, let's kill all the lawyers." - William Shakespeare

Given the quote above, I would like to pull out "kill" and "lawyers" as the two prominent keywords to describe the overall meaning of the sentence. I have extracted the following noun/verb POS tags:

[["First", "NNP"], ["thing", "NN"], ["do", "VBP"], ["lets", "NNS"], ["kill", "VB"], ["lawyers", "NNS"]]

The more general problem I am trying to solve is to distill a sentence down to its "most important"* words/tags in order to summarise its overall "meaning"*.

*Note the scare quotes. I acknowledge this is a very hard problem and there is most likely no perfect solution at this point in time. Nonetheless, I am interested to see attempts at solving both the specific problem (extracting "kill" and "lawyers") and the general problem (summarising the overall meaning of a sentence in keywords/tags).

I don't think there's any perfect answer to this question, because there is no gold-standard set of input/output mappings that everybody will agree upon. You think the most important words for that sentence are ('kill', 'lawyers'); someone else might argue the correct answer should be ('first', 'kill', 'lawyers'). If you are able to describe precisely and completely unambiguously what you want your system to do, your problem will be more than half solved.

Until then, I can suggest some additional heuristics to help you get what you want.
Build an idf dictionary using your data, i.e. build a mapping from every word to a number that correlates with how rare that word is. Bonus points for doing it for larger n-grams as well.
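A minimal sketch of such a dictionary, treating each sentence in your corpus as its own document and using naive whitespace tokenisation (both simplifying assumptions):

import math
from collections import Counter

def build_idf(sentences):
    # Map each word to log(N / df): the rarer the word, the higher the value.
    n = len(sentences)
    df = Counter()
    for sent in sentences:
        for word in set(w.lower() for w in sent.split()):
            df[word] += 1
    return {word: math.log(n / count) for word, count in df.items()}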

By combining the idf values of each word in your input sentence with their POS tags, you can answer questions of the form 'What is the rarest verb in this sentence?', 'What is the rarest noun in this sentence?', etc. In any reasonable corpus, 'kill' should be rarer than 'do', and 'lawyers' rarer than 'thing', so finding the rarest noun and rarest verb in a sentence and returning just those two may do the trick for most of your intended use cases. If not, you can always make your algorithm a little more complicated and see if that seems to do the job better.
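Using build_idf from the sketch above, the rarest-noun/rarest-verb heuristic might look like this (the toy corpus and the default idf of 0.0 for unseen words are illustrative assumptions; you may prefer a high default, since an unseen word is usually a rare one):

def rarest(pos_list, idf, tag_prefix):
    # Highest-idf token whose POS tag starts with tag_prefix ('NN', 'VB', ...).
    candidates = [(idf.get(tok.lower(), 0.0), tok)
                  for tok, tag in pos_list if tag.startswith(tag_prefix)]
    return max(candidates)[1] if candidates else None

idf = build_idf(["first thing we do", "the first thing to do", "lawyers kill time"])
pos_list = [["First", "NNP"], ["thing", "NN"], ["do", "VBP"],
            ["lets", "NNS"], ["kill", "VB"], ["lawyers", "NNS"]]
print(rarest(pos_list, idf, "VB"), rarest(pos_list, idf, "NN"))  # kill lawyers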

Ways to expand this include trying to identify larger phrases using n-gram idfs, building a full parse tree of the sentence (using, for example, the Stanford parser) and identifying patterns within these trees that indicate where in the tree the important words tend to sit, etc. A sketch of the parse-tree route follows below.
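NLTK can drive the Stanford parser through a CoreNLP server; this sketch assumes you have such a server already running locally on port 9000:

from nltk.parse.corenlp import CoreNLPParser

# Assumes a Stanford CoreNLP server was started separately, e.g.:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
parser = CoreNLPParser(url='http://localhost:9000')
tree = next(parser.raw_parse("First thing we do, let's kill all the lawyers."))
tree.pretty_print()  # inspect where the content words sit in the tree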

One simple approach would be to keep stop word lists for NN, VB, etc. These would be high-frequency words that usually don't add much semantic content to a sentence.

The snippet below shows distinct lists for each type of word token, but you could just as well use a single stop word list for both verbs and nouns.

# Stop-word lists keyed by POS tag: high-frequency words that usually
# carry little semantic content for that word class.
stop_words = dict(
    NNP=['first', 'second'],
    NN=['thing'],
    VBP=['do', 'done'],
    VB=[],
    NNS=['lets', 'things'],
)


def filter_stop_words(pos_list):
    # Keep only the (token, tag) pairs whose token is not a stop word for
    # its tag; .get(..., []) passes through tags that have no stop list.
    return [[token, token_type]
            for token, token_type in pos_list
            if token.lower() not in stop_words.get(token_type, [])]
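Applied to the tags from the question, only the informative pairs survive:

>>> filter_stop_words([["First", "NNP"], ["thing", "NN"], ["do", "VBP"],
...                    ["lets", "NNS"], ["kill", "VB"], ["lawyers", "NNS"]])
[['kill', 'VB'], ['lawyers', 'NNS']]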

In your case, you can simply use the RAKE package for Python (thanks to Fabian) to get what you need:

>>> import RAKE
>>> path = 'path/to/stoplist'  # your path to a stop-word list file
>>> r = RAKE.Rake(path)
>>> r.run("First thing we do, let's kill all the lawyers")
[('lawyers', 1.0), ('kill', 1.0), ('thing', 1.0)]

The path should point to a plain-text stop-word list file.

But in general, you would be better off using the NLTK package for NLP tasks.
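For instance, the POS tags shown in the question can be reproduced with NLTK's own tokeniser and tagger (a minimal sketch; the 'punkt' and 'averaged_perceptron_tagger' models must be downloaded once via nltk.download):

import nltk

tokens = nltk.word_tokenize("First thing we do, let's kill all the lawyers.")
print(nltk.pos_tag(tokens))  # a list of (word, tag) pairs like those above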
