NLTK: How do I traverse a noun phrase to return a list of strings?
In NLTK, how do I traverse a parsed sentence to return a list of noun-phrase strings?
I have two goals:
(1) Create a list of noun phrases instead of printing them with the traverse() method. I currently use StringIO to capture the output of the existing traverse() method, which is not an acceptable solution.
(2) Un-parse a noun-phrase string, so that '(NP Michael/NNP Jackson/NNP)' becomes 'Michael Jackson'. Is there a method in NLTK for un-parsing?
The NLTK documentation suggests using traverse() to view noun phrases, but how do I capture 't' in this recursive method so I can build a list of noun-phrase strings?
import nltk
from nltk.tag import pos_tag

def traverse(t):
    try:
        t.label()
    except AttributeError:
        return
    else:
        if t.label() == 'NP':
            print(t)  # or do something else
        else:
            for child in t:
                traverse(child)

def nounPhrase(tagged_sent):
    # Tag sentence for part of speech
    tagged_sent = pos_tag(sentence.split())  # list of (word, part-of-speech) tuples
    # Define several tag patterns
    grammar = r"""
      NP: {<DT|PP\$>?<JJ>*<NN>}   # chunk determiner/possessive, adjectives and noun
          {<NNP>+}                # chunk sequences of proper nouns
          {<NN>+}                 # chunk consecutive nouns
    """
    cp = nltk.RegexpParser(grammar)  # define parser
    SentenceTree = cp.parse(tagged_sent)
    NounPhrases = traverse(SentenceTree)  # collect noun phrases
    return NounPhrases

sentence = "Michael Jackson likes to eat at McDonalds"
tagged_sent = pos_tag(sentence.split())
NP = nounPhrase(tagged_sent)
print(NP)
This currently prints:
(NP Michael/NNP Jackson/NNP)
(NP McDonalds/NNP)
and stores None in NP, because traverse() only prints and never returns anything.
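For reference, one way to address goal (1) is to pass an accumulator list into the recursive walk instead of printing, and to address goal (2) by joining the words of each NP subtree's leaves. This is a sketch, not from the original post; it hand-tags the sentence so that no tagger model download is needed:

```python
import nltk

def traverse_collect(t, found):
    """Recursively walk the tree, appending each NP subtree's text to found."""
    try:
        t.label()
    except AttributeError:
        return  # t is a (word, tag) leaf tuple, not a subtree
    if t.label() == 'NP':
        # "Un-parse": join just the words of the NP's leaves
        found.append(' '.join(word for word, tag in t.leaves()))
    else:
        for child in t:
            traverse_collect(child, found)

grammar = r"""
  NP: {<DT|PP\$>?<JJ>*<NN>}
      {<NNP>+}
      {<NN>+}
"""
cp = nltk.RegexpParser(grammar)
# Hand-tagged equivalent of pos_tag(sentence.split())
tagged_sent = [('Michael', 'NNP'), ('Jackson', 'NNP'), ('likes', 'VBZ'),
               ('to', 'TO'), ('eat', 'VB'), ('at', 'IN'), ('McDonalds', 'NNP')]
tree = cp.parse(tagged_sent)
noun_phrases = []
traverse_collect(tree, noun_phrases)
print(noun_phrases)  # ['Michael Jackson', 'McDonalds']
```

With pos_tag the exact tags may differ slightly (e.g. NNPS for McDonalds), which is why the grammar patterns matter.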
def extract_np(psent):
    for subtree in psent.subtrees():
        if subtree.label() == 'NP':
            yield ' '.join(word for word, tag in subtree.leaves())

cp = nltk.RegexpParser(grammar)
parsed_sent = cp.parse(tagged_sent)
for npstr in extract_np(parsed_sent):
    print(npstr)
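Put together, the extract_np approach above can be run end to end like this (a self-contained sketch; the sentence is hand-tagged here so only nltk itself is required, no model downloads):

```python
import nltk

def extract_np(psent):
    """Yield the text of every NP subtree in a chunk-parsed sentence."""
    for subtree in psent.subtrees():
        if subtree.label() == 'NP':
            yield ' '.join(word for word, tag in subtree.leaves())

grammar = r"""
  NP: {<DT|PP\$>?<JJ>*<NN>}
      {<NNP>+}
      {<NN>+}
"""
cp = nltk.RegexpParser(grammar)
tagged_sent = [('Michael', 'NNP'), ('Jackson', 'NNP'), ('likes', 'VBZ'),
               ('to', 'TO'), ('eat', 'VB'), ('at', 'IN'), ('McDonalds', 'NNP')]
parsed_sent = cp.parse(tagged_sent)
nps = list(extract_np(parsed_sent))
print(nps)  # ['Michael Jackson', 'McDonalds']
```

Because extract_np is a generator, you can also consume it lazily in a for loop instead of materializing the list.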
Another possibility for extracting noun phrases is the Constituent-Treelib library, which can be installed via: pip install constituent-treelib
Using this library, we perform the following steps to extract (noun) phrases:
from constituent_treelib import ConstituentTree, BracketedTree
# First, we define the parsed sentence from where we want to extract phrases
parsed_sentence = "(S (NP (NNP Michael) (NNP Jackson)) (VP (VBZ likes) (S (VP (TO to) (VP (VB eat) (PP (IN at) (NP (NNPS McDonalds))))))))"
# ...and wrap the parsed sentence into a BracketedTree object
parsed_sentence = BracketedTree(parsed_sentence)
# Next, we define the language that should be considered with respect to the underlying models
language = ConstituentTree.Language.English
# You can also specify the desired model for the language ("Small" is selected by default)
spacy_model_size = ConstituentTree.SpacyModelSize.Large
# Now, we create the necessary NLP pipeline, which is required to create a ConstituentTree object
nlp = ConstituentTree.create_pipeline(language, spacy_model_size)
# If you wish, you can instruct the library to download and install the models automatically
# nlp = ConstituentTree.create_pipeline(language, spacy_model_size, download_models=True)
# Now we can instantiate a ConstituentTree object and pass it the parsed sentence as well as the NLP pipeline
tree = ConstituentTree(parsed_sentence, nlp)
# Finally, we can extract all phrases from the tree
all_phrases = tree.extract_all_phrases(avoid_nested_phrases=True)
>>> {'S': ['Michael Jackson likes to eat at McDonalds'],
>>> 'NP': ['Michael Jackson'],
>>> 'VP': ['likes to eat at McDonalds'],
>>> 'PP': ['at McDonalds']}
# ...or restrict them only to noun phrases
noun_phrases = all_phrases['NP']
>>> ['Michael Jackson']
If you also want to visualize the tree, you can do so as follows:
tree.export_tree('my_tree.pdf')
Result: (a rendered image of the constituency tree)