
How to get all noun phrases in Spacy

I am new to spaCy and I would like to extract "all" the noun phrases from a sentence. I'm wondering how I can do it. I have the following code:

import spacy

nlp = spacy.load("en_core_web_sm")  # the "en" shorthand was removed in spaCy 3

with open("E:/test.txt", "r") as file:
    doc = nlp(file.read())

for np in doc.noun_chunks:
    print(np.text)

But it returns only the base noun phrases, that is, phrases which don't have any other NP nested in them. For the following sentence, I get the result below:

Phrase: We try to explicitly describe the geometry of the edges of the images.

Result: We, the geometry, the edges, the images.

Expected result: We, the geometry, the edges, the images, the geometry of the edges of the images, the edges of the images.

How can I get all the noun phrases, including nested phrases?

Please see the commented code below to recursively combine the nouns; it is inspired by the spaCy docs.

import spacy

nlp = spacy.load("en_core_web_sm")  # the "en" shorthand was removed in spaCy 3

doc = nlp("We try to explicitly describe the geometry of the edges of the images.")

for np in doc.noun_chunks:  # np is a Span; printing it prints its text
    print(np)

print()

# code to recursively combine nouns
# 'We' is actually a pronoun but included in your question
# hence the token.pos_ == "PRON" part in the last if statement
# suggest you extract PRON separately like the noun-chunks above

nounIndices = []
for index, token in enumerate(doc):
    # print(token.text, token.pos_, token.dep_, token.head.text)
    if token.pos_ == 'NOUN':
        nounIndices.append(index)

print(nounIndices)

for idxValue in nounIndices:
    # re-parse each time: merging changes the token indices
    doc = nlp("We try to explicitly describe the geometry of the edges of the images.")
    span = doc[doc[idxValue].left_edge.i : doc[idxValue].right_edge.i + 1]
    # Span.merge() was removed in spaCy 3; use Doc.retokenize instead
    with doc.retokenize() as retokenizer:
        retokenizer.merge(span)

    for token in doc:
        if token.dep_ in ('dobj', 'pobj') or token.pos_ == "PRON":
            print(token.text)

For every noun chunk you can also get the subtree beneath it. spaCy provides two ways to access it: the left_edge and right_edge attributes, and the subtree attribute, which returns a Token iterator rather than a Span. Combining noun_chunks with their subtrees leads to some duplicates, which can be removed afterwards.

Here is an example using the left_edge and right_edge attributes:

{np.text
 for nc in doc.noun_chunks
 for np in [
     nc,
     doc[nc.root.left_edge.i : nc.root.right_edge.i + 1]]}

==>

{'We',
 'the edges',
 'the edges of the images',
 'the geometry',
 'the geometry of the edges of the images',
 'the images'}

Please try this to get all nouns from a text:

import spacy
nlp = spacy.load("en_core_web_sm")
text = ("We try to explicitly describe the geometry of the edges of the images.")
doc = nlp(text)
print([chunk.text for chunk in doc.noun_chunks])
To match longer runs of nouns, you can also use the Matcher:

import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")

doc = nlp('Features of the iphone applications include a beautiful design, smart search, automatic labels and optional voice responses.')  ## sample text
matcher = Matcher(nlp.vocab)
pattern = [{"POS": "NOUN", "OP": "+"}]  ## "+" = one or more consecutive nouns
matcher.add("NOUN_PATTERN", [pattern])
print(matcher(doc, as_spans=True))

This gets all the nouns of your text. Using the Matcher with patterns is a great way to get the combinations you want. Swap "en_core_web_sm" for a larger model such as "en_core_web_lg" if you want better accuracy.
