
Why is pos_tag in NLTK tagging "please" as NN?

I have a serious problem: I downloaded the latest version of NLTK, but I am getting strange POS output:

import nltk

sample_text = "start please with me"
tokenized = nltk.sent_tokenize(sample_text)

for sentence in tokenized:
    words = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(words)
    chunkGram = r"""Chunk_Start: {<VB|VBZ>*}"""
    chunkParser = nltk.RegexpParser(chunkGram)
    chunked = chunkParser.parse(tagged)
    print(chunked)

[OUT]:

(S start/JJ please/NN with/IN me/PRP)

I have no idea why "start" is tagged as JJ and "please" as NN.

The default NLTK pos_tag has somehow learnt that please can be a noun. That is wrong in proper English in almost every case, e.g.

>>> from nltk import pos_tag
>>> pos_tag('Please go away !'.split())
[('Please', 'NNP'), ('go', 'VB'), ('away', 'RB'), ('!', '.')]
>>> pos_tag('Please'.split())
[('Please', 'VB')]
>>> pos_tag('please'.split())
[('please', 'NN')]
>>> pos_tag('please !'.split())
[('please', 'NN'), ('!', '.')]
>>> pos_tag('Please !'.split())
[('Please', 'NN'), ('!', '.')]
>>> pos_tag('Would you please go away ?'.split())
[('Would', 'MD'), ('you', 'PRP'), ('please', 'VB'), ('go', 'VB'), ('away', 'RB'), ('?', '.')]
>>> pos_tag('Would you please go away !'.split())
[('Would', 'MD'), ('you', 'PRP'), ('please', 'VB'), ('go', 'VB'), ('away', 'RB'), ('!', '.')]
>>> pos_tag('Please go away ?'.split())
[('Please', 'NNP'), ('go', 'VB'), ('away', 'RB'), ('?', '.')]

Going by WordNet as a reference, please should never be a noun:

>>> from nltk.corpus import wordnet as wn
>>> wn.synsets('please')
[Synset('please.v.01'), Synset('please.v.02'), Synset('please.v.03'), Synset('please.r.01')]

But I think this is mostly due to the text used to train the PerceptronTagger, rather than the implementation of the tagger itself.

Now, if we look inside the pre-trained PerceptronTagger, we see that its lookup dictionary only knows a little over 1,500 words:

>>> from nltk import PerceptronTagger
>>> tagger = PerceptronTagger()
>>> tagger.tagdict['I']
'PRP'
>>> tagger.tagdict['You']
'PRP'
>>> tagger.tagdict['start']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'start'
>>> tagger.tagdict['Start']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'Start'
>>> tagger.tagdict['please']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'please'
>>> tagger.tagdict['Please']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'Please'
>>> len(tagger.tagdict)
1549
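That tagdict is only a shortcut for frequent, unambiguous words; any word missing from it (like please) falls through to the perceptron's feature-based prediction, which is where the NN guess comes from. A stdlib-only sketch of this two-tier lookup (the dictionary entries and the fallback below are toy placeholders, not NLTK's actual model):

```python
# Two-tier lookup in the style of PerceptronTagger: frequent unambiguous
# words are served from a dictionary; everything else goes through a
# statistical model (stubbed out here as a constant guess).
def make_tagger(tagdict, fallback):
    def tag(words):
        return [(w, tagdict.get(w) or fallback(w)) for w in words]
    return tag

tagdict = {"I": "PRP", "You": "PRP"}   # toy dictionary, not NLTK's tagdict
fallback = lambda w: "NN"              # stub model: always guess noun
tag = make_tagger(tagdict, fallback)
print(tag("I please You".split()))
# → [('I', 'PRP'), ('please', 'NN'), ('You', 'PRP')]
```

The real fallback is an averaged perceptron over contextual features, but the control flow is the same: an unseen lowercase "please" never hits the dictionary, so its tag is entirely at the mercy of the learned model.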

One hack you could do is to patch the tagger's dictionary:

>>> tagger.tagdict['start'] = 'VB'
>>> tagger.tagdict['please'] = 'VB'
>>> tagger.tag('please start with me'.split())
[('please', 'VB'), ('start', 'VB'), ('with', 'IN'), ('me', 'PRP')]

But the most logical thing to do is simply to retrain the tagger; see http://www.nltk.org/_modules/nltk/tag/perceptron.html#PerceptronTagger.train
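A minimal sketch of such retraining, assuming a tiny hand-labelled corpus just for illustration (in practice you would train on a real tagged corpus such as nltk.corpus.treebank.tagged_sents()):

```python
from nltk.tag.perceptron import PerceptronTagger

# Toy training data: each sentence is a list of (word, tag) tuples.
train_data = [
    [("please", "VB"), ("start", "VB"), ("with", "IN"), ("me", "PRP")],
    [("would", "MD"), ("you", "PRP"), ("please", "VB"),
     ("go", "VB"), ("away", "RB")],
]

tagger = PerceptronTagger(load=False)  # start from scratch, skip bundled model
tagger.train(train_data, nr_iter=5)
print(tagger.tag("please start with me".split()))
```

With two sentences this will of course overfit badly; the point is only the API shape: train() takes an iterable of tagged sentences and rebuilds both the tagdict and the perceptron weights.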


And if you don't want to retrain a tagger, see Python NLTK pos_tag not returning correct part of speech tag.

Most probably, using the StanfordPOSTagger will fit your needs:

>>> from nltk import StanfordPOSTagger
>>> sjar = '/home/alvas/stanford-postagger/stanford-postagger.jar'
>>> m = '/home/alvas/stanford-postagger/models/english-left3words-distsim.tagger'
>>> spos_tag = StanfordPOSTagger(m, sjar)
>>> spos_tag.tag('Please go away !'.split())
[(u'Please', u'VB'), (u'go', u'VB'), (u'away', u'RB'), (u'!', u'.')]
>>> spos_tag.tag('Please'.split())
[(u'Please', u'VB')]
>>> spos_tag.tag('Please !'.split())
[(u'Please', u'VB'), (u'!', u'.')]
>>> spos_tag.tag('please !'.split())
[(u'please', u'VB'), (u'!', u'.')]
>>> spos_tag.tag('please'.split())
[(u'please', u'VB')]
>>> spos_tag.tag('Would you please go away !'.split())
[(u'Would', u'MD'), (u'you', u'PRP'), (u'please', u'VB'), (u'go', u'VB'), (u'away', u'RB'), (u'!', u'.')]
>>> spos_tag.tag('Would you please go away ?'.split())
[(u'Would', u'MD'), (u'you', u'PRP'), (u'please', u'VB'), (u'go', u'VB'), (u'away', u'RB'), (u'?', u'.')]

For Linux: see https://gist.github.com/alvations/e1df0ba227e542955a8a

For Windows: see https://gist.github.com/alvations/0ed8641d7d2e1941b9f9

