
Nltk word tokenizer treats ending single quote as a separate word

Here is a code snippet from an IPython notebook:

from nltk.tokenize import word_tokenize

test = "'v'"
words = word_tokenize(test)
words

The output is:

["'v", "'"]

As you can see, the ending single quote is treated as a separate word, while the starting one stays attached to "v". I would like to get

["'v'"]

or

["'", "v", "'"]

Is there any way to achieve this?

Try the MosesTokenizer and MosesDetokenizer from nltk.tokenize.moses:

from nltk.tokenize.moses import MosesTokenizer, MosesDetokenizer
t, d = MosesTokenizer(), MosesDetokenizer()
tokens = t.tokenize(test)
tokens
['&apos;v&apos;']

where &apos; is the escaped form of '.

You can also use the escape=False argument to prevent escaping of XML special characters:

>>> t.tokenize("'v'", escape=False)
["'v'"]

The default output that keeps &apos;v&apos; is consistent with the original Moses tokenizer, i.e.

~/mosesdecoder/scripts/tokenizer$ perl tokenizer.perl -l en < x
Tokenizer Version 1.1
Language: en
Number of threads: 1
&apos;v&apos;

There are other tokenizers as well if you wish to explore ones with different handling of single quotes.
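One concrete alternative (not part of the original answer, just a suggestion) is NLTK's purely regex-based wordpunct_tokenize, which already produces the fully split output mentioned in the question because every run of punctuation becomes its own token:

from nltk.tokenize import wordpunct_tokenize

# wordpunct_tokenize splits on the regex \w+|[^\w\s]+, so both single
# quotes are separated from the word between them:
print(wordpunct_tokenize("'v'"))
# ["'", 'v', "'"]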

It looks like this is not a bug but the expected output from nltk.word_tokenize().

This is consistent with the Treebank word tokenizer from Robert McIntyre's tokenizer.sed:

$ sed -f tokenizer.sed 
'v'
'v ' 

As @Prateek pointed out, you can try other tokenizers that might suit your needs.


The more interesting question is: why does the starting single quote stick to the following character at all?

Couldn't we hack the TreebankWordTokenizer, like what is done in https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py ?

import re

from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+|[\']+)', re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

_treebank_word_tokenizer.tokenize("'v'")

[OUT]:

["'", 'v', "'"]

Yes, the modification works for the string in the OP, but it starts to break all the clitics, e.g.

>>> print(_treebank_word_tokenizer.tokenize("'v', I've been fooled but I'll seek revenge."))
["'", 'v', "'", ',', 'I', "'", 've', 'been', 'fooled', 'but', 'I', "'", 'll', 'seek', 'revenge', '.']

Note that the original nltk.word_tokenize() keeps the starting single quote attached to the clitics and outputs:

>>> print(nltk.word_tokenize("'v', I've been fooled but I'll seek revenge."))
["'v", "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']

There are strategies to handle the ending quotes at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L268, but nothing for a starting single quote that sticks to a clitic-like string.
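To see which quote rules the stock tokenizer already carries, one can simply print the pattern lists it exposes (the same STARTING_QUOTES / ENDING_QUOTES lists that the hack above inserts into); a small exploratory sketch:

from nltk.tokenize.treebank import TreebankWordTokenizer

# Each entry is a (compiled regex, replacement) pair; they are applied in list order.
tb = TreebankWordTokenizer()
for regexp, substitution in tb.STARTING_QUOTES + tb.ENDING_QUOTES:
    print(regexp.pattern, '->', repr(substitution))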

But the main cause of this "problem" is that the word tokenizer has no notion of balancing quotation marks. If we look at MosesTokenizer, there are many more mechanisms for handling quotes.
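Just to illustrate the idea of "balanced" quotes (a toy sketch only, not how MosesTokenizer is actually implemented): split a leading and a trailing single quote off a token only when they occur as a matching pair, and leave lone quotes alone:

import re

def split_balanced_outer_quotes(token):
    # Toy illustration: split both outer single quotes off a token
    # if and only if they form a matching pair around it.
    m = re.match(r"^'(.+)'$", token)
    if m:
        return ["'", m.group(1), "'"]
    return [token]

print(split_balanced_outer_quotes("'v'"))   # ["'", 'v', "'"]
print(split_balanced_outer_quotes("'ve"))   # ["'ve"] -- unbalanced, left intact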


Interestingly, Stanford CoreNLP doesn't do that.

In a terminal:

wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-preload tokenize,ssplit,pos,lemma,parse,depparse \
-status_port 9000 -port 9000 -timeout 15000

In Python:

>>> from nltk.parse.corenlp import CoreNLPParser
>>> parser = CoreNLPParser()
>>> parser.tokenize("'v'")
<generator object GenericCoreNLPParser.tokenize at 0x1148f9af0>
>>> list(parser.tokenize("'v'"))
["'", 'v', "'"]
>>> list(parser.tokenize("I've"))
['I', "'", 've']
>>> list(parser.tokenize("I've'"))
['I', "'ve", "'"]
>>> list(parser.tokenize("I'lk'"))
['I', "'", 'lk', "'"]
>>> list(parser.tokenize("I'lk"))
['I', "'", 'lk']
>>> list(parser.tokenize("I'll"))
['I', "'", 'll']

It looks like there is some kind of regex hack in there to recognize/correct English clitics.

If we do some reverse engineering:

>>> list(parser.tokenize("'re"))
["'", 're']
>>> list(parser.tokenize("you're"))
['you', "'", 're']
>>> list(parser.tokenize("you're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you 're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you the 're'"))
['you', 'the', "'re", "'"]

A regex could be added to patch word_tokenize, e.g.

>>> import re
>>> pattern = re.compile(r"(?i)(\')(?!ve|ll|t)(\w)\b")
>>> x = "I'll be going home I've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)
"I'll be going home I've the ' v ' isn't want I want to split but I want to catch tokens like ' v and ' w ' ."
>>> x = "I 'll be going home I 've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)
"I 'll be going home I 've the ' v ' isn't want I want to split but I want to catch tokens like ' v and ' w ' ."

So we can do something like this:

import re
from nltk.tokenize import sent_tokenize
from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+)', re.U)
improved_open_single_quote_regex = re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d)(\w)\b", re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.STARTING_QUOTES.append((improved_open_single_quote_regex, r'\1 \2'))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

def word_tokenize(text, language='english', preserve_line=False):
    """
    Return a tokenized copy of *text*,
    using NLTK's recommended word tokenizer
    (currently an improved :class:`.TreebankWordTokenizer`
    along with :class:`.PunktSentenceTokenizer`
    for the specified language).

    :param text: text to split into words
    :type text: str
    :param language: the model name in the Punkt corpus
    :type language: str
    :param preserve_line: An option to preserve the sentence and not sentence-tokenize it.
    :type preserve_line: bool
    """
    sentences = [text] if preserve_line else sent_tokenize(text, language)
    return [token for sent in sentences
            for token in _treebank_word_tokenizer.tokenize(sent)]

[OUT]:

>>> print(word_tokenize("The 'v', I've been fooled but I'll seek revenge."))
['The', "'", 'v', "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']
>>> word_tokenize("'v' 're'")
["'", 'v', "'", "'re", "'"]
