
Python NLTK Word Tokenize UnicodeDecode Error

I get an error when trying the following code. I am trying to read from a text file and tokenize the words using nltk. Any ideas? The text file can be found here

from nltk.tokenize import word_tokenize
short_pos = open("./positive.txt","r").read()
#short_pos = short_pos.decode('utf-8').lower()
short_pos_words = word_tokenize(short_pos)

Error:

Traceback (most recent call last):
  File "sentimentAnalysis.py", line 19, in <module>
    short_pos_words = word_tokenize(short_pos)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
    return [token for sent in sent_tokenize(text, language)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 91, in sent_tokenize
    return tokenizer.tokenize(text)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1226, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1274, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1265, in span_tokenize
    return [(sl.start, sl.stop) for sl in slices]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1304, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 311, in _pair_iter
    for el in it:
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1280, in _slices_from_text
    if self.text_contains_sentbreak(context):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1325, in text_contains_sentbreak
    for t in self._annotate_tokens(self._tokenize_words(text)):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 1460, in _annotate_second_pass
    for t1, t2 in _pair_iter(tokens):
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 310, in _pair_iter
    prev = next(it)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 577, in _annotate_first_pass
    for aug_tok in tokens:
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 542, in _tokenize_words
    for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xed in position 6: ordinal not in range(128)

Thanks for your support.

It looks like this text is encoded in Latin-1, so this worked for me:

import codecs
from nltk.tokenize import word_tokenize

# Open the file with the latin-1 codec so the accented characters decode correctly.
with codecs.open("positive.txt", "r", "latin-1") as inputfile:
    text = inputfile.read()

short_pos_words = word_tokenize(text)
print len(short_pos_words)

You can test for different encodings by, for example, looking at the file in a good editor such as TextWrangler. You can

1) open the file with different encodings to see which one looks right, and

2) look at the character that is causing the problem. In your case it is the character at position 4645, which happens to be an accented word in a Spanish-language review. That is not part of ASCII, so decoding as ASCII fails; it is also not a valid codepoint in UTF-8 (see the sketch below for one programmatic way to check).
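For reference, a minimal Python 2 sketch of that kind of check: it reads the raw bytes, prints the slice around the reported position, and tries a few candidate codecs (the codec list here is only an illustrative assumption):

# Read the file as raw bytes so nothing is decoded yet.
raw = open("positive.txt", "rb").read()

# Inspect the bytes around the offending position mentioned above.
print repr(raw[4640:4650])

# Try decoding the whole file with a few common encodings.
for enc in ("ascii", "utf-8", "latin-1"):
    try:
        raw.decode(enc)
        print enc, "decodes cleanly"
    except UnicodeDecodeError as e:
        print enc, "fails:", e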

Your file is encoded with "latin-1".

from nltk.tokenize import word_tokenize
import codecs

# Read the file through the latin-1 codec, then tokenize the decoded text.
with codecs.open("positive.txt", "r", "latin-1") as inputfile:
    text = inputfile.read()

short_pos_words = word_tokenize(text)
print short_pos_words
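As a side note, here is a sketch of an alternative that avoids the codecs module: io.open (available in both Python 2.7 and 3) also takes an encoding argument, assuming the same latin-1 encoding applies:

import io
from nltk.tokenize import word_tokenize

# io.open decodes the file on read, just like codecs.open.
with io.open("positive.txt", "r", encoding="latin-1") as inputfile:
    text = inputfile.read()

short_pos_words = word_tokenize(text)
print len(short_pos_words)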
