
Tokenizing text with scikit-learn

I have the following code, which extracts features from a set of files (the folder names are the category names) for text classification.

import sklearn.datasets
from sklearn.feature_extraction.text import TfidfVectorizer

train = sklearn.datasets.load_files('./train', description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0)
print len(train.data)
print train.target_names

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)

It throws the following stack trace:

Traceback (most recent call last):
  File "C:\EclipseWorkspace\TextClassifier\main.py", line 16, in <module>
    X_train = vectorizer.fit_transform(train.data)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 1285, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 804, in fit_transform
    self.fixed_vocabulary_)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 739, in _count_vocab
    for feature in analyze(doc):
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 236, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 113, in decode
    doc = doc.decode(self.encoding, self.decode_error)
  File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 32054: invalid start byte

I'm running Python 2.7. How can I get this to work?

EDIT: I've just found out that this works fine with UTF-8 encoded files (my files are ANSI encoded). Is there any way to make sklearn.datasets.load_files() work with ANSI encoding?

Only the ASCII range of an ANSI code page is also valid UTF-8, so plain-ASCII files would decode without trouble. However, judging from the stack trace, your input contains the byte 0xFF somewhere, which is never a valid start byte in UTF-8, so the default strict decoding fails.
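
load_files() and TfidfVectorizer both accept encoding and decode_error parameters, so one option is to tell them the actual code page instead of relying on the default strict UTF-8 decode. Here is a minimal sketch, assuming the "ANSI" files were saved in the Windows code page cp1252 (the exact code page is an assumption; check how your editor saves files):

import sklearn.datasets
from sklearn.feature_extraction.text import TfidfVectorizer

# Assumption: the "ANSI" files are Windows cp1252; adjust if yours differ.
train = sklearn.datasets.load_files('./train', encoding='cp1252', decode_error='replace')

# The documents are already decoded to unicode here, so the vectorizer
# has nothing left to decode and the 0xFF byte no longer causes an error.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)

Equivalently, you can keep load_files(..., encoding=None) and pass encoding='cp1252' to TfidfVectorizer instead; in either case the 0xFF byte decodes to a character rather than raising a UnicodeDecodeError.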

I solved the problem by changing the error setting from 'strict' to 'ignore':

from sklearn.feature_extraction.text import CountVectorizer

# decode_error='ignore' makes the vectorizer skip bytes it cannot decode as UTF-8
vectorizer = CountVectorizer(binary=True, decode_error='ignore')
word_tokenizer = vectorizer.build_tokenizer()
# doc_str_list_train is the list of raw training documents (byte strings)
doc_terms_list_train = [word_tokenizer(doc_str.decode('utf-8', 'ignore')) for doc_str in doc_str_list_train]
doc_train_vec = vectorizer.fit_transform(doc_str_list_train)

The scikit-learn documentation gives a detailed explanation of the CountVectorizer function.
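
The same idea applies to the TfidfVectorizer pipeline from the question; the snippet below is a sketch rather than code from either answer, and it trades accuracy for robustness, because 'ignore' simply drops anything that cannot be decoded as UTF-8:

from sklearn.feature_extraction.text import TfidfVectorizer

# Keep load_files(..., encoding=None) from the question and let the
# vectorizer skip undecodable bytes instead of raising UnicodeDecodeError.
vectorizer = TfidfVectorizer(decode_error='ignore')
X_train = vectorizer.fit_transform(train.data)

If the non-ASCII characters matter for classification, passing the correct encoding (as in the earlier sketch) preserves them, whereas 'ignore' silently discards them.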
