
How to perform text classification with naive bayes using sklearn library?

I am trying text classification using a naive bayes text classifier. My data is in the format below, and based on the question and excerpt I have to decide the topic of the question. The training data has more than 20K records. I know SVM would be a better option here, but I want to go with Naive Bayes using the sklearn library.

{[{"topic":"electronics","question":"What is the effective differencial effective of this circuit","excerpt":"I'm trying to work out, in general terms, the effective capacitance of this circuit (see diagram: http://i.stack.imgur.com/BS85b.png).  \n\nWhat is the effective capacitance of this circuit and will the ...\r\n        "},
{"topic":"electronics","question":"Outlet Installation--more wires than my new outlet can use [on hold]","excerpt":"I am replacing a wall outlet with a Cooper Wiring USB outlet (TR7745).  The new outlet has 3 wires coming out of it--a black, a white, and a green.  Each one needs to be attached with a wire nut to ...\r\n        "}]}

This is what I have tried so far:

import numpy as np
import json
from sklearn.naive_bayes import *

topic = []
question = []
excerpt = []

with open('training.json') as f:
    for line in f:
        data = json.loads(line)
        topic.append(data["topic"])
        question.append(data["question"])
        excerpt.append(data["excerpt"])

unique_topics = list(set(topic))
new_topic = [x.encode('UTF8') for x in topic]
numeric_topics = [name.replace('gis', '1').replace('security', '2').replace('photo', '3').replace('mathematica', '4').replace('unix', '5').replace('wordpress', '6').replace('scifi', '7').replace('electronics', '8').replace('android', '9').replace('apple', '10') for name in new_topic]
numeric_topics = [float(i) for i in numeric_topics]

x1 = np.array(question)
x2 = np.array(excerpt)
X = zip(*[x1,x2])
Y = np.array(numeric_topics)
print X[0]
clf = BernoulliNB()
clf.fit(X, Y)
print "Prediction:", clf.predict( ['hello'] )

But as expected I am getting ValueError: could not convert string to float. My question is: how can I create a simple classifier to classify the question and excerpt into the related topic?

All classifiers in sklearn require input to be represented as vectors of some fixed dimensionality. For text there are CountVectorizer, HashingVectorizer and TfidfVectorizer, which can transform your strings into vectors of floating-point numbers.

from sklearn.feature_extraction.text import TfidfVectorizer

vect = TfidfVectorizer()
X = vect.fit_transform(X)

Obviously, you'll need to vectorize your test set in the same way:

clf.predict( vect.transform(['hello']) )

See the sklearn tutorial on working with text data.
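Putting the pieces together, here is a minimal end-to-end sketch of the approach described above. The sample records are inlined stand-ins for the asker's training.json (so the snippet is self-contained), question and excerpt are simply concatenated into one string, LabelEncoder replaces the chained str.replace calls for turning topic names into numbers, and MultinomialNB is used as a common choice for tf-idf counts in place of BernoulliNB; none of these substitutions are the only valid option.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import LabelEncoder

# Hypothetical records in the same shape as the training file,
# inlined here instead of reading training.json line by line.
records = [
    {"topic": "electronics",
     "question": "What is the effective capacitance of this circuit",
     "excerpt": "I'm trying to work out the effective capacitance of this circuit"},
    {"topic": "electronics",
     "question": "Outlet installation with more wires than the outlet can use",
     "excerpt": "I am replacing a wall outlet with a USB outlet"},
    {"topic": "gis",
     "question": "How to reproject a shapefile",
     "excerpt": "I have a shapefile in WGS84 and need it in a local projection"},
]

# Concatenate question and excerpt into a single text per record
texts = [r["question"] + " " + r["excerpt"] for r in records]

# Encode topic names as integer class labels
le = LabelEncoder()
y = le.fit_transform([r["topic"] for r in records])

# Turn the raw strings into tf-idf feature vectors
vect = TfidfVectorizer()
X = vect.fit_transform(texts)

clf = MultinomialNB()
clf.fit(X, y)

# New text must go through the SAME fitted vectorizer
pred = clf.predict(vect.transform(["replacing a wall outlet with wires"]))
print(le.inverse_transform(pred)[0])
```

The key point the ValueError was hiding: the classifier never sees strings, only the fixed-width numeric matrix produced by the vectorizer, and the test-time text must be transformed with the vectorizer fitted on the training set.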
