K-Fold Cross Validation for Naive Bayes

I am trying to run k-fold cross-validation on my naive Bayes classifier using sklearn:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# csv_io and myEvaluationFunc are helpers defined elsewhere
train = csv_io.read_data("../Data/train.csv")
target = np.array([x[0] for x in train])
train = np.array([x[1:] for x in train])

# In this case we'll use a random forest, but this could be any classifier
cfr = RandomForestClassifier(n_estimators=100)

# Simple K-Fold cross validation. 10 folds.
cv = KFold(n_splits=10)

# Iterate through the training and test cross-validation segments and
# run the classifier on each one, aggregating the results into a list
results = []
for traincv, testcv in cv.split(train):
    probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
    results.append(myEvaluationFunc(target[testcv], [x[1] for x in probas]))

# Print out the mean of the cross-validated results
print("Results: " + str(np.array(results).mean()))

I found the code at https://www.kaggle.com/wiki/GettingStartedWithPythonForDataScience/history/969. In that example the classifier is a RandomForestClassifier, but I want to use my own naive Bayes classifier, and I am not quite sure what the fit method does on the line probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv]).

It seems you only need to change cfr, for example:

import sklearn.naive_bayes
cfr = sklearn.naive_bayes.GaussianNB()

It should work the same way. The chained call works because fit returns the estimator itself, so predict_proba runs on the model that was just trained on that fold.
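For completeness, here is a minimal self-contained sketch of the same loop with GaussianNB swapped in, written against the modern sklearn.model_selection API. The data is synthetic, generated with make_classification purely as a stand-in for the train.csv loading above, and the per-fold score (mean probability assigned to the true class) is a hypothetical stand-in for myEvaluationFunc:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the train.csv data loaded in the question
train, target = make_classification(n_samples=500, n_features=10, random_state=0)

cfr = GaussianNB()

# Same manual 10-fold loop, just with a different classifier.
# fit() estimates per-class feature means and variances from the training fold;
# predict_proba() then returns class probabilities for the held-out fold.
cv = KFold(n_splits=10)
results = []
for traincv, testcv in cv.split(train):
    probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
    # Mean probability assigned to the true class (stand-in for myEvaluationFunc)
    results.append(probas[np.arange(len(testcv)), target[testcv]].mean())

print("Manual CV mean: " + str(np.array(results).mean()))

# If a built-in metric (here accuracy) is enough, cross_val_score does this in one call
print("cross_val_score mean: " + str(cross_val_score(cfr, train, target, cv=10).mean()))

The manual loop is only needed when you want a custom evaluation function like myEvaluationFunc; otherwise cross_val_score handles the splitting, fitting, and scoring for you.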
