How to iterate through different scikit-learn classifiers
I am running a bunch of models with scikit-learn for a classification problem.
How do I iterate through different scikit-learn models?
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier

classifiers_name = ['AdaBoostClassifier',
                    'BernoulliNB',
                    'DummyClassifier']

def fitting_classifier(clf, X_train, y_train):
    return clf.fit(X_train, y_train)

for clf_n in classifiers_name:
    locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)
I seem to be getting an error in this part of the code: fitting_classifier(locals()[clf_n + str(())], X_train, y_train). The error shown is:
<ipython-input-31-cccf30ff4392> in summary_scores(file_path, image_format, scores)
140 for clf_sn in classifiers_name:
--> 141 locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)
142
143 # results_AdaBoostClassifier = fitting_classifier(AdaBoostClassifier(), X_train, y_train)
KeyError: 'AdaBoostClassifier()'
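For clarity, the loop is meant to be equivalent to writing this out by hand for each classifier (a sketch, assuming X_train and y_train are already defined as above):

# what the loop above is meant to produce, written out manually
results_AdaBoostClassifier = fitting_classifier(AdaBoostClassifier(), X_train, y_train)
results_BernoulliNB = fitting_classifier(BernoulliNB(), X_train, y_train)
results_DummyClassifier = fitting_classifier(DummyClassifier(), X_train, y_train)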
Any help with this would be greatly appreciated. Thank you.
You did not mention the purpose of doing this. Why do you want to iterate over different scikit-learn models?
If you want to find out which of the models above fits and performs best, you can use something like this:
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
import pandas as pd

# -------- Cross validate models with stratified k-fold cross validation ---------------
kfold = StratifiedKFold(n_splits=10)

# Modeling step: test different algorithms (instances, not name strings)
classifiers = [AdaBoostClassifier(),
               BernoulliNB(),
               DummyClassifier()]

results = []
for model in classifiers:
    results.append(cross_val_score(model, X_train, y=y_train, scoring="accuracy", cv=kfold, n_jobs=4))

cv_means = []
cv_std = []
for cv_result in results:
    cv_means.append(cv_result.mean())
    cv_std.append(cv_result.std())

cv_res = pd.DataFrame({"CrossValMeans": cv_means,
                       "CrossValerrors": cv_std,
                       "Algorithm": ["AdaBoostClassifier", "BernoulliNB", "DummyClassifier"]})
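You could then, for example, sort cv_res to see which algorithm scored best:

# highest mean cross-validation accuracy first
print(cv_res.sort_values("CrossValMeans", ascending=False))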
If you are trying to ensemble these models, train them separately, find the best estimator for each model with a hyperparameter search, and then use a VotingClassifier like this:
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV

DTC = DecisionTreeClassifier()
ADB = AdaBoostClassifier(DTC)
ada_param_grid = {}  # AdaBoost params to search go here
gsABC = GridSearchCV(ADB, param_grid=ada_param_grid, cv=kfold, scoring="accuracy", n_jobs=4, verbose=1)
gsABC.fit(X_train, y_train)  # fit the grid search before reading best_estimator_
AdaBoost_best = gsABC.best_estimator_

# Likewise you can do the same for the others (BernoulliNB_best, DummyClassifier_best) and then perform voting
votingC = VotingClassifier(estimators=[('ada', AdaBoost_best), ('nb', BernoulliNB_best),
                                       ('dc', DummyClassifier_best)], voting='soft', n_jobs=4)
votingC = votingC.fit(X_train, y_train)
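Once fitted, the voting ensemble can be used like any other classifier; for example, assuming a held-out X_test split (not shown in the question):

# predict with the soft-voting ensemble
y_pred = votingC.predict(X_test)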