
Cross Validation for Different Metrics - Sklearn

When I do cross validation with Python's Sklearn and take the scores of different metrics (accuracy, precision, etc.) like this:

result_accuracy = cross_val_score(classifier, X_train, y_train, scoring='accuracy', cv=10)
result_precision = cross_val_score(classifier, X_train, y_train, scoring='precision', cv=10)
result_recall = cross_val_score(classifier, X_train, y_train, scoring='recall', cv=10)
result_f1 = cross_val_score(classifier, X_train, y_train, scoring='f1', cv=10)

Does every execution of the cross_val_score() function with a different metric use the same 10 folds of the training data, or not? If not, do I need to construct the 10 folds explicitly first using KFold, like this:

seed = 7
kf = KFold(n_splits=10, shuffle=True, random_state=seed)  # shuffle=True is required for random_state to have an effect

result_accuracy = cross_val_score(classifier, X_train, y_train, scoring='accuracy', cv=kf)
result_precision = cross_val_score(classifier, X_train, y_train, scoring='precision', cv=kf)
result_recall = cross_val_score(classifier, X_train, y_train, scoring='recall', cv=kf)
result_f1 = cross_val_score(classifier, X_train, y_train, scoring='f1', cv=kf)
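
One way to check this concretely (a minimal sketch, assuming the X_train and y_train from above) is to compare the fold indices produced by two independently created, unshuffled splitters:

import numpy as np
from sklearn.model_selection import StratifiedKFold

# With the default shuffle=False, StratifiedKFold (which is what an integer cv
# uses internally for a classifier) is deterministic, so two independent
# splitters should yield identical test-fold indices.
folds_a = [test for _, test in StratifiedKFold(n_splits=10).split(X_train, y_train)]
folds_b = [test for _, test in StratifiedKFold(n_splits=10).split(X_train, y_train)]

print(all(np.array_equal(a, b) for a, b in zip(folds_a, folds_b)))  # prints True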

cross_val_score() does not accept a random_state argument, so you cannot fix the split there. With an integer cv (cv=10) and a classifier, scikit-learn internally uses StratifiedKFold without shuffling, which is deterministic, so your four calls already score on the same 10 folds. If you want the splits to be explicit and reproducible (for example when shuffling), pass a splitter with a fixed random_state as cv:

kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
result_accuracy = cross_val_score(classifier, X_train, y_train, scoring='accuracy', cv=kf)

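If the goal is simply to get several metrics on the same folds, cross_validate can score them all in one pass. A minimal sketch follows; the dataset and classifier are placeholders standing in for the X_train / y_train / classifier from the question:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

# Placeholder data and model; substitute your own X_train, y_train, classifier.
X_train, y_train = load_breast_cancer(return_X_y=True)
classifier = DecisionTreeClassifier(random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
results = cross_validate(
    classifier, X_train, y_train,
    scoring=['accuracy', 'precision', 'recall', 'f1'],
    cv=cv,
)

print(results['test_accuracy'])  # ten scores, one per fold
print(results['test_f1'])        # computed on exactly the same ten folds

Each test_<metric> array holds one score per fold, and every metric is evaluated on identical splits because a single cv object drives the whole run.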