
cross_val_score and StratifiedKFold give different results

Here is the StratifiedKFold code with a manual loop:

from sklearn.model_selection import StratifiedKFold
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=2020)
for train_idx, val_idx in kfold.split(train, labels):
    x_train, y_train = train[train_idx], labels[train_idx]
    x_val, y_val = train[val_idx], labels[val_idx]

    count_vectorizer = CountVectorizer()
    count_vectorizer.fit(x_train)
    X_train_cv = count_vectorizer.transform(x_train)
    X_val_cv = count_vectorizer.transform(x_val)

    cv_classifier = LogisticRegression(solver='lbfgs', C=25, max_iter=500)
    cv_classifier.fit(X_train_cv, y_train)
    y_pred = cv_classifier.predict(X_val_cv)
    f1 = f1_score(y_val, y_pred, average='macro')
    print(f1)

The results I get are:

0.49
0.46
0.48
0.48
0.50

And here is the cross_val_score code:

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_score

cv_classifier = LogisticRegression(solver='lbfgs', C=25, max_iter=500, class_weight='balanced')
count_vectorizer = CountVectorizer()
count_vectorizer.fit(train)
train_cv = count_vectorizer.transform(train)
print(cross_val_score(cv_classifier, train_cv, labels, cv=StratifiedKFold(5, shuffle=True), scoring='f1_macro'))

The results I get are:

0.70 0.74 0.70 0.734 0.679

EDIT: I added a pipeline:

cv_classifier = LogisticRegression(solver='lbfgs', C=25, max_iter=500, class_weight='balanced')
classifier_pipeline = make_pipeline(CountVectorizer(), cv_classifier)

print(cross_val_score(classifier_pipeline, train, labels, cv=StratifiedKFold(5, shuffle=True), scoring='f1_macro'))

The reason for the better results in the second case is that the train_cv dataset was already fitted and transformed by count_vectorizer on the entire dataset, which leaks information about the validation folds.

In the former case, in each CV fold you fit the vectorizer on the training data and only transform the validation data. This is the correct approach, because the vectorizer never sees the validation data during fitting.
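To illustrate why per-fold fitting matters, here is a minimal sketch (the three-document corpus and the word "zebra" are invented for this example): a vectorizer fitted only on the training fold never admits validation-only words into its vocabulary, whereas one fitted on all the data does.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical mini-corpus: "zebra" occurs only in the validation fold.
x_train = ["the cat sat", "the dog ran"]
x_val = ["the zebra jumped"]

# Correct: fit on the training fold only; words unseen during fitting
# are simply ignored when the validation fold is transformed.
vec = CountVectorizer().fit(x_train)
print("zebra" in vec.vocabulary_)        # False

# Leaky: fitting on all documents lets validation-only words into
# the vocabulary before any split happens.
vec_leaky = CountVectorizer().fit(x_train + x_val)
print("zebra" in vec_leaky.vocabulary_)  # True
```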

To do the same with cross_val_score(), you should create a pipeline containing both the vectorizer and the logistic regression model. You then pass this pipeline to cross_val_score(), and the data should be the initial train dataset (not the train_cv dataset).

You also need to set the random seed here, cv=StratifiedKFold(5, shuffle=True), or pass the same kfold object to cross_val_score.
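A quick sketch of the seeding point (the toy arrays here are made up for the demonstration): two shuffled StratifiedKFold instances only produce identical folds when they share a random_state, so without one your loop and cross_val_score would be scoring different splits.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((20, 1))          # feature values are irrelevant to the split
y = np.array([0, 1] * 10)      # balanced binary labels

def val_folds(cv):
    # Collect each fold's validation indices as comparable tuples.
    return [tuple(val) for _, val in cv.split(X, y)]

# With a fixed seed, the splits are reproducible across instances ...
a = val_folds(StratifiedKFold(5, shuffle=True, random_state=2020))
b = val_folds(StratifiedKFold(5, shuffle=True, random_state=2020))
print(a == b)  # True

# ... without one, two shuffled splitters will generally disagree.
```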

I created a toy example of your workflow:

from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.datasets import fetch_20newsgroups
from sklearn.metrics import f1_score
import numpy as np

categories = ['alt.atheism', 'talk.religion.misc']
newsgroups_train = fetch_20newsgroups(subset='train',
                                      categories=categories)

from sklearn.linear_model import LogisticRegression

train, labels = np.array(newsgroups_train.data), newsgroups_train.target


kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=2020)
for train_idx, val_idx in kfold.split(train, labels):
    x_train, y_train = train[train_idx], labels[train_idx]
    x_val, y_val = train[val_idx], labels[val_idx]

    count_vectorizer = CountVectorizer()
    count_vectorizer.fit(x_train)
    X_train_cv = count_vectorizer.transform(x_train)
    X_val_cv = count_vectorizer.transform(x_val)

    cv_classifier = LogisticRegression(solver='lbfgs', C=25, max_iter=500)
    cv_classifier.fit(X_train_cv, y_train)
    y_pred = cv_classifier.predict(X_val_cv)
    f1 = f1_score(y_val, y_pred, average='macro')
    print(f1)

cv_classifier = LogisticRegression(solver='lbfgs', C=25, max_iter=500, class_weight='balanced')
classifier_pipeline = make_pipeline(CountVectorizer(), cv_classifier)
print(cross_val_score(classifier_pipeline, train, labels, cv=kfold, scoring='f1_macro'))

Output:

0.9059466848940533
0.9527147766323024
0.9174937965260546
0.9336297237218165
0.9526315789473685
[0.90594668 0.95271478 0.9174938  0.93362972 0.95263158]
