What is the difference between the default CV in GridSearchCV and KFold?
What is the correct way to calculate performance metrics when using KFold CV or StratifiedKFold CV?
This is my first time building a Keras deep learning model after reading some tutorials, as I am a beginner in machine learning and deep learning. Most tutorials use a train-test split to train and test the model, but I chose to use StratifiedKFold CV instead. The code is below.
X = dataset[:,0:80].astype(float)
Y = dataset[:,80]
kfold = StratifiedKFold(n_splits=10, random_state=seed)
for train, test in kfold.split(X, Y):
    # create model
    model = Sequential()
    model.add(Dense())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])
    model.fit(X[train], Y[train], epochs=100, batch_size=128, verbose=0)
    scores = model.evaluate(X[test], Y[test], verbose=1)
    print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (numpy.mean(cvscores), numpy.std(cvscores)))

Y[pred] = model.predict(X[test])
acc = accuracy_score(Y[test], Y[pred])
confusion = confusion_matrix(Y[test], Y[pred])
print(confusion)
plot_confusion_matrix(confusion, classes=['No','Yes'], title='Confusion Matrix')

TP = confusion[1,1]
TN = confusion[0,0]
FP = confusion[0,1]
FN = confusion[1,0]
print('Accuracy: ')
print((TP + TN) / float(TP + TN + FP + FN))
print(accuracy_score(Y[test], Y[pred]))

fpr, tpr, thresholds = roc_curve(Y[test], y_pred_prob)
plt.plot(fpr, tpr)
print(roc_auc_score(y_test, y_pred_prob))

y_pred_class = binarize([y_pred_prob], 0.3)[0]
confusion_new = confusion_matrix(Y[test], y_pred_class)
print(confusion_new)
I understand the theoretical concepts behind KFold CV and StratifiedKFold CV. I have come across What does KFold in python exactly do?, KFolds cross validation vs train_test_split, and several more links. But when I calculate the performance metrics, I get the following errors.
NameError: name 'pred' is not defined
NameError: name 'y_pred_prob' is not defined
NameError: name 'roc_curve' is not defined
What am I doing wrong here? Why am I getting these errors, and how do I fix them?
Thanks.
You can try the following approach:
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, confusion_matrix

X = dataset[:,0:80].astype(float)
Y = dataset[:,80]

# create the folds
folds = list(StratifiedKFold(n_splits=10, shuffle=True, random_state=1).split(X, Y))

# train a model for every fold
for j, (train_idx, val_idx) in enumerate(folds):
    print('\nFold ', j)
    X_train_cv = X[train_idx]
    y_train_cv = Y[train_idx]
    X_valid_cv = X[val_idx]
    y_valid_cv = Y[val_idx]

    # define the model inside the loop so each fold starts from
    # fresh weights; reusing one model would leak training across folds
    model = Sequential()
    model.add(Dense(10))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy'])

    model.fit(X_train_cv,
              y_train_cv,
              epochs=100,
              batch_size=128,
              validation_data=(X_valid_cv, y_valid_cv),
              verbose=0)
    print(model.evaluate(X_valid_cv, y_valid_cv))

    # check metrics for each fold; predict() returns probabilities,
    # so threshold them before computing label-based metrics
    y_pred_prob = model.predict(X_valid_cv)
    pred = (y_pred_prob > 0.5).astype(int)
    acc = accuracy_score(y_valid_cv, pred)
    confusion = confusion_matrix(y_valid_cv, pred)
    print(confusion)
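The NameErrors in the question come from using names (pred, y_pred_prob) that were never assigned, and from not importing roc_curve. The pattern is: keep the predicted probabilities for ROC/AUC, and threshold them into class labels for accuracy and the confusion matrix. Here is a self-contained sketch of that per-fold metric handling; it uses a scikit-learn LogisticRegression on synthetic data in place of the Keras model and the original dataset, so only the metric logic carries over:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score, accuracy_score

# synthetic stand-in for `dataset`
X, Y = make_classification(n_samples=500, n_features=20, random_state=1)

aucs, accs = [], []
for train_idx, val_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                          random_state=1).split(X, Y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], Y[train_idx])

    # probabilities of the positive class, kept for ROC/AUC
    y_pred_prob = clf.predict_proba(X[val_idx])[:, 1]
    fpr, tpr, thresholds = roc_curve(Y[val_idx], y_pred_prob)
    aucs.append(roc_auc_score(Y[val_idx], y_pred_prob))

    # thresholded labels for accuracy; a custom cutoff (like the 0.3
    # in the question) just changes the comparison value
    y_pred_class = (y_pred_prob > 0.5).astype(int)
    accs.append(accuracy_score(Y[val_idx], y_pred_class))

print("AUC: %.2f (+/- %.2f)" % (np.mean(aucs), np.std(aucs)))
print("Accuracy: %.2f (+/- %.2f)" % (np.mean(accs), np.std(accs)))
```

Reporting the mean and standard deviation across folds, as the original print statement does for accuracy, is the usual way to summarize CV performance.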