Why am I getting the same values for test accuracy and balanced accuracy for all 10 folds?
The test accuracy and balanced test accuracy vary from fold to fold, but within each fold the two values are identical. For example, for fold 1, the test accuracy is 86 and the balanced test accuracy is 86. For fold 2, the test accuracy is 90 and the balanced test accuracy is 90. For fold 3, the test accuracy is 70.555 and the balanced test accuracy is 70.555... Here is my code:
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv3D, Flatten, Dense, Dropout
from sklearn.metrics import classification_report, balanced_accuracy_score

fold_no = 1
reports = []
accuracies = []
sensitivities = []
specificities = []
test_accuracy = []

for train, test in kfold.split(X_train, y_train):
    model = Sequential()
    model.add(Conv3D(128, kernel_size=(3, 3, 3)))
    model.add(Flatten())
    model.add(Dense(256, activation='relu', kernel_regularizer='l2'))
    model.add(Dense(4096, activation='relu', kernel_regularizer='l2'))
    model.add(Dropout(0.3))
    model.add(Dense(1, activation='sigmoid', kernel_regularizer='l2'))

    # Compile the model
    model.compile(loss=tensorflow.keras.losses.mean_squared_error,
                  optimizer=tensorflow.keras.optimizers.Adam(learning_rate=learning_rate),
                  metrics=['accuracy'])

    # Train on this fold's training split, validate on its held-out split
    history = model.fit(X_train[train], y_train[train],
                        batch_size=batch_size,
                        epochs=no_epochs,
                        verbose=verbosity,
                        validation_data=(X_train[test], y_train[test]))

    # Compute the classification report for the testing set
    y_pred = model.predict(X_test, verbose=0)
    c = model.evaluate(X_test, y_test)
    test_accuracy.append(c[1])
    report = classification_report(y_test, (y_pred > 0.5), output_dict=True)

    bal_acc = balanced_accuracy_score(y_test, (y_pred > 0.5))
    print("balanced acc is " + str(bal_acc))

    # Extract the sensitivity and specificity values from the report
    sensitivity = report["1"]["recall"]
    specificity = report["0"]["recall"]
    sensitivities.append(sensitivity)
    specificities.append(specificity)
    print(specificity)
    print(sensitivity)
When the classes are balanced to begin with, balanced accuracy and accuracy are the same. Balanced accuracy is the average of the per-class recalls (for a binary problem, the mean of sensitivity and specificity), and when the positive and negative classes have equal counts that average works out to ordinary accuracy:
from sklearn.metrics import accuracy_score, balanced_accuracy_score
y_true = [0, 0, 0, 0, 1, 1, 1, 1] # 4 negatives, 4 positives
y_pred = [0, 0, 1, 0, 1, 0, 1, 1]
print(accuracy_score(y_true, y_pred), balanced_accuracy_score(y_true, y_pred))
# 0.75 0.75
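For contrast, here is a minimal sketch with made-up labels (not taken from the question's data) showing that the two scores diverge once the classes are imbalanced, plus a way to inspect the class counts of your own y_test:

from collections import Counter
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Imbalanced case: 8 negatives, 2 positives (illustrative labels only)
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # one positive missed

print(accuracy_score(y_true, y_pred))           # 0.9  -> dominated by the majority class
print(balanced_accuracy_score(y_true, y_pred))  # 0.75 -> mean of per-class recalls (1.0 and 0.5)

# Check whether this applies to your data by counting the labels in y_test:
# Counter(y_test)  -> e.g. Counter({0: 50, 1: 50}) means the test set is balanced

If Counter(y_test) shows roughly equal counts per class, the identical scores you see per fold are expected rather than a bug.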