import numpy as np
import pandas as pd

def inspection_performance(predicted_fraud, test_fraud):
    Inspect_Rate = []
    Precision = []
    Recall = []
    for i in range(1, 100):
        threshold = np.percentile(predicted_fraud, i)
        precision = np.mean(test_fraud[predicted_fraud > threshold])
        recall = sum(test_fraud[predicted_fraud > threshold]) / sum(test_fraud)
        Inspect_Rate.append(100 - i)
        Precision.append(precision)
        Recall.append(recall)
    compiled_conf_matrix = pd.DataFrame({
        'Inspection Rate': Inspect_Rate,
        'Precision': Precision,
        'Recall': Recall,
    })
    return compiled_conf_matrix
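For context, here is a minimal way to exercise a function like this on synthetic data (the function body is repeated so the snippet runs standalone; the scores and labels are made up for illustration):

```python
import numpy as np
import pandas as pd

# Repeated from above so this sketch is self-contained
def inspection_performance(predicted_fraud, test_fraud):
    inspect_rate, precision_list, recall_list = [], [], []
    for i in range(1, 100):
        threshold = np.percentile(predicted_fraud, i)
        flagged = test_fraud[predicted_fraud > threshold]
        precision_list.append(np.mean(flagged))
        recall_list.append(flagged.sum() / test_fraud.sum())
        inspect_rate.append(100 - i)
    return pd.DataFrame({'Inspection Rate': inspect_rate,
                         'Precision': precision_list,
                         'Recall': recall_list})

# Hypothetical inputs: continuous fraud scores and 0/1 ground-truth labels
rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = (rng.random(1000) < 0.1).astype(int)

table = inspection_performance(scores, labels)
print(table.head())
```

Note that the function expects continuous scores, not hard 0/1 predictions, which is why the usual confusion-matrix helpers do not apply to it directly.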
I cannot print the confusion matrix results. I also want to get the precision, recall, and F1-score, but I can't. The code above is my starting point. How can I get these results by writing code similar to the example below?
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
As an example, this is the kind of result I want to get at the end.
print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.96      0.68      0.80     37117
           1       0.14      0.67      0.23      2883

    accuracy                           0.68     40000
   macro avg       0.55      0.68      0.52     40000
weighted avg       0.90      0.68      0.76     40000
I have to write something else instead of y_test and y_pred, but I don't know what.
For reference, the code for the ROC-AUC curve is as follows. What should go in place of (y_true, y_pred) here?
from matplotlib import pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

fpr, tpr, _ = roc_curve(test_labels.drop(np.where(np.isnan(y_pred))[0]),
                        np.delete(y_pred, np.where(np.isnan(y_pred))[0]))
plt.plot(fpr, tpr, label='ROC curve')
plt.plot([0, 1], [0, 1], 'k--', label='Random guess')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.show()
print('auc: ', auc(fpr, tpr))
print(confusion_matrix(?, ?))
print(classification_report(?, ?))
What should go where the question marks are?
Your question is vague. Do you want the confusion matrix or the classification report?
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_true, y_pred))
where,
y_true: ground_truth labels
y_pred: predicted labels
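As a quick illustration of that signature, here is a toy run of confusion_matrix on made-up 0/1 labels (0 = no fraud, 1 = fraud; the values are invented for the example):

```python
from sklearn.metrics import confusion_matrix

# Made-up ground truth and predictions, both already binary
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# → [[2 1]
#    [1 2]]   rows = true class, columns = predicted class
```

Both arguments must contain discrete class labels; continuous scores have to be thresholded first.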
Now, in your case the function has two parameters: predicted_fraud and test_fraud. Is test_fraud your ground truth? There must be two labels, i.e. either fraud or no fraud (1, 0). If yes, then:
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_fraud.classes, predicted_fraud))
from sklearn.metrics import classification_report
target_names = ['No fraud', 'Fraud']  # order must match the sorted labels: 0, 1
print(classification_report(test_fraud.classes, predicted_fraud, target_names=target_names))
The classification report gives you the main classification metrics for each class (fraud, no fraud), such as precision, recall, F1-score, and accuracy.
Furthermore, there is a github link that helped me too; hope it helps you as well.
I solved the problem. The correct code is as follows.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(test_labels, y_pred.round()))
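The key point is that y_pred holds continuous model outputs (e.g. probabilities), so it must be rounded to 0/1 before computing label-based metrics. A minimal self-contained sketch with hypothetical stand-in values for test_labels and y_pred:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical stand-ins: 0/1 ground truth and continuous model scores
test_labels = np.array([0, 0, 1, 1, 0, 1])
y_pred = np.array([0.2, 0.7, 0.9, 0.4, 0.1, 0.8])

# .round() turns the scores into hard 0/1 predictions
cm = confusion_matrix(test_labels, y_pred.round())
print(cm)
print(classification_report(test_labels, y_pred.round()))
```

Note that numpy rounds values exactly at 0.5 to the nearest even integer; if you want a different cutoff, compare against an explicit threshold instead, e.g. (y_pred > 0.5).astype(int).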