
Calculate Metrics in Keras model Deep Learning

Hope you are doing well. I have a model that predicts human postures (bend, lie, sit, stand), so I have a multi-label classification problem. I would like to calculate metrics (accuracy, F-measure, recall, precision) for my model. My question: is the following code correct for calculating those metrics in multi-label classification?

from keras import backend as K

def check_units(y_true, y_pred):
    # For 2-column (one-hot binary) outputs, keep only the positive-class column
    if y_pred.shape[1] != 1:
        y_pred = y_pred[:, 1:2]
        y_true = y_true[:, 1:2]
    return y_true, y_pred

def precision(y_true, y_pred):
    y_true, y_pred = check_units(y_true, y_pred)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def recall(y_true, y_pred):
    y_true, y_pred = check_units(y_true, y_pred)
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())

def fmeasure(y_true, y_pred):
    # Harmonic mean of the precision and recall defined above
    precision_value = precision(y_true, y_pred)
    recall_value = recall(y_true, y_pred)
    return 2 * ((precision_value * recall_value) / (precision_value + recall_value + K.epsilon()))
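As a quick sanity check of what these backend expressions compute, the same clip-and-round formulas can be evaluated with plain NumPy on a tiny binary example (the label arrays below are invented for illustration; `eps` stands in for `K.epsilon()`):

```python
import numpy as np

# Hypothetical binary ground truth and already-rounded predictions
y_true = np.array([1, 0, 1, 1, 0, 1], dtype=float)
y_pred = np.array([1, 1, 1, 0, 0, 1], dtype=float)

eps = 1e-7  # stands in for K.epsilon()

true_positives = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))  # 3
predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))      # 4
possible_positives = np.sum(np.round(np.clip(y_true, 0, 1)))       # 4

precision = true_positives / (predicted_positives + eps)  # 3/4 = 0.75
recall = true_positives / (possible_positives + eps)      # 3/4 = 0.75
f1 = 2 * precision * recall / (precision + recall + eps)  # 0.75
```

Running the Keras functions on the same tensors should give matching numbers, which is a simple way to test them.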

You can use built-in functions in sklearn as follows:

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

true_labels = [0, 0, 1, 0, 2, 1, 1, 2, 2, 0, 0, 1]      # three classes 0, 1, 2
predicted_labels = [0, 1, 1, 0, 2, 0, 1, 2, 1, 1, 0, 1]  # predicted labels

print("Accuracy=", accuracy_score(true_labels, predicted_labels))
print("F1 Score=", f1_score(true_labels, predicted_labels, average="macro"))
print("Precision=", precision_score(true_labels, predicted_labels, average="macro"))
print("Recall=", recall_score(true_labels, predicted_labels, average="macro"))

You can compare the results of your functions with these built-in functions to test them. Hope this helps.
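One practical note, assuming (as is likely for a posture classifier) that the model emits one softmax score per class: the sklearn functions above expect class indices, not probability rows, so the model's output would first be converted with `argmax`. The probability matrix and labels below are invented for illustration, and `zero_division=0` is passed because one predicted class never occurs in the true labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical softmax outputs for 4 postures: bend, lie, sit, stand
probs = np.array([
    [0.7, 0.1, 0.1, 0.1],   # -> class 0
    [0.1, 0.6, 0.2, 0.1],   # -> class 1
    [0.2, 0.2, 0.5, 0.1],   # -> class 2
    [0.1, 0.1, 0.2, 0.6],   # -> class 3
])
true_labels = [0, 1, 2, 2]

# Convert probability rows to predicted class indices
predicted_labels = np.argmax(probs, axis=1)  # [0, 1, 2, 3]

print("Accuracy=", accuracy_score(true_labels, predicted_labels))  # 0.75
print("F1 Score=", f1_score(true_labels, predicted_labels,
                            average="macro", zero_division=0))
```

The same `argmax` step applies to the one-hot `true_labels` if they come out of a generator in one-hot form.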
