
How to optimize the ratio of (True positive)/(False positive) instead of accuracy?

The classic metric is "accuracy", which is defined as (True positive + True negative) / (True positive + True negative + False positive + False negative).

In my classification problem, a false negative is more tolerable than a false positive. That is, I want to put more weight on improving the ratio (True positive)/(False positive). How can I accomplish this? My current setup is:

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

TensorFlow lets you shift the sensitivity/specificity trade-off with metrics such as https://www.tensorflow.org/api_docs/python/tf/keras/metrics/SensitivityAtSpecificity , or, if you want the false positives directly, https://www.tensorflow.org/api_docs/python/tf/keras/metrics/FalsePositives (which, as far as I know, only gives you the count of false positives, if that helps). I do not know much about TensorFlow, but I hope this helps.
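
As a minimal sketch, here is how those two metrics could be passed to model.compile(). It assumes a binary classifier with a single sigmoid output (the built-in SensitivityAtSpecificity and FalsePositives metrics expect binary labels and probability scores, unlike the sparse-categorical/logits setup in the question); the model architecture and the 0.9 specificity target are placeholders.

import tensorflow as tf

# Placeholder binary classifier; layer sizes and input shape are illustrative only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(),
    metrics=[
        'accuracy',
        # Sensitivity (recall) achieved while specificity is held at >= 0.9.
        tf.keras.metrics.SensitivityAtSpecificity(0.9),
        # Raw count of false positives at the default 0.5 threshold.
        tf.keras.metrics.FalsePositives(),
    ],
)

Note that these are metrics, so they only report the trade-off during training and evaluation; they do not by themselves change the loss being optimized.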
