
Trouble with custom loss function in Keras

I'm having trouble adding a penalty to binary_crossentropy. The idea is to penalize the loss when the mean error of predefined groups breaches a certain threshold. Below is the helper function that takes a mask expressing the groups and the already computed crossentropy. It simply returns the number of times the threshold was breached, and the actual loss function that calls it uses that count as the penalty.

import numpy as np
import keras.backend as K

def penalty(groups_mask, binary_crossentropy):
    errors = binary_crossentropy
    unique_groups = set(groups_mask)
    groups_mask = np.array(groups_mask)
    threshold = ...  # whatever
    c = 0
    for group in unique_groups:
        error_mean = K.mean(errors[(groups_mask == group).nonzero()], axis=-1)
        if error_mean > threshold:  # this is the problematic comparison
            c += 1
    return c

The trouble is that error_mean is not a scalar, and I can't figure out a simple way to compare it to threshold.

You must do everything with tensors, using functions from the Keras backend:

import keras.backend as K

On the line that raises the error, you must compare values using those functions as well:

....
c = K.variable([0])
.....
.....
    errorGreater = K.cast(K.greater(error_mean, threshold), K.floatx())
    c += K.max(errorGreater)  # if error_mean is a single element, c += errorGreater is enough
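Putting the two pieces together, here is a minimal NumPy sketch of the corrected helper. It is an illustration, not the Keras-backend version: np.mean and the boolean cast stand in for K.mean and K.cast(K.greater(...), K.floatx()), and threshold=0.5 is an assumed placeholder value.

```python
import numpy as np

def penalty(groups_mask, errors, threshold=0.5):
    # Sketch of the fixed helper. NumPy ops stand in for Keras
    # backend ones: np.mean ~ K.mean, and casting the comparison
    # result ~ K.cast(K.greater(...), K.floatx()).
    # threshold=0.5 is an assumed placeholder.
    groups_mask = np.asarray(groups_mask)
    errors = np.asarray(errors)
    c = 0.0
    for group in np.unique(groups_mask):
        error_mean = np.mean(errors[groups_mask == group])
        # Cast the comparison to a float instead of branching with
        # `if`, which cannot operate on a symbolic tensor.
        c += (error_mean > threshold).astype(np.float32)
    return float(c)

# Example: group 0 has mean error 0.3 (no breach), group 1 has
# mean error 0.8 (breach), so the penalty is 1.0:
# penalty([0, 0, 1, 1], [0.2, 0.4, 0.9, 0.7])  -> 1.0
```

The cast-and-accumulate pattern is the key point: it replaces the Python `if`/`+= 1` with operations that have direct Keras-backend equivalents, so the same structure carries over to the tensor version.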

