How to work on NumPy arrays during training for a custom loss function

I am currently working on a neural network implemented with TensorFlow and Keras. I need to call a function, which I cannot reimplement, that works on NumPy arrays rather than on Tensors. My first idea was to just call .numpy() like this:

import numpy as np

def loss_gi(y_true, y_pred):  # both are <class 'tensorflow.python.framework.ops.Tensor'>
  x = gamma(np.squeeze(y_true.numpy(), axis=0), np.squeeze(y_pred.numpy(), axis=0))
  return np.nansum(x)

from tensorflow.keras.optimizers import Adam

with strategy.scope():
  model = hd_unet_model(INPUT_SIZE)
  model.compile(optimizer=Adam(learning_rate=0.001),
                loss=loss_gi)

Here gamma returns a volume.
But during model.fit, calling .numpy() on a Tensor raises the error 'Tensor' object has no attribute 'numpy'. This happens because .numpy() only works in eager execution, not in graph execution (at least this is what I understood).
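
This is easy to reproduce outside of training. A minimal sketch (graph_fn is a made-up name): wrapping a function in tf.function, which is essentially what model.fit does with the loss, turns its arguments into symbolic graph Tensors that have no .numpy() method:

import tensorflow as tf

@tf.function  # traced into a graph, like the loss inside model.fit
def graph_fn(t):
  return t.numpy()  # AttributeError: the traced Tensor has no .numpy()

t = tf.constant([1.0, 2.0])
print(t.numpy())  # fine: t is an eager Tensor here
graph_fn(t)       # fails: inside the graph, t is symbolic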

Does anybody know of a way to create a custom loss function that works on numpy arrays?

The loss function has to be written in TF so that it produces gradients. Using a NumPy loss function defeats the whole point of TF as a library of tensors with gradients. So your choices are:

  1. Using pure TF ops for the custom loss function (see the sketch after this list).

  2. If y_true is originally in NumPy: converting y_true to TF (e.g. with tf.convert_to_tensor) before running the model.
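
For option 1, a minimal sketch, assuming gamma can be reimplemented with TF ops (gamma_tf is a hypothetical placeholder for that reimplementation; the NaN-masked sum mirrors np.nansum):

import tensorflow as tf

def loss_gi_tf(y_true, y_pred):
  # gamma_tf is a hypothetical pure-TF reimplementation of gamma
  x = gamma_tf(tf.squeeze(y_true, axis=0), tf.squeeze(y_pred, axis=0))
  # TF equivalent of np.nansum: zero out NaNs before summing
  return tf.reduce_sum(tf.where(tf.math.is_nan(x), tf.zeros_like(x), x))

Because every op here is a TF op, gradients can flow from the loss back to the model weights, which the round-trip through NumPy makes impossible.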
