
How can I implement this custom loss function in Keras?

I'm trying to implement a custom loss function for my neural network, which would look like this if the tensors were NumPy arrays instead:

def custom_loss(y_true, y_pred):
    activated = y_pred[y_true > 1]
    return np.abs(activated.mean() - activated.std()) / activated.std()

The y's have a shape of (batch_size, 1); that is, the model produces a scalar output for each input row.
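For concreteness, this is what the NumPy version computes on a tiny hand-made batch (the example values here are mine, not from the question):

```python
import numpy as np

def custom_loss(y_true, y_pred):
    # Select predictions on rows where the target exceeds 1.
    activated = y_pred[y_true > 1]
    return np.abs(activated.mean() - activated.std()) / activated.std()

y_true = np.array([[0.5], [1.5], [2.0], [0.2]])
y_pred = np.array([[1.0], [3.0], [5.0], [0.0]])

# Rows 1 and 2 have y_true > 1, so activated = [3.0, 5.0]:
# mean = 4.0, (population) std = 1.0, giving |4.0 - 1.0| / 1.0 = 3.0
print(custom_loss(y_true, y_pred))  # 3.0
```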

Note: this post ( Converting Tensor to np.array using K.eval() in Keras returns InvalidArgumentError ) gave me an initial direction to work from.

Edit:

This is a reproducible setup for which I'm trying to apply the custom loss function:

import numpy as np

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


X = np.random.normal(0, 1, (256, 5))
Y = np.random.normal(0, 1, (256, 1))

model = keras.Sequential([
    layers.Dense(1),
])

model.compile(optimizer='adam', loss=custom_loss)

model.fit(X, Y)

The .fit() on the last line throws AttributeError: 'Tensor' object has no attribute 'mean' if I define custom_loss as stated above in my question.

Have you tried writing it in TensorFlow and run into gradient problems, or are you just asking how to do it in TensorFlow? -- Don't worry, I won't give you a classic toxic SO response. I would try something like this (not tested, but it seems along the right track):

def custom_loss(y_true, y_pred):
    activated = tf.boolean_mask(y_pred, tf.where(y_true>1))
    return tf.math.abs(tf.reduce_mean(activated) - tf.math.reduce_std(activated)) / tf.math.reduce_std(activated)

You may need to play around with dimensions in there, since all of those functions allow for specifying the dimensions to work with.

Also, you will lose the loss function when you save the model, unless you subclass the Keras Loss class. That may be more detail than you are looking for, but if you have problems saving and loading the model, let me know.
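For the save/load issue mentioned above, a common alternative to subclassing is to pass the function to load_model via custom_objects. A minimal sketch (the file name model_with_custom_loss.h5 is just an arbitrary example, and the loss used here is the 1-D boolean-mask variant):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def custom_loss(y_true, y_pred):
    # Flatten the boolean condition to 1D so it can be used as a mask.
    activated = tf.boolean_mask(y_pred, tf.reshape(y_true > 1, [-1]))
    return tf.math.abs(tf.reduce_mean(activated) -
                       tf.math.reduce_std(activated)) / tf.math.reduce_std(activated)

model = keras.Sequential([keras.Input(shape=(5,)), layers.Dense(1)])
model.compile(optimizer='adam', loss=custom_loss)

model.save('model_with_custom_loss.h5')  # example path, HDF5 format

# Without custom_objects, load_model cannot resolve the name
# 'custom_loss' stored in the file and raises an error.
restored = keras.models.load_model(
    'model_with_custom_loss.h5',
    custom_objects={'custom_loss': custom_loss})
```

The restored model is compiled with the same loss, so training can continue from where it left off.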

It's a simple catch. You can use your custom loss as follows:

def custom_loss(y_true, y_pred):
    activated = y_pred[y_true > 1]
    return tf.math.abs(tf.reduce_mean(activated) - 
                       tf.math.reduce_std(activated)) / tf.math.reduce_std(activated)

or, if you want to use tf.boolean_mask(tensor, mask, ..), you need to ensure that the mask is boolean and has shape (None,), i.e. is 1D. The condition y_true > 1 produces a 2D boolean tensor of shape (None, 1), so in your case it needs to be reshaped:

def custom_loss(y_true, y_pred):
    activated = tf.boolean_mask(y_pred, tf.reshape(y_true > 1, [-1]))
    return tf.math.abs(tf.reduce_mean(activated) - 
                       tf.math.reduce_std(activated)) / tf.math.reduce_std(activated)
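As a sanity check, the TensorFlow formulation can be compared against the original NumPy definition on random data. This sketch builds the boolean mask directly from y_true > 1, flattened to 1D:

```python
import numpy as np
import tensorflow as tf

def custom_loss_np(y_true, y_pred):
    activated = y_pred[y_true > 1]
    return np.abs(activated.mean() - activated.std()) / activated.std()

def custom_loss_tf(y_true, y_pred):
    # Boolean mask built from the condition, flattened to shape (None,).
    activated = tf.boolean_mask(y_pred, tf.reshape(y_true > 1, [-1]))
    return tf.math.abs(tf.reduce_mean(activated) -
                       tf.math.reduce_std(activated)) / tf.math.reduce_std(activated)

np.random.seed(0)
y_true = np.random.normal(0, 1, (256, 1))
y_pred = np.random.normal(0, 1, (256, 1)).astype('float32')

np_val = custom_loss_np(y_true, y_pred)
tf_val = float(custom_loss_tf(tf.constant(y_true), tf.constant(y_pred)))
print(np_val, tf_val)  # the two values agree up to float precision
```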
