I am writing a custom loss function that calculates mean squared error while ignoring NaNs. The issue is that my data is an image which occasionally has NaN pixels. I simply want to ignore these NaN pixels, calculate the summed squared error between prediction and data, and then take the mean over examples. If I were to write a function for this in TensorFlow, I would write:
def nanmean_squared_error(y_true, y_pred):
    residuals = y_true - y_pred
    residuals_no_nan = tf.where(tf.is_nan(residuals), tf.zeros_like(residuals), residuals)
    sum_residuals = tf.reduce_sum(residuals_no_nan, [1, 2])
    return sum_residuals
But this code does not work as a custom Keras loss function.
I believe I can use keras.backend.switch, keras.backend.zeros_like, and keras.backend.sum instead of the TensorFlow versions, but I cannot find any replacement for tf.is_nan. Does anyone have a suggestion on how to implement this?
It seems it doesn't work because you are not taking absolute or squared values. If you mean "squared" error, there must be a square in your code; otherwise positive and negative residuals cancel out, the loss has no lower bound, and the optimizer will drive it toward huge negative values.
import tensorflow as tf
from keras import backend as K

def nanmean_squared_error(y_true, y_pred):
    # Square first, so residuals of opposite sign cannot cancel
    residuals = K.square(y_true - y_pred)
    # Zero out the NaN entries, then sum over the spatial axes
    residuals_no_nan = tf.where(tf.is_nan(residuals), tf.zeros_like(residuals), residuals)
    sum_residuals = tf.reduce_sum(residuals_no_nan, [1, 2])
    return sum_residuals
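If you want to stay entirely within keras.backend (which has no is_nan function), you can exploit the fact that NaN is the only float value that is not equal to itself. A sketch under that assumption, using K.equal to build the mask and K.switch for the element-wise selection:

```python
from tensorflow.keras import backend as K

def nanmean_squared_error(y_true, y_pred):
    residuals = K.square(y_true - y_pred)
    # NaN != NaN, so this mask is True exactly at the non-NaN positions
    mask = K.equal(residuals, residuals)
    # Keep valid residuals, replace NaN entries with zero
    residuals_no_nan = K.switch(mask, residuals, K.zeros_like(residuals))
    # Sum over the spatial axes, leaving one value per example
    return K.sum(residuals_no_nan, axis=[1, 2])
```

K.switch falls back to tf.where when the condition has the same rank as the expressions, so this behaves like the tf.where version above.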
But to be honest, I'd probably try to replace the image NaNs with a fixed value before feeding the data to the model. I don't know what kind of problems may appear from having NaNs flowing through the model, considering gradients, all the intermediate layers, etc.
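For that preprocessing route, a minimal NumPy sketch that replaces NaN pixels with a constant before training (the fill value of 0.0 is an arbitrary choice; a dataset mean may suit some problems better):

```python
import numpy as np

def fill_nans(images, fill_value=0.0):
    """Replace NaN pixels with fill_value so the model never sees NaNs."""
    return np.where(np.isnan(images), fill_value, images)
```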