
How do you compile a custom loss function in Keras that concatenates the prediction with information from the input tensor?

My goal is to take the current input "image", which consists of [v, x, y] data, and predict the current u-field. The flow field I have is divergence-free, and I am trying to enforce that with the custom loss function below.
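For reference, the incompressibility condition the loss is meant to enforce, for a 2-D flow field $\mathbf{V} = (u, v)$, is

$$\nabla \cdot \mathbf{V} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,$$

so the loss should penalize any nonzero discrete divergence of the predicted field.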

```python
import numpy as np
import pandas as pd
from keras import backend as K

def div_loss_fn(input_img):
    def loss(y_true, y_pred):
        # Append the predicted u-field to the [v, x, y] input channels
        temp = input_img[0]
        temp = np.concatenate([temp, y_pred], axis=-1)
        # Flatten the grid into rows of [row_index, v, x, y, u]
        m, n, r = temp.shape
        out_arr = np.column_stack((np.repeat(np.arange(m), n), temp.reshape(m * n, -1)))
        out_df = pd.DataFrame(out_arr, columns=['NA', 'v', 'x', 'y', 'u'])
        out_df = out_df.drop(['NA'], axis=1)
        # Backward/forward differences of u along x
        out_df = out_df.sort_values(['y', 'x'])
        out_df['backdudx'] = out_df['u'].diff() / out_df['x'].diff()
        out_df['forwarddudx'] = out_df['u'].diff(periods=-1) / out_df['x'].diff(periods=-1)
        # Backward/forward differences of v along y
        out_df = out_df.sort_values(['x', 'y'])
        out_df['backdvdy'] = out_df['v'].diff() / out_df['y'].diff()
        out_df['forwarddvdy'] = out_df['v'].diff(periods=-1) / out_df['y'].diff(periods=-1)
        out_df = out_df.fillna(0)
        out_df['divergence'] = (out_df['backdudx'] - out_df['forwarddudx']) + (out_df['backdvdy'] - out_df['forwarddvdy'])
        div_loss = np.sum(out_df['divergence'])
        return K.square(div_loss)
    return loss
```

However, when initializing the model I run into the error "zero-dimensional arrays cannot be concatenated", because y_pred has no defined shape yet. How can I get past this error?
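The root cause is that during model compilation y_pred is a symbolic tensor with no concrete data, so the NumPy and pandas calls inside the loss cannot execute; every step must be expressed with backend/TensorFlow ops instead. A minimal sketch of the concatenation step alone (the function name and shapes here are placeholders, not from the original code):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def concat_step(input_img, y_pred):
    # Symbolic equivalent of the np.concatenate call: keep the batch axis
    # and concatenate the predicted u onto the [v, x, y] channels.
    return K.concatenate([input_img, y_pred], axis=-1)

img = tf.zeros((2, 8, 8, 3))   # [v, x, y] channels (placeholder shape)
pred = tf.zeros((2, 8, 8, 1))  # predicted u-field
print(concat_step(img, pred).shape)  # (2, 8, 8, 4)
```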

I had to rewrite the loss function using Keras backend functions, as @Dr.Snoopy had suggested. The following worked:

```python
import tensorflow as tf
from keras import backend as K

def forward_div_loss_fn(input_img):
    def loss(y_true, y_pred):
        # Spatial dimensions are static; the batch size is None at build time
        _, width, height, channels = input_img.get_shape().as_list()

        u_field = y_pred
        v_field = input_img[:, :, :, 0]
        x_field = input_img[:, :, :, 1]
        y_field = input_img[:, :, :, 2]

        # Drop the trailing channel axis so u matches v/x/y: (batch, width, height)
        u_field = K.reshape(u_field, shape=[K.shape(u_field)[0], width, height])

        # Forward differences on the interior points; ':' keeps the batch axis intact
        forward_dudx = (u_field[:, 1:-1, 2:] - u_field[:, 1:-1, 1:-1]) / (x_field[:, 1:-1, 2:] - x_field[:, 1:-1, 1:-1])
        forward_dvdy = (v_field[:, :-2, 1:-1] - v_field[:, 1:-1, 1:-1]) / (y_field[:, :-2, 1:-1] - y_field[:, 1:-1, 1:-1])

        forward_divergence = forward_dudx + forward_dvdy  # COMPUTES FORWARD DIVERGENCE
        # Replace NaNs (from zero grid spacing) with zeros
        forward_divergence = tf.where(tf.math.is_nan(forward_divergence), tf.zeros_like(forward_divergence), forward_divergence)

        return K.square(forward_divergence)  # + u_field_loss[1:-1, :]
    return loss
```
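For completeness, here is a hypothetical sketch of how a closure-style loss like this is wired into `model.compile`; the architecture and shapes are placeholders, not from the original post, and the pattern assumes the graph-mode Keras setup used above (recent eager TF versions may require `model.add_loss` instead):

```python
import tensorflow as tf

input_img = tf.keras.Input(shape=(64, 64, 3))  # channels: [v, x, y] (placeholder shape)
h = tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu')(input_img)
u_pred = tf.keras.layers.Conv2D(1, 3, padding='same')(h)  # predicted u-field

model = tf.keras.Model(inputs=input_img, outputs=u_pred)
# The outer function captures the input tensor so the inner
# loss(y_true, y_pred) can read the v/x/y channels at train time:
model.compile(optimizer='adam', loss=forward_div_loss_fn(input_img))
```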
