
How do you compile a custom loss function in Keras that concatenates the prediction with information from the input tensor?

My goal is to take a current input "image" consisting of [v, x, y] data and predict the current u-field data. The flow field I have is divergence-free, and I am trying to create a divergence-free custom loss function with the code below.
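To make the divergence-free constraint concrete, here is a small NumPy sketch (the names and the toy field are illustrative, not from the question) that computes a discrete forward-difference divergence on a uniform grid. For the analytically divergence-free field u = x, v = -y (du/dx = 1, dv/dy = -1), the discrete divergence is zero everywhere:

```python
import numpy as np

# Uniform grid of (x, y) coordinates
n = 8
x, y = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float), indexing="xy")

# An analytically divergence-free field: du/dx = 1, dv/dy = -1, so div = 0
u = x.copy()
v = -y.copy()

# Forward differences along the corresponding axes
dudx = (u[:, 1:] - u[:, :-1]) / (x[:, 1:] - x[:, :-1])
dvdy = (v[1:, :] - v[:-1, :]) / (y[1:, :] - y[:-1, :])

# Divergence on the interior region where both differences exist
divergence = dudx[:-1, :] + dvdy[:, :-1]
print(np.abs(divergence).max())  # 0.0 for this field
```

A loss that penalizes this quantity pushes the predicted u-field toward satisfying the same physical constraint as the training data.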

```python
import numpy as np
import pandas as pd
from tensorflow.keras import backend as K

def div_loss_fn(input_img):
    def loss(y_true, y_pred):
        y_pred = loss
        temp = input_img[0]
        temp = np.concatenate([temp, y_pred], axis=-1)
        m,n,r = temp.shape
        out_arr = np.column_stack((np.repeat(np.arange(m),n),temp.reshape(m*n,-1)))
        out_df = pd.DataFrame(out_arr,columns=['NA', 'v', 'x', 'y', 'u'])
        out_df = out_df.drop(['NA'], axis=1)
        out_df = out_df.sort_values(['y', 'x'])
        out_df['backdudx'] = out_df['u'].diff() / out_df['x'].diff()
        out_df['forwarddudx'] = out_df['u'].diff(periods=-1) / out_df['x'].diff(periods=-1)
        out_df = out_df.sort_values(['x', 'y'])
        out_df['backdvdy'] = out_df['v'].diff() / out_df['y'].diff()
        out_df['forwarddvdy'] = out_df['v'].diff(periods=-1) / out_df['y'].diff(periods=-1)
        out_df = out_df.fillna(0)
        out_df['divergence'] = (out_df['backdudx'] - out_df['forwarddudx']) + (out_df['backdvdy'] - out_df['forwarddvdy'])
        div_loss = np.sum(out_df['divergence'])
        return K.square(div_loss)
    return loss
```

However, I get an error when initializing the model, "zero-dimensional arrays cannot be concatenated", because no shape has been defined for y_pred yet. How can I overcome this error?

I had to change the loss function to use Keras backend operations, as suggested by @Dr.Snoopy. The following works:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def forward_div_loss_fn(input_img):
    def loss(y_true, y_pred):
        batch_size, width, height, channels = input_img.get_shape().as_list()

        u_field = y_pred
        v_field = input_img[:, :, :, 0]
        x_field = input_img[:, :, :, 1]
        y_field = input_img[:, :, :, 2]

        u_field = K.reshape(u_field, shape=[K.shape(u_field)[0], width, height])

        # Slice with ':' on the batch axis so the forward differences are taken
        # per sample (indexing with batch_size here would be a bug, since the
        # static batch dimension is typically None)
        forward_dudx = (u_field[:, 1:-1, 2:] - u_field[:, 1:-1, 1:-1]) / (x_field[:, 1:-1, 2:] - x_field[:, 1:-1, 1:-1])
        forward_dvdy = (v_field[:, :-2, 1:-1] - v_field[:, 1:-1, 1:-1]) / (y_field[:, :-2, 1:-1] - y_field[:, 1:-1, 1:-1])

        forward_divergence = forward_dudx + forward_dvdy  # COMPUTES FORWARD DIVERGENCE
        # Replace NaNs (from zero coordinate spacing) with zeros
        forward_divergence = tf.where(tf.math.is_nan(forward_divergence), tf.zeros_like(forward_divergence), forward_divergence)

        return K.square(forward_divergence) #+ u_field_loss[1:-1, :]
    return loss
```
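The slicing in the working loss takes a forward difference along the column (x) axis for u on interior rows, and along the row (y) axis for v on interior columns. A small NumPy check of the same slice pattern, with made-up values chosen so the result is easy to verify:

```python
import numpy as np

# A toy batch of one 4x4 "field"; u increases by 1 per column
u = np.arange(16, dtype=float).reshape(1, 4, 4)   # u[batch, row, col]
x = np.tile(np.arange(4, dtype=float), (4, 1)).reshape(1, 4, 4)  # x increases along columns

# Same slice pattern as in the loss: forward difference in x on interior rows
forward_dudx = (u[:, 1:-1, 2:] - u[:, 1:-1, 1:-1]) / (x[:, 1:-1, 2:] - x[:, 1:-1, 1:-1])

print(forward_dudx.shape)  # (1, 2, 2): interior rows, interior columns shifted right
print(forward_dudx)        # every entry is 1.0, since u grows by 1 per unit x
```

In the question's setup, the closure would then be attached at compile time with something like `model.compile(optimizer='adam', loss=forward_div_loss_fn(input_tensor))`, where `input_tensor` is the model's `Input` layer (the exact wiring is not shown in the post).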

