
How do you compile a custom loss function in Keras that concatenates the prediction with information from the input tensor?

My goal is to take a current input "image" consisting of [v, x, y] data and predict the current u-field data. The flow field I have is divergence-free, and I am trying to create a divergence-free custom loss function with the following code.
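For reference, a 2-D flow field (u, v) is divergence-free when du/dx + dv/dy = 0, so a physics-informed loss can penalize the discrete divergence of the prediction directly. A minimal NumPy sketch of that condition (illustrative only, not the Keras loss; uniform unit grid spacing and the field names are assumptions):

```python
import numpy as np

def discrete_divergence(u, v, dx=1.0, dy=1.0):
    """Forward-difference divergence du/dx + dv/dy on the interior of a 2-D grid."""
    dudx = (u[:-1, 1:] - u[:-1, :-1]) / dx   # forward difference along x (columns)
    dvdy = (v[1:, :-1] - v[:-1, :-1]) / dy   # forward difference along y (rows)
    return dudx + dvdy

# Example: u = y, v = -x is divergence-free (du/dx = 0 and dv/dy = 0),
# so the discrete divergence should vanish everywhere.
y_grid, x_grid = np.mgrid[0:4, 0:4].astype(float)
div = discrete_divergence(y_grid, -x_grid)
```

Squaring and summing such a divergence field gives the penalty the question is after.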


import numpy as np
import pandas as pd
from keras import backend as K

def div_loss_fn(input_img):
    def loss(y_true, y_pred):
        temp = input_img[0]
        temp = np.concatenate([temp, y_pred], axis=-1)
        m,n,r = temp.shape
        out_arr = np.column_stack((np.repeat(np.arange(m),n),temp.reshape(m*n,-1)))
        out_df = pd.DataFrame(out_arr,columns=['NA', 'v', 'x', 'y', 'u'])
        out_df = out_df.drop(['NA'], axis=1)
        out_df = out_df.sort_values(['y', 'x'])
        out_df['backdudx'] = out_df['u'].diff() / out_df['x'].diff()
        out_df['forwarddudx'] = out_df['u'].diff(periods=-1) / out_df['x'].diff(periods=-1)
        out_df = out_df.sort_values(['x', 'y'])    
        out_df['backdvdy'] = out_df['v'].diff() / out_df['y'].diff()
        out_df['forwarddvdy'] = out_df['v'].diff(periods=-1) / out_df['y'].diff(periods=-1)
        out_df = out_df.fillna(0)
        out_df['divergence'] = (out_df['backdudx'] - out_df['forwarddudx']) + (out_df['backdvdy'] - out_df['forwarddvdy'])
        div_loss = np.sum(out_df['divergence'])
        return K.square(div_loss)
    return loss


However, I get an error when initializing the model, "zero-dimensional arrays cannot be concatenated", because no shape has been defined for y_pred yet. How can I get past this error?

I had to change the loss function to use Keras backend operations, as suggested by @Dr.Snoopy. The following works:

import tensorflow as tf
from keras import backend as K

def forward_div_loss_fn(input_img):
    def loss(y_true, y_pred):
        # Static spatial dimensions; the batch dimension may be None.
        batch_size, width, height, channels = input_img.get_shape().as_list()

        u_field = y_pred
        v_field = input_img[:, :, :, 0]
        x_field = input_img[:, :, :, 1]
        y_field = input_img[:, :, :, 2]

        # Drop the channel axis so u has shape (batch, width, height).
        u_field = K.reshape(u_field, shape=[K.shape(u_field)[0], width, height])

        # Forward differences on the interior of the grid; slicing the first
        # axis with ':' keeps the (possibly dynamic) batch dimension intact.
        forward_dudx = (u_field[:, 1:-1, 2:] - u_field[:, 1:-1, 1:-1]) / (x_field[:, 1:-1, 2:] - x_field[:, 1:-1, 1:-1])
        forward_dvdy = (v_field[:, :-2, 1:-1] - v_field[:, 1:-1, 1:-1]) / (y_field[:, :-2, 1:-1] - y_field[:, 1:-1, 1:-1])

        forward_divergence = forward_dudx + forward_dvdy  # forward divergence du/dx + dv/dy
        # Zero out NaNs produced where the grid spacing in the denominator is zero.
        forward_divergence = tf.where(tf.math.is_nan(forward_divergence), tf.zeros_like(forward_divergence), forward_divergence)

        return K.square(forward_divergence) #+ u_field_loss[1:-1, :]
    return loss
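The slicing pattern above can be checked outside TensorFlow: on a (width, height) grid, `a[1:-1, 2:] - a[1:-1, 1:-1]` is a forward difference along the second axis, evaluated only on interior points. A small NumPy check of that pattern (illustrative; a uniform grid with x increasing along the columns is assumed):

```python
import numpy as np

# x increases along the second (column) axis, matching the loss above.
y_grid, x_grid = np.mgrid[0:5, 0:5].astype(float)
u = 2.0 * x_grid  # du/dx = 2 everywhere

# Same index pattern as the loss, minus the batch axis:
forward_dudx = (u[1:-1, 2:] - u[1:-1, 1:-1]) / (x_grid[1:-1, 2:] - x_grid[1:-1, 1:-1])
```

The factory is then passed at compile time, e.g. `model.compile(optimizer='adam', loss=forward_div_loss_fn(input_tensor))`, where `input_tensor` is presumably the model's Keras `Input` tensor that the closure captures.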

