
Custom Loss Function - Keras

I am developing a regression model to predict cryptocurrency prices, and I created a simple loss function. The idea is straightforward: the target Y is the price change over some lookup window, so the values are either positive or negative. The plan is to first apply the MAE loss, then penalize cases where y_pred is positive while y_true is negative (and vice versa), and reduce the loss where y_pred is positive and y_true is also positive (and vice versa). However, when I train with my custom loss function, precision does not rise above 0.50, whereas with the plain MAE loss it reaches around 0.535. Any idea what is causing this?
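For context, the target construction described above can be sketched as follows. The actual lookup window, data source, and whether the change is absolute or relative are assumptions, since the post does not show that pipeline:

```python
import numpy as np

# hypothetical price series and lookahead window (not from the post)
prices = np.array([100.0, 101.0, 99.5, 102.0, 101.0])
window = 1

# relative price change over the window; positive = price went up
y = (prices[window:] - prices[:-window]) / prices[:-window]
print(np.sign(y))  # [ 1. -1.  1. -1.]
```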

The loss function:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K

def loss_fn(
    # the loss mode: one of "mae", "rmse", "mape", "huber".
    mode="mae",
    # the threshold around zero within which no penalty/reduction applies.
    threshold=0.0,
    # penalize incorrect predictions (predicted positive while actually negative, and vice versa); should be >= 1.
    penalizer=1.0,
    # reduce correct predictions (predicted positive while actually positive, and vice versa); should be <= 1.
    reducer=1.0,
):
    def loss_function(y_true, y_pred):
        if mode == "mae":
            loss = keras.losses.MAE(y_true, y_pred)
        elif mode == "rmse":
            loss = K.sqrt(K.mean(K.square(y_pred - y_true)))
        elif mode == "mape":
            loss = keras.losses.mean_absolute_percentage_error(y_true, y_pred)
        elif mode == "huber":
            loss = keras.losses.Huber()(y_true, y_pred)
        if penalizer != 1.0 or reducer != 1.0:
            # apply the penalizer on sign mismatches.
            # note: the [:, 0] indexing assumes y_pred has shape (batch, 1).
            mask = tf.where(
                tf.logical_or(
                    tf.logical_and(K.less_equal(y_pred, -1 * threshold), K.greater(y_true, 0.0)),
                    tf.logical_and(K.greater_equal(y_pred, threshold), K.less(y_true, 0.0)),
                ),
                penalizer,
                1.0,
            )[:, 0]
            loss = tf.multiply(loss, mask)
            # apply the reducer on sign matches.
            mask = tf.where(
                tf.logical_or(
                    tf.logical_and(K.less_equal(y_pred, -1 * threshold), K.less(y_true, 0.0)),
                    tf.logical_and(K.greater_equal(y_pred, threshold), K.greater(y_true, 0.0)),
                ),
                reducer,
                1.0,
            )[:, 0]
            loss = tf.multiply(loss, mask)
            loss = tf.math.reduce_mean(loss)
        return loss
    return loss_function

loss = loss_fn(mode="mae", threshold=0.0, penalizer=3.0, reducer=1.0/3.0)
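As a quick sanity check of the masking logic, the intended scaling can be reproduced on a tiny batch. This is a simplified re-derivation (sign comparison instead of the threshold form, assuming threshold=0), not the original code path:

```python
import tensorflow as tf

# one sign-mismatched sample, one sign-matched sample; shape (batch, 1)
y_true = tf.constant([[ 1.0], [-1.0]])
y_pred = tf.constant([[-0.5], [-0.5]])

mae = tf.abs(y_true - y_pred)[:, 0]                       # per-sample MAE: [1.5, 0.5]
mismatch = tf.not_equal(tf.sign(y_pred), tf.sign(y_true))[:, 0]
mask = tf.where(mismatch, 3.0, 1.0 / 3.0)                 # penalizer=3, reducer=1/3
loss = tf.reduce_mean(mae * mask)
print(float(loss.numpy()))  # (1.5*3 + 0.5/3) / 2 ≈ 2.3333
```

If the masked loss on such examples does not match hand-computed values, the bug is in the mask construction rather than in the training setup.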

Does anyone see any mistakes or bugs that could be causing this?

Training logs with "mae" as the loss:

Epoch 1/250
2829/2829 [==============================] - 44s 12ms/step - loss: 0.8713 - precision: 0.5311 - val_loss: 0.9731 - val_precision: 0.5343
Epoch 2/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8705 - precision: 0.5341 - val_loss: 0.9732 - val_precision: 0.5323
Epoch 3/250
2829/2829 [==============================] - 31s 11ms/step - loss: 0.8702 - precision: 0.5343 - val_loss: 0.9727 - val_precision: 0.5372
Epoch 4/250
2829/2829 [==============================] - 31s 11ms/step - loss: 0.8701 - precision: 0.5345 - val_loss: 0.9730 - val_precision: 0.5336
Epoch 5/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8700 - precision: 0.5344 - val_loss: 0.9732 - val_precision: 0.5316
Epoch 6/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8699 - precision: 0.5347 - val_loss: 0.9726 - val_precision: 0.5334
Epoch 7/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8697 - precision: 0.5346 - val_loss: 0.9731 - val_precision: 0.5331
Epoch 8/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8695 - precision: 0.5343 - val_loss: 0.9722 - val_precision: 0.5382
Epoch 9/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8693 - precision: 0.5346 - val_loss: 0.9724 - val_precision: 0.5330
Epoch 10/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8693 - precision: 0.5345 - val_loss: 0.9732 - val_precision: 0.5331
Epoch 11/250
2829/2829 [==============================] - 32s 11ms/step - loss: 0.8692 - precision: 0.5342 - val_loss: 0.9738 - val_precision: 0.5339
Epoch 12/250
2829/2829 [==============================] - 31s 11ms/step - loss: 0.8690 - precision: 0.5345 - val_loss: 0.9729 - val_precision: 0.5356
Epoch 13/250
2829/2829 [==============================] - 31s 11ms/step - loss: 0.8687 - precision: 0.5350 - val_loss: 0.9728 - val_precision: 0.5342

Training logs with the custom loss function (EarlyStopping enabled):

Epoch 1/250
2829/2829 [==============================] - 42s 12ms/step - loss: 1.4488 - precision: 0.5039 - val_loss: 1.5693 - val_precision: 0.5021
Epoch 2/250
2829/2829 [==============================] - 33s 12ms/step - loss: 1.4520 - precision: 0.5022 - val_loss: 1.6135 - val_precision: 0.5132
Epoch 3/250
2829/2829 [==============================] - 33s 12ms/step - loss: 1.4517 - precision: 0.5019 - val_loss: 1.6874 - val_precision: 0.4983
Epoch 4/250
2829/2829 [==============================] - 33s 12ms/step - loss: 1.4536 - precision: 0.5017 - val_loss: 1.6885 - val_precision: 0.4982
Epoch 5/250
2829/2829 [==============================] - 33s 12ms/step - loss: 1.4513 - precision: 0.5028 - val_loss: 1.6654 - val_precision: 0.5004
Epoch 6/250
2829/2829 [==============================] - 34s 12ms/step - loss: 1.4578 - precision: 0.4997 - val_loss: 1.5716 - val_precision: 0.5019

Any idea what could be causing this?

I am assuming you have set a seed for reproducibility; otherwise the difference could simply come from the random initialization. When you change the loss function, you change the loss surface over which gradient descent optimizes your network.

And since you cannot guarantee that your model will reach the global minimum, and it will most likely stop at a local minimum instead, this may simply mean that, even given the same initialization (with the seed set), the optimization process stops at a different local minimum.
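To rule out initialization effects when comparing the two losses, the runs should start from identical weights. A minimal sketch of that setup, assuming TensorFlow 2.x with `tf.keras.utils.set_random_seed` available:

```python
import numpy as np
import tensorflow as tf

SEED = 42

# seeds Python's random, NumPy, and TensorFlow in one call
tf.keras.utils.set_random_seed(SEED)
layer_a = tf.keras.layers.Dense(8)
layer_a.build((None, 4))
w0 = layer_a.get_weights()[0].copy()

# re-seeding before rebuilding reproduces the same initial kernel
tf.keras.utils.set_random_seed(SEED)
layer_b = tf.keras.layers.Dense(8)
layer_b.build((None, 4))
w1 = layer_b.get_weights()[0]
print(np.allclose(w0, w1))  # True
```

With the initialization fixed this way, any remaining difference in precision between the two runs is attributable to the loss surface rather than to the starting point.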
