Why does the loss of the LSTM model rise up over several epochs?
I built a simple many-to-many LSTM regression model. The loss decreases from the start of training, but then rises again after several epochs. Where am I going wrong?
I have run the model on 8 CPU cores with epoch counts ranging from 200 to 600, but the result stays the same.
Here is my code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True, input_shape=(n_steps, n_features)))
model.add(LSTM(50, activation='relu'))
model.add(Dense(output_steps))
model.compile(optimizer='adam', loss='mse')
I expected the loss to decrease gradually with only small fluctuations.
Instead, I see results like the following:
7143/7143 [==============================] - 2s 281us/step - loss: 6595.8919
Epoch 63/200
7143/7143 [==============================] - 2s 289us/step - loss: 6557.3760
Epoch 64/200
7143/7143 [==============================] - 2s 280us/step - loss: 6947.0848
Epoch 65/200
7143/7143 [==============================] - 2s 282us/step - loss: 6439.9647
Epoch 66/200
7143/7143 [==============================] - 2s 277us/step - loss: 6583.3354
Epoch 67/200
7143/7143 [==============================] - 2s 278us/step - loss: 6724.0296
Epoch 68/200
7143/7143 [==============================] - 2s 279us/step - loss: 6457.0547
Epoch 69/200
7143/7143 [==============================] - 2s 278us/step - loss: 6371.6533
Epoch 70/200
7143/7143 [==============================] - 2s 279us/step - loss: 6644.9585
Epoch 71/200
7143/7143 [==============================] - 2s 277us/step - loss: 6340.0420
Epoch 72/200
7143/7143 [==============================] - 2s 279us/step - loss: 9484.5966
Epoch 73/200
7143/7143 [==============================] - 2s 277us/step - loss: 10975.8083
Epoch 74/200
7143/7143 [==============================] - 2s 275us/step - loss: 10174.8291
Epoch 75/200
7143/7143 [==============================] - 2s 282us/step - loss: 9863.0310
Epoch 76/200
7143/7143 [==============================] - 2s 278us/step - loss: 9882.6081
Epoch 77/200
7143/7143 [==============================] - 2s 280us/step - loss: 9398.1880
The learning rate you are using is probably too large for the optimization after roughly epoch 71. The reason is that, relative to the small errors late in training, a fixed learning rate is effectively much larger than it was at the start, so the updates overshoot and the loss jumps.
As a fix, you can either use a smaller learning rate or decay the learning rate monotonically over the course of training.
Good luck :-)
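A minimal sketch of both suggestions in Keras, reusing the question's model. The shape values (`n_steps`, `n_features`, `output_steps`), the initial learning rate of 1e-4, and the halve-every-50-epochs schedule are illustrative assumptions, not tuned values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import LearningRateScheduler

n_steps, n_features, output_steps = 10, 1, 3  # assumed shapes

model = Sequential()
model.add(LSTM(50, activation='relu', return_sequences=True,
               input_shape=(n_steps, n_features)))
model.add(LSTM(50, activation='relu'))
model.add(Dense(output_steps))

# Option 1: a smaller initial learning rate than Adam's default of 1e-3.
model.compile(optimizer=Adam(learning_rate=1e-4), loss='mse')

# Option 2: decay the learning rate monotonically, here by
# halving it every 50 epochs.
def schedule(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 50 == 0 else lr

lr_callback = LearningRateScheduler(schedule)
# model.fit(X, y, epochs=200, callbacks=[lr_callback])
```

`ReduceLROnPlateau` is an alternative callback that lowers the rate only when the monitored loss stops improving, which avoids choosing a schedule by hand.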