
Why is my loss function increasing with each epoch?

I am new to ML, so I apologize if this is a silly question that anyone could figure out. I am using TensorFlow and Keras here.

So here is my code:

import tensorflow as tf
import numpy as np
from tensorflow import keras

# a single Dense unit: a linear model y = w*x + b
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer="sgd", loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict(np.array([25.0])))  # predict() expects an array, not a bare list

This is the output I get (I am not showing all 500 lines, just the first 20 epochs):

Epoch 1/500
1/1 [==============================] - 0s 210ms/step - loss: 450.9794
Epoch 2/500
1/1 [==============================] - 0s 4ms/step - loss: 1603.0852
Epoch 3/500
1/1 [==============================] - 0s 10ms/step - loss: 5698.4731
Epoch 4/500
1/1 [==============================] - 0s 7ms/step - loss: 20256.3398
Epoch 5/500
1/1 [==============================] - 0s 10ms/step - loss: 72005.1719
Epoch 6/500
1/1 [==============================] - 0s 4ms/step - loss: 255956.5938
Epoch 7/500
1/1 [==============================] - 0s 3ms/step - loss: 909848.5000
Epoch 8/500
1/1 [==============================] - 0s 5ms/step - loss: 3234236.0000
Epoch 9/500
1/1 [==============================] - 0s 3ms/step - loss: 11496730.0000
Epoch 10/500
1/1 [==============================] - 0s 3ms/step - loss: 40867392.0000
Epoch 11/500
1/1 [==============================] - 0s 3ms/step - loss: 145271264.0000
Epoch 12/500
1/1 [==============================] - 0s 3ms/step - loss: 516395584.0000
Epoch 13/500
1/1 [==============================] - 0s 4ms/step - loss: 1835629312.0000
Epoch 14/500
1/1 [==============================] - 0s 3ms/step - loss: 6525110272.0000
Epoch 15/500
1/1 [==============================] - 0s 3ms/step - loss: 23194802176.0000
Epoch 16/500
1/1 [==============================] - 0s 3ms/step - loss: 82450513920.0000
Epoch 17/500
1/1 [==============================] - 0s 3ms/step - loss: 293086593024.0000
Epoch 18/500
1/1 [==============================] - 0s 5ms/step - loss: 1041834835968.0000
Epoch 19/500
1/1 [==============================] - 0s 3ms/step - loss: 3703408164864.0000
Epoch 20/500
1/1 [==============================] - 0s 3ms/step - loss: 13164500484096.0000

As you can see, the loss grows exponentially. Soon (by epoch 64) the numbers become inf, and then from infinity they somehow turn into NaN (Not a Number). I thought the model would get better at figuring out the pattern over time. What is going on?

One thing I noticed: if I reduce the length of xs and ys from 20 to 10, the loss decreases and ends up at 7.9193e-05. Once I increase the length of both numpy arrays to 18, it starts increasing uncontrollably; below that it is fine. I used 20 values because I thought the model would do better with more data.

Your alpha / learning rate seems to be too large.

Try using a lower learning rate, like this:

import tensorflow as tf
import numpy as np
from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
# manually set the optimizer, default learning_rate=0.01
opt = keras.optimizers.SGD(learning_rate=0.0001)

model.compile(optimizer=opt, loss="mean_squared_error")
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0, 19.0, 20.0], dtype=float)
ys = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
model.fit(xs, ys, epochs=500)
print(model.predict(np.array([25.0])))  # predict() expects an array, not a bare list

...which actually works.
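A rough back-of-envelope check (my own sketch, not part of the original answer) makes the "too large" claim concrete. The fit above is effectively full-batch gradient descent (all 20 samples fit in one batch, hence the 1/1 steps), the MSE loss of a linear model is quadratic, and gradient descent on a quadratic diverges once the learning rate exceeds 2/lambda_max, where lambda_max is the largest eigenvalue of the Hessian (2/n) * X^T X:

import numpy as np

# GD on MSE with a linear model diverges when learning_rate > 2 / lambda_max,
# with lambda_max the largest eigenvalue of the Hessian (2/n) * X^T X
# (X has one column for x and one constant column for the bias).
def divergence_threshold(xs):
    X = np.column_stack([xs, np.ones_like(xs)])
    hessian = 2.0 * (X.T @ X) / len(xs)
    return 2.0 / np.linalg.eigvalsh(hessian).max()

for n in (10, 18, 20):
    print(n, divergence_threshold(np.arange(1.0, n + 1.0)))
# 10 -> ~0.0254  (the default 0.01 converges)
# 18 -> ~0.0085  (the default 0.01 diverges)
# 20 -> ~0.0069  (the default 0.01 diverges)

This matches the observation in the question that 10 points trained fine while 18 did not, and it also predicts the blow-up rate: for n=20, lambda_max is about 288.5, so each epoch should multiply the loss by roughly (1 - 0.01 * 288.5)^2 ≈ 3.55, which is the ratio between consecutive losses in the log above (e.g. 1603.09 / 450.98 ≈ 3.55).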

The reason ADAM works better is probably that it estimates learning rates adaptively - I believe the A in ADAM stands for Adaptive ;)

Epoch 1/500
1/1 [==============================] - 0s 129ms/step - loss: 1.2133
Epoch 2/500
1/1 [==============================] - 0s 990us/step - loss: 1.1442
Epoch 3/500
1/1 [==============================] - 0s 0s/step - loss: 1.0792
Epoch 4/500
1/1 [==============================] - 0s 1ms/step - loss: 1.0178
Epoch 5/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9599
Epoch 6/500
1/1 [==============================] - 0s 1ms/step - loss: 0.9053
Epoch 7/500
1/1 [==============================] - 0s 0s/step - loss: 0.8538
Epoch 8/500
1/1 [==============================] - 0s 1ms/step - loss: 0.8053
Epoch 9/500
1/1 [==============================] - 0s 999us/step - loss: 0.7595
Epoch 10/500
1/1 [==============================] - 0s 1ms/step - loss: 0.7163
...
Epoch 499/500
1/1 [==============================] - 0s 1ms/step - loss: 9.9431e-06
Epoch 500/500
1/1 [==============================] - 0s 999us/step - loss: 9.9420e-06

EDIT:

From https://arxiv.org/pdf/1412.6980.pdf :

"The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation."
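To make "adaptive moment estimation" concrete, here is a minimal single-parameter sketch of Algorithm 1 from the paper (my own paraphrase, not code from this answer). Because the step is divided by the square root of the running second moment, its size stays on the order of lr no matter how large the raw gradients get, which is what prevents the blow-up seen with plain SGD:

import numpy as np

# One Adam update for a single parameter (paper defaults shown).
def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad         # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2    # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)               # bias correction for zero initialization
    v_hat = v / (1 - beta2 ** t)
    # m_hat / sqrt(v_hat) has magnitude ~1 for consistent gradients, so the
    # effective step is about lr regardless of the raw gradient scale
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v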

The SGD optimizer seems to perform poorly on your dataset. If you replace the optimizer with "adam", you should get the result you expect.

model.compile(optimizer="adam", loss="mean_squared_error")

The prediction should then be what you expect:

print(model.predict(np.array([25.0])))  # predict() expects an array, not a bare list
# [[12.487587]]

I am not 100% sure why the SGD optimizer performs so badly here.

EDIT:

@MortenJensen (in the other answer) gives a good explanation of why the adam optimizer does better. In summary: sgd performs poorly here because it needs a smaller learning rate, whereas Adam adapts its learning rate.
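For reference, the string "adam" passed to compile() is just shorthand for the optimizer object with its Keras defaults, so the summary above can be written out explicitly (reusing model from the code above):

from tensorflow import keras

# equivalent to optimizer="adam": Adam with its Keras default learning_rate=0.001
opt = keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=opt, loss="mean_squared_error")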
