Why is accuracy lower than 0.01, but the predictions very good (99.99%)?
I built my first neural network with TensorFlow 2 in Python. The idea was to build a network that learns to convert a binary number (8 bits) to its decimal value. After a few attempts: yes, it is very precise!
But what I don't understand is: the reported accuracy is very low.
The second thing is: the model has to be trained on more than 200,000 values. For only 256 possible answers? Where is the flaw in my code/model?
# dataset
def dataset(length, num):
    global testdata, solution
    testdata = np.random.randint(2, size=(num, length))
    solution = testdata.copy()
    solution = np.zeros((num, 1))
    for i in range(num):
        for n in range(length):
            x = testdata[i, length - n - 1] * (2 ** n)
            solution[i] += x

length = 8
num = 220000
dataset(length, num)
# Model
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='relu')
])

model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['accuracy'])
# Training and evaluation
model.fit(testdata, solution, epochs=4)
model.evaluate(t_testdata, t_solution, verbose=2)
model.summary()
loss: 6.6441e-05 - accuracy: 0.0077
Shouldn't it be 0.77 or higher?
You should not use accuracy as a metric for a regression problem. Since the model outputs a single continuous value, even a tiny deviation from the target makes the accuracy metric count the prediction as wrong. Consider the following example: suppose you try to predict the value 15 and the model returns 14.99; the resulting accuracy is still zero.
m = tf.keras.metrics.Accuracy()
_ = m.update_state([[15]], [[14.99]])
m.result().numpy()
Result:
0.0
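A regression metric, by contrast, reflects how close the predictions are. A minimal pure-Python sketch (the prediction/target values here are made up for illustration) comparing exact-match accuracy, accuracy after rounding, and mean absolute error:

```python
# Hypothetical predictions and integer targets (assumed values, for illustration only).
preds = [14.99, 128.02, 6.0, 200.97]
targets = [15, 128, 6, 201]

# Exact-match "accuracy": almost always near zero for continuous outputs.
exact = sum(p == t for p, t in zip(preds, targets)) / len(preds)

# Rounding predictions first recovers a meaningful hit rate.
rounded = sum(round(p) == t for p, t in zip(preds, targets)) / len(preds)

# Mean absolute error measures how close the predictions really are.
mae = sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

print(exact, rounded, mae)
```

Only the 6.0 prediction matches its target exactly, so exact-match accuracy is 0.25 even though every prediction is within 0.03 of the right answer.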
You can instead consider regression metrics such as mean squared error (mse), mean absolute error (mae), or root mean squared error (rmse).
I tried the same problem with one of the metrics above, and got the following result.
def bin2int(bin_list):
    # bin_list = [0, 0, 0, 1]
    int_val = ""
    for k in bin_list:
        int_val += str(int(k))
    # int_val = "11011011"
    return int(int_val, 2)
def dataset(num):
    # num - number of samples
    bin_len = 8
    X = np.zeros((num, bin_len))
    Y = np.zeros((num))
    for i in range(num):
        X[i] = np.around(np.random.rand(bin_len)).astype(int)
        Y[i] = bin2int(X[i])
    return X, Y
no_of_samples = 220000
trainX, trainY = dataset(no_of_samples)
testX, testY = dataset(5)
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='relu')
])

model.compile(optimizer='adam',
              loss='mean_absolute_error',
              metrics=['mse'])

model.fit(trainX, trainY, validation_data=(testX, testY), epochs=4)

model.summary()
Output:
Epoch 1/4
6875/6875 [==============================] - 15s 2ms/step - loss: 27.6938 - mse: 2819.9429 - val_loss: 0.0066 - val_mse: 5.2560e-05
Epoch 2/4
6875/6875 [==============================] - 15s 2ms/step - loss: 0.0580 - mse: 0.1919 - val_loss: 0.0066 - val_mse: 6.0013e-05
Epoch 3/4
6875/6875 [==============================] - 16s 2ms/step - loss: 0.0376 - mse: 0.0868 - val_loss: 0.0106 - val_mse: 1.2932e-04
Epoch 4/4
6875/6875 [==============================] - 15s 2ms/step - loss: 0.0317 - mse: 0.0466 - val_loss: 0.0177 - val_mse: 3.2429e-04
Model: "sequential_11"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_24 (Dense) multiple 72
_________________________________________________________________
dense_25 (Dense) multiple 9
_________________________________________________________________
round_4 (Round) multiple 0
=================================================================
Total params: 81
Trainable params: 81
Non-trainable params: 0
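Note that the summary above lists a Round layer that does not appear in the code shown, so the model presumably rounded its output to the nearest integer as a final step. The same effect can be obtained by rounding the raw float prediction as post-processing; a sketch using the raw value from the answer's prediction:

```python
# Raw model output for input [0, 0, 0, 0, 0, 1, 1, 0], taken from the answer below.
raw_prediction = 5.993815

# Round to the nearest integer to get the final decimal value.
decimal_value = round(raw_prediction)
print(decimal_value)
```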
Prediction:
model.predict([[0., 0., 0., 0., 0., 1., 1., 0.]])
array([[5.993815]], dtype=float32)
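As for the "220,000 samples for only 256 possible inputs?" part of the question: binary-to-decimal conversion is a fixed linear map, with each bit contributing 2^n to the result, so in principle a single linear unit with the right weights computes it exactly and no large training set is needed. A plain-Python sketch of that exact computation:

```python
# Bit weights for an 8-bit number, most significant bit first: [128, 64, ..., 1].
weights = [2 ** n for n in range(7, -1, -1)]

# The same input as the prediction above.
bits = [0, 0, 0, 0, 0, 1, 1, 0]

# The dot product of bits and weights gives the exact decimal value.
value = sum(b * w for b, w in zip(bits, weights))
print(value)
```

This matches the network's prediction of roughly 5.99 for the same input.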