
Tensorflow neural network is always 50% sure after training

I just followed a tutorial on neural networks and tried to put my knowledge to the test. I made a simple network that learns XOR logic, but for some reason it always returns 0.5 (i.e. it is 50% sure). Here is my code:

import tensorflow as tf
import numpy as np

def random_normal(shape=1):
    return (np.random.random(shape) - 0.5) * 2

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

input_size = 2
hidden_size = 16
output_size = 1

x = tf.placeholder(dtype=tf.float32, name="X")
y = tf.placeholder(dtype=tf.float32, name="Y")

W1 = tf.Variable(random_normal((input_size, hidden_size)), dtype=tf.float32, name="W1")
W2 = tf.Variable(random_normal((hidden_size, output_size)), dtype=tf.float32, name="W2")

b1 = tf.Variable(random_normal(hidden_size), dtype=tf.float32, name="b1")
b2 = tf.Variable(random_normal(output_size), dtype=tf.float32, name="b2")

l1 = tf.sigmoid(tf.add(tf.matmul(x, W1), b1), name="l1")
result = tf.sigmoid(tf.add(tf.matmul(l1, W2), b2), name="l2")

r_squared = tf.square(result - y)
loss = tf.reduce_mean(r_squared)

optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)

hm_epochs = 10000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for itr in range(hm_epochs):
        sess.run(train, {x: train_x, y: train_y})
        if itr % 100 == 0:
            print("Epoch {} done".format(itr))
    print(sess.run(result, {x: [[1, 0]]}))

Sorry if this is a bad question, I'm new to machine learning.

Your neural network is actually correct, and the answer might surprise you. Change...

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

to...

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]]).reshape((4, 2))
train_y = np.array([1, 1, 0, 0]).reshape((4, 1))

You can check that np.array([1, 1, 0, 0]).shape is (4,), not (4, 1). As a result, y also gets the shape (4,), so result - y has shape (4, 4). In other words, the loss averages 16 differences that have nothing to do with the actual comparison of predictions and labels. So my advice for the future: always specify the placeholder shapes explicitly, so that these errors are easier to spot.
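To make the broadcasting concrete, here is a minimal sketch (not part of the original answer) showing how a (4, 1) prediction minus a (4,) label vector silently becomes a (4, 4) array, and how explicitly shaped placeholders would surface the mismatch at feed time instead:

import numpy as np
import tensorflow as tf

# NumPy broadcasting: (4, 1) - (4,) -> (4, 4), so the squared-error loss
# averages 16 unrelated differences instead of 4 per-example errors.
pred = np.zeros((4, 1))
labels = np.array([1, 1, 0, 0])
print((pred - labels).shape)  # (4, 4)

# With explicit shapes, feeding a (4,)-shaped train_y raises a ValueError
# right away rather than silently broadcasting.
x = tf.placeholder(tf.float32, shape=[None, 2], name="X")
y = tf.placeholder(tf.float32, shape=[None, 1], name="Y")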

You can find the complete code in a GitHub gist I created. One more thing to note: the last sigmoid actually makes learning the [0, 1] outputs harder. If you remove it, the network converges faster.
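For reference, dropping that last sigmoid is a small change to the output layer of the question's code (a sketch of the idea, keeping everything else unchanged):

# Emit raw (linear) outputs instead of squashing them through a sigmoid;
# the squared-error loss then pushes values toward 0/1 without the tiny
# gradients of a saturated sigmoid slowing things down.
result = tf.add(tf.matmul(l1, W2), b2, name="l2")
loss = tf.reduce_mean(tf.square(result - y))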

Using TensorFlow

import tensorflow as tf
import keras
import numpy as np
seed = 128

train_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
train_y = np.array([1, 1, 0, 0])

test_x = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
test_y = np.array([1, 1, 0, 0])

num_classes = 2
y_train_binary = keras.utils.to_categorical(train_y, num_classes)
y_test_binary = keras.utils.to_categorical(test_y, num_classes)

def random_normal(shape=1):
    return (np.random.random(shape) - 0.5) * 2

n_hidden_1 = 16
n_input = train_x.shape[1]
n_classes = y_train_binary.shape[1]

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

keep_prob = tf.placeholder("float")

training_epochs = 500
display_step = 100
batch_size = 1

x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

def multilayer_perceptron(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
    return out_layer

predictions = multilayer_perceptron(x, weights, biases)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(cost)

sess = tf.Session()

sess.run(tf.global_variables_initializer())

for epoch in range(training_epochs):
    avg_cost = 0.0
    total_batch = int(len(train_x) / batch_size)
    x_batches = np.array_split(train_x, total_batch)
    y_batches = np.array_split(y_train_binary, total_batch)
    for i in range(total_batch):
        batch_x, batch_y = x_batches[i], y_batches[i]
        _, c = sess.run([optimizer, cost], 
                        feed_dict={x: batch_x, y: batch_y})
        avg_cost += c / total_batch

    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))

print("Optimization Finished!")
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy:", accuracy.eval({x: test_x, y: y_test_binary}, session=sess))

Epoch: 0001 cost=3.069790050
Epoch: 0101 cost=0.001279908
Epoch: 0201 cost=0.000363608
Epoch: 0301 cost=0.000168160
Epoch: 0401 cost=0.000095065
Optimization Finished!
Accuracy: 1.0

test_input = [0, 1]
'Label: ', np.argmax(sess.run(predictions, feed_dict={x: [test_input]}))

('Label: ', 1)


For a simple case like this, you can quickly test with Keras and see whether the dataset fits a neural network well. However, you will need to simulate more data to tune the network properly. I don't think the gradient descent algorithm can find the optimum with backpropagation on only 4 instances.

Let's simulate more data

n = 1000

X_train = np.zeros((n, 2))
y_train = np.zeros((n,))

X_test = np.zeros((n//3, 2))
y_test = np.zeros((n//3,))

for i in range(n):
    # put roughly every third example into the test set as well
    if i % 3 == 0 and i // 3 < n // 3:
        a, b = np.random.randint(0, 2), np.random.randint(0, 2)
        X_test[i // 3, 0], X_test[i // 3, 1] = a, b
        y_test[i // 3] = (a and not b) or (not a and b)
    a, b = np.random.randint(0, 2), np.random.randint(0, 2)
    X_train[i, 0], X_train[i, 1] = a, b
    y_train[i] = (a and not b) or (not a and b)

num_classes = 2
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)

input_shape = (2,)

Now let's build the model

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(16, activation='relu', input_shape=input_shape))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

history = model.fit(X_train,
                    y_train_binary,
                    epochs=10,
                    batch_size=8,
                    validation_data=(X_test, y_test_binary))

This results in 100% accuracy.
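To sanity-check the learned XOR mapping after training (a quick check, not part of the original answer), one could predict all four inputs and take the argmax over the two softmax classes:

probs = model.predict(np.array([[1, 0], [0, 1], [1, 1], [0, 0]]))
print(np.argmax(probs, axis=1))  # expected: [1 1 0 0]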
