
Why is the accuracy of the training model not changing in the TensorFlow code?

I am new to TensorFlow and Python. I modified a sample TensorFlow script by adding a hidden layer with 50 units, but the accuracy comes out wrong and it does not change no matter how many epochs the model trains for. I cannot find any problem in the code. The dataset is MNIST:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot = True)

batch_size = 100
n_batch = mnist.train.num_examples // batch_size

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])


W = tf.Variable(tf.zeros([784, 50]))
b = tf.Variable(tf.zeros([50]))

Wx_plus_b_L1 = tf.matmul(x,W) + b
L1 = tf.nn.relu(Wx_plus_b_L1)

W_2 = tf.Variable(tf.zeros([50, 10]))
b_2 = tf.Variable(tf.zeros([10]))

prediction = tf.nn.softmax(tf.matmul(L1, W_2) + b_2)


loss = tf.reduce_mean(tf.square(y - prediction))
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)


correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(prediction,1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

init = tf.global_variables_initializer()


with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter:" + str(epoch) + ", Testing Accuray:" + str(acc))

The output is always the same accuracy:

Iter:0, Testing Accuray:0.1135
2018-05-31 18:05:21.039188: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.
Iter:1, Testing Accuray:0.1135
2018-05-31 18:05:22.551525: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.
Iter:2, Testing Accuray:0.1135
2018-05-31 18:05:24.070686: W tensorflow/core/framework/allocator.cc:101] Allocation of 31360000 exceeds 10% of system memory.

What is wrong with this code? Thanks~~

I think this has to do with the graph: the accuracy op never gets evaluated during training because the only operation you are running in the inner loop is train_step. Change this code to:

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run([train_step, accuracy], feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter:" + str(epoch) + ", Testing Accuray:" + str(acc))

The reason is that I initialized all the weights and biases to zero. If you do that, every neuron in a layer produces the same output, and backpropagation then behaves identically for all neurons within that layer: same gradients, same weight updates, which is obviously not an acceptable result.
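A minimal sketch of the fix, assuming the rest of the script stays exactly as posted: give the weight matrices small random initial values (for example a truncated normal) so that neurons in the same layer receive different gradients; the biases can remain zero.

W = tf.Variable(tf.truncated_normal([784, 50], stddev=0.1))   # random init breaks the symmetry between hidden units
b = tf.Variable(tf.zeros([50]))

W_2 = tf.Variable(tf.truncated_normal([50, 10], stddev=0.1))  # same idea for the output layer
b_2 = tf.Variable(tf.zeros([10]))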

I ran into the same problem on the Titanic dataset. Changing the learning rate helped:

optimize = tf.train.AdamOptimizer(learning_rate=0.000001).minimize(mean_loss)

The accuracy finally started to change once I changed it from 0.001. Before that I tried varying the number of layers, the batch size, and the hidden layer size, but nothing helped.
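As a hedged illustration only (this answer was tested on Titanic, not on the MNIST script from the question), the equivalent change in the code above would be to swap the optimizer line for something like:

train_step = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)  # smaller learning rate than the original 0.2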

