TensorFlow MNIST for Experts low accuracy

I followed the TensorFlow MNIST for Experts tutorial and wrote the code below, which is a copy of the tutorial's. But when I run my code, the accuracy is only around 92%, 86%, ... It runs very fast, finishing in only 1 or 2 minutes on my Mac. As the number of steps increases, the training accuracy looks like this:

step 0, training accuracy 0.08
step 100, training accuracy 0.1
step 200, training accuracy 0.16
step 300, training accuracy 0.22
step 400, training accuracy 0.1
step 500, training accuracy 0.18
step 600, training accuracy 0.26
step 700, training accuracy 0.16
step 800, training accuracy 0.24
...
step 19600, training accuracy 0.9
step 19700, training accuracy 0.82
step 19800, training accuracy 0.98
step 19900, training accuracy 0.86
test accuracy 0.9065

But when I run the official code mnist_deep.py, it runs very slowly, and the output is:

step 0, training accuracy 0.1
step 100, training accuracy 0.84
step 200, training accuracy 0.84
step 300, training accuracy 0.9
step 400, training accuracy 0.88
step 500, training accuracy 0.92
step 600, training accuracy 0.98
step 700, training accuracy 0.96
step 800, training accuracy 0.96
step 900, training accuracy 0.96
step 1000, training accuracy 0.96
step 1100, training accuracy 0.94
step 1200, training accuracy 0.96

It works well. I compared my code with mnist_deep.py; the only difference is that they use … together. Why does my code do so badly? And why do they use …? My code is below:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.examples.tutorials.mnist import input_data

import tensorflow as tf

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')

def main(_):
    mnist = input_data.read_data_sets("/MNIST_data/", one_hot=True)

    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])

    x_image = tf.reshape(x, [-1, 28, 28, 1])

    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)

    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for i in range(20000):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
                print('step %d, training accuracy %g' % (i, train_accuracy))
                train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

        print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

if __name__ == '__main__':
    tf.app.run(main=main)

You have put the train_step.run call inside the if i % 100 == 0: block, so a parameter update only happens once every 100 iterations. Over 20,000 iterations that is just 200 real training steps, which is also why your loop is so much faster than mnist_deep.py, and why your accuracy at step 19,900 looks like mnist_deep.py's after only a few hundred steps. Move train_step.run out of the if block so that it runs on every iteration.
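A minimal sketch of the corrected training loop, using the same variables as the code above; the only change is that train_step.run is de-indented out of the if block:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            # Report accuracy on the current batch with dropout disabled.
            train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        # Run a training step on every iteration, not just every 100th.
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

With this change the loop does 100 times more training work per 20,000 iterations, so it will be as slow as mnist_deep.py, but the test accuracy should end up around the ~99.2% the tutorial reports.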
