
Getting the opposite outputs from Tensorflow learn with OR gate

Given a DNN (a simple case of a multi-layer perceptron) with 2 hidden layers of 5 and 3 dimensions respectively, I am training a model to recognize the OR gate.

Using tensorflow learn, it seems like it gives me the reversed outputs and I don't know why:

import numpy as np
from tensorflow.contrib import learn

classifier = learn.DNNClassifier(hidden_units=[5, 3], n_classes=2)

or_input = np.array([[0.,0.], [0.,1.], [1.,0.]])
or_output = np.array([[0,1,1]]).T

classifier.fit(or_input, or_output, steps=0.05, batch_size=3)
classifier.predict(np.array([ [1., 1.], [1., 0.] , [0., 0.] , [0., 1.]]))

[OUT]:

array([0, 0, 1, 0])

If I do it the "old school" way, without tensorflow.learn, as follows:

import numpy as np
import tensorflow as tf
# Parameters
learning_rate = 1.0
num_epochs = 1000

# Network Parameters
input_dim = 2 # Input dimensions.
hidden_dim_1 = 5 # 1st layer number of features
hidden_dim_2 = 3 # 2nd layer number of features
output_dim = 1 # Output dimensions.

# tf Graph input
x = tf.placeholder("float", [None, input_dim])
y = tf.placeholder("float", [hidden_dim_2, output_dim])

# With biases.
weights = {
    'syn0': tf.Variable(tf.random_normal([input_dim, hidden_dim_1])),
    'syn1': tf.Variable(tf.random_normal([hidden_dim_1, hidden_dim_2])),
    'syn2': tf.Variable(tf.random_normal([hidden_dim_2, output_dim]))
}


biases = {
    'b0': tf.Variable(tf.random_normal([hidden_dim_1])),
    'b1': tf.Variable(tf.random_normal([hidden_dim_2])),
    'b2': tf.Variable(tf.random_normal([output_dim]))
}


# Create a model
def multilayer_perceptron(X, weights, biases):
    # Hidden layer 1  + sigmoid activation function
    layer_1 = tf.add(tf.matmul(X, weights['syn0']), biases['b0'])
    layer_1 = tf.nn.sigmoid(layer_1)
    # Hidden layer 2 + sigmoid activation function
    layer_2 = tf.add(tf.matmul(layer_1, weights['syn1']), biases['b1'])
    layer_2 = tf.nn.sigmoid(layer_2)
    # Output layer
    out_layer = tf.matmul(layer_2, weights['syn2']) + biases['b2']
    out_layer = tf.nn.sigmoid(out_layer)
    return out_layer

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.sub(y, pred) 
# Or you can use fancy cost like:
##tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
init = tf.initialize_all_variables()

or_input = np.array([[0.,0.], [0.,1.], [1.,0.]])
or_output = np.array([[0.,1.,1.]]).T

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(num_epochs):
        batch_x, batch_y = or_input, or_output # Loop over all data points.
        # Run optimization op (backprop) and cost op (to get loss value)
        _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
        #print (c)

    # Now let's test it on the unknown dataset.
    new_inputs = np.array([[1.,1.], [1.,0.]])
    feed_dict = {x: new_inputs}
    predictions = sess.run(pred, feed_dict)
    print (predictions)

[OUT]:

[[ 0.99998868]
 [ 0.99998868]]

Why am I getting the reversed outputs with tensorflow.learn? Am I using tensorflow.learn wrongly?

How do I get the tensorflow.learn code to produce the same output as the "old school" tensorflow framework?

If you specify the right argument for steps, you get good results:

classifier.fit(or_input, or_output, steps=1000, batch_size=3)

Result:

array([1, 1, 0, 1])
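
Putting the fix together, here is a minimal end-to-end sketch using the same tensorflow.contrib.learn API as in the question (the expected prediction is taken from the result above):

import numpy as np
from tensorflow.contrib import learn

# Same network as in the question: two hidden layers of 5 and 3 units.
classifier = learn.DNNClassifier(hidden_units=[5, 3], n_classes=2)

or_input = np.array([[0., 0.], [0., 1.], [1., 0.]])
or_output = np.array([[0, 1, 1]]).T

# steps=1000 means the 3-example batch is fed to the training op 1000 times.
classifier.fit(or_input, or_output, steps=1000, batch_size=3)
print(classifier.predict(np.array([[1., 1.], [1., 0.], [0., 0.], [0., 1.]])))
# -> array([1, 1, 0, 1])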

How steps works

The steps argument specifies the number of times you run the training operation. Let me give you some examples:

  • with batch_size = 16 and steps = 10, you will see a total of 160 examples
  • in your example, batch_size = 3 and steps = 1000, so the algorithm will see 3000 examples. In fact, it will see the same 3 examples you provided, 1000 times

So steps is not the number of epochs; it is the number of times you run the training operation, i.e. the number of batches you see.
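
For intuition, a tiny plain-Python sketch of the arithmetic behind these examples (the helper names are made up for illustration):

def examples_seen(batch_size, steps):
    # Each training step consumes one batch of batch_size examples.
    return batch_size * steps

def approx_epochs(batch_size, steps, dataset_size):
    # How many full passes over the dataset those steps amount to.
    return batch_size * steps / dataset_size

print(examples_seen(16, 10))      # 160
print(examples_seen(3, 1000))     # 3000
print(approx_epochs(3, 1000, 3))  # 1000.0 passes over the 3-example OR set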


Why steps = 0.05 works

In the tf.learn code, they don't check whether steps is an integer. They just run a while loop checking (at this line) that:

last_step < max_steps

So if max_steps = 0.05, it will behave the same as max_steps = 1 (last_step is incremented inside the loop).
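
A minimal sketch of that loop behavior (hypothetical names, just mirroring the check above):

def count_training_runs(max_steps):
    last_step = 0
    while last_step < max_steps:
        last_step += 1  # one run of the training op
    return last_step

print(count_training_runs(0.05))  # 1 -> same behavior as max_steps = 1
print(count_training_runs(1000))  # 1000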
