
TensorFlow: No improvement in loss while training neural net

I made this neural net, but every time I run it, it starts with a different loss which then remains constant for the entire training loop. I want to predict one value in 'yy' for every 3 values in 'xx' given as input. Also, how can I show my output? For example, I want to show an array of predictions as close as possible to the values in 'yy'.

import tensorflow as tf

xx=(
        [178.72,218.38,171.1],
        [211.57,215.63,173.13],
        [196.25,196.69,116.91],
        [121.88,132.07,85.02],
        [117.04,135.44,112.54],
        [118.13,124.04,97.98],
        [116.73,125.88,99.04],
        [118.75,125.01,110.16],
        [109.69,111.72,69.07],
        [76.57,96.88,67.38],
        [91.69,128.43,87.57],
        [117.57,146.43,117.57]
      )

yy=(
        [212.09],
        [195.58],
        [127.6],
        [116.5],
        [117.95],
        [117.55],
        [117.55],
        [110.39],
        [74.33],
        [91.08],
        [121.75],
        [127.3]
       )


x=tf.placeholder(tf.float32,[None,3])
y=tf.placeholder(tf.float32,[None,1])
n1=5
n2=5
classes=12

def neuralnetwork(data):

    hl1={'weights':tf.Variable(tf.random_normal([3,n1])),'biases':tf.Variable(tf.random_normal([n1]))}   

    hl2={'weights':tf.Variable(tf.random_normal([n1,n2])),'biases':tf.Variable(tf.random_normal([n2]))}

    op={'weights':tf.Variable(tf.random_normal([n2,classes])),'biases':tf.Variable(tf.random_normal([classes]))}

    l1=tf.add(tf.matmul(data,hl1['weights']),hl1['biases'])
    l1=tf.nn.relu(l1)
    l2=tf.add(tf.matmul(l1,hl2['weights']),hl2['biases'])
    l2=tf.nn.relu(l2)
    output=tf.matmul(l2,op['weights'])+op['biases']
    return output

def train(x):
        pred=neuralnetwork(x)
       # cost=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred,labels=y))
        sq = tf.square(pred-y)
        loss=tf.reduce_mean(sq)

        optimizer = tf.train.GradientDescentOptimizer(0.01)
        train = optimizer.minimize(loss)

        #optimizer=tf.train.RMSPropOptimizer(0.01).minimize(cost)
        epochs=100



        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for epoch in range(epochs):
                epoch_loss=0
                for i in range (int(1)):
                    batch_x=xx
                    batch_y=yy
                  # a=tf.shape(xx)
                   #print(sess.run(a))
                    c=sess.run(loss,feed_dict={x:batch_x, y: batch_y})
                    epoch_loss+=c
                    print("Epoch ",epoch," completed out of ",epochs, 'loss:', epoch_loss)


train(x)

I am not sure exactly what you are trying to accomplish, but this looks to me like a regression problem, not a classification problem. I think the following code is what you want. I have cleaned it up a bit while still trying to keep it in a form you will recognize; personally, I would write it differently.

import tensorflow as tf

xx = (
    [178.72, 218.38, 171.1],
    [211.57, 215.63, 173.13],
    [196.25, 196.69, 116.91],
    [121.88, 132.07, 85.02],
    [117.04, 135.44, 112.54],
    [118.13, 124.04, 97.98],
    [116.73, 125.88, 99.04],
    [118.75, 125.01, 110.16],
    [109.69, 111.72, 69.07],
    [76.57, 96.88, 67.38],
    [91.69, 128.43, 87.57],
    [117.57, 146.43, 117.57]
)

yy = (212.09, 195.58, 127.6, 116.5, 117.95, 117.55, 117.55,
      110.39, 74.33, 91.08, 121.75, 127.3)

x = tf.placeholder(tf.float32, [None, 3])
y = tf.placeholder(tf.float32, [None])


def neuralnetwork(data, n1=5, n2=5):
    hl1 = {'weights': tf.Variable(tf.random_normal([3, n1])), 'biases':
           tf.Variable(tf.random_normal([n1]))}

    hl2 = {'weights': tf.Variable(tf.random_normal([n1, n2])),
           'biases': tf.Variable(tf.random_normal([n2]))}

    op = {'weights': tf.Variable(tf.random_normal([n2, 1])), 'biases':
          tf.Variable(tf.random_normal([1]))}

    l1 = tf.add(tf.matmul(data, hl1['weights']), hl1['biases'])
    l1 = tf.nn.relu(l1)
    l2 = tf.add(tf.matmul(l1, hl2['weights']), hl2['biases'])
    l2 = tf.nn.relu(l2)
    output = tf.matmul(l2, op['weights']) + op['biases']
    return output


N_EPOCHS = 100
if __name__ == '__main__':
    pred = neuralnetwork(x)
    # Squeeze pred from [None, 1] to [None] so it lines up with y; otherwise
    # broadcasting [None, 1] against [None] would produce a [None, None]
    # matrix of pairwise differences instead of an elementwise one.
    loss = tf.reduce_mean(tf.squared_difference(tf.squeeze(pred, axis=1), y))

    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(N_EPOCHS):
            # Run the train op and fetch the loss in a single call.
            _, epoch_loss = sess.run([train, loss], feed_dict={x: xx, y: yy})
            print("Epoch", epoch, "completed out of", N_EPOCHS, "loss:",
                  epoch_loss)

You are making two primary mistakes:

  1. You are trying to use 12 output nodes; what you probably want is a single node that predicts the corresponding y value.

  2. You are never running the train operation, so the optimizer is not actually doing anything (see the minimal fix sketched after this list).
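
For the second point, a minimal fix inside your original training loop is to run the train op and the loss in the same sess.run call (using the variable names from your code):

# Running `train` alongside `loss` makes the optimizer actually apply
# a gradient-descent update on every iteration.
_, c = sess.run([train, loss], feed_dict={x: batch_x, y: batch_y})
epoch_loss += c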

"Also, how can I show my output? For example, I want to show an array of predictions as close as possible to the values in 'yy'."

For example, with these lines:

predictions = sess.run(pred, feed_dict={x: xx, y: yy})
print("Predictions:", predictions)

This evaluates only the part of the computational graph needed to compute the pred tensor, feeding the entire dataset into the x placeholder.
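
In fact, since pred does not depend on the y placeholder, you could equivalently fetch it without feeding the labels at all:

# pred only depends on x, so feeding y is unnecessary for inference.
predictions = sess.run(pred, feed_dict={x: xx})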

However, as you can see, the network simply learns to predict the average value of your labels regardless of the input; with raw, unnormalized inputs of this magnitude, that is a common outcome.
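
A quick way to check this (a small sketch, assuming the session and the predictions array from the snippet above are still available) is to compare the predictions against the mean of the labels:

import numpy as np

# If the network has collapsed to a constant output, every prediction
# sits near the label mean, which for this dataset is roughly 127.5.
print("Label mean:", np.mean(yy))
print("Predictions:", predictions.flatten())

One common remedy, not shown here, is to standardize the inputs (and possibly the labels) before training, so the optimizer has a chance to fit more than a constant.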
