
Unable to extract individual predictions after training a model

I have a model that reports around 0.67 accuracy on a test dataset. However, I would like to extract the individual predictions per sample to see how it performed on each one. I read that I should run the logits layer in the session and would get an array of predictions per sample. This does not seem to work: it just predicts the first category for every sample in the validation set.

    # predictions
    logits = tf.layers.dense(flat_d, NUM_CLASSES, activation=tf.nn.relu)

    # Cost function and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=labels))
    # cost = -tf.reduce_sum(labels*tf.log(logits + 1e-10))
    train_step = tf.train.AdamOptimizer(learning_rate_).minimize(cost)

    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
....

    print("Initializing the model")
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    for e in np.arange(num_epochs):
        bx,by = load_batch(trainX,trainY,batch_size_)
        sess.run(train_step,feed_dict={inputs:bx,labels:by,keep_prob:0.9,learning_rate_:learn_rate})

    acc = sess.run(accuracy,feed_dict={inputs:testX,labels:testY,keep_prob:1.0,learning_rate_:learn_rate})
    print("----------\nAccuracy:%.3f"%(acc))


    # Extract individual predictions per sample
    print(sess.run(logits,feed_dict={inputs:testX,labels:testY,keep_prob:1.0}))

Here is the summary of the output:

Number of validation samples: 203

By category:
 0: 34
 1: 138
 2: 31

    print(sess.run(accuracy,feed_dict={inputs:testX,labels:testY,keep_prob:1.0,learning_rate_:learn_rate}))
    Accuracy:0.680

    print(sess.run(logits,feed_dict={inputs:testX,labels:testY,keep_prob:1.0}))

    [[1.6453042  0.         0.        ]
     [1.454038   0.         0.        ]
     [1.6372575  0.         0.        ]
     [1.5505953  0.         0.        ]
     [1.6624011  0.         0.        ]
     [1.6376897  0.         0.        ]
     [1.5477558  0.         0.        ]
     [1.5303426  0.         0.        ]
     [1.3636262  0.         0.        ]
     [1.5397886  0.         0.        ]
     [1.8849531  0.         0.        ]
    .....

etc.

Any pointers on what I'm doing wrong, or on how I could extract the predictions for each sample instead of just the overall accuracy?

Thanks

The command print(sess.run(logits,feed_dict={inputs:testX,labels:testY,keep_prob:1.0})) prints the raw output of the final dense layer for each sample, not class probabilities.

In your code you are using NUM_CLASSES and softmax_cross_entropy_with_logits, which suggests that your machine learning problem is classification with 3 classes.
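For context, softmax_cross_entropy_with_logits applies softmax to the raw logits internally before taking the cross-entropy with the one-hot labels. A small NumPy sketch of that computation, with toy values rather than anything from your model:

```python
import numpy as np

def softmax(z):
    # Shift by the row max for numerical stability; result rows sum to 1.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])   # raw, unscaled layer output
labels = np.array([[1.0, 0.0, 0.0]])   # one-hot: true class is 0

p = softmax(logits)                    # class probabilities, rows sum to 1
loss = -np.sum(labels * np.log(p), axis=1)   # cross-entropy per sample
print(p)
print(loss)
```

This is why the layer you feed to the loss should produce unscaled values: the softmax is already part of the loss.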

As of now, your predictions all collapse to the first class (

    [[1.6453042  0.         0.        ]
     [1.454038   0.         0.        ]

) because you are using the activation function ReLU on the logits layer. ReLU clips every negative value to zero, so for any sample whose largest pre-activation value is negative, the whole row becomes zeros and argmax falls back to index 0.
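To see the collapse concretely, here is a minimal NumPy sketch with toy numbers (not taken from your model):

```python
import numpy as np

# Toy pre-activation values for 2 samples x 3 classes.
# Sample 0: class 0 genuinely wins.
# Sample 1: class 1 wins, but all three values are negative.
raw = np.array([[ 1.6, -0.8, -2.1],
                [-3.0, -0.2, -1.5]])

relu = np.maximum(raw, 0.0)          # effect of activation=tf.nn.relu

print(np.argmax(raw,  axis=1))       # [0 1]  <- what the network "meant"
print(np.argmax(relu, axis=1))       # [0 0]  <- relu zeroed sample 1's row,
                                     #          so argmax falls back to index 0
```

Any sample whose logits are all negative is silently reported as class 0, which matches the output you are seeing.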

The correction you need to make is to remove the activation from the logits layer entirely, because tf.nn.softmax_cross_entropy_with_logits expects unscaled logits (it applies softmax internally; applying softmax as the layer activation would apply it twice). For inspecting per-sample probabilities, add a separate softmax op. That is, replace

    logits = tf.layers.dense(flat_d, NUM_CLASSES, activation=tf.nn.relu)

with

    logits = tf.layers.dense(flat_d, NUM_CLASSES, activation=None)
    probabilities = tf.nn.softmax(logits)

and keep feeding logits to the loss. Then print(sess.run(probabilities,feed_dict={inputs:testX,keep_prob:1.0})) will output something like

    [[9.87792790e-01, 1.05240829e-02, 4.70216728e-05],
     [8.80016625e-01, 1.13000005e-01, 5.14236337e-04]]

with each row summing to 1 and each value representing the probability of the corresponding class.
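From those per-sample probabilities, the predicted class of each sample is simply the argmax of its row, which you can compare against the labels outside the session. A self-contained NumPy sketch, where the two arrays stand in for the sess.run output and for testY (which I don't have here):

```python
import numpy as np

# Stand-in for sess.run(probabilities, ...): one row per sample.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.20, 0.70, 0.10],
                  [0.30, 0.30, 0.40]])
# Stand-in for testY (one-hot labels).
labels = np.array([[1, 0, 0],
                   [0, 0, 1],
                   [0, 0, 1]])

pred_class = np.argmax(probs,  axis=1)   # [0 1 2] predicted class per sample
true_class = np.argmax(labels, axis=1)   # [0 2 2] true class per sample
correct = pred_class == true_class       # [ True False  True]

for i in range(len(probs)):
    print("sample %d: predicted %d, actual %d, %s"
          % (i, pred_class[i], true_class[i],
             "correct" if correct[i] else "wrong"))
print("accuracy: %.3f" % correct.mean())   # 0.667
```

This also lets you build a per-class breakdown (e.g. a confusion matrix) instead of the single overall accuracy number.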

Please let me know if you face any other errors and I will be happy to help.

Hope this helps. Happy learning!
