I implemented an MLP
and it works perfectly. However, I'm having an issue trying to print the confusion matrix.
My model is defined as...
logits = layers(X, weights, biases)
Where...
def layers(x, weights, biases):
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
I train the model on the MNIST dataset. After training, I am able to print out the model's accuracy successfully...
pred = tf.nn.softmax(logits)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Accuracy: ", accuracy.eval({X:mnist.test.images, y:mnist.test.labels}))
The accuracy gives me 90%. Now I want to print out the confusion matrix of the results. I tried the following...
confusion = tf.confusion_matrix(
    labels=mnist.test.labels, predictions=correct_prediction)
But this gives me errors...
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 10 for 'confusion_matrix/remove_squeezable_dimensions/Squeeze' (op: 'Squeeze') with input shapes: [10000,10].
What is the proper way to print out the confusion matrix? I've been struggling for some time.
Looks like one of the arguments to tf.confusion_matrix
has 10 as its second dimension. The question is whether mnist.test.labels
or correct_prediction
is one-hot encoded; that would explain the error. The labels need to be one-dimensional tensors there. Can you print the shapes of those two tensors?
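To see why the shapes matter, here is a small NumPy illustration (my own addition, not the poster's code): a one-hot label array has a second dimension of 10, which is exactly what the squeeze error complains about, while argmax along axis 1 recovers the 1-D class indices that tf.confusion_matrix expects.

```python
import numpy as np

# Three one-hot encoded labels for classes 2, 0 and 1 -> shape (3, 10)
one_hot = np.zeros((3, 10))
one_hot[0, 2] = one_hot[1, 0] = one_hot[2, 1] = 1.0
print(one_hot.shape)           # (3, 10) -- cf. the [10000, 10] in the error

# argmax along axis 1 decodes the one-hot rows into a 1-D array of class indices
decoded = one_hot.argmax(axis=1)
print(decoded.shape, decoded)  # (3,) [2 0 1]
```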
Also, correct_prediction
is a boolean tensor flagging whether each prediction is accurate. For the confusion matrix you want the predicted label instead, which is tf.argmax(pred, 1).
Similarly, if your labels are one-hot encoded, you need to decode them for the confusion matrix. So try this line for confusion
:
confusion = tf.confusion_matrix(
    labels=tf.argmax(mnist.test.labels, 1),
    predictions=tf.argmax(pred, 1))
To print the confusion matrix itself, eval
it with the test data:
print(confusion.eval({X: mnist.test.images, y: mnist.test.labels}))
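For reference, tf.confusion_matrix with 1-D class-index inputs computes the same tally as this minimal NumPy sketch (my own illustration, useful for checking results outside the graph): rows are the true classes, columns the predicted classes.

```python
import numpy as np

def confusion_matrix_np(labels, preds, num_classes):
    # Tally each (true, predicted) pair: rows = true class, cols = predicted class
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(labels, preds):
        cm[t, p] += 1
    return cm

labels = np.array([0, 1, 2, 2, 1])
preds  = np.array([0, 2, 2, 2, 1])
print(confusion_matrix_np(labels, preds, 3))
# [[1 0 0]
#  [0 1 1]
#  [0 0 2]]
```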
This works for me (here x and y_ are the input and label placeholders and y is the network output, following the TensorFlow tutorial naming):
confusion = tf.confusion_matrix(
    labels=tf.argmax(mnist.test.labels, 1),
    predictions=tf.argmax(y, 1))
print(confusion.eval({x: mnist.test.images, y_: mnist.test.labels}))
[[ 960 0 2 2 1 5 7 2 1 0]
[ 0 1113 3 2 0 1 4 2 10 0]
[ 6 7 941 15 12 2 10 8 27 4]
[ 2 1 27 926 1 12 1 8 24 8]
[ 1 2 6 1 928 0 9 2 9 24]
[ 9 2 8 51 12 729 15 9 50 7]
[ 13 3 10 2 9 9 905 2 5 0]
[ 1 9 28 8 11 1 0 938 3 29]
[ 6 10 7 19 9 13 8 5 891 6]
[ 9 7 2 9 43 5 0 14 12 908]]
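A quick NumPy sanity check on the matrix above (my own addition, not part of the original answer): correct predictions sit on the diagonal, so the trace over the total gives the overall accuracy, and dividing the diagonal by the row sums gives per-digit recall.

```python
import numpy as np

# The confusion matrix printed above: rows = true digit, cols = predicted digit
cm = np.array([
    [960,    0,   2,   2,   1,   5,   7,   2,   1,   0],
    [  0, 1113,   3,   2,   0,   1,   4,   2,  10,   0],
    [  6,    7, 941,  15,  12,   2,  10,   8,  27,   4],
    [  2,    1,  27, 926,   1,  12,   1,   8,  24,   8],
    [  1,    2,   6,   1, 928,   0,   9,   2,   9,  24],
    [  9,    2,   8,  51,  12, 729,  15,   9,  50,   7],
    [ 13,    3,  10,   2,   9,   9, 905,   2,   5,   0],
    [  1,    9,  28,   8,  11,   1,   0, 938,   3,  29],
    [  6,   10,   7,  19,   9,  13,   8,   5, 891,   6],
    [  9,    7,   2,   9,  43,   5,   0,  14,  12, 908],
])

print(cm.sum())                        # 10000 -- one entry per MNIST test image
accuracy = np.trace(cm) / cm.sum()     # correct predictions are on the diagonal
print(accuracy)                        # 0.9239, matching the ~90% reported above

recall = np.diag(cm) / cm.sum(axis=1)  # per-digit recall from the row sums
print(recall.round(3))                 # digit 5 is the weakest class here
```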
For the NLTK ConfusionMatrix you need two parallel lists of labels:
import collections

import nltk
from nltk.classify import NaiveBayesClassifier

# trainfeats and testfeats are (features, label) pairs prepared elsewhere
classifier = NaiveBayesClassifier.train(trainfeats)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)
lsum = []  # gold labels
tsum = []  # predicted labels

for i, (feats, label) in enumerate(testfeats):
    refsets[label].add(i)
    observed = classifier.classify(feats)
    testsets[observed].add(i)
    lsum.append(label)
    tsum.append(observed)

print(nltk.ConfusionMatrix(lsum, tsum))
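Conceptually, nltk.ConfusionMatrix just tallies (gold, predicted) pairs from the two parallel lists. Here is a tiny stand-in using collections.Counter (my own sketch, not NLTK code) that shows what those lists encode:

```python
from collections import Counter

# Two parallel lists, as built in the loop above: gold labels and predictions
gold      = ["pos", "pos", "neg", "neg", "pos"]
predicted = ["pos", "neg", "neg", "neg", "pos"]

# Each (gold, predicted) pair is one cell of the confusion matrix
cells = Counter(zip(gold, predicted))
print(cells[("pos", "pos")])  # 2 -- positives classified correctly
print(cells[("pos", "neg")])  # 1 -- positive misclassified as negative
print(cells[("neg", "neg")])  # 2 -- negatives classified correctly
```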