
Tensorflow multi class classification loss

I've recently been trying to implement a multi-class classification LSTM architecture, based on this example: biLSTM example

After I changed

self.label = tf.placeholder(tf.int32, [None])

to

self.label = tf.placeholder(tf.int32, [None, self.n_class])

The model seems to train normally, yet I am having trouble with this step:

    self.loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y_hat, labels=self.label))

    # prediction
    self.prediction = tf.argmax(tf.nn.softmax(y_hat), 1)

Even though the model learns normally, the predictions do not seem to work for multiple variables. I was wondering how one should code the self.prediction object so that it emits a vector of predictions for individual instances?

Thank you very much.
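As an aside on the loss in the snippet above: tf.nn.sparse_softmax_cross_entropy_with_logits expects integer class indices of shape [None], while one-hot labels of shape [None, self.n_class] pair with tf.nn.softmax_cross_entropy_with_logits. A minimal standalone sketch of the two pairings, with hypothetical placeholders standing in for the model's logits and labels:

    import tensorflow as tf  # TensorFlow 1.x, as in the linked biLSTM example

    n_class = 5  # hypothetical number of classes
    y_hat = tf.placeholder(tf.float32, [None, n_class])  # stands in for the model's logits

    # One-hot labels of shape [None, n_class] pair with the dense cross entropy
    onehot_label = tf.placeholder(tf.float32, [None, n_class])
    loss_dense = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=onehot_label, logits=y_hat))

    # Integer class indices of shape [None] pair with the sparse variant used above
    int_label = tf.placeholder(tf.int32, [None])
    loss_sparse = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=int_label, logits=y_hat))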

I was wondering how one should code the self.prediction object so that it emits a vector of predictions for individual instances?

In general, tf.nn.softmax returns a vector of probabilities. You just can't see them because you are using tf.argmax, which returns the index of the largest value. Therefore you will just get one number per instance. Just remove tf.argmax and you should be fine.
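A minimal sketch of that suggestion, keeping the names from the question (y_hat is assumed to hold the logits of shape [batch, n_class]; self.predicted_class is a hypothetical extra name, not part of the original code):

    # One probability vector per instance, shape [batch, n_class]
    self.prediction = tf.nn.softmax(y_hat)

    # If a single predicted class index per instance is still wanted,
    # argmax over axis 1 yields a vector of shape [batch]
    self.predicted_class = tf.argmax(tf.nn.softmax(y_hat), 1)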

