
Tweet emotion prediction using CNN

I have built a CNN model for tweet emotion detection, and the final step is as follows:

tweets_emotion = model.predict(val_tweets, verbose= 0)

which gave me a predicted output like this:

array([[3.1052819e-01, 2.7634043e-01, 1.6270137e-03, 7.7674150e-01],
       [5.0230421e-02, 7.7430069e-01, 7.7313791e-09, 2.0278792e-01],
       [9.9952579e-01, 1.3450404e-03, 5.8804121e-20, 3.2991991e-07],
       ...,
       [3.9727339e-01, 2.8888196e-01, 1.9649005e-02, 2.1239746e-01],
       [1.2528910e-01, 3.2127723e-01, 3.2503495e-03, 5.5401272e-01],
       [5.8543805e-02, 4.5720499e-05, 2.9060062e-12, 9.3766922e-01]],
      dtype=float32)

My actual output should look like this:

array([[1., 0., 0., 0.],
       [1., 0., 0., 0.],
       [1., 0., 0., 0.],
       ...,
       [0., 0., 0., 1.],
       [0., 0., 0., 1.],
       [0., 0., 0., 1.]], dtype=float32)

Is there a way to convert my predicted output (tweets_emotion) to look like the output that I expected?

Using the 6 predictions you show here as an example:

import numpy as np

tweets_emotion = np.array([[3.1052819e-01, 2.7634043e-01, 1.6270137e-03, 7.7674150e-01],
                           [5.0230421e-02, 7.7430069e-01, 7.7313791e-09, 2.0278792e-01],
                           [9.9952579e-01, 1.3450404e-03, 5.8804121e-20, 3.2991991e-07],
                           [3.9727339e-01, 2.8888196e-01, 1.9649005e-02, 2.1239746e-01],
                           [1.2528910e-01, 3.2127723e-01, 3.2503495e-03, 5.5401272e-01],
                           [5.8543805e-02, 4.5720499e-05, 2.9060062e-12, 9.3766922e-01]])

tweets_emotion_class = np.argmax(tweets_emotion, axis=1)
tweets_emotion_class
# array([3, 1, 0, 0, 3, 3])

You should be able to verify by simple visual inspection that the maximum value of each row indeed occurs at the index shown in tweets_emotion_class.
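If you specifically need the one-hot matrix format shown in the question rather than class indices, one option is to index an identity matrix with the argmax result; a minimal sketch using the same 6 example predictions:

```python
import numpy as np

# the 6 example predictions from above
tweets_emotion = np.array([[3.1052819e-01, 2.7634043e-01, 1.6270137e-03, 7.7674150e-01],
                           [5.0230421e-02, 7.7430069e-01, 7.7313791e-09, 2.0278792e-01],
                           [9.9952579e-01, 1.3450404e-03, 5.8804121e-20, 3.2991991e-07],
                           [3.9727339e-01, 2.8888196e-01, 1.9649005e-02, 2.1239746e-01],
                           [1.2528910e-01, 3.2127723e-01, 3.2503495e-03, 5.5401272e-01],
                           [5.8543805e-02, 4.5720499e-05, 2.9060062e-12, 9.3766922e-01]])

# index of the largest probability in each row
classes = np.argmax(tweets_emotion, axis=1)

# pick the matching row of a 4x4 identity matrix -> one-hot vectors
one_hot = np.eye(tweets_emotion.shape[1], dtype=np.float32)[classes]
print(one_hot)
# [[0. 0. 0. 1.]
#  [0. 1. 0. 0.]
#  [1. 0. 0. 0.]
#  [1. 0. 0. 0.]
#  [0. 0. 0. 1.]
#  [0. 0. 0. 1.]]
```

Each row of the result has a single 1 at the predicted class, matching the format of the expected output in the question.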

Irrelevant to your issue, but, as mentioned in the comments, a sigmoid activation for the last network layer does not make sense in a single-label multi-class setting with one-hot encoded labels, which seems to be your case; you should change it to softmax.
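To illustrate the difference, here is a small framework-independent NumPy sketch (the logits are hypothetical): sigmoid squashes each output independently, so the four "probabilities" need not sum to 1, which is exactly what you see in your predictions above, while softmax produces a proper distribution over the classes.

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1, -1.0])  # hypothetical raw outputs of the last layer

sigmoid = 1.0 / (1.0 + np.exp(-logits))          # each value squashed independently
softmax = np.exp(logits) / np.exp(logits).sum()  # normalised over all four classes

print(sigmoid.sum())  # greater than 1 -- not a probability distribution
print(softmax.sum())  # 1.0 -- a proper distribution over the classes
```

In Keras, assuming your model ends in a Dense layer, that would mean using Dense(4, activation='softmax') for the final layer instead of activation='sigmoid'.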
