
Implementing sentiment analysis with a TensorFlow seq2seq model

I'm currently playing with the TensorFlow seq2seq model, trying to implement sentiment analysis. My idea is to feed the encoder an IMDB comment, the decoder a [pad] or [go] token, and the target a [neg]/[pos] label. Most of my code is quite similar to the seq2seq translation example. But the results I get are strange: within each batch, the predictions are either all [neg] or all [pos].

"encoder input : I was hooked almost immediately.[pad][pad][pad]"

"decoder input : [pad]"

"target : [pos]"
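The batch layout described above can be sketched as follows. This is only an illustration of the data shape, not the original code; the vocabulary and all token ids are hypothetical:

```python
# Hypothetical token ids -- an illustration of the described batch layout,
# not the asker's actual preprocessing.
PAD, GO, NEG, POS = 0, 1, 2, 3
word_ids = {"i": 4, "was": 5, "hooked": 6, "almost": 7, "immediately": 8}

# Encoder sees the (padded) review tokens.
encoder_input = [word_ids[w] for w in
                 "i was hooked almost immediately".split()] + [PAD, PAD, PAD]

# Decoder is fed a single [go] (or [pad]) step, and the target for that
# step is the sentiment label token.
decoder_input = [GO]
target        = [POS]
```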

Since this result is so peculiar, I was wondering if anyone knows what could cause this kind of behavior?

I would recommend trying a simpler architecture: an RNN or CNN encoder that feeds into a logistic classifier. These architectures have shown very good results on sentiment analysis (Amazon reviews, Yelp reviews, etc.).
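A framework-agnostic sketch of that recommendation, with untrained random weights and assumed sizes, just to show the data flow (RNN encoder reads the review, the final hidden state feeds a logistic unit that outputs P(positive)):

```python
import numpy as np

# All sizes and weights here are illustrative assumptions, not a trained model.
rng = np.random.default_rng(0)
VOCAB, EMB, HID = 1000, 16, 32          # assumed vocab/embedding/hidden sizes

E  = rng.normal(0, 0.1, (VOCAB, EMB))   # embedding table
Wx = rng.normal(0, 0.1, (HID, EMB))     # input-to-hidden weights
Wh = rng.normal(0, 0.1, (HID, HID))     # hidden-to-hidden weights
w  = rng.normal(0, 0.1, HID)            # logistic-classifier weights

def sentiment_prob(token_ids):
    """Run a plain RNN encoder over the review, then a sigmoid classifier
    on the final hidden state; returns P(positive)."""
    h = np.zeros(HID)
    for t in token_ids:                 # encoder: one step per token
        h = np.tanh(Wx @ E[t] + Wh @ h)
    return 1.0 / (1.0 + np.exp(-w @ h)) # logistic output

p = sentiment_prob([12, 7, 512, 3])     # toy untrained example
```

In TensorFlow this corresponds to an `Embedding` layer, an LSTM (or convolutional) encoder, and a single sigmoid `Dense` unit trained with binary cross-entropy, rather than a full encoder-decoder.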

For examples of such models, see here: various encoders (LSTM or convolutional) over words and characters.
