Keras LSTM model get probabilities of labels

I created a Keras LSTM model to predict the next word given a sentence:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Activation

# Use the pretrained word2vec vectors as the embedding weights
pretrained_weights = w2v_model.wv.syn0
vocab_size, embedding_size = pretrained_weights.shape

lstm_model = Sequential()
lstm_model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size, weights=[pretrained_weights]))
lstm_model.add(LSTM(units=embedding_size))
# One output unit per vocabulary word; softmax turns them into a probability distribution
lstm_model.add(Dense(units=vocab_size))
lstm_model.add(Activation('softmax'))
lstm_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

lstm_model.fit(X, y, batch_size=128, epochs=3)

Here X contains the sentences and y is the next word for each sentence. Now I have a sentence and 5 candidate words, and I want to rank the words by their probability given the sentence. What is the best way to do this?
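One way to do this (a minimal sketch, not from the original post; it assumes the gensim < 4.0 API so that w2v_model.wv.vocab[word].index gives the same row used by the Embedding layer, and that the sentence is encoded the same way as during training; word_index and rank_candidates are hypothetical helpers) is to call lstm_model.predict on the encoded sentence and read off the softmax probability of each candidate word:

import numpy as np

def word_index(word):
    # Assumption: the Embedding rows follow the word2vec vocabulary order (gensim < 4.0 API)
    return w2v_model.wv.vocab[word].index

def rank_candidates(sentence_words, candidate_words):
    # Encode the sentence as a batch of one index sequence: shape (1, sequence_length)
    x = np.array([[word_index(w) for w in sentence_words]])
    # The softmax output is one probability per vocabulary word: shape (vocab_size,)
    probs = lstm_model.predict(x)[0]
    # Score each candidate by its probability and sort from most to least likely
    scored = [(w, float(probs[word_index(w)])) for w in candidate_words]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example usage with made-up words:
# rank_candidates(['the', 'cat', 'sat', 'on', 'the'], ['mat', 'dog', 'sofa', 'moon', 'table'])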

Change the activation function of your LSTM model's output layer to 'sigmoid' and it will work:

pretrained_weights = w2v_model.wv.syn0
vocab_size, embedding_size = pretrained_weights.shape

lstm_model = Sequential()
lstm_model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size, weights=[pretrained_weights]))
lstm_model.add(LSTM(units=embedding_size))
lstm_model.add(Dense(units=vocab_size))
# Changed: sigmoid activation and mean squared error loss instead of softmax / sparse_categorical_crossentropy
lstm_model.add(Activation('sigmoid'))
lstm_model.compile(optimizer='adam', loss='mean_squared_error')

lstm_model.fit(X, y, batch_size=128, epochs=3)
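As a usage note (not from the original answer): with this sigmoid variant, lstm_model.predict still returns one score per vocabulary word, so the five candidate words can be ranked with the same rank_candidates sketch shown above; the difference is that sigmoid scores are computed independently per word and do not sum to 1 like the softmax probabilities.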
