
Accuracy of LSTM model is very low

I am trying to build a model to predict text.

Shape of x_train: (19992, 40, 1)

array([[[0.00680272],
        [0.01417234],
        [0.        ],
        ...,

        [0.01473923],
        [0.        ],
        [0.0085034 ]]])

Shape of y_train: (19992, 42) (it is one-hot encoded)

array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [1., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)
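A target matrix like this is typically produced from integer class labels, e.g. with Keras's to_categorical; a minimal sketch, assuming y holds hypothetical character indices in the range [0, 42):

import numpy as np
from keras.utils import to_categorical

y = np.array([3, 0, 41])                      # hypothetical integer character indices
y_train = to_categorical(y, num_classes=42)   # each row becomes a length-42 one-hot vector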

My model is:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(256, input_shape=(40, 1), return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(42, activation='softmax'))

model.compile(optimizer='RMSprop', loss='categorical_crossentropy', metrics=['accuracy'])

Even after training the model for 150 epochs, I can only reach an accuracy of 0.512. What improvements should I make to the model to increase its accuracy?

Train on 15993 samples, validate on 3999 samples
Epoch 1/15
15993/15993 [==============================] - 23s 3ms/step - loss: 2.9527 - acc: 0.2013 - val_loss: 2.8762 - val_acc: 0.2061
Epoch 2/15
15993/15993 [==============================] - 23s 3ms/step - loss: 2.8670 - acc: 0.2111 - val_loss: 2.8678 - val_acc: 0.2061
Epoch 3/15
15993/15993 [==============================] - 23s 3ms/step - loss: 2.8548 - acc: 0.2117 - val_loss: 2.8615 - val_acc: 0.2061
Epoch 4/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8516 - acc: 0.2121 - val_loss: 2.8629 - val_acc: 0.2061
Epoch 5/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8447 - acc: 0.2117 - val_loss: 2.8663 - val_acc: 0.2061
Epoch 6/15
15993/15993 [==============================] - 21s 3ms/step - loss: 2.8445 - acc: 0.2133 - val_loss: 2.8657 - val_acc: 0.2061
Epoch 7/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8404 - acc: 0.2134 - val_loss: 2.8657 - val_acc: 0.2061
Epoch 8/15
15993/15993 [==============================] - 21s 3ms/step - loss: 2.8401 - acc: 0.2117 - val_loss: 2.8673 - val_acc: 0.2061
Epoch 9/15
15993/15993 [==============================] - 21s 3ms/step - loss: 2.8391 - acc: 0.2139 - val_loss: 2.8657 - val_acc: 0.2061
Epoch 10/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8412 - acc: 0.2141 - val_loss: 2.8642 - val_acc: 0.2061
Epoch 11/15
15993/15993 [==============================] - 21s 3ms/step - loss: 2.8394 - acc: 0.2149 - val_loss: 2.8680 - val_acc: 0.2061
Epoch 12/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8404 - acc: 0.2154 - val_loss: 2.8658 - val_acc: 0.2061
Epoch 13/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8380 - acc: 0.2161 - val_loss: 2.8672 - val_acc: 0.2061
Epoch 14/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8384 - acc: 0.2169 - val_loss: 2.8674 - val_acc: 0.2061
Epoch 15/15
15993/15993 [==============================] - 22s 3ms/step - loss: 2.8378 - acc: 0.2171 - val_loss: 2.8702 - val_acc: 0.2061

I think you are building an LSTM-based character-level language model. Such models usually take multi-dimensional embeddings as input, not just one-dimensional scalars. So in Keras you can try the following network architecture:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense

model = Sequential()
# map each of the 42 character indices to a 64-dimensional vector;
# the Embedding layer also fixes the input shape, so the LSTM no longer needs input_shape=(40, 1)
model.add(Embedding(42, output_dim=64, input_length=40))
model.add(LSTM(256, return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(42, activation='softmax'))

where output_dim is the number of embedding dimensions. The input to this network is an integer matrix of shape [batch_size x input_length], where each element is a char index. Please see this post for details. Hope this helps!
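For instance, a minimal sketch of building such an integer-index input from a raw corpus could look like the following; here text is a hypothetical placeholder for the training string, and model is the network defined above:

import numpy as np
from keras.utils import to_categorical

# `text` is a hypothetical raw training corpus containing 42 distinct characters
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
x_idx, y_idx = [], []
for i in range(len(text) - seq_len):
    # each sample is a window of 40 character indices; the target is the next character
    x_idx.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y_idx.append(char_to_idx[text[i + seq_len]])

x_train = np.array(x_idx)                        # shape: (num_samples, 40), integer indices
y_train = to_categorical(y_idx, num_classes=42)  # shape: (num_samples, 42), one-hot

model.compile(optimizer='RMSprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, validation_split=0.2, epochs=15, batch_size=128)

Note that the scaled floats in the original x_train (values like 0.00680272) would be replaced by the raw integer indices here, since the Embedding layer does its own lookup.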
