
Reason for validation loss going high?

I am very new to deep learning models and am trying to train a multi-label text classification model using LSTM. I have about 2600 records with 4 categories, using 80% for training and the rest for validation.

There is nothing complex in the code: it reads a csv, tokenizes the data, and feeds it to the model. But after 3-4 epochs the validation loss climbs above 1 while the training loss tends to zero. From what I have searched, this is a case of overfitting. To overcome it I tried different layers and changed the number of units, but the problem persists. If I stop at 1-2 epochs, the predictions come out wrong.
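For context, the tokenization step lives in the colab rather than in this post; below is a minimal sketch of the usual Keras pattern it follows, assuming texts_train and texts_test are placeholder names for the raw strings:

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Fit the tokenizer on the training texts only.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts_train)
vocab_len = len(tokenizer.word_index)  # becomes MAX_NB_WORDS below

# Turn texts into integer sequences, padded/truncated to a fixed length.
sequences = pad_sequences(tokenizer.texts_to_sequences(texts_train),
                          maxlen=50, padding='post')
sequences_test = pad_sequences(tokenizer.texts_to_sequences(texts_test),
                               maxlen=50, padding='post')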

Below is my model creation code:

import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, GRU, LSTM, Dropout, Dense

ACCURACY_THRESHOLD = 0.75

class myCallback(tf.keras.callbacks.Callback):
    # Save a checkpoint whenever validation accuracy clears the threshold.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        val_acc = logs.get('val_accuracy')
        print(val_acc)
        if val_acc is not None and val_acc > ACCURACY_THRESHOLD:
            #self.model.stop_training = True
            self.model.save('Arabic_Model_' + str(val_acc) + '.h5')


# The maximum number of words to be used. (most frequent)
MAX_NB_WORDS = vocab_len
# Max number of words in each complaint.
MAX_SEQUENCE_LENGTH = 50
# This is fixed.
EMBEDDING_DIM = 100

callbacks = myCallback()
def create_model(vocabulary_size, seq_len):
    model = models.Sequential()

    # +1 because index 0 is reserved for padding/masking.
    model.add(Embedding(input_dim=vocabulary_size + 1, output_dim=EMBEDDING_DIM,
                        input_length=seq_len, mask_zero=True))
    model.add(GRU(units=64, return_sequences=True))
    model.add(Dropout(0.4))
    model.add(LSTM(units=50))

    # Variants also tried:
    #model.add(Bidirectional(LSTM(128)))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(4, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.summary()

    return model

model = create_model(MAX_NB_WORDS, MAX_SEQUENCE_LENGTH)

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 50, 100)           2018600   
_________________________________________________________________
gru_2 (GRU)                  (None, 50, 64)            31680     
_________________________________________________________________
dropout_10 (Dropout)         (None, 50, 64)            0         
_________________________________________________________________
lstm_6 (LSTM)                (None, 14)                4424      
_________________________________________________________________
dense_7 (Dense)              (None, 50)                750       
_________________________________________________________________
dropout_11 (Dropout)         (None, 50)                0         
_________________________________________________________________
dense_8 (Dense)              (None, 4)                 204       
=================================================================
Total params: 2,055,658
Trainable params: 2,055,658
Non-trainable params: 0
_________________________________________________________________


model.fit(sequences, y_train, validation_data=(sequences_test, y_test),
          epochs=25, batch_size=5, verbose=1,
          callbacks=[callbacks])
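As an aside, the same checkpointing (plus early stopping, which directly targets the overfitting described above) is available through Keras's built-in callbacks; a minimal sketch:

import tensorflow as tf

# Keep only the best model seen so far, judged by validation accuracy.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'Arabic_Model_best.h5', monitor='val_accuracy', save_best_only=True)

# Stop once validation loss has not improved for 3 epochs,
# rolling back to the best weights seen.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3, restore_best_weights=True)

model.fit(sequences, y_train, validation_data=(sequences_test, y_test),
          epochs=25, batch_size=5, verbose=1,
          callbacks=[checkpoint, early_stop])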

It would be very helpful if I could figure out how to overcome this overfitting. You can refer to the colab below for the full code:

https://colab.research.google.com/drive/13N94kBKkHIX2TR5B_lETyuH1QTC5VuRf?usp=sharing

Attaching an image of the epochs.

EDIT: I am now using a pre-trained embedding layer that I created with gensim, but the accuracy has now gone down. Also, my record count is now 4643.

Attaching the code below; here 'English_dict.p' is the dictionary I created using gensim.
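For reference, a dictionary like this can be produced with gensim roughly as follows (a sketch, assuming gensim 4.x Word2Vec; tokenized_texts stands in for the tokenized corpus):

from pickle import dump
from gensim.models import Word2Vec

# Train 100-dimensional word vectors on the tokenized corpus
# (gensim 3.x uses size= instead of vector_size=).
w2v = Word2Vec(sentences=tokenized_texts, vector_size=100, window=5, min_count=1)

# Flatten into a plain {word: vector} dict and pickle it.
embeddings_index = {word: w2v.wv[word] for word in w2v.wv.index_to_key}
dump(embeddings_index, open('English_dict.p', 'wb'))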

from pickle import load
from numpy import zeros

embeddings_index = load(open('English_dict.p', 'rb'))

vocab_size = len(embeddings_index) + 1

# Weight matrix: row i holds the pre-trained vector for word index i.
embedding_model = zeros((vocab_size, 100))

for word, i in tokenizer.word_index.items():  # word_index lives on the Tokenizer
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_model[i] = embedding_vector

# input_dim must match the first dimension of the supplied weight matrix.
model.add(Embedding(input_dim=vocab_size, output_dim=EMBEDDING_DIM,
                    weights=[embedding_model], trainable=False,
                    input_length=seq_len, mask_zero=True))


Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, 50, 100)           2746300   
_________________________________________________________________
gru_2 (GRU)                  (None, 50, 64)            31680     
_________________________________________________________________
dropout_2 (Dropout)          (None, 50, 64)            0         
_________________________________________________________________
lstm_2 (LSTM)                (None, 128)               98816     
_________________________________________________________________
dense_3 (Dense)              (None, 50)                6450      
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 204       
=================================================================
Total params: 2,883,450
Trainable params: 137,150
Non-trainable params: 2,746,300
_________________________________________________________________

Let me know if I am doing something wrong. You can refer to the colab above for the full code.


Yes, this is classic overfitting. Why it happens: the neural network has over 2 million trainable parameters (2,055,658), while you have only 2600 records (80% of which go to training). The NN is too large; instead of generalizing, it memorizes.

How to fix it:

- Shrink the number of trainable parameters so the network cannot simply memorize ~2000 training examples: freeze a pre-trained embedding layer (as done in the edit above) and use smaller recurrent and dense layers; a sketch follows below.
- Regularize: keep the Dropout layers and stop training as soon as the validation loss starts rising (early stopping).
- Get more training data if at all possible.
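For illustration, a sketch of a downsized model along these lines (frozen pre-trained embedding, a single small recurrent layer); the layer sizes are suggestions, not tuned values:

from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

small = models.Sequential([
    # The ~2.7M embedding weights are frozen, so they no longer count
    # towards what the network can memorize.
    Embedding(input_dim=vocab_size, output_dim=100,
              weights=[embedding_model], trainable=False,
              input_length=50, mask_zero=True),
    LSTM(32),
    Dropout(0.5),
    Dense(4, activation='softmax'),
])
small.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])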


 