
Reason for validation loss going high?

I'm very new to deep learning models and am trying to train a multi-label text classification model using LSTM. I have around 2600 records spanning 4 categories, and I use 80% for training and the rest for validation.

There is nothing complex in the code: I read the CSV, tokenize the data, and feed it to the model. But after 3-4 epochs the validation loss becomes greater than 1 while the training loss tends to zero. As far as I have searched, this is a case of overfitting. To overcome it, I tried different layers and changed the number of units, but the problem remains. If I stop at 1-2 epochs, the predictions come out wrong.
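For reference, the preprocessing is roughly the following sketch (the file name and the column names 'text' and 'label' are placeholders; the actual code is in the Colab linked below):

import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

# Read the CSV (file and column names are placeholders)
df = pd.read_csv('data.csv')
texts, labels = df['text'].values, df['label'].values

# Fit a tokenizer on the whole corpus
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
vocab_len = len(tokenizer.word_index)

# Convert texts to padded integer sequences of length 50
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=50)
# One-hot encode the 4 classes (assumes integer labels 0-3)
y = to_categorical(labels, num_classes=4)

# 80/20 train/validation split as described above
sequences, sequences_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)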

Below is my model creation code:

import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, GRU, Dropout, LSTM, Dense

ACCURACY_THRESHOLD = 0.75

class myCallback(tf.keras.callbacks.Callback):
    # Save a checkpoint whenever validation accuracy crosses the threshold
    def on_epoch_end(self, epoch, logs={}):
        print(logs.get('val_accuracy'))
        fname = 'Arabic_Model_' + str(logs.get('val_accuracy')) + '.h5'
        if logs.get('val_accuracy') > ACCURACY_THRESHOLD:
            #print("\nWe have reached %2.2f%% accuracy, so we will stop training." %(acc_thresh*100))
            #self.model.stop_training = True
            self.model.save(fname)
            #from google.colab import files
            #files.download(fname)


# The maximum number of words to be used (most frequent)
MAX_NB_WORDS = vocab_len   # vocabulary size from the tokenizer
# Max number of words in each complaint
MAX_SEQUENCE_LENGTH = 50
# This is fixed
EMBEDDING_DIM = 100

callbacks = myCallback()

def create_model(vocabulary_size, seq_len):
    model = models.Sequential()

    # Trainable word embeddings; mask_zero lets the recurrent layers ignore padding
    model.add(Embedding(input_dim=MAX_NB_WORDS + 1, output_dim=EMBEDDING_DIM,
                        input_length=seq_len, mask_zero=True))

    model.add(GRU(units=64, return_sequences=True))
    model.add(Dropout(0.4))
    model.add(LSTM(units=50))

    #model.add(LSTM(100))
    #model.add(Dropout(0.4))
    #Bidirectional(tf.keras.layers.LSTM(embedding_dim))
    #model.add(Bidirectional(LSTM(128)))

    model.add(Dense(50, activation='relu'))
    #model.add(Dense(200, activation='relu'))
    model.add(Dense(4, activation='softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])

    model.summary()
    return model

model = create_model(MAX_NB_WORDS, MAX_SEQUENCE_LENGTH)

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_4 (Embedding)      (None, 50, 100)           2018600   
_________________________________________________________________
gru_2 (GRU)                  (None, 50, 64)            31680     
_________________________________________________________________
dropout_10 (Dropout)         (None, 50, 64)            0         
_________________________________________________________________
lstm_6 (LSTM)                (None, 14)                4424      
_________________________________________________________________
dense_7 (Dense)              (None, 50)                750       
_________________________________________________________________
dropout_11 (Dropout)         (None, 50)                0         
_________________________________________________________________
dense_8 (Dense)              (None, 4)                 204       
=================================================================
Total params: 2,055,658
Trainable params: 2,055,658
Non-trainable params: 0
_________________________________________________________________


model.fit(sequences, y_train, validation_data=(sequences_test, y_test), 
              epochs=25, batch_size=5, verbose=1,
              callbacks=[callbacks]
             )

It would be very helpful if I could get a sure-shot way to overcome the overfitting. You can refer to the Colab below to see the complete code:

https://colab.research.google.com/drive/13N94kBKkHIX2TR5B_lETyuH1QTC5VuRf?usp=sharing

Attaching an image of the training epochs.

Edit: I am now using a pre-trained embedding layer which I created with gensim, but now the accuracy has decreased. Also, my record count is 4643.

Attaching the code below; 'English_dict.p' is the dictionary which I created using gensim.

from pickle import load
from numpy import zeros

# word -> 100-dimensional vector dictionary created with gensim
embeddings_index = load(open('English_dict.p', 'rb'))
vocab_size = len(embeddings_index) + 1

# Build the weight matrix for the frozen Embedding layer
embedding_model = zeros((vocab_size, 100))
for word, i in embedding_matrix.word_index.items():  # iterate the tokenizer's word index
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_model[i] = embedding_vector

model.add(Embedding(input_dim=MAX_NB_WORDS, output_dim=EMBEDDING_DIM,
                    weights=[embedding_model], trainable=False,
                    input_length=seq_len, mask_zero=True))
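
For context, a pickled word-to-vector dictionary like 'English_dict.p' can be built with gensim roughly as follows (a sketch assuming gensim 4.x and an existing list of tokenized sentences; not the exact code from the Colab):

from pickle import dump
from gensim.models import Word2Vec

# tokenized_texts: list of token lists, e.g. [['first', 'document'], ...] (assumed to exist)
w2v = Word2Vec(sentences=tokenized_texts, vector_size=100, window=5, min_count=1, workers=4)

# Map every word in the trained vocabulary to its 100-dim vector and pickle the dict
embeddings_index = {word: w2v.wv[word] for word in w2v.wv.key_to_index}
with open('English_dict.p', 'wb') as f:
    dump(embeddings_index, f)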


Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, 50, 100)           2746300   
_________________________________________________________________
gru_2 (GRU)                  (None, 50, 64)            31680     
_________________________________________________________________
dropout_2 (Dropout)          (None, 50, 64)            0         
_________________________________________________________________
lstm_2 (LSTM)                (None, 128)               98816     
_________________________________________________________________
dense_3 (Dense)              (None, 50)                6450      
_________________________________________________________________
dense_4 (Dense)              (None, 4)                 204       
=================================================================
Total params: 2,883,450
Trainable params: 137,150
Non-trainable params: 2,746,300
_________________________________________________________________

Let me know if I am doing anything wrong. You can refer to the Colab above for reference.


Yes, it is classical overfitting. Why it is happening: the neural network has more than 2 million trainable parameters (2,055,658, of which 2,018,600 sit in the Embedding layer alone) while you have only 2600 records, of which you use 80% for training. The NN is too big and, instead of generalizing, it memorizes.

How to solve:
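For example, a common starting point is a much smaller network combined with dropout and early stopping. The sketch below is only illustrative (layer sizes, dropout rate and patience are assumptions, not tuned values) and reuses the variables from the question:

from tensorflow.keras import models, layers
from tensorflow.keras.callbacks import EarlyStopping

def create_small_model(vocabulary_size, seq_len):
    # Far fewer trainable parameters: small embedding + one small recurrent layer
    model = models.Sequential([
        layers.Embedding(input_dim=vocabulary_size + 1, output_dim=32,
                         input_length=seq_len, mask_zero=True),
        layers.LSTM(32),
        layers.Dropout(0.5),
        layers.Dense(4, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

# Stop as soon as validation loss stops improving and keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)

small_model = create_small_model(MAX_NB_WORDS, MAX_SEQUENCE_LENGTH)
small_model.fit(sequences, y_train, validation_data=(sequences_test, y_test),
                epochs=25, batch_size=32, callbacks=[early_stop])

Other common options are L2 regularization on the Dense layers, keeping the frozen pre-trained embeddings (as in the edit above), and collecting more training data.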
