
Validation Accuracy is fluctuating

The data consists of time-series sensor readings, and the dataset is imbalanced. It contains 12 classes, and the task is to predict human physical activities.

Architecture:
Note: the LSTM output feeds directly into the output layer.

con_l1 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(input_layer)
con_l2 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(con_l1)
con_l3 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(con_l2)
con_l4 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(con_l3)
rl = tf.keras.layers.Reshape(
    (int(con_l4.shape[1]), int(con_l4.shape[2]) * int(con_l4.shape[3])))(con_l4)
lstm_l5 = tf.keras.layers.LSTM(128, activation='tanh',
                               recurrent_initializer=tf.keras.initializers.Orthogonal(seed=0),
                               dropout=0.5, recurrent_dropout=0.25,
                               return_sequences=True)(rl)  # each cell's output feeds the second LSTM layer, hence return_sequences=True
lstm_l6 = tf.keras.layers.LSTM(128, activation='tanh',
                               recurrent_initializer=tf.keras.initializers.Orthogonal(seed=1),
                               dropout=0.5, recurrent_dropout=0.25,
                               return_sequences=True)(lstm_l5)
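
For completeness, a minimal sketch of how the full model might be wired up, since the output head is not shown above; the input shape, the Flatten layer, and the 12-way softmax Dense layer are assumptions based only on the 12 classes and the categorical cross-entropy loss mentioned in this question:

import tensorflow as tf

# Sketch of the full model; the input shape below is an assumption.
input_layer = tf.keras.layers.Input(shape=(128, 9, 1))  # (timesteps, channels, 1), adjust to your data
x = input_layer
for _ in range(4):  # the four Conv2D layers shown above
    x = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(x)
x = tf.keras.layers.Reshape((int(x.shape[1]), int(x.shape[2]) * int(x.shape[3])))(x)
x = tf.keras.layers.LSTM(128, dropout=0.5, recurrent_dropout=0.25, return_sequences=True)(x)
x = tf.keras.layers.LSTM(128, dropout=0.5, recurrent_dropout=0.25, return_sequences=True)(x)
x = tf.keras.layers.Flatten()(x)  # assumed: collapse the sequence output into one vector
output_layer = tf.keras.layers.Dense(12, activation="softmax")(x)  # 12 activity classes
model = tf.keras.Model(inputs=input_layer, outputs=output_layer)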

Learning rate, with a decay of 0.9 after every 10 epochs:

opt = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["acc"])
early_Stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min',
                                                  patience=10, restore_best_weights=True)
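
The step decay itself is not shown above; a minimal sketch of one way to implement a decay of 0.9 after every 10 epochs with tf.keras.callbacks.LearningRateScheduler (the step_decay function below is an illustration, not necessarily the exact code used):

import tensorflow as tf

# Multiply the learning rate by 0.9 at the start of every 10th epoch.
def step_decay(epoch, lr):
    if epoch > 0 and epoch % 10 == 0:
        return lr * 0.9
    return lr

lr_schedule = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)

# model.fit(x_train, y_train, epochs=300, validation_data=(x_val, y_val),
#           callbacks=[lr_schedule, early_Stopping])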

Training accuracy increases and training loss decreases monotonically, but my validation accuracy and loss fluctuate wildly, as the attached screenshot shows.

Here is a screenshot of my training: [screenshot: training and validation loss/accuracy curves over epochs]

I set 300 epochs, but training stopped early, at epoch 21 here. (With patience=10, EarlyStopping halts training 10 epochs after the best val_loss, so the best epoch was around 11.) I have read the post "Why is the validation accuracy fluctuating?" and took from it that this is an overfitting issue that can be mitigated with dropout, so I changed the dropout values (a bit up and down), but that doesn't stop the fluctuations. Could anyone help me figure out where I am going wrong?

This looks like overfitting to me as well.

The following is a summary of what can be found here: https://www.tensorflow.org/tutorials/keras/overfit_and_underfit#strategies_to_prevent_overfitting

Reduce the size of your network

simpler models are less likely to overfit than complex ones.

This one is pretty simple: smaller networks do not have as much room for the brittle kind of learning that amounts to memorising the training set.
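
For example, a slimmer variant of the architecture in the question might halve the conv stack and the LSTM (the sizes and input shape below are just starting points to experiment with, not tuned values):

import tensorflow as tf

# Smaller variant: 2 conv layers instead of 4, 32 filters instead of 64,
# and a single 64-unit LSTM instead of two 128-unit ones.
input_layer = tf.keras.layers.Input(shape=(128, 9, 1))  # assumed input shape
x = tf.keras.layers.Conv2D(32, (5, 1), activation="relu")(input_layer)
x = tf.keras.layers.Conv2D(32, (5, 1), activation="relu")(x)
x = tf.keras.layers.Reshape((int(x.shape[1]), int(x.shape[2]) * int(x.shape[3])))(x)
x = tf.keras.layers.LSTM(64, dropout=0.5, recurrent_dropout=0.25)(x)  # last timestep only
output_layer = tf.keras.layers.Dense(12, activation="softmax")(x)
model = tf.keras.Model(inputs=input_layer, outputs=output_layer)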

Weight Regularisation

a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights only to take small values

L2 weight regularisation is more common and is also known as weight decay; you can add it to your layers using the kernel_regularizer parameter, e.g.:

tf.keras.layers.Conv2D(64, (5, 1), activation="relu", kernel_regularizer=tf.keras.regularizers.l2(0.001))
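
With l2(0.001), every weight w in that layer adds 0.001 * w**2 to the total loss, so large weights are penalised and pulled toward zero. The same penalty can also be applied to the LSTM layers via their kernel_regularizer and recurrent_regularizer parameters.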

Dropout

The intuitive explanation for dropout is that because individual nodes in the network cannot rely on the output of the others, each node must output features that are useful on their own

You are already using some dropout, but also try placing a Dropout layer between every Conv2D layer, and experiment to find which value between 0.2 and 0.5 works best.
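
A minimal sketch of that change (the 0.3 rate and the input shape are placeholders):

import tensorflow as tf

# Dropout inserted after each Conv2D layer, as suggested above.
input_layer = tf.keras.layers.Input(shape=(128, 9, 1))  # assumed input shape
x = input_layer
for _ in range(4):
    x = tf.keras.layers.Conv2D(64, (5, 1), activation="relu")(x)
    x = tf.keras.layers.Dropout(0.3)(x)  # try values between 0.2 and 0.5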
