
How to improve CNN model reaching loss plateau?

I am using TensorFlow in Python 3 to create a CNN that performs multi-class classification (i.e. the expected output is 3 probabilities out of 92) based on a vector of photon energies of shape (20, 1). My model below is the result of multiple iterations of gradually increasing complexity.

However, the model seems to consistently plateau at the same loss value no matter what I add (or remove).

The code below shows the model along with some hyperparameters that I am optimising using Keras-Tuner.

# Imports needed for the snippets below
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, Dense, Dropout, Flatten,
                                     BatchNormalization, AveragePooling1D)
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# `hp` is the keras_tuner.HyperParameters object passed into the
# tuner's model-building function.
hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-5, 3e-4, 5e-5, 5e-6])

hp_activation_C_1 = hp.Choice('activation_c1', values=["relu", "swish"])
hp_activation_C_2 = hp.Choice('activation_c2', values=["relu", "swish"])
hp_activation_D_1 = hp.Choice('activation_d1', values=["softsign", "relu", "swish"])
hp_activation_D_2 = hp.Choice('activation_d2', values=["softsign", "relu", "swish"])

hp_drop = hp.Choice('dropout_%', values=[0.05, 0.04, 0.03, 0.02])

hp_filters_1 = hp.Choice('num_filters_1', values=[32, 64, 96])
hp_filters_2 = hp.Choice('num_filters_2', values=[64, 96, 128, 256])
hp_filters_3 = hp.Choice('num_filters_3', values=[96, 128, 256, 384])

hp_kernel_size_1 = hp.Choice('kernel_size_1', values=[3, 5])

hp_units_1 = hp.Int('units_1', min_value=64, max_value=2624, step=128)
hp_units_2 = hp.Int('units_2', min_value=64, max_value=2624, step=128)
# hp_pool_size_1 = hp.Choice('pool_size_1', values=[2, 3, 4])



# Convolutional feature extractor
model = Sequential()

model.add(Conv1D(filters=hp_filters_1, kernel_size=hp_kernel_size_1,
                 activation=hp_activation_C_1, input_shape=(20, 1)))
model.add(BatchNormalization())
model.add(Dropout(hp_drop))
model.add(Conv1D(filters=hp_filters_2, kernel_size=3, activation=hp_activation_C_2))
model.add(BatchNormalization())
model.add(Dropout(hp_drop))
model.add(AveragePooling1D(pool_size=3, strides=2))
# model.add(MaxPooling1D(pool_size=hp_pool_size_1,strides=3))


model.add(Conv1D(filters=hp_filters_3, kernel_size=3, activation=hp_activation_C_2))
model.add(BatchNormalization())
model.add(Dropout(hp_drop))
model.add(AveragePooling1D(pool_size=3, strides=2))
# model.add(MaxPooling1D(pool_size=2,strides=2))


model.add(BatchNormalization())
model.add(Dropout(hp_drop))


# Dense classification head ending in a 92-way softmax
model.add(Flatten())
model.add(Dense(hp_units_1, activation=hp_activation_D_1))
model.add(BatchNormalization())
model.add(Dropout(hp_drop))
model.add(Dense(hp_units_2, activation=hp_activation_D_2))
model.add(Dense(92, activation='softmax'))

    
# Note: monitoring 'val_mse' assumes 'mse' is included in the model's
# compile metrics; otherwise these callbacks have nothing to monitor.
early_stop = EarlyStopping(monitor='val_mse',
                           patience=5,
                           restore_best_weights=True,
                           min_delta=0.00005)


reduce_lr = ReduceLROnPlateau(monitor="val_mse",
                              factor=0.5,
                              patience=3,
                              min_lr=1e-6,
                              min_delta=0.00008)
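
For context, here is a minimal sketch of how these pieces could fit together with Keras-Tuner. The compile settings, the Hyperband tuner, and the names x_train/y_train/x_val/y_val are assumptions for illustration, not taken from my actual code:

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # The hyperparameter choices and the full Sequential model above would
    # live here; a trimmed stand-in is used to keep the sketch short.
    hp_learning_rate = hp.Choice('learning_rate', values=[1e-2, 1e-5, 3e-4, 5e-5, 5e-6])
    model = Sequential([
        Conv1D(32, 3, activation='relu', input_shape=(20, 1)),
        Flatten(),
        Dense(92, activation='softmax'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=hp_learning_rate),
                  loss='categorical_crossentropy',  # assumed; compile() is not shown above
                  metrics=['mse'])                  # needed for the 'val_mse' monitors above
    return model

tuner = kt.Hyperband(build_model, objective='val_loss', max_epochs=30)
# x_train, y_train, x_val, y_val are placeholders for the photon-energy data.
tuner.search(x_train, y_train, validation_data=(x_val, y_val),
             callbacks=[early_stop, reduce_lr])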

So my question is: am I overcomplicating the model for the required objective? And how can I improve performance to further reduce the loss?

You might try using an adjustable learning rate. The Keras callback ReduceLROnPlateau makes this easy to do; the documentation is here. Set the callback to monitor validation loss. My recommended code is shown below:

red_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=2, verbose=1, mode="auto",
                                              min_delta=0.0001, cooldown=0, min_lr=0)

Then in model.fit, add callbacks=[red_lr]. For example:
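
A minimal usage sketch follows; model, x_train, y_train, x_val and y_val are placeholders for your compiled model and data:

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=50,
                    callbacks=[red_lr])

With verbose=1, the callback prints a message each time it reduces the learning rate, so you can see whether it fires around the plateau.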
