I am training a convolutional neural network for segmentation, and I have a problem with the validation results (val_loss and val_dice_coef — in a segmentation network the dice coefficient plays the role of val_acc): they change only minimally and do not improve, unlike the training metrics dice_coef and loss.
I am using this code example with my own optimizer and these parameters:
total_number_of_data = 3547 #val + training data
epochs = 400
image_size = 128
batch_size = 2
val_data_size = 400
opt = optimizers.RMSprop(learning_rate=0.0000001, decay=1e-6)
Results after 350 epochs:
epoch | dice_coef | loss | val_loss | val_dice_coef
------------------------------------------------------------------------------------------
1 | 0.5633156299591064 | 0.43668392300605774 | 0.4752978980541229 | 0.5247021317481995
350 | 0.9698152542114258 | 0.03018493764102459 | 0.3346560299396515 | 0.6653439402580261
What should I do?
There is no single solution — you have to experiment with the possible options. But I can describe a general process that most grandmasters (top competitors) follow: warm the learning rate up, optionally hold it at a peak, then decay it.
import tensorflow as tf

def build_lrfn(lr_start=0.00001, lr_max=0.0008,
               lr_min=0.00001, lr_rampup_epochs=20,
               lr_sustain_epochs=0, lr_exp_decay=.8):
    # Scale the peak LR by the replica count (requires a tf.distribute
    # strategy, e.g. for TPU training; on a single device this is 1).
    lr_max = lr_max * strategy.num_replicas_in_sync

    def lrfn(epoch):
        if epoch < lr_rampup_epochs:
            # Linear warm-up from lr_start to lr_max.
            lr = (lr_max - lr_start) / lr_rampup_epochs * epoch + lr_start
        elif epoch < lr_rampup_epochs + lr_sustain_epochs:
            # Hold at the peak.
            lr = lr_max
        else:
            # Exponential decay toward lr_min.
            lr = (lr_max - lr_min) * lr_exp_decay ** (epoch - lr_rampup_epochs - lr_sustain_epochs) + lr_min
        return lr
    return lrfn

lrfn = build_lrfn()
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=1)

history = model.fit(
    train_dataset,
    epochs=EPOCHS,
    callbacks=[lr_schedule],
    steps_per_epoch=STEPS_PER_EPOCH,
    validation_data=valid_dataset,
)
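To sanity-check the values this schedule emits, here is a standalone sketch of the same warm-up/hold/decay logic, with the replica count assumed to be 1 (so no distribution strategy or TensorFlow import is needed):

```python
def build_lrfn(lr_start=0.00001, lr_max=0.0008,
               lr_min=0.00001, lr_rampup_epochs=20,
               lr_sustain_epochs=0, lr_exp_decay=0.8):
    # Replica count assumed to be 1 here (single-device training).
    def lrfn(epoch):
        if epoch < lr_rampup_epochs:
            # Linear warm-up from lr_start to lr_max over the ramp-up epochs.
            lr = (lr_max - lr_start) / lr_rampup_epochs * epoch + lr_start
        elif epoch < lr_rampup_epochs + lr_sustain_epochs:
            lr = lr_max
        else:
            # Exponential decay toward lr_min after the ramp-up/sustain phases.
            lr = (lr_max - lr_min) * lr_exp_decay ** (epoch - lr_rampup_epochs - lr_sustain_epochs) + lr_min
        return lr
    return lrfn

lrfn = build_lrfn()
for epoch in (0, 10, 20, 50, 100):
    print(epoch, lrfn(epoch))
```

With the defaults, the rate climbs linearly from 1e-5 to 8e-4 over the first 20 epochs, then decays back toward 1e-5 — far larger than the 1e-7 used in the question, which is likely too small for the network to learn at all.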
For more on optimizers I always follow this link. In my opinion, Adam currently works best for a model like yours.
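As a minimal sketch of that suggestion (the 1e-4 learning rate here is my assumption, not a tuned value — but it should in any case be orders of magnitude larger than the 1e-7 in the question):

```python
import tensorflow as tf

# Adam with a conventional starting learning rate (assumed; tune for your data).
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

# Then compile as usual, e.g.:
# model.compile(optimizer=opt, loss=dice_loss, metrics=[dice_coef])
```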