I'm re-training the ssd_mobilenet_v2_quantized_300x300_coco
object detection model on a custom dataset of roughly 2.6k images and 19 classes. Once training reaches step 10k-12k, the loss graph starts increasing. The same thing happens, in the same step range, if I switch the model to ssd_mobilenet_v2_coco.
I couldn't find anything in the config file related to this behaviour, and the problem disappears when I use faster_rcnn
models. When the issue arises, the mAP becomes almost constant and the accuracy doesn't go beyond 50%. Can anyone explain this behaviour?
Sample Dataset: [image]
Loss Graph:
a) ssd_mobilenet_v2_quantized_300x300_coco [image]
b) ssd_mobilenet_v2_coco [image]
Config File:
a) ssd_mobilenet_v2_quantized_300x300_coco [link]
b) ssd_mobilenet_v2_coco [link]
What about your training loss? Notice that total_loss is the validation loss here.
If your training loss is decreasing while the validation loss is increasing, this is a clear sign of overfitting. You can add a regularization loss during training by putting the following line in the train_config section of the config file:
add_regularization_loss: true
right alongside batch_size: 24.
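For context, here is a minimal sketch of how that line fits into the train_config block of a TF Object Detection API pipeline config (the other fields shown are illustrative placeholders; keep your own existing values):

```
train_config {
  batch_size: 24
  # Adds the model's weight-regularization terms (e.g. L2 from the
  # hyperparams in the model section) to the total training loss,
  # which helps counter overfitting.
  add_regularization_loss: true
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"  # your existing checkpoint path
  num_steps: 50000  # example value; keep your own
}
```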