
What is a good learning rate graph to gauge training performance

I am training an SSD-based object detection network from scratch, on 250,000 images. The dataset is skewed towards some classes, but the minority classes are reasonably well represented too (around 2,000 examples each).

I see that the model is not training well: after 150k steps it has only reached 8% precision and 25% recall. The learning rate graph is also not smooth. What kind of learning rate graph should I expect, and what else can I try to improve training?

  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004000000189989805
          decay_steps: 800720
          decay_factor: 0.949999988079071
        }
      }
      momentum_optimizer_value: 0.8999999761581421
      decay: 0.8999999761581421
      epsilon: 1.0
    }
  }
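For reference, this schedule follows the standard exponential-decay formula, so the effective learning rate at any step can be sketched directly (a minimal sketch, assuming TensorFlow's non-staircase exponential decay; with staircase decay the exponent is floored to an integer):

initial_lr = 0.004
decay_steps = 800720
decay_factor = 0.95

def lr_at(step, staircase=False):
    # lr(step) = initial_lr * decay_factor ** (step / decay_steps)
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_lr * decay_factor ** exponent

for step in (0, 50_000, 150_000, 800_720):
    print(step, lr_at(step))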

[training plots omitted: learning rate, precision, and recall curves]

The learning rate of your model is poor, and you are almost certainly over-fitting. That is what you should expect to see.

Some changes to try:
1. do cross-validation first
2. try removing some features
3. make sure your labels are coded correctly (a quick check is sketched below)
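One quick way to check the third point is to count how often each class id actually appears in the ground-truth annotations (a rough sketch; the annotation format and the load_annotations helper are placeholders for your own pipeline):

from collections import Counter

def class_histogram(annotations):
    """annotations: iterable of (image_id, [class_id, ...]) pairs (hypothetical format)."""
    counts = Counter()
    for _, class_ids in annotations:
        counts.update(class_ids)
    return counts

# counts = class_histogram(load_annotations(...))  # load_annotations is your own loader
# Classes with unexpectedly low or zero counts often point to a label-mapping bug.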

There could be multiple things going wrong; it's hard to tell. For reference, I can give you the architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from layers import *
from data import voc, coco
import os

class SSD(nn.Module):
    """Single Shot Multibox Architecture
    The network is composed of a base VGG network followed by the
    added multibox conv layers.  Each multibox layer branches into
        1) conv2d for class conf scores
        2) conv2d for localization predictions
        3) associated priorbox layer to produce default bounding
           boxes specific to the layer's feature map size.
    See: https://arxiv.org/pdf/1512.02325.pdf for more details.
    Args:
        phase: (string) Can be "test" or "train"
        size: input image size
        base: VGG16 layers for input, size of either 300 or 500
        extras: extra layers that feed to multibox loc and conf layers
        head: "multibox head" consists of loc and conf conv layers
    """

    def __init__(self, phase, size, base, extras, head, num_classes):
        super(SSD, self).__init__()
        self.phase = phase
        self.num_classes = num_classes
        self.cfg = (coco, voc)[num_classes == 21]
        self.priorbox = PriorBox(self.cfg)
        self.priors = Variable(self.priorbox.forward(), volatile=True)
        self.size = size

        # SSD network
        self.vgg = nn.ModuleList(base)
        # Layer learns to scale the l2 normalized features from conv4_3
        self.L2Norm = L2Norm(512, 20)
        self.extras = nn.ModuleList(extras)

        self.loc = nn.ModuleList(head[0])
        self.conf = nn.ModuleList(head[1])

        if phase == 'test':
            self.softmax = nn.Softmax(dim=-1)
            self.detect = Detect(num_classes, 0, 200, 0.01, 0.45)
    def forward(self, x):
        """Applies network layers and ops on input image(s) x.
        Args:
            x: input image or batch of images. Shape: [batch,3,300,300].
        Return:
            Depending on phase:
            test:
                Variable(tensor) of output class label predictions,
                confidence score, and corresponding location predictions for
                each object detected. Shape: [batch,topk,7]
            train:
                list of concat outputs from:
                    1: confidence layers, Shape: [batch*num_priors,num_classes]
                    2: localization layers, Shape: [batch,num_priors*4]
                    3: priorbox layers, Shape: [2,num_priors*4]
        """
        sources = list()
        loc = list()
        conf = list()

        # apply vgg up to conv4_3 relu
        for k in range(23):
            x = self.vgg[k](x)

        s = self.L2Norm(x)
        sources.append(s)

        # apply vgg up to fc7
        for k in range(23, len(self.vgg)):
            x = self.vgg[k](x)
        sources.append(x)

        # apply extra layers and cache source layer outputs
        for k, v in enumerate(self.extras):
            x = F.relu(v(x), inplace=True)
            if k % 2 == 1:
                sources.append(x)

        # apply multibox head to source layers
        for (x, l, c) in zip(sources, self.loc, self.conf):
            loc.append(l(x).permute(0, 2, 3, 1).contiguous())
            conf.append(c(x).permute(0, 2, 3, 1).contiguous())

        loc = torch.cat([o.view(o.size(0), -1) for o in loc], 1)
        conf = torch.cat([o.view(o.size(0), -1) for o in conf], 1)
        if self.phase == "test":
            output = self.detect(
                loc.view(loc.size(0), -1, 4),                   # loc preds
                self.softmax(conf.view(conf.size(0), -1,
                             self.num_classes)),                # conf preds
                self.priors.type(type(x.data))                  # default boxes
            )
        else:
            output = (
                loc.view(loc.size(0), -1, 4),
                conf.view(conf.size(0), -1, self.num_classes),
                self.priors
            )
        return output

    def load_weights(self, base_file):
        other, ext = os.path.splitext(base_file)
        if ext == '.pkl' or ext == '.pth':
            print('Loading weights into state dict...')
            self.load_state_dict(torch.load(base_file,
                                 map_location=lambda storage, loc: storage))
            print('Finished!')
        else:
            print('Sorry only .pth and .pkl files supported.')



def vgg(cfg, i, batch_norm=False):
    layers = []
    in_channels = i
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        elif v == 'C':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
    conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
    conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
    layers += [pool5, conv6,
               nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]
    return layers


def add_extras(cfg, i, batch_norm=False):
    # Extra layers added to VGG for feature scaling
    layers = []
    in_channels = i
    flag = False
    for k, v in enumerate(cfg):
        if in_channels != 'S':
            if v == 'S':
                layers += [nn.Conv2d(in_channels, cfg[k + 1],
                           kernel_size=(1, 3)[flag], stride=2, padding=1)]
            else:
                layers += [nn.Conv2d(in_channels, v, kernel_size=(1, 3)[flag])]
            flag = not flag
        in_channels = v
    return layers


def multibox(vgg, extra_layers, cfg, num_classes):
    loc_layers = []
    conf_layers = []
    vgg_source = [21, -2]
    for k, v in enumerate(vgg_source):
        loc_layers += [nn.Conv2d(vgg[v].out_channels,
                                 cfg[k] * 4, kernel_size=3, padding=1)]
        conf_layers += [nn.Conv2d(vgg[v].out_channels,
                        cfg[k] * num_classes, kernel_size=3, padding=1)]
    for k, v in enumerate(extra_layers[1::2], 2):
        loc_layers += [nn.Conv2d(v.out_channels, cfg[k]
                                 * 4, kernel_size=3, padding=1)]
        conf_layers += [nn.Conv2d(v.out_channels, cfg[k]
                                  * num_classes, kernel_size=3, padding=1)]
    return vgg, extra_layers, (loc_layers, conf_layers)


base = {
    '300': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M',
            512, 512, 512],
    '512': [],
}
extras = {
    '300': [256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256],
    '512': [],
}
mbox = {
    '300': [4, 6, 6, 6, 4, 4],  # number of boxes per feature map location
    '512': [],
}


def build_ssd(phase, size=300, num_classes=21):
    if phase != "test" and phase != "train":
        print("ERROR: Phase: " + phase + " not recognized")
        return
    if size != 300:
        print("ERROR: You specified size " + repr(size) + ". However, " +
              "currently only SSD300 (size=300) is supported!")
        return
    base_, extras_, head_ = multibox(vgg(base[str(size)], 3),
                                     add_extras(extras[str(size)], 1024),
                                     mbox[str(size)], num_classes)
    return SSD(phase, size, base_, extras_, head_, num_classes)
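A minimal usage sketch for the code above (assuming the layers and data modules from the original ssd.pytorch repository are importable, and a PyTorch version old enough to accept the legacy Variable/volatile API used in __init__):

net = build_ssd('train', size=300, num_classes=21)  # 20 VOC classes + background
x = torch.randn(1, 3, 300, 300)                     # dummy 300x300 input batch
loc_preds, conf_preds, priors = net(x)              # train phase returns the raw head outputs
print(loc_preds.shape, conf_preds.shape, priors.shape)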

This is a hyperparameter tuning problem.
First off, stick with basic vanilla stochastic gradient descent when you are just getting started. This keeps things simple, lets you get a feel for your dataset and model, and reduces the number of hyperparameters you have to tune.
Then, train for a small number of epochs (depending on training time) and vary the learning rate logarithmically.

Learning rate:
0.0001
0.001
0.01
0.1
1.0
10
100

This way you will have a ballpark idea of where your network trains well. Only after finding a decent ballpark should you start adding exponential decay and momentum.
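A hedged sketch of that sweep with plain SGD (here build_model, build_criterion, and train_loader are placeholders for your own SSD, loss, and data pipeline):

import torch

candidate_lrs = [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]

def short_run(model, criterion, train_loader, lr, epochs=2):
    # vanilla SGD, no momentum or decay, to keep the comparison simple
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    last_loss = float('nan')
    for _ in range(epochs):
        for images, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
            last_loss = loss.item()
    return last_loss

# for lr in candidate_lrs:
#     print(lr, short_run(build_model(), build_criterion(), train_loader, lr))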

Try reading a few resources on hyperparameter tuning. Andrew Ng's courses on Coursera are also great for learning how to build and tune DNNs.


To answer your question: I think your learning rate is way too low. I would expect the shape of the learning rate graph to be a little different, but with the momentum working against the exponential decay it is hard to know without plotting it explicitly. Your precision and recall plots actually show that the network is learning something. However, because your learning rate decays, it looks like it peaks really low and then flattens out. Most likely, your learning rate is effectively zero at this point.
