How to overcome overfitting in convolutional neural network when nothing helps?

I'm training a convolutional neural network with a siamese architecture and a contrastive loss function for a face verification task. I'm facing a huge gap between training and validation accuracy, starting from literally the first three to five epochs. When training accuracy reaches 95%, validation accuracy is around 65%. It fluctuates somewhere near 70% but never reaches that number. [Figure: training and validation accuracy plotted on one chart]

To avoid this I tried a range of standard techniques against overfitting, but before listing them I should say that none of them really changes the picture: the gap between training and validation accuracy stays the same. So I used:

  • L1 regularization with lambda varying from 0.0001 to 10000.0
  • L2 regularization with lambda varying from 0.0001 to 10000.0
  • Dropout with rate from 0.2 to 0.8
  • Data augmentation techniques (rotation, shifting, zooming)
  • Removing fully connected layers except the last layer

None of these really helps, so I'd appreciate any advice from you guys. Some information about the network itself: I'm using TensorFlow. This is what the model looks like:

net = tf.layers.conv2d(
    inputs,
    kernel_size=(7, 7),
    filters=15,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 15 x 58 x 58
net = tf.layers.max_pooling2d(net, pool_size=(2, 2), strides=2)
# 15 x 29 x 29
net = tf.layers.conv2d(
    net,
    kernel_size=(6, 6),
    filters=45,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 45 x 24 x 24
net = tf.layers.max_pooling2d(net, pool_size=(4, 4), strides=4)
# 45 x 6 x 6
net = tf.layers.conv2d(
    net,
    kernel_size=(6, 6),
    filters=256,
    strides=1,
    activation=tf.nn.relu,
    kernel_initializer=w_init,
    kernel_regularizer=reg)
# 256 x 1 x 1
net = tf.reshape(net, [-1, 256])
net = tf.layers.dense(net, units=512, activation=tf.nn.relu, kernel_regularizer=reg, kernel_initializer=w_init)
net = tf.layers.dropout(net, rate=0.2)
# net = tf.layers.dense(net, units=256, activation=tf.nn.relu, kernel_regularizer=reg, kernel_initializer=w_init)
# net = tf.layers.dropout(net, rate=0.75)
return tf.layers.dense(net, units=embedding_size, activation=tf.nn.relu, kernel_initializer=w_init)

This is how the loss function is implemented:

def contrastive_loss(out1, out2, labels, margin):
    distance = compute_euclidian_distance_square(out1, out2)
    positive_part = labels * distance
    negative_part = (1 - labels) * tf.maximum(tf.square(margin) - distance, 0.0)
    return tf.reduce_mean(positive_part + negative_part) / 2
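
The helper compute_euclidian_distance_square is not shown in the post; a minimal sketch consistent with how it is used (the squared Euclidean distance between the two embeddings of a pair) would be:

def compute_euclidian_distance_square(out1, out2):
    # squared Euclidean distance per pair, summed over the embedding axis
    return tf.reduce_sum(tf.square(out1 - out2), axis=1)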

This is how I get and augment the data (I'm using the LFW dataset):

# Assumed imports, not shown in the original post: the snippet uses NumPy,
# SciPy's ndimage helpers and scikit-learn's LFW pairs loader.
import numpy as np
from scipy.ndimage import rotate, shift, zoom
from sklearn.datasets import fetch_lfw_pairs

ROTATIONS_RANGE = range(1, 25)
SHIFTS_RANGE = range(1, 18)
ZOOM_RANGE = (1.05, 1.075, 1.1, 1.125, 1.15, 1.175, 1.2, 1.225, 1.25, 1.275, 1.3, 1.325, 1.35, 1.375, 1.4)
IMG_SLICE = (slice(0, 64), slice(0, 64))


def pad_img(img):
    return np.pad(img, ((0, 2), (0, 17)), mode='constant')


def get_data(rotation=False, shifting=False, zooming=False):
    train_data = fetch_lfw_pairs(subset='train')
    test_data = fetch_lfw_pairs(subset='test')

    x1s_trn, x2s_trn, ys_trn, x1s_vld, x2s_vld = [], [], [], [], []

    for (pair, y) in zip(train_data.pairs, train_data.target):
        img1, img2 = pad_img(pair[0]), pad_img(pair[1])
        x1s_trn.append(img1)
        x2s_trn.append(img2)
        ys_trn.append(y)

        if rotation:
            for angle in ROTATIONS_RANGE:
                x1s_trn.append(np.asarray(rotate(img1, angle))[IMG_SLICE])
                x2s_trn.append(np.asarray(rotate(img2, angle))[IMG_SLICE])
                ys_trn.append(y)
                x1s_trn.append(np.asarray(rotate(img1, -angle))[IMG_SLICE])
                x2s_trn.append(np.asarray(rotate(img2, -angle))[IMG_SLICE])
                ys_trn.append(y)

        if shifting:
            for pixels_to_shift in SHIFTS_RANGE:
                x1s_trn.append(shift(img1, pixels_to_shift))
                x2s_trn.append(shift(img2, pixels_to_shift))
                ys_trn.append(y)
                x1s_trn.append(shift(img1, -pixels_to_shift))
                x2s_trn.append(shift(img2, -pixels_to_shift))
                ys_trn.append(y)

        if zooming:
            for zm in ZOOM_RANGE:
                x1s_trn.append(np.asarray(zoom(img1, zm))[IMG_SLICE])
                x2s_trn.append(np.asarray(zoom(img2, zm))[IMG_SLICE])
                ys_trn.append(y)

    for (img1, img2) in test_data.pairs:
        x1s_vld.append(pad_img(img1))
        x2s_vld.append(pad_img(img2))

    return (
        np.array(x1s_trn),
        np.array(x2s_trn),
        np.array(ys_trn),
        np.array(x1s_vld),
        np.array(x2s_vld),
        np.array(test_data.target)
    )
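
For reference, a call with all augmentations enabled would look like this (the variable names on the left are illustrative):

x1_trn, x2_trn, y_trn, x1_vld, x2_vld, y_vld = get_data(
    rotation=True, shifting=True, zooming=True)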

Thanks all!

This is a common problem with small datasets (the LFW dataset is about 13,000 images).

You can try the following:

Use batch normalization instead of dropout, or even both (though some odd interactions can occur when using both).
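
A minimal sketch of what that could look like in the same tf.layers style as the model above (is_training, optimizer and loss are assumed placeholders from the training loop, which is not shown):

net = tf.layers.batch_normalization(net, training=is_training)
# tf.layers.batch_normalization registers its moving-average updates in
# the UPDATE_OPS collection, so they must run together with the train op:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss)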

Or, as @Abdu307 proposes, use pretrained layers: train the model on a huge general dataset and later do some fine-tuning on your face dataset.
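
As a rough illustration (not from the original answer), a Keras-style way to reuse an ImageNet-pretrained backbone as the embedding network might look like the sketch below; ResNet50 and the head sizes are assumptions, and the LFW crops would need to be RGB and resized to a size the backbone accepts:

import tensorflow as tf

def embedding_net(embedding_size=128):
    # ImageNet-pretrained backbone with global average pooling on top
    base = tf.keras.applications.ResNet50(
        include_top=False, weights='imagenet', pooling='avg')
    base.trainable = False  # freeze pretrained layers; fine-tune only the head
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(embedding_size),
    ])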
