Caffe model fails to learn

I have the following convolutional model implemented in Keras; after training for 100,000 epochs, it shows excellent performance with great accuracy.

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D

img_rows, img_cols = 24, 15
input_shape = (img_rows, img_cols, 1)
nb_classes = 11  # assumed from num_output: 11 in the Caffe fc2 layer below
nb_filters = 32
pool_size = (2, 2)
kernel_size = (3, 3)

model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

However, after implementing the same model in Caffe, it fails to train: the loss stays almost fixed between 2.1 and 2.6. Here is my Caffe prototxt implementation:

name: "FneishNet"
layer {
  name: "inlayer1"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  data_param {
    source: "examples/fneishnet_numbers/fneishnet_numbers_train_lmdb"
    batch_size: 128
    backend: LMDB
  }
}
layer {
  name: "inlayer1"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  data_param {
    source: "examples/fneishnet_numbers/fneishnet_numbers_val_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv2"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 1
  }
}
layer {
  name: "drop1"
  type: "Dropout"
  bottom: "pool1"
  top: "pool1"
  dropout_param {
    dropout_ratio: 0.25
  }
}
layer {
  name: "flatten1"
  type: "Flatten"
  bottom: "pool1"
  top: "flatten1"
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "flatten1"
  top: "fc1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 128
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "fc1"
  top: "fc1"
}
layer {
  name: "drop2"
  type: "Dropout"
  bottom: "fc1"
  top: "fc1"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc2"
  type: "InnerProduct"
  bottom: "fc1"
  top: "fc2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 11
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc2"
  bottom: "label"
  top: "loss"
}

And here is my model solver (hyper-parameters):

net: "models/fneishnet_numbers/train_val.prototxt"
test_iter: 1000
test_interval: 4000
test_initialization: false
display: 40
average_loss: 40
base_lr: 0.01
gamma: 0.1
lr_policy: "poly"
power: 0.5
max_iter: 3000000
momentum: 0.9
weight_decay: 0.0005
snapshot: 100000
snapshot_prefix: "models/fneishnet_numbers/fneishnet_numbers_quick"
solver_mode: CPU

I believe that if I have translated the model into Caffe correctly, it should perform the same way it does in Keras, so I think I have missed something. Any help would be appreciated, thanks.

From the Caffe documentation on lr_policy: poly: the effective learning rate follows a polynomial decay, to be zero by the max_iter. return base_lr * (1 - iter/max_iter) ^ (power)
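
For reference, here is that schedule as a minimal Python sketch (assuming the formula quoted above; base_lr and max_iter are taken from the solver in the question):

def poly_lr(iteration, base_lr=0.01, max_iter=3000000, power=0.5):
    # Caffe "poly" policy: base_lr * (1 - iter/max_iter) ^ power
    return base_lr * (1.0 - iteration / float(max_iter)) ** power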

So basically, are you sure you want to keep power set to 0.5 in base_lr * (1 - iter/max_iter) ^ (power)? I think that might be the problem, as you are decaying to minus something; try 2?
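
To see what that suggestion changes, poly_lr from the sketch above can be used to compare the two powers (the iteration values below are arbitrary sample points, not anything from the question):

# Reusing poly_lr from the sketch above.
for it in (0, 750000, 1500000, 2250000, 2999999):
    print(it, poly_lr(it, power=0.5), poly_lr(it, power=2))
# power=0.5 keeps the rate high for most of training and only collapses
# near max_iter, while power=2 pushes it toward zero much earlier.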
