
Caffe conv layer weights and dimensions

I came across this nice article, which gives an intuitive explanation of how convnets work.

Now I am trying to understand what goes on exactly inside a Caffe conv layer.

With input data of shape 1 x 13 x 19 x 19, and a conv layer with 128 filters:

layers {
  name: "conv1_7x7_128"
  type: CONVOLUTION
  blobs_lr: 1.
  blobs_lr: 2.
  bottom: "data"
  top: "conv2"
  convolution_param {
    num_output: 128
    kernel_size: 7
    pad: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

If I understand correctly, the layer's output shape is 1 x 128 x 19 x 19.
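The 19 x 19 spatial size can be checked against Caffe's output-size formula, (input + 2*pad - kernel) / stride + 1. A quick sketch (the helper name is mine, not part of Caffe):

```python
# Hypothetical helper: conv output spatial size, assuming stride 1 (Caffe's default).
def conv_out_size(in_size, kernel, pad, stride=1):
    return (in_size + 2 * pad - kernel) // stride + 1

h = conv_out_size(19, kernel=7, pad=3)  # (19 + 2*3 - 7) // 1 + 1 = 19
```

With pad: 3 and kernel_size: 7, the spatial dimensions are preserved, which is why the output stays 19 x 19.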

Looking at the layer's weights' shapes in net->layers()[1]->blobs():

layer  1: type Convolution  'conv1_7x7_128'
  blob 0: 128 13 7 7
  blob 1: 128

It looks like blob 0 holds all the weights: one 7x7 matrix per input plane (13) per filter (128).

Doing convolutions with blob 0 on the 1 x 13 x 19 x 19 data, if I understand correctly, we end up with a 128 x 13 x 19 x 19 output (there's padding, so each 7x7 matrix produces one number for each pixel).

  • How does 128 x 13 x 19 x 19 turn into the layer's 1 x 128 x 19 x 19 output?

  • What are the 128 weights in blob 1?

Bonus question: what is blobs_lr?

You are quoting an older version of Caffe's prototxt format. Adjusting to the new format gives you:

layer {  # layer, not layer*s*
  name: "conv1_7x7_128"
  type: "Convolution"  # type is now a string
  param { lr_mult: 1. }  # instead of blobs_lr
  param { lr_mult: 2. }  # instead of blobs_lr
  bottom: "data"
  top: "conv2"
  convolution_param {
    num_output: 128
    kernel_size: 7
    pad: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

If your input data has shape 1 x 13 x 19 x 19, that means your batch_size is 1 and you have 13 channels with spatial dimensions of 19 x 19.
Applying 128 filters of 7 x 7 (each filter is applied to all 13 input channels) means you have 128 filters of shape 13 x 7 x 7 (this is the shape of your first layer parameter, blob 0). Applying each filter yields a single output channel of shape 1 x 1 x 19 x 19: the products over all 13 input channels are summed into one number per pixel, which is why the 13 disappears. Since you have 128 such filters, you end up with a 1 x 128 x 19 x 19 output.
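A minimal NumPy sketch of this forward pass (naive loops, stride 1, random data purely to check the shapes):

```python
import numpy as np

x = np.random.randn(1, 13, 19, 19)   # input: N x C_in x H x W
w = np.random.randn(128, 13, 7, 7)   # blob 0: C_out x C_in x kH x kW
b = np.random.randn(128)             # blob 1: one bias per output channel
pad = 3

# zero-pad the spatial dimensions only
xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
out = np.empty((1, 128, 19, 19))
for f in range(128):                 # one output channel per filter
    for i in range(19):
        for j in range(19):
            patch = xp[0, :, i:i + 7, j:j + 7]   # 13 x 7 x 7 window
            # summing over ALL 13 input channels collapses them into one number
            out[0, f, i, j] = np.sum(patch * w[f]) + b[f]

print(out.shape)  # (1, 128, 19, 19)
```

Each filter sees the full 13-channel stack, so the per-channel results never appear as a separate axis in the output.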

The second layer parameter (blob 1) is the bias term: an additive scalar applied to the result of each filter. You can turn off the bias term by adding

bias_term: false

to the convolution_param of your layer.
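For example, the convolution_param of the layer above with the bias disabled (which removes blob 1 entirely) would read:

```
convolution_param {
  num_output: 128
  kernel_size: 7
  pad: 3
  bias_term: false  # no bias blob is created
  weight_filler { type: "xavier" }
}
```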

You can read more about the convolution layer here.

As for the bonus question, Eliethesaiyan already answered it well in his comment.
