
A number of naive questions about caffe

I am trying to understand the basics of caffe, in particular how to use it with Python.

My understanding is that the model definition (say, a given neural net architecture) must be included in the '.prototxt' file.

And that when you train the model on data using the '.prototxt', you save the weights/model parameters to a '.caffemodel' file.

Also, there is a difference between the '.prototxt' file used for training (which includes learning rate and regularization parameters) and the one used for testing/deployment, which does not include them.

Questions:

  1. Is it correct that the '.prototxt' is the basis for training, and that the '.caffemodel' is the result of training (the weights), obtained by using the '.prototxt' on the training data?
  2. Is it correct that there is a '.prototxt' for training and one for testing, and that they differ only slightly (learning rate and regularization factors in the training one), but that the neural net architecture (assuming you use neural nets) is the same?

Apologies for such basic questions and possibly some very incorrect assumptions; I am doing some online research, and the lines above summarize my understanding to date.

Let's take a look at one of the examples provided with BVLC/caffe: bvlc_reference_caffenet.
You'll notice that in fact there are 3 '.prototxt' files:

  • train_val.prototxt: describes the net architecture used for the training/validation phase.
  • deploy.prototxt: describes the net architecture used at test time ("deployment").
  • solver.prototxt: a small file with the meta parameters for training, e.g., the learning-rate policy and regularization.

The net architecture represented by train_val.prototxt and deploy.prototxt should be mostly similar. There are a few main differences between the two:

  • Input data: during training one usually uses a predefined set of inputs for training/validation. Therefore, train_val usually contains an explicit input layer, e.g., an "HDF5Data" layer or a "Data" layer. On the other hand, deploy usually does not know in advance what inputs it will get; it only contains a statement:

     input: "data" input_shape { dim: 10 dim: 3 dim: 227 dim: 227 } 

    that declares what input the net expects and what should be its dimensions.
    Alternatively, one can put an "Input" layer:

      layer {
        name: "input"
        type: "Input"
        top: "data"
        input_param { shape { dim: 10 dim: 3 dim: 227 dim: 227 } }
      }
  • Input labels: during training we supply the net with the "ground truth" expected outputs; this information is obviously not available during deploy.
  • Loss layers: during training one must define a loss layer. This layer tells the solver in which direction it should tune the parameters at each iteration. The loss compares the net's current prediction to the expected "ground truth", and its gradient is back-propagated through the rest of the net; this is what drives the learning process. During deploy there is no loss and no back-propagation (a Python sketch follows this list).
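Not part of the original answer, but a minimal pycaffe sketch that makes this difference visible, assuming the two '.prototxt' files from bvlc_reference_caffenet sit in the working directory (and, for the train net, that the data sources its "Data" layer points to actually exist):

    import caffe

    caffe.set_mode_cpu()

    # Load the same architecture in its two phases. The file names are
    # assumptions; substitute the paths from your own model directory.
    # Note: instantiating the train net opens its data source, so the
    # LMDB/HDF5 files referenced there must be present.
    train_net = caffe.Net('train_val.prototxt', caffe.TRAIN)
    deploy_net = caffe.Net('deploy.prototxt', caffe.TEST)

    # The train net lists an input "Data" layer and a loss layer;
    # the deploy net shares the backbone but has neither.
    print(list(train_net._layer_names))
    print(list(deploy_net._layer_names))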

In caffe, you supply a train_val.prototxt describing the net, the train/val datasets and the loss. In addition, you supply a solver.prototxt describing the meta parameters for training. The output of the training process is a .caffemodel binary file containing the trained parameters of the net.
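As a hedged illustration of that workflow in Python (the file names below are placeholders for your own solver and net definitions):

    import caffe

    caffe.set_mode_cpu()

    # solver.prototxt points at train_val.prototxt and holds the meta
    # parameters (learning rate, momentum, snapshot policy, ...).
    solver = caffe.get_solver('solver.prototxt')

    # Run the optimization; snapshots are written as .caffemodel files
    # according to the snapshot settings in solver.prototxt.
    solver.solve()

    # The current weights can also be saved explicitly:
    solver.net.save('weights.caffemodel')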
Once the net is trained, you can use the deploy.prototxt together with the .caffemodel parameters to predict outputs for new, unseen inputs.
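The deployment side might then look like the following sketch; the blob names and the 10 x 3 x 227 x 227 shape follow the bvlc_reference_caffenet example above, so adjust them to your own net:

    import numpy as np
    import caffe

    caffe.set_mode_cpu()

    # Pair the deploy architecture with the trained weights.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Feed a batch that matches the declared input shape and run a
    # forward pass: no labels, no loss, no back-propagation.
    net.blobs['data'].data[...] = np.random.rand(10, 3, 227, 227)
    out = net.forward()
    print(out['prob'].shape)  # assuming the output blob is named "prob"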

Yes, but there are different types of .prototxt files. For example:

https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt

This one defines the network for both the training and the testing phase.

For command-line training you can use a solver file, which is also a .prototxt file (it is passed to the caffe binary as "caffe train --solver=..."). For example:

https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt
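If you prefer to drive that same solver from Python instead of the command line, here is a sketch (run from the caffe source root, after preparing the MNIST LMDBs with the example scripts):

    import caffe

    caffe.set_mode_cpu()

    # get_solver reads the solver .prototxt, which in turn points at
    # lenet_train_test.prototxt through its "net" field.
    solver = caffe.get_solver('examples/mnist/lenet_solver.prototxt')

    # step(k) runs k training iterations, handy for inspecting the loss
    # between chunks of training instead of calling solve() once.
    solver.step(100)
    print(solver.net.blobs['loss'].data)  # assuming the loss blob is named "loss"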
