
Configure Fast-Rcnn.config to use Adam optimizer and other parameters

I have the following fast_rcnn_resnet101_coco.config (here). In this config file I have replaced momentum_optimizer with adam_optimizer as follows:

train_config: {
  batch_size: 1
  optimizer {
    #momentum_optimizer: {
    adam_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.00001
          schedule {
            step: 4500
            learning_rate: .00001
          }
          schedule {
            step: 10000
            learning_rate: .000001
          }
        }
      }
      #momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "faster_rcnn_resnet101_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}

I referred to Tensorflow Object Detection: use Adam instead of RMSProp to make this change. My aim is to configure my faster_rcnn_resnet101.config file (attached here) to match this file:

(image: the target .yaml configuration file)

My aim is for my .config file to include all the parameters mentioned in the .yaml file. So far I have succeeded in doing this for only one parameter (the learning rate). How can I integrate rpn_batch_size, step size, and the other parameters into my config file?

The basic fact you need to understand is the following:

The config file must match the message TrainEvalPipelineConfig. That message consists of multiple components. So, if you want to modify something in a component, you should go to the proto file in which that component's message is defined, see the possible parameters therein, and then modify the configuration file accordingly. That is exactly what you did in order to change the optimizer.
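For orientation, the top-level message is defined in `object_detection/protos/pipeline.proto` and looks roughly like this (check the proto files in your own checkout of the API, as field numbers and names can differ between versions):

```proto
// Sketch of object_detection/protos/pipeline.proto
message TrainEvalPipelineConfig {
  optional DetectionModel model = 1;
  optional TrainConfig train_config = 2;
  optional InputReader train_input_reader = 3;
  optional EvalConfig eval_config = 4;
  optional InputReader eval_input_reader = 5;
}
```

Each of those component messages lives in its own proto file (e.g. `faster_rcnn.proto`, `optimizer.proto`), and those files are where you find the parameter names your config file may use.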

To give you a hint: if you want to change the RPN batch size, you have to modify this parameter. So look it up in the proto file and simply add it to your final configuration file.
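Since the config is plain protobuf text, once you know the parameter name from the proto file you can add the line by hand, or script the edit. A minimal sketch (the `add_param` helper below is hypothetical, not part of the Object Detection API):

```python
import re

def add_param(config_text, anchor, new_line):
    """Insert a pbtxt parameter line right after an existing anchor line,
    reusing the anchor line's indentation."""
    pattern = re.compile(r"^(\s*)" + re.escape(anchor) + r".*$", re.M)
    m = pattern.search(config_text)
    if m is None:
        raise ValueError(f"anchor {anchor!r} not found")
    indent = m.group(1)
    return config_text[:m.end()] + "\n" + indent + new_line + config_text[m.end():]

config = """model {
  faster_rcnn {
    first_stage_objectness_loss_weight: 1.0
  }
}"""

# Add the RPN batch size parameter inside the faster_rcnn block.
patched = add_param(config,
                    "first_stage_objectness_loss_weight",
                    "first_stage_minibatch_size: 128")
print(patched)
```

This is only a convenience for repeated edits; for a one-off change, pasting the line into the right block of the config file is just as good.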

To illustrate: if I were to use the original configuration file with one minor change, an RPN batch size of 128, my configuration file would look as follows:

# Faster R-CNN with Resnet-101 (v1), configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  faster_rcnn {
    num_classes: 90
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    # Below I modify the RPN batch size to 128
    first_stage_minibatch_size: 128
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0003
          schedule {
            step: 900000
            learning_rate: .00003
          }
          schedule {
            step: 1200000
            learning_rate: .000003
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record-?????-of-00100"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}

eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record-?????-of-00010"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
