Change training parameters in the config file after training for a certain number of steps using the TensorFlow Object Detection API
I have trained an Inception Resnet v2 model on a dataset for 61,000 steps so far, with the following values in the model's configuration file:
adam_optimizer: {
  learning_rate: {
    manual_step_learning_rate {
      initial_learning_rate: 0.0003
      schedule {
        step: 150000
        learning_rate: .0001
      }
    }
  }
}
Now, if I want to reduce the learning rate of my model from this point on, will making the change below:
adam_optimizer: {
  learning_rate: {
    manual_step_learning_rate {
      initial_learning_rate: 0.0003
      schedule {
        step: 60000
        learning_rate: .0001
      }
    }
  }
}
and restarting from the checkpoint actually reduce the learning rate of my model from 0.0003 to 0.0001, since the number of steps it has already trained for is greater than 60,000? If not, is there any other way to achieve this?
One possible way is to use the already-trained 61,000-step model file as the fine-tune checkpoint, and then you can modify the learning rate as you like. In this case, you are essentially training from step 1.
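As a rough sketch, the relevant config changes might look like this; the checkpoint path is illustrative (it depends on your training directory), and the structure assumes the standard Object Detection API pipeline config:

```
train_config: {
  # Illustrative path: point this at your last saved checkpoint
  fine_tune_checkpoint: "training/model.ckpt-61000"
  optimizer {
    adam_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          # Start the new run directly at the lower rate
          initial_learning_rate: 0.0001
        }
      }
    }
  }
}
```

Since the step counter restarts from 1, any `schedule` entries keyed to the old step numbers no longer apply as before, which is why setting the lower rate as the new `initial_learning_rate` is the simpler option.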
Go to your config file, search for the train_config node, and add the num_steps line:

train_config: {
  num_steps: 5000
}