Decaying the learning rate from the 100th epoch

Given
learning_rate = 0.0004
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=learning_rate, betas=(0.5, 0.999)
)
is there a way to start decaying the learning rate at the 100th epoch?
Is this a good practice:
decayRate = 0.96
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decayRate)
from torch.optim.lr_scheduler import MultiStepLR
# multiply the learning rate by 0.1 from epoch 100 onward
scheduler = MultiStepLR(optimizer, milestones=[100], gamma=0.1)
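A minimal runnable sketch of how this scheduler is driven, stepping once per epoch (the `nn.Linear` model here is only a placeholder for illustration):

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(10, 1)  # placeholder model for illustration
learning_rate = 0.0004
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=learning_rate, betas=(0.5, 0.999)
)

# multiply the learning rate by 0.1 once, after epoch 100
scheduler = MultiStepLR(optimizer, milestones=[100], gamma=0.1)

for epoch in range(200):
    # ... forward pass, loss.backward() would go here ...
    optimizer.step()   # step the optimizer first (required order since PyTorch 1.1)
    scheduler.step()   # then advance the schedule once per epoch

# learning rate is now 0.0004 * 0.1
print(optimizer.param_groups[0]["lr"])
```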
For more information, see the MultiStepLR documentation.
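If the goal is specifically the exponential decay from the question, but beginning only at epoch 100, one option is to chain a no-op `ConstantLR` with the `ExponentialLR`. This is a sketch assuming PyTorch >= 1.10, which introduced `SequentialLR`; the toy model is again only a placeholder:

```python
import torch
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

model = torch.nn.Linear(10, 1)  # placeholder model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=0.0004, betas=(0.5, 0.999))

# Hold the learning rate constant (factor=1.0 is a no-op) for the first
# 100 epochs, then multiply it by 0.96 per epoch from epoch 100 onward.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        ConstantLR(optimizer, factor=1.0, total_iters=100),
        ExponentialLR(optimizer, gamma=0.96),
    ],
    milestones=[100],
)

for epoch in range(150):
    # ... training step would go here ...
    optimizer.step()
    scheduler.step()
```

After 150 epochs the learning rate has been constant for 100 epochs and then decayed exponentially for the remaining 50.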