How to pick the best learning rate and optimizer using LearningRateScheduler
I know the LearningRateScheduler from a Coursera course, but copying it the same way results in poor model performance, perhaps because of the range I set. The instructions on the Keras website are limited.
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dense

def duo_LSTM_model(X_train, y_train, X_test, y_test, num_classes,
                   batch_size=68, units=128, learning_rate=0.005,
                   epochs=20, dropout=0.2, recurrent_dropout=0.2):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Masking(mask_value=0.0,
                                      input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(tf.keras.layers.Bidirectional(
        LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout,
             return_sequences=True)))
    model.add(tf.keras.layers.Bidirectional(
        LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
    RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)
    SGDopt = tf.keras.optimizers.SGD(lr=learning_rate, momentum=0.9, decay=0.1, nesterov=False)

    # Learning rate grows exponentially with the epoch number.
    lr_schedule = tf.keras.callbacks.LearningRateScheduler(
        lambda epoch: 1e-8 * 10**(epoch / 20))

    model.compile(loss='binary_crossentropy',
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[lr_schedule])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat
I have two questions.

1. How does 1e-8 * 10**(epoch / 20) work?
2. How should we choose the range for the 3 different optimizers?
Before answering the two questions in your post, let's first clarify that LearningRateScheduler is not for picking the 'best' learning rate. It is an alternative to using a fixed learning rate: instead of keeping the rate constant, it varies the learning rate over the training process.

I think what you really want to ask is "how to determine the best initial learning rate". If I am correct, then you need to learn about hyperparameter tuning.
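As a rough illustration of what such tuning could look like, here is a minimal brute-force grid search sketch: try a few candidate optimizers and initial learning rates on a toy dataset and keep the combination with the lowest validation loss. The toy data, the build_model helper and the candidate values are made up for illustration; in practice you would plug in your own model and data, or use a dedicated tool such as KerasTuner.

import numpy as np
import tensorflow as tf

# Toy regression data, for illustration only.
x = np.random.rand(1000, 1).astype("float32")
y = 3.0 * x + np.random.normal(scale=0.1, size=x.shape).astype("float32")

def build_model():
    # Small illustrative model; replace with your own architecture.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(1),
    ])

candidates = {
    "adam": tf.keras.optimizers.Adam,
    "rmsprop": tf.keras.optimizers.RMSprop,
    "sgd": tf.keras.optimizers.SGD,
}

results = {}
for name, opt_cls in candidates.items():
    for lr in (1e-4, 1e-3, 1e-2):
        model = build_model()
        model.compile(loss="mse", optimizer=opt_cls(learning_rate=lr))
        hist = model.fit(x, y, epochs=5, validation_split=0.3, verbose=0)
        results[(name, lr)] = hist.history["val_loss"][-1]

best = min(results, key=results.get)
print("best (optimizer, learning_rate):", best)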
Answer to Q1:

In order to answer how 1e-8 * 10**(epoch / 20) works, let's create a simple regression task.
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense

# Toy regression data.
x = np.linspace(0, 100, 1000)
y = np.sin(x) + x**2

x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.3)

input_x = Input(shape=(1,))
y = Dense(10, activation='relu')(input_x)
y = Dense(1, activation='relu')(y)
model = Model(inputs=input_x, outputs=y)

adamopt = tf.keras.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-8)

def schedule_func(epoch):
    # Print the current learning rate and the value this schedule
    # will set for the coming epoch, then return that value.
    print()
    print('calling lr_scheduler on epoch %i' % epoch)
    print('current learning rate %.8f' % K.eval(model.optimizer.lr))
    print('returned value %.8f' % (1e-8 * 10**(epoch / 20)))
    return 1e-8 * 10**(epoch / 20)

lr_schedule = tf.keras.callbacks.LearningRateScheduler(schedule_func)

model.compile(loss='mse', optimizer=adamopt, metrics=['mae'])

history = model.fit(x_train, y_train,
                    batch_size=8,
                    epochs=10,
                    validation_data=(x_val, y_val),
                    verbose=1,
                    callbacks=[lr_schedule])
In the script above, instead of using a lambda function, I wrote a function schedule_func. Running the script, you will see that 1e-8 * 10**(epoch / 20) just sets the learning rate for each epoch, and that the learning rate keeps increasing.
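To make the growth concrete, this tiny standalone loop prints the values the schedule returns: the rate is multiplied by 10 every 20 epochs, starting from 1e-8 at epoch 0.

for epoch in (0, 20, 40, 60, 80, 100):
    print(f"epoch {epoch:3d} -> {1e-8 * 10**(epoch / 20):.1e}")
# epoch   0 -> 1.0e-08
# epoch  20 -> 1.0e-07
# epoch  40 -> 1.0e-06
# epoch  60 -> 1.0e-05
# epoch  80 -> 1.0e-04
# epoch 100 -> 1.0e-03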
Answer to Q2:

There are a bunch of nice posts, for example
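One technique that often comes up in this context (a sketch of my own, not quoted from any particular post) is a learning-rate range test: reuse an exponentially increasing schedule like the one above, train for a while, and plot the loss against the learning rate that was active in each epoch. A reasonable fixed learning rate is then roughly an order of magnitude below the point where the loss stops decreasing or starts to diverge, and the test can be repeated separately for Adam, RMSprop and SGD, since each optimizer typically tolerates a different range. The snippet below assumes the model, x_train and y_train from the regression example above and requires matplotlib.

import numpy as np
import matplotlib.pyplot as plt

# Exponentially increasing schedule, same form as in the question.
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10**(epoch / 20))

model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=1e-8))
history = model.fit(x_train, y_train,
                    batch_size=8,
                    epochs=100,
                    verbose=0,
                    callbacks=[lr_schedule])

# Plot loss against the learning rate used in each epoch; look for the region
# where the loss drops fastest before it flattens out or blows up.
lrs = 1e-8 * 10**(np.arange(100) / 20)
plt.semilogx(lrs, history.history['loss'])
plt.xlabel('learning rate (log scale)')
plt.ylabel('training loss')
plt.show()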