
keras-tuner error in hyperparameter tuning

I am trying for the first time to tune a deep learning model with keras-tuner. My tuning code goes like below:

import tensorflow as tf
from tensorflow.keras import models, layers
from tensorflow.keras.layers import BatchNormalization, Dropout
from kerastuner.tuners import RandomSearch

def build_model_test(hp):
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(100, 28)))
    model.add(layers.Dense(28, activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same',
                            activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same',
                            activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Conv1D(filters=hp.Int('num_filters', 16, 128, step=16),
                            kernel_size=3, strides=1, padding='same',
                            activation='relu'))
    model.add(BatchNormalization(momentum=0.99))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Dense(units=hp.Int('units', min_value=16, max_value=512,
                                        step=32, default=128),
                           activation='relu'))
    model.add(Dropout(hp.Float('dropout', 0, 0.5, step=0.1, default=0.5)))
    model.add(layers.Dense(1, activation='linear'))

    model.compile(
        optimizer='adam',
        loss=['mean_squared_error'],
        metrics=[tf.keras.metrics.RootMeanSquaredError()]
    )
    return model

tuner = RandomSearch(
    build_model_test,
    objective='mean_squared_error',
    max_trials=20,
    executions_per_trial=3,
    directory='my_dir',
    project_name='helloworld')


x_train,x_test=dataframes[0:734,:,:],dataframes[734:1100,:,:]
y_train,y_test=target_fx[0:734,:,:],target_fx[734:1100,:,:]


tuner.search(x_train, y_train,
             epochs=20,
             validation_data=(x_test, y_test))

models = tuner.get_best_models(num_models=1)

but as soon as the 20th epoch arrives, it prints this error:

ValueError                                Traceback (most recent call last)
<ipython-input-59-997de3dfa9e5> in <module>
     52 tuner.search(x_train, y_train,
     53              epochs=20,
---> 54              validation_data=(x_test, y_test))
     55 
     56 models = tuner.get_best_models(num_models=1)

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\base_tuner.py in search(self, *fit_args, **fit_kwargs)
    128 
    129             self.on_trial_begin(trial)
--> 130             self.run_trial(trial, *fit_args, **fit_kwargs)
    131             self.on_trial_end(trial)
    132         self.on_search_end()

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\multi_execution_tuner.py in run_trial(self, trial, *fit_args, **fit_kwargs)
    107             averaged_metrics[metric] = np.mean(execution_values)
    108         self.oracle.update_trial(
--> 109             trial.trial_id, metrics=averaged_metrics, step=self._reported_step)
    110 
    111     def _configure_tensorboard_dir(self, callbacks, trial_id, execution=0):

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\oracle.py in update_trial(self, trial_id, metrics, step)
    182         
    183         trial = self.trials[trial_id]
--> 184         self._check_objective_found(metrics)
    185         for metric_name, metric_value in metrics.items():
    186             if not trial.metrics.exists(metric_name):

~\Anaconda3\envs\deeplearning\lib\site-packages\kerastuner\engine\oracle.py in _check_objective_found(self, metrics)
    351                 'Objective value missing in metrics reported to the '
    352                 'Oracle, expected: {}, found: {}'.format(
--> 353                     objective_names, metrics.keys()))
    354 
    355     def _get_trial_dir(self, trial_id):

ValueError: Objective value missing in metrics reported to the Oracle, expected: ['mean_squared_error'], found: dict_keys(['loss', 'root_mean_squared_error', 'val_loss', 'val_root_mean_squared_error'])

which I do not understand, since I specified mean squared error as the loss for the model to follow. Do you know which commands I should change to get the result I want?

Also, can I use early stopping with keras-tuner?

You should use objective='root_mean_squared_error':

    tuner = RandomSearch(
        build_model_test,
        objective='root_mean_squared_error',
        max_trials=20,
        executions_per_trial=3,
        directory='my_dir',
        project_name='helloworld')

I would rather use 'val_root_mean_squared_error', as you are most probably interested in decreasing the error on the validation dataset.
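For context on why the original objective failed: the tuner's Oracle simply checks the objective string against the keys of the metrics dict that Keras reports after each trial, and since the model is compiled with RootMeanSquaredError as its only metric, only 'root_mean_squared_error' (and its 'val_' variant) ever appear there. A minimal sketch of that check, simplified from the logic in kerastuner/engine/oracle.py (the function name here is made up):

```python
# Simplified sketch of the check that raises the "Objective value missing"
# error: the objective name must literally match one of the metric names
# Keras reports.
def check_objective_found(objective_name, metrics):
    if objective_name not in metrics:
        raise ValueError(
            'Objective value missing in metrics reported to the Oracle, '
            'expected: {}, found: {}'.format([objective_name],
                                             metrics.keys()))

# Metric names a model compiled with RootMeanSquaredError actually reports:
reported = {'loss': 0.12, 'root_mean_squared_error': 0.35,
            'val_loss': 0.15, 'val_root_mean_squared_error': 0.39}

check_objective_found('root_mean_squared_error', reported)  # passes silently
```

Calling it with 'mean_squared_error' instead reproduces the ValueError from the traceback above, because that string is never among the reported keys.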

In addition to what was given in the previous response, you also asked about the possibility of using early stopping in keras-tuner. This is indeed possible with an early stopping callback.

First, assign the EarlyStopping callback to a variable, with the correct value to monitor. In this case I use 'val_loss'. This looks like:

stop_early = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

Then change the line where you start the hyperparameter search, like so:

tuner.search(x_train, y_train,
             epochs=20,
             validation_data=(x_test, y_test),
             callbacks=[stop_early])

Note the callbacks argument. Feel free to change any of the arguments you define the callback with to suit your application.
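For intuition, the patience logic of EarlyStopping boils down to counting epochs without improvement in the monitored value and stopping once that count reaches patience. A toy sketch for a minimized metric such as 'val_loss' (not the actual Keras implementation, which lives in tf.keras.callbacks; the helper name is made up):

```python
# Toy version of EarlyStopping's patience logic for a minimized metric.
# Returns the epoch index at which training would stop, or None if the
# patience budget is never exhausted.
def stopping_epoch(monitored_values, patience):
    best = float('inf')
    wait = 0
    for epoch, value in enumerate(monitored_values):
        if value < best:      # improvement: remember it, reset the counter
            best = value
            wait = 0
        else:                 # no improvement: count this epoch
            wait += 1
            if wait >= patience:
                return epoch  # training stops after this epoch
    return None

# val_loss improves twice, then fails to improve for three epochs in a row,
# so with patience=3 training stops at epoch index 4:
stopping_epoch([1.0, 0.8, 0.9, 0.85, 0.9], patience=3)  # -> 4
```

With patience=5 as in the callback above, the tuner would need five consecutive epochs without improvement in 'val_loss' before a trial is cut short.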
