
How to apply Dropout in GridSearchCV

I am using the following code to tune the hyperparameters of my ANN (hidden layers, hidden neurons, batch size, optimizer).

## Part 2 - Tuning the ANN
import time
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense

def build_regressor(hidden_nodes, hidden_layers, optimizer):
    regressor = Sequential()
    regressor.add(Dense(units = hidden_nodes, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
    for _ in range(hidden_layers):
        regressor.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
    regressor.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
    regressor.compile(optimizer = optimizer, loss = 'mse', metrics = ['mse'])
    return regressor

regressor = KerasRegressor(build_fn = build_regressor, epochs = 100)

# Create a dictionary of tuning parameters
parameters = {'hidden_nodes': list(range(2, 101)), 'hidden_layers': [4, 5, 6, 7], 'batch_size': [25, 32], 'optimizer': ['adam', 'nadam', 'RMSprop', 'adamax']}
grid_search = GridSearchCV(estimator = regressor, param_grid = parameters, scoring = 'neg_mean_squared_error', cv = 10, n_jobs = 4)

start = time.time()
grid_search = grid_search.fit(X_train, y_train)
end = time.time()
elapsed = (end - start) / 3600  # elapsed time in hours

Now I want to add a Dropout layer after each hidden layer, like this:

from keras.layers import Dropout
from keras.callbacks import EarlyStopping

regressor1 = Sequential()
regressor1.add(Dense(units = 41, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
regressor1.add(Dropout(0.1))
regressor1.add(Dense(units = 41, kernel_initializer = 'uniform', activation = 'relu'))
regressor1.add(Dropout(0.1))
regressor1.add(Dense(units = 41, kernel_initializer = 'uniform', activation = 'relu'))
regressor1.add(Dropout(0.1))
regressor1.add(Dense(units = 41, kernel_initializer = 'uniform', activation = 'relu'))
regressor1.add(Dropout(0.1))
regressor1.add(Dense(units = 41, kernel_initializer = 'uniform', activation = 'relu'))
regressor1.add(Dropout(0.1))
regressor1.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
regressor1.compile(optimizer = 'nadam', loss = 'mse', metrics = ['mse'])
history = regressor1.fit(X_train, y_train, batch_size = 25, epochs = 500, validation_data = (X_test, y_test), callbacks = [EarlyStopping(patience = 10)])

Is there a way to tune the number of Dropout layers (the same number as hidden layers) and the dropout rate together with my current code?

Thank you very much.

Here is your solution:

https://machinelearningmastery.com/grid-search-hyperparameters-deep-learning-models-python-keras/

Tip: I suggest looking for similar models that others have built and reusing the dropout values they used; this will save you a lot of time.
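
For reference, here is a minimal sketch of how the dropout rate could be exposed to GridSearchCV alongside your existing hyperparameters, by making it an argument of build_regressor. The argument name dropout_rate and the candidate values in the grid below are assumptions for illustration, not something prescribed by the linked article:

# Minimal sketch: expose the dropout rate as a build_fn argument so
# GridSearchCV can tune it together with the other hyperparameters.
# The argument name `dropout_rate` and the grid values are assumptions.
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense, Dropout

def build_regressor(hidden_nodes, hidden_layers, dropout_rate, optimizer):
    regressor = Sequential()
    regressor.add(Dense(units = hidden_nodes, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
    regressor.add(Dropout(dropout_rate))
    # One Dropout layer after every hidden layer, so their count follows hidden_layers.
    for _ in range(hidden_layers):
        regressor.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
        regressor.add(Dropout(dropout_rate))
    regressor.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
    regressor.compile(optimizer = optimizer, loss = 'mse', metrics = ['mse'])
    return regressor

regressor = KerasRegressor(build_fn = build_regressor, epochs = 100)

# dropout_rate is now just another key in the parameter grid.
parameters = {'hidden_nodes': [20, 40, 60],
              'hidden_layers': [4, 5, 6, 7],
              'dropout_rate': [0.1, 0.2, 0.3],
              'batch_size': [25, 32],
              'optimizer': ['adam', 'nadam', 'RMSprop', 'adamax']}
grid_search = GridSearchCV(estimator = regressor, param_grid = parameters, scoring = 'neg_mean_squared_error', cv = 10, n_jobs = 4)
grid_search = grid_search.fit(X_train, y_train)

The Keras scikit-learn wrapper routes grid keys that match build_fn's signature into build_fn, so dropout_rate reaches the model builder while batch_size still goes to fit. Keep in mind that each extra grid dimension multiplies the number of models trained, so you may want to narrow the other ranges when adding dropout_rate.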
